Discussion
How I'm Productive with Claude Code
felipevb: > The worktree system removed the friction of context-switching - juggling multiple streams of work without them colliding.
I'm so conflicted about this. On the one hand I love the buzz of feeling so productive and working on many different threads. On the other hand my brain gets so fried, and I think this is a big contributor.
CrzyLngPwd: > And like any good manager, you get to claim credit for all the work your “team” does.
Is that how it works? Do managers claim credit for the work of those below them, despite not doing the work? I hope they also get penalised when a lowly worker does a bad thing, even if the worker is an LLM silently misinterpreting a vague instruction.
jmathai: Yup, the manager gets implicit credit for the work their team does. In most cases, deservedly so. I don't see why it should be any different for engineers using LLMs as "direct reports". Not all engineers will be equally "good" with LLM tools, so the better you are (as with any other skill) the more credit you receive.
idiotsecant: Yes. That is how management works. Although a good manager will focus some of that praise onto team members who deserve it.
MeetingsBrowser: > I’m not “using a tool that writes code.” I’m in a tight loop: kick off a task, the agent writes code, I check the preview, read the diff, give feedback or merge, kick off the next task
The assumption behind this workflow is that Claude Code can complete tasks with little or no oversight. If the flow looks like review->accept, review->accept, it is manageable. In my personal experience, Claude needs heavy guidance and multiple rounds of feedback before arriving at a mergeable solution (if it arrives at one at all). Interleaving many long-running tasks with multiple rounds of feedback does not scale well, unfortunately. I can only remember so much, and at some point I spend more time trying to understand what has been done so far so I can give accurate feedback than actually giving feedback for the next iteration.
serf: I like LLMs too, and I think they make me more productive... but a chart of commits/contribs is such a lousy metric for productivity. It's about on par with the ridiculousness of LOC implying code quality.
markbao: > What’s become more fun is building the infrastructure that makes the agents effective.
Solving new problems is a thing engineers get to do constantly, whereas building an agent infrastructure is mostly a one-ish time thing. Yes, it evolves, but I worry that once the fun of building an agentic engineering system is done, we’re stuck doing arguably the most tedious job in the SDLC, reviewing code. It’s like if you were a principal researcher who stopped doing research and instead only peer reviewed other people’s papers.
The silver lining is if the feeling of faster progress through these AI tools gives enough satisfaction to replace the missing satisfaction of problem-solving. Different people will derive different levels of contentment from this. For me, it has not been an obvious upgrade in satisfaction. I’m definitely spending less time in flow.
i_love_retros: Man, I must just be from a different time and place to this person, because I just can't imagine being comfortable with such self-promotion and, well, bragging.
aguimaraes1986: This is the "lines of code per week" metric from the 90s, repackaged. "I'm doing more PRs" is not evidence that AI is working; it's evidence that you are merging more. Whether that's good depends entirely on what you are merging. I use AI every day too. But treating throughput of code going to production as a success metric, without any mention of quality, bugs, or maintenance burden, is exactly the kind of thinking developers used to push back on when management proposed it.
Turns out we weren't opposed to bad metrics! We were just opposed to being measured! Given the chance to pick our own, we jumped straight to the same nonsense.
dgunay: I do parallel agents in worktrees, and I don't constantly keep an eye on them like a fry cook flipping 20 burgers at once. Sometimes it's just nice to know that I can spin one up, come back tomorrow, and some progress has been made without breaking my current flow.
saadn92: The way I handle this is that I just create pull requests (tell the agent to do it at the end), and then I'll come back at a later time to review, so I always have stuff queued up to review.
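For concreteness, a worktree-per-task setup like the one these two comments describe might look something like the sketch below; the branch names, directory layout, and use of `gh pr create` are illustrative assumptions, not either commenter's actual setup.

```sh
# One worktree (and branch) per task, so parallel agents never
# share a checkout and their changes can't collide.
git worktree add -b feature/auth-refactor ../myrepo-auth
git worktree add -b feature/billing-fix   ../myrepo-billing

# Run one agent session per directory. When a task is done, the
# agent (or you) opens a PR to queue up for later review, e.g.:
#   cd ../myrepo-auth && gh pr create --fill

# After a PR is merged, clean up its worktree:
git worktree remove ../myrepo-auth
```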
dakiol: Are you kidding? What else would managers get credit for? They don't produce anything the company is interested in. They steer, they manage, and so if the ones being managed produce the thing the company is interested in, then sure, all the credit goes to the team (including the manager!). As it usually happens, getting credit means nothing if not accompanied by a salary bump or something like that. And as it usually happens, not the whole team can get a salary bump. So the ones who get the bump are usually one or two seniors on the team, plus the manager of course... because the manager is the gatekeeper between upper management (the ones who approve salary bumps) and the ICs... and no sane manager would sacrifice a salary bump for themselves just to give it away to an IC. And that's not being a bad manager, that's simply being human. Also, if you think about it, if the team succeeded in delivering "the thing", then the manager would think it's partially because of their managing, and so he/she would believe a salary bump is deserved.
When things go south, no penalty is applied. A simple "post-mortem" is written in Confluence and people write "action items". So, yeah, no need for the manager to get the blame.
It's all very shitty, but it's always been like that.
kace91: I would like some research regarding multi-agent flows and their impact on speed and correctness, because I have a feeling that it's like a texting-and-driving situation, where self-perception of skill loss and measured skill loss diverge.
I have nothing to back up the idea though.
saadn92: You do lose context, but if you generate a plan beforehand and save it, then it makes it easier to regain that context when you come back. I've been able to get things out a lot more quickly this way, because instead of "working" that day, I'll just review the work that's been queued up and focus on it one at a time. I'm still the bottleneck, but it has allowed me to move more quickly at times.
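As a hypothetical illustration of the save-the-plan step (the filename, prompt wording, and use of Claude Code's headless `-p` flag are my assumptions, not saadn92's actual commands):

```sh
# Generate the plan once and save it where the agent can find it later.
claude -p "Draft an implementation plan for the search feature: scope, \
files to touch, test plan, and open questions." > docs/plans/search-feature.md

# In a later session, point the agent back at it, e.g.:
#   "Read docs/plans/search-feature.md and continue from the next unchecked step."
```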
dakiol: I don't understand the "being more productive" part. Like, sure, LLMs make us iterate faster, but our managers know we're using them! They don't naively think we suddenly became 10x engineers. Companies pay for these tools and every engineer has access to them. So if everyone is equally productive, the baseline just shifted up... same as always, no?
Mentioning LLM usage as a distinction is like bragging about using a modern compiler instead of writing assembly. Yeah, it's faster, but so is everyone else's code... Besides, I wouldn't brag about being more productive with LLMs, because it's a double-edged sword: it's very easy to use them, and nobody is reviewing all the lines of code you are pushing to prod (really, when was the last time you reviewed a PR generated by AI that changed 20+ files and added/removed thousands of lines of code?), so you don't know the long game of your changes; they seem to work now, but who knows how they will turn out later?
bluelightning2k: Sometimes outcomes and achievements and work product are useful beyond just... stack ranking yourself against your peers. It seems so odd to me that this is your mentality, unless you're early in your career.
dabedee: Kudos to OP, genuinely interesting stuff.
I don't want to be too hard on this. I keep reading these posts, and there are a lot of these, and they all have the same structure. They mass-produced commits with AI. Then they optimized their commit-mass-production factory.
Presumably someone is deciding what to build and presumably the features are useful and presumably customers exist. But the post is not about any of that. And that's fine. I too get bored. And when I get bored of the problem I am supposed to be solving, I become fascinated by an adjacent problem that is more tractable and more fun. The adjacent problem is almost always about the tools or the pipeline.
dakiol: Honest question: if you're using multiple agents, it's usually not to produce a dozen lines of code. It's to produce a big enough feature spanning multiple files, modules and entry points, with tests and all. So far so good. But once that feature is written by the agents... wouldn't you review it? Like reading line by line what's going on and detecting if something is off? And wouldn't that part, the manual reviewing, take an enormous amount of time compared to the time it took the agents to produce it? (You know, it's more difficult to read other people's/machine code than to write it yourself.) Meaning all the productivity gained is thrown out the door.
Unless you don't review every generated line manually, and instead rely on, let's say, UI e2e testing, or perhaps unit testing (that the agents also wrote). I don't know; perhaps we are past the phase of "double-check what agents write" and are now in the phase of "ship it; if it breaks, let agents fix it, no manual debugging needed!"?
troyvit: > I hope they also get penalised when a lowly worker does a bad thing, even if the worker is an LLM silently misinterpreting a vague instruction.
Yeah, the buck stops with the manager (IMO the direct manager). So I can offer my dev some constructive criticism if they make a mistake, but it's my fault in the larger org that it happened. Then it's my manager's job to work with me to make sure I create the environment where the same mistake doesn't happen again. Am I training well? Am I giving them well-scoped work? All that.
jannyfer: Ooooh very interesting idea.
I also have nothing to back it up, but it fits my mental models. When juggling multiple things as humans, it eats up your context window (working memory). After a long day, your coherence degrades and your context window needs flushing (sleeping) and you need to start a new session (new day, or post-nap afternoon).
Salgat: This is the biggest bottleneck for me. What's worse is that LLMs have a bad habit of being very verbose and rewriting things that don't need to be touched, so the surface area for change is much larger.
kalaksi: Is constant juggling of multiple agents productive? I haven't seen the allure (except maybe with 2 agents sometimes). I guess it depends on what kind of tasks one is doing, and I can imagine it working for large, long-running tasks, but then reviewing those large changes and refactoring becomes more difficult. And if you're juggling multiple agents, there's the mental context switching and tooling overhead of managing them. Maybe predictable and repetitive tasks can work well.
I prefer focusing mostly on 1 task at a time (sometimes 2 for a short time, or asking another agent some questions simultaneously) and doing the task in chunks so it doesn't take much time until you have something to review. Then I review it, maybe ask for some refactoring, and let it continue to the next step (maybe let it continue a bit before finishing review if I'm feeling confident about the code). It's easier to review smaller self-contained chunks, and easier to refer to code and tell the AI what needs changing, because there are fewer relevant lines.
browningstreet: Maybe the author knows that too, but wants to talk about it nonetheless. First line of the article: “Commits are a terrible metric for output, but they're the most visible signal I have.”
skydhash: What about number of working features or system completeness? Current state vs desired state is fairly visible.
browningstreet: I use coding agents to produce a lot of code that I don’t ship. But I do ship the output of the code.
paganel: > The PR descriptions are more thorough than what I’d write
Why do people do this? Why do they outsource something that is meant to be written by a human so that another human can actually understand what the first human wanted to do? Why outsource that to AI? It just doesn't make sense.
paulhebert: Yeah, I agree.
We have “Cursor Bot” enabled at work. It reviews our PRs (in addition to a human review). One thing it does is add a PR summary to the PR description. It’s kind of helpful, since it outlines a clear list of what changed in the code. But it would be very lacking if it were the full PR description. It doesn’t include anything about _why_ the changes were made, what else was tried, what is coming next, etc.
prmoustache: So many pretend they are more productive, but so few are able to articulate what they actually produced.
Some say features. Well, are they used? Are they beneficial in any way to our society or humanity? Or are we producing junk for the sake of producing?
Leynos: Here's what I suggest:
Serious planning. The plans should include constraints, scope, escalation criteria, completion criteria, and a test and documentation plan.
Enforce single responsibility, CQRS, domain segregation, etc. Make the code as easy for you to reason about as possible. Enforce domain naming and function/variable naming conventions to make the code as easy to talk about as possible.
Use code review bots (Sourcery, CodeRabbit, and CodeScene). They catch the small things (violations of contract, antipatterns, etc.) and the large (UX concerns, architectural flaws, etc.).
Go all in on linting. Make the rules as strict as possible, and tell the review bots to call out rule subversions. Write your own lints for the things the review bots complain about regularly that aren't caught by existing lints.
Use BDD alongside unit tests; read the .feature files before the build and give feedback. Use property testing as part of your normal testing strategy. Snapshot testing, e2e testing with MITM proxies, etc. For functions of any non-trivial complexity, consider bounded or unbounded proofs, model checking, or undefined-behaviour testing.
I'm looking into mutation testing and fuzzing too, but I am still learning.
Pause for frequent code audits. Ask an agent to audit for code duplication, redundancy, poor assumptions, architectural or domain violations, and TOCTOU violations. Give yourself maintenance sprints where you pay down debt before resuming new features.
The beauty of agentic coding is, suddenly you have time for all of this.
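As a concrete sketch of the "frequent code audits" idea above, an audit pass could look roughly like this in shell; the lint tools, rule selection, prompt wording, and output file are all assumptions, with `claude -p` (headless mode) standing in for whatever agent you use:

```sh
#!/usr/bin/env sh
# audit.sh -- hypothetical maintenance-audit pass, not Leynos's actual setup.
set -eu

# Strict lint pass first; fail fast on any violation.
# (ruff/mypy are placeholders -- substitute your stack's linters.)
ruff check --select ALL .
mypy --strict .

# Ask the agent to audit the repo and write up its findings.
claude -p "Audit this repository for code duplication, redundancy, poor \
assumptions, architectural or domain violations, and TOCTOU issues. \
Report findings as a prioritized list." > audit-report.md

echo "Audit written to audit-report.md"
```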
101011: How do you define system completeness? What if you ship one really big feature vs three really small ones?
I would posit that you need extra context to obtain meaning from those metrics, which inherently makes them less visible.
jmathai: This is basically the same workflow I've come to adopt. I don't use any "pre-built" skills; mine are actually still .md files in the .claude/commands/ folder, because that's when I started. The workflow is so good, I'm the bottleneck.
I've started to use git worktrees to parallelize my work. I spend so much time waiting... why not wait less on 2 things? This is not a solved problem in my setup. I have a hard time managing just two agents and keeping them isolated. But again, I'm the bottleneck. I think I could use 5 agents if my brain were smarter... or if the tools were better.
I am also a PM by day, and I'm in Claude Code for PM work almost 90% of my day.
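For anyone who hasn't set these up: a custom Claude Code slash command is just a Markdown file in that folder whose body becomes the prompt, with `$ARGUMENTS` standing in for anything typed after the command. A minimal hypothetical example (the command name and prompt text are invented, not jmathai's files):

```sh
mkdir -p .claude/commands
cat > .claude/commands/open-pr.md <<'EOF'
Review the staged changes, then:
1. Write a commit message summarizing what changed and why.
2. Commit, push the branch, and open a pull request whose
   description covers the full diff.

Extra instructions from the user: $ARGUMENTS
EOF
# Inside a Claude Code session this becomes available as /open-pr
```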
orwin: I like Claude, at least when the user reviews the code before asking for a PR. But gods, I hate tickets/feature requests written by Opus/Sonnet (or worse: Codex or Gemini). If you know/understand your product well enough, it's probably less of a problem for your team than it is for mine, but each time I see a feature request automagically written in the backlog, I know I will have to spend at least 30 minutes rewriting it so that it doesn't take us an hour to refine it collectively.
imiric: > The PR descriptions are more thorough than what I’d write, because it reads the full diff and summarises the changes properly. I’d gotten so used to the drudgery that I’d stopped noticing it was drudgery.
Who are you creating PR descriptions for, exactly? If you consider it "drudgery", how do you think your coworkers will feel having to read pages of generic "AI" text? If reviewing can be considered "drudgery" as well, can we also offload that to "AI"? In which case, why even bother with PRs at all? Why are you still participating in a ceremony that was useful for humans to share knowledge and improve the codebase, when machines don't need any of it?
> My role has changed. I used to derive joy from figuring out a complicated problem, spending hours crafting the perfect UI. [...] What’s become more fun is building the infrastructure that makes the agents effective. Being a manager of a team of ten versus being a solo dev.
Yeah, it's great that you enjoy being a "manager" now. Personally, that is not what I enjoy doing, nor why I joined this industry.
Quick question: do you think your manager role is safe from being automated away? If machines can now write code and prose better than you, couldn't they also manage other machines into producing useful output better than you? So which role is left for you, and would you enjoy doing it if "manager" is not available?
Purely rhetorical, of course, since I don't think the base premise is true, besides the fact that it's ignoring important factors in software development such as quality, reliability, maintainability, etc. This idea that the role of an IC has now shifted into management is amusing. It sounds like a coping mechanism for people to prove that they can still provide value while facing redundancy.
Sabu87: I'm also trying everything to learn how to use Claude; everything is so new, and it keeps upgrading.
jwpapi: I have a little ai-commit.sh wired up as "send" in package.json, which describes my changes and commits them. Formatting has been solved by linters already. Neither my approach nor OP's approach is ground-breaking, but I think mine is faster. You can also run !p send (p is aliased to pnpm) from inside Claude; no need for it to make a skill and create overhead. Thinking about it, a PR skill is pretty much an antipattern; even telling the AI to just create a PR is faster.
I think some vibe coders should let AI teach them some CLI tooling.
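jwpapi didn't share the script itself, so here is a hedged sketch of what an ai-commit.sh like that might contain; the use of `claude -p` to write the message is an assumption:

```sh
#!/usr/bin/env sh
# ai-commit.sh -- hypothetical sketch of the kind of script described above.
set -eu

git add -A
# Headless Claude Code (-p prints a response and exits) turns the
# staged diff into a one-line commit message.
msg=$(git diff --cached | claude -p "Write a one-line commit message \
for this diff. Output only the message.")
git commit -m "$msg"
```

Wired into package.json as `"scripts": { "send": "sh ai-commit.sh" }`, this is what makes `p send` (and `!p send` from inside a Claude session) work.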