
AI Made You Faster. Your Reward Was More Work.

March 30, 2026

The productivity you gained was never yours to keep.

I remember the exact sprint when things changed.

We had just rolled out Copilot across the team. Two weeks later, my velocity went up. Not by some abstract amount I had to squint at in a Jira chart. I could feel it. Boilerplate that used to eat an afternoon disappeared in minutes. Test scaffolding wrote itself. I was closing tickets faster than I had in years.

Then my manager pulled up the sprint report and smiled. “Great output this cycle,” he said. “I’m adding three more stories to next sprint.”

That was the deal. I just hadn’t read the fine print.

The Fastest Hamster Still Runs on a Wheel

Here is what nobody told you when your company handed you an AI coding license: the time you save does not belong to you. It belongs to the roadmap. It belongs to the backlog. It belongs to whoever decides what “enough output” looks like for your role.

You are not being given a tool to work less. You are being given a tool to produce more.

This is not a conspiracy theory. It is the oldest pattern in labor economics. When machines made factory workers faster, factories did not send people home early. They raised quotas. When spreadsheets made accountants faster, firms did not cut billing hours. They gave each accountant more clients. The pattern has not changed. The machines just got better at writing code.

The LeadDev Engineering Leadership Report from 2025 surveyed 617 engineering leaders and found that 65% reported expanded responsibilities. Forty percent were managing more direct reports than the year before. Only 3% saw any decrease in scope. Meanwhile, 22% reported critical burnout levels and 38% said they were working longer hours.

Read those numbers together. More scope. More reports. More hours. Less energy. The AI tools arrived at the same time the workload expanded, and that is not a coincidence.

You Think You Are Faster. The Data Disagrees.

Here is the part that should make you uncomfortable.

METR, an AI evaluation research organization, ran a randomized controlled trial in 2025. They took 16 experienced open-source developers and had them complete 246 tasks across projects they had worked on for an average of five years. Half the time, developers could use AI tools like Cursor and Claude. Half the time, they could not.

The result: developers with AI access took 19% longer to complete their tasks.

But here is the kicker. Those same developers estimated that AI had made them 20% faster. They genuinely believed they were saving time while the stopwatch said otherwise.

That is a 39-percentage-point gap between perception and reality. And it explains why your manager keeps adding stories to the sprint. You feel faster. You report faster. The tools promise faster. Nobody has actually checked whether “faster” is real.

Now, to be fair, METR’s sample was small and focused on experienced developers in mature codebases. The researchers themselves noted that the effect might differ for less experienced developers or newer projects. But the perception gap is the finding that matters most, because it is the perception that drives organizational decisions. Your company is resourcing your team based on a speedup that may not exist.

The 37% Tax You Did Not Sign Up For

Even when AI genuinely saves you time, you do not get to keep all of it.

Workday partnered with Hanover Research in January 2026 and surveyed 3,200 employees and business leaders. They found that 37% of the time employees saved using AI was lost to rework: correcting, clarifying, or rewriting low-quality AI-generated output.

Only 14% of employees consistently got clear, positive net outcomes from their AI usage. And 77% of daily AI users said they review AI-generated work just as carefully as human-produced work, if not more so.

So the workflow becomes: generate with AI, review it yourself, fix the parts that are wrong, double-check the parts that look right but might not be, and ship it. You have not eliminated work. You have traded one kind of effort for another. The cognitive load shifted from creation to verification, and verification is harder to see on a sprint board.

The Stack Overflow 2025 Developer Survey tells a similar story. Out of over 65,000 respondents, 84% are using or planning to use AI tools. But 46% actively distrust the accuracy of AI output, while only 33% trust it. The top frustration, cited by 66% of respondents, was AI solutions that are “almost right, but not quite.” And 45% said debugging AI-generated code is time-consuming.

Almost right is the most expensive kind of wrong.

The Burnout Nobody Attributes to AI

Here is what I see happening on teams. The official narrative is that AI adoption is going great. Developers have better tools. Output is up. The velocity charts look healthy.

Under the surface, the humans running those tools are eroding.

LeadDev’s report found that 40% of engineering leaders said their teams are less motivated than a year ago. That is not a staffing problem or a culture problem. That is a load problem wearing a productivity costume.

When you ask an engineer to do more because the tools should make it possible, you are making an implicit promise: the work will feel the same or easier. But it does not feel the same. It feels like doing two jobs. One job is the actual engineering. The other job is babysitting an AI that generates confident, plausible output wrong often enough that 46% of developers say they actively distrust its accuracy.

The burnout is real, but it does not show up in the metrics your leadership watches. Velocity is green. PRs are merging. The dashboard says everything is fine. The person behind the dashboard has not taken a real lunch break in three months because they are spending that time re-reading AI output that looked correct on the first pass but introduced a subtle bug on line 47.

The Quiet Contract Rewrite

What happened is that your employment contract got rewritten without anyone signing anything.

Before AI tools, your implicit deal was: produce X amount of work per sprint, maintain Y level of quality, and you have met expectations. The tools have not changed Y. Quality standards are the same or higher. But X has been quietly recalibrated upward. You are expected to produce more because the tools suggest you should be able to.

Nobody sat you down and said, “We are raising your quota.” They just kept adding tickets and pointing at the AI subscription as justification.

This is the version of the conversation that never happens in a retro or a one-on-one. Nobody says, “The AI is not making our lives better; it is making our backlogs bigger.” Because saying that sounds like you are against progress. It sounds like you cannot keep up. It sounds like a skill issue.

It is not a skill issue. It is a math issue. If AI saves you 30% of your time on code generation, and 37% of that saving goes straight back into reviewing and fixing AI output, your net gain is under 19%. But the quota was raised as if you kept the full 30%. The numbers do not work, and the expectation that they should work is now baked into your performance evaluation.
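The arithmetic fits in a few lines. This is a back-of-the-envelope sketch, not a measurement: the 30% gross saving is an assumed illustrative figure, while the 37% rework share is the Workday/Hanover number cited above.

```python
# Back-of-the-envelope: how much of AI's time saving survives the rework tax,
# versus a quota sized to the full gross figure.
gross_saving = 0.30  # share of coding time AI saves outright (assumed, illustrative)
rework_tax = 0.37    # share of that saving lost to fixing AI output (Workday figure)

net_saving = gross_saving * (1 - rework_tax)  # what you actually keep
shortfall = gross_saving - net_saving         # gap vs. a quota calibrated to 30%

print(f"net saving: {net_saving:.1%}")              # 18.9%
print(f"shortfall vs. expectation: {shortfall:.1%}")  # 11.1%
```

Change the assumed gross saving to whatever your team actually sees; as long as the rework tax is nonzero, the quota calibrated to the gross figure overshoots what anyone can deliver.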

What Actually Changed

I am not anti-AI. I use these tools every day. They are genuinely useful for specific tasks: generating boilerplate, exploring unfamiliar APIs, rubber-ducking a design decision at 2 AM when no human is available.

But I stopped pretending they made me faster in any way that mattered to my calendar. What they did was change the shape of my work. Less typing. More reading. Less creating from scratch. More editing and verifying. Less flow state. More context-switching between what the model generated and what I actually needed. The total hours stayed the same. The cognitive profile shifted. And nobody asked whether I preferred the new shape.

The problem is that organizations adopted the tools based on a marketing promise (your developers will be X% more productive) and then operationalized that promise into headcount decisions, sprint planning, and performance expectations. When the promised productivity did not materialize at the organizational level, they did not walk back the expectations. They just assumed developers were not using the tools well enough.

So now you are in a position where you have more work, more tools, less trust in the tools, and less room to say any of this out loud without sounding like you are the problem.

The Only Honest Conversation

If you are a tech lead, the most useful thing you can do right now is have the conversation nobody wants to have. Not about which AI tool to adopt. Not about prompt engineering workshops. About what actually changed in your team’s workload since adoption, measured in hours, not story points.

Story points lie. They always have, but they lie more now because the same story point represents a different amount of cognitive effort depending on how much AI-generated code you need to verify.
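Rework, at least, is countable. Here is a minimal sketch of one such measurement, assuming you can export per-PR review data from your platform; the field name `revisions_after_first_review` is hypothetical and will depend on your tooling.

```python
# Minimal sketch: share of PRs that needed another pass after the first review.
# Field names are hypothetical -- adapt to whatever your PR export provides.

def rework_rate(prs):
    """prs: list of dicts, each with a 'revisions_after_first_review' count."""
    if not prs:
        return 0.0
    reworked = sum(1 for pr in prs if pr["revisions_after_first_review"] > 0)
    return reworked / len(prs)

sample = [
    {"id": 101, "revisions_after_first_review": 2},
    {"id": 102, "revisions_after_first_review": 0},
    {"id": 103, "revisions_after_first_review": 1},
    {"id": 104, "revisions_after_first_review": 0},
]
print(f"{rework_rate(sample):.0%} of PRs needed rework")  # 50% of PRs needed rework
```

The absolute number matters less than the trend: track it before and after an AI rollout, and you have something to bring to the conversation that story points will never give you.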

Measure the rework. Track how many PRs need revision after the first review. Ask your team, honestly, whether they feel like they have more time or less. Not in a survey that goes to leadership. In a one-on-one where the answer does not affect their performance review.

The answer will probably be: less.

And then you have to decide what to do with that information. You can push it up the chain and risk being the person who says the new tools are not magic. Or you can absorb it silently and watch your team’s motivation number tick down another 10% by next quarter.

I have been on both sides of that decision. Pushing it up is harder and lonelier. It is also the only version where your team survives the year without losing the people you cannot afford to lose.

The Bill Comes Due

The companies that get this right will be the ones that treat AI tools the way they should have treated every productivity tool in history: as a way to reduce load, not increase output. Give your team the same amount of work and let them go home earlier. Let them use the saved time to write better tests, refactor the code they have been meaning to fix for six months, or just think.

That will never show up on a quarterly OKR slide. But the people you keep will be worth more than the extra stories you shipped.

The ones that get it wrong will keep raising the bar. Velocity will look great on paper. And two years from now, they will write a blog post about their “unexpected attrition problem” and wonder what happened.

What happened was the deal they made. They took the time AI saved and gave it to the backlog instead of the humans.

The humans noticed.


Originally published on Medium.

Hafiq Iqmal