
What I Learned Managing a Team That Uses AI to Code

February 18, 2026

AI made my developers faster. It also created problems nobody warned me about.


Last month, one of my developers submitted a pull request. Clean code. Well-structured. Even had comments — good ones. I almost approved it on the spot.

Then I asked him one question: “Why did you use a repository pattern here instead of just calling the model directly?”

Silence. A long pause. Then: “I think… the AI suggested it.”

That was the moment I realized we had a problem. Not with AI. With how we were using it.
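For readers unfamiliar with the question I asked, the difference can be sketched in a few lines. This is a hypothetical, language-neutral illustration in Python; the names `User` and `UserRepository` are invented for the sketch, not taken from the PR in question, which was presumably written against an ORM model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    email: str

class UserRepository:
    """Abstracts persistence so business code never touches the model/ORM directly."""

    def __init__(self) -> None:
        self._store: dict[int, User] = {}

    def add(self, user: User) -> None:
        self._store[user.id] = user

    def find_by_email(self, email: str) -> Optional[User]:
        return next((u for u in self._store.values() if u.email == email), None)

# Direct style: business code queries the model itself.
# Repository style: business code asks the repository, which can be
# swapped out or faked in tests. A real benefit, but only if the
# developer can articulate why the indirection is worth it here.
repo = UserRepository()
repo.add(User(id=1, email="a@example.com"))
print(repo.find_by_email("a@example.com"))
```

The pattern is defensible; shipping it without being able to defend it is the problem.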

I am a Technical Lead. I manage a team of developers building real applications, the kind that handle real users, real data, and real money. When AI coding tools started becoming mainstream, I did not resist. I am not one of those people. I gave the green light. I encouraged my team to use them. Cursor, Windsurf, Copilot, ChatGPT: whatever makes you faster, go for it.

And they did get faster. Oh, they got fast.

But speed without understanding is just expensive technical debt with a nice commit message.

The Honeymoon Was Real

I will not pretend AI tools are useless. That would be dishonest.

The gains were immediate and obvious. Boilerplate code that used to take an hour? Done in ten minutes. Writing unit tests, something my team previously “forgot” to do, suddenly became painless. Scaffolding new modules, generating migrations, drafting API documentation. All of it got faster.

One of my developers told me he used to dread writing form request validations in Laravel. Now he describes the rules to Copilot and it spits them out. I cannot argue with that.
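Laravel form requests are PHP, but the declarative style he describes to Copilot can be mimicked language-neutrally. The following is an illustrative sketch of a tiny hand-rolled validator for rules like `required`, `email`, and `max:N`; it is not Laravel's actual implementation, just the shape of what gets generated.

```python
import re

def validate(data: dict, rules: dict[str, str]) -> dict[str, list[str]]:
    """Check each field against pipe-separated rules; return failures per field."""
    errors: dict[str, list[str]] = {}
    for field, rule_string in rules.items():
        value = data.get(field)
        for rule in rule_string.split("|"):
            if rule == "required" and not value:
                errors.setdefault(field, []).append("required")
            elif rule == "email" and value and not re.match(r"[^@]+@[^@]+\.[^@]+$", value):
                errors.setdefault(field, []).append("email")
            elif rule.startswith("max:") and value and len(value) > int(rule.split(":")[1]):
                errors.setdefault(field, []).append(rule)
    return errors

# The rules read like the one-line descriptions you would give Copilot.
rules = {"name": "required|max:255", "email": "required|email"}
print(validate({"name": "Ada", "email": "not-an-email"}, rules))
```

This is exactly the kind of mechanical, well-specified code where AI assistance is a pure win: the rules are the specification, and nothing architectural is being decided.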

For the first two months, our sprint velocity went up noticeably. Management was happy. The team was happy. I was cautiously optimistic.

Cautiously, because I have been in this industry long enough to know that when something feels too good, you are probably not seeing the full picture yet.

The Cracks I Started Noticing

The first sign was code reviews.

AI-generated code has a specific quality to it: it looks correct. It follows conventions. It is syntactically clean. But it has a subtle emptiness, like a house with perfect furniture but no one living in it.

I started finding patterns in PRs that worried me. Developers were using design patterns they had never used before and could not justify. Services were being abstracted into layers that added complexity without adding value. Code was “technically correct” but architecturally confused.

The worst part? It was harder to catch during review. When code is messy, your brain flags it immediately. When code is clean but wrong, it slips through.

Then there was the junior developer problem.

I have always believed that the first two years of a developer’s career are the most important. That is when you build instinct. You learn why things break. You struggle with a query for three hours and walk away understanding joins forever. You deploy a bug to production and never forget to write that edge case test again.

AI shortcuts all of that. And I do not mean that in an inspirational way. I mean it literally removes the struggle that builds competence.

I watched one of my junior developers go from “I don’t know how to do this” to submitting a complete feature in 45 minutes. Impressive, until I realized he could not modify it when the requirements changed slightly. He did not understand what he had submitted. He had shipped AI’s code, not his.

That is not development. That is copy-paste with extra steps.

The Money Conversation Nobody Wants to Have

Here is something I wrote about before and I will say it again: you cannot demand AI-level speed from your team and then refuse to pay for AI tools.

I have seen this happen in multiple companies. Management sees the hype. They read articles about 10x productivity. They set expectations accordingly. Then when developers ask for Copilot licenses or ChatGPT Pro subscriptions, suddenly there is no budget.

So what happens? Developers use free tiers. They paste proprietary code into public AI tools. They work around token limits with fragmented prompts that produce fragmented code. Or they just quietly pay for it themselves and resent the company for it.

If you are a manager or a CTO reading this: either invest in the tools properly or stop expecting the output. You do not get to have it both ways.

And if you are investing in the tools, understand what you are actually paying for. You are not paying for a magic code generator. You are paying for a productivity multiplier that still requires skilled humans to operate. The moment you forget that, you start hiring cheaper developers thinking AI will cover the gap.

It will not.

What I Changed as a Tech Lead

After a few months of observation, I made changes to how my team works with AI. Not restrictions, adjustments. Here is what actually worked.

PR reviews now require explanation, not just code. If you submit a PR, you need to be able to walk me through it. Not line by line, but you need to explain the decisions. Why this approach? What were the alternatives? What happens if this input is null? If you cannot explain it, it goes back. I do not care if the code works. If you do not understand it, we do not ship it.

Juniors get “no-AI” tasks every sprint. I designate at least one task per sprint for each junior developer where AI tools are off limits. These are not punishment. They are learning opportunities. Usually something foundational: writing a service class from scratch, or debugging a failing test without asking ChatGPT why it fails. The complaints lasted about two weeks. Then they started thanking me when they realized they could actually debug production issues on their own.

We separate “generation” from “decision-making.” AI is excellent at generating code. It is terrible at deciding what code should exist in the first place. Architecture, system design and data modelling, these are human conversations. We use whiteboards before we use prompts. The AI comes in after we have decided what to build, not before.

Code review standards went up, not down. This sounds counterintuitive. AI produces cleaner code, so reviews should be easier, right? Wrong. Because the code looks clean, you have to look harder for logical issues. I trained my senior developers to review AI-assisted PRs with more skepticism, not less. Trust the syntax. Question the logic.

We talk about AI use openly. No shame. No hiding. If you used AI to generate something, say so. If it helped, great. If it produced something you do not fully understand, that is where the learning happens. The worst outcome is a team that secretly relies on AI and pretends they wrote everything themselves.

The Question I Still Cannot Fully Answer

Here is what keeps me up at night as a technical lead in 2026: how do I evaluate developer skill now?

Previously, it was relatively straightforward. Can you solve problems? Can you write clean code? Can you debug under pressure? Can you design a system that scales?

Now, some of those signals are muddied. A developer who writes beautiful code with AI assistance: is that a skilled developer or a skilled prompt engineer? When someone delivers a feature in record time because Cursor wrote 70% of it, do I evaluate the output or the process?

I do not have a clean answer. I suspect nobody does yet. But I lean towards this: a good developer in the AI era is someone who can work with AI and without it. Someone who uses AI as a tool, not as a crutch. Someone who can look at AI-generated code and immediately tell you what is wrong with it because they understand the fundamentals deeply enough to judge.

If you can only code with AI, you are not a developer. You are an operator. And operators are replaceable the moment the tool changes.

Here Is My Honest Take

AI tools are here to stay. I am not fighting that. My team uses them every day, and we are better for it, if we use them correctly.

But the industry is lying to itself right now. We are pretending that faster output means better engineering. We are pretending that AI can replace the years of experience it takes to build good judgment. We are promoting the idea that you can skip the hard parts of learning to code and still end up competent.

You cannot.

As a tech lead, my job is no longer just reviewing code and planning sprints. My job now includes protecting my team’s ability to think. Making sure that in the rush to ship faster, we do not lose the people who actually understand what they are shipping.

Because the AI can write the code.

But it still takes a human to know if it should.

If this resonated with you, I have written about related topics before — including why vibe coding might be draining your skills and why you can’t demand AI speed without paying for AI tools. Give them a read.

Found this helpful?

If this article saved you time or solved a problem, consider supporting — it helps keep the writing going.

Originally published on Medium.
