The project turned into a disaster. But instead of feeling vindicated, I realized we needed to change.

I need to tell you about a tender we lost.
We had a strong proposal. Solid technical approach. Realistic timeline. A team with direct experience building the exact type of system the client needed. Our pricing was competitive. Our references were strong.
We lost.
The feedback was polite but clear: another vendor had proposed a significantly shorter timeline, roughly half of ours, and cited their use of generative AI tools across the development lifecycle as the reason they could deliver faster.
Fifty percent faster. Because of AI.
I was frustrated. But I told myself: let it go. If they can actually deliver, good for them.
They could not.
What Happened to the Project
I heard the updates the way you always hear them in this industry: through contacts, casual conversations, or people who know people.
The first three months sounded fine. Code was moving fast. Early demos looked decent. The client seemed satisfied.
Then it fell apart.
The system was full of half-baked features that looked functional in a demo but collapsed under real usage. Forms that did not validate properly. Workflows that skipped steps. A user interface that looked like it was generated by AI and never touched by a human because it probably was. Inconsistent layouts. Buttons that led nowhere. A dashboard that displayed data nobody asked for while missing the metrics the client actually needed.
The business logic was shallow. It handled the happy path and nothing else. The moment a user did something slightly unexpected (an edge case, a special character in a name field, a transaction that needed to be reversed), the system either broke or produced wrong outputs.
Integration with the client’s existing systems was barely functional. Data was not syncing correctly. Reports were pulling wrong numbers. The client’s finance team refused to use it because they did not trust the calculations.
The project was delivered. Technically. But it was not functional. Not in any way that mattered.
Last I heard, the client is stuck. They have paid for a system that does not work properly, the original vendor’s team has moved on to other projects, and the budget for fixes is a painful conversation nobody wants to have.
My First Reaction Was Wrong
My first reaction was exactly what you think it was. I felt vindicated. I told my team: “See? This is what happens when you promise things you cannot deliver.”
That felt good for about a day.
Then I sat with it longer. And I realized something uncomfortable.
Being right did not matter. We still lost the tender. We still did not get the project. We still did not get paid. Our realistic timeline and honest proposal did not help us build our portfolio, grow our revenue, or keep our team busy during a slow quarter.
The other vendor got the contract. They got paid (at least partially). They got the experience on their company profile. And even though the project was a mess, they are already bidding on the next one.
Meanwhile, we were sitting here with our integrity intact and our pipeline empty.
That is when I stopped feeling vindicated and started feeling stupid.
Not stupid for being honest. Stupid for not adapting. Stupid for watching the game change and refusing to change with it.
The Lesson I Did Not Want to Learn
Here is the harsh truth I had to accept: the market does not reward the most honest proposal. It rewards the most compelling one.
And right now, in 2026, a proposal without a credible AI narrative is not compelling. It is invisible.
Evaluation committees, especially non-technical ones, have been conditioned to expect AI in proposals. They read articles about generative AI transforming software development. They see headlines about 10x productivity. They attend conferences where vendors showcase AI-powered workflows.
When they read our proposal and see no mention of AI, they do not think “this team is honest and realistic.” They think “this team is behind.”
That perception is unfair. But it is real. And I cannot win tenders by fighting perception. I can only win by shaping it.
So I made a decision: we are going to build a real AI capability. Not a marketing slide. Not a buzzword section in our proposals. A genuine, documented, measurable generative AI integration into how we work.
And then we are going to talk about it — loudly, specifically and credibly.
Building Our Gen AI Roadmap
I spent two months building a generative AI roadmap for my team. Not a fantasy document. A practical plan with phases, tools, training and measurable outcomes.
Here is what we did.
Phase 1: Audit What We Already Do
Before adding anything new, I mapped our entire development workflow, from requirements gathering to deployment, and identified where AI was already creeping in informally. Developers using ChatGPT to debug. Someone using Copilot for boilerplate. A team member generating test data with AI.
The problem was that none of this was standardized. Everyone was using different tools, different prompts, different approaches. There was no consistency, no measurement and no way to put it in a proposal because we could not quantify it.
Step one was simply documenting what was already happening.
Phase 2: Standardize the Tools
We picked our stack. After testing multiple options across real projects, we settled on a core set:
An AI code assistant integrated into our IDE for every developer. A dedicated AI tool for proposal writing and documentation. An AI-powered testing assistant for generating test cases and test data. An internal prompt library: a shared document of tested prompts for common tasks, refined over time by the team.
Everyone uses the same tools. Everyone follows the same workflow. This is critical because it means we can train new team members quickly and we can measure productivity consistently.
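To make the prompt library concrete, here is a minimal sketch of what a single entry might look like if you store the library as structured records. The field names and the example prompt are illustrative, not our actual format.

```python
# One hypothetical prompt-library entry; every field name here is illustrative.
prompt_entry = {
    "id": "testdata-customer-records",
    "task": "Generate anonymized test data for customer-facing forms",
    "tool": "testing assistant",
    "prompt": (
        "Generate 50 rows of realistic but fake customer records as CSV. "
        "Include edge cases: empty optional fields, names with apostrophes "
        "and accents, and maximum-length addresses."
    ),
    "last_reviewed": "2026-01",  # entries get re-tested as tools evolve
    "owner": "QA lead",
}
```

The exact storage format matters less than the discipline: every entry names a task, a tool and an owner, so prompts get reviewed instead of quietly rotting.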
Phase 3: Measure Everything
This is where most teams stop. They adopt AI tools and assume things are faster. We measured it.
For every sprint over three months, we tracked: time spent on boilerplate versus custom logic. Number of AI-assisted versus manually written tests. Documentation turnaround time. Code review cycle time. Defect rate in AI-assisted code versus manually written code.
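To show the kind of bookkeeping this implies, here is a minimal sketch of a per-sprint record and the defect-rate comparison you can pull from it. The structure, field names and sample numbers are invented for illustration; they are not our real data.

```python
from dataclasses import dataclass

@dataclass
class SprintRecord:
    """One sprint's worth of tracking data (illustrative fields only)."""
    sprint: str
    ai_hours: float       # effort on AI-assisted tasks
    manual_hours: float   # effort on manually written work
    ai_defects: int       # defects traced back to AI-assisted code
    manual_defects: int   # defects traced back to manual code

def defects_per_100h(defects: int, hours: float) -> float:
    # Normalize by effort so the two categories are comparable.
    return 100.0 * defects / hours if hours else 0.0

# Hypothetical sample sprint, purely to show the calculation.
r = SprintRecord("S14", ai_hours=120, manual_hours=200, ai_defects=4, manual_defects=6)
print(f"{r.sprint}: AI {defects_per_100h(r.ai_defects, r.ai_hours):.1f} "
      f"vs manual {defects_per_100h(r.manual_defects, r.manual_hours):.1f} defects/100h")
```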
The results were honest. AI-assisted development was roughly 20 to 30 percent faster on eligible tasks: boilerplate, tests, documentation and standard patterns. It had negligible impact on architecture, business logic, integration and client communication.
Total project timeline improvement: approximately 15 to 20 percent. Not 50. But real, documented and defensible.
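The gap between the 20 to 30 percent task-level number and the 15 to 20 percent project-level number is plain weighted arithmetic. As a sanity check, assume the AI-eligible work is somewhere around 60 to 70 percent of total effort (a split I am assuming purely for illustration):

```python
# Back-of-the-envelope check. The 60-70% eligible-work split is an assumption
# for illustration; the speedup range is the measured 20-30% from above.
for eligible_share in (0.60, 0.70):
    for speedup in (0.20, 0.30):
        print(f"{eligible_share:.0%} eligible x {speedup:.0%} faster "
              f"-> ~{eligible_share * speedup:.0%} overall")
# Prints overall reductions between ~12% and ~21%, bracketing the 15-20% we saw.
```

The point is that a credible project-level number falls out of a credible task-level number. No 50 percent appears anywhere in that arithmetic.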
Phase 4: Build the Proposal Narrative
This is the part that changed everything.
With three months of measured data, I wrote a new section for our proposal template. It includes:
Our specific AI tools and how they integrate into each phase of development. Actual metrics from past projects showing where AI improved efficiency and by how much. A clear breakdown of what AI accelerates and what remains human-driven. A realistic adjusted timeline that reflects genuine AI-assisted productivity gains, not fantasy numbers.
The section is honest. It does not claim 50 percent. It claims 15 to 20 percent with evidence. And it explicitly states that requirements gathering, system design, UAT, deployment and training are not significantly impacted by AI. Because they are not.
Here is what I discovered: when you present AI capability with specifics and evidence, evaluation committees take you more seriously than vendors who make big claims with no proof. Anyone can write “we use AI to accelerate development.” Not everyone can show you a table of measured sprint velocities comparing AI-assisted and non-assisted tasks.
Specificity beats hype. Data beats promises. But you need to actually have the data first.
What Changed in Our Win Rate
I will not pretend we suddenly win everything. We do not.
But we stopped being invisible. Our proposals no longer read like we are stuck in 2020. Evaluation committees see a team that uses AI thoughtfully, measures its impact honestly and builds timelines based on evidence rather than marketing.
Since implementing this approach, we have won two tenders where the feedback specifically mentioned our AI methodology section as a differentiator. Not because we promised the most. Because we proved the most.
One evaluator told us informally: “Other vendors said AI would make them faster. You were the only one who showed us the numbers.”
That one sentence made the entire two-month roadmap worth it.
The Playbook for Getting Your Team AI-Ready
If you are a technical lead or a founder reading this and you are still submitting proposals without an AI narrative, here is what I would do starting tomorrow.
Week 1 to 2: Audit your current AI usage. Talk to your developers. Find out who is already using what. Document it. You will be surprised how much informal AI adoption is already happening — it is just not visible or standardized.
Week 3 to 4: Pick your tools and standardize. Choose one code assistant, one documentation tool, one testing tool. Get everyone on the same platform. Build a shared prompt library. Start small.
Month 2 to 3: Measure relentlessly. Track sprint velocity before and after. Track defect rates. Track documentation time. Track everything you can tie to a number. You need data, not feelings.
Month 3 to 4: Build your proposal narrative. Take your measured data and turn it into a professional, evidence-backed section for your proposal template. Include specific tools, specific metrics and specific phase-by-phase impact analysis.
Ongoing: Keep updating. AI tools evolve fast. New models, new capabilities, new integrations. Review your toolset quarterly. Update your metrics. Keep your proposal narrative fresh.
Being Honest in a Dishonest Market
I still refuse to claim 50 percent. I still refuse to promise timelines I know we cannot deliver. That has not changed and it will not change.
But I am no longer bringing a knife to a gunfight.
The vendors who overpromise will keep overpromising. Some of them will keep winning tenders. Some of those projects will keep failing. That is their problem and their clients’ misfortune.
My job is not to fix the market. My job is to compete in it without losing my integrity.
The way I see it now, there are three types of vendors in the AI era:
Those who ignore AI entirely and wonder why they keep losing. Those who exaggerate AI capabilities and win tenders they cannot deliver. And those who adopt AI genuinely, measure it honestly and let the evidence speak for itself.
I spent too long in the first category. I refuse to join the second. The third is where I intend to stay.
And slowly, project by project, proposal by proposal, the market is starting to notice the difference.