
Stop Slapping “AI-Powered” on Everything. Consumers Are Starting to Notice.

March 7, 2026

A reality check on one of tech’s most abused marketing tricks, and why the people approving it often have no idea what AI actually is.

Photo by Immo Wegmann on Unsplash
In May 2025, a $1.5 billion AI startup backed by Microsoft collapsed.

Builder.ai had spent nearly a decade promising clients it could build custom apps “as easy as ordering pizza.” Its AI assistant, named Natasha, was marketed as a breakthrough in no-code app development. Investors including Microsoft, SoftBank and the Qatar Investment Authority poured in over $445 million.

When the company filed for bankruptcy, auditors found it had told investors it generated $220 million in 2024 revenue when actual revenue was closer to $50 million, a roughly fourfold overstatement. The Wall Street Journal had raised questions about its AI claims as early as 2019. The company owed $85 million to Amazon and $30 million to Microsoft in unpaid cloud bills. Around 1,000 employees lost their jobs.

The AI was not what it claimed to be. The revenue was not what it claimed to be. The entire story was, to borrow a phrase from the audit documents, not what it claimed to be.

And nobody asked hard enough questions while it was happening.

The 2025–2026 AI-Washing Explosion

Builder.ai was not a one-off. It was a preview.

By 2025 the practice of dressing up products as “AI-powered” without credible AI underneath had become standard enough that regulators coined a name for it: AI washing. It follows the same playbook as greenwashing. Slap a label on the tin. Hope consumers do not look inside.

Here is what the data actually shows about the scale of the problem:

  • 374 S&P 500 companies mentioned AI in earnings calls between September 2024 and September 2025, according to a Financial Times analysis. The majority described AI implementation as “entirely positive.”
  • A February 2026 NBER study surveyed nearly 6,000 CEOs, CFOs and executives across the US, UK, Germany and Australia. Around 90% said AI had had no impact on employment at their company over the past three years, and about 89% saw no change in productivity.
  • Workforce firm ManpowerGroup’s 2026 Global Talent Barometer, which surveyed nearly 14,000 workers in 19 countries, found workers’ regular AI use increased 13% in 2025. Confidence in the technology’s usefulness fell 18%.

So companies are publicly celebrating their AI success in earnings calls while their own executives privately admit, in confidential surveys, that nothing measurable has changed. The gap between public narrative and private reality has never been wider.

The FTC responded. In September 2024 it launched “Operation AI Comply”: five simultaneous enforcement actions targeting companies making false AI claims. Cases continued through 2025 and into early 2026. In January 2025 the FTC finalised its case against DoNotPay, which had marketed its service as “the world’s first robot lawyer” while its product was, in the FTC’s finding, not adequately trained on the law and unable to do what it promised. DoNotPay paid $193,000 in consumer refunds. The SEC separately fined investment firms Delphia USA ($225,000) and Global Predictions ($175,000) for overstating their AI capabilities to investors.

These are not isolated enforcement actions. They are the visible tip of an industry-wide habit.

Real Incidents from 2025: The Year the Label Broke Down

Several high-profile cases from 2025 made the AI washing problem impossible to ignore.

The McDonald’s Drive-Thru Disaster (continued fallout)

McDonald’s ended its IBM AI drive-thru partnership in June 2024 after two-and-a-half years. The AI had been deployed at over 100 US locations with the promise that it would take orders autonomously. One viral TikTok video showed the system adding 260 Chicken McNuggets to a customer’s order as she pleaded with it to stop. The New York Times headline literally read “260 McNuggets? McDonald’s Ends A.I. Drive-Thru Tests.” The problem was not marketing fraud. McDonald’s genuinely tried to build this. The problem was shipping an “AI-powered” product to customers before the AI was ready for those customers’ real environment: background noise, regional accents and people ordering Mountain Dew at a restaurant that does not sell Mountain Dew.

CES 2026: The AI Everything Show

At CES 2026 in January, analysts from Yanko Design described the show floor as “AI-powered stickers on everything.” A specific pattern emerged: TVs marketed with “AI motion smoothing” were running the same interpolation algorithm they had used for years. Just renamed. Smart home devices with “AI features” turned out to be cloud-dependent chatbot wrappers anyone could license from OpenAI or Google in a week. The tell, analysts noted, was simple: disconnect the device from the internet. If the AI stops working, it was never on the device in the first place. It was just a chatbot rental with the company’s logo on top.

Instacart AI Pricing Controversy (December 2025)

In December 2025 Instacart faced backlash after reports revealed it was using an “AI-powered pricing experiment” that showed different customers different prices for identical grocery items at the same store. The programme was designed to test price sensitivity. Customers were not clearly informed the prices were being dynamically tested by AI. Consumer advocacy groups and lawmakers questioned whether AI-driven pricing unfairly targeted vulnerable shoppers. The incident highlighted something important: “AI-powered” does not always mean convenient or helpful. Sometimes it means the AI is being used on consumers, not for them.

Malaysia: The Most Trusting Market in the Region Has No Safety Net

The cases above happened in markets that have regulators actively watching for this. Malaysia does not yet have that protection, and the exposure gap is significant.

Start with the demand side. The 2025 e-Conomy SEA report, published by Google, Temasek and Bain & Company, found that 92% of Malaysian digital consumers said they would share personal data such as shopping history, viewing habits and social connections with AI agents. That figure is the highest comfort level recorded across the entire Southeast Asian region. Malaysian consumers are the most willing to trust and engage with AI-branded products. They are also, by definition, the most exposed when that trust is exploited.

On the supply side, the incentive to overclaim is obvious: revenue growth for apps with marketed AI features in Malaysia surged 103% in the first half of 2025 compared to the same period in 2024, according to the same report. The premium for the AI label is real, measurable and large. A 103% revenue lift for marketed AI features — not verified AI, not audited AI, simply marketed AI — is one of the most direct incentives for AI washing you will find anywhere in the data.

Malaysia also captured 32% of all Southeast Asian AI funding between the second half of 2024 and the first half of 2025, according to the e-Conomy SEA figures. That is $759 million flowing into a market where investor enthusiasm for the AI label is high and regulatory scrutiny is low.

That last part is the gap nobody is talking about.

Malaysia currently has no binding AI legislation. What exists are the National Guidelines on AI Governance and Ethics, published by the Ministry of Science, Technology and Innovation. They are voluntary. Companies are asked to follow them. There is no penalty if they do not. The Securities Commission and MDEC have not yet published AI washing enforcement actions comparable to the FTC’s Operation AI Comply or the SEC’s fines against investment firms. The National AI Office only launched in December 2024.

Compare this to the United States, where the FTC had launched five simultaneous enforcement actions three months before Malaysia’s National AI Office was even established.

What this means in practice: a company operating in Malaysia that slaps “AI-powered” on a product backed by a basic chatbot wrapper faces less regulatory risk than its equivalent in the US, UK or EU. That does not make it honest. It makes it cheaper to be dishonest.

The parallel to financial regulation is instructive. Malaysia’s capital markets are well-regulated because the SC spent decades building enforcement capacity. That work happened because gaps in enforcement produced visible harm. AI washing is at the beginning of the same curve. The harm is accumulating before the enforcement catches up.

Malaysian consumers spending on AI-powered products deserve the same truth-in-advertising protection as consumers anywhere. Right now, the burden is on them to ask the questions that regulators elsewhere are beginning to ask on consumers’ behalf: What does “AI-powered” actually mean here? What does it do that the previous version did not? Can you show me?

Those are fair questions. Any honest company should have no trouble answering them.

Why They Do It: The Three Real Reasons

This is not complicated to understand. Companies add “AI-powered” to their products because it works. At least in the short term.

Reason 1: Investors reward the label, not the substance

Between 2022 and 2025, venture capital poured into anything with AI in the pitch. According to PitchBook data, nearly two-thirds of all US deal value in the first half of 2025 went to AI and machine learning startups. Companies that used the phrase “AI-powered” in their materials commanded higher valuations than comparable companies that did not.

Venture capitalist Alan Patricof, with over 60 years of investment experience, put it bluntly: “Just because ‘AI’ is attached to the name, or they incorporate it into their business plan, it gets a lot of people excited.” The incentive structure was clear. If you could get away with calling your product AI-powered, the numbers looked better.

Builder.ai demonstrated the logical extreme of this: when the company began struggling financially, it told investors its 2024 revenue was $220 million. The real number was about $50 million. The AI story had been doing the same work: inflating perceived value beyond what reality supported.

Reason 2: No one is fact-checking the product teams

Marketing writes “AI-powered” in the campaign brief. Legal approves it because the product does have some code that someone once called a model. The CEO signs off because the board wants to see AI in the annual report. The engineers know it is a rule-based system with a chatbot wrapper. But nobody in that chain has the incentive or authority to say: this is not actually AI. We should not call it that.

This happens in company after company. The people with technical knowledge are not in the marketing meetings. The people in the marketing meetings do not have technical knowledge.
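
To make that concrete, here is a minimal sketch of what “a rule-based system with a chatbot wrapper” often looks like in practice. The keywords and canned replies are hypothetical; the point is that there is no model anywhere in it.

```python
# A hypothetical "AI-powered customer service" backend: keyword
# matching behind a chat UI. No model, no learning, no understanding
# of intent -- just string containment checks.
RULES = {
    "refund": "To request a refund, go to Orders > Request refund.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "password": "Use the 'Forgot password' link on the login page.",
}

FALLBACK = "Let me connect you with a human agent."

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the
    message, or hand off to a human when the script breaks."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("How long does shipping take?"))  # canned answer
print(reply("My order arrived damaged"))      # falls through to a human
```

Rephrase the question in words the rules do not contain and the “AI” collapses. The engineers in that approval chain can see this; the marketing language hides it.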

Reason 3: Consumers could not tell the difference. Until recently.

For a window of about two years, the average consumer had no frame of reference for what AI could and could not do. “AI-powered” sounded impressive. Who was going to argue?

That window is closing. By 2025, millions of people had used ChatGPT, Gemini and Claude regularly. They had experienced both what AI can genuinely do and what it gets embarrassingly wrong. The gap between the label and the experience is now visible in a way it was not in 2022. ManpowerGroup’s data shows workers’ confidence in AI falling 18% in 2025. Adoption rose, trust fell. People are using it more and trusting it less.

Do They Actually Know What AI Is? The Management Problem

Here is the question nobody wants to ask in the boardroom: do the executives approving these AI features actually understand what AI is?

The data suggests: not really.

The NBER February 2026 study, across nearly 6,000 executives from four countries, found that while about two-thirds of executives reported personally using AI, their average usage was only 1.5 hours per week. One quarter reported not using AI in the workplace at all. These are the same executives who, in public-facing earnings calls, described their AI strategies as “entirely positive.”

The PwC 29th Global CEO Survey, released at Davos in January 2026 and based on responses from 4,454 CEOs across 95 countries, found that 56% of companies reported seeing neither increased revenue nor decreased costs from their AI investments. Yet almost nobody was questioning whether to invest further.

The Adecco Group’s 2025 survey of 2,000 C-suite leaders across 13 countries found that only 10% of companies qualified as “future-ready,” meaning they had structured plans to support workers and build AI skills. Of the companies investing in AI, 60% expected employees to update their skills for AI, yet 34% had no actual policy on its use.

This creates a specific failure mode: executives who hear “AI” and think it means something magical are approving product claims about AI without the ability to evaluate those claims. They know the label is valuable. They do not know what is behind the label. So they approve it.

The most charitable version of what happens is this: a product manager genuinely believes that adding a machine learning recommendation engine counts as “AI-powered.” They are not lying. They just do not understand that there is a difference between a statistical model trained on purchase history and the kinds of adaptive, context-aware AI that the label implies to most consumers.

The least charitable version: they know it is a stretch and use it anyway because the stock price responds.

Both versions happen. And in both cases, the consumer pays the difference.
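
For a sense of how easily the charitable version happens, here is a toy sketch of the kind of “machine learning recommendation engine” a product manager might honestly call AI-powered. The purchase data and item names are hypothetical; it is a real statistic computed from purchase history, and nothing more.

```python
# A toy "recommendation engine": co-purchase counts over order
# history. Technically a statistical model learned from data, which
# is why nobody feels they are lying when they call it AI.
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories, one list per customer.
orders = [
    ["coffee", "filter", "mug"],
    ["coffee", "filter"],
    ["tea", "mug"],
    ["coffee", "mug"],
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(set(order)), 2):
        pair_counts[(a, b)] += 1

def recommend(item: str, top_n: int = 3) -> list[str]:
    """Return the items most often co-purchased with `item`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("coffee"))  # ['filter', 'mug'] on this toy data
```

The gap the label papers over is the distance between this and a system that adapts to context, and nothing on the box tells the consumer which one they are getting.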

What “AI-Powered” Has Come to Mean in Practice: A Taxonomy

Because words matter, here is what companies actually mean when they say “AI-powered” in 2025–2026.

What they say vs what they built:

  • “AI-powered recommendations”. In most cases: a list sorted by purchase frequency or click rate with a fancier label. Sometimes real: Netflix, Spotify and similar services that train on billions of user interactions and where the model genuinely changes based on your behaviour.
  • “AI-powered customer service”. In most cases: a keyword-triggered FAQ tree that routes to a human when the script breaks. Sometimes real: a system that can read unstructured text, understand intent across different phrasings and resolve issues without a human for the majority of cases.
  • “AI-powered writing assistant”. In most cases: a wrapper calling GPT-4 or Claude’s API with a custom prompt and the company’s logo on the interface. The AI is real but the company did not build it. What they built is a product on top of an AI someone else built (a minimal sketch of this pattern follows below).
  • “AI-powered analytics”. In most cases: a dashboard with better charts than before, possibly with a natural language query feature. Sometimes real: predictive models that find patterns in data a human analyst would miss.
  • “AI-powered security”. In most cases: rule-based threat detection that existed before “AI” became a marketing word. Sometimes real: models trained to detect novel attack patterns that have not been seen before.

None of the “in most cases” versions are dishonest in the technical sense. They may involve code that a data scientist would classify as machine learning. But they imply a capability level to the consumer that the product does not deliver.
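
To make the wrapper case from the taxonomy concrete, here is a minimal sketch of an “AI-powered writing assistant” that is really a branded prompt in front of someone else’s model. It assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the product name and prompt are hypothetical.

```python
# A hypothetical "AI-powered writing assistant": one API call to a
# model someone else built, behind a branded system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND_PROMPT = (
    "You are WriteGenie, a friendly writing assistant. "
    "Rewrite the user's text to be clear and concise."
)

def writegenie(text: str) -> str:
    """The entire product. The AI is real; it just is not theirs."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": BRAND_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Cut the network connection and the "assistant" stops existing --
# the same tell the CES analysts described for smart home devices.
```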

The Honest Version: Why It Matters to Tell the Truth

There is a business case for honesty here, not just an ethical one.

The companies that are surviving the 2025–2026 AI credibility correction are the ones whose AI claims matched their reality. The ones being punished by the market, by regulators and by consumers are the ones that oversold.

What honest AI marketing looks like:

  • Specific claims over vague ones. “This feature analyses the last 90 days of your sales data and flags the three accounts most likely to churn in the next 30 days, based on a model trained on 50,000 similar accounts” is a promise you can verify. “AI-powered sales insights” is not.
  • Transparency about what the AI actually does. If your product calls the OpenAI API and wraps it in your UI, that is not dishonest. But you should not imply you built the underlying model. Consumers can tell the difference between a product built on real proprietary AI and a chatbot rental.
  • Honesty about failure rates. Every AI system gets things wrong. Products that tell users “this suggestion is 78% confident based on your history” build more trust than products that present AI outputs as certain, because users will calibrate. They will come to trust the high-confidence outputs and question the low-confidence ones (a small sketch of this follows after this list). Products that claim AI certainty they do not have get destroyed by the first wrong answer.
  • Not using AI where you do not need it. A user who says “why is this a chatbot when a drop-down would be faster” is a user who has already lost trust. AI should solve a problem the previous approach could not. If there is no such problem, there is no honest reason for the label.
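
As a sketch of the confidence point, here is one way a product could surface a model’s own uncertainty instead of presenting its output as certain. The labels and the 0.5 threshold are hypothetical placeholders for whatever a real model produces.

```python
# Present a prediction together with its confidence so users can
# calibrate, rather than claiming certainty the model does not have.
def present_suggestion(label: str, confidence: float) -> str:
    """Format a model output with an honest confidence statement."""
    pct = round(confidence * 100)
    if confidence < 0.5:  # hypothetical threshold for hedging harder
        return f"Low-confidence guess ({pct}%): {label}. Please verify."
    return f"This suggestion is {pct}% confident based on your history: {label}"

# Tied to the churn example above: flag accounts with the model's
# actual confidence attached.
print(present_suggestion("Acme Corp is likely to churn in 30 days", 0.78))
print(present_suggestion("Globex may churn in 30 days", 0.41))
```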

The companies that built real AI features (not labels) are not having credibility problems right now. GitHub Copilot does something a traditional autocomplete cannot do, and every developer using it can feel the difference. That is why it earns its “AI-powered” description. The bar is not high. It just requires the feature to actually deliver what the label implies.

The 2026 Reckoning

Something has shifted. The hype cycle is not over, but the immunity that “AI-powered” once provided is wearing off.

The NBER study data from February 2026 tells the full story in one number: roughly 90% of executives across four countries admit AI has had no impact on productivity or employment. They are still using the language in investor calls. They are still approving the marketing. But in the confidential surveys, the honest answers are coming out.

ManpowerGroup found that workers’ confidence in AI dropped 18% in 2025 while use increased 13%. People are using the tools and forming opinions based on direct experience. The experience is not matching what the marketing said.

The FTC’s Operation AI Comply is continuing. Legal teams at companies using “AI-powered” in marketing without clear substance behind it should be paying attention. The enforcement pattern from 2024 to 2026 is consistent: vague AI claims without substantiation are deceptive trade practices under existing law. The AI label does not grant immunity from truth-in-advertising rules.

Regulators and consumers are arriving at the same conclusion simultaneously.

The “AI-powered” badge used to open doors. In 2026, it is increasingly a prompt for the question: prove it.

A Note on Honesty

There is a version of this story where being honest about AI capabilities is the smart move rather than the virtuous one.

Companies that are specific and honest about what their AI does are harder to attack when the AI underperforms. They have set the right expectation. Companies that make vague claims like “AI-powered intelligence” cannot define what success looks like. Neither can their users.

The consumer protection angle matters too. When someone buys a product because it claims to use AI and the feature does not work as implied, that is a purchase made under false pretenses. Those consumers do not come back. The short-term valuation bump from the label costs more in long-term customer trust than it was worth.

The most durable AI products of 2025–2026 are the ones where users cannot imagine going back to the previous version. The AI is earning its description every time someone uses it. That is the only version of “AI-powered” that ages well.

Found this helpful?

If this article saved you time or solved a problem, consider supporting — it helps keep the writing going.

Originally published on Medium.
