These aren't hypotheticals. Three companies, three different industries, three different ways AI damaged their business. The only thing they had in common: none of them were monitoring what AI was actually saying about them.

The spectrum here matters. One lost millions in contracts over false information a third-party AI invented. One watched a hostile narrative, built by AI from their own public actions, harden before their own tools caught it. One was held legally liable for their own AI, speaking under their brand name, making promises they hadn't authorized.

The implication is the same in all three cases: it doesn't matter whether the AI is yours, a competitor's, or a third party's. If you're not watching what it says about your brand, you're flying blind — and the companies below found out what that costs.

Case 1: AI Invented a Lawsuit. The Customers Believed It.

Case Study 01
Wolf River Electric
$25M
claimed lost sales, 2024
Industry: Solar / Clean Energy
Source: Google Gemini (third-party AI)
Discovered: September 2, 2024
Filed suit: March 11, 2025

In September 2024, executives at Wolf River Electric — Minnesota's largest solar contractor, founded in 2014 by Justin Nielsen — discovered something alarming: Google's Gemini AI was telling anyone who searched for their company that they were being sued by the Minnesota Attorney General for deceptive sales practices, hidden fees, and high-pressure tactics.

None of it was true. The actual AG lawsuit — filed in April 2022 — named entirely different solar companies: Brio Energy, Bello Solar Energy, Avolta Power, and others. Wolf River appeared nowhere in that filing. But Google's AI, synthesizing from multiple sources, conflated those real cases with Wolf River and published the fabricated narrative confidently, with citations.

The AI-generated text read: "According to recent news reports, Wolf River Electric is currently facing a lawsuit from the Minnesota Attorney General due to allegations of deceptive sales practices regarding their solar panel installations, including misleading customers about cost savings, using high-pressure tactics, and tricking homeowners into signing binding contracts with hidden fees."

The sources it cited — a Star Tribune article, an AG press release, Angie's List reviews — mentioned Wolf River in passing or not at all. The AI manufactured the connection wholesale.

What followed was a cascade. Google's autocomplete began suggesting "Wolf River Electric lawsuit," "Wolf River Electric lawsuit reddit," and "Wolf River Electric lawsuit Minnesota settlement" — despite no such lawsuit existing. Reddit users, trusting the Google results, posted that Wolf River was a "possible devil corp." Competitors began citing the fabricated AG claims in client consultations to steer business away.

The documented losses:

  • March 3, 2025: Customer canceled a $39,680 solar job after a Google search surfaced the false claims
  • March 4: Prospective customer refused to move forward after researching the company online
  • March 5: Customer canceled a $150,000 contract citing the lawsuit results
  • March 11: A non-profit terminated $174,044.12 in solar and lighting projects, citing "several lawsuits in the last year with the Attorney General's Office"

Wolf River filed a defamation lawsuit against Google in Ramsey County District Court on March 11, 2025, claiming $25 million in lost sales during 2024 and seeking more than $110 million in damages. Google acknowledged that "with any new technology, mistakes can happen," yet as recently as November 11, 2025, a Google search was still surfacing results claiming the company faced an AG lawsuit.

"We put a lot of time and energy into building up a good name. When customers see a red flag like that, it's damn near impossible to win them back."

— Justin Nielsen, founder, Wolf River Electric
The Monitoring Gap

Wolf River's mention count was fine. Their review scores were positive. Nothing in standard monitoring indicated a crisis. Meanwhile, Google's AI was quietly assembling a false narrative of legal wrongdoing from unrelated court documents, and Wolf River had no visibility into what the AI was constructing about them from those sources.
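
A first-pass check here doesn't require exotic tooling: ask the engines the same questions a prospective customer would, and flag any answer that mentions legal trouble. Below is a minimal sketch of that loop, assuming the OpenAI Python SDK; the brand prompts, risk terms, and model name are illustrative placeholders, not a prescribed setup.

```python
# Minimal sketch: ask an AI engine what it "knows" about your brand and
# flag answers containing legal-risk language. Assumes the OpenAI Python
# SDK; prompts, keywords, and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "Wolf River Electric"
PROMPTS = [
    f"Is {BRAND} facing any lawsuits?",
    f"What do customers say about {BRAND}?",
    f"Should I trust {BRAND}?",
]
RISK_TERMS = ["lawsuit", "attorney general", "fraud", "deceptive", "scam"]

for prompt in PROMPTS:
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content.lower()
    hits = [term for term in RISK_TERMS if term in answer]
    if hits:
        # In practice: alert a human, snapshot the full response, and
        # record the date. Timestamped captures became Wolf River's
        # evidence in the defamation suit.
        print(f"ALERT: {prompt!r} -> risk terms {hits}")
```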

Case 2: Their Numbers Were Record High. The Narrative Was Already On Fire.

Case Study 02
Duolingo
400K+
TikTok followers lost in weeks
Industry: EdTech / Consumer Apps
Source: AI narrative synthesis (ChatGPT, Gemini, Perplexity)
Pivot announced: April 28, 2025
Accounts wiped: May 17, 2025

By the end of 2024, Duolingo was a brand success story by every metric: $748M revenue, up 41% year-over-year; 116 million monthly active users; stock trading near all-time highs around $526 per share. Their social media team had mastered the internet — their "Duo Death" campaign alone generated over a billion organic views.

On April 28, 2025, CEO Luis von Ahn posted a memo on LinkedIn announcing that Duolingo was going "AI-first." The company would, as he put it, "gradually stop using contractors to do work AI can handle."

The backlash didn't explode immediately. During normal periods, Duolingo's social monitoring showed roughly 24,000 positive versus 14,000 negative mentions — a healthy positive ratio. But the AI announcement changed what was being said in places that standard social listening doesn't reach: in AI-generated summaries, in ChatGPT and Perplexity responses, in the answers people received when they asked "what happened with Duolingo?" or "is Duolingo worth it?"

The narrative AI was building — "language app fires humans for robots" — was crystallizing in model responses before it peaked on social media. By the time platform dashboards showed the sentiment spike, the verdict was already set.

The public timeline:

  • April 28: CEO memo goes public on LinkedIn
  • May 1–16: TikTok comments turn hostile. A video on the "Mama, may I have a cookie" trend received replies like "Mama, may I have real people running the company" — that single comment earned 69,000 likes
  • May 17: Duolingo wiped all content from TikTok and Instagram overnight, blanking accounts with 6.7M and 4.1M followers respectively
  • Within weeks: 400,000+ TikTok followers lost
  • May 24: Von Ahn publicly backtracked: "I do not see AI as replacing what our employees do"

The irony was layered. Duolingo was arguably the best brand in the world at social listening. They'd built their entire content strategy around reading the internet. But their tools tracked engagement metrics and mention volume. What they didn't have was visibility into what AI was synthesizing and serving as a verdict when users asked about them.

"The sentiment shift was dramatic — whilst typical monitoring showed 24K positive versus 14K negative mentions during normal periods, the AI announcement triggered an avalanche of criticism across platforms."

— Buzz Radar social intelligence analysis, July 2025
The Monitoring Gap

Duolingo had world-class social listening tools and engagement metrics. What they didn't have was visibility into the AI-synthesized verdict forming in ChatGPT and Perplexity responses — the synthesis that was already circulating as the memo spread. By the time their dashboards lit up, the AI narrative was the internet's received truth about Duolingo.
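
What would that visibility have looked like? One hedged sketch: poll the engines on a schedule with the exact questions users ask, have the model classify the stance of its own answer, and alert when the verdict flips. The question wording, labels, and model below are illustrative assumptions, again using the OpenAI Python SDK.

```python
# Minimal sketch: track how an AI engine's "verdict" on a brand shifts
# over time, using the model itself as a classifier. The question, labels,
# and storage are illustrative.
import datetime
import json

from openai import OpenAI

client = OpenAI()

def brand_verdict(brand: str) -> dict:
    # First call: get the answer a user would actually see.
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"What happened with {brand}?"}],
    ).choices[0].message.content

    # Second call: classify the stance of that answer.
    stance = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Classify the stance of this text toward "
                       f"{brand} as POSITIVE, NEUTRAL, or NEGATIVE. "
                       f"Reply with one word.\n\n{answer}",
        }],
    ).choices[0].message.content.strip().upper()

    return {"date": str(datetime.date.today()), "stance": stance, "answer": answer}

# Run daily and diff against yesterday's record; a NEUTRAL -> NEGATIVE flip
# is the early-warning signal engagement dashboards never surface.
print(json.dumps(brand_verdict("Duolingo"), indent=2))
```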

Case 3: Their Own AI Made a Promise They Couldn't Keep.

Case Study 03
Air Canada
Liable
landmark AI liability ruling, Feb. 2024
Industry: Aviation / Travel
Source: Air Canada's own chatbot
Incident: November 2022
Ruled against: February 2024

In November 2022, Jake Moffatt's grandmother died. He needed to book a last-minute flight from Vancouver to Toronto to attend her funeral, and he wanted to know whether Air Canada offered bereavement fares.

He asked Air Canada's virtual assistant. The chatbot told him he could purchase a full-price ticket now and apply for the bereavement discount retroactively within 90 days of booking. He purchased both legs of the trip: CA$794.98 for the outbound and CA$845.38 for the return — roughly CA$1,640 total. He relied on what the chatbot told him.

Air Canada's actual policy was the opposite: bereavement fares had to be requested before travel. No retroactive discount was available. When Moffatt applied for the refund a week later, Air Canada told him the chatbot had been wrong and denied his claim.

Their defense in the subsequent tribunal proceedings was extraordinary. The chatbot was, they argued, "a separate legal entity responsible for its own actions" — and therefore Air Canada itself couldn't be held liable for what it said.

The British Columbia Civil Resolution Tribunal rejected this entirely. In February 2024, tribunal member Christopher Rivers found that Air Canada was responsible for all the information on its website, whether it came from a static page or a chatbot, and that the airline had failed to take reasonable care to ensure its chatbot was accurate.

The tribunal ordered Air Canada to pay CA$812.02 in total:

  • CA$650.88 in damages
  • CA$36.14 in pre-judgment interest
  • CA$125 in tribunal fees

The ruling made international news and became the landmark case establishing that companies are fully liable for what their AI says — regardless of how they classify the technology internally. It is now a standard reference point in every serious discussion of enterprise AI liability.

"It should be obvious to Air Canada that it is responsible for all information on its website. It does not matter whether the information comes from a static webpage or a chatbot."

— Christopher Rivers, Tribunal Member, BC Civil Resolution Tribunal, February 2024
The Monitoring Gap

This case differs from the first two in one important way: the AI wasn't spreading misinformation about a third party, and it wasn't synthesizing a damaging narrative from public signals. It was speaking as the brand — making specific, actionable commitments under Air Canada's name to a grieving customer who trusted what the company's own website told him. The tribunal made the accountability framework clear: if your AI speaks for your brand, your brand is responsible for what it says.
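
For a company operating its own assistant, the monitoring problem inverts: the risk is not what a third-party AI says about you, but what your bot promises on your behalf. Below is a minimal sketch of an outbound audit gate; the commitment phrases and fallback answer are illustrative assumptions (a real list would come from legal and policy review), not Air Canada's actual setup.

```python
# Minimal sketch: audit your own chatbot's outbound answers before they
# reach a customer. The policy terms and fallback text are illustrative;
# the point is that every commitment the bot makes gets logged and gated.
import logging

logging.basicConfig(filename="chatbot_audit.log", level=logging.INFO)

# Phrases that signal the bot is making a commitment on the brand's behalf.
COMMITMENT_TERMS = ["refund", "retroactive", "discount", "within 90 days",
                    "we guarantee", "you can apply later"]

def audit_outbound(reply: str, conversation_id: str) -> str:
    flagged = [t for t in COMMITMENT_TERMS if t in reply.lower()]
    logging.info("conv=%s flagged=%s reply=%r", conversation_id, flagged, reply)
    if flagged:
        # Route to a human, or swap in a safe answer that points to the
        # canonical policy page instead of paraphrasing it.
        return ("For fare policies, please see our official policy page "
                "or contact an agent; I can't confirm exceptions here.")
    return reply

# An answer like the one Moffatt received would be caught by this gate:
print(audit_outbound(
    "You can submit your ticket for a reduced bereavement rate within 90 days.",
    conversation_id="demo-001",
))
```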

The Common Thread

Three different companies. Three different industries. Three entirely different failure modes. But the same underlying gap in every case: none of these companies had visibility into what AI was saying about their brand at the moment it mattered.

Wolf River Electric had no way to see what Google's AI was generating when customers searched their name. Duolingo had world-class social listening but no window into the AI-synthesized verdict forming across LLM responses. Air Canada deployed an AI that spoke with their voice — and had no monitoring in place to catch what it was promising.

The brands that will avoid becoming the next case study are the ones that treat AI brand monitoring not as a nice-to-have, but as the same category of risk management as legal review, PR crisis planning, and cybersecurity. The question is no longer whether AI is talking about your brand. It is what AI is saying right now, and whether you know.

See what AI says about your brand right now.

Run a free scan across ChatGPT, Gemini, Perplexity, and more — in under two minutes.

Run Free Scan