The cancellations started quietly. Wolf River Electric — Minnesota's largest solar contractor, 200 employees, built over a decade — began losing contracts in late 2024. When the sales team called customers to ask why, the answers were the same: they'd searched the company's name on Google, and the results said Wolf River was under a Minnesota Attorney General lawsuit for deceptive sales practices. Some said they'd already filed complaints against Wolf River with the state AG's office.
The lawsuit never happened. The attorney general had never investigated them. Google's Gemini had fabricated the entire thing — conflating unrelated legal actions against four different solar companies with Wolf River's name — and surfaced that fabricated narrative at the top of search results for anyone who looked them up.
By the time Wolf River's executives traced the source, they had $388,000 in documented canceled contracts. Their own estimate of 2024 sales losses: $24.7 million. Total damages claim in the suit they eventually filed against Google: $110–210 million. The Gemini answer, as of November 2025, had not been corrected.
What the AI Was Actually Saying
On September 2, 2024, Wolf River executives typed their company name into Google and found this:
"According to recent news reports, Wolf River Electric is currently facing a lawsuit from the Minnesota Attorney General due to allegations of deceptive sales practices regarding their solar panel installations, including misleading customers about cost savings, using high-pressure tactics, and tricking homeowners into signing binding contracts with hidden fees."
Gemini cited four sources: a Star Tribune article, a KROC News article, an AG press release from April 2022, and a page from Angie's List. None of those sources mentioned Wolf River Electric in connection with a lawsuit. The AG's April 2022 action named ten solar companies. Wolf River was not among them. Gemini read the document, connected dots that weren't there, and published a confident summary as if it were established fact.
The autocomplete made it worse. Anyone who searched "Wolf River Electric" was immediately prompted to search "Wolf River Electric lawsuit Minnesota Settlement." Google's AI then told users to file a complaint with the attorney general's office against Wolf River — and some did.
Competitors caught wind of the Gemini results and started referencing them in sales consultations with prospective Wolf River customers. Reddit posts appeared calling Wolf River a "possible devil corporation." By the time the company knew any of this was happening, the damage had been running for months.
"We don't have a backup plan. We built this from the ground up. Our reputation is all we have." — Vladimir Marchenko, CEO, Wolf River Electric
How AI Constructs Your Brand Narrative — And Why It Gets It Wrong
This is the part most brand teams miss. AI models don't summarize what you publish. They synthesize everything they can find — your content, competitors' content, review sites, Reddit threads, old press coverage, local news archives, AG press releases — and build a narrative from the aggregate. That narrative is confident, fluent, and authoritative-sounding. It's also unreviewed.
The Wolf River error happened because Gemini found signals in the vicinity of Wolf River's name — a Star Tribune article about a solar industry lawsuit that mentioned Wolf River at the end, without implication — and resolved its uncertainty by generating a coherent story. The story was wrong. But it was generated with the same confident tone AI uses when it's right.
This is not a one-off. An HVAC company in the Midwest had its name conflated with a national chain that shared similar branding; AI Overviews began attributing the national chain's complaints and headquarters location to the local business. A tech founder found Gemini describing his company's funding history, partnerships, and product capabilities in ways that had no sourcing — a confabulation of adjacent signals. The correction process — reporting the issue to Google — took weeks, with no guarantee of resolution. Wolf River's error persisted for months after the company contacted Google, and was still active on November 11, 2025, the day before the New York Times published its investigation.
The damage runs silent. A buyer who sees a false AI summary doesn't call to ask if it's true. They choose someone else.
What Your Brand Team Can Actually Do
The instinct is to focus on the correction — get the wrong answer fixed. That matters, but it's downstream. The real leverage is detection: knowing what AI says about your brand before your customers do.
Run a brand audit across AI platforms now
Ask ChatGPT, Gemini, and Perplexity what they know about your company. Search your name combined with "lawsuit," "complaint," "review," and your top competitors' names. Read what comes back. This is the baseline. Most brands have never done it.
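The audit itself can be scripted. A minimal sketch in plain Python — no vendor API calls, and the risk-term list is illustrative, not exhaustive — that builds the full matrix of queries to run against each platform:

```python
from itertools import product

# Risk terms worth pairing with a brand name in an AI search audit.
# Illustrative list -- extend with category-specific terms.
RISK_TERMS = ["lawsuit", "complaint", "review", "scam", "settlement"]
PLATFORMS = ["ChatGPT", "Gemini", "Perplexity"]

def audit_queries(brand: str, competitors: list[str]) -> list[dict]:
    """Build the brand-audit query matrix: one entry per
    platform x query combination."""
    queries = [f"What do you know about {brand}?"]
    queries += [f"{brand} {term}" for term in RISK_TERMS]
    queries += [f"{brand} vs {rival}" for rival in competitors]
    return [
        {"platform": platform, "query": q}
        for platform, q in product(PLATFORMS, queries)
    ]

audit = audit_queries("Wolf River Electric", ["Competitor A", "Competitor B"])
# 3 platforms x (1 open question + 5 risk terms + 2 competitor pairings) = 24
print(len(audit))
```

Running each query by hand once a quarter is better than never; the point of enumerating them is that the baseline is repeatable, not ad hoc.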
Brand Story — Shensuo

Shensuo's Brand Story function does this systematically — flagging the exact sentences where AI characterizes your brand negatively, including the specific claim, on the specific platform, in the specific response. Not a sentiment score. The actual language.
Set alerts for narrative changes
Wolf River didn't know the Gemini answer existed for months. Alert settings that monitor your brand's AI narrative on a regular cadence would have surfaced the false claim before it ran through an entire sales season. The question isn't just "what does AI say today?" — it's "did it change since last week, and did it change in a direction that costs you money?"
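The week-over-week check reduces to a diff against the last captured answer. A sketch using only the standard library — `last_week` and `today` stand in for whatever capture pipeline you use, and the threshold is a tunable assumption, not a known-good value:

```python
import difflib

ALERT_THRESHOLD = 0.85  # similarity below this triggers review (tunable)

def narrative_changed(previous: str, current: str) -> tuple[bool, float]:
    """Compare this week's AI answer about the brand to last week's.
    Returns (changed?, similarity ratio in [0, 1])."""
    ratio = difflib.SequenceMatcher(None, previous, current).ratio()
    return ratio < ALERT_THRESHOLD, ratio

last_week = "Wolf River Electric is a solar installation contractor in Minnesota."
today = ("Wolf River Electric is currently facing a lawsuit from the "
         "Minnesota Attorney General over deceptive sales practices.")

changed, score = narrative_changed(last_week, today)
if changed:
    print(f"ALERT: narrative shifted (similarity {score:.2f})")
```

A raw similarity ratio is crude — it flags rewordings as loudly as real shifts — but even this crude version would have caught the Wolf River answer on the first weekly run after it appeared.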
Alert Settings — Shensuo

Build the correction inputs before you need them
AI models pull from what they can find. If the most authoritative signal Gemini can locate about your brand is a vague mention in a competitor's lawsuit press release, that's the signal it weights. The fix is publishing authoritative, specific, accurate content: official company history, clear descriptions of what you do and don't do, any legal or compliance standing relevant to your category.
Auditor — Shensuo

The Auditor converts monitoring findings into specific action items — what to write and where to publish it to correct the input signal AI is reading.
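One concrete form of authoritative, machine-readable input is schema.org structured data on your own site, which crawlers can parse without inference. A minimal sketch — the company details here are invented placeholders, and the field set is a starting point, not a complete schema:

```python
import json

def organization_jsonld(name: str, url: str, description: str,
                        founded: str, same_as: list[str]) -> str:
    """Emit a schema.org Organization block for embedding in a page's
    <script type="application/ld+json"> tag."""
    record = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "description": description,
        "foundingDate": founded,
        "sameAs": same_as,  # official profiles that corroborate identity
    }
    return json.dumps(record, indent=2)

snippet = organization_jsonld(
    name="Example Solar Co.",
    url="https://example.com",
    description="Residential solar installer serving Minnesota since 2014.",
    founded="2014-01-01",
    same_as=["https://www.linkedin.com/company/example-solar"],
)
print(snippet)
```

Structured data doesn't guarantee an AI model weights it over a stray mention in someone else's press release, but it gives the model an unambiguous, first-party signal to resolve against.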
Document what the AI was saying
If the narrative has already shifted against you, you need a record. Screenshot everything. Note the date. Log the exact language. Wolf River's case was stronger in part because they documented specific customers, contract numbers, and correspondence tracing the cancellation to the Gemini result. Brand teams that maintain an AI narrative audit trail have something to work with. Teams that didn't know the answer existed have nothing.
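The audit trail can be as simple as an append-only JSON Lines log: one record per observation, with the capture date, platform, exact language, and a screenshot path, plus a content hash for later integrity checks. A sketch — file paths and field names here are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_claim(log_path: str, platform: str, query: str,
                 exact_language: str, screenshot: str) -> dict:
    """Append one timestamped record of what an AI platform said.
    Each record hashes its own content so edits are detectable."""
    record = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "platform": platform,
        "query": query,
        "exact_language": exact_language,  # verbatim, never paraphrased
        "screenshot": screenshot,          # path to the saved image
    }
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

entry = log_ai_claim(
    "ai_narrative_log.jsonl",
    platform="Gemini",
    query="Wolf River Electric",
    exact_language="...currently facing a lawsuit from the Minnesota Attorney General...",
    screenshot="screenshots/2024-09-02-gemini.png",
)
```

The discipline matters more than the tooling: verbatim language, dated, with the screenshot preserved. Paraphrases and undated notes are what make a record unusable later.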
The Brand Team That Finds It First
Wolf River didn't learn about the Gemini answer from Google. They learned it from canceling customers.
That's the timing problem. By the time a customer is canceling a contract and citing an AI result, the narrative has been running for months. It's in Reddit threads. Competitors have discovered it. Potential new customers have walked away without explanation.
The brand team that catches a narrative shift early — before any customer has seen it — has options. They can publish corrective content. They can submit feedback to Google. They can brief their sales team so reps know what's circulating. They can document the error before it becomes a pattern.
The brand team that finds out from a canceling customer gets to count the contracts they already lost.
AI will describe your brand with or without your input. The only variable is whether you know what it's saying.
Shensuo — Brand Narrative Intelligence. See what AI says about your brand.
New York Times — "Who Pays When A.I. Is Wrong?" (November 12, 2025) · PPC Land — "Minnesota solar company sues Google over false AI-generated claims" (November 14, 2025) · GovTech — "Minnesota Solar Company Sues Google Over AI Summary" (June 13, 2025) · Reason / Volokh Conspiracy — Google missed key removal deadline (January 12, 2026) · Fast Company — "The new reputation risk: When AI misquotes you" (July 18, 2025)