
From hype to liability: AI washing as a D&O risk

Our Risk & Resilience report ‘Spotlight on Cyber Threats and Tech Advances 2026’ looks at the new cyber reality, including the threats and benefits of AI advancement. Read on for Bethany’s insights on the risks of AI washing.

AI is being talked up much like earlier waves of digital, cloud, and big data, but AI claims are often hard to substantiate, leaving both firms and their investors vulnerable to brewing exposures.

In the laundromat of risk, this kind of AI‑washing1 is just the latest in a long line of disclosure challenges. Greenwashing2 has triggered multi‑million and billion‑dollar fines3, while tariff‑washing4, though still an emerging risk, has already sparked class‑action lawsuits in North America5. Now AI‑related disclosures face similar scrutiny.

On the investor side, the mood has shifted: where companies could once make sweeping claims about “transformative” AI and still secure strong IPO valuations, investors must – and are – wising up fast, lest they find themselves in the firing line of losses.

The growing gap between AI claims and reality is quickly becoming a tangible risk, with regulatory, legal, financial and reputational consequences. 

Regulatory signals  

The EU has been ahead of the pack since introducing the AI Act6 in mid-2024, which includes fines of up to €35 million or 7% of global annual revenue for misrepresenting to regulators whether a system is AI, how it works, or how it is classified7.

Stateside, meanwhile, rumblings emerged in December 2023 through a concise warning8 from then‑SEC Chair Gary Gensler: “Don’t do it.” The following month, the SEC warned against “frauds” making “unrealistic” AI claims9. Two months later, it announced settled charges10 against two investment advisers for AI misrepresentation. Though modest, the $400,000 in combined penalties were the first charges for AI mis‑marketing and should be seen as warning shots.

AI clarity over hype

For firms claiming AI use:

With no standard definition of “AI,” companies often stretch the label to cover basic technology. Directors and Officers (D&Os) should clearly define how the term is used, avoid vague buzzwords, and back all claims with evidence to reduce legal and reputational risk.

Conversely, AI tools can now scrutinise everything at speed and scale, from decks and filings to social posts and product ads – no channel is exempt. And once AI is tied to growth, differentiation or valuation, any later defences around limitations or underperformance can easily be upended. Boards need company‑wide policies and training to keep AI claims accurate from the outset.

Insurance is shifting too. D&Os must check that AI‑specific liabilities are explicitly covered under cyber or tech clauses. In parallel, insurers may scrutinise AI governance and controls during underwriting, especially in high‑litigation markets like the US.

For investors backing AI‑enabled firms:

Investors can suffer substantial financial losses and even come under the spotlight for their own lack of due diligence. As cases increase and sharpen scrutiny, they should emulate regulators by pressing for accurate, balanced disclosures that can prove companies are not “overhyping capabilities without substance.”11
 
Where accountability ultimately sits with the board, professional liability insurance can help absorb the cost of shareholder litigation and related claims but should be treated as a financial safety net, not a risk strategy. The real protection starts earlier: clear governance, disciplined decision-making and rigorous due diligence at the consideration stage. 
 
The strongest defence against AI‑washing is to treat AI claims like financial ones: if use, data, governance and outcomes can’t be evidenced, the risk ultimately falls on investors, not just operators. 

 

Bethany Greenwood

Chief Executive Officer of Beazley Furlonge Limited and Group Head of Specialty Risks