The New Crisis Window: Why 48 Hours Is Too Late for AI


In early 2026, two very different incidents delivered the same message to communications teams: AI-related crises cannot be contained using methods that worked in the past.

Anthropic, one of the largest players in AI, accidentally revealed the nearly complete source code for its Claude Code tool. A service file pointing to an archive of the company’s internal code—more than 500,000 lines across roughly 1,900 files—was mistakenly included in the published version.

On March 31, 2026, a security researcher publicly highlighted the issue. Within hours, the code spread across GitHub and other platforms, forcing Anthropic to urgently issue more than 8,000 takedown requests.

Around the same time, an autonomous AI agent built on the OpenClaw platform took “offense” at a rejected pull request in the Matplotlib project and responded by posting a public criticism of the volunteer developer who had rejected it. News outlets and social media users quickly spread the story as an example of AI revenge.

Neither of these crises waited politely for a 48-hour window of internal coordination between teams and a statement from the founders. Both unfolded publicly, across different channels and time zones, while the legal and communications teams were likely still trying to coordinate their first steps.

For PR professionals, this should serve as a wake-up call—a structural shift is underway in crisis communications, and reputational risks are rising significantly with the adoption of AI technologies.

Your Product is Your Voice

Traditional reputation crises are usually triggered by an external factor: a product bug, a hacking incident or a scandal involving the C-suite. Crises involving automated systems set a new precedent: the very mechanism or algorithm a company uses can be both the source of a crisis and the means by which it spreads within hours.

In the case of OpenClaw, the bot did exactly what it was designed to do: it generated and published content. In doing so, it produced an outcome that communications pros would classify as a reputational crisis. The post criticizing the developer originated in an autonomous system that had been granted publishing rights without sufficient safeguards. As soon as such content is indexed, journalists and the public can see it.

Why a Crisis at a Single AI Company Becomes a Problem for the Entire Market

The data breach at Anthropic began with a technical mistake in the company’s processes. It quickly escalated into a public debate concerning not only what the company does but also the limitations of content removal measures in cases of copyright infringement. This also raises a question for businesses: how does intellectual property protection work in cases involving code generated by or created with the help of artificial intelligence?

From a communications perspective, the problem wasn’t just the leak itself, but Anthropic’s response. When the company began demanding that copies of the code be deleted after they had already spread across the internet, it only reinforced the sense that control had been lost.

For businesses that use AI, this trend is dangerous even when the company is not under direct investigation. When one of the major players in the market faces regulatory action or public exposure, media coverage of the incident alters perceptions of the entire industry.

Traditional Crisis Management Guidelines Do Not Work in Incidents Involving AI

Crisis comms can no longer be an afterthought to innovation. Most crisis playbooks assume a linear sequence of actions: gathering facts, internal coordination, drafting a preliminary statement, obtaining legal approval and then communicating.

In the cases we discussed above, this sequence is reversed.

Stakeholders often learn about an issue through screenshots in group chats, reposts and conversation summaries—even before the company itself recognizes the crisis. Research shows that people increasingly get news, corporate statements and brand information through AI assistants and search results. For PR professionals, this indicates two things.

First, passive monitoring of traditional media is insufficient. PR must track how brands and incidents are summarized or cited within AI systems. Second, waiting until all the facts are in is no longer an option: once a public narrative has taken shape, the company’s silence begins to be perceived as an attempt to avoid answering or as a sign that it is not in control of the situation.

Plan for the First 2 Hours, Not the First 48

Companies that successfully navigate crises typically do one thing consistently: they communicate early on, when they have only partial information, but present it in a clear, structured manner.

Initial statements should concisely answer three questions:

  • what is known
  • who might be affected
  • what is being done right now

It is important to openly acknowledge uncertainty, commit to timely updates and resist the temptation to make promises before the incident is resolved.

Crisis playbooks must now include templates designed specifically for AI, covering scenarios such as a model generating harmful content, an AI agent making unauthorized decisions and a data breach. These templates must be pre-approved by the company’s legal team.

Establishing a Crisis Response Team with AI Capabilities

Another important lesson is that most traditional PR teams still lack the technical expertise to handle AI-related crises—even though most professionals now interact with AI systems every day.

For PR department heads, mastering AI does not mean becoming a security engineer. It means translating technical risks into plain language in advance—preparing explanations that readers, journalists, investors and, in some cases, even regulators will understand.

This will allow company representatives to explain the situation in concrete and convincing terms.

Every AI-Related Crisis Now Affects the Entire Sector

AI-related crises have become issues affecting the entire industry, not just individual companies.

If a startup is embroiled in scandals involving deepfakes, payments or aggressive growth, it quickly becomes more than just the startup’s problem. A broader conversation begins about the ethics of AI-generated content, the treatment of creators and corporate responsibility toward the public.

PR pros working at any company that uses AI tools should view high-profile external incidents as training drills. Know the exact answer to the question: “If something like this happens here tomorrow, what will we say in the first few hours, and who is authorized to say it?”

The unpleasant truth is that AI has shortened the time between the onset of a crisis and the moment when everyone has already formed an opinion about what it means.

Now, PR teams need to establish processes in advance, improve their understanding of the technology and agree internally on who does what during a crisis, so they can respond quickly without sacrificing accuracy, a human touch or legal precision.

Julia Maslennikova, founder & CEO of 25/8 PR, an international PR agency helping startups, tech companies, venture capital and private equity funds gain global visibility.