Germany Warns China Is Close to an AI 'Superhacker' — and Building It in Secret

Germany's top cybersecurity official told lawmakers that China is close to building an AI model with superhacking capabilities — developed in secret. The warning lands a month after Anthropic gated its Mythos release. AI cyber capability is now a matter of great-power competition.



BERLIN — Germany's top cybersecurity official has warned German lawmakers that China appears close to developing an AI model with "superhacking" capabilities, and that the development appears to be happening in secret. Politico's reporting on the warning was carried on May 11, 2026, by MS NOW (formerly MSNBC) anchor Katy Tur. The German alert lands roughly one month after Anthropic's April 2026 announcement of Mythos — a model the company described as "strikingly capable" at hacking and cybersecurity work, and deemed dangerous enough that Anthropic released it only to a small group of trusted organizations rather than to the general public. The warning's effective claim is that a Chinese, government-aligned parallel may be underway without equivalent governance scaffolding.

German Warning & AI Cyber Capability Profile
Detail | Information
Source of warning | Germany's top cybersecurity official, briefing German lawmakers (per Politico via MS NOW)
Substance | China appears close to developing an AI model with "superhacking" capabilities; development appears to be happening in secret
Coverage | Politico (US/European) original; MS NOW (Katy Tur) on May 11, 2026
Western analogue | Anthropic Mythos — announced approximately April 2026; described as "strikingly capable" at hacking; released only to a small group of trusted organizations
Anthropic response framework | Project Glasswing — initiative with Amazon, Apple, Google, Microsoft, JPMorgan Chase to harden critical software against severe fallout
Adjacent confirmation | Google GTIG detected the first AI-generated zero-day on May 11, 2026; Hultquist: "The era of AI-driven vulnerability and exploitation is already here"
Strategic frame | AI weaponization is now a great-power competition dynamic; the Western model has governance scaffolding (Glasswing), the Chinese parallel reportedly does not

Mythos, Glasswing, and the asymmetry the German warning describes

Anthropic's Mythos release was paired with Project Glasswing — an initiative the company describes as bringing together Amazon, Apple, Google, Microsoft, JPMorgan Chase, and other major firms in an effort to harden the world's critical software against the "severe" fallout the company says the model could bring to public safety, national security, and the economy. The University of Queensland's analysis captured the technical stakes: if models like Mythos can scan the hidden plumbing of the internet — operating systems, browsers, routers, and shared open-source code — at unprecedented scale, then what is now specialised hacking could become a routine, automated process. The Western response to that prospect is the Glasswing-style coordination model: gated release, defender hardening, and industry-wide remediation lead time.

The German warning describes the asymmetry. If Chinese state-aligned developers reach equivalent capability without equivalent gating, the offensive side of the curve arrives before the defensive side has been hardened. That is, structurally, the dynamic regulators and CISOs have been worried about since LLM-based exploit research first appeared in production tooling. Google GTIG's May 11 disclosure of the first AI-generated zero-day gave the curve an actual data point — John Hultquist's framing, "the era of AI-driven vulnerability and exploitation is already here," is now operational truth, not forecast. The German warning extends that frame to state-versus-state competition.

What the warning does and does not say

The publicly available coverage does not identify the specific German official, the specific Chinese AI model, the specific timeline behind "close to developing," or the intelligence sources underlying the assessment. That ambiguity is normal for intelligence-sourced briefings shared with lawmakers, and it should not be read as weakness in the warning itself. What the coverage does establish is a concrete signal from a serious national cybersecurity authority that AI-as-cyber-superpower is a credible state-level concern — not a hypothetical future scenario. For CISOs, the implication is a structural one: AI weaponization has moved from commercial-only to state-level, and Western-only to multi-pole. Your AI/cybersecurity threat model needs to reflect that.

The adjacent industry signal matters too. Anthropic's relationship with the US government is reportedly complicated by a Pentagon-related dispute over military use of AI technology, which could affect US government defender access to Mythos-class capability. OpenAI launched a cybersecurity-specific ChatGPT model branded "Daybreak." The defender-augmentation argument — that AI capability should reach defenders first or simultaneously — is the operational corollary to the German warning. If offense scales but defense does not, the asymmetry compounds. The AI platform attack surface defenders saw in the Hugging Face Boxter typosquat campaign is one operational example of how that asymmetry already plays out at the developer-tooling level.


The CyberSignal Analysis

Signal 01 — AI cyber capability is now a state-level strategic asset, not a commercial-only product

The German warning is the clearest signal to date, confirmable from open sources, that AI offensive capability is being developed as state strategy, not merely as a commercial product. Brief your board on the structural shift. Update your enterprise risk register to include state-aligned AI capability development as a measurable exposure. Track Project Glasswing's participant list and disclosed remediation areas — the specific corporate participants give clues about which infrastructure is being hardened first, which translates directly into where defender investment should be prioritized.

Signal 02 — The defender-augmentation curve has to keep pace with the attacker-augmentation curve, and Western governance is the visible variable

Anthropic's gated release and Project Glasswing are the Western governance scaffolding the German warning implicitly contrasts with. The strategic question for policy-engaged CISOs and government affairs teams is whether equivalent scaffolding can be constructed around non-Western capabilities, and what role industry coordination should play if it cannot. The defensive operational answer in the meantime is unglamorous and well rehearsed: rigorous patching, multi-factor authentication, password managers, and the rest of the basic hygiene stack. The UQ analysis flagged this explicitly, and AI-augmented attackers do not diminish the value of those controls — they raise it.

What to do this week

  1. Update your AI/cybersecurity threat model to explicitly include state-aligned AI capability development as a measurable risk vector. Brief your board on the structural shift from commercial-only to state-level AI weaponization. Document the implication for your AI vendor dependency map.
  2. If you are in critical infrastructure, defense, finance, or another strategic sector, engage your national cybersecurity authority for sector-specific threat briefings on AI-augmented attacks. Pre-establish intelligence-sharing relationships before you need them.
  3. Audit your AI vendor portfolio. Anthropic, OpenAI, Google, and equivalent providers are now national-security-adjacent vendors. Document each provider's gating and disclosure posture, your access tier, and your fallback if a provider's access is restricted or revoked under government dispute.
  4. Reinforce baseline hygiene under the assumption that AI-augmented attackers will compress vulnerability-to-exploitation timelines. MFA, password managers, patch cadence, EDR — every control gets more valuable in an AI-augmented threat environment, not less.
  5. For policy-engaged CISOs: track Project Glasswing developments and the EU AI Act implementation timeline. Both shape the regulatory and industry framework your future AI security investments will sit inside.
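
Items 1 and 3 above amount to maintaining a structured record of each AI provider's gating posture, your access tier, and your fallback. A minimal sketch of what such a record could look like — every field name and example value here is an illustrative assumption, not a standard schema or any vendor's actual posture:

```python
from dataclasses import dataclass

# Hypothetical vendor-audit record for an AI dependency map.
# Field names and example values are assumptions for illustration only.
@dataclass
class AIVendorRecord:
    vendor: str
    model: str
    gating_posture: str   # e.g. "gated release" vs "general availability"
    access_tier: str      # your organization's current access level
    fallback: str         # documented fallback if access is revoked
    revocation_risk: bool # exposed to government-dispute access restriction?

def revocation_exposures(portfolio: list[AIVendorRecord]) -> list[str]:
    """List vendors whose access could plausibly be restricted or revoked."""
    return [r.vendor for r in portfolio if r.revocation_risk]

portfolio = [
    AIVendorRecord("Anthropic", "Mythos-class", "gated release",
                   "trusted-partner (assumed)", "OpenAI Daybreak", True),
    AIVendorRecord("OpenAI", "Daybreak", "general availability",
                   "standard API", "Google", False),
]

print(revocation_exposures(portfolio))  # -> ['Anthropic']
```

The point is not the tooling but the discipline: once the portfolio is a queryable structure rather than a slide, the fallback question in item 3 becomes a report you can regenerate whenever a provider's posture changes.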

Sources

Type | Source
Primary | MS NOW / Yahoo: Politico — German Cybersecurity Official Warns on China AI Superhacker
Reporting | Fortune: Anthropic Mythos, Project Glasswing, and the AI Cyber Capability Race
Analysis | University of Queensland: Why "AI Superhacker" Has Tech World on Alert
Reporting | MS NOW (Katy Tur, May 11, 2026): Politico Segment Video