US–China AI Rivalry Escalates as Anthropic Alleges Massive Model Copying Scheme

Anthropic claims millions of Claude interactions were harvested via fake accounts to train competing Chinese systems

American AI companies are escalating their warnings that China’s AI industry is siphoning off expensive research by using a technique known as “distillation” — and that the practice could tilt the balance in the global race for advanced models.

The latest claims come from Anthropic, which alleges that three Chinese AI firms — DeepSeek, Moonshot AI and MiniMax — covertly generated more than 16 million conversations with its chatbot Claude. According to Anthropic, the activity was carried out using more than 24,000 fake accounts, allowing the companies to harvest large volumes of responses and use them to train their own competing systems.

Anthropic’s allegations are part of a broader set of complaints raised this month by other major US AI developers. OpenAI and Google have also said they have seen similar patterns of attempted model extraction tied to Chinese firms, raising fears that rivals can compress years of research and compute spending into a much shorter — and cheaper — path to comparable performance.

The technique at issue is typically referred to as a model extraction attack, or distillation. In basic terms, a party with access to a powerful “frontier” model repeatedly queries it with large sets of prompts, collects the answers, and then uses those outputs as training material for a smaller model designed to mimic the larger system’s reasoning and style. When used within a company’s own ecosystem, distillation is a standard way for labs to build faster, cheaper versions of their flagship models. The controversy begins when the same approach is applied to a competitor’s system without authorization.
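The query-collect-train loop described above can be sketched in a few lines. Everything here is a toy stand-in invented for illustration, not any company's actual pipeline: the "teacher" is a simple function in place of a commercial model's API, and the "student" is a lookup table in place of a fine-tuned smaller model.

```python
# Sketch of distillation: query a teacher model at scale, record its
# answers, then train a student to imitate them. All names below are
# hypothetical; a real pipeline would call a hosted model's API and
# fine-tune a neural network on the harvested pairs.

def teacher(prompt: str) -> str:
    # Stand-in for a frontier model: here it just answers "a+b" prompts.
    a, b = (int(x) for x in prompt.split("+"))
    return str(a + b)

def harvest(prompts):
    # Step 1: issue many prompts and collect (prompt, answer) pairs.
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs):
    # Step 2: use the harvested pairs as supervised training data.
    # A lookup table stands in for the trained smaller model.
    return dict(pairs)

prompts = [f"{a}+{b}" for a in range(3) for b in range(3)]
student = train_student(harvest(prompts))
print(student["1+2"])  # the student reproduces the teacher on seen inputs
```

The point of the sketch is the asymmetry the article describes: the expensive part (building the teacher) is skipped entirely, while the cheap part (querying and imitating) is all the copier needs to do.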

Google has described one reason distillation is attractive: smaller models can respond faster and require less computing power and energy than large models. Anthropic argues the cost advantages are precisely why the technique is tempting for firms trying to catch up quickly.

Anthropic also frames the issue as a safety problem, not just an intellectual property dispute. It claims that models built through unauthorized extraction may lack the safeguards that commercial “frontier” systems include to prevent misuse — including assistance in developing biological weapons or enabling cyberattacks. Google, meanwhile, has said distillation attacks do not typically compromise end users directly, because they do not threaten the confidentiality or integrity of the AI service itself.

In its account of how the alleged extraction worked, Anthropic says traffic was routed through proxy addresses and coordinated through a “hydra network” — a large web of fake accounts meant to distribute activity and avoid detection. Anthropic notes that its service is banned in China, and alleges the proxy setup was used to obtain access anyway.

Once inside, the accounts allegedly generated high volumes of prompts aimed at collecting high-quality outputs for training or producing large numbers of tasks for reinforcement learning — the feedback-based process used to shape how an AI system behaves. Anthropic claims some prompts sought step-by-step explanations of how Claude arrived at answers, producing large volumes of “chain-of-thought” style training material. It also alleges that some queries involved politically sensitive topics, with requests for “censorship-safe” alternative responses.

OpenAI has separately told US lawmakers that it identified attempts by DeepSeek to copy its most advanced models and warned that the company was developing new techniques to disguise the activity. Google has said its Gemini chatbot is frequently misused by attackers for coding and scripting support, and for gathering information such as account credentials and email addresses.

Anthropic says it has built systems to detect suspicious patterns as they occur, but argues the scale of the problem is larger than any single company can handle alone — highlighting how the intensifying US–China tech rivalry is now playing out inside the architecture of artificial intelligence itself.
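Detection of the kind of coordinated activity described above typically rests on traffic heuristics: many accounts funneled through a small set of proxy addresses, each issuing unusually high query volumes. The sketch below is an illustrative toy, not Anthropic's actual detection system; the log format and thresholds are assumptions made up for the example.

```python
from collections import Counter

def flag_suspicious(events, per_account_limit=1000, accounts_per_ip=50):
    """Flag (a) accounts with abnormally high query volume and
    (b) proxy addresses shared by an implausible number of accounts.
    Event format and thresholds are hypothetical."""
    # Count queries per account.
    by_account = Counter(e["account"] for e in events)
    # Group distinct accounts behind each source address.
    by_ip = {}
    for e in events:
        by_ip.setdefault(e["ip"], set()).add(e["account"])
    heavy = {a for a, n in by_account.items() if n > per_account_limit}
    shared = {ip for ip, accts in by_ip.items() if len(accts) > accounts_per_ip}
    return heavy, shared

# Synthetic traffic: 60 accounts behind one proxy, plus one bulk account.
events = [{"account": f"acct{i}", "ip": "proxy-1"} for i in range(60)]
events += [{"account": "bulk", "ip": "host-9"}] * 1001
heavy, shared = flag_suspicious(events)
print(heavy, shared)  # flags "bulk" and "proxy-1"
```

Simple volume thresholds like these are easy to evade by spreading load thinly, which is exactly what a distributed "hydra network" of fake accounts is designed to do, and why the article notes that no single company's detection can contain the problem alone.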

Written by Thorben Thiede

© The Alpine Weekly Newspaper Limited 2026