US artificial intelligence company Anthropic has accused three Chinese AI firms of orchestrating large-scale campaigns to extract capabilities from its Claude model. In a blog post published Monday, the company said DeepSeek, Moonshot AI and MiniMax generated more than 16 million exchanges with Claude through approximately 24,000 fraudulent accounts.
Anthropic described the activity as “illicit distillation.” Distillation refers to training a smaller or less capable model on the outputs of a stronger system. The technique is common within the industry when companies create lighter versions of their own models. Anthropic stated that competitors used it to acquire capabilities from Claude in violation of its terms of service and regional access restrictions.
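The core idea of distillation can be shown in miniature. The sketch below is purely illustrative, not any lab's actual pipeline: a "student" model is fitted only to the outputs of a stronger "teacher," without ever seeing the teacher's internals or original training data. The teacher here is a stand-in function, and the linear student is an assumption made for brevity.

```python
# Toy illustration of distillation: the student learns only from
# (query, teacher_output) pairs. The teacher function and linear
# student are illustrative assumptions, not a real model.

def teacher(x):
    # Stand-in for a strong model: a fixed function y = 2x + 1.
    return 2.0 * x + 1.0

# Step 1: query the teacher at scale, recording input/output pairs.
queries = [float(i) for i in range(-5, 6)]
dataset = [(x, teacher(x)) for x in queries]

# Step 2: fit a smaller student (y = w*x + b) to the teacher's
# outputs using plain gradient descent.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in dataset:
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(dataset)
        grad_b += 2 * err / len(dataset)
    w -= lr * grad_w
    b -= lr * grad_b

# The student now mimics the teacher's behavior.
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The same mechanic scales up: replace the toy function with a frontier model's API and the linear student with a neural network, and the millions of recorded exchanges become training data.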
“These campaigns are growing in intensity and sophistication,” the company wrote. “The window to act is narrow.”
CNN reported that Anthropic framed the issue as a national security concern. The company warned that models built through illicit distillation may not retain safety guardrails designed to prevent misuse, including restrictions related to bioweapons development or malicious cyber activity.
How the campaigns operated
Anthropic said the three campaigns followed a similar structure. Each relied on fraudulent accounts and proxy services to access Claude at scale while attempting to evade detection. The company attributed the activity through IP address correlation, request metadata, infrastructure indicators, and corroboration from industry partners.
DeepSeek generated more than 150,000 exchanges, according to Anthropic. The prompts targeted reasoning tasks, rubric-based grading and the generation of censorship-safe alternatives to politically sensitive queries. Anthropic stated that DeepSeek synchronized traffic across accounts and used shared payment methods and coordinated timing. The company said some prompts asked Claude to articulate internal reasoning step by step, which could produce chain-of-thought data suitable for reinforcement learning.
Moonshot AI’s campaign accounted for more than 3.4 million exchanges. Anthropic said the operation targeted agentic reasoning, tool use, coding, data analysis and computer vision. The company attributed the campaign through request metadata that matched public profiles of senior Moonshot staff.
MiniMax ran the largest campaign, with more than 13 million exchanges, according to Anthropic. The company stated that MiniMax pivoted within 24 hours after Anthropic released a new model, redirecting nearly half of its traffic to capture capabilities from the updated system. Anthropic said it detected this campaign before MiniMax launched the model it was training, which provided visibility into the cycle from data extraction to release.
To bypass regional restrictions, Anthropic said the labs routed traffic through commercial proxy services that managed extensive networks of accounts. In one instance, a single proxy network managed more than 20,000 fraudulent accounts simultaneously.
Export controls and competitive tensions
Anthropic has publicly supported US export controls on advanced AI chips. The company argued that distillation undermines those controls by allowing foreign labs to narrow capability gaps without independent development.
“In reality, these advancements depend in significant part on capabilities extracted from American models, and executing this extraction at scale requires access to advanced chips,” the company wrote.
The allegations arrive after OpenAI made similar claims earlier this month. In a memo to the US House Select Committee on China, OpenAI said Chinese companies engaged in “ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs.” OpenAI stated it was reviewing evidence that DeepSeek “may have improperly distilled” its models.
DeepSeek has not publicly responded to OpenAI’s claims. DeepSeek drew attention last year after releasing a model that approached the performance of leading US systems while requiring fewer computing resources. That release prompted debate about the effectiveness of US export controls and the cost structure of frontier AI development.
Market reaction and broader debate
The allegations surfaced during heightened volatility in AI-linked equities. The Kobeissi Letter wrote on X that AI-linked stocks erased $800 billion in market capitalization amid concerns that AI could destabilize employment and margins. The account argued that the “Anthropic takedowns are real” but questioned whether fears of systemic collapse may prove overstated.
Recent product announcements tied to Claude have coincided with sharp moves in individual stocks. According to The Kobeissi Letter, IBM posted its worst day since October 2000 after Anthropic said Claude could streamline COBOL code, and CrowdStrike lost $20 billion in market capitalization within two trading days of a Claude Code Security announcement.
Online commentary has also reflected polarization around the issue. An account named X Freeze criticized Anthropic’s stance and described its model as overly constrained.
> be me
> name company "Anthropic" (literally Greek for "human-centered")
> hire a bunch of doomers who secretly think humanity is the disease
> raise billions from Big Tech to build the world's most anxious, heavily-censored chatbot
> write a 50-page "Constitutional AI"… pic.twitter.com/PRAPgZcQsa

— X Freeze (@XFreeze) February 24, 2026
Another user, tetsuo, wrote sarcastically about intellectual property concerns, highlighting the broader debate over training data and model outputs.
I can't believe someone would just steal from Anthropic like this. The millions of man-hours Anthropic spent hand-writing code, text, art, books, etc. to generate enough data for training must be taken into consideration here. Where is the respect for IP? https://t.co/ICejPo8gpG
— tetsuo (@tetsuoai) February 23, 2026
In the replies, Elon Musk added a sarcastic remark of his own:
"Banger, how dare they steal the stuff Anthropic stole from human coders?"
Response and next steps
Anthropic said it has invested in detection systems, behavioral fingerprinting and coordinated activity analysis to identify distillation patterns. The company stated it has strengthened verification for educational and startup accounts and is sharing technical indicators with other AI labs and authorities.
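One simple ingredient of coordinated-activity analysis can be sketched as follows. This is an illustrative example only, with made-up account data, and it assumes (as the article describes) that fraudulent accounts tend to share infrastructure indicators such as IP addresses or payment methods; it is not a description of Anthropic's actual detection system.

```python
# Illustrative sketch of coordinated-activity detection: flag groups
# of accounts that share an infrastructure indicator. All account
# records below are fabricated for the example.
from collections import defaultdict

accounts = [
    {"id": "a1", "ip": "10.0.0.1", "card": "c-111"},
    {"id": "a2", "ip": "10.0.0.1", "card": "c-111"},
    {"id": "a3", "ip": "10.0.0.1", "card": "c-222"},
    {"id": "a4", "ip": "10.0.0.9", "card": "c-333"},
]

def suspicious_clusters(accounts, key, min_size=2):
    """Group account ids by a shared indicator; keep clusters of
    min_size or more, which merit closer review."""
    groups = defaultdict(list)
    for acct in accounts:
        groups[acct[key]].append(acct["id"])
    return {k: v for k, v in groups.items() if len(v) >= min_size}

print(suspicious_clusters(accounts, "ip"))    # → {'10.0.0.1': ['a1', 'a2', 'a3']}
print(suspicious_clusters(accounts, "card"))  # → {'c-111': ['a1', 'a2']}
```

Real systems combine many such signals — timing correlations, behavioral fingerprints, request metadata — rather than any single indicator, but the principle of clustering accounts by shared traits is the same.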
“But no company can solve this alone,” Anthropic wrote, calling for coordinated action among industry participants, cloud providers and policymakers.
The dispute highlights a deeper tension in artificial intelligence development: the boundary between legitimate model optimization and prohibited capability extraction. Court filings have not yet surfaced in relation to Anthropic’s claims, and no enforcement action has been announced. For now, the matter remains in the realm of corporate disclosures and public statements, with significant geopolitical and commercial implications.

Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that, despite the nature of much of the material created and hosted on this website, HODL FM operates as a media and informational platform, not a provider of financial advisory services. The opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice, HODL FM strongly recommends contacting a qualified industry professional.
