Tencent's Hy3: A New Era in AI Development

Tencent employees used Anthropic's Claude chatbot to help fine-tune the company's latest Hy3 artificial intelligence model, according to a report published Tuesday by The Information. The revelation comes just days after Tencent launched and open-sourced the Hy3 preview model on April 23, and it reignites questions about the use of rival AI systems in training competing products.

The report noted that Hy3 has received positive reviews from developers, with Claude's involvement cited as a factor in the model's improved performance.

A Fast-Moving Model Launch

Tencent released Hy3 preview last week as a 295-billion-parameter Mixture-of-Experts model that activates only 21 billion parameters per token, supporting a 256K context window. The company described it as its most intelligent model to date, built on a pre-training and reinforcement learning infrastructure that was overhauled starting in February 2026. Led by Yao Shunyu, a former OpenAI research scientist recruited by Tencent, the model went from cold start to open-source release in under three months.
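The sparse-activation figure reflects how Mixture-of-Experts models work: a router selects a small subset of expert networks for each token, so only a fraction of the total parameters run per forward pass. The toy sketch below illustrates top-k routing with hypothetical sizes (8 experts, 2 active); it is not Hy3's actual configuration or code, only a minimal illustration of the general technique.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts = 8   # total experts (hypothetical toy value)
top_k = 2       # experts activated per token (hypothetical)
d_model = 16    # hidden size (toy)

# Each "expert" is a small feed-forward weight matrix; a router
# scores all experts and only the top-k run for a given token.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route one token vector through its top-k experts."""
    scores = x @ router                   # router logits, shape (n_experts,)
    top = np.argsort(scores)[-top_k:]     # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(d_model))

# Only top_k of n_experts run per token, so the active parameter
# count is a fixed fraction of the total.
total_params = n_experts * d_model * d_model
active_params = top_k * d_model * d_model
print(f"active fraction: {active_params / total_params:.2%}")  # 2 of 8 experts -> 25.00%
```

The same arithmetic is what makes Hy3's reported numbers consistent: activating roughly 21 billion of 295 billion parameters per token means each token pays the compute cost of a much smaller dense model.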

On SWE-bench Verified, a software engineering benchmark, Hy3 scored 74.4%, a sharp jump from its predecessor Hy2's 53%, though it still trails leading closed models. Tencent made the model available through its cloud platform and on Hugging Face.

A Familiar Tension

The disclosure that Claude played a role in Hy3's development lands in a fraught environment. In February, Anthropic publicly accused three Chinese AI firms — DeepSeek, Moonshot AI, and MiniMax — of running "industrial-scale" distillation campaigns against Claude, generating over 16 million interactions through roughly 24,000 fraudulent accounts. Anthropic called the technique a legitimate training method when used properly but warned that illicit distillation strips away safety guardrails and poses national security risks.

It remains unclear whether Tencent's use of Claude constituted a similar terms-of-service violation or fell within permissible usage. Anthropic's policies prohibit competitors from using Claude outputs to train rival models, and the company's services are restricted in China. The cross-border dimension — a Chinese tech giant drawing on a U.S. AI safety company's technology — adds a geopolitical layer to what is already an unsettled area of industry practice.

What Comes Next

The episode underscores how intertwined the global AI ecosystem has become, even as governments in Washington and Beijing move to draw sharper lines around technology transfer. For Tencent, the immediate question is whether the use of Claude will invite scrutiny from Anthropic or U.S. regulators. For Anthropic, it is another test of whether its terms of service can effectively govern how its models are used — and by whom.