Nvidia CEO Jensen Huang warned that China already has the chips, datacenter capacity and talent to train an AI model comparable to Anthropic’s Claude Mythos, and that such capability could pose a serious cybersecurity risk.
Speaking on the Dwarkesh Patel podcast, Huang said the compute used to train Mythos was not exotic and that the necessary processors and infrastructure are widely available in China. He noted that the country manufactures roughly 60 percent of mainstream chips, employs top computer scientists, is home to about half of the world's AI researchers, and has abundant energy and datacenter capacity that can be scaled up quickly.
Anthropic limited access to Mythos in April after the model identified thousands of software vulnerabilities across major operating systems and browsers, prompting concerns about misuse in cyberattacks. Anthropic reported that about 99 percent of the vulnerabilities Mythos flagged remain unpatched. Independent assessments, including one from the AI Security Institute, found the model could autonomously discover and exploit vulnerabilities and carry out multi-stage attacks that would otherwise take human experts days.
Huang cautioned that if a model with similar capabilities were produced elsewhere, including in China, it could be dangerous if misused. He pointed to spare capacity in some Chinese datacenters and broader manufacturing scale as factors that could enable rapid development of powerful models.
Despite framing China as a potential competitor, Huang urged engagement and research dialogue rather than outright confrontation or vilification, arguing that dialogue is a safer path while the US still seeks to maintain its leadership in AI.
US officials have also flagged Mythos as consequential. Treasury Secretary Scott Bessent described the model as a step change in learning and capabilities that affects the competitive landscape.
Security analysts worry that AI-augmented attacks could threaten institutions that run legacy software, such as banks and critical infrastructure. Anthropic said last year that a state-sponsored group from China attempted to misuse its Claude Code tool against roughly 30 global organizations, succeeding in only a small number of cases.
Huang’s comments highlight a broader tension: advanced defensive and offensive AI capabilities are increasingly possible wherever sufficient compute, data and expertise exist. That decentralization raises policy and cybersecurity challenges, underscoring the need for cross-border research cooperation, stronger defensive measures, and careful consideration of how to manage risks from powerful models developed outside the companies that first built them.