Michael Selig, chair of the U.S. Commodity Futures Trading Commission, said blockchain could be important for verifying AI-generated content, arguing the technology can help separate authentic media from synthetic outputs amid rising misinformation concerns.
Speaking on The Pomp Podcast, Selig was asked by host Anthony Pompliano whether intent should matter for AI-generated memes and images in markets, or whether such content should be limited. Selig replied: “The private markets have solutions — blockchain technology is a great one. If you can timestamp things and make sure there’s an identifier for each meme or AI generated posts, you can verify if it’s real or generated by AI… Having these technologies here in the US is critical.” He also said regulators are focused on preserving U.S. leadership in crypto and added, “you can’t have AI without blockchain.”
Asked about regulating AI agents as autonomous trading grows, and about whether automated tools should be distinguished from fully autonomous agents that might require different oversight, Selig warned against overregulation: “I’m concerned that we over-regulate and strangle some of the technology here in the US… I’m taking a very much minimum effective dose of regulation approach, where we’re… making sure that we’re regulating the actors… and not the software developers. The software developers are the ones building the tools, but they’re not actually engaging in the financial transactions.” He said the CFTC is evaluating how AI models are used in markets and believes enforcement should target the participants conducting financial activity.
A central challenge as AI use expands is telling real content from synthetic media. Selig’s remarks echo broader interest among policymakers and developers in applying blockchain for content verification and provenance.
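The timestamp-and-identifier idea Selig describes can be sketched in a few lines: fingerprint each piece of content with a cryptographic hash and record that hash with a timestamp in an append-only ledger, so anyone can later check whether a given file matches a registered original. The sketch below is purely illustrative, with an in-memory dictionary standing in for a blockchain ledger; the class and field names are assumptions, not any specific protocol’s API.

```python
import hashlib
import time


class ProvenanceRegistry:
    """Toy content-provenance ledger. A real system would write the
    hash and timestamp to a blockchain; a dict stands in for it here."""

    def __init__(self):
        self._ledger = {}  # SHA-256 hex digest -> registration record

    def register(self, content: bytes, creator: str) -> str:
        """Record a content fingerprint with a timestamp and creator ID,
        returning the digest that serves as the content's identifier."""
        digest = hashlib.sha256(content).hexdigest()
        if digest not in self._ledger:  # first registration wins
            self._ledger[digest] = {
                "creator": creator,
                "timestamp": time.time(),
            }
        return digest

    def verify(self, content: bytes):
        """Return the registration record if this exact content was
        registered, or None for unknown (possibly synthetic) content."""
        return self._ledger.get(hashlib.sha256(content).hexdigest())


registry = ProvenanceRegistry()
registry.register(b"original photo bytes", creator="newsroom-camera-01")

print(registry.verify(b"original photo bytes") is not None)    # True
print(registry.verify(b"ai-altered photo bytes") is not None)  # False
```

Note that any single-byte change to the content produces a completely different SHA-256 digest, which is why hashing (rather than storing the media itself) is enough to detect tampering, though it cannot by itself prove who originally created the content.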
One route is proof-of-personhood systems, which aim to confirm that accounts belong to unique humans rather than bots. A high-profile example is Sam Altman’s World and its World ID protocol, which lets users prove humanity without exposing personal data by using encrypted biometric iris scans stored on the user’s device. The system has faced criticism over privacy risks and possible coercion.
In March, World launched AgentKit, a toolkit enabling AI agents to prove they are linked to a verified human while interacting with online services. AgentKit pairs proof-of-personhood credentials with the x402 micropayments protocol developed by Coinbase and Cloudflare, allowing agents to pay for access and present cryptographic proof of human backing.
Ethereum co-founder Vitalik Buterin has proposed using cryptography and blockchain to make online systems more verifiable, suggesting tools like zero-knowledge proofs and on-chain timestamps to validate how content is created and distributed without revealing sensitive information.
These proposals arrive as U.S. policymakers consider broader AI regulation. On March 20, the Trump administration released a national framework advocating a unified federal approach and warning that a patchwork of state laws could impede innovation and competitiveness.
Cointelegraph is committed to independent, transparent journalism. This article follows Cointelegraph’s Editorial Policy and aims to provide accurate, timely information; readers are encouraged to verify details independently. Read the Editorial Policy at https://cointelegraph.com/editorial-policy