Blockchain Technology Could Authenticate AI Content, Says CFTC's Selig

According to the regulatory official, onchain identifiers and timestamps can distinguish authentic media from AI-created material; he also advocates minimal regulation of AI agents.

US Commodity Futures Trading Commission chair Michael Selig has suggested that blockchain technology could serve as a crucial mechanism for authenticating content generated by artificial intelligence, arguing the innovation offers solutions for separating genuine media from artificially created materials amid rising misinformation challenges.

Speaking with host Anthony Pompliano on a Thursday episode of The Pomp Podcast, Selig addressed questions about AI-created memes and imagery in financial markets, specifically whether intent should be considered or whether such materials require outright limitations. He told Pompliano:

The private markets have solutions — blockchain technology is a great one. If you can timestamp things and make sure there's an identifier for each meme or AI generated posts, you can verify if it's real or generated by AI… Having these technologies here in the US is critical.

The regulatory chief emphasized that authorities are committed to preserving American dominance in cryptocurrency innovation, further noting that "you can't have AI without blockchain."

Source: The Pomp Podcast

When questioned about regulatory strategies for AI agents, particularly as automated trading grows more common and regulators face pressure to distinguish simple automated tools from truly autonomous agents and to determine appropriate oversight for the latter, Selig explained:

I'm concerned that we over-regulate and strangle some of the technology here in the US… I'm taking a very much minimum effective dose of regulation approach, where we're… making sure that we're regulating the actors… and not the software developers. The software developers are the ones building the tools, but they're not actually engaging in the financial transactions.

According to Selig, the CFTC is currently evaluating artificial intelligence model deployment in marketplace environments, stressing that regulatory enforcement ought to target participants directly involved in financial activities.

Proof-of-personhood systems and blockchain emerge as AI verification solutions

A fundamental obstacle accompanying the artificial intelligence boom involves separating authentic content from synthetically produced media. Selig's statements appear to align with a wider movement among technology developers and government officials toward leveraging blockchain for establishing content authenticity and tracking provenance.
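The provenance approach Selig describes can be illustrated with a minimal sketch. The class and field names below are purely illustrative, and a plain in-memory dictionary stands in for an actual blockchain; the point is only the core idea of fingerprinting content with a cryptographic hash and recording when it was first seen:

```python
import hashlib
import time

# Minimal sketch of onchain-style content provenance, assuming a simple
# in-memory registry in place of a real blockchain. All names here are
# illustrative, not part of any actual protocol.

class ProvenanceRegistry:
    """Maps a content fingerprint to the time it was first registered."""

    def __init__(self):
        self._records = {}  # fingerprint -> (timestamp, metadata)

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # A cryptographic hash acts as the content's unique identifier:
        # any alteration to the bytes produces a different fingerprint.
        return hashlib.sha256(content).hexdigest()

    def register(self, content: bytes, metadata: dict) -> str:
        fp = self.fingerprint(content)
        if fp not in self._records:
            self._records[fp] = (time.time(), metadata)
        return fp

    def verify(self, content: bytes):
        # Returns the original timestamp and metadata if the content was
        # registered, or None if it is unknown (possibly altered or synthetic).
        return self._records.get(self.fingerprint(content))

registry = ProvenanceRegistry()
original = b"authentic photo bytes"
registry.register(original, {"creator": "newsroom", "tool": "camera"})

assert registry.verify(original) is not None        # known, timestamped
assert registry.verify(b"tampered bytes") is None   # no provenance record
```

A real deployment would anchor the fingerprints on a public chain so that the timestamps cannot be backdated, which is the property Selig's argument relies on.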

One methodology involves proof-of-personhood frameworks, which seek to establish that an account represents a genuine, distinct human individual rather than an automated bot. Sam Altman's World presents the most well-known implementation, offering its World ID protocol that enables users to demonstrate their humanity while protecting personal information. The framework relies on encrypted biometric iris scanning data maintained on individual user devices, although the approach has faced scrutiny regarding privacy vulnerabilities and potential exploitation concerns.

World introduced AgentKit in March, a development toolkit that enables AI agents to prove their connection to authenticated humans when interacting with digital platforms. The system combines proof-of-personhood credentials with the x402 micropayments protocol from Coinbase and Cloudflare, allowing agents to pay for service access while supplying cryptographic evidence of human authorization.
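The pattern described above can be sketched schematically. This is not the AgentKit or x402 API; every name below is hypothetical, and a shared-secret HMAC stands in for the public-key signatures or zero-knowledge proofs a real system would use. The idea is simply that an agent's paid request carries a verifiable link back to an authorizing human:

```python
import hashlib
import hmac

# Schematic sketch only: neither World's AgentKit nor the x402 protocol is
# shown here. A shared secret stands in for real public-key credentials.
HUMAN_SECRET = b"key held by the verified human's device"  # illustrative

def authorize_agent_request(agent_id: str, action: str) -> dict:
    """The human's device signs the agent's intended action."""
    message = f"{agent_id}:{action}".encode()
    proof = hmac.new(HUMAN_SECRET, message, hashlib.sha256).hexdigest()
    return {"agent_id": agent_id, "action": action, "human_proof": proof}

def service_accepts(request: dict, payment_cents: int) -> bool:
    """The service checks both the payment and the human-authorization proof."""
    message = f"{request['agent_id']}:{request['action']}".encode()
    expected = hmac.new(HUMAN_SECRET, message, hashlib.sha256).hexdigest()
    return payment_cents > 0 and hmac.compare_digest(expected, request["human_proof"])

req = authorize_agent_request("agent-42", "fetch-market-data")
assert service_accepts(req, payment_cents=1)       # paid and authorized
assert not service_accepts(req, payment_cents=0)   # no payment, rejected
```

In the real protocol the service would verify the credential without holding the human's secret, which is what makes the combination of micropayments and proof-of-personhood practical at scale.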

Vitalik Buterin, co-founder of Ethereum, has advocated for deploying cryptography and blockchain technologies to enhance verification capabilities across online platforms, including through zero-knowledge proof implementations and onchain timestamp mechanisms that could assist in validating content creation and distribution processes while protecting confidential information.

These proposals emerge as American lawmakers consider comprehensive artificial intelligence regulatory frameworks. The Trump administration published a national strategy on March 20, advocating for a consolidated federal methodology while cautioning that fragmented state-level legislation could undermine technological advancement and global competitiveness.