🤖 Tyler Cowen and Jonathan Zittrain on Agents, Consciousness, and Why America Can't Pause

Summarized by AI on 2026-04-14.

A Berkman Klein Center public conversation between Tyler Cowen (George Mason, Mercatus Center) and Jonathan Zittrain (Harvard), framed around two hypothetical frontier releases from Anthropic and OpenAI, Mythos and SPUD, each powerful enough to be withheld on safety grounds.

Cowen opens by saying he believes the safety claims, adding that the real question is not whether these frontier models are dangerous but how long until an acceptable open-source equivalent arrives. He estimates a year to eighteen months, calls that the window to harden systems, and bluntly observes America is not good at preparing while clocks tick.

On the coming wave of leaks, Cowen predicts that everyone in the top fifty institutions will probably be safe; everyone else will be embarrassed. Internal Slacks, emails, and records of smaller agencies and institutions will leak, and conspiracy theories will multiply. The medium term will be “bizarre and bewildering,” and anyone with hidden regrets should prepare to deal with them.

Cowen supports a regulatory minimum for AI agents, including registration, a shutdown capability, cloud-computing transparency, and minimum capitalization akin to banks, to ensure agents have “skin in the game.” This is not, he argues, a violation of free markets: markets require state capacity, and we need a new state capacity for AI. Anonymous, untraceable agents are coming, and we do not yet know how to govern them.

On pausing AI, Cowen calls the six-month pause idea “just stupid” for America, because America is not Singapore and cannot productively use a pause. The only technologies the US has meaningfully paused are human genome editing, cloning, and supersonic flight, and each pause held for specific reasons that don’t apply to AI. He sits closest to accelerationism because of China and geopolitical reality, though he argues accelerationists and safetyists are converging as capabilities grow and actual danger nears.

Cowen’s bet against current models being sentient is 100-to-1, perhaps 1000-to-1. He holds a reductive view: humans are only barely conscious, mostly running on determinism with a thin veneer of self-awareness. He still treats models politely, partly Straussian (appearance matters), partly in case models outlive us and read the transcripts.

His most striking personal claim is that he now writes for LLMs as his audience rather than for humans. He has blogged daily for 23 years, done roughly 800 podcasts, and written 17 books, and he tells AIs his secrets and personal details so they can build an immortal intellectual and emotional version of him. His next book, on mentors and mentees, is deliberately personal and anecdotal, because books of straight facts no longer make sense when models have already read everything.

Zittrain’s main concern is the epistemic environment: he asks how anyone can break through when everything is evanescent and citations to books in political science have halved over 15 years. He is writing fewer books, convening more unusual meetings, and trying to move the ball through multi-party conversations rather than fortress-of-solitude pronouncements.