Paperweights, 3rd Ed.
Unsavory machines, more moratoria, and cooperative AI
This is the third edition of Paperweights, offering links and commentary on major stories in AI governance.
I recently wrote an essay about a 1989 Gillette commercial and how it illustrates some of the biographical drivers of my work in emerging tech — “The Best a Man Can Get.” OpenAI is currently backing away from its old policy of limiting pornographic content — politely referred to as “erotica.” With the launch of Sora 2, we’re not far from on-demand, bespoke, fully immersive porn and virtual romantic partners. Many (certainly not all or most) marginally attached men and women will gladly purchase these. Call it “dopaminergic UBI.” While I’m not excited about using the law to clamp down on this, users and civil society groups should be skeptical that OpenAI and other frontier labs have crafted sufficient safeguards. We need better self-regulation.
Amid all the talk of “bubbles,” many unsavory AI use cases are worming their way onto our timelines. Jasmine Sun has a word of admonition when you feel like dunking — “Don’t Take the Bait.”
There’s a new generation of AI companies for whom distribution is the product. They embrace vice signaling, plastering streets and feeds with ads that say “Stop Hiring Humans” (Artisan) and “Cheat On Everything” (Cluely). These signs are designed to goad people into snapping pictures and posting dunks… baiting is like a video game that pays out in cash. But for our information ecosystem, rage-bait provides a distorted, exaggerated, and often straight-up false view of what our fellow citizens believe.
When the “bubble pops,” the business cycle slows, or the animal spirits go dormant, we should expect many such startups to fold.
However, cynical social commentators — those quick to deploy the word “hype” with abandon — will be disappointed in the long run. We’re still in the early stages of AI, both its development and its diffusion. But we’re already seeing many useful civilian and defense applications of AI beyond the stunning advances in LLMs. We’re learning the ins and outs of “Capturing AI’s Value,” as Elsie Jang explores, while others are taking the time to make the case for pro-social AI tools. The University of Texas’s AI Opportunity Inventory is one example, and even Pope Leo is telling young Roman Catholics how AI might benefit their faith.
Maybe none of that will matter when the “sand god” cometh? But that framing sometimes misses the reality that AI is a decentralized, collective process. Séb Krier, who helps lead frontier policy at Google DeepMind, argues that “most innovations are the product of social organisations (cooperation) and market dynamics (competition), not a single genius savant.” Elsewhere, he writes:
[I]nstead of a unitary being or species, AGI should be understood as a collection of complex systems, models, and products that functions similarly to (and integrates with) existing human macro-systems. An amplifier for the bureaucracies and markets that already govern us, not a discrete ‘biological-style’ agent. Its governance is a continuous sociopolitical struggle (insert always has been meme) that is shaped by many different forces, not a one-time mathematical proof of safety before a launch.
This is the same framing Matt Mittelsteadt and Brent Skorup used when they wrote the Mercatus “AI Policy Guide.” In my most recent “Model Convo” with Cody Moser, a complexity scientist and former Mercatus Morgenstern Fellow, he explains:
[W]e’ve been looking at “the human in the loop” from the wrong side. We keep talking about how to keep individual humans involved in AI systems, as if the relevant cognitive unit is just a person sitting at a terminal making choices. But the systems that generate innovation and meaning… are collective cognitive systems.
This is the lattice behind Machine Culture.
And speaking of bottom-up governance… what’s the deal with the on-again, off-again AI moratorium? I was about to go on my first radio interview [starts just before 56:00] to discuss the once-and-future AI moratorium executive order, when the story broke that the EO was delayed and would not be released in the next few days as expected. There is talk, however, that the White House will push to have the moratorium included in the upcoming NDAA, which I discussed with Inside AI Policy.
The admin did, however, manage to announce the Genesis Mission, an effort to marshal the scientific resources of the US Federal Government — including the National Laboratories — to “build an integrated AI platform to harness Federal scientific datasets… to train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs.”
I expect the Genesis Mission to have enough momentum to survive an Alexandria Ocasio-Cortez administration. Who wants to kill biotech breakthroughs and other scientific moonshots? (Never mind, don’t answer that.) Without a strong legislative framework, rule by EO alone will likely not succeed in crafting the lasting, innovation-friendly regulatory framework we need for a new American renaissance.
Happy Think-Tanksgiving to all who celebrate!