Paperweights, 6th Ed.
Moltbookian Norms, Coasean AI, and the Luttwakian Enigma
A small request: Machine Culture is nearing 200 subscribers!
If you get something out of this publication, please consider forwarding it to a friend or colleague who you think might also appreciate these dispatches on AI governance and emergent order.
And a special thanks to readers who have already done so. It’s a major boost, and I’m deeply grateful. 🦾🔥
I. AI, Transaction Costs, and a Quiet Shift Toward Self‑Employment
Labor economist (and fellow Mercatian) Liya Palagashvili writes about how AI's unfolding transaction-cost shock could change not just which jobs get done, but how the market organizes work across firms and independent workers.
Technological change doesn’t just substitute capital for labor—it also reshapes the boundary between firms and markets. And on that margin, AI may be exerting a transaction‑cost shock that quietly expands self‑employment and contract‑based work across a much wider set of occupations than we usually associate with the “gig economy.”
We’re still in the early days of what many expect will be a new managerial revolution. But the incentives created by technological change are not destined to decentralize economic activity, so Palagashvili is careful to note the countervailing factors at play.
Large firms may have advantages in building and deploying sophisticated AI systems. Firms with proprietary datasets or specialized AI might make employees more productive than those same workers operating independently. When workers invest in learning firm-specific AI tools, this creates the relationship-specific investments that favor [firm-based] employment…
The French political economist Bertrand de Jouvenel saw this push and pull, this expansion and contraction, as a core feature of modern political life — something I discuss in my essay on “Sporadic Growth.”
II. L.M. Sacasas and Adam Thierer on Writing Tech Criticism
Twelve years ago, tech critic and essayist L.M. Sacasas wrote “10 Points of Unsolicited Advice for Tech Writers.” This prompted a reply from former Mercatian tech policy analyst Adam Thierer, followed by an additional response from Sacasas. The exchange is a model of the good-faith dialogue that, in many ways, we have lost. An excerpt from Thierer:
[H]umans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them.
Their disagreement is humane — a time capsule buried before the 2016 election, back when we could not yet “feel the AGI.”
III. Moltbook and the Pluralist Turn in AGI Governance
In the wake of the recent Moltbook furor, Elsie Jang directs our attention to the deeper work being done in pluralistic, multi-agent approaches to AI safety and cooperative AI.
Much of the AI governance world still adheres to the singularity as its guiding paradigm — and there are good and bad arguments for that framework. My money is on the pluralist approach, however, both as an empirical matter and as a normative account.
IV. AI Norms — On the Farm, In the Wild
The pluralist turn in AI governance, as it turns out, demands the careful transfer of models from older domains. Here’s Séb Krier, of Google DeepMind.
[T]his will benefit from people with deep knowledge in all sorts of domains like economics, game theory, psychology, cybersecurity, mechanism design, and many more… It’s time to build!
In my piece, I suggest analysts start with Cristina Bicchieri’s contemporary research program on the emergence and measurement of social norms. Also, noting my bias, the Bloomington, Virginia, and Austrian schools are well-placed for the pluralist turn.
So I suppose we ought to get to it…
V. Ed Luttwak on Military Revolution
This three-hour behemoth of an interview is a wild ride through the Luttwakian countryside — an autobiographical tour covering nearly a century of grand strategy and Shakespearean defense consulting as lived by the “Machiavelli of Maryland.”
Here are a few excerpts. First, Luttwak’s thoughts on autonomous vehicles, in air and on land:
[Edward Luttwak:] Pilots add nothing to artificial intelligence, because remember, you’re starting from a hard spot, you land in a hard spot, and in between, it’s only air. Even if the air is occupied by birds or by other airplanes, you can definitely avoid them. If anybody’s talking about having cars running around San Francisco, it must mean that we don’t need pilots.
Jordan Schneider: You should try to ride in one of those the next time you go. It’s a cool experience. I felt way more in danger as soon as I got into a normal Uber right afterwards, because the Waymo is never going to break the law. Your normal driver will brake too hard or speed or run a red light.
Edward Luttwak: I always drive too fast, and I brake hard. I enjoy doing that. I hate smooth rides.
Also, Luttwak’s two recommendations for books on war.
Of all the books in this room — and this room is full of books — there are two books that are dominantly important for my understanding of war. One is the Iliad, and I have 10 different editions because people ask me to review them.
The other is the British official history of the strategic bombing offensive. The official book on the British strategic bombing offensive, by itself, is worth three-quarters of the books in this room put together.
Self-recommending.