Paperweights, 5th Ed.
Blizzards, zombies, and the Critique of Judgment
I. Why AI Research Needs the Philosophy of Science
Elsie Jang makes the case for “real-time history and philosophy of science, conducted by philosophers embedded at the frontier of [artificial intelligence] research.”
Rather than a clarion call for more deep thinking on the consequences of AI — is the market not saturated? — Elsie offers a practical proposal for bringing philosophical resources to bear on the subfield of mechanistic interpretability. She concludes:
Artificial intelligence is moving fast! The birds, it turns out, could use some ornithologists. Without philosophy, the field risks becoming an engineering discipline that changes the world without understanding what it is building.
Put another way, philosopher-builders need philosopher-historians.
II. Why We Should Be Talking about Zombie Reasoning
Writing in The Pursuit of Liberalism, Rebecca Lowe offers a word of philosophic caution to those who speak with loose abandon about the supposed agency and cognition of AI models.
Talking about the ‘actions’ of AI in loose ways comes with serious epistemic risk, therefore. Doing so will deaden our awareness to truths of the revolutionary moment in which we live. It will leave us open to manipulation by people with an interest in covering up the ways in which AI is developing.
I'm in broad agreement with Rebecca, and yet I too am guilty of this. Now… how to deal with the nomenclature of reasoning models and agentic commerce?
III. Brendan Foody on Teaching AI and the Future of Knowledge Work
And speaking of zombies, agents, and real-time history… this was a great interview! Foody is the youngest unicorn founder ever. He runs Mercor, a company that helps train frontier AI models by hiring domain experts to give, well, expert feedback.
Here’s one excerpt.
COWEN: To get all nerdy here, Immanuel Kant in his third critique, Critique of Judgment, said, in essence, taste is that which cannot be captured in a rubric. If the data you want is a rubric and taste is really important, maybe Kant was wrong, but how do I square that whole picture? Is it, by invoking taste, you’re being circular and wishing for a free lunch that comes from outside the model, in a sense?
FOODY: There are other kinds of data they could do if it can’t be captured in a rubric. Another kind is RLHF, where you could have the model generate two responses similar to what you might see in ChatGPT, and then have these people with a lot of taste choose which response they prefer, and do that many times until the model is able to understand their preferences. That could be one way of going about it as well.
(I’d stick with Kant on this one.)
IV. The Texas Grid Holds Up

The snowstorm that swept through the US this past week caused many to worry whether the Texas power grid would hold up. The soaring demand from new data centers and recent memories of the deadly 2021 Texas winter blackouts only heightened that anxiety.

But the grid held up, and it did so in part thanks to Texas's stunning efforts to scale its power grid — more than doubling capacity over a ten-year period — overwhelmingly through the use of renewable energy sources.
Congress, as well as other states, should take notice. Resilience and efficiency can work together, and markets help with both.
V. On Bullshit: Anniversary Edition
My reading of hard-copy books has been in the doldrums… It's too easy to be satiated by instrumental reading. Daniel Muñoz recently had an excellent piece on getting back into the rhythm, and I found it helpful. So, in an effort to reinvigorate, I'm starting with slimmer, more enjoyable works.
And at 96 pocket-sized pages, what could be slimmer or more enjoyable than Harry Frankfurt's On Bullshit, the 1986 essay published as a book in 2005? A new edition was just published for the book's 20th anniversary, so the timing is auspicious.
I was also delighted to see John Maier, a fellow Mercatian, pop up in Princeton’s blurbs for the book. You can find his full review in The Sunday Times. Here is an excerpt:
You don’t even have to be human to produce bullshit these days: the way ChatGPT mixes truth with confident assertions of error makes it a dead fit for Frankfurt-style bullshit. The surfeit of information produced by modern technology suggests that the present abounds with bullshit. Are we doomed to drown in it? Such dramatic forecasts should be made with caution, if only for the fear that they themselves might be an unverifiably speculative kind of bullshit.
I will be compiling future micro book reviews here as well as posting them in future editions of Paperweights.