Welcome back to Machine Culture’s “Model Convo” series — micro interviews with researchers in AI policy and related fields. If you know someone I should interview, send them my way.
Present: Rebecca Lowe is a Senior Research Fellow in Philosophy at the Mercatus Center, where she directs the new Emerging Scholars Program.
She writes a Substack called the ends don’t justify the means and just launched Working Definition, a philosophical podcast. She is writing two books — Freedom in Utopia and Speaking Freely. Rebecca is Consulting Space Philosopher at AstroAnalytica, a strategic space advisory firm. She has written on the age of AI as the age of philosophy and on ChatGPT as a philosophical interlocutor.
Past: She has worked as a researcher at various UK organizations, directed a think tank called FREER, and spent four years as Research Director at an investment company. She has also worked in freelance writing, consulting, and political journalism. Rebecca has a PhD from King’s College London, where she focused on moral property rights. She also has an MA in modern and contemporary literature from Birkbeck and studied music and aesthetics at Cambridge. She once ran for Parliament.
How did you get into AI?
Toward the end of last year, I became a heavy user of ChatGPT. Here’s a piece I wrote about how that changed my views on the philosophical value of AI. And here’s a recent follow-up piece in which I argue that the Age of AI is the Age of Philosophy.
What work of art has most shaped your views on emerging tech?
I’m an AI optimist, and a tech optimist more generally — insofar as ‘optimist’ is a useful term, as I’m using it here in a relative sense to track current discourse. But the artworks that have probably affected my views the most on these matters are dystopian: stuff like Brave New World, Slaughterhouse-Five, Gattaca, and Minority Report. As well as being aesthetically great, these artworks provide a useful constraint on my natural positivity while appealing to my instinctive hatred of any kind of overly top-down control or over-interference. I also enjoy films about space: the 2019 documentary Apollo 11 is my favorite.
What’s your most contrarian take on AI?
If you speak of AI consciousness, you’re making a category error.
What are you reading now?
Among other things, I’m currently reading A Heart So White by Javier Marías, which so far is hard to tie into AI! Also, Sense and Sensibilia by J.L. Austin, which is a philosophy book about appearance and reality — about what and how we perceive. This has relevance for thinking about AI, in that Austin forces us to stop and assess what we can take from how things seem to us. Relevant here is what it means to be the kind of thing that can do stuff like perceiving. I’m always surprised when people take AI behavior at face value!
Who should Machine Culture talk to next?
You should talk with my excellent friend Josiah Ober. He’s a brilliant classicist and political theorist — very philosophical, very smart, very fun. These days, he often writes and talks about AI, particularly in relation to Aristotelian moral theory. Here’s a great piece by him, on how to go about modeling human-AI relations in light of the moral truth that human slavery and indentured servitude are always wrong.
Let’s hear your go-to emerging tech policy track.
A friend recently introduced me to “Robocop” by Kanye West. I listen to it most days now.