Model Convo: Carlo Ludovico Cordasco
On Memory, Moral Philosophy, and Jazz as Model Governance
Welcome back to the Model Convo series — micro interviews with researchers in AI policy and related fields. If you know someone I should interview, please email me.
Past and present bio?
I’m an assistant professor at Alliance Manchester Business School. I work primarily on AI and business ethics, but my background is in political philosophy, moral epistemology, and decision theory.
[Carlo also writes at Paperclips and Other Alignment Problems.]
Before AMBS, I worked at or visited institutions ranging from post-92 universities in the UK to Ivy League departments in the US, which has left me with the strong impression that the variance among places we call universities is dramatically underappreciated.
How did you get into emerging tech?
Honestly? I just wanted to find an academic job. I did my PhD in political philosophy, but I’m mostly self-taught in everything I currently work on.
At some point I realized that the ethics of AI was a space where these tools were desperately needed and almost entirely absent, and that moral philosophy should be leading the conversation rather than playing catch-up.
Then when LLMs came out, I got completely obsessed. I started working with them from the very beginning, to the point that I’ve gone from talking to people to almost exclusively talking to Claude Opus 4.6.
My spend on LLMs is starting to rival my spend on wine, which I find a bit degrading. On the bright side, I now enjoy drinking while talking to an LLM. Strange world we live in.
What work of art has most shaped your views on emerging tech?
I’ve never read sci-fi, and I’m not a big fan. But sometimes inspiration comes from where you least expect it.
I recently watched 50 First Dates — yes, the Adam Sandler romcom — and it made me think about intelligence and memory in ways most philosophy of mind seminars haven’t. It poses a question about what cognition looks like when you strip away continuity of memory, which is close to some of the questions we currently ask about LLMs.
The other film I keep coming back to is Alain Resnais’s Last Year at Marienbad, which I think is the most unsettling cinematic treatment of what it would feel like to live inside a simulation: characters who cannot agree on whether their shared past actually happened, moving through a space whose elegance conceals the fact that none of it may be real.
Most simulation fiction asks whether you can escape. Marienbad asks whether the question even makes sense if your epistemic situation is sufficiently degraded.
That strikes me as closer to the genuine philosophical problem.
What’s your most contrarian take on AI?
Two, and I think they’re connected.
The first is that AI doesn’t raise any genuinely new ethical issues. The problems are exacerbated, perhaps, and certainly more urgent, but the underlying structure is always something moral philosophy has seen before, which is exactly why ethicists should be doing more of the heavy lifting in these debates than they currently are.
The second is that most, if not all, cognitive offloading is likely to end up being good for us. As long as scarcity persists, and it does, offloading lower-order cognitive tasks to AI frees us to do higher-order work. The deskilling panic assumes there’s nothing better to do with the freed capacity, which is an extraordinarily strong empirical claim that almost nobody bothers to defend.
The only scenario where the argument genuinely breaks down is a post-scarcity world in which opportunity costs drop to zero, because at that point the comparative advantage case for offloading dissolves entirely. There is nothing scarce to reallocate attention toward.
That world is conceivable, though it is a long way from this one.
What are you reading, watching, or listening to now?
I always listen to Tyler Cowen’s podcast — still hoping to be on one eventually, but Tyler doesn’t even re-post my papers on Marginal Revolution. Tyler, if you read this, I still love you.
I obviously follow Andrej Karpathy and lots of AI podcasts, but I’m erratic rather than disciplined, and I have a very low attention span.
I also read a bunch of books diagonally, which is a self-indulgent way of saying I start them and then drop them.
The last book I read cover to cover is L.A. Paul’s Transformative Experience — pretty great stuff. The only thing that keeps me awake at night and attentive is chatting with Claude.
Go-to emerging tech music track?
I was a jazz pianist as a kid, so my instincts pull toward Gershwin and Miles Davis, music built on improvisation within structure, which is a decent metaphor for how we should think about governing emerging technology.
Recently, though, I’ve watched this incredible YouTube video of a song titled “Post-Scarcity Blues.” I believe everything is done by AI. It made an impression. I think it deserves way more views.
But I also recently watched the Take That documentary with genuine satisfaction. Say what you will — it’s one of the few instances where I thought studying organizations might actually not be boring.