Model Convo: Kevin Frazier
Tech progress, human flourishing, and John Steinbeck
Welcome back to Machine Culture’s “Model Convo” series — micro interviews with researchers in AI policy and related fields. If you know someone I should interview, send them my way.
Present: Kevin Frazier is the AI Innovation and Law Fellow at the University of Texas School of Law. He also co-hosts the Scaling Laws podcast on AI policy and writes about the topic via his Substack, Appleseed AI. Kevin frequently hosts the Regulatory Transparency Project’s Fourth Branch Podcast, including a multi-episode series, “Law for Little Tech.” You can follow him on X, Instagram, Bluesky, and SSRN.
Past: Prior to moving to Austin, Kevin served as an assistant professor at St. Thomas University College of Law, where he taught administrative law, constitutional law, and civil procedure. He completed his undergrad in economics at the University of Oregon and worked for the Governor of Oregon before joining Google’s legal department. The gulf of expertise and knowledge between state government and big tech inspired him to become a “translator” between these domains, both of which are needed for a vibrant and flourishing society. He leaned into that mission and went on to earn his JD from the UC-Berkeley School of Law and an MPA from the Harvard Kennedy School, later clerking for Chief Justice Mike McGrath on the Montana Supreme Court.
How did you get into emerging tech policy?
It goes back to a childhood tragedy: a dear friend lost his life because no one nearby had access to, or familiarity with, a medical tool that could have saved him — a defibrillator. Today, those tools are ubiquitous. Policy changes mandate that public places keep them on hand.
Why didn’t that mandate occur sooner? I think there are at least three answers. First, policymakers tend to get distracted by issues that are headline news. In our current environment, lawmakers face an incentive to take action on the issue of the day, regardless of whether more pressing policy matters exist. Second, the status quo bias is real. We all get accustomed to the way things are and impose a high bar on any proposed change. This is a particular problem in the political domain, where special interests benefiting from the status quo will spend significant sums to keep it that way. Third, there’s a big knowledge gap between lawmakers and innovators. The former simply do not hear enough from the latter.
On the whole, these factors add up to a political culture in which techno-pessimism is a reliable electoral strategy — it’s easier to be against something than for something. Sadly, this is especially true for lawmakers who cannot guarantee that the churn produced by technological innovation will carry the same benefits for the entire community. In short, it’s hard to defend an uncertain future. The result of this pessimism is that proven solutions to enduring issues go unrealized. Through my winding professional journey, I’ve seen those barriers play out in real time. It’s incredibly frustrating. A freer future in which more people experience meaningful control over their lives is possible. That’s the future I want to bring about.


What work of art has most shaped your views on emerging tech?
This may sound like a bit of an odd answer, but East of Eden by John Steinbeck. It’s a story that subtly and beautifully details human agency and endurance. From a policy lens, the greatest shift in my views has been in my understanding of the proper role of the state in managing daily affairs and decisions. I used to generally back government intervention into various markets and actions in an attempt to prevent “bad” outcomes. While I think there’s still a role for the government to play in cultivating a market and ecosystem in which competition is fair and oriented toward innovation, I question whether it’s proper for the government to insert itself into what are often personal choices.
Perhaps the best example is AI in the classroom. Many public schools are cash-strapped and under-resourced. They lack the training resources and political will to meaningfully incorporate AI into their classrooms so that they can teach AI literacy and expertise at a proper age. Private schools, however, are running experiments with AI that carry tremendous promise and, of course, some downsides. Parents often do not have a meaningful choice between the two because the costs of the latter are artificially high. This is the result of a market that’s been heavily skewed by government conceptions of what’s proper and best. It’s a market in which we’ve diminished human agency. What’s fantastic about Steinbeck’s book is that he makes real the costs and benefits of free will and autonomy. Technological progress has the potential to accentuate those benefits, if we let it. I think that the vast majority of people want to do good, to live impactful lives, and to contribute to their communities. Throughout history, technological advances have enabled people to do just that.
What’s your most contrarian take on AI?
Privacy law is one of the biggest inhibitors to AI progress and human flourishing. We need more data sharing, not less. For AI tools to contribute to healthcare, mental health, education, and similarly sensitive areas, there must be vast troves of high-quality data for models to train on. Yet these are exactly the domains in which privacy laws enacted in the 20th century are inhibiting the collection and use of such data. Something has to give. All exchanges of data involve a trade-off between the benefits of that disclosure and the potential risks of its misuse. I’d argue that the laws on the books were developed when those risks were much greater relative to the benefits. The math has changed; our laws should too.
What are you reading, watching, or listening to now?
I’m always listening to Acquired and EconTalk and reading anything shared by the Abundance Institute or the Foundation for American Innovation. Of course, I also spend quite a lot of time digging through the Pessimist’s Archive for great case studies on losing bets cast against technology unleashing human flourishing.
Go-to emerging tech music track?
Not sure if I can tie this back to tech. I used to say Noah Kahan (was on that train way before it departed the station), but to avoid being too “basic,” I’ll point to Ian Munsick.
[Editor’s note: I chose my favorite Ian Munsick song, “White Buffalo” (2023), perhaps because trying to make good AI policy is like finding the aforementioned mythic megafauna — also because my lifelong friend and fellow Buckeye Aaron Rochotte did the drums, percussion, and engineering for the song.]