Questions about AI
What I'm thinking about in 2026 and beyond

I’m nine months into my work as an AI governance research fellow at the Mercatus Center at GMU, so it’s about time I publicly wrote down the big questions I’m thinking through.
Many (most?) of these queries will not be answered in one year. Some may never be answered satisfactorily. But writing down our doubts and uncertainties is critical if we want to make any scientific and philosophical headway. Richard Feynman observed:
[I]t is of paramount importance, in order to make progress, that we recognize this ignorance and this doubt. Because we have the doubt, we then propose looking in new directions for new ideas. The rate of the development of science is not the rate at which you make observations alone but, much more important, the rate at which you create new things to test.
This is especially true for squishy fields with fuzzy boundaries, where progress is often opaque and new ideas are not obviously better than old ones.
AI governance is one of these fields. It tends to attract entrepreneurial, interdisciplinary minds that feel — for good and ill — unconstrained by institutional or academic norms. Peer review be damned (were it not already).
With the specter of AGI or ASI looming, researchers can only deploy very rough heuristics, models, and theories to make sense of things. And while these tools are useful, they don’t necessarily add up to a body of literature or a scientific tradition — the kind we see in mathematics, biology, economics, and physics.
These dynamics often make the conversation feel more vibrant, more urgent. But they also mean many assumptions get swept under the rug or ignored. Worse still, arguments and claims by individual researchers can mushroom into quick-takes that are then amplified as “scientific consensus.” This happens even when there is no unified science of AI governance — not that there ever could be — and no mechanism or norms for generating consensus.
Some epistemic hygiene is in order, and I hope bringing these questions to the surface will at least nudge me in that direction.
So, here are my queries, beginning with the foundational and moving into the more applied.
Is hylomorphism correct? Does the emergence of AI support or undercut arguments for hylomorphism?
How do varieties of cognition embed in animal, human, and machine systems?
What are the computational and physical limits of intelligence?
Is superintelligence a coherent concept?
Can intelligence be coherently described as one kind of thing across biological and machine systems?
How will AI agents create new markets and social cognitive systems?
What are the limits, if any, to AI persuasion of humans, and how can we forecast that or measure it experimentally?
Does the use of experimental methods raise the social status of the behavioral and decision sciences, especially economics, psychology, and cognitive science?
Are LLMs “bullshitters” in the Frankfurtian sense?
To what extent are attempts to rein in LLM hallucinations constrained by parallel efforts to maintain AI alignment?
Do LLMs evolve their own social norms? In what ways can we meaningfully talk about machines responding to or partaking in human social norms?
Is tacit knowledge a fundamental obstacle to AI development or is it a navigable bottleneck?
Does the translation of tacit knowledge into legible code assume physicalism? Does the stickiness of tacit knowledge undermine physicalism?
How will we use AI systems to manage, understand, or prolong our grief in the face of loss?
Can we meaningfully talk about virtue or agency in machine systems?
What is the nature of bubbles in long- and short-term investment in science and technology?
Is “bubble” a meaningful or useful term if one assumes the subjective theory of value?
On the whole, do AI systems enhance or diminish political centralization? How might this change given the relative balance of edge versus cloud-based computing?
To what extent can or should we use AI to automate the process of scientific discovery? What is lost and what is gained by this?
Must discovery always be serendipitous? Is serendipity mostly or always random?
To what extent are core social problems in AI diffusion, growth, and benefits sharing subject to the standard models and theories of neoclassical economics? Is “this time is different” really true?
To what extent does mimesis drive efforts to replicate industrial policy programs in the US, based on case studies from Europe and Asia?
How will AI and AGI reshape HUMINT collection and how should we adapt offensive and defensive cyber capabilities accordingly?
How can decentralized, autonomous systems revitalize American military strategy? Consider the work of John Boyd.
Will AGI accomplish the reshoring of old and new manufacturing capabilities, given lower labor costs? What policies need to be removed to allow this to happen? Will this harm lower-income countries?
How can we equip and promote voluntary groups like the Institute of Electrical and Electronics Engineers or the Frontier Model Forum in their efforts to advance private AI governance and security?
How could we build a better DOGE? [Or just a non-insane one…]
Outside of the administrative state, how can AI systems help empower underfunded, understaffed offices across all branches and levels of government? What does this mean for federalism?
