Everything Is Computer
Cybersecurity, NIST, and Federal AI Legislation

We are so back! Well, the US Federal Government is anyway. So, it’s time again to consider federal AI legislation — or the lack thereof — and what that means for emerging tech governance. In the immediate wake of the AI moratorium legislative failure this summer, I spoke with Lauren Wagner and Matt Mittelsteadt. This is the second part of that conversation, which took place before the release of the White House’s AI Action Plan.
We discuss…
How a patchwork of state AI laws makes the US more vulnerable.
Why disharmonized regulations generate heavy compliance costs.
Empowering NIST for federal and even global cyber standards.
Federal procurement as harmonization mechanism.
Benchmarks and authentication for AI agents.
Cybersecurity Dilemmas
Ryan Hauser: How bad is the moratorium failure for American innovation?
Matt Mittelsteadt: It’s hard to say. No matter what our approach is, disharmonized regulation is going to be a huge problem moving forward if we have 50 different sets of laws. For consumers, first of all, any transparency legislation or any reporting is going to be potentially contradictory, varied, and hard to follow.
There are certainly safety concerns for consumers with such variation, but also for companies if we don’t get everyone on the same page. They’re going to be spending unnecessary amounts of resources trying to comply with these things. Those costs are real, and you can already see them in other areas where a regulatory patchwork of disharmonized rules exists.
Cybersecurity is a great example. You see some real problems in cybersecurity right now. By one federal estimate, there are currently 22 federal cybersecurity reporting laws on top of 45 separate state cybersecurity reporting laws. Both figures may have changed since I read the report back in 2022. The result of such a mess has unfortunately been high security costs. If you look at various reports from industry, basically everybody's saying as much as 50 percent of their time is spent on compliance. It’s spent on going down checklists, it’s spent on appeasing regulators, rather than doing the actual engineering that is required to secure these systems. This is engineering that, in many cases, can actually dramatically improve the security of these things. But we just can’t do it because we don't have the time.
That’s the type of cost that we could expect to see. If people are forced to go down checklists from 50 different laws, trying to appease everybody, then they’re not going to be investing in trust and safety. I don’t think anybody’s going to benefit from that. Of course, they’re also not going to be pushing the innovation needle forward to compete with the increasingly competitive Chinese AI sector, which could even overtake us this year.
Lauren Wagner: Yeah, just to underscore a few of those points, I think that with the laws being proposed, a lot of these mechanisms can ultimately be incredibly distracting to a company. It’s not just the topic they’re covering. It’s how a law actually gets operationalized, what is actually included, and who can do what.
Think of things like transparency regulations and whistleblower protections. I am nominally in favor of all those things. The devil is in the details of what companies are being asked to do. I’ve been on the receiving end of this. I’ve built products that need to comply with the California Consumer Privacy Act. What ends up happening is you’re just siphoning out data and adding different protections for specific users. But at the end of the day, do I feel like those users are really more protected than users in New York or South Carolina or anywhere else in the country? No, it was just another compliance cost that we needed to bear at a big company like Google or Meta.
Then you pull those people out, they go to startups, and they know how to routinize some of this as they’re building out products. But all of that is simple compared to what we’re looking at in terms of future AI regulation. Even things that seem as simplistic as voluntary reporting and transparency requirements, where there isn’t pre-deployment testing or a kill switch or something like that, can get incredibly complicated.
I’ve been working with these multi-stakeholder coalitions to identify the right standards. How do we align incentives to actually ensure that they will be adhered to? Some of that can come from deployers, the companies adopting third-party AI systems. They’re going to be looking for specific things when they’re evaluating AI tools, and that could become a standard. It’s thankless work. You’re having to bring together all of these different types of entities with competing interests, and it’s really hard to build that alignment given the technical specifications and detail you have to get into.
The question is, do we think that state governments are capable of doing that? I actually founded an organization with RAND and the Tony Blair Institute — the AI-Enabled Policymaking Project (AIPP). We’re trying to bring AI into end-to-end policymaking. My understanding is that states are quite resource-constrained. So “yes” to everything Matt said. It does worry me moving forward, especially when the ultimate goal is making safe AI systems.
I've seen a lot of technical unlocks from startups and from different research teams, and the real work is integrating those and pulling them through into technically driven safety solutions. That’s not a transparency requirement, so you end up being distracted by those requirements.
Matt Mittelsteadt: I'm also nominally in support of certain things like transparency rules. But I think the functionality of those rules depends on how we’re going to measure them. Using the transparency example, how do we measure that? What benchmark should we be shooting for? What should be reported? How is that reported? How is that reporting defined? And are any of those reports duplicating or stepping on the toes of some sort of preexisting regulation? Unfortunately, if you look at the state bills, oftentimes that is indeed the case.
These things are not necessarily being conceived as one piece in a holistic, systemic regulatory environment. They’re conceived in a vacuum, which can be very dangerous. Just to throw an example out there, take SB 53 out of California. One of its provisions might be a good provision if it were conceived in a vacuum, but because it sits inside a larger system, it may be harmful. The bill currently [as of early July 2025] requires reports on any theft of model weights. If somebody's AI system or algorithm were stolen, they would have to report it to the government.
The challenge with that type of thing is that California already has various reporting laws on these types of things, even sectorally. There are also those 22 federal laws on top of that. And so if you start adding new reporting schemes, what you don't necessarily get is transparency. You just get one more report that can create a swirl of confusion about what’s actually happening on the ground, who's getting which reports, and whether those reports line up. Counterintuitively, that can actually decrease transparency, or at least increase confusion. So we really need to be thinking about these things holistically, thinking about how these regulations play together, to get this right.
Ryan Hauser: So, all this might have the unintended consequence of distracting people and making it more difficult to actually have concrete, engineered products that can ensure safety?
Matt Mittelsteadt: Yeah, one of the challenges with something like cybersecurity is that it's inherently interstate. If you look at companies like Google, Meta, or any company really that's providing digital services, their services are first of all deployed across state lines and countless data centers that are all talking to each other. They're interfacing with all sorts of unrelated third-party services that may be in any number of states or even in other countries. And so in that type of situation, cybersecurity basically cannot be governed at the state level.
If you have certain laws that limit the ability to deploy certain cybersecurity services, then, in a field where security demands that everyone get on the same page, you can create a security hole. If an attacker is able to get into a system in New Hampshire and take down a system in California, they’re going to do that. So this is an area in which we do need consistency in order to get the safety that I think most people would agree we need. If we don't do that, I really am worried for the future, especially because AI is going to be huge for cybersecurity. We're already seeing it used in places like vulnerability discovery. In the last few months, studies have shown AI can autonomously discover attack points in software and identify them for developers so that they can actually get these things fixed ahead of time. That could be truly huge. But that type of benefit can only happen if we have consistent regulations, and so we do need to get that done for things like cybersecurity and other types of technologies that are inherently interstate.
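To make that vulnerability-discovery point a bit more concrete, here is a minimal sketch of what an AI-assisted triage loop might look like. Everything in it is an illustrative assumption rather than a description of any specific tool or study: ask_model is a stub for whatever model API a team actually uses, and the prompt and file filtering are placeholders.

```python
# Minimal sketch of an AI-assisted vulnerability triage loop (illustrative only).
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Stub: replace with a call to whatever model provider's API you actually use."""
    return ""

def scan_repo(repo_root: str, extensions=(".py", ".c", ".js")) -> list[dict]:
    """Ask a model to flag suspected weaknesses in each source file for human review."""
    findings = []
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix not in extensions:
            continue
        prompt = (
            "Review the following code for likely security vulnerabilities "
            "(injection, auth bypass, unsafe deserialization). "
            "List each suspected issue and the line it appears on.\n\n"
            + path.read_text(errors="ignore")
        )
        report = ask_model(prompt)
        if report.strip():
            # The model only flags candidates; developers decide what is real and fix it.
            findings.append({"file": str(path), "report": report})
    return findings
```

The point of the sketch is the division of labor Matt describes: the model surfaces candidate attack points ahead of time, and engineers spend their hours fixing them rather than filling out reports.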
Empowering NIST
Ryan Hauser: How does the National Institute of Standards and Technology fit into AI governance? Should it take a more active regulatory role? Is that the best thing the admin can do?
Lauren Wagner: NIST should be leading the conversation on a lot of this, and not NIST in isolation, but NIST activating the types of coalitions and entities that I've worked with over the past two years. So how do you bring the right voices into the room with NIST, which is nominally nonpartisan, science-focused, and standards-focused?
How do you foreground all of that in the conversation, bring people together, and align incentives so we can actually get workable standards and frameworks out there? There is a bunch of low-hanging fruit for NIST. One, to go back to a topic I brought up before, is procurement. If you have all federal agencies thinking about how they're bringing third-party AI systems into their work, NIST ideally should be involved in developing the framework for how they evaluate those systems. We had the AI Risk Management Framework, which was released during the Biden administration, and I think that was a good start. It provided directional guidance for how I worked in the private sector and for how industry should be thinking about evaluating AI systems.
But how do we build on that? Another area where NIST could really play a role is benchmarking. How are we benchmarking these systems? I work with the ARC Prize Foundation, which maintains an AGI benchmark, trying to understand where we are in terms of intelligence and superintelligence, and at a high level, what AI is capable of and what it's not capable of.
NIST should be able to help drive our shared reality around what AI is and what it isn't. That is key and foundational to making good decisions about AI, whether it's what you're building in a lab or the types of policies that are being developed. So I think building that evidence base and that foundation of knowledge is where I would like to see NIST focus.
But that really does require an investment in resources and people and making sure that the science is rigorous and not co-opted by specific ideological groups and things like that.
Matt Mittelsteadt: Yeah. To add on to that, there are a couple of huge benefits I think we could have if we properly funded NIST and continued to build trust in it. First of all, I think NIST is one of our best harmonization tools. If you look at other technical fields like cybersecurity, to go back to that example, many states today have passed laws that simply defer to what NIST says on cybersecurity. And beyond the states, other countries have adopted NIST standards as their go-to standards for things like cybersecurity, letting them guide and shape their overall approach.
In terms of the benefits of NIST, it helps not only to get states on the same page, but to actually extend the influence of our policy to other countries, which are much harder to corral. If you're thinking about regulatory harmonization, we have no formal power there. All we can do is project soft power and scientific influence through bodies like NIST that can help lead the globe to a final resting place in terms of good policy. Now, a second thing that I think we really need to get right in terms of investing in NIST is developing the standards we need.
Lauren mentioned benchmarking, and I think there's huge value in that. I’m also thinking about emerging fields like agentic technologies. One of the big areas where NIST has provided huge, clear value to the economy in the past is authentication standards: identifying who's who online and providing tools like cryptography. It goes without question that if agents are going to do things like book your flights, log into your bank account and manage your finances, and do all sorts of complicated things, they're going to need to handle authentication cleanly. How does an agent store a password? That's an unresolved question, as far as I can tell, and it’s something an organization like NIST could research and develop standards for. If they do that, it could help get everybody on the same page, improve safety, and really catalyze this potential industry.
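To make the agent-credential question concrete, here is a minimal sketch of one possible pattern: the agent never sees the user's password at all and instead receives a short-lived, narrowly scoped token from a broker. The broker, the scope names, and the token format are hypothetical illustrations, not an existing NIST standard or any shipping product.

```python
# Hypothetical sketch: an agent gets short-lived, scoped tokens from a credential
# broker instead of ever storing the user's password. Illustrative only.
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class ScopedToken:
    value: str
    scope: str            # e.g. "flights:book" or "bank:read" (made-up scope names)
    expires_at: float     # Unix timestamp

    def is_valid_for(self, scope: str) -> bool:
        return self.scope == scope and time.time() < self.expires_at

@dataclass
class CredentialBroker:
    """Holds the user's real credentials; hands agents only narrow, expiring tokens."""
    _issued: dict = field(default_factory=dict)

    def issue(self, agent_id: str, scope: str, ttl_seconds: int = 300) -> ScopedToken:
        token = ScopedToken(
            value=secrets.token_urlsafe(32),
            scope=scope,
            expires_at=time.time() + ttl_seconds,
        )
        self._issued[(agent_id, token.value)] = token
        return token

    def verify(self, agent_id: str, token_value: str, scope: str) -> bool:
        token = self._issued.get((agent_id, token_value))
        return token is not None and token.is_valid_for(scope)

# Usage: the agent asks for a token scoped to one task, uses it, and lets it expire.
broker = CredentialBroker()
token = broker.issue(agent_id="travel-agent-1", scope="flights:book")
assert broker.verify("travel-agent-1", token.value, "flights:book")
assert not broker.verify("travel-agent-1", token.value, "bank:read")
```

The particular design matters less than the questions it surfaces: scope granularity, token lifetime, revocation, and audit are exactly the kinds of details a standards body could pin down so every agent vendor is not inventing its own answer.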

