November 25, 2025

Jennifer Huddleston

To date, states have considered over 1,000 bills about artificial intelligence (AI). More than 160 state-level laws have been passed, ranging from model-level regulation in Colorado to rules for specific applications, such as the use of AI in hiring, to more general studies of the technology's potential impact. With so much action at the state level, there is a significant risk that state-level regulation will derail the light-touch approach to innovation that allowed America to be a global leader during the internet era.

The specifics of any federal preemption or moratorium will matter, as will the method by which it is enacted. AI is, by its nature, almost always a matter of interstate, not intrastate, commerce, making it an inherently federal issue. Even outside the scope of any proposed preemption, there are policy tools that could address specific concerns and balance different approaches at both the state and federal levels, while allowing the US to remain a leader in innovation.

In June 2025, there was an attempt to include a ten-year moratorium on state AI laws in the “One Big Beautiful Bill,” but it did not make it through the Senate. More recently, President Trump has endorsed preventing a state patchwork of AI laws and pursuing a light-touch federal framework, and there are discussions of a possible legislative proposal to be included in the NDAA or other legislation.

Problems of a Patchwork

Before diving into the specifics of renewed pushes for federal preemption of state AI laws, it is important to understand why a patchwork is so potentially problematic for AI.

State-level laws could require innovators to engage in state-by-state compliance, raising the overall cost of offering a product nationwide. Both the cost and the complexity of navigating such regulation could make it more difficult for smaller players in particular to compete, even if their products would not be harmful. This is true even when a state takes a more positive approach, such as creating sandboxes for AI products in certain regulated industries. While such an approach can provide more certainty and opportunity in a particular state, a 50-state patchwork of such programs without clear reciprocity would still require innovators to seek approval in each state.

Legislation that regulates AI models at the state level, such as that enacted in Colorado, is particularly problematic. The result could be that developers withhold products from the entire United States to avoid running afoul of a single state's law. Additionally, such approaches place static requirements on a technology undergoing a period of rapid transformation.

Balancing Federalism and AI Concerns

Many opponents of preemption or a federal moratorium on state AI policy point to concerns about specific harms, such as fraud, discriminatory use, or other harmful applications. These opponents also bemoan the “failure” of the federal government to act on AI and see the states as filling a necessary void.

While there is a potential role for states to act on some matters within their own borders, most of the current debates around AI are inherently interstate matters. Computing power, model development, and other key components of AI cross state borders in the interactions needed both to develop and to deploy the technology. State laws regulating these general aspects of AI would impact AI technologies well beyond a state's borders in ways that raise serious interstate commerce concerns. If states are to provide positive models of AI governance or address specific concerns, their approach would need to be limited to intrastate issues. Examples include safeguarding civil liberties around state government use of AI or of state-collected data, updating state laws to accommodate AI, and clarifying that using AI does not absolve a bad actor of a violation.

There are also areas unrelated to AI where states could help encourage innovation. As my colleague Travis Fisher and I have written, states have an opportunity to consider energy policy reform that could enable further innovation and benefit consumers and businesses more generally. Reform of this type, which affects AI only incidentally, would not be prevented, and would likely not be preempted, under a general proposal.

The same can be true of existing laws at both the state and federal levels. Many generally applicable laws already apply to AI, addressing concerns such as fraud and discrimination. In some cases, there may need to be a re-examination of how existing law might be preventing positive AI use cases, such as autonomous vehicles; however, for many of the harms typically discussed, the issue is not the technology but the bad actor using it. Rather than rush to pass AI-specific legislation, policymakers should pause to consider whether an issue is truly novel or is better addressed by clarifying or updating existing laws. Generally applicable laws can handle many of the most pressing AI concerns and would allow states to respond to clear harms that bad actors might use AI to inflict.

Executive Order v. Legislative Approach to Preemption

A draft executive order directing various actions on state AI legislation has also been under consideration, although it currently appears unlikely to move forward. There are important trade-offs between a legislative and an executive approach that should be weighed along with their consequences.

Some elements of the circulated draft of the executive order would merely formalize existing authority and reflect the executive branch's overall posture toward state actions on AI and its development. For example, directing the Department of Justice to consider litigation over state AI laws does not confer new powers or change existing authority; it merely signals the direction of resources. Other elements reflect ongoing actions, such as a prior comment period on the economic impact of state laws on interstate commerce. Still other elements would have little effect, such as providing proposals to Congress and emphasizing the administration's desire for a federal AI framework. Some elements, though, could raise concerns, such as directing the FTC to determine which state laws impact the truthfulness of AI. Even if such laws have concerning consequences for speech, so would any federal determinations of truthfulness. Additionally, elements such as tying BEAD funding to state AI policy read much like prior legislative attempts, and creating “legislation” by executive order raises concerns regardless of the administration.

An executive order-based approach is likely to be far less lasting than a legislative one. As recent shifts between administrations have shown, an executive order can be quickly revoked. Additionally, unless courts were to rule for the government in challenges to state laws, an executive order would not change the overall landscape the way legislation would.

A legislative approach can be more carefully tailored to strike the appropriate federalism balance and prevent a patchwork of state laws in a lasting way. As I discussed in June, there are existing examples of how such an approach worked when the internet was the new disruptive technology: laws such as the Internet Tax Freedom Act and Section 230 stopped a potentially disruptive patchwork.

The specific language of any preemption will still matter, and policymakers should be cautious of proposals that preempt state laws only to hand the administrative state greater unchecked authority over the technology. Ideally, any preemption or moratorium should focus on preventing a disruptive patchwork while also preserving the light-touch approach and resolving any remaining concerns.

Conclusion

Over the last year, federal policy has largely focused on embracing the potential of AI innovation. A disruptive state patchwork could impact the development and deployment of this important technology well beyond a single state's borders, jeopardizing not only many AI tools we already enjoy but also the development of life-saving applications in fields like medicine and natural disaster response. The specifics of any proposal will matter, as will the nature of a federal policy framework. Ultimately, however, preemption is likely to be part of the solution, heading off a patchwork before it creates problems.