One challenge, two paths: AI legislation in the US and the EU

Artificial intelligence has the power to drive innovation and reshape both industry and government. As AI technologies continue to advance and become more widely adopted, governments, businesses, and the public are increasingly evaluating the risks and rewards of their use. Yet, given the rapid pace of development and the novelty of these tools, comprehensive legislation has yet to catch up. In response to growing concerns about unregulated AI, major powers like the EU and the US have begun taking steps to implement AI laws. The EU has introduced a centralized framework to protect fundamental rights, ensure trustworthy AI use, and guide innovation according to European values. In contrast, the US has not implemented a single federal AI law, opting instead for a more fragmented, state-led approach.
This study looks at the current status of AI laws in the US, highlighting both enacted legislation and pending bills, and compares these efforts with the EU’s approach.
Current status of AI legislation in the US
State-level AI legislation in the United States is evolving rapidly, as lawmakers across the country actively introduce and debate bills to regulate a wide range of AI applications. The main concerns are privacy, accountability, the replacement of workers by AI, and the ethical use of AI in sensitive areas such as law enforcement, healthcare, and education.
Because a comprehensive federal AI law has yet to be enacted, the legal landscape is largely being defined by state-level efforts. All 50 states have introduced AI-related legislation during the 2025 session, but the number and focus of these bills vary greatly. This reflects varying regional priorities and concerns about AI’s societal impact. While federal agencies are working on broader AI standards and guidelines, most regulatory changes in the US currently come from individual states.
The most active US states in terms of AI-related legislation are New York (106 bills), Texas (72), New Jersey (67), Massachusetts (49), Virginia (46), Illinois (45), California (44), Maryland (43), Hawaii (40), Georgia (32), and Montana (32). These states have introduced the highest number of AI-related bills, indicating strong legislative interest in various aspects of artificial intelligence, including privacy, ethics, workforce implications, and public safety.
In contrast, states such as Iowa, Wisconsin, Kentucky, Vermont, Alabama, and 15 others have introduced 10 or fewer bills. This suggests a more cautious approach, possibly reflecting a strategy of observing how early regulations develop in other jurisdictions or focusing only on specific, high-priority issues.
Nineteen states, including Connecticut, Florida, North Carolina, and Tennessee, fall in the middle range, having introduced between 11 and 24 AI-related bills.
Among the US territories, Puerto Rico leads with 6 bills, while the District of Columbia, Guam, and the US Virgin Islands have each introduced 1 bill.
The variation in legislative activity across states and territories reflects differences in regional priorities, available resources, and the perceived local impact of AI. States like New York and Texas, with larger populations or significant technology sectors, are more likely to propose a higher volume of legislation, likely due to greater exposure to both the opportunities and challenges presented by artificial intelligence.
Across the United States, most AI-related bills are still being reviewed, with a total of 762 bills currently pending in state legislatures. So far, 54 bills have been officially enacted into law, and 31 have been formally adopted through other legislative processes. However, a larger number (159 bills) have failed to move forward and were rejected or withdrawn. Additionally, 27 bills have passed the legislature and are now waiting for approval or veto by state governors. This pattern suggests that AI legislation is highly active and evolving, with the majority of proposals still under consideration and only a small fraction having fully passed into law.
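As a quick consistency check, the status counts above can be tallied in a short sketch (the figures are those reported in this study):

```python
# Status counts for AI-related bills in US state legislatures,
# as reported in this study.
bill_status = {
    "pending": 762,
    "enacted": 54,
    "adopted": 31,
    "failed": 159,
    "sent_to_governor": 27,
}

total = sum(bill_status.values())           # 1,033 bills tracked in all
share_pending = bill_status["pending"] / total

print(total)                                 # 1033
print(round(share_pending * 100))            # 74 (% of bills still pending)
```

The roughly 74% pending share matches the figure cited later in the topic-level analysis.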
We selected some examples to better understand which types of bills are being prioritized. These examples include bills that have been enacted, adopted, sent to the governor, or failed.
Enacted:
- New York, Utah, and Arkansas passed bills intended to provide protections for individuals whose photograph, voice, or likeness is reproduced through artificial intelligence;
- North Dakota criminalized the use of AI-powered robots to stalk or harass someone, treating such conduct as harassment under state law;
- Arkansas law clarified that the person who provides the input or directive to a generative artificial intelligence tool is the owner of the generated content;
- Virginia enacted multiple laws related to healthcare, including one that requires all hospitals, nursing homes, and certified nursing facilities to establish and implement policies that govern how patients may access and use an "intelligent personal assistant" (such as a digital or virtual assistant powered by artificial intelligence and natural language processing) while receiving inpatient care.
Adopted:
- New Jersey adopted a bill to ensure that AI company employees and evaluators can safely and responsibly report risks and problems in generative AI systems;
- The majority of the adopted bills (18 out of 31) provide honors and recognition for contributions to the AI field. Three bills were adopted to create committees to study the current and future use of AI, and most of the remaining bills only briefly mention AI and are not related to the main topics of the legislation.
Sent to the governor:
- Tennessee would allow victims of deepfake images to sue for actual damages or receive up to $150,000 in liquidated damages;
- Indiana would explicitly allow state agencies to use artificial intelligence (AI) software when preparing required statements and budget projections;
- Maryland would add strict oversight, transparency, and patient safety requirements to the use of AI in health insurance utilization review, ensuring that AI decisions are individualized and compliant and cannot substitute for human clinical judgment;
- Maryland would also evaluate the use of software that uses artificial intelligence (AI) to identify deadly weapons and explore how this software can be effectively integrated with security cameras and other school safety technologies.
Failed:
- Maryland failed to pass a bill that would have made a person who intentionally, knowingly, or negligently designs or creates artificial intelligence software capable of causing physical injury or death strictly liable for damages and subject to a civil penalty if the software is used to cause such harm. Penalties could have included imprisonment for up to 20 years, a fine of up to $100,000, or both;
- Montana's failed bill proposed new rules restricting how artificial intelligence (AI), algorithms, or similar software tools can be used by health insurance companies for utilization review and management (e.g., approving, delaying, or denying health insurance claims based on medical necessity);
- Connecticut failed to pass a bill that would have prohibited residential rental property owners from using pricing algorithms and competitors' sensitive data to set rental prices. Three similar bills also failed in New Mexico, New Hampshire, and Maryland.
Current status of AI legislation in the EU
As artificial intelligence continues to evolve, the European Union has taken a proactive and unified approach to its regulation. In contrast to the fragmented, state-level efforts seen in the US, the EU introduced the Artificial Intelligence Act (AI Act) in 2024. It’s the world’s first comprehensive legal framework specifically designed to govern the development, deployment, and use of AI technologies.
The AI Act aims to ensure that AI systems used within the EU are safe, transparent, and respect fundamental rights. It classifies AI applications by risk level, placing stricter requirements on those deemed high-risk. Like the General Data Protection Regulation (GDPR) did for data privacy, this legislation is expected to influence AI governance globally.
The enforcement of the AI Act will be carried out at both the national and EU levels. Each EU member state must appoint one or more national supervisory authorities to oversee compliance within its jurisdiction. These authorities will conduct audits, investigate violations, and impose penalties where necessary. At the EU level, the European Artificial Intelligence Board will coordinate national efforts, promote consistent application of the law, and provide guidance on complex cases¹.
Noncompliance with the AI Act can result in substantial financial penalties. According to Article 99: Penalties, violations of Article 5, which prohibits unacceptable-risk AI practices, can lead to fines of up to €35 million or up to 7% of a company’s total global annual turnover from the previous financial year, whichever is higher.
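The "whichever is higher" rule can be expressed as a one-line sketch (a simplified illustration of the Article 99 ceiling for Article 5 violations, not legal advice):

```python
def article_5_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Maximum fine for prohibited (Article 5) AI practices under
    Article 99 of the EU AI Act: the higher of a fixed EUR 35 million
    or 7% of total worldwide annual turnover for the preceding
    financial year."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion in turnover faces a ceiling of EUR 70 million,
# since 7% of its turnover exceeds the fixed EUR 35 million amount.
print(article_5_fine_ceiling(1_000_000_000))  # 70000000.0
```

For smaller companies, the €35 million figure dominates; the percentage-based cap only binds once turnover exceeds €500 million.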
EU countries use varied approaches when assigning governing bodies to enforce the AI Act. Countries such as Belgium, Spain, Germany, and Austria have adopted a decentralized approach, assigning many different governing bodies (ranging from 19 to 26). In contrast, nations like France, Cyprus, Estonia, Luxembourg, Czechia, Slovakia, and Latvia have opted for a more centralized model, with very few enforcement bodies monitoring AI (between 1 and 3).
Interestingly, this pattern does not strictly correlate with country size or population. For example, smaller countries such as Malta, Finland, and Slovenia have each assigned 10 governing bodies, surpassing larger countries like France. Notably, Hungary and Italy are the only EU countries that currently have no designated enforcement bodies.
Overall, 22 out of 27 EU countries have assigned up to 10 enforcement bodies, while five countries fall into the higher range, between 14 and 26. This diversity highlights the absence of a standardized approach to AI Act enforcement across the EU, as countries tailor their regulatory frameworks to suit their national contexts and priorities.
Structural differences: the US AI legislation vs. the EU AI Act
In the European Union, AI legislation is characterized by a centralized framework established at the Union level. The EU AI Act exemplifies this model, providing a unified legal standard that is directly applicable across all member states. Enforcement and supervision are delegated to designated public authorities within each member country. This top-down approach is designed to minimize legal fragmentation, harmonize AI oversight, and provide clear, consistent rules for developers and users throughout the internal market.
In contrast, the United States adopts a highly decentralized approach to AI legislation. The absence of comprehensive federal AI law has led to a patchwork of initiatives at the state level. Each state may propose, enact, or reject its own AI-related bills, often focusing on specific sectors or policy concerns. This is evident in the highly variable legislative activity across US jurisdictions, with some states proposing multiple bills and others having little or no legislative movement. The result is a diverse regulatory landscape with inconsistent coverage and standards, reflecting local priorities and political will rather than a coordinated national strategy.
Focus and priorities in the US
The five key areas with the highest number of AI-related bills in the US are Health Use (12.6% of all bills), Government Use (11.9%), Education (11.6%), Private Sector Use (10.5%), and Criminal Use (8.3%). Together, these categories represent the majority of AI legislation. Other topics, including Elections, Employment, Responsible AI Use, and various additional issues, make up the remaining 45.1% of bills.
While the majority of bills — an average of 74% — remain pending, certain topics show a higher rate of failure than others. The energy sector has the highest failure rate, with 31% of its bills having failed and most of the rest still pending. Bills related to Criminal Use also show a high failure rate, with 23 bills already rejected and only 2 enacted into law.
The topics with the lowest percentage of pending bills include Studies, where 64% of bills have already moved beyond the pending stage, followed by Energy (60%), Criminal Use (48%), Education (46%), Elections (42%), and Government Use (41%). This suggests that these areas are priorities for legislative processing. In contrast, the lowest-priority areas appear to be Responsible Use (14% of bills moved beyond pending), Housing (14%), and Employment (21%).
Focus and priorities in the EU
The EU AI Act exemplifies a risk-based regulatory strategy, where AI systems are classified and governed according to the potential risks they pose to society, particularly regarding safety, privacy, non-discrimination, and personal freedoms².
Key priorities include:
- Protecting fundamental rights: the legislation places strong emphasis on protecting individuals, particularly in sensitive areas such as biometric recognition, surveillance, and high-impact decision-making;
- Risk management: high-risk AI systems — such as those used in critical infrastructure, healthcare, education, and law enforcement — are subject to strict requirements for transparency, accountability, and human oversight;
- Transparency and accountability: the EU prioritizes making AI systems explainable and ensuring clear lines of responsibility throughout their development and deployment process;
- Promoting innovation: in addition to regulation, the EU aims to create a supportive environment for ethical AI innovation by providing regulatory sandboxes and guidance for businesses and researchers;
- Harmonization across member states: by establishing EU-wide standards and a coordinated network of supervisory authorities, the EU seeks to prevent fragmented national approaches and create a single, trusted market for AI.
Methodology and sources
Data on AI legislation in the US, including bill numbers, status, state, and subject matter, was collected from the National Conference of State Legislatures. For the EU, key priorities and the designation of national supervisory authorities were identified from the AI Act. The data was then analyzed to compare the current status of AI legislation in the US and the EU, with particular attention to the differing focuses and priorities of the legislation in each region.
For complete research material behind this study, visit here.