As artificial intelligence continues to reshape business operations, enterprises face a growing landscape of AI regulations. 2025 marks a pivotal moment for AI regulation as state laws in the U.S. and key provisions of the EU AI Act begin to take effect. This blog post outlines the most important regulations taking effect in 2025 and what enterprises can do today to meet this moment.
U.S. regulations entering into force in 2025
On January 1, 2025, California began enforcing three AI laws relevant to enterprises, covering AI-processed personal information, healthcare services, and healthcare facilities.
CA Consumer Privacy Protection (AB 1008)
Applies to: any company employing AI systems that can output personal information protected under the CCPA.
AB 1008 amends the California Consumer Privacy Act (CCPA) to clarify that personal information can exist in various formats, including artificial intelligence systems that are capable of outputting personal information.
Enterprises must:
- Extend existing data privacy rights to AI-processed personal information
- Update privacy policies to address AI's role in data processing
- Implement mechanisms for consumers to access and control their AI-processed data
- Prepare for potential penalties and legal action for non-compliance
CA Healthcare Utilization Review (SB 1120)
Applies to: healthcare services
SB 1120 amends Section 1367.01 of the Health and Safety Code and Section 10123.135 of the Insurance Code, introducing specific regulations for health care service plans and disability insurers that employ artificial intelligence, algorithms, or other software tools in their utilization review (UR) and utilization management (UM) processes.
For healthcare organizations and insurers using AI in utilization review processes, this law mandates:
- Licensed physician supervision for AI-driven decisions
- Individual assessment based on specific patient history rather than generic datasets
- Fair and equitable application without discrimination
- Transparent policies and procedures available upon request
- Compliance with strict oversight requirements to avoid penalties
CA Patient Communications (AB 3030)
Applies to: healthcare facilities
AB 3030 introduces new requirements for health facilities, clinics, and physician practices in California that use genAI to generate written or verbal patient communications pertaining to patient clinical information.
Healthcare facilities using generative AI for patient communications must:
- Include clear disclaimers when communications are AI-generated
- Provide instructions for contacting human healthcare providers
- Note that communications reviewed by licensed healthcare providers are exempt
We expect U.S. states to continue to take a leading role in AI governance regulation in 2025. Initiatives to watch include:
- The California Privacy Protection Agency (CPPA) is conducting rulemaking on automated decision-making technology.
- The Multistate AI Policymaker Working Group, spanning 45 states, is working toward consistent AI regulation approaches across states in the U.S.
- Texas' proposed AI Governance Act (TRAIGA) is expected to pass in 2025, given that the Texas legislature is only in session every two years.
- New Jersey and Illinois are considering legislation on AI in employment and insurance.
Global Milestones
This is only the beginning of AI regulation, with some EU AI Act provisions entering into force and numerous initiatives expected to gain traction in 2025. Key global policy developments include:
- In Europe, AI literacy, prohibited practices and obligations for General-Purpose AI (GPAI) models begin to apply:
- By February 2, 2025, providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy of their staff (Article 4). Prohibited AI practices also go into effect on February 2nd (Article 5).
- By May 2, Codes of Practice for GPAI models must be ready (Article 56). These codes will help determine whether GPAI models pose systemic risk and help developers, distributors, and deployers comply with their respective requirements.
- On August 2, obligations for GPAI models go into effect (Articles 53 and 55). These include:
- providing technical documentation
- making publicly available detailed summaries about the content used for training
- showing compliance with EU copyright law.
- On January 10, 2025, South Korea enacted the Framework Act on the Development of AI and Establishment of a Foundation of Trust, a comprehensive AI law sharing common elements with the EU AI Act.
- The UK unveiled the AI Opportunities Action Plan on January 13, 2025, outlining its commitment to shaping the future of AI, including enabling safe and trusted AI development and adoption through regulation, safety, and assurance. The UK is expected to continue public consultations on sector-specific issues relating to AI.
What Enterprises Should Do Now
Common elements emerge from the laws taking effect in 2025, as well as from others set to take effect in 2026, such as the Colorado AI Act and the EU AI Act provisions for high-risk AI. Enterprises beginning their compliance journey should prioritize three governance areas:
- Inventory AI systems
- Identify all AI systems in use across your organization
- Document how personal information is processed by these systems
- Establish risk management processes
- Develop AI risk assessment frameworks
- Create documentation processes for AI system decisions
- Ensure human review is built into risk management processes
- Promote AI literacy
- Educate teams about AI use in your organization and policies and processes already in place
- Establish clear roles and responsibilities for AI oversight
As AI regulations continue to evolve, organizations must establish comprehensive yet agile governance frameworks that not only comply with emerging requirements but also enable the responsible and accelerated adoption of AI. Credo AI empowers organizations to navigate this complex landscape by identifying relevant regulations, mapping policy requirements to a streamlined set of actionable controls, and offering implementation guidance to mitigate risks effectively.
Connect with Credo AI today to explore how our AI Governance Advisory Services can help you meet AI literacy requirements and build a governance foundation for your AI initiatives.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.