In today’s partisan landscape, there is at least one topic that everyone agrees on: regulating artificial intelligence (AI) to foster responsible adoption and use. Governments and industry bodies across the globe are crafting legislation and standards to promote accountability, transparency, and fairness while mitigating risks such as bias and privacy violations. Compliance with AI regulations and standards is crucial to ensuring the ethical development, deployment, and use of AI. In this post, we introduce some of the key legislation and standards shaping the emerging AI regulatory environment.
EU AI Act
The EU AI Act (the Act) is the most comprehensive AI legislation passed to date. Adopted by the EU Parliament in March 2024 and approved by the Council of the EU in May 2024, the Act establishes compliance requirements for AI systems based on the level of risk they pose to health, safety, and fundamental rights. Its requirements will phase in over 36 months, giving AI developers and users time to establish compliance best practices. The Act aims to achieve three key regulatory objectives: (1) ensure AI systems are developed and used in a trustworthy and ethical manner, (2) mitigate risks such as bias and discrimination, and (3) promote transparency, accountability, and human oversight throughout the lifecycle of AI systems.
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Issued in October 2023, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the Order) aims to promote AI innovation while ensuring ethical standards and national security. The Order is multifaceted: it seeks to establish new standards for AI safety and security; protect the privacy of Americans; advance equity and civil rights; support consumers, patients, students, and workers; promote innovation and competition; and ensure responsible and effective government use of AI. The Order also acknowledges that additional AI regulatory action will be required and that the Executive Branch will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible AI innovation.
OMB Memorandum M-24-10
Issued in March 2024, OMB Memorandum M-24-10 (the Memorandum) requires federal agencies to adopt responsible practices when integrating AI technologies. The Memorandum directs agencies to ensure AI systems are developed and used ethically, with transparency and accountability. Under the Memorandum, federal agencies must prioritize data quality and security, identify and mitigate biases in AI algorithms, and communicate clearly about AI applications to build public trust. The Memorandum also emphasizes compliance with all existing legal and regulatory frameworks, including privacy and civil rights laws.
Colorado SB24-205 & SB21-169
In May 2024, Colorado became the first U.S. state to enact a comprehensive law governing AI systems with the passage of SB24-205. All requirements under the law take effect on February 1, 2026. The law targets algorithmic discrimination in AI systems and requires developers and deployers of high-risk AI systems to document system capabilities, limitations, and potential impacts on individuals and society. Developers must conduct bias and discrimination risk assessments and implement mitigation measures throughout the AI system lifecycle. Deployers must disclose AI use to consumers and ensure transparency and accountability, including providing explanations for AI decisions that affect individuals. SB24-205 also emphasizes data privacy and mandates compliance with existing laws.
SB21-169, enacted on July 6, 2021, requires insurers that use AI systems relying on consumer data to ensure those systems do not result in unfair discrimination. The law requires insurers to establish a governance and risk management framework, conduct regular testing, and report their findings to the Colorado Division of Insurance. It protects consumers by holding insurers accountable for any discriminatory outcomes produced by their AI systems and requires corrective action when discrimination is detected.
NYC Local Law No. 144
New York City's Local Law 144, which became effective on July 5, 2023, regulates employers' use of Automated Employment Decision Tools (AEDTs). Any AEDT used for hiring or promotion decisions must undergo an annual independent bias audit performed by an objective third party with no role in developing or distributing the tool. The audit must evaluate selection rates and calculate impact ratios across demographic categories such as race, ethnicity, and sex. Employers must publicly disclose their audit results, provide detailed information about the data used in the audit, and notify candidates and employees at least 10 business days before using an AEDT.
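To make the audit arithmetic concrete, here is a minimal sketch in Python of how selection rates and impact ratios are typically computed. The category labels and counts are hypothetical examples for illustration, not an official audit tool or real audit data.

```python
# Minimal sketch of the selection-rate and impact-ratio arithmetic used in
# AEDT bias audits. Categories and counts are hypothetical examples.

# (selected, total) counts of applicants screened by an AEDT, per category.
audit_data = {
    "Category A": {"selected": 120, "total": 300},
    "Category B": {"selected": 75, "total": 250},
    "Category C": {"selected": 30, "total": 150},
}

# Selection rate = candidates advanced by the tool / candidates assessed.
selection_rates = {
    group: counts["selected"] / counts["total"]
    for group, counts in audit_data.items()
}

# Impact ratio = a category's selection rate divided by the selection rate
# of the most-selected category (so the highest-rate group scores 1.0).
highest_rate = max(selection_rates.values())
impact_ratios = {
    group: rate / highest_rate for group, rate in selection_rates.items()
}

for group in audit_data:
    print(
        f"{group}: selection rate = {selection_rates[group]:.2%}, "
        f"impact ratio = {impact_ratios[group]:.2f}"
    )
```

Local Law 144 requires disclosure of these ratios rather than setting a pass/fail threshold, though auditors often use reference points such as the four-fifths rule from EEOC disparate-impact guidance when interpreting the results.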
NIST AI Risk Management Framework
The NIST AI Risk Management Framework (NIST AI RMF) guides the design, development, and deployment of AI systems to manage risks and promote trustworthiness. Released on January 26, 2023, the framework is voluntary and industry-agnostic, but it gives organizations a structured approach to developing and deploying AI systems in a safe and transparent manner. The framework emphasizes principles such as validity, reliability, accountability, and privacy. Organizations are encouraged to integrate these considerations throughout the AI system lifecycle, from design and development through deployment and monitoring, to minimize potential negative impacts and enhance the benefits of AI systems.
AI Regulation Is Just Beginning
The legislation and standards discussed above are just the start of AI regulation. 2025 will be the year governance, compliance, and regulation become dominant themes in the rapidly growing AI market. In the U.S. alone, more than 100 federal and 600 state legislative proposals seek to govern AI in some manner. While many of these proposals will not become law, a significant portion will, creating complex AI compliance requirements. Synergist Technology’s AFFIRM AI governance and compliance platform can help your organization manage these compliance requirements effectively. To learn more, connect with us today.
Written by Chris Dougherty, Chief Financial Officer of Synergist Technology.