Manage Compliance with AI Regulations

As AI adoption increases, regulation to ensure fairness, privacy, and security is becoming a dominant theme. Governments and industry bodies across the globe are crafting legislation that seeks to regulate AI. In the U.S. alone, there are hundreds of proposed bills at the federal, state, and local government levels.

AFFIRM can help your organization achieve compliance with current AI regulations and navigate the developing AI regulatory environment to remain compliant as new regulations are introduced.

Learn More

OMB Memorandum M-24-10

Issued in March 2024, OMB Memorandum M-24-10 requires federal agencies to adopt responsible practices when integrating AI technologies. The Memorandum requires federal agencies to ensure AI systems are developed and used ethically, with transparency and accountability. Under the Memorandum, federal agencies must prioritize data quality and security, address and mitigate bias in AI algorithms, and communicate clearly about AI applications to build public trust.

See Full Document

Colorado SB24-205

In May 2024, Colorado became the first U.S. state to enact a comprehensive law governing AI systems with the passage of SB24-205. All requirements under the law will become effective on February 1, 2026. The law seeks to regulate algorithmic discrimination in AI systems and requires developers and deployers of high-risk AI systems to document AI system capabilities, limitations, and potential impacts on individuals and society. Developers must conduct bias and discrimination risk assessments and implement measures to mitigate these risks over the course of AI system lifecycles. Deployers must disclose AI use to consumers and ensure transparency and accountability.

See Full Document

Colorado SB21-169

Colorado SB21-169 mandates that insurers using AI systems that rely on consumer data must ensure the systems do not result in unfair discrimination. Enacted on July 6, 2021, the law requires insurers to establish a governance and risk management framework, conduct regular testing, and report their findings to the Colorado Division of Insurance.

See Full Document

NYC Local Law No. 144

New York City's Local Law 144 became effective on July 5, 2023. The law regulates the use of Automated Employment Decision Tools (AEDTs) by employers. The law mandates that any AEDT used for hiring or promotions must undergo an annual independent bias audit. The audits must be performed by an objective third party not involved in developing or distributing the AEDT. The audits must evaluate selection rates and calculate impact ratios across demographic categories such as race, ethnicity, and sex. Employers must publicly disclose their audit results and provide detailed information about the data used in the audit.

See Full Document

EU AI Act

The EU AI Act is the most comprehensive AI legislation passed to date. The Act was adopted by the EU Parliament in March 2024 and approved by the Council of the EU in May 2024. The Act establishes compliance requirements for AI systems based on the level of risk they pose to humans and fundamental rights. The Act’s requirements will phase in over a period of 36 months. The Act aims to achieve three key regulatory objectives: (1) ensure AI systems are developed and used in a trustworthy and ethical manner, (2) mitigate risks such as bias and discrimination, and (3) promote transparency, accountability, and human oversight throughout the lifecycle of AI systems.

See Full Document

NIST AI RMF

The NIST AI RMF aims to guide the design, development, and deployment of AI systems to manage risks and promote trustworthiness. Released on January 26, 2023, the framework is voluntary and industry-agnostic, but provides organizations with a structured approach to developing and deploying AI systems in a safe and transparent manner. The NIST AI RMF emphasizes principles such as validity, reliability, accountability, and privacy throughout the AI system lifecycle.

See Full Document