Navigating the Office of Management and Budget (OMB) Memorandum M-24-10

The landscape of artificial intelligence (AI) governance within federal agencies has been significantly shaped by Office of Management and Budget (OMB) Memorandum M-24-10, "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence." Issued on March 28, 2024, the directive outlines comprehensive requirements for AI governance, innovation, and risk management, with particular attention to AI uses that affect the safety and rights of the public. The memorandum builds on the AI in Government Act of 2020 and Executive Order 14110, setting out agencies' responsibilities and encouraging alignment with the NIST AI Risk Management Framework (AI RMF).

Key Requirements of the OMB Memorandum M-24-10

  1. Appointment of Chief AI Officer (CAIO):

  • Role and Responsibilities: Each agency must appoint a CAIO with the necessary expertise and executive authority (at or above the GS-15 level or equivalent). The CAIO is responsible for coordinating the agency's AI activities, instituting governance and oversight mechanisms, managing risk assessments, and promoting AI innovation.
  • Governance and Oversight: The CAIO must establish an AI governance board, develop compliance plans, encourage the adoption of consensus standards (like NIST AI RMF), and ensure continuous monitoring and evaluation of AI systems.
  2. Agency Compliance Plans:

  • Submission and Updates: Agencies are required to submit detailed compliance plans to the OMB within 180 days of the memorandum’s issuance and update them every two years. These plans must include risk management practices, AI impact assessments, data quality assessments, and independent AI evaluations.
  • Public Posting: Compliance plans must be posted publicly, ensuring transparency and accountability.
  3. Minimum Practices for Safety- and Rights-Impacting AI:

  • Documentation and Certification: Agencies must document and certify determinations for safety- and rights-impacting AI, ensuring they comply with established risk management practices.
  • Real-World Testing: AI systems must undergo ongoing performance testing in real-world contexts, with agencies required to document results and mitigate identified risks.
  4. AI Strategy Document:

  • Development and Communication: Within 365 days, CFO Act agencies must develop an AI strategy document, post it publicly, and communicate their AI governance plans, including an inventory of AI use cases, risk management strategies, and workforce capacity assessments (a sketch of one such inventory entry follows this list).
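
To make the inventory requirement more concrete, the sketch below models a single AI use case inventory entry and renders it for public posting. This is a minimal illustration: the field names (use_case_id, is_rights_impacting, and so on) are assumptions for discussion, not the inventory schema OMB prescribes.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Illustrative sketch of an AI use case inventory entry.
# Field names are assumptions for discussion, not OMB's official schema.
@dataclass
class AIUseCase:
    use_case_id: str
    name: str
    purpose: str
    is_safety_impacting: bool
    is_rights_impacting: bool
    risk_mitigations: list[str] = field(default_factory=list)
    last_impact_assessment: date | None = None

    def to_public_record(self) -> str:
        """Serialize the entry for a publicly posted inventory."""
        record = asdict(self)
        record["last_impact_assessment"] = (
            self.last_impact_assessment.isoformat()
            if self.last_impact_assessment else None
        )
        return json.dumps(record, indent=2)

# Example entry of the kind a CFO Act agency might track alongside its AI strategy.
benefits_triage = AIUseCase(
    use_case_id="AGENCY-2024-001",
    name="Benefits claim triage model",
    purpose="Prioritize incoming benefits claims for human review",
    is_safety_impacting=False,
    is_rights_impacting=True,
    risk_mitigations=["human review of all denials", "quarterly bias audit"],
    last_impact_assessment=date(2024, 6, 30),
)
print(benefits_triage.to_public_record())
```

The point of the structure is simply that each use case carries its own risk determinations and mitigation records, which is what the memorandum's documentation and certification requirements assume.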

Agency Scope and Applicability

The memorandum applies to a wide range of federal entities, including:

  • CFO Act Agencies: The 24 agencies covered by the Chief Financial Officers Act, comprising the 15 federal executive departments and 9 other large agencies.
  • General Government Agencies: All agencies as defined in 44 U.S.C. § 3502(1) (the Paperwork Reduction Act).
  • Exclusions: Certain provisions do not apply to entities such as the Government Accountability Office (GAO), the Federal Election Commission (FEC), and the Intelligence Community.

Complexities Beyond the Surface

While the requirements outlined in the OMB memorandum may seem straightforward, they entail numerous complex provisions that can be challenging to navigate:

  • Risk Management and Monitoring: Agencies must implement continuous risk management practices, including ongoing monitoring, human review of AI outcomes, and mitigation of emerging risks (a simplified monitoring sketch follows this list).
  • Data Quality and Performance Testing: Ensuring the quality and relevance of data used in AI systems is critical. Agencies must conduct rigorous performance testing and independent evaluations to validate AI systems' effectiveness and reliability.
  • Transparency and Public Communication: Agencies are required to provide clear, accessible documentation and public notices about AI use cases, ensuring stakeholders are informed and can provide feedback.
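
As a rough illustration of what ongoing, real-world performance monitoring can look like, the sketch below compares a model's accuracy over a reporting period against a baseline and flags degradation for human review. The threshold, function name, and data shapes are assumptions chosen for illustration, not values or interfaces defined by M-24-10.

```python
from dataclasses import dataclass

# Hypothetical monitoring sketch; thresholds and names are illustrative,
# not values prescribed by M-24-10.
@dataclass
class MonitoringResult:
    accuracy: float
    baseline_accuracy: float
    needs_human_review: bool
    notes: str

def evaluate_period(
    predictions: list[int],
    outcomes: list[int],
    baseline_accuracy: float,
    max_drop: float = 0.05,
) -> MonitoringResult:
    """Compare real-world performance for a reporting period against a baseline."""
    if not predictions or len(predictions) != len(outcomes):
        # Incomplete data is itself a risk signal; escalate to a human reviewer.
        return MonitoringResult(0.0, baseline_accuracy, True, "missing or mismatched data")

    correct = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = correct / len(predictions)
    degraded = (baseline_accuracy - accuracy) > max_drop
    notes = "performance drop exceeds threshold" if degraded else "within tolerance"
    return MonitoringResult(accuracy, baseline_accuracy, degraded, notes)

# Example: accuracy fell from a 0.92 baseline to 0.60, so the period is flagged for review.
result = evaluate_period([1, 0, 1, 1, 0], [1, 1, 1, 0, 0], baseline_accuracy=0.92)
print(result)
```

In a production setting, a check like this would run on logged predictions and ground-truth outcomes on a recurring schedule, with flagged periods routed to the human reviewers the memorandum requires.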

Synergist’s AFFIRM Solution: Simplifying AI Compliance

At Synergist, we recognize the intricate nature of these requirements and the challenges agencies face in achieving compliance. Our AFFIRM platform is designed to offer a comprehensive, turnkey solution for managing AI compliance, with a particular focus on adhering to the OMB memorandum.

  1. Comprehensive Compliance Management:

  • Centralized Oversight: AFFIRM provides a centralized platform for managing AI governance, compliance plans, and risk assessments, ensuring all requirements are met efficiently.
  • Real-Time Monitoring: Our solution offers real-time visibility into AI operations, enabling agencies to monitor performance, detect anomalies, and mitigate risks proactively.
  2. Detailed Reporting and Documentation:

  • Automated Reporting: AFFIRM automates the generation of compliance reports, risk management documents, and impact assessments, simplifying the documentation process and ensuring accuracy (a generic sketch of the underlying idea follows this list).
  • Public Communication: The platform facilitates the creation of plain language documentation and public notices, helping agencies maintain transparency and public trust.
  3. AI Innovation and Best Practices:

  • Standards Adoption: AFFIRM encourages the adoption of consensus standards, including the NIST AI RMF, promoting best practices in AI governance and innovation.
  • Continuous Improvement: The platform supports ongoing evaluation and enhancement of AI systems, ensuring they remain effective and compliant with evolving regulations.
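
To illustrate the general idea behind automated compliance reporting, the snippet below assembles structured use case records into a plain-language summary suitable for public posting. This is a generic sketch under assumed field names; it is not AFFIRM's actual interface or OMB's reporting format.

```python
from datetime import date

# Generic illustration of automated compliance reporting; the structure and
# field names are assumptions, not AFFIRM's API or OMB's reporting format.
def render_compliance_summary(agency: str, use_cases: list[dict], as_of: date) -> str:
    """Build a plain-language summary suitable for public posting."""
    lines = [
        f"AI Compliance Summary for {agency} (as of {as_of.isoformat()})",
        f"Total AI use cases inventoried: {len(use_cases)}",
    ]
    flagged = [u for u in use_cases
               if u.get("rights_impacting") or u.get("safety_impacting")]
    lines.append(f"Safety- or rights-impacting use cases: {len(flagged)}")
    for u in flagged:
        lines.append(f"  - {u['name']}: mitigations documented = {bool(u.get('mitigations'))}")
    return "\n".join(lines)

print(render_compliance_summary(
    agency="Example Agency",
    use_cases=[
        {"name": "Benefits claim triage", "rights_impacting": True, "mitigations": ["human review"]},
        {"name": "Internal document search", "rights_impacting": False},
    ],
    as_of=date(2024, 9, 30),
))
```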

Conclusion

Navigating the complexities of AI compliance as mandated by the OMB Memorandum M-24-10 requires a robust and comprehensive approach. Synergist’s AFFIRM platform offers federal agencies a turnkey solution that simplifies compliance management, enhances governance, and promotes AI innovation. By leveraging AFFIRM, agencies can ensure they meet all regulatory requirements while fostering responsible and effective use of AI technologies.

Written by Chris Pernicano, Chief Technology Officer at Synergist Technology.
