On April 3, the U.S. Office of Management and Budget (OMB) released a new memo titled “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” (OMB M-25-21). The memo sets several priorities for Federal agencies, particularly the larger Cabinet-level agencies known as CFO Act agencies, for deploying AI internally at speed while still accounting for governance and adoption.
From a strategic standpoint, the memo establishes that agencies need to remove barriers to AI innovation and focus on providing value to the American taxpayer. It also directs agencies to accelerate responsible AI adoption across the federal workforce. This is noteworthy: the memo does not merely allow agencies to adopt AI for internal uses, it actively encourages them to do so responsibly, in particular through the appointment of a Chief AI Officer.
Beyond these overarching items, the memo outlines specific policy guidance that sits at the heart of Synergist’s core competencies. While we are a company focused on operational excellence for our customers, those operations need a solid strategic and policy foundation to stand on. These policies align not only with how Synergist views AI adoption, but also with how our AFFIRM platform is already configured to support deploying AI tools across any enterprise, large or small.
The memo starts with Driving AI Innovation, calling for a publicly available AI strategy and the sharing of information between agencies. At Synergist, our platform adjusts to provide a governance solution regardless of your level of maturity or your strategic goals. And as agencies share information, AFFIRM can ensure that data is protected and used only in the ways prescribed by interagency agreements. As the memo points out, information sharing is key to the success of many government missions and to government efficiency, and AFFIRM supports that sharing as a frictionless tool with real-time, automated monitoring of workflows.
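To make that concrete, here is a minimal sketch of how a data-use rule from an interagency agreement can be enforced automatically: a requested use of shared data is approved only if the agreement explicitly names that purpose. This is an illustration only, not AFFIRM’s actual API, and every agency name, purpose, and function in it is hypothetical.

```python
# Illustrative only: enforcing that shared data is used solely for purposes
# named in a hypothetical interagency agreement. Not AFFIRM's actual API.
from dataclasses import dataclass, field


@dataclass
class SharingAgreement:
    """Data-use terms of a hypothetical interagency agreement."""
    providing_agency: str
    receiving_agency: str
    permitted_purposes: set = field(default_factory=set)


def authorize_use(agreement: SharingAgreement, requested_purpose: str) -> bool:
    """Approve a use of shared data only if the agreement names that purpose."""
    allowed = requested_purpose in agreement.permitted_purposes
    # A production platform would also write this decision to an audit log
    # so the data exchange can be monitored in real time.
    print(f"{agreement.receiving_agency} requested '{requested_purpose}': "
          f"{'APPROVED' if allowed else 'DENIED'}")
    return allowed


if __name__ == "__main__":
    agreement = SharingAgreement(
        providing_agency="Agency A",
        receiving_agency="Agency B",
        permitted_purposes={"benefits-eligibility-review", "fraud-detection"},
    )
    authorize_use(agreement, "fraud-detection")      # named in the agreement: approved
    authorize_use(agreement, "marketing-analytics")  # not named: denied
```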
Improving AI Governance is the next, and perhaps most important, part of the memo. Establishing these foundational rules allows the other elements of AI adoption to fall into place; without clear governance, none of the other policy recommendations can be implemented successfully. The memo directs agencies to take several steps to build the internal infrastructure and supporting governance measures, such as policies, use case inventories, and interagency governance bodies, and it directs those interagency bodies to make sure agencies are taking responsible steps to deploy AI tools. Governance cannot stop at the strategic or policy level when it comes to AI: you also need governance around the actual models to ensure they operate the way they are intended to. In our experience, it is imperative that, as agencies build their strategies and operational plans, they focus on automating the governance of their AI tools.
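In practice, automating that governance can start with something as simple as a deployment gate that refuses to promote a model whose evaluation results fall outside the thresholds a governance body has set. The sketch below is illustrative only; it is not AFFIRM’s API, and the metrics, thresholds, and names are assumptions made for the example.

```python
# Illustrative only: an automated governance gate that blocks deployment when a
# model's evaluation metrics violate a governance body's thresholds.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class GovernancePolicy:
    """Thresholds a governance body might require before go-live (example values)."""
    min_accuracy: float = 0.90
    max_false_positive_rate: float = 0.05
    require_human_review: bool = True


def deployment_gate(metrics: Dict[str, float], has_human_review: bool,
                    policy: GovernancePolicy) -> List[str]:
    """Return a list of governance violations; an empty list means the model may deploy."""
    violations = []
    if metrics.get("accuracy", 0.0) < policy.min_accuracy:
        violations.append("accuracy below governance threshold")
    if metrics.get("false_positive_rate", 1.0) > policy.max_false_positive_rate:
        violations.append("false positive rate above governance threshold")
    if policy.require_human_review and not has_human_review:
        violations.append("no human review workflow configured")
    return violations


if __name__ == "__main__":
    issues = deployment_gate(
        metrics={"accuracy": 0.93, "false_positive_rate": 0.08},
        has_human_review=True,
        policy=GovernancePolicy(),
    )
    print("Cleared for deployment" if not issues else f"Blocked: {issues}")
```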
Lastly, the memo discusses Fostering Public Trust in Federal Use of AI. It is encouraging that the Administration places this much emphasis on public trust in AI: if the federal workforce does not trust the technology that will carry out its designated tasks, and if the American public does not trust it, then the government cannot achieve its mission. The memo outlines several minimum risk management standards that agencies must have in place as they deploy AI tools. These standards are already a key element of the AFFIRM platform and are present in our existing deployments across Federal agencies and the private sector. The memo targets exactly the right risk management elements to consider when planning, building, deploying, and monitoring an AI tool in the current technology risk environment.
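One way to keep such standards from becoming a paperwork exercise is to treat them as a machine-checkable checklist that must be satisfied before a high-impact use case ships. The short sketch below does exactly that; the practice names are generic examples chosen for illustration, not a verbatim list from the memo, and the code is not AFFIRM’s API.

```python
# Illustrative only: the practice names below are generic examples, not a
# verbatim list of the minimum risk management practices in OMB M-25-21.
REQUIRED_PRACTICES = {
    "pre_deployment_testing",
    "impact_assessment",
    "ongoing_monitoring",
    "human_oversight",
}


def missing_practices(documented):
    """Return the required risk management practices not yet documented for a use case."""
    return REQUIRED_PRACTICES - set(documented)


if __name__ == "__main__":
    documented = {"pre_deployment_testing", "ongoing_monitoring"}
    gaps = missing_practices(documented)
    print("Ready to deploy" if not gaps else f"Outstanding practices: {sorted(gaps)}")
```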
Synergist, through our AFFIRM platform and professional services, has worked through numerous use cases with customers trying to achieve what OMB has laid out for the entire federal government. While our tools can help any partner monitor their AI models and ensure they comply with any defined risk regime, we’ve found that to set yourself up for success it is important to 1) have a clear strategy for how you want to use AI, 2) establish clear governance, both within your organization and at the technical level, so that your models comply with the rules you set forth, and 3) develop buy-in for adoption from your users and customers. The more you can automate the governance of how your AI operates, the more time, energy, and resources you can devote to improving or growing your business, improving how you deliver your mission to the American public, or improving America’s position as a global leader in technological innovation.