AI is slowly becoming a part of many aspects of our lives - everything from the complex to the mundane. As businesses, entrepreneurs, nation-states, and hackers work out all the ways AI can be leveraged, we also need to understand the rules and regulations that will govern how different users can put it to work.
One of the most distracting parts of AI is the hype cycle that surrounds it. Every bit of news related to AI, whether it’s a new model, a new threat, or a new advancement by a foreign country, is treated as a major shift, a major risk, or a wholly new invention. Sometimes that is true, but for much of the news in this sector, no drastic change in technology, policy, or even mindset is needed. At Synergist, we work with partners in many industries to help them use AI securely and smartly to improve their businesses, their products, and all kinds of experiences for their users. One thing we see again and again is the value of not riding the rollercoaster of the news cycle and staying even-keeled about the use cases and risks associated with deploying AI.
The best way to avoid the ups and downs of these announcements is with sound policy. Much of the public’s distrust of AI is related to risk. It’s one thing to ask an LLM to plan your vacation or write a term paper for you, but it’s quite another to trust your business’s public reputation or all of your accounting functions to a machine. Put simply, most people are unwilling to place critical elements of their lives in the hands of machines.
And this is where policy comes in. To avoid getting too high or too low, you need several policies in place that encourage trust, and therefore adoption, of AI. It is important to say up front that policy does not necessarily mean burdensome regulation. To be sure, developers and technologists need a certain amount of freedom to innovate and bring new tools to market. But for users to adopt those tools, there needs to be predictability, trust, and an understanding of risk.
The first place to start is model cards. Those of us in the AI community understand what they are, but they need to be understood by the broader public. Think of a model card like the back of a baseball card: it gives you the relevant details and statistics about a model so you understand what it will and will not do, what its limitations are, and what data it does and does not use. Policies should be put in place to mandate a clear, standardized model card that developers and deployers must provide with every tool available to the public.
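To make the idea concrete, here is a minimal sketch of the kind of fields a standardized model card might be required to capture. The field names and example values below are illustrative assumptions for discussion, not an existing standard or any particular developer’s format.

```python
from dataclasses import dataclass
from typing import Dict, List

# Illustrative sketch only: these fields are assumptions about what a
# standardized, publicly mandated model card might contain.
@dataclass
class ModelCard:
    model_name: str
    developer: str
    intended_uses: List[str]               # what the model is designed to do
    out_of_scope_uses: List[str]           # what it should not be used for
    known_limitations: List[str]           # failure modes, biases, knowledge gaps
    training_data_summary: str             # what data was and was not used
    evaluation_results: Dict[str, float]   # headline statistics, like the back of a baseball card

# Hypothetical filled-in card for a public-facing tool.
card = ModelCard(
    model_name="example-assistant-v1",
    developer="Example Labs",
    intended_uses=["drafting customer support replies"],
    out_of_scope_uses=["medical, legal, or financial advice"],
    known_limitations=["may state outdated facts with confidence"],
    training_data_summary="Licensed and public web text; no customer records.",
    evaluation_results={"internal_helpfulness_benchmark": 0.87},
)
print(card.model_name, card.known_limitations)
```

Whatever the exact fields turn out to be, the point of a policy mandate is that every public-facing tool ships with the same card in the same place, so users and regulators know exactly where to look.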
Second, policymakers should look at the physical world - supply chains. To maintain an even keel, AI developers and deployers need consistent access to advanced chips. The U.S. is off to a good start in incentivizing chipmakers to build plants outside of Taiwan, but that effort needs to continue, and policy should reflect that. Similarly, policy should maintain a certain level of predictability around Taiwan, because anything that upends the chip supply chain would seriously disrupt the continued adoption of AI tools, to say nothing of innovation.
Lastly, the government needs to incentivize coordination and cooperation, both across agencies and between the government and the private sector. While giving the private sector room to innovate free of overly burdensome regulation is surely one element of ensuring American dominance, there needs to be a certain level of common understanding and common standards. Policymakers should work with industry to establish commonsense baselines that provide a foundation from which to respond to changing capabilities, changing threats, and a rapidly changing industry ecosystem.
Technological advancement in AI is happening fast, and change is coming from all corners of the globe. Along with this technical revolution, policy and regulatory frameworks are shifting in the United States and abroad, and at Synergist we have a front-row seat to all of it. Based on our experience working with public sector agencies and private sector corporations, these are the policy shortfalls that must be addressed for America to maintain its leadership in AI innovation and adoption. As practitioners who help enterprises implement AI governance and compliance every day, we see the full picture, and we are confident these recommendations will keep the country in that leading position.