The AI Balancing Act: How Industry Can Lead the Charge in the US

The race for artificial intelligence (AI) dominance is on, as governments scramble to create regulatory frameworks that balance innovation with responsible development. March 13, 2024 marked a significant milestone, as the European Union (EU) approved the most comprehensive AI regulatory framework to date: the Artificial Intelligence Act. The Act classifies AI systems based on risk, with high-risk systems facing stricter regulations, including requirements for human oversight, robust data governance, and comprehensive risk assessments.

 

The question now looms: where will the US land in this regulatory dance? A patchwork of state-level regulations could create a compliance nightmare, while a heavy-handed federal approach would foster regulatory capture and stifle innovation. The optimal solution is an industry-led approach that establishes clear guidelines without slowing innovation. The US can leverage its history of fostering technological progress to maintain its position as an AI powerhouse. Here's how:

 

Federal Guiding Principles:

The federal government should lay the foundation with a high-level framework outlining core principles for responsible AI. This framework should establish a floor and ceiling – the minimum expectations and the outer limits – for businesses and individuals developing and using AI. It should ensure that core practices such as governance, evaluation, monitoring for bias and "drift" (unintended changes in an algorithm's behavior over time), and a standardized risk categorization system are part of any compliance effort. It should also stress the importance of transparency and disclosure: people should know when they are interacting with AI tools and systems. Office of Management and Budget Memorandum M-24-10 outlines initial steps, including the people and processes needed for compliance.
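To make the idea of monitoring for "drift" concrete, here is a minimal, illustrative sketch of one common approach: comparing a model's current output distribution against a baseline using the Population Stability Index. The variable names, data, and alert threshold are hypothetical examples, not prescriptions from any framework or from OMB M-24-10.

```python
# Minimal drift-monitoring sketch: compare a model's recent scores to a
# baseline distribution with the Population Stability Index (PSI).
# All names, data, and thresholds below are illustrative assumptions.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Larger PSI means the current distribution has drifted further from baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) / divide-by-zero
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical data: scores captured at deployment vs. scores from this month.
baseline_scores = np.random.default_rng(0).beta(2, 5, size=5_000)
current_scores = np.random.default_rng(1).beta(2, 4, size=5_000)

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:  # a common rule-of-thumb alert level
    print(f"Possible drift (PSI = {psi:.3f}); flag the model for review.")
else:
    print(f"No significant drift (PSI = {psi:.3f}).")
```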

 

Industry Takes the Wheel:

Following these federal guiding principles, industry leaders within each sector can create specific certifications and accreditations for responsible AI use within their domain. Trade associations, governing bodies, and accrediting entities are well-positioned to take the lead in this effort.

 

This industry-led approach is not without precedent. The Motion Picture Association of America, not a government agency, assigns movie ratings. In healthcare, the Joint Commission accredits hospitals and other facilities; that accreditation often substitutes for otherwise applicable state and federal licensing and certification rules, making accredited facilities eligible to treat patients and receive state and federal funding. Similarly, many professions have independent certification bodies to which states defer in ensuring occupational competency. In finance, the Securities and Exchange Commission defers to the Financial Industry Regulatory Authority (FINRA) in overseeing stockbrokers and advisors. In education, we rely on accrediting bodies to ensure educational institutions meet expected quality standards before they are eligible to receive government funding.

 

These trade and industry associations, governing bodies, and accrediting entities should be given a chance to self-regulate before state and federal policymakers assert their regulatory muscle.

 

States Lead As Model AI Utilizers:

State policymakers must resist the temptation, and the pleas, to "do something on AI." Resisting that temptation will prevent a patchwork of state regulations and promote nationwide consistency. The resulting reduction in administrative complexity will let young, innovative startups compete in a market that would otherwise be limited to large incumbents with the resources to navigate a labyrinth of federal and state regulation, thereby mitigating regulatory capture by a few large AI companies.

 

Instead, states should lead by demonstrating how to use AI responsibly to better serve their citizens. Agencies should establish AI governance, policies, and procedures to ensure procurement of compliant and ethical AI tools. That governance should include internal compliance standards, quality checks, and ongoing tool monitoring. AI use should be transparent, so citizens know when they are interacting with AI. AI-based decision-support tools can maximize workforce productivity, but humans should still make the decisions in most cases, especially those with higher risk and impact on citizens (e.g., eligibility for government benefits).

 

States that want to enable AI innovation can go a step further. One challenge in AI development that often leads to bias or inaccuracies is training models on incomplete or inaccurate data sets. State government is a repository of robust information and data, such as labor and employment data, education data, medical and health information, and tax data. States could cleanse this data of identifiable information to protect privacy and create AI training and compliance sandboxes. This would allow tools to be trained or tested on data sets known to be sufficiently diverse and complete, mitigating concerns about bias, drift, or incompleteness. Data sets of this quality can be expensive and out of reach for innovative young companies, but if a state made them available and affordable, it could spur responsible AI innovation and economic development in the state.
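As a rough illustration of the cleansing step, here is a minimal Python sketch that drops direct identifiers and replaces a record key with a one-way hash before data enters a sandbox. The field names and records are hypothetical, and real de-identification would also have to address indirect identifiers and applicable privacy law.

```python
# Illustrative de-identification sketch for a hypothetical state data sandbox.
# Field names and records are made up; this is not a complete privacy solution.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "ssn", "street_address", "phone"]

def deidentify(records: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and replace the raw person ID with a one-way hash."""
    cleaned = records.drop(columns=DIRECT_IDENTIFIERS, errors="ignore")
    cleaned["person_key"] = records["person_id"].astype(str).map(
        lambda pid: hashlib.sha256(pid.encode()).hexdigest()[:16]
    )
    return cleaned.drop(columns=["person_id"])

raw = pd.DataFrame({
    "person_id": [101, 102],
    "name": ["A. Smith", "B. Jones"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "street_address": ["1 Main St", "2 Oak Ave"],
    "phone": ["555-0100", "555-0101"],
    "county": ["Adams", "Boone"],       # retained: useful, not identifying on its own
    "quarterly_wages": [15200, 9800],   # retained: the analytic payload
})
print(deidentify(raw))
```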

 

The Market Will Also Play A Role:

This approach also lets the market serve as a quality check. We recently saw Google launch its latest AI tool, Gemini. Despite the tool's impressive capabilities, the rollout was nothing short of disastrous: the model had been trained to be so "woke" that it demonstrated extreme bias in certain use cases. The public noticed quickly, and investors soon after, leading to at least a short-term market capitalization loss of $96 billion in the days that followed. The message was received, and Google scrambled to make changes and win back public trust.

 

Incidents like the Gemini rollout are bad for business, and not just for Google: they breed distrust of AI across the industry. Industry leaders therefore have a business interest in ensuring that only high-quality, safe, and bias-free AI tools and applications make their way into the public domain, and we have reason to believe they will take their duty to self-regulate seriously.

 

Keeping Pace With AI Development:

Government regulation takes time, often years from start to finish. AI capabilities are evolving faster than semiconductor capacity grew under Moore's Law. A government-centric regulatory approach would often produce rules that are dated and irrelevant by the time they are finalized, and it would slow innovation. An industry-led approach can adapt more quickly, incorporating new capabilities and addressing emerging concerns through updates to certifications and accreditations.

 

Federal and State Oversight Role:

Even with this approach, there is still a meaningful oversight role for federal and state regulators, should they choose to use it. Government agencies could establish clearinghouses to review and approve industry certifications and accreditations, ensuring minimum compliance with the high-level federal and state guiding principles. It is imperative, though, that this review be quick, timely, and minimalist, verifying basic compliance and nothing further.

 

A Practical Example: 

Many companies are starting to use AI tools to help screen job applicants. These tools can streamline resume review significantly, matching relevant experience and skills to those sought for a given position. But there is concern that such tools could carry biases that adversely impact certain groups, such as people with disabilities or people of particular racial or ethnic backgrounds. Under a government regulatory approach, Congress or state legislatures could pass laws requiring that such tools be submitted to a government entity for review and approval. Under an industry-led approach, a group like the Society for Human Resource Management (SHRM) could instead create a certification, consistent with the federal guiding principles, that includes a component for mitigating bias in the use of AI tools. The developer of such a tool, or a company wanting to use one, could pursue a compliance certification from SHRM. The state could use such a tool as a model AI utilizer, or go a step further and make its de-identified employment data available to SHRM or AI developers for model training and compliance purposes.
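To illustrate one kind of check such a certification could require, here is a minimal Python sketch of the "four-fifths rule" comparison of selection rates commonly used in U.S. employment selection analysis. The group labels, counts, and threshold are hypothetical; an actual SHRM-style certification would define its own metrics and evidence requirements.

```python
# Illustrative adverse-impact check (the "four-fifths rule").
# Group names and counts below are hypothetical examples.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (applicants advanced by the screener, total applicants)."""
    return {group: advanced / total for group, (advanced, total) in outcomes.items()}

def adverse_impact_flags(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8):
    """Flag groups whose selection rate falls below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

screening_results = {
    "group_a": (120, 400),  # 30% advanced by the AI screener
    "group_b": (45, 300),   # 15% advanced
}
print(adverse_impact_flags(screening_results))
# {'group_b': 0.5} -> the tool's output warrants investigation before continued use.
```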

 

Let's Innovate Like We Did With The Internet: 

This soft regulatory approach has proven successful in other areas. The Clinton-era decision to keep the internet largely unregulated – often described as the "born free" approach in contrast to the "born in captivity" models of some other countries – allowed the internet to flourish and drive innovation in the US.

 

By following this model, the US can ensure responsible AI development while maintaining its edge in innovation. Industry expertise, coupled with clear, overarching federal principles, will foster an environment where AI can deliver on its immense potential – from improving healthcare and financial services to revolutionizing manufacturing and transportation.

 

The EU's recent regulations remind us of the need for action. By leveraging its history of fostering innovation and adopting an industry-led approach, the US can ensure AI flourishes responsibly, shaping a future defined by technological progress and human well-being.
