Elizabeth Gidez
Associate General Counsel

AI regulations in the U.S.: Navigating a complex and evolving landscape

September 6, 2024

As the European Union unites its 27 nations under the Artificial Intelligence Act, regulation on the other side of the Atlantic is following a different path: the United States is pursuing a patchwork of federal, state and industry frameworks. Compliance and oversight have become more complicated than ever.

Unlike the EU, the United States does not have a regulatory body specific to artificial intelligence. In lieu of an overarching authority, entities throughout government and industry have responded with their own guidance, seemingly at the speed of ChatGPT itself.

In October 2022, the White House's Office of Science and Technology Policy released its Blueprint for an AI Bill of Rights, and the White House issued an Executive Order one year later. In February 2024, Securities and Exchange Commission Chair Gary Gensler put out a call for guardrails. And these are just a few of the many laws, policies and frameworks emerging.

Here, we provide an overview of AI regulations in the U.S., diving deeper into the following topics:

  • How AI is defined in the U.S. and the scope for regulation
  • Factors to consider to remain compliant
  • How to stay ahead as the AI regulatory landscape in the U.S. evolves

Defining AI and its scope in the U.S.

To put this emerging guidance into practice in an organization's own operations, it's helpful to understand what is, and isn't, considered AI in the United States.

Fortunately, the White House Executive Order provides guidance here. It defines an AI system as the following (a brief illustrative sketch follows the list):

  • Any data system, software, hardware, application, tool or utility that operates in whole or in part using AI
  • With the ability to make predictions, recommendations or decisions
  • With an influence on real or virtual environments

Territorial and sectoral scope of AI regulations

Organizations will need to prepare to comply with U.S.-based AI regulations if their operations include:

  • AI systems developed in the U.S.
  • AI systems used in the U.S.
  • Sectors such as financial and lending services, healthcare and education, which deal in large amounts of data, processes requiring automation and complex analysis

Moreover, it's prudent for all organizations, regardless of their geographical location or sector, to prepare for compliance with some form of regulatory requirements around the use of AI. As AI technology continues to advance and integrate into various aspects of business operations, regulatory bodies worldwide are increasingly focusing on ensuring that these technologies are used ethically and responsibly.

The current state of AI regulations in the U.S.

As a result of this decentralized governance approach, organizations now have AI guidance — both mandated and voluntary — from a variety of sources, which brings a rich array of perspectives to apply to their own unique situations and technology applications. “It’s about understanding the use cases in your organization and how are you going to have that oversight,” said Nonie Dalton, Vice President of Product Management at Diligent, in a recent blog post.

But there’s a downside as well: organizations also have to contend with a host of disparate rules and regulations, each with its own limitations and all bearing the potential for overlap and conflict when considered en masse.

What this means for boards and executives

How can boards and executives navigate this growing labyrinth, so new regulations, requirements and risks don’t catch them by surprise? Here are a few foundational frameworks, as well as policies in development, to keep in mind.

8 principles for responsible AI development and deployment in the U.S.

Even as AI is a technological innovation running on models, algorithms and analytics, the U.S. approach to AI governance puts humans — and human decision-making — at the center of it all. “In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built,” the White House’s Executive Order declares.

That Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence directed the Department of Labor to develop principles for AI and worker well-being; the resulting guidance lists eight key principles for the responsible development and deployment of AI for workers:

  1. Companies should design, develop and train AI in a way that protects workers.
  2. Companies should inform employees of, and give them genuine input into, the design, development, testing, training, use and oversight of AI systems.
  3. AI should be used to empower workers.
  4. Companies should use AI systems to assist, complement and enable workers and improve job quality.
  5. AI systems should not violate or undermine workers’ rights.
  6. Companies should support workers during AI-related job transitions.
  7. AI systems should have clear governance systems, evaluation processes and human oversight.
  8. Companies should be transparent in their AI use.

What’s next for federal AI legislation in the U.S.?

AI has swiftly become a regular consideration across federal policy, especially when this policy involves national security, the supply chain and innovation. The CHIPS and Science Act of 2022, which allocates billions to the semiconductor industry, lists AI among its key technology areas, with a charge to develop AI that is “safe, secure, fair, transparent and accountable, while ensuring privacy, civil rights and civil liberties.”

In 2024, bipartisan efforts to shape and regulate AI across government and daily life have been proliferating in Congress. A bipartisan bill to boost the nation’s AI workforce pipeline, hearings on AI’s national security and privacy implications, and a subcommittee examining standards and policies for AI and intellectual property are just a few examples.

As these laws take shape, federal agencies are crafting their own AI governance, driven by an official mandate to put the Executive Order’s directives into action. At every U.S. agency, working groups are evaluating the development and use of AI, developing regulations and identifying opportunities for engagement with the private sector, with timelines and objectives laid out in the Executive Order itself.

Compliance considerations for AI use in the U.S.

Compliance is challenging when the rules themselves are a work in progress — but it’s a challenge that organizations dealing with American-made AI systems, or using AI in their U.S. operations, must accept.

What should you keep in mind as AI regulations in the U.S. evolve?

  • Human-centered transparency and accountability: Is the AI system designed with human-defined objectives for AI decision-making?
  • Transparent interpretability: Does the AI system explain how its decisions are made?
  • Industry-specific regulations: Different sectors will have mandates tailored to specific AI concerns, such as data privacy for industries like healthcare that collect personal information, and algorithmic discrimination for industries like financial services that determine who qualifies for a mortgage or loan.

Fortifying your risk management and enforcement policies

As AI regulations in the U.S. continue to take shape, it’s also important to get your compliance program in place. This includes the following questions, illustrated in a short sketch after the list:

  • Roles and responsibilities: Who’s responsible for AI systems and oversight?
  • Categorization: What does, and does not, qualify as AI in your organization?
  • Management: How will you identify, address and mitigate AI-related risk?
  • Enforcement and penalties: If you spot a potential violation of AI rules, such as abusive data practices, what happens next?

State and local AI regulations

But federal policies are just one part of the picture for AI regulations in the U.S.

In addition to the many activities by the executive and legislative branches, AI regulations in the U.S. also involve state and local rules. Just a few examples include:

  • Colorado’s Artificial Intelligence Act, which imposes duties on developers and deployers of high-risk AI systems
  • New York City’s Local Law 144, which requires bias audits of automated employment decision tools
  • Illinois’ Biometric Information Privacy Act, which governs the collection and use of biometric identifiers

Staying ahead of AI regulations in the U.S.: The critical role of leadership

Directors and executives play a critical role in shaping the strategic direction and ethical foundation of their organizations. As AI continues to transform industries worldwide, the regulatory landscape surrounding AI is also evolving rapidly — with the U.S. regulatory landscape being particularly difficult to keep up with. It's essential for leaders to stay informed about these changes to ensure compliance, mitigate risks and leverage AI opportunities responsibly.

To help leaders meet this challenge, the Diligent Institute created its AI Ethics & Board Oversight Certification course. The certification equips leaders with the tools and knowledge to stay on top of AI regulations in the U.S. and make ethical, informed decisions, ensuring their organizations navigate AI complexities with integrity and compliance.
