The following article was published on the Harvard Law School Forum on Corporate Governance.
Editor’s note: Tara K. Giunta is a Partner in the Litigation Practice at Paul Hastings LLP and Lex Suvanto is Global CEO at Edelman Smithfield.
AI is rightly seen as a transformative tool with the potential to change every aspect of our lives: how we live, how we work, how we create, how we communicate, how we learn. The potential applications are dizzying in their scope, and companies are scrambling to determine how they can participate in the AI revolution, including by adopting AI tools to enable more efficient operations and greater innovation and performance.
While transformative, AI also creates new areas of legal, regulatory, and financial risk, as illustrated by a growing list of alarming real-world examples. For instance, a New York City chatbot gave small businesses illegal advice, suggesting it was legal to fire workers for complaining about sexual harassment. In another case, a healthcare prediction algorithm used by hospitals and insurance companies across the U.S. to identify patients in need of “high-risk care management” programs was found to be far less likely to identify Black patients as needing those programs. In yet another case, an online real estate marketplace was forced to write off over $300 million and slash its workforce because of errors in its AI-driven home-buying algorithm. Notably, a recent survey of annual reports of Fortune 500 companies showed that 281 companies now flag AI as a possible risk factor, a 473.5% increase from the prior year.[1]
For corporate boards of directors, understanding and overseeing AI is quickly becoming a critical responsibility. AI presents particular challenges to effective board oversight given the potential breadth of its applications across functions, including finance, legal, product development, marketing, and supply chain, as well as the “black box” nature of algorithmic decision-making. Further, AI is evolving quickly—faster than the Internet did—with legal and regulatory regimes trying to catch up. This puts pressure on boards to engage more frequently and deeply with management to understand the opportunities and risks AI presents for their company, and it creates new legal risk for boards and for directors individually. In this context, boards may unintentionally cross the line from governance to management, violating the heretofore inviolate principle of “Noses In, Fingers Out.”
In this article, we explore recent legal and regulatory efforts to address AI risks, as well as developments in board oversight of “mission critical” risks. We discuss considerations for maintaining a “Noses In, Fingers Out” oversight approach. We conclude with suggestions for how boards can provide effective oversight of AI-related risks while supporting and guiding management.
Nascent Regulatory Ground Rules: New Areas of Liability and Risk
Although there is strong focus on AI as an area of risk, regulation of AI is still in its infancy.[2] The most notable legal and regulatory developments to date are the European Union’s AI Act and the U.S. Executive Order on AI. These frameworks set forth minimum standards and reflect that lawmakers and regulators are at an early stage in understanding and addressing the risks inherent in AI. They should thus be seen as a starting point, not an endpoint.
Enforcement authorities are also ramping up their focus on AI risks. For instance, the U.S. Securities and Exchange Commission (“SEC”) is pursuing instances of “AI washing,” while the U.S. Department of Justice (“DOJ”) is proactively seeking to prosecute AI-related crimes.
As such, boards and management need to ensure they are not only responding to the current legal, regulatory and enforcement environment, but are also anticipating areas of vulnerability and potential further regulation or legal accountability.
EU Artificial Intelligence Act
The EU Artificial Intelligence Act (“AI Act”) is considered the world’s first comprehensive legal framework on AI. It imposes new transparency, oversight, and accountability obligations on entities that provide, deploy, distribute, import, or manufacture AI systems.[3] Notably, the AI Act applies to providers that place AI systems on the EU market or put them into service there, even if the providers themselves are not established or located within the EU. It is also a potential harbinger of regulations to come. As such, it is important for companies to understand its scope and implications.
Overall, the AI Act takes a risk-based approach, imposing different requirements for four risk levels: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal risk.[4]
- Unacceptable risk: AI systems that threaten people’s safety, livelihoods, and fundamental rights—also referred to as “prohibited AI practices”—are strictly banned and must be withdrawn from the EU market within six months of the AI Act entering into force, i.e., by February 2025.[5]
- High risk: AI systems that fall under the examples listed in Annex III of the AI Act, such as those used in biometrics, employment, critical infrastructure management, education and vocational training, and law enforcement, are considered “high risk” and must be carefully managed to mitigate risks. Specifically, providers of high-risk AI systems must establish a risk management process, implement data governance for training and testing data, develop record-keeping mechanisms, prepare instructions for proper downstream use, enable human oversight, and establish a quality management system.
- Limited risk: AI systems that are intended to interact with natural persons or that generate content posing specific risks of impersonation (e.g., chatbots, deepfakes) are viewed as “limited risk” and must be accompanied by sufficient information to put end users on notice that they are interacting with AI.
- Minimal risk: All other AI systems fall within the category of “minimal risk” and are not subject to mandatory requirements. These include, for example, AI-enabled video games and email spam filters. Companies are nonetheless encouraged to adhere to voluntary codes of conduct in developing or using minimal-risk AI systems.
The AI Act entered into force on August 1, 2024. Its requirements will apply in phases over the next several years, beginning with the ban on prohibited AI practices in February 2025.[6]
U.S. Executive Order on Artificial Intelligence
On October 30, 2023, President Biden issued a sweeping Executive Order on Artificial Intelligence (the “Executive Order”), with the multi-faceted goal of establishing new standards for AI safety and security, protecting privacy, advancing equity and civil rights, promoting innovation and competition, and securing U.S. leadership around the world.
Among other things, the Executive Order directs the National Institute of Standards and Technology to set standards, tools, and tests to ensure the safety and reliability of AI systems before public release. Various federal agencies, such as the Department of Homeland Security and the Department of Energy, will in turn apply these standards to critical infrastructure sectors. Similarly, the Executive Order authorizes the Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content, an initiative aimed at protecting consumers from AI-enabled fraud and deception.[7]
At the same time, the Executive Order calls for more AI research, promotion of competition in the AI ecosystem, and the development and deployment of AI to solve global challenges, suggesting that the U.S. government views AI not only as a risk factor but also as an opportunity for innovation and progress. For example, the Biden administration will pilot the National AI Research Resource as a tool to provide researchers and students with access to AI data, and will offer AI research grants, particularly for studies focused on healthcare and climate change.
The Executive Order and the actions that follow are expected to be the cornerstone of the federal regulatory approach toward AI in the United States. While these initiatives are for now limited to the executive branch, they may prompt Congress to enact new laws in the near term that provide companies with further guidance.
SEC’s Focus on AI Investment Fraud and “AI Washing”
On January 25, 2024, the SEC’s Office of Investor Education and Advocacy issued a joint Investor Alert regarding AI and investment fraud. The alert stated that “bad actors are using the growing popularity and complexity of AI to lure victims into scams,” warning investors of false or misleading claims related to the development and use of AI, including unverified statements about new technological breakthroughs. It also highlighted fraudsters’ use of AI technology to create “deepfake” videos and audio to impersonate others.[8] As such, companies must be careful that their public statements about AI are supported by evidence and avoid overstating their AI capabilities.
The SEC’s enforcement actions have similarly focused on “AI washing.” In March 2024, the SEC announced that it had settled charges against two investment advisors “for making false and misleading statements about their purported use of artificial intelligence.”[9] Indeed, the SEC has repeatedly targeted firms whose marketing materials and websites falsely claim that they use AI to inform investment decisions.[10] Consistent with the agency’s overall approach to AI, Gurbir Grewal, the Director of the SEC’s Enforcement Division, warned companies against making “aspirational” claims related to AI to capitalize on investor interest in the growing technology: “Irrespective of the context, if you’re speaking on AI, you . . . must ensure that you do so in a manner that is not materially false or misleading.” Given the dramatic jump in AI-related disclosures—473.5% for Fortune 500 companies—SEC registrants contemplating such disclosures should exercise particular caution.
Criminal Liability for AI: An Emerging Priority for DOJ Enforcement
One of the primary areas of concern for U.S. enforcement officials has been that AI can be used to advance criminal conduct and undermine individual and collective security. In a recent speech at Oxford University, Deputy Attorney General (“DAG”) Lisa Monaco commented that “for all the promise it offers, AI is also accelerating risks to our collective security . . . [with] the potential to amplify existing biases and discriminatory practices . . . expedite the creation of harmful content, including child sexual abuse material [and] arm nation-states with tools to pursue digital authoritarianism, accelerating the spread of disinformation and repression.”[11] DAG Monaco further noted that AI “can lower the barriers to entry for criminals[,] embolden our adversaries[,] [and is] changing how crimes are committed and who commits them[.]”[12]
In response, the DOJ has undertaken a number of initiatives to address this rapidly evolving risk area. It is supporting the Hiroshima AI Process, a G7 initiative among the United States and its allies to internationalize responsible codes of conduct for advanced AI systems. The DOJ also previously launched its Disruptive Technology Strike Force, which enforces export control laws to thwart nation-states trying to obtain the most advanced technologies. DAG Monaco announced that the Strike Force will now place AI—the “ultimate disruptive technology”—at the very top of its enforcement priority list.
While, as DAG Monaco explained, AI “may well be the most transformational technology we’ve confronted yet,” existing legal theories can be adapted and expanded to address AI-related risks.[13] Thus, despite its increased focus on AI, the DOJ does not necessarily need to change its approach to investigating and prosecuting crimes perpetrated with AI. DAG Monaco emphasized that “enforcement must be robust,” particularly because “[l]ike a firearm, AI can also enhance the danger of a crime.”[14] Prosecutors have accordingly been instructed to seek stiffer sentences for crimes that employ AI, in order to ensure accountability and deter bad actors.
The DOJ also illustrates how law enforcement is embracing the duality of AI—it is actively considering how it will approach criminal conduct facilitated or committed with AI, while also incorporating AI tools into its own work. Indeed, the DOJ has already deployed AI tools to facilitate that work, such as classifying and tracing the source of illegal drugs, including opioids; triaging and understanding the millions of tips the FBI receives from the public annually; and synthesizing the large volumes of evidence collected in its most significant cases.[15] Importantly, the DOJ—and the federal government writ large—recognizes that, in order to be viewed as legitimate, it must engender trust in its use of AI. The DOJ is thus developing protocols for its own use of AI to “first rigorously stress test [] AI application and assess its fairness, accuracy, and safety.”[16]
Despite this regulatory progress on AI, there is global skepticism that regulators can effectively manage AI-related risks. The 2024 Edelman Trust Barometer found that 59% of respondents across 28 countries believe that government regulators lack adequate understanding of emerging technologies to regulate them effectively. The survey also found that respondents trust businesses—more than NGOs, government, or media—to effectively integrate technological innovation into society. Corporate boards therefore find themselves on the front lines of ensuring proper governance of AI globally.
Further, state enforcement authorities are beginning to weigh in on AI oversight developments at the federal executive branch level. For instance, in August 2024, the attorneys general of 15 largely conservative states submitted a letter to the Department of the Treasury, urging the Department “to focus solely on risks to financial reliability and consumer protection rather than politicizing AI regulation or blocking state laws.”[17] This development reflects the potential for political debate and scrutiny of AI oversight, directed not just at government agencies but at companies as well, further increasing the importance of effective board oversight of such risks.
AI and the Board
Historical Context of “Noses In, Fingers Out”
Over the past couple of decades, the phrase “Noses In, Fingers Out” has been used to describe how an effective board collaborates with management to stay informed about the key risks in the company’s operations (“Noses In”) while staying out of operational management issues (“Fingers Out”). It is the board’s responsibility to ask insightful questions and incorporate risks into its decision-making, while management is responsible for carrying out the company’s operations with high-level direction and guidance from the board.
Among the core director duties is the duty of care and its derivative, the duty of oversight. The duty of oversight originated in the 1996 Caremark case, which established that, under Delaware law, directors breach their duty of care when they fail to make a good faith effort to oversee a corporation’s operations and compliance with the law.[18] Under the Caremark standard, a plaintiff must demonstrate that the board failed to implement a corporate information and reporting system, or, having implemented one, failed to adequately monitor or oversee it, thereby preventing the board from being informed of risks or problems requiring its attention. Several recent Caremark claims have further refined this standard and indicate a willingness by courts to find boards liable for turning a blind eye to “mission critical” risks.[19]
Each new technology that enters the corporate sector raises questions about how directors’ fiduciary duties will evolve alongside developments in business operations and their associated risks. For example, Delaware courts have examined Caremark claims filed against boards for their alleged failure to adequately oversee cybersecurity risks. In Construction Industry Laborers Pension Fund, the court acknowledged that cybersecurity risks are mission critical to online software companies, given their dependence on customers’ sharing access to digital information with other customers.[20]
The use and incorporation of AI into business operations is no different, particularly given the growing recognition that AI can present substantial threats across a broad array of risk areas, including privacy, discrimination, carbon emissions, and algorithmic decision-making. For instance, as companies’ reliance on AI models grows, so too does the models’ required processing power, which can create a carbon footprint that is difficult to foresee without dedicated oversight. Separately, failure to implement controls for AI applications in hiring or investment decisions can lead to issues affecting labor rights and financial responsibility, respectively. Each of these risks has the potential to expose directors to significant liability if measures tailored specifically to the integration and use of AI are not taken.[21]
Meeting the Duty of Care and Oversight in the Age of AI
Board oversight of mission critical risks is further complicated by the breadth and depth of AI in a company’s operations, product lines, and supply chains. The breathtaking pace of change in AI, coupled with a general lack of board education on the subject, further elevates the risk of, on the one hand, ineffective engagement by the board and, on the other, board intrusion into the management realm. As a threshold matter, boards must consider whether any AI system has become, or will likely become, a “mission critical” issue for the company. In this vein, the board should consult with management and the company’s compliance and legal teams to understand the risks associated with the company’s most critical AI systems and to develop a comprehensive understanding of where the company has deployed AI systems or tools. This often requires insight into many functions, at sufficient depth to properly ascertain the risks. It is a balancing act: engage with management across the company’s operations, and potentially down into the functional layer, to gain a true understanding, but do not intrude on management’s role by making decisions that are beyond the province of the board.
Given the ubiquitous nature of AI, and its potential to become a mission critical compliance risk, directors should consider the following:
- Critical AI Uses and Risks: Understand the most critical AI systems that the company uses and how they are integrated into business processes, the nature of the data employed to train and operate such systems, the type and extent of risks associated with the use of such systems, and any steps taken to mitigate those risks.
- Board and Senior Management Responsibility: Review AI-related risks and mitigation plans as a periodic board agenda item and assign dedicated management responsibility for regulatory compliance and risk mitigation.
- Compliance Framework: Understand the applicable legal and regulatory regimes and ensure that AI compliance and reporting structures are in place to facilitate board oversight, which may include the formation of an AI ethics committee, appointment of a Chief AI Officer, periodic AI-specific risk assessments and monitoring, and development of AI-focused policies, procedures, and guidance materials.
- Material AI Incidents: Work to ensure that the board is briefed appropriately and in a timely manner on any significant AI incidents, the company’s response to such incidents, and related impacts.
- Education and Continuous Learning: Develop and implement training programs on AI technologies and their evolving impacts on various dimensions of the company’s business and track adherence to training requirements.
Conclusion
As AI is rapidly integrated into business, the role of the board in overseeing AI implementation becomes increasingly important. Boards must ensure that AI is utilized responsibly, balancing the pursuit of innovation with the need to mitigate risks and uphold ethical standards. By establishing robust oversight mechanisms, fostering transparency, and promoting continuous learning about AI technologies, boards can not only safeguard their organizations but also build public trust in AI applications.
It is imperative for boards to take proactive steps in their oversight responsibilities. Directors must prioritize understanding AI’s potential risks and benefits, implement comprehensive compliance frameworks, and ensure that management is accountable for AI-related decisions. At the same time, boards must tread carefully to maintain their important role of oversight—with “noses in”—and not inadvertently step into the role of actually managing—by keeping “fingers out.”
Now is the time for boards to lead by example, demonstrating a commitment to responsible AI practices and robust governance. This proactive stance will enable companies to navigate the complexities of AI and drive sustainable innovation and growth.
[1] The Number of Fortune 500 Companies Flagging AI Risks Has Soared 473.5%, Fortune (August 18, 2024), https://fortune.com/2024/08/18/ai-risks-fortune-500-companies-generative-artificial-intelligence-annual-reports.
[2] Regulators in key sectors, such as financial services and healthcare, and in key functional areas, particularly privacy and employee protections, are trying to understand and address the potential risks and impacts of AI.
[3] Under the AI Act, an AI system is defined as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
[4] In addition, providers of general-purpose AI (“GPAI”) systems must prepare and provide upon request detailed documentation on design specifications, capabilities, and limitations of their GPAI model. GPAI models are subject to a separate classification under the AI Act, which defines them as: “An AI model . . . trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications[.]” GPAI models exclude AI systems that are used for research, development, or prototyping activities before they are released on the market. However, if a GPAI model also qualifies under another category of the AI Act, then it is additionally subject to the requirements applicable to that category. For example, if a GPAI model also qualifies as a “high risk” AI system, then both the GPAI and high-risk regimes of the AI Act would apply.
[5] These include AI systems (1) deploying deceptive techniques to distort behavior or impair informed decision-making; (2) exploiting vulnerabilities related to age, disability, or socio-economic circumstances; (3) evaluating or classifying individuals or groups based on social behavior or personal traits; (4) assessing the risk of an individual committing criminal offenses; (5) compiling facial recognition databases; (6) inferring emotions in workplaces or educational institutions; (7) using biometric categorization systems to infer sensitive attributes; or (8) using biometric identification data for real-time categorization of individuals.
[6] While national legislation by EU Member States is not required, each Member State will have to designate at least one market surveillance authority to coordinate with the European Commission and other Member States. Enforcing the new rules at the national level is thus expected to be challenging, at least at the beginning, due to the diverse approaches that will likely be used to interpret and apply the law.
[7] Although not yet implemented, the Executive Order establishes reporting obligations to be enforced by the Department of Commerce, including those relevant to developers of dual-use foundation models and entities that acquire, develop, or possess large-scale computing clusters.
[8] In a February 13, 2024, speech at Yale Law School, SEC Chair Gary Gensler warned about AI fraud and AI washing, stating “[i]nvestment advisers or broker-dealers also should not mislead the public by saying they are using an AI model when they are not, nor say they are using an AI model in a particular way but not do so. Such AI washing, whether it’s by companies raising money or financial intermediaries, such as investment advisers and broker-dealers, may violate the securities laws.” U.S. Securities and Exchange Commission, AI, Finance, Movies, and the Law: Prepared Remarks Before the Yale Law School (February 13, 2024), https://www.sec.gov/newsroom/speeches-statements/gensler-ai-021324.
[9] U.S. Securities and Exchange Commission, SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence (March 18, 2024), https://www.sec.gov/newsroom/press-releases/2024-36.
[10] Id.
[11] U.S. Department of Justice, Deputy Attorney General Lisa O. Monaco Delivers Remarks at the University of Oxford on the Promise and Peril of AI (February 14, 2024).
[12] Id.
[13] Id.
[14] Id.
[15] Id.
[16] Id.
[17] The states that signed the letter are: Alabama, Arkansas, Florida, Idaho, Iowa, Louisiana, Mississippi, Montana, Nebraska, South Carolina, South Dakota, Tennessee, Utah, Virginia, and West Virginia. See the letter, pr24-62-letter.pdf, available via tn.gov.
[18] In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996).
[19] For instance, in 2017, PG&E agreed to a $90 million settlement for a natural gas pipeline explosion and fire that killed eight people. The Caremark claim brought in California court accused PG&E’s board of “gross mismanagement” of its health and safety operations, citing a failure to keep accurate records and maintain the safety of the pipeline. San Bruno Fire Derivative Cases, JCCP No. 4648-C. Similarly, in 2019, the Blue Bell Creameries board’s failure to establish a reasonable reporting system with regard to food safety, an “essential and mission critical” issue for the company, was considered an act of bad faith in breach of the duty of loyalty. Marchand v. Barnhill, 212 A.3d 805 (Del. 2019). There, the Delaware Supreme Court found it not enough for management to merely discuss general operations with the board. Id. And in 2021, the court held that plaintiffs pled particularized facts demonstrating that Boeing’s board failed to implement any reporting or information system or controls with respect to airplane safety, noting that the board had not assigned a committee with direct responsibility for monitoring airplane safety. In re Boeing Co. Derivative Litigation, No. 2019-0907 (Del. Ch. 2021). The court also noted that Boeing’s board overly relied on regulatory compliance and did not demand regular safety reporting from senior management. Id.
[20] Constr. Indus. Laborers Pension Fund v. Bingle, No. 2021-0494-SG (Del. Ch. 2022). Because the plaintiffs failed to adequately plead bad faith on the part of the directors, the court ultimately declined to analyze the extent to which directors’ decisions or omissions concerning cybersecurity risks are generally reviewable under Caremark. Id.
[21] See, e.g., Harvard Law School Forum on Corporate Governance, AI Oversight Is Becoming a Board Issue (April 26, 2024); American Bar Association, Business Law Section, The Duty of Supervision in the Age of Generative AI: Urgent Mandates for a Public Company’s Board of Directors and Its Executive and Legal Team (March 26, 2024).