AI regulations are few, but fast proliferating
Governments and regulatory bodies in various countries have worked on AI-related policies and regulations to ensure responsible AI development and deployment. While few have been finalized, several AI-related regulations have been proposed around the world to block or limit AI's riskiest uses (see Table 1). They broadly coalesce around common themes such as transparency, accountability, fairness, privacy and data governance, safety, human-centric design, and oversight. Even so, the practical implementation of these regulations and policies will likely prove challenging.
They often come on top of existing legislation on data privacy, human rights, cyber risk or intellectual property. While these adjacent areas of law address some concerns associated with AI development, they do not provide a holistic approach to dealing with AI.
For instance, the EU AI Act, a regulatory framework due to be finalized by year-end 2023, is set to be the world's first comprehensive AI legislation. It is also likely to be the strictest, with potentially the biggest global impact. The EU AI Act aims to provide a human-centric framework to ensure that the use of AI systems is safe, transparent, traceable, non-discriminatory, environmentally friendly, and in accordance with fundamental rights. The proposed rules follow a risk-based approach that establishes requirements providers and users of AI systems must follow. For instance, some practices are classified as "unacceptable" and prohibited, such as predictive policing systems or the untargeted scraping of facial images from the internet to create recognition databases. Other practices that can negatively affect safety or fundamental rights are classified as "high risk," for instance when AI systems are used in law enforcement, education, or employment (EU, 2023). The EU AI Act, like the proposed US AI Disclosure Act of 2023, also demands that AI-generated content be clearly labeled as such.
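To make the tiered logic concrete, the sketch below models the act's broad risk categories as a simple lookup. The tier names follow the act's publicly described categories, but the example use cases, the `RISK_TIERS` mapping, and the `classify_use_case` helper are illustrative assumptions for this note, not the legal text or anyone's compliance tooling.

```python
from enum import Enum


class RiskTier(Enum):
    """Broad risk categories in the spirit of the EU AI Act's tiered approach."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed subject to strict requirements"
    LIMITED = "transparency obligations, e.g., labeling AI-generated content"
    MINIMAL = "largely unregulated"


# Hypothetical mapping of example use cases to tiers, for illustration only;
# the actual classification is defined by the act's annexes, not this dict.
RISK_TIERS = {
    "predictive_policing": RiskTier.UNACCEPTABLE,
    "untargeted_facial_image_scraping": RiskTier.UNACCEPTABLE,
    "ai_in_law_enforcement": RiskTier.HIGH,
    "ai_in_education_scoring": RiskTier.HIGH,
    "ai_in_hiring": RiskTier.HIGH,
    "generative_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def classify_use_case(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a named use case,
    defaulting to MINIMAL when the use case is not listed."""
    return RISK_TIERS.get(use_case, RiskTier.MINIMAL)


if __name__ == "__main__":
    for case in ("predictive_policing", "ai_in_hiring", "spam_filter"):
        print(f"{case}: {classify_use_case(case).value}")
```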
China has also been active in launching principles and regulations, from the State Council's "New Generation Artificial Intelligence Development Plan" in 2017 to the "Global AI Governance Initiative" and the recently enacted "Interim Administrative Measures for the Management of Generative AI Services." The latter two represent milestones in AI governance. In the US, the two main pieces of proposed legislation at the federal level are the "Algorithmic Accountability Act" and the "AI Disclosure Act," both of which are under discussion. On Oct. 30, 2023, US President Joe Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" to create safeguards (The White House, 2023b). Similar regulations and policies are being developed or discussed in Canada and Asia.
AI is becoming ubiquitous. International coordination and collaboration to regulate the technology and achieve some degree of policy harmonization are of utmost importance but will take time. Nevertheless, 28 countries plus the EU pledged to work together to address the risks posed by AI at the first AI Safety Summit in the UK in November 2023 (Bletchley Declaration, 2023).
Table 1: Key AI regulatory developments around the world
| Region | Country | Regulation | Status |
|---|---|---|---|
| Americas | US | Algorithmic Accountability Act of 2023 (H.R. 5628) | Proposed (Sept. 21, 2023) |
| Americas | US | AI Disclosure Act of 2023 (H.R. 3831) | Proposed (June 5, 2023) |
| Americas | US | Digital Services Oversight and Safety Act of 2022 (H.R. 6796) | Proposed (Feb. 18, 2022) |
| Americas | Canada | Artificial Intelligence and Data Act (AIDA) | Proposed (June 16, 2022) |
| Europe | EU | EU Artificial Intelligence Act | Proposed (April 21, 2021) |
| Asia | China | Interim Administrative Measures for the Management of Generative AI Services | Enacted (July 13, 2023) |
Source: S&P Global
Regulating AI may require a paradigm shift
The increasing ubiquity of AI requires regulators and lawmakers to adapt to a new environment and potentially change their way of thinking. The examples of frameworks and guardrails for the development and use of AI mentioned above are ultimately aimed at companies and their employees. But as AI gains in autonomy and intelligence, it raises an important question: How can one regulate a "thinking" machine?
Companies are under increased pressure to set up AI governance frameworks
Calls for companies to manage AI-related risks have grown louder, from both a developer and a user perspective. Until recently, AI developers bore most of the responsibility for agreeing on safeguards to limit the technology's risks. For instance, the seven major developers in the US (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) agreed in a meeting with President Biden in July 2023 to commit to certain standards and implement guardrails (The White House, 2023a).
However, companies across all sectors are now being asked to explain how they use AI. The speed and scale of GenAI adoption have shown the scope of enthusiasm for the technology: over 100 million users signed up for OpenAI's ChatGPT in its first two months alone. Yet these developments have also laid bare many of GenAI's pitfalls, such as data privacy concerns and copyright infringements, which have already led to several legal actions.
Beyond that, shareholder pressure is also picking up, with the first AI-focused shareholder resolutions being filed at some US companies; we expect this trend to continue during next year's proxy season. For example, Arjuna Capital recently filed a shareholder proposal at Microsoft, and the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), the largest federation of trade unions in the US, filed shareholder resolutions at Apple, Comcast, Disney, Netflix, and Warner Bros. Discovery requesting more transparency on the use of AI and its effects on workers. Prior to that, Trillium Asset Management had filed a shareholder resolution for the same reasons at Google's 2023 annual general meeting.
Common practices emerge, but implementation remains limited
Companies are only starting to consider what AI and GenAI mean for them, and so far few have made tangible progress on AI governance. Nevertheless, the common thread running through most global AI frameworks and principles is that companies must take an ethical, human-centric, and risk-focused approach when building AI governance frameworks. For instance, NIST's "AI Risk Management Framework" provides guidance on AI risk management (NIST, 2023) that helps shape corporate policies. We have observed some common practices among the limited number of companies that have already established internal frameworks. They typically focus on the following fundamental principles, illustrated in the sketch after this list:
- Human centrism and oversight
- Ethical and responsible use
- Transparency and explainability
- Accountability, including liability management
- Privacy and data protection
- Safety, security, and reliability
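As a purely illustrative sketch of how these principles might be turned into an internal checklist, the snippet below maps each principle to an example control question and flags gaps. The principle names come from the list above; the control questions, the `GovernancePrinciple` dataclass, and the `coverage_report` helper are hypothetical assumptions for this note, not drawn from NIST's framework or any specific company's policy.

```python
from dataclasses import dataclass, field


@dataclass
class GovernancePrinciple:
    """One AI governance principle and example control questions (hypothetical)."""
    name: str
    controls: list[str] = field(default_factory=list)


# Principles follow the list above; the example control questions are illustrative.
PRINCIPLES = [
    GovernancePrinciple("Human centrism and oversight",
                        ["Is a human reviewer required before high-impact decisions?"]),
    GovernancePrinciple("Ethical and responsible use",
                        ["Is each use case screened against a prohibited-use list?"]),
    GovernancePrinciple("Transparency and explainability",
                        ["Are model outputs labeled and explainable to affected users?"]),
    GovernancePrinciple("Accountability, including liability management",
                        ["Is a named owner assigned to each deployed AI system?"]),
    GovernancePrinciple("Privacy and data protection",
                        ["Is personal data minimized and processed under a lawful basis?"]),
    GovernancePrinciple("Safety, security, and reliability",
                        ["Are models tested for robustness before and after deployment?"]),
]


def coverage_report(answered: dict[str, bool]) -> list[str]:
    """Return the principles whose controls have not yet been confirmed."""
    return [p.name for p in PRINCIPLES if not answered.get(p.name, False)]


if __name__ == "__main__":
    # Example: only two principles addressed so far; the rest show up as gaps.
    status = {"Privacy and data protection": True,
              "Safety, security, and reliability": True}
    print("Gaps:", coverage_report(status))
```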