The year is 2034. A family huddles closely for warmth and remains quiet, so as not to arouse the suspicion of the machines. The shrill industrial hum serves as a constant reminder of their brave new (and less human) world. Amid all this, they wonder, “Why didn’t we try to regulate this thing when we had the chance?”
Thankfully, this poor paraphrase of the Terminator franchise is meant only in jest, for both the biological and artificial minds that happen upon this article. Rapid developments in the artificial intelligence (AI) industry, however, have caused some to call for a halt and reassessment.
Where are we now?
At present, new and exciting AI capabilities are hard to miss. From image generation with Adobe Firefly and DALL-E 2 to the natural language capabilities of ChatGPT and Microsoft’s newly announced Copilot, it seems that each week brings a revolutionary new tool of previously unimaginable power.
In just one iteration, ChatGPT went from barely passing the Multistate Bar Examination (MBE) to passing it in the 90th percentile. Deepfake software, which uses AI to create digital representations of a person’s body and/or voice, now produces output that people cannot reliably distinguish from the real thing. These represent only a small fraction of the profound and rapid advancements that spurred a call to pump the computational brakes.
On March 29, 2023, the Future of Life Institute issued an open letter with nearly a thousand signatories, including prominent names like Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month moratorium on the training of AI systems more powerful than GPT-4 so that the risks and goals of this emergent technology can be assessed. As of this writing, the signature count has ballooned to almost 19,000. The letter represents the latest in a series of differing views on the proper path to AI regulation.
What are we doing?
Approaches to AI regulation vary widely across the globe. The United Kingdom, for example, per a March 29 whitepaper entitled “A pro-innovation approach to AI regulation,” favors a context-based approach that will not unduly burden AI industries. The whitepaper sets out five principles for AI companies to follow: “(1) safety, security, and robustness, (2) transparency and explainability, (3) fairness, (4) accountability and governance, and (5) contestability and redress” (internal numerals added). This outline intentionally lacks specificity, keeping it flexible and responsive to rapid change within the AI industry so as not to hamper innovation.
The European Union has taken a different, more cautious approach. Its proposed Artificial Intelligence Act focuses on risk assessment and mitigation, suggesting the EU is less concerned with innovation than with the safety of its citizens and institutions. Germany, an EU member, is pushing further, calling for bans on AI technologies used for real-time biometric scanning and emotion recognition of alleged criminal offenders. This represents a more human-centric approach to AI regulation.
A third, and most drastic, approach comes from Italy, which banned ChatGPT from operating within its borders amid privacy concerns. This temporary restriction aligns with the approach called for in the open letter, as the ban is only meant to give Italy a chance to assess its position on AI and potential regulatory pathways.
Where should we go?
Right now, it is vital for policymakers to acquaint themselves with AI systems, because these systems are here to stay. Our legislature’s current state of awareness concerning AI’s risks is, at best, disheartening. The situation is succinctly described by Jay Obernolte, a representative from California (and the sole member of Congress with a master’s degree in AI), who is surprised by how much time he spends explaining to his colleagues “that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes.” While it is reassuring to know that Skynet is not on the horizon, it is imperative that our decision-makers be educated on the risks and benefits of AI, both quickly and thoroughly.
From there, potential regulation should be tested through different avenues of government, taking cues from both the United Kingdom’s and the European Union’s approaches. Our chief concern should be the mitigation of risk; a secondary (but still important) consideration should be crafting a framework that does not unnecessarily curtail AI development and industry.
Through discussion and understanding, we can determine what current legislation offers and what further administrative regulation is necessary. For example, tort law and other private rights of action may prove valuable tools in AI regulation as plaintiffs harmed by discriminatory or copyright-infringing AI file lawsuits under already applicable federal law. Going further, a new administrative agency could be created and industry standards promulgated so that all parties understand how to responsibly operate AI systems.
It is important to understand that AI is not one entity, but an architecture. As with any architectural undertaking, we must ensure AI is up to code.
Sources:
Cade Metz & Gregory Schmidt, Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’, THE NEW YORK TIMES (Mar. 29, 2023).
Cecilia Kang & Adam Satariano, As A.I. Booms, Lawmakers Struggle to Understand the Technology, THE NEW YORK TIMES (Mar. 3, 2023).
David Meyer, Here are 5 reasons people are dunking on that call for a 6-month A.I. development pause, FORTUNE (Mar. 30, 2023).
Eliezer Yudkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, TIME (Mar. 29, 2023).
Future of Life Institute, Pause Giant AI Experiments: An Open Letter, FUTURE OF LIFE INSTITUTE (Mar. 29, 2023).
Gian Volpicelli, ChatGPT broke the EU plan to regulate AI, POLITICO (Mar. 3, 2023).
Jared Spataro, Introducing Microsoft Copilot – your copilot for work, MICROSOFT: OFFICIAL MICROSOFT BLOG (Mar. 16, 2023).
Lon Harris, ‘Open Letter’ Proposing 6-Month AI Moratorium Continues to Muddy the Waters Around the Technology, DOT.LA (Mar. 30, 2023).
Luca Bertuzzi, Germany could become MEPs’ ally in AI Act negotiations, EURACTIV (Jan. 9, 2023).
Natasha Lomas, UK to avoid fixed rules for AI – in favor of ‘context-specific guidance’, TECHCRUNCH (Mar. 29, 2023).
Nils C. Köbis, Barbora Doležalová & Ivan Soraperra, Fooled twice: People cannot detect deepfakes but think they can, NATIONAL LIBRARY OF MEDICINE (Oct. 29, 2021).
Noah Feldman, Regulating AI will be essential. And complicated, THE BUSINESS STANDARD (Apr. 4, 2023).
Ryan Browne, Italy became the first Western country to ban ChatGPT. Here’s what other countries are doing, CNBC (Apr. 4, 2023).
Ryan Browne, With ChatGPT hype swirling, UK government urges regulators to come up with rules for A.I., CNBC (Mar. 29, 2023).
S. Shyam Sundar, Cason Schmit & John Villasenor, Regulating AI: 3 experts explain why it’s difficult to do and important to get right, THE CONVERSATION (Apr. 3, 2023).
Sebastian Klovig Skelton, UK government publishes AI whitepaper, COMPUTERWEEKLY (Mar. 29, 2023).
Secretary of State for Science, Innovation and Technology, A pro-innovation approach to AI regulation, GOV.UK (Mar. 29, 2023).
Supantha Mukherjee, Elvira Pollina & Rachel More, Italy’s ChatGPT ban attracts EU privacy regulators, REUTERS (Apr. 3, 2023).
Terminator Wiki, Terminator (Franchise), FANDOM (2023).