
Article: European Union Leaders Introduce Draft Law to Regulate Artificial Intelligence
European Union (E.U.) leaders recently introduced a 125-page draft law aimed at regulating artificial intelligence (A.I.). The law, hailed as a global model for handling the technology, was shaped by input from thousands of experts over the course of three years. Margrethe Vestager, the head of digital policy for the 27-nation bloc, praised the “landmark” policy and declared it to be “future proof.”
However, the introduction of ChatGPT, an eerily humanlike chatbot that went viral for generating its own answers to prompts, blindsided E.U. policymakers. The A.I. system powering ChatGPT was not mentioned in the draft law or during discussions about the policy. As a result, lawmakers and their aides scrambled to address this gap, while tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
The rapid evolution of A.I. technology has left lawmakers and regulators in Brussels, Washington, and elsewhere in a race to catch up. Concerns over A.I.’s potential to automate jobs, spread disinformation, and eventually develop its own intelligence have prompted nations to move quickly to regulate the technology. However, European officials have been caught off guard by the pace of A.I.’s evolution.
The A.I. Act, a law proposed by the E.U., aims to address certain risky uses of the technology and create transparency requirements about how underlying systems work. Despite disputes over how to handle the makers of the latest A.I. systems, the E.U. is moving forward with this new law in hopes of regulating the high-risk uses of A.I.
This struggle to keep pace with A.I.’s rapid advancements has been compounded by a knowledge deficit in governments, labyrinthine bureaucracies, and fears that too many regulations could limit the technology’s potential benefits.
The fragmented action in response to A.I.’s rapid advancements has left a vacuum for tech companies like Google, Meta, Microsoft, and OpenAI, the makers of ChatGPT, to police themselves. Meanwhile, the urgency for governments to deal with A.I.’s risks has raised concerns about whether they are equipped to regulate and mitigate such risks.
European regulators are still grappling with the best approach to regulating A.I., particularly after the emergence of systems like ChatGPT. The A.I. Act, while addressing certain risky uses, has been met with resistance and a lack of consensus, especially over how to regulate the general-purpose A.I. models that power systems like chatbots.
As negotiations over the law’s language enter their final stage, E.U. policymakers are still working on compromises in response to the rapid evolution and unpredictability of A.I. systems. The lack of regulation and of a clear path forward has raised concerns that governments may fall further behind as A.I. makers achieve new breakthroughs.