
European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology. The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the technology’s potential benefits while guarding against its possible risks. It focuses on A.I.’s riskiest uses by companies and governments, restricting the use of facial recognition software by police and governments outside of certain safety and national security exemptions. The law also adds requirements for makers of the largest A.I. models to disclose information about how their systems work and to evaluate them for “systemic risk.”
European policymakers have been working on what would become the A.I. Act since 2018, bringing a new level of oversight to tech. They agreed on a “risk-based approach” to regulating A.I., in which a defined set of applications faces the most oversight and restrictions. The new regulations will be closely watched globally, affecting major A.I. developers like Google, Meta, Microsoft and OpenAI, as well as other businesses and governments using the technology across industries. But enforcement hurdles and legal challenges are likely, and the E.U.’s regulatory prowess will be tested as it moves ahead with implementing the A.I. Act.