(Bloomberg) -- Earlier this week, European negotiators sat in a conference room in Brussels and debated for nearly 24 straight hours — dozing off at times and working a self-service coffee machine so hard that it broke.
They came with a singular mission: reaching an agreement to regulate artificial intelligence. And they didn’t quite get there. But the EU’s internal market chief, Thierry Breton, didn’t want a long break over the weekend that would give lobbyists more time to weigh in, according to people familiar with the matter.
It took yet another round of intense bargaining on Friday, curtains drawn and under the glare of bright lights, for the group to reach consensus.
Just before midnight, representatives announced they’d finally struck a deal to control a technology that Elon Musk, OpenAI Inc.’s Sam Altman, AI pioneer Geoffrey Hinton and others have warned could pose an existential threat if left unchecked.
Europe’s pact on the so-called AI Act ushers in a pivotal period, one in which world leaders will demonstrate their ability, and their appetite, to regulate AI technologies and keep their potential for bias, privacy violations, outsized influence and other risks at bay. Nothing has better captured the dilemma governments face in doing this without snuffing out beneficial AI technologies than the debates that raged for days in Europe.
Some have suggested “that AI cannot be governed, either because lawmakers don’t understand the technology or because the technology evolves so fast,” Anu Bradford, a professor at Columbia Law School’s European Legal Studies Center, said Friday. While the EU’s policy faces headwinds, she said, the region had “the opportunity to show that it cannot let the perfect be the enemy of the good.”
Read more: EU Strikes Deal to Regulate ChatGPT, AI Tech in Landmark Act
Outside of China and India, efforts to regulate AI have been limited. Some US cities and states have passed legislation restricting use of the technology in certain areas such as police investigations and hiring. Various federal agencies are vetting the use of AI tools, some regulators are using existing laws to police the technology and members of the US Congress are exploring legislation. But the nation is nowhere close to introducing a bill like the EU’s AI Act.
The policy making its way through the EU’s legislative process extends far beyond the generative AI tools that have captured the world’s attention in the past year. It would dictate exactly how law enforcement is allowed to use AI-powered surveillance cameras; how the technology can be deployed in critical infrastructure; and how the developers of programs such as OpenAI’s ChatGPT and Google’s Bard will be held responsible for mitigating the risks of their systems.
It would also come with some teeth. Policymakers are proposing to impose penalties on companies that violate the rules, with fines up to €35 million ($37.7 million), or 7% of global turnover, depending on the infringement and the size of the company.
Read More: Regulate AI? How US, EU and China Are Going About It: QuickTake
Both the European Parliament and the EU’s 27 member states will need to approve the agreement. But the deal reached Friday marks a critical step toward clearing landmark AI policy that will — in the absence of any meaningful action by the US Congress — set the tone for deliberations and ensuing regulation in the Western world.
The EU — like other governments — struggled to strike a balance between the desire to preserve its own AI startups, such as France’s Mistral AI and Germany’s Aleph Alpha, and potential societal risks. That proved to be a key sticking point in negotiations, with some countries including France and Germany opposing rules that they said would unnecessarily handicap local companies.
Some details still need to be worked out by civil servants in the coming weeks, but negotiators largely agreed to place guardrails around generative AI, establishing basic transparency requirements for any developer of the large models that power them.
What surprised many was that the most contentious discussions revolved around live biometric identification tools, the kind that can scan and recognize people’s faces in real time, and not generative AI. But the issue has been a long-running, emotional debate. The parliament voted for a complete ban of the tools last spring, while EU countries have been pushing for exemptions for national security and law enforcement.
Read More: US Warns EU’s Landmark AI Policy Will Only Benefit Big Tech
In the end, the two sides agreed to limit the use of the technology in public spaces with more guardrails.
Some took Europe’s long and drawn-out debates as a sign that policymakers are deliberating with more thought and less haste. “We spent a lot of time on finding the right balance,” Breton said in a statement early Saturday.
The days-long discussions this week were held in an attempt to reach a deal and get the legislation cleared before European elections in June usher in a new commission and parliament that could stall efforts.
France’s digital minister, Jean-Noel Barrot, said his government will review the compromise in the coming weeks to ensure it “preserves Europe’s capacity to develop its own artificial intelligence technologies.” Carme Artigas, Spain’s secretary of state for digitalization and artificial intelligence, noted that, under the deal, French AI startup Mistral likely wouldn’t be subject to controls around general-purpose AI systems while it’s still in the research and development phase.
At the end of the talks, Artigas was said to have popped open a bottle of champagne. “We are hopeful,” she said, that “they all will confirm.”
©2023 Bloomberg L.P.