(Bloomberg) -- European Union negotiators were close to an agreement on Friday over what is poised to become the most comprehensive regulation of artificial intelligence in the Western world.

Teams from the European Parliament and 27 member countries have reached consensus on some of the controls necessary to protect society from the risks of AI and may strike a broader deal within hours, people familiar with the situation said, asking not to be identified because the information isn’t public. The closed-door meeting follows an earlier marathon session that began on Wednesday and ended on Thursday afternoon without consensus. 

The drawn-out debate is arguably the most powerful example yet of the challenge regulators face in balancing the potential benefits of AI technologies with the need to protect citizens from the risks. It's a dilemma that has divided world leaders and tech executives alike as generative tools such as ChatGPT and Google's Bard have surged in popularity.

On Thursday, EU representatives had agreed on rules for general-purpose AI models that have a wide range of possible uses, such as the one that powers OpenAI Inc.'s ChatGPT. But they remained divided on how law enforcement should be allowed to use face-scanning technology.

The agreement that was being drafted on Friday is shaping up to be similar to a compromise proposed Thursday, the people familiar with the situation said. 

Policymakers have long debated the use of cameras that scan people’s faces in real-time, as well as the use of software that categorizes images of those faces. While the parliament voted for a complete ban on live face-scanning technology last spring, many EU countries have fought to use the tech for law enforcement and national security purposes.

Read More: EU’s Talks on AI Rules Stall After Nearly 24 Hours of Debate

Negotiators made progress overnight Wednesday into Thursday, but with some officials falling asleep at their desks, the sides agreed to pause until Friday. Talks resumed at 9 a.m. Brussels time, when the parliament presented a list of demands regarding facial scanning to the council, which then made a counteroffer.

The debate focuses on rules for when law enforcement can scan faces in a crowd, for example to detect people who have been trafficked or to prevent terrorist attacks.

Read More: Regulate AI? How US, EU and China Are Going About It: QuickTake

The proposed use of biometric data has drawn stark criticism from outside groups. Daniel Leufer, a policy analyst at Access Now, argued against allowing it for predictive policing and called the technology "pseudo-scientific" and "disgustingly racist" on a panel on Thursday. If the parliament accepts this, "they abandon their commitment to protect people from the most harmful uses of AI," he said.

The EU — like the US and UK — has struggled to find a balance between the desire to promote its own AI startups, such as the French unicorn Mistral AI and Germany’s Aleph Alpha, and guard against potential societal risks.

EU policymakers agreed to require developers of the type of AI models that underpin tools such as ChatGPT to follow basic transparency requirements. Companies with models that pose a systemic risk will need to sign on to a voluntary code of conduct and work with the Commission to mitigate risks. The plan is similar to the EU's content moderation rules in the Digital Services Act.

Some critics have argued that the codes of conduct amount to self-regulation that is not enough to ensure companies develop this technology safely. Others argue that the rules will soon affect many developers.

The rules are “still a burden for fostering EU champions in this field,” Hugo Weber, vice president of corporate affairs at French e-commerce software company Mirakl, said. Non-EU providers will gain “a competitive edge over Europeans.”

©2023 Bloomberg L.P.