On Wednesday local time, the European Parliament passed a draft of the "AI Act," an important step toward enacting a landmark law regulating AI. The law could become a model for other countries as policymakers around the world work to put guardrails around the rapidly evolving technology.
This vote on the draft is just one step in the long process of passing the law in the EU. A final version of the bill is not expected to pass until later this year.
The AI Act takes a "risk-based" approach to regulating AI, focusing on the applications with the greatest potential for harm, including AI systems used to operate critical infrastructure such as water or energy, used in the legal system, and used to determine access to public services and government benefits. Developers of such systems must conduct risk assessments before putting the technology into everyday use, a process similar to drug approval.
Generative AI will face new transparency requirements under the version of the bill passed on Wednesday, including publishing summaries of the copyrighted material used to train the systems, a proposal backed by the publishing industry but opposed by technology developers as technically infeasible. Makers of generative AI systems must also put safeguards in place to prevent them from producing illegal content.
At the same time, the AI Act would severely restrict the use of facial recognition software and require developers of AI systems such as the ChatGPT chatbot to disclose more about the data used to create their programs. Facial recognition is a major point of contention: the European Parliament voted to ban the use of real-time facial recognition, but questions remain over whether exemptions should be allowed for national security and other law enforcement purposes.
Under the current draft, companies that fail to comply with the AI Act face fines of up to 6% of global revenue.
The EU has gone further than the United States and other major Western governments in regulating AI, and has been debating the topic for more than two years. The issue took on added urgency after the release of ChatGPT last year, which heightened concerns about AI's potential impact on jobs and society.
However, technology leaders have also been trying to influence the debate over AI in various countries. Sam Altman, CEO of ChatGPT developer OpenAI, has met with at least 100 U.S. lawmakers and other global policymakers in South America, Europe, Africa and Asia in recent months, including European Commission President Ursula von der Leyen. Altman has called for regulation of AI, but he has also said the company may have difficulty complying with the EU's draft rules and has threatened to withdraw from the region.
It is unclear how effective the regulations will be. AI development appears to be outpacing European lawmakers' ability to legislate: early versions of the AI Act, for example, paid little attention to so-called generative AI systems such as ChatGPT.