Lawmakers in the European Union agreed to new rules aimed at regulating artificial intelligence, one of the world’s first comprehensive attempts to control its use amid concerns about the global impact of the rapidly evolving technology, the New York Times reported.
The AI Act will set a new benchmark for nations seeking to harness the technology’s potential benefits while guarding against its possible risks, such as automating jobs and endangering national security.
The new law focuses on AI’s riskiest uses by firms and governments. Companies building AI systems will face new transparency requirements, including proof of risk assessments and assurances that the software does not cause harm, for example by perpetuating racial biases.
Software used to create manipulated images – such as “deepfakes” – would need to make clear to users that they are seeing AI-generated media. Certain practices, such as the indiscriminate scraping of images from the Internet to build facial-recognition databases, would be banned.
Meanwhile, police and governments would face limits on using facial-recognition software, except for specific safety and national security situations.
Companies that violate the new regulations could face fines of up to 7 percent of their global sales.
Pressure to regulate AI has grown since the release of ChatGPT by the US-based firm OpenAI, which showed the world some of the technology’s capabilities.
The EU has been working on the AI Act since 2018, as the bloc attempts to bring a new level of oversight to tech firms.
Still, EU policymakers have been divided on the law’s wording and how to better regulate the technology, amid fears that it would hinder European companies seeking to compete with their US counterparts, which face a less stringent regulatory regime.
There will be more debate over the rules before they gain final approval. Observers are also wondering how effective the law may be and how it would be enforced.