AI, Meet Europe

The European Parliament recently overwhelmingly passed the European Union Artificial Intelligence Act. While the legislation must overcome more hurdles to become a law, it’s an example of European lawmakers charging forward with regulations to control technology that thinkers like British physicist Stephen Hawking have warned could destroy the human race.

As a parliamentary press release detailed, the Act would prohibit the use of AI in certain areas, such as biometric surveillance, emotion recognition, and predictive policing that might assume someone is about to commit a crime. It would also designate as high-risk technologies that might affect elections or spread misinformation. Firms would need to label content generated using AI, too.

The vote came as consumer groups in Europe called on national governments to investigate the potential negative social effects of the technology. “Generative AI such as ChatGPT has opened up all kinds of possibilities for consumers, but there are serious concerns about how these systems might deceive, manipulate and harm people,” European Consumer Organization Deputy Director General Ursula Pachl said in an interview with Euronews.

European leaders have always been tougher on Silicon Valley tech giants than their American counterparts, the Washington Post explained. EU regulators, for example, are pressuring Google to break up its advertising operations, which they allege are anti-competitive, Reuters noted. EU countries have also enacted stricter privacy rules and similar measures to regulate online activity. Still, numerous court cases involving AI, intellectual property rights, and related concerns are winding their way through US courts, the National Law Review wrote.

The chief executive officer of OpenAI, Sam Altman, who launched the wildly popular ChatGPT generative AI application, lobbied politicians throughout Europe to water down the EU AI Act, reported Time magazine. Altman succeeded in altering the law but failed to stop it. He also threatened to shutter OpenAI’s operations in Europe if the law passed, but he has since walked back that threat, according to the Associated Press.

The market with its 440 million consumers is just too big to resist.

Still, most AI models would not comply with the Act as written, argued Stanford University researchers who spoke to Axios. The researchers believed the law would nonetheless help strengthen AI models by providing guidance on avoiding the pitfalls that have raised concerns about the effects of AI on people, communities, and countries. Most AI providers, the researchers continued, don’t offer much information about how they prevent or mitigate those risks.

Paper shuffling can help the machines.
