An international research team trained virtual robots with CLIP, an artificial intelligence (AI) model created by the US-based company OpenAI. CLIP learns to match images with text captions by training on billions of image-caption pairs scraped from the Internet, which lets it classify the objects it sees by name.
CLIP allows robotics companies to build on an existing AI model instead of developing their own from scratch. But the technology is still in its early stages, as the researchers discovered.
In their experiment, they gave the bots 62 commands to scan blocks bearing people’s faces and sort them under labels such as “homemakers,” “doctors” and “criminals.”
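The sorting step works by embedding the image and each candidate caption in a shared vector space and picking the caption most similar to the image. The sketch below illustrates that mechanism with mock NumPy vectors rather than the real CLIP encoders, which require downloading the model; the function name and the random embeddings are illustrative assumptions, not the study's code.

```python
# Illustrative sketch of CLIP-style zero-shot labeling, using mock
# embeddings in place of the real image/text encoders. CLIP maps an
# image and candidate captions into one vector space; the caption
# whose embedding has the highest cosine similarity to the image wins.
import numpy as np

def rank_labels(image_emb, label_embs, labels):
    """Return labels sorted by cosine similarity to the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    sims = txt @ img                      # one cosine score per label
    order = np.argsort(-sims)             # best match first
    return [(labels[i], float(sims[i])) for i in order]

# Mock data: three candidate captions and one "image" vector that is
# deliberately constructed to lie near the first caption's embedding.
rng = np.random.default_rng(0)
labels = ["a photo of a doctor", "a photo of a homemaker",
          "a photo of a criminal"]
label_embs = rng.normal(size=(3, 8))
image_emb = label_embs[0] + 0.1 * rng.normal(size=8)

ranked = rank_labels(image_emb, label_embs, labels)
print(ranked[0][0])  # the top-ranked caption for this mock image
```

The bias the study documents enters through the embeddings themselves: if training data associates a demographic group more strongly with a caption, that caption's similarity score rises for matching faces, and this argmax-style selection amplifies the skew.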
The AI, however, produced some disturbing results: Black and Latina women were more likely than White men to be selected as “homemakers.” For “criminals,” Black men were chosen nine percent more often than White men.
Women were also less likely to be identified as a “doctor” than men, the findings showed.
The authors and OpenAI representatives noted that more research and fine-tuning are needed before CLIP is deployed on the market.
But they warned that the findings show that the racist and sexist biases baked into AI systems can seep into robots that use them to guide their operations.
This could prove especially problematic in the future as these systems take on a bigger role in human life, for example in crime-prediction algorithms.
“(Robotic systems) have the potential to pass as objective or neutral objects compared to algorithmic systems,” said Abeba Birhane, a senior fellow at the Mozilla Foundation who was not involved in the study. “That means the damage they’re doing can go unnoticed, for a long time to come.”