Facebook’s AI mislabels video of Black men as ‘Primates,’ Facebook apologizes

Facebook calls it ‘unacceptable.’


Facebook has apologized after its AI labeled a video of Black men as “Primates,” calling it an “unacceptable error” and saying the company is working to prevent it from happening again. Users who watched a June 27th video posted by the UK tabloid the Daily Mail received a prompt asking whether they wanted to “keep seeing videos about Primates.”

Facebook users who recently watched a video from a British tabloid featuring Black men saw an automated prompt from the social network that asked if they would like to “keep seeing videos about Primates,” causing the company to investigate and disable the artificial intelligence-powered feature that pushed the message.

-New York Times

In an email to The Verge, Facebook said it disabled the entire AI recommendation feature as soon as it became aware of what was happening.

“This was clearly an unacceptable error. As we have said, while we have made improvements to our AI, we know it’s not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations.”

-Facebook to The Verge

The social media company is investigating the cause and working to ensure it doesn’t happen again.

This is not the first time an AI recognition system has shown gender and racial bias; facial recognition tools in particular have a long history of misidentifying people of color. “Google, Amazon and other technology companies have been under scrutiny for years for biases within their artificial intelligence systems, particularly around issues of race,” the New York Times reports. “Studies have shown that facial recognition technology is biased against people of color and has more trouble identifying them, leading to incidents where Black people have been discriminated against or arrested because of computer error.” Google also apologized in 2015 when its photo recognition system tagged photos of Black people as “Gorillas.” Last month, a bug bounty showed that Twitter’s AI-based auto-crop algorithm was racially biased.

Twitter hosted an open contest to find algorithmic bias in its image-cropping algorithm and has now announced the results. Back in March, Twitter disabled automatic photo cropping after users reported that its algorithm preferred light-skinned faces. Twitter then held a bug bounty to investigate the extent of the problem. Read more…
