
Black people were flagged as primates, Facebook apologized, and the recommendation feature was disabled

Author: Heart of the Machine Pro

Report from Heart of the Machine

Editor: Boats

AI systems are never perfect, but their biases need to be vigorously corrected.

Recently, a Facebook user watching a video featuring Black men received a recommendation prompt asking whether they would like to "continue to watch videos about primates". The video, published by the Daily Mail on June 27, 2020, shows Black men in altercations with white civilians and police officers, and its content has nothing to do with primates. Facebook has since disabled the AI recommendation feature.

According to The New York Times, Facebook apologized on Friday for what it called an "unacceptable error" and said it was examining its recommendation feature to prevent this from happening again.

In response, Darci Groves, a former content design manager at Facebook, tweeted a screenshot of the recommendation prompt and called it unacceptable. Groves also posted it to a product feedback forum for current and former Facebook employees. There, a product manager for Facebook Watch, the company's video platform, called the error "unacceptable" and said the company was "investigating the root cause."


Facebook has one of the world's largest repositories of user-uploaded images, which it uses to train its facial- and object-recognition algorithms. Facebook spokesperson Dani Lever said in a statement: "While we have made improvements to our AI, we know it is not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations."

There is no doubt that this recommendation error reflects bias in the underlying AI system.

The problem of bias is not unique

Over the years, several tech companies, including Google and Amazon, have come under scrutiny for bias in their AI systems, especially around race. Studies have shown that facial recognition technology performs markedly worse on darker-skinned faces, and there have been reports of Black people being wrongly flagged or arrested because of faulty AI matches.

In 2015, Google Photos labeled photos of two Black people as "Gorillas." Google apologized immediately and said it would fix the problem. Two years later, however, Wired magazine discovered that Google's solution was simply to prevent any image from being labeled as a gorilla, chimpanzee, or monkey. Google then confirmed that, in order to resolve the 2015 error, it had removed those labels outright, leaving Google Photos unable to identify gorillas or monkeys at all.


Image source: https://www.reddit.com/r/MachineLearning/comments/3brpre/with_results_this_good_its_no_wonder_why_google/

After it was confirmed that the fix amounted to "just removing the tags," Google said: "Image labeling technology is still in its early stages, and unfortunately it is still far from perfect."

In the OpenAI research paper Learning Transferable Visual Models From Natural Language Supervision, researchers found that the CLIP model misclassified 4.9% of images of people (confidence interval 4.6%-5.4%) into one of the non-human categories used in its probes, such as "animal" and "chimpanzee." Images of Black people had the highest misclassification rate, at roughly 14%, while all other races had misclassification rates under 8%.


Paper link: https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language.pdf
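
Probes of this kind are typically run as zero-shot classification: the model scores an image against a set of text labels and picks the closest one. Below is a minimal sketch using OpenAI's open-source CLIP package (github.com/openai/CLIP); the image path and label set are illustrative placeholders, not the paper's exact probe setup.

```python
# Minimal zero-shot classification probe with CLIP.
# pip install git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate labels: a human category plus non-human classes like the
# ones the paper probed. This label set is illustrative only.
labels = ["a photo of a person", "a photo of an animal", "a photo of a chimpanzee"]
text = clip.tokenize(labels).to(device)

# "face.jpg" is a placeholder path for an image of a person.
image = preprocess(Image.open("face.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

# A misclassification, in the paper's sense, is an image of a person
# whose highest-probability label is one of the non-human classes.
print(dict(zip(labels, probs[0])))
```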

In fact, bias is a recurring issue across the AI field. The dismissal of Timnit Gebru, co-lead of Google's Ethical AI team, at the end of 2020 sparked a storm of discussion. Just yesterday, Darci Groves tweeted in support of Timnit Gebru and her research.


These incidents have pushed people to examine the root causes of the problem, and more and more researchers are working to mitigate bias in AI systems. For example, researchers at MIT developed a tunable algorithm to reduce potential bias hidden in training data and used it to address racial and gender bias in facial-detection systems; a sketch of the general idea follows. More broadly, since machine learning systems are data-driven, ensuring the quality and fairness of datasets and their annotations may still have a long way to go.
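
The core idea behind such tunable debiasing methods is to resample training data inversely to how common each example is in a learned feature space, so that under-represented faces are seen more often during training. The sketch below is a simplified illustration of that idea, assuming pre-computed latent features; the histogram density estimate and the alpha knob are illustrative, not the MIT authors' implementation.

```python
# Sketch: debias training by sampling examples inversely to their
# estimated density in a learned latent space. Illustrative only.
import numpy as np

def resampling_weights(latents: np.ndarray, bins: int = 10, alpha: float = 0.5) -> np.ndarray:
    """Weight each sample inversely to its estimated latent-space density.

    alpha in [0, 1] tunes debiasing strength: 0 gives uniform sampling,
    1 samples fully inversely to density (rare examples drawn far more often).
    """
    weights = np.ones(len(latents))
    for dim in range(latents.shape[1]):
        # Per-dimension histogram as a crude density estimate.
        hist, edges = np.histogram(latents[:, dim], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, dim], edges[:-1]) - 1, 0, bins - 1)
        density = hist[idx] + 1e-8            # avoid division by zero
        weights *= 1.0 / (density ** alpha)   # rarer regions get larger weights
    return weights / weights.sum()

# Usage: draw minibatches with these probabilities so under-represented
# faces appear more often during training.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 4))          # stand-in for learned latents
batch_idx = rng.choice(len(latents), size=32, p=resampling_weights(latents))
```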

Reference Links:

https://www.nytimes.com/2021/09/03/technology/facebook-ai-race-primates.html

https://www.reddit.com/r/MachineLearning/comments/phjecd/n_facebook_apologizes_after_ai_puts_primates/

https://tech.sina.cn/i/gj/2018-01-18/detail-ifyquptv7683429.d.html
