Facebook’s artificial intelligence (AI) technology for identifying and removing posts involving hate speech and violence does not work, according to internal corporate documents obtained by The Wall Street Journal (WSJ).
The newspaper reported that the documents include a mid-2019 note in which a senior Facebook engineer wrote that “we [the company] do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas”.
The engineer estimated that Facebook’s automated systems removed posts that generated just 2% of the views of hate speech that violated its rules.
“Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-to-medium term”, he wrote.
The claims echoed those of another team of Facebook employees, who had previously argued that the AI systems were removing posts that generated only 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook’s policies against violence and incitement.
In 2020, Facebook CEO Mark Zuckerberg expressed confidence that the platform’s AI would be able to take down “the vast majority of problematic content”. He spoke as the social networking giant claimed that most hate speech was taken down from the platform before users even saw it.
According to Facebook’s most recent transparency report, the company’s hate speech detection rate currently stands at 97%.
Source: Sputnik News