Google-owned YouTube is not only failing to live up to the company’s stated intention of limiting the spread of hateful diatribes and misinformation; the world’s second-most visited website has also been found complicit in pushing such “disturbing” video content via its recommendation algorithm, according to a report by the Mozilla Foundation published on 7 July.
From conspiracy theories about the 9/11 terror attacks on the US and the ongoing coronavirus pandemic to promotion of so-called “white supremacy” and inappropriate “children’s” cartoons, YouTube’s algorithm was implicated in recommending some 71 per cent of the content that volunteers flagged as objectionable, according to the research.
The nonprofit found that a majority of the problematic videos had been recommended by the video-sharing platform’s algorithm. Social media platforms like YouTube have vehemently resisted calls to share information about their algorithms, citing user privacy.
The nonprofit launched RegretsReporter, a browser extension and research project, to probe the extent to which YouTube’s algorithm can drive users toward more extreme content.
Firstly, the most frequently reported “regret” categories were misinformation, violent or graphic content, hate speech, and spam/scams.
Secondly, the recommendation algorithm itself was singled out as the principal problem, with over 70 per cent of reports flagging videos recommended to volunteers by YouTube’s automatic recommendation system.
Finally, non-English speakers were deemed the most affected, as the rate of YouTube Regrets was 60 per cent higher in countries where English is not a primary language. Brazil, Germany and France were particularly high on the list, the study showed.
YouTube has defended its system: “The goal of our recommendation system is to connect viewers with content they love and on any given day, more than 200 million videos are recommended on the homepage alone.”
Forsided, 09.07.2021
Source: Sputnik News