Google, under pressure to remove videos containing terrorist content, hate speech, spam and other offensive material from YouTube, took down 8 million such videos in the three-month period between October and December 2017.
The vast majority of those videos, 6.7 million, were flagged for removal by automated systems powered by Google machine learning technology. Of that number, 76 percent were removed before the videos got a single view, Google said in an update on its recently stepped-up efforts to enforce its policies for posting videos on YouTube.
The update is the first of what Google says will be quarterly reports on how the company is enforcing its community guidelines for YouTube. The goal is to provide greater transparency on the progress Google is making in removing content that violates YouTube policy, the company said in a blog post on April 23.
“By the end of the year, we plan to refine our reporting systems and add additional data, including data on comments, speed of removal, and policy removal reasons,” the blog noted.
Google, Facebook and Twitter are under enormous pressure to do more to prevent their platforms from being misused to spread fake news, terrorist propaganda and hate speech, following revelations that Russian actors used these platforms to disseminate false information in the run-up to the 2016 U.S. presidential election.
All three companies also face multiple lawsuits from the families of victims of recent terrorist attacks, who argue that the companies should be held responsible for allowing their platforms to be used to disseminate terrorist propaganda.
Last December, YouTube CEO Susan Wojcicki committed to ramping up the use of machine learning tools and human reviewers to more quickly weed out videos containing violent extremism and other objectionable content.
Wojcicki also committed to stepping up efforts to ensure that people uploading inappropriate videos to YouTube did not profit from advertisements being placed automatically alongside the content. Over the last year, some of the world's largest advertisers have threatened to withdraw ads from YouTube after a report showed Google's automated ad placement systems putting their ads on extremist and other inappropriate content.
Google's inaugural quarterly report on YouTube Community Guidelines Enforcement, released last week, shows that viewers around the world flagged a total of 9.3 million unique videos for removal in Q4 2017. Of those flagged videos, about 30.1 percent were reported for sexual content, more than 29 percent for hateful, abusive, violent or repulsive content, 26.4 percent for spam, 7.6 percent for promoting harmful or dangerous acts, and 5.2 percent for depicting child abuse.
The flags came from different types of human flaggers, including ordinary viewers and so-called Trusted Flaggers, a Google-appointed community of reviewers entrusted with identifying videos that might violate YouTube policies. India generated the most flags of any country, followed by the United States, Brazil, Russia and Germany.