How YouTube is handling harmful content on its platform
Harmful content, including misinformation, makes up less than 1 per cent of the overall content on the platform, according to Neal Mohan, Chief Product Officer, YouTube.

“Harmful content is a very small part of what’s on YouTube — a fraction of 1 per cent. Even more importantly, consumption of borderline content or harmful misinformation videos that comes from our recommendations is significantly below 1 per cent, and we’re constantly working to reduce this even further,” said Mohan at YouTube’s virtual ‘Responsibility Roundtable’ meeting held on Friday.

“This work has focused on four pillars: removing violative content, raising up authoritative content, reducing the spread of borderline content and rewarding trusted creators – the 4Rs of responsibility,” he said.

YouTube has invested significantly over the years in fighting misinformation on the platform, according to the YouTube CPO. “These investments allow us to respond quickly to emerging challenges, for example, COVID-19,” he said.

Misinformation in the times of Corona

Changing facts around Covid-19 have added to the challenge of combating misinformation on the platform, according to Mohan.

“One of the things that makes it even more challenging is that the facts around Covid are changing on a daily, hourly basis. Science is being created every single day around the world to help humanity fight this crisis,” the YouTube executive said.

“We’re consulting with global and local health authorities as we develop these policies, and we’ve been updating them on an ongoing basis to stay current with the science – 10 updates in the past two months alone.
We’ve removed thousands of videos under these policies,” he added, stating that the platform had partnered with India’s Ministry of Health and Family Welfare and MyGov, along with the World Health Organization.

Mohan further said that consumption of content from authoritative sources grew 110 per cent in India during the first three months of 2020.

While recommending content from authoritative sources, the platform has also cracked down on misinformation by removing content that violates its policies. It removed over 8.20 lakh such videos from the platform in India in the last quarter.

However, one challenge the platform faces in moderating content is dealing with ‘false positives’: videos that are actually legitimate but have been flagged and removed by the platform. Managing false positives has been challenging during the pandemic, as human moderators moved offline and content moderation was handled largely by machines.

“With the massive volume of videos on our site, sometimes we make the wrong call. When it’s brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it.
We also offer uploaders the ability to appeal removals, and if an appeal is received, we re-review the content,” he said.

YouTube’s policies for India

Mohan further said that the platform has tailored its content moderation policies for India.

“We also work very closely on our content policies with organizations in India to make sure that our policies and our enforcement guidelines reflect the specific conditions in India, so that we can quickly act on content that might be violative on our platform,” Mohan said.

“One of the ways that we updated it was we included caste as one of the criteria, for example, that could lead to hate speech violations, where one caste was, you know, putting another down or implying inherent inferiority of one caste over the other,” Mohan explained.

Hate speech and harassment a challenge

Talking about hate speech and harassment on the platform, Mohan said this type of content has been particularly challenging to moderate, as hate speech can often blend into political speech.

“Hate speech and harassment, I think, is a particular challenge because the line between what’s misinformation versus what’s true is often quite blurry. Sometimes it leads up against political speech. Of course, political speech is important and we want to protect that on our platform. So we use a combination of tools and techniques where we’ve invested,” Mohan said.

“We have invested an enormous amount in building engineering and machine learning systems to identify this type of content,” Mohan added, noting that these technologies had helped reduce such content on the platform by 70 per cent so far.

“We’re rolling out this technology to all parts of the world, of course, including India,” he said.