YouTube Removed Twice As Many Videos Between April and June
By @MattGSouthern · 20 Sep
YouTube removed twice the usual number of videos between April and June after relying on AI to moderate content amid forced lockdowns.
A significant proportion of videos removed during that time did not break any of YouTube’s rules.
The increase in video removals is a result of YouTube reducing human oversight during the content moderation process.
YouTube made an abrupt shift to AI-based moderation after lockdown orders prevented its 10,000-person team of content moderators from coming in to work.
YouTube’s machine learning systems were granted the authority to take action on videos identified as containing potentially harmful content or misinformation.
Further, the AI was programmed to err on the side of caution. That means borderline videos which didn’t quite break YouTube’s rules were removed anyway.
In total, almost 11 million videos were taken down between April and June.
Roughly 160,000 of those videos were reinstated after their creators filed an appeal, meaning about half of all appeals received were successful.
Interestingly, YouTube overturned far more takedown decisions on appeal than it usually does: the appeal success rate jumped from 25% to 50%.
So content was being removed at twice the usual rate because of AI, and also being reinstated at twice the usual rate.
What a mess.
Needless to say, if you suspect some of your YouTube videos were wrongfully removed over the past few months, you may be correct.
Thankfully, YouTube’s content moderation process is not going to continue this way.
YouTube Reverting to Human Moderators
Neal Mohan, YouTube’s chief product officer, tells the Financial Times that human moderators are back to vetting potentially harmful content.
Mohan concedes there are limits to AI-based moderation.
While machines are able to identify videos that are potentially harmful, they’re not as good at deciding what should be removed.
Trained human evaluators are able to “make decisions that tend to be more nuanced, especially in areas like hate speech, or medical misinformation or harassment,” Mohan says.
Going forward, AI will be used to identify potentially harmful content, and human moderators will have the final say in what gets removed.
Machines will still play an integral role in content moderation, but they won’t have full autonomy like they did before.
One area where AI excels is speed. More often than not, videos that are clearly harmful get removed before anyone has a chance to see them.
What does this mean for marketers?
Marketers no longer need to be concerned about videos getting erroneously removed by YouTube’s AI moderation systems.
Marketers who had videos removed between April and June would be well advised to submit an appeal. There’s a chance those videos will be reinstated.
YouTube’s machine moderation targeted some of the most-followed channels, so I can only imagine how many smaller channels were impacted as well.
For example, earlier this month YouTube was trending on Twitter after popular channels MoistCr1TiKaL and Markiplier were hit with community guideline strikes.
That was a result of AI moderation, showing machines had the autonomy not only to remove videos but also to issue strikes against channels.
After much drama ensued, YouTube eventually apologized for an over-enforcement of its policies.
The videos were reinstated and the community guideline strikes were removed.
Update: we’re not going to die on this hill. You were right – after (even further) review, your video & others are back up and these strikes have been removed. This was an over-enforcement of our policies, especially w/ the added context/commentary as you originally pointed out.
— TeamYouTube (@TeamYouTube) September 2, 2020
Let this be a learning experience for YouTube on what can happen when AI is given too much power.
Source: The Financial Times