An almost unmanageable number of videos is uploaded to YouTube every day. The site's operators have recently relied more and more on software and artificial intelligence, but that approach has not necessarily proven itself. Now, humans are to take on a bigger role again.
In recent years, Google and YouTube have increasingly relied on AI to help moderate content. This is due partly to the ever-growing amount of content and partly to the COVID-19 pandemic, which has forced YouTube to lean even more heavily on automated systems in recent months.
In March, YouTube announced that it would rely on systems that flag and remove videos using machine learning. This targeted content such as hate speech and misinformation. However, YouTube has now told the Financial Times that this use of AI has led to a significant increase in video removals and incorrect takedowns.
Deleted twice as much
According to the Financial Times, eleven million YouTube videos were removed between April and June, roughly double the amount of content removed in a comparable period before. Appeals were lodged against the deletion of around 320,000 videos, and in around half of those cases the appeal was successful. That, too, is roughly double the usual rate. YouTube's Chief Product Officer Neal Mohan openly admitted these mistakes but said that, to protect users, it is better to remove too much than too little.
Human Moderators Back On YouTube
Now, humans are once again to take greater responsibility for reviewing content. They are not meant to replace the automated systems, however, but to work hand in hand with the AI. Although there are numerous borderline cases, a great deal of uploaded content can be recognized by the software as a violation beyond any doubt.