How Does AI Filter Objectionable Content on YouTube?
Apr 8, 2021
Ambika Bhandari

According to Statista, child safety violations accounted for 41 percent of the videos YouTube removed in the fourth quarter of 2020. Do you think a human could take on such a humongous task?

No, right?

That's why this is done by Artificial Intelligence. Google uses machine learning systems to detect explicit content across all its platforms, including YouTube. Scams, misleading information, adult content, violent content, and more are deleted after scanning.

Such content takes up massive space on the web and causes social problems. Hate speech often appears on social media first, and uploading videos to YouTube is one way people spread explicit content.

Recently, the threats of storming the U.S. Capitol were first made online.

So, if Artificial Intelligence can detect such threats in time, it can keep communities safe from riots. However, it is not as easy as it seems: AI faces several challenges in detecting such content.

How Does YouTube Use Artificial Intelligence to Remove Objectionable Content?

There are more than 2 billion monthly active users on YouTube, and every minute users upload 300 hours of video to the platform. That means a lot of objectionable content, too.

YouTube once had to rely on human reviewers to remove such content, but the arrival of Artificial Intelligence changed that. Even so, human specialists still work alongside the machines at all times to inspect their decisions.

The platform uses video classifiers to filter content on YouTube. Any video is only a stack of images, but together those images convey a message, so classifying each image in isolation is not ideal.

Still, this is roughly how a video classifier works. It classifies a video in a few steps:

1. The video file is looped through to extract all of its frames.

2. Each frame passes through a convolutional neural network (CNN), which acts as a feature extractor.

3. Each frame is classified independently.

4. The classified frames are then labeled.

For example, a frame may show a boat even though the video is about swimming. Such videos require more than one label to identify their content correctly. Taken together, though, the frame labels give the classifier an idea of the video's overall content.
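The steps above can be sketched in Python. This is only an illustrative sketch: `extract_features` and `classify_frame` are hypothetical stand-ins for a real CNN feature extractor and classifier, and the frames are represented as precomputed score dictionaries rather than pixel data.

```python
def extract_features(frame):
    # Placeholder for step 2: a real system would run the frame's pixels
    # through a CNN; here each "frame" is already a dict of label scores.
    return frame

def classify_frame(features):
    # Placeholder for step 3: keep every label whose score clears 0.5.
    return {label for label, score in features.items() if score >= 0.5}

def classify_video(frames):
    """Classify each frame independently, then merge labels video-wide."""
    video_labels = set()
    for frame in frames:                       # step 1: loop over all frames
        features = extract_features(frame)     # step 2: feature extraction
        labels = classify_frame(features)      # step 3: classify the frame
        video_labels |= labels                 # step 4: collect frame labels
    return video_labels

# A boat appears in one frame, swimming in another, so the video
# ends up with more than one label, as described above.
frames = [{"boat": 0.9, "swimming": 0.2},
          {"boat": 0.3, "swimming": 0.8}]
print(classify_video(frames))
```

Merging per-frame labels into one set is why a single video can carry several labels at once, matching the boat-and-swimming example.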

This is exactly how YouTube identifies explicit videos in its database. As soon as a video is uploaded, it is stored in the data set, and the video classifiers then work out its content type.

If it matches something labeled objectionable, the video is removed. However, this also causes errors: the classifiers can even remove news content that merely contains violent images.
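One hedged way to picture this matching step is to remove a video when too large a share of its frames carries a blocked label. The `should_remove` function, the blocklist, and the threshold below are illustrative assumptions, not YouTube's actual policy; the example also hints at why news footage containing some violent frames can be falsely flagged.

```python
def should_remove(frame_labels, blocked, threshold=0.5):
    """Flag a video when the fraction of frames carrying any blocked
    label exceeds `threshold` (values are illustrative only)."""
    hits = sum(1 for labels in frame_labels if labels & blocked)
    return hits / len(frame_labels) > threshold

blocked = {"violence"}

# A news clip: mostly studio footage, with one violent frame of war imagery.
news_clip = [{"studio"}, {"violence"}, {"anchor"}, {"studio"}]

print(should_remove(news_clip, blocked))       # prints False (1/4 frames)
print(should_remove(news_clip, blocked, 0.2))  # prints True: a stricter
# threshold removes legitimate news footage, the error described above
```

Tightening the threshold catches more objectionable videos but also sweeps up news reports that merely show violent images, which is exactly the trade-off the classifiers struggle with.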

So, the artificial intelligence community has called YouTube's video classifiers "trashy".

Why Do AI Specialists Disapprove of the YouTube Video Classifiers?

Between April and June 2018, 77% of the videos YouTube removed were taken down before they received a single view. However, some YouTubers actively try to fool the algorithm, which is not that difficult.

For example, instead of writing 'African Americans', people started using the coded phrase 'Basketball Americans' to push genocide conspiracies and other hate content. Many others deliberately misspelled words to keep the algorithms from detecting their content.

The video classifier only scans the YouTube homepage and checks user feedback to remove explicit videos. Even videos with misleading thumbnails or cuss words on them get removed.

Are Video Classifiers Helping YouTube?

The AI video classifiers are not foolproof. However, before they came on the scene, YouTube could detect only 8% of violent or explicit content. That figure has since risen from 8% to 83%, which is a great feat.

Moreover, Artificial Intelligence can flag content before it reaches 10 views. Before 2017, YouTube could remove content only after it had passed 10 views.

After the new mechanism rolled out, it also took down some news videos from Bellingcat and Middle East Eye. Those videos contained images from the Syrian war.

However, YouTube later apologized for deleting the war-crime videos and other news about the civil war in Syria. A few news outlets, including Bellingcat, even found their channels suspended.

The incident occurred just after Google announced the use of AI on YouTube. The AI was unable to differentiate news documentation from extremist videos, flagged them as "extremist", and removed them.

"With the massive volume of videos on our site, sometimes we make the wrong call. When it?s brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it? Youtube told the news outlet.

All the deleted videos were eventually restored to their channels. Even so, the episode caused serious problems for the news outlets.

Why Is Removing Objectionable Content Important?

There is constant pressure from governments, brands, and the public to take down explicit videos from YouTube, partly because brand or government ads sometimes appear on offensive videos. When ads run alongside such content, it damages the advertiser's reputation.

For example, after ads started appearing on videos promoting racism and terrorism, companies such as Havas UK pulled their ad investments.

Sometimes, YouTube also recommends content that may be harmful to children, which led to backlash from netizens. To counter this, YouTube introduced an option that lets creators mark whether a particular video is made for children.

However, this is not enough: a user can mark a video as made for children and still upload offensive content. The only fix is for other users to report it to YouTube, and by the time someone identifies such content, it may already have accumulated views.

Objectionable videos on the platform can incite aggression and riots, and with racist videos, things can turn even worse. So it's best to take such videos down.

Should YouTube Upgrade its AI Technologies?

From these cases, it is evident that the video classifiers are not doing the trick; they act only on the home page. YouTube needs an algorithm that can search for such content in depth, analyze it, and remove it immediately.

The AI that recommends videos, on the other hand, is top-notch. It works on data collected from users: Google collects every little piece of user data, and since Google owns YouTube, it is easy to use that data for recommendations.

However, recommending harmful content may cause more damage than good, so YouTube must come up with AI that can remove such content as well.
