YouTube is trying to reduce the spread of toxic videos on the platform by limiting how often they appear in users’ recommendations. The company announced the shift in a recent blog post, writing that it would begin cracking down on so-called “borderline content”—videos that come close to violating its community standards without quite crossing the line.
“We’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11,” the company wrote.
These are just a few examples of the broad array of videos that might be targeted by the new policy. According to the post, the shift should affect less than 1 percent of all videos on the platform.
Social media companies have come under heavy criticism for their role in the spread of misinformation and extremism online, as their recommendation systems reward such content—and the engagement it generates—by pushing it to more users.
In November, Facebook announced plans to reduce the visibility of sensational and provocative posts in News Feed, regardless of whether they explicitly violate the company’s policies.
A YouTube spokesperson told WIRED the company has been working on its latest policy shift for about a year, saying it has nothing to do with the similar change at Facebook.
The spokesperson stressed that Friday’s announcement is still in its earliest stages, and the company may not catch all of the borderline content immediately.
Over the past year, YouTube has spent substantial resources on trying to clean up its platform. It’s invested in news organizations and committed to promoting only “authoritative” news outlets on its homepage during breaking news events.
It’s partnered with organizations like Wikipedia to fact-check common conspiracy theories, and it’s even spent millions of dollars sponsoring video creators who promote social good.