YouTube Is Finally Removing White Supremacy Videos From Its Site

Even though the internet's been a part of everyday life for a couple of decades now, we're still finding out how it affects our minds. It all sounded so great when it started, didn't it? We'd jack all our computers together and share ideas easily and openly. But, today, we're finding out that an idea can be dangerous when the internet gets its hands on it; if that idea happens to be particularly harmful, it can fester and grow toxic.

YouTube is the most prominent dealer of ideas in the world. The entire platform is about letting anyone with a camera put their ideas out into the world. Therefore, YouTube bears the huge responsibility of monitoring which ideas it helps spread.

YouTube has finally decided to get tougher on which ideas it allows to manifest on the platform. The company put out a blog post outlining some changes that it has made to its policies regarding hateful videos, announcing that it would finally — finally — outright ban videos that are explicitly hateful.

Read on to find out what specifically will be changing about YouTube's policies, and what some of the potential implications may be...

YouTube's been around since 2005.

Because so much of our collective time online has been spent finding obscure videos to show our friends on YouTube, it almost feels like it's as old as the internet itself.

In those early days, it was kind of inane.

The first YouTube video, "Me at the Zoo," featured YouTube co-founder Jawed Karim just... chillin' at the zoo. It was, to be charitable, not the most entertaining video that the site would ever see.

Google bought YouTube in 2006.

Spending $1.65 billion on the video streaming site, Google got into the online video business not by building its own service, but by acquiring the biggest one outright. It's like going fishing, but instead of going out on the lake, you hang out at home and buy your buddy's biggest fish off him when he gets back.

YouTube evolved.

More or less overnight, YouTube went from being populated by nonsensical videos about Pokemon to leading a media revolution.

Today, there are hundreds of millions of videos on YouTube.

Some would say that's too many videos...

The platform's openness is what's made it so successful.

The fact that literally anyone with a camera can make and upload a video is what gives YouTube its power. The platform is free of the traditional media gatekeepers who decide what content gets made and released to the public.

And while some of them are silly as hell...

Full disclosure — we searched "silly videos" on YouTube, and this was one of the search results. But we don't think it's silly because everyone's worried about being pregnert.

A lot of them are very straight-laced.

Oh boy, just what we were hoping for — an explanation of tariffs. Why don't we all just call over our friends, pop some popcorn, and get rowdy watching this beauty?

Some of the videos on YouTube are capital-I Important.

The platform has the power to immediately show us what's going on in the world. The incident captured in this video, in which a university police officer pepper-sprayed a number of UC Davis students during a protest, could easily have been ignored if the entire internet hadn't seen for itself how awful it was.

But the way that YouTube works can make it a less-than-ideal news source.

The YouTube economy tends to prioritize opinions over facts. The more scorching hot your take is, the more likely it is to be liked and shared. Facts just don't have as much sway on the platform.

After your video plays, an unseen algorithm provides recommendations.

Another problematic part of YouTube as an information source is its recommendation algorithm, which collects data about the videos you watch and ostensibly uses that data to find new videos that you might like.

That leads to YouTube recommending videos that are... not exactly what you're looking for.

Sometimes, simply watching a video on one topic makes the algorithm suggest a lot more videos on that topic, even if the video you watched argued the opposite viewpoint.

This has led to YouTube taking a ton of flak for letting violent ideas spread.

Not only does YouTube recommend videos with awful ideas (albeit unknowingly), but plenty of rotten creators also use disingenuous "both sides" framing in their titles and content to dissuade YouTube from banning them, despite the fact that they're clearly preaching hatred.

And, now, YouTube is addressing that hate.

In a blog post, YouTube announced changes to its stance on hate speech, saying that it will no longer allow videos "alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status" on its platform.

That means that YouTube will now officially ban videos advocating hatred.

Furthermore, YouTube stated that it will "remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place".

YouTube also acknowledged some of the reasons that this sort of content might stay on the site.

If a video is looking to provide analysis of current events, it may need to discuss the very ideas it's analyzing. Otherwise, that analysis won't make any sense at all.

Sometimes videos with strong, harmful opinions are used by researchers to study hateful ideologies.

YouTube makes a point of mentioning this, stating that "researchers and NGOs looking to understand hate in order to combat it" frequent the site and that YouTube is "exploring options" that will allow those people to see and study those hateful videos.

Of course, some mistakes will be made.

Although the YouTube channel run by history teacher @MrAllsopHistory has since been restored, its brief removal was an example of how this policy could lead to some well-meaning creators getting caught in the crossfire.

This change comes just days after YouTube decided not to punish YouTuber Steven Crowder for relentlessly harassing journalist Carlos Maza.

When Vox's Carlos Maza continually reported the abuse that he was receiving from Steven Crowder, YouTube initially said that they wouldn't throw down the ban, saying that Crowder's speech did not "violate [their] policies." It sure feels like YouTube's "borderline content" policy shift is a response to this incident, as Crowder has been repeatedly reported for harassment but has stayed just barely on the right side of the line vis-à-vis YouTube's stated policies. The new policy allows YouTube to act even without explicit rule violations.

There have been questions concerning freedom of speech.

We're all familiar with the "shouting 'fire' in a crowded movie theater" exception to free speech, but how do you translate that to the emotional damage that hate speech can cause and the overall damage hate speech can do to a community?

But why does free speech automatically equate to awful speech?

The reason that free speech is so important is that it allows us to speak truth to power. The things that these hateful YouTubers are saying aren't speaking truth to power; they're harming people for no reason.

It's clear that YouTube has to draw some kind of line.

As the entity putting all this speech into the world, it does ultimately fall to YouTube to decide what is, and what is not, acceptable.

Earlier this year, YouTube worked to change its recommendation algorithm.

At the very least, YouTube's been working to change its recommendation system. It's one thing to let people already lost to anger search out hateful content; it's another to suggest that hateful content to someone who hadn't even considered it before.

That means trying to de-emphasize all that "borderline content."

One tool that YouTube has at its disposal is demonetization: it can leave the videos up on its site, but their creators will not receive any of the ad money that other YouTube partners get.

But the "borderline" isn't well-defined.

It almost feels like "borderline" is YouTube's version of "I know it when I see it." Speech is so slippery and context-dependent that it's hard to say definitively when, and how, it crosses the line.

All of these steps are vital because hate can be like a virus.

It's not healthy to be so deep inside anger and hatred that you have to put it on YouTube for the world to see. Once it's out there, others can find it and ideas can be contagious.

If someone follows a YouTube breadcrumb trail of recommendations, it can make their already negative state of mind even worse.

It's a very human thing to take the bad hands that life deals us and look for something, or someone, to blame. The more hatred that gets spat out into the world, the more likely that someone looking for something — anything — to direct their negative feelings towards will find it.

That's why taking these steps is so important for YouTube.

Someone needs to protect video creators, and that responsibility absolutely falls on YouTube itself. It takes so much mental energy to combat harassment, and it's usually damn near impossible for creators to do that for themselves.

YouTube is in a tough spot.

There really is no right answer here for YouTube. For every step that they take towards protecting creators, they become less open — the quality that made YouTube so important in the first place.

But if they have to err on one side, it should be the side of safety.

It just seems like it makes so much more sense to take care of vulnerable human beings than to strive to be completely "open," an unachievable ideal that is almost exclusively championed by bad actors.