I freely admit I’m a cynic, and I certainly look for the angle anytime a company, organization, or individual comes along promising to do something for the good of everyone.
I apologize for that in advance, because while I think YouTube and Google policing their sites for terrorist and extremist content is a good thing, I also think they’re not going to limit their efforts to those who like to slice off heads. They’ll be going after “offensive” speech, too, unless I’ve misunderstood some of their past statements.
When Facebook was caught suppressing conservative content in its “trending stories” section at the beginning of last year’s primary season, it immediately went into clean-up mode and promised to crack down on “fake news” — which was odd, because it had just been slammed for acting as the arbiter of what people were allowed to see. Very confusing.
In any event, this new effort to, as the Guardian piece says, “take a tougher stance on videos that contain inflammatory religious or supremacist content” — by, among other things, creating new scanning tools and funding those who moderate their own sites and flag offensive content — comes just after SCOTUS ruled that “offensive content” is protected by the 1st Amendment.
ICYMI: Supreme Court ruled 8-0 today that "hate speech" is constitutionally protected free speech, not an exception to the 1st Amdt. pic.twitter.com/JThuNeKDFg
— Some just want to watch the world learn. (@NathanBurney) June 19, 2017
It just feels like Google and Facebook and YouTube and every other social media platform with an interest in controlling what gets out there — and again, who could complain about people trying to stop the radicalization of wannabe extremists? — may be bumping up against speech protections.
In short, these huge tech companies are asking us to trust them to decide what content is allowed an audience. And I’m personally not sure yet how to feel about that.