Facebook, Which Refuses To Protect Your Private Information, Wants The Government To Regulate You

Facebook CEO Mark Zuckerberg arrives before a joint hearing of the Commerce and Judiciary Committees on Capitol Hill in Washington, Tuesday, April 10, 2018, about the use of Facebook data to target American voters in the 2016 election. (AP Photo/Andrew Harnik)

Facebook has faced scandal after scandal over its inability (and, often, what looks like outright refusal) to protect your private data. It's not just that you are the product Facebook sells to advertisers; recent reporting shows the company has mishandled your account information, passwords, and other highly sensitive data for far longer than anyone realized.

In fact, they were just recently busted asking new users for their email passwords when signing up, and have since had to shut that practice down.

Yet, despite all this and more, Facebook CEO Mark Zuckerberg has decided the real problem with social media isn’t his company. The real problem is you.

(NOTE: There is a lot of transcript here, courtesy ABC News. The important quotes are bolded, but the larger blocks of text are meant for context so no one can accuse this post of cherry-picking.)

ZUCKERBERG: You know, after 2016, when we saw what Russia tried to do, in interfering in the election — we’ve implemented a lot of different measures to verify any advertiser who’s running a political ad to create an archive of all the political ads, so anyone can see what advertisers are running, who they’re targeting, how much they’re paying — any other ads that they say. But one of the things that’s unclear is, actually, what is the definition of a political ad, right? And that’s a really fundamental question for this.

STEPHANOPOULOS: Does it have to say, “Vote for,” or, “Vote against,” for example…

ZUCKERBERG: Well, yeah. That’s exactly right. All of the laws around political advertising today primarily focus on a candidate and an election, right, so, “Vote for this candidate in this election.” But that’s not, primarily, what we saw Russia trying to do and other folks who were trying to interfere in elections. And what we saw them doing was talking about divisive political issues. They’d run, simultaneously, different campaigns on social media trying to argue for immigration or against immigration. And the goal wasn’t, actually, to advance the issue forward. It was just to rile people up and be divisive. But the current laws around what is political advertising don’t consider discussion issues to be political. So that’s just one of the examples of where you know, it’s not clear to me, after working on this for a few years now, that we want a private company to be making that kind of a fundamental decision about, you know, what is political speech? And how should that be regulated?

So, because Facebook can’t control fake accounts spreading misinformation, Zuckerberg thinks it’s best for the government to regulate your speech.

STEPHANOPOULOS: And how do you respond to someone who says, “But wait a second. That’s your responsibility. It’s your platform. It’s your company?”

ZUCKERBERG: Well, I think, broadly, we would say that setting the rules around political advertising is not a company’s job, right? I mean, there’s been plenty of the rules in the past. It’s just that, at this point they’re not updated to the modern threats that we face or the modern kinds of nation state trying to interfere in each other’s elections. We need new rules, right? It’s not, you can’t say that an election is just some period before people go to vote. I mean, the kind of information operations that these folks are trying to do now are ongoing, permanently. So I just think that we need new rules on this. Now, at Facebook, we’re doing the best that we can on each of these issues. But I think, ideally, you would have standards that you would want all of the major companies to be abiding by.

STEPHANOPOULOS: You’re already seeing the FCC push back fairly hard against this, two commissioners, I think, saying, “No, we don’t want to get into the business of policing the First Amendment.”

ZUCKERBERG: Yeah. I don’t think that that’s what this is, though, right? I think it’s you can say that kind of any regulation around what someone says online is protected. But I think that that’s clearly not right today. I mean, we already do have regulations around what you can do, in terms of political advertising. And even without getting into saying, you know, “Okay, here’s the type of content. And here’s what we’re going to define as, you know, hate speech,” for example — I still think it would be a positive step to demand that companies issue transparency reports around, well, here’s the amount of content on your service or that is every kind of harmful category. Here’s the amount of hate speech. Here’s the amount of misinformation. Here’s the amount of bullying of children. Because by making that transparent, that puts more pressure on companies in order to be able to manage that. And people, publicly, can see which companies are actually doing a good job and improving and which ones need to do more. I’m actually — we release our transparency report on how we’re proactively finding all of these harmful kinds of content. Today, it’s every six months. But I’ve committed that we’re going to get to every quarter. Because I actually think that it’s as important — that kind of a transparency report around content, as the quarterly financial statements that we report. I mean, this is, like, really critical stuff for society. So I don’t think that anyone would say that having companies have to be transparent about the amounts of harmful content is any kind of First Amendment issue.

The problem with Zuckerberg’s logic is that all it takes is for someone to say they’re offended about something for it to be deemed harmful, hateful, or political. If that kind of speech is going to be regulated, is there anyone objective enough at Facebook to make the distinction between the reports that truly show hateful speech and the reports that are meant to silence political opponents?

If history is anything to go by, it’s highly doubtful.

Facebook has struggled to stay objective in distinguishing real news from fake, and it has struggled doubly in protecting its users. There is no reason to believe the company could objectively adhere to any government regulation it supports.

Furthermore, yes, it is a First Amendment issue. Zuckerberg is asking the government to abridge what people say on their social media pages. Many of the people this would affect are not Russian trolls but Americans who are passionate about their politics. That would be a major mistake.