Facebook's Creepy Plan to Match Your Medical Data to Your Facebook Account

FILE - In this Tuesday, April 18, 2017, file photo, conference workers speak in front of a demo booth at Facebook's annual F8 developer conference, in San Jose, Calif. Facebook announced Tuesday, June 27, 2017, that it now has more than 2 billion users. (AP Photo/Noah Berger, File)


On Friday, a story broke about a Facebook plan to ask hospitals for access to patient information so that Facebook could match that information to Facebook profiles.

Facebook has asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription info, for a proposed research project. Facebook was intending to match it up with user data it had collected, and help the hospitals figure out which patients might need special care or treatment.

The proposal never went past the planning phases and has been put on pause after the Cambridge Analytica data leak scandal raised public concerns over how Facebook and others collect and use detailed information about Facebook users.

“This work has not progressed past the planning phase, and we have not received, shared, or analyzed anyone’s data,” a Facebook spokesperson told CNBC.

But as recently as last month, the company was talking to several health organizations, including Stanford Medical School and American College of Cardiology, about signing the data-sharing agreement.

Nice nod to the ogre du jour, Cambridge Analytica, but we know that Facebook winked at violations of use agreements by a lot more folks than CA. Like the 2012 Obama campaign — and we can safely bet the 2016 Clinton team had the same dispensation.

Here’s how and, allegedly, why they proposed doing it:

While the data shared would obscure personally identifiable information, such as the patient’s name, Facebook proposed using a common computer science technique called “hashing” to match individuals who existed in both sets. Facebook says the data would have been used only for research conducted by the medical community.

Facebook’s pitch, according to two people who heard it and one who is familiar with the project, was to combine what a health system knows about its patients (such as: person has heart disease, is age 50, takes 2 medications and made 3 trips to the hospital this year) with what Facebook knows (such as: user is age 50, married with 3 kids, English isn’t a primary language, actively engages with the community by sending a lot of messages).

The project would then figure out if this combined information could improve patient care, initially with a focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn’t have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.

And the rest:

To address these privacy laws and concerns, Facebook proposed to obscure personally identifiable information, such as names, in the data being shared by both sides.

However, the company proposed using a common cryptographic technique called hashing to match individuals who were in both data sets. That way, both parties would be able to tell when a specific set of Facebook data matched up with a specific set of patient data.
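To make the matching scheme concrete, here is a minimal sketch of how this kind of hash-based record linkage works. The identifiers, salt, and data fields below are hypothetical, not from any actual Facebook proposal: each side hashes a shared quasi-identifier (such as an email address) rather than exchanging it in the clear, and records with equal hashes are treated as the same person.

```python
import hashlib

def hash_id(record_key: str, salt: str = "shared-secret") -> str:
    """Hash a quasi-identifier (e.g. an email address) so the raw value
    is never exchanged, but equal inputs still produce equal hashes."""
    normalized = record_key.strip().lower()
    return hashlib.sha256((salt + normalized).encode()).hexdigest()

# Hypothetical hospital-side records: identifier plus medical attributes.
hospital = {hash_id(k): v for k, v in {
    "alice@example.com": {"age": 50, "condition": "heart disease"},
    "bob@example.com":   {"age": 64, "condition": "arrhythmia"},
}.items()}

# Hypothetical social-network-side records: identifier plus social attributes.
social = {hash_id(k): v for k, v in {
    "alice@example.com": {"age": 50, "close_friends_nearby": 1},
    "carol@example.com": {"age": 41, "close_friends_nearby": 12},
}.items()}

# Matching: intersect the hash keys. Neither side ever saw the other's
# raw identifiers, yet each shared hash links one patient record to one
# social profile.
matches = {h: (hospital[h], social[h]) for h in hospital.keys() & social.keys()}
print(len(matches))  # -> 1
```

The point of the sketch is that "both parties would be able to tell when a specific set of Facebook data matched up with a specific set of patient data": the hash hides the name only from someone who cannot compute it, not from the parties doing the matching.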

The issue of patient consent did not come up in the early discussions, one of the people said. Critics have attacked Facebook in the past for doing research on users without their permission. Notably, in 2014, Facebook manipulated hundreds of thousands of people’s news feeds to study whether certain types of content made people happier or sadder. Facebook later apologized for the study.

As the playwright Sean O’Casey put it in Drums Under the Window, “That’s going beyond the beyonds. That’s just hooliganism.”

Here are the major issues with this approach. You can’t use individual patient data, anonymized or not, without consent. A hospital can release an aggregate dataset, such as the number of males and females by age and race/ethnicity. The legal bright line is the use of personally identifiable information, and that is exactly what Facebook’s “hashing” proposal crosses. It takes notionally anonymous data and re-associates it with the actual person. When health and insurance information becomes linked to a person’s name, and that linkage is made available to anyone without consent, this becomes what is technically known as a “felony.”
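That re-association is not hypothetical hand-waving; it is a property of the technique. A hash of an identifier is only as anonymous as the identifier is unguessable. As a sketch (again with invented names and a made-up salt): anyone who holds a directory of candidate identifiers and knows the hashing scheme can recompute the hashes and re-attach names to the “de-identified” records.

```python
import hashlib

def hash_id(key: str, salt: str = "shared-secret") -> str:
    """The same deterministic hashing used to 'anonymize' the shared data."""
    return hashlib.sha256((salt + key).encode()).hexdigest()

# A "de-identified" record shared under the scheme: only a hash, no name.
leaked = {hash_id("alice@example.com"): {"condition": "heart disease"}}

# Anyone with the salt and a directory of candidate identifiers can
# re-identify the record by brute-force recomputation of the hashes.
directory = ["alice@example.com", "bob@example.com", "carol@example.com"]
reidentified = {email: leaked[h]
                for email in directory
                if (h := hash_id(email)) in leaked}
print(reidentified)  # -> {'alice@example.com': {'condition': 'heart disease'}}
```

In other words, hashing here is a matching tool, not an anonymization tool, which is the crux of the legal problem described above.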

And where is the end here? Why stop at cardiovascular disease? Wouldn’t dementia be a logical candidate? Or maybe persons seeking treatment for STDs? Wouldn’t it be a great public service if you could flag people with the clap so people who might potentially have sex with them, like their spouses, would be warned? Can you just imagine the ads that would appear in your timeline?

Allegedly, this project has been terminated. We don’t know why. Perhaps an actual lawyer took a look at it and ended up in the cardio ward. But it is indefensible on its face absent informed consent of patients.

There is no word on how many hospitals agreed to the study, but HHS really needs to find out.

What is becoming increasingly obvious is that the large tech companies are pretty much the way Ronald Reagan described government:

“Government is like a baby. An alimentary canal with a big appetite at one end and no sense of responsibility at the other.”

Whether it is Facebook banning people under “community standards” (which obviously include the crime of WrongThink) or the head of Twitter cheering on a literal civil war against conservatives, it is apparent that the large social media enterprises are not only monopolies but are acting as arbiters of what can and cannot be said. (I say this as a pissed-off YouTube user who has had US government video of combat actions pulled for violating “community standards,” so I am not neutral.)

Couple that with their business model, harvesting user information and monetizing it, and with the disrespect for law and privacy that this project entails, and we are, in my opinion, at a crisis point. What we are confronting is the same problem the nation perceived in the era of the oil and steel monopolies, only one far more threatening to the republic than simple price-fixing. We should probably introduce them to the Sherman Antitrust Act as an incentive to modify their behaviors and business models.