Considering that Facebook is facing multiple criminal investigations for allegedly selling data from “hundreds of millions” of users to more than 150 business partners without the users’ consent, it would seem ridiculous for the company to ask potential victims of revenge porn to send it their nudes.
We live in a ridiculous time.
NEW: Facebook announces new steps to fight revenge porn on its platforms, including an option to "provide a photo proactively to Facebook" so that it does not get shared more widely. https://t.co/DuAEjgS4Vz pic.twitter.com/YWjhQBp86O
— ABC News (@ABC) March 15, 2019
In a Friday post, Facebook outlined a number of steps they’re taking to quickly detect and remove revenge porn and to prevent it from occurring in the first place. One of those steps is expanding the use of AI to detect nude or near-nude images shared on Facebook and Instagram before the content is even reported.
Another step, however, asks potential revenge porn victims to trust Facebook with their nude images before the images are ever shared on Facebook or Instagram.
“This new detection technology is in addition to our pilot program jointly run with victim advocate organizations. This program gives people an emergency option to securely and proactively submit a photo to Facebook. We then create a digital fingerprint of that image and stop it from ever being shared on our platform in the first place. After receiving positive feedback from victims and support organizations, we will expand this pilot over the coming months so more people can benefit from this option in an emergency.”
Under the procedure, the potential victim contacts a designated victims’ rights group (depending on their country of residence) to inform them of the revenge porn threat. They’re then instructed to send the photo to themselves using Facebook Messenger, and the victims’ rights group notifies Facebook of the submission “via a secure form.” The next steps, according to Facebook’s newsroom:
Once we receive this notification, a specially trained representative from our Community Operations team reviews and hashes the image, which creates a human-unreadable, numerical fingerprint of it. We store the photo hash—not the photo—to prevent someone from uploading the photo in the future.
If someone tries to upload the image to our platform, like all photos on Facebook, it is run through a database of these hashes and if it matches we do not allow it to be posted or shared.
Once we hash the photo, we notify the person who submitted the report via the secure email they provided to the [victim’s rights group] and ask them to delete the photo from the Messenger thread on their device. Once they delete the image from the thread, we will delete the image from our servers.
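What Facebook is describing is a standard perceptual-hashing pipeline: reduce the image to a compact numerical fingerprint, keep only the fingerprint, and check every new upload against the stored fingerprints. The Python sketch below is a toy illustration of that idea, not Facebook’s actual implementation (which has not been published); the average-hash function, the in-memory set, and the exact-match lookup are all simplifications of our own.

```python
from PIL import Image  # requires the Pillow package

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink, grayscale, threshold on the mean.
    Stands in for whatever production hash Facebook actually uses."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit integer fingerprint, not the image itself

# The platform retains only the fingerprints of reported images.
blocked_hashes: set[int] = set()

def register_report(path: str) -> None:
    """Hash a victim-submitted photo; the photo itself can then be deleted."""
    blocked_hashes.add(average_hash(path))

def allow_upload(path: str) -> bool:
    """Reject any upload whose fingerprint matches a reported image."""
    return average_hash(path) not in blocked_hashes
```

Real systems use a far more robust hash (Microsoft’s PhotoDNA is the best-known example) and compare fingerprints by Hamming distance rather than exact equality, so that resized or recompressed copies of a reported image still match.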
It’s admirable that Facebook is attempting to prevent people from becoming victims of revenge porn. (Obviously, the first line of defense is for people not to share intimate photos in the first place.) But they expect people to send the photo to themselves on Facebook Messenger, which is notoriously insecure, and to trust that people inside Facebook won’t somehow access these images? If the company had a stellar record of safeguarding user data and weren’t infamous for targeting political undesirables, there might be an argument for trusting it. With the company’s actual track record, there is not a single reason a potential victim should use this “service.”