There was another round of confused Facebook outrage this month when a story in the Atlantic revealed the social media giant had intentionally toyed with users’ moods — allegedly for science, but in reality, for money. Facebook turned up the sadness dial on some users’ news feeds and found that sadness was contagious (happiness was, too).
The study that produced the outrage did qualify as science: it was published in the prestigious journal Proceedings of the National Academy of Sciences. It has plenty of folks scrambling.
What does the nation’s leading privacy researcher, and a frequent but even-handed Facebook critic, think of the Facebook mood manipulation study controversy? It’s a case of “shooting the messenger,” Alessandro Acquisti of Carnegie Mellon University told me. It’s also a rare peek “through the looking-glass” at the future of privacy.
Acquisti has been involved in designing hundreds of studies, and he has a deep research interest in privacy, so I asked him for his reaction to the Facebook research dust-up. His response might surprise you.
“The reaction to the study seems like a case of shooting the messenger,” he said. “Facebook (and probably many other online services) engages daily in user manipulation. There is no such thing as a ‘neutral’ algorithm; Facebook decides what to show you, how, and when, in order to satisfy an externally inscrutable objective function. (Optimize user interactions? Maximize the time they spend on Facebook, or the amount of information they disclose, or the number of ads they will click? Who knows?) The difference is that, with this study, the researchers actually revealed what had been done, why, and with what results. Thus, this study offers an invaluable wake-up call – a peek through the looking glass of the present and future of privacy and social media.
“Those attacking the study may want to reflect upon the fact that we have been part of the Facebook experiment since the day any of us created an account, and that privacy is much more than protection of personal information — it is about protection against the control that others can have over us once they have enough information about us.”
Here’s my take on it. It’s a bad idea to harm people for science. In many cases it’s merely unethical, not illegal, but it’s a really bad idea. When harm is unavoidable — say you are trying an experimental drug that might cure a person with cancer, or might kill them faster — scientists must obtain “informed consent.” Now, informed consent is a very slippery idea. A patient who is desperate and ready to try anything might not really be in a position to give informed consent, for example. Doctors and researchers are supposed to go the extra mile to ensure study subjects *truly* understand what they are doing.
Meanwhile, in many social science studies, informed consent prior to the experiment would wreck the study. Telling Facebook users, “We are going to try to manipulate your mood” wouldn’t work. In those cases, researchers are supposed to tell subjects, as soon as feasible, what they were up to. And the “harm” must be kept to a minimum.
Everyone who’s conducted research (including me!) has a grand temptation to bend these rules in the name of science — but my research has the power to change the world! — so science has a solution to that problem. Study designs must be approved by an Institutional Review Board, or IRB. This independent body decides, for example, “No, you may not intentionally make thousands of people depressed in order to see if they will buy more stuff. At least if you do that, you can’t call it science.”
Pesky, those IRB folks. My science friends complain all the time that they nix perfectly good research ideas. A friend who conducts privacy research, for example, can’t trick people into divulging sensitive personal information, like credit card numbers, because that would actually cause them harm.
Facebook apparently decided it was above the pesky IRB. Well, the journal editor seemed to say the research was IRB-approved, then later seemed to say only part of the research was IRB-approved, all of which suggests that no IRB ever really said, “Sure, make thousands of people depressed for a week.”
And while a billion people on the planet have clicked “Yes” to Facebook’s terms of service, which apparently includes language that gives the firm the right to conduct research, it doesn’t appear Facebook did anything to get informed consent from the subjects. (If you argue that a TOS click means informed consent, send me $1 million. You consented to that by reading this story.)
Back to Facebook’s research-gate. The problem isn’t some new discovery that Facebook manipulates people. Really, if you didn’t realize that was happening all the time, you are foolish. The problem is the incredible disconnect between Facebook and its data subjects (i.e., people). In Facebook’s view, our petty concerns with the way it operates just keep getting in its way; we should all pipe down and stop fussing.
Let’s review what’s happened here. Facebook:
1) Decided it was above the standard academic review process
2) Used a terms of service click, in some cases years old, to serve as “informed consent” to harm subjects
Think carefully about this: What wouldn’t Facebook do? What line do you trust someone inside Facebook to draw?
If you’d like to read a blow-by-blow analysis of what went on here – including an honest debate about the tech world’s “so-what” reaction – visit Sebastian Deterding’s Tumblr page.
Here’s the basic counter-argument, made with the usual I’m-more-enlightened-than-you sarcasm of Silicon Valley:
“Run a web site, measure anything, make any changes based on measurements? Congratulations, you’re running a psychology experiment!” said Marc Andreessen, Web browser creator and Internet founding father of sorts. “Helpful hint: Whenever you watch TV, read a book, open a newspaper, or talk to another person, someone’s manipulating your emotions!”
In other words, all those silly rules about treating study subjects fairly that academic institutions have spent decades writing – they must be dumb. Why should Silicon Valley be subject to any such inconvenience?
My final point: When the best defense for doing something that many people find outrageous is that you’ve been doing it for a long time, it’s time for some soul-searching.