Recently, two different social websites admitted they ran experiments on users who had no idea they were being experimented on. Facebook ran tests to see if they could affect users' emotions, and OkCupid ran tests mismatching users to see if they would interact with each other differently.
The impact of these social experiments was fairly small, but they raise important questions about the ethics of experimenting on users. Why were people so angered by these tests when most of us don't raise an eyebrow at A/B tests of website designs or calls-to-action?
We're All Part of These Experiments
Facebook's experiment was to figure out whether human emotion was contagious in online social networks. To do this, they chose almost 700,000 Facebook users at random, combed their friends' posts for positive and negative words, and showed their chosen users either fewer positive or negative posts from friends in their News Feeds. Then, they measured whether it affected the positivity or negativity of those people's own posts.
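The mechanics of that filtering are simpler than they sound: classify each friend's post against a word list, then probabilistically drop posts of one sentiment from the feed. Here's a minimal sketch of that idea — the word lists and posts below are illustrative stand-ins, not the dictionaries or data the researchers actually used:

```python
import random

# Hypothetical word lists standing in for the sentiment dictionaries
# the researchers actually used; purely illustrative.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def classify(post):
    """Label a post 'positive', 'negative', or 'neutral' by word matching."""
    words = set(post.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def filtered_feed(posts, suppress, rate=0.5, seed=42):
    """Probabilistically drop posts of one sentiment from a feed."""
    rng = random.Random(seed)
    return [p for p in posts
            if classify(p) != suppress or rng.random() > rate]

feed = ["I love this wonderful day", "What an awful commute", "Lunch time"]
print(filtered_feed(feed, suppress="negative", rate=1.0))
# With rate=1.0, every negative post is removed from the feed.
```

The researchers then compared the sentiment of each user's own subsequent posts against a control group — the filtering itself is trivial; the ethical weight comes from doing it to 700,000 people without their knowledge.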
In response to the subsequent media outcry about Facebook's test, OkCupid co-founder Christian Rudder came out with a viral post on their company blog, saying not only does OkCupid conduct online experiments, but so does everyone -- and it's completely normal. "Guess what, everybody," wrote Rudder. "If you use the internet, you're the subject of hundreds of experiments at any given time, on every site. That's how websites work."
Here, Rudder is referring to the small social experiments and A/B tests people run on their websites all the time, whether they're testing ads, page layouts, button colors, or something else.
"[None of us] use the 'real' Facebook. Or Twitter. Or Google, Yahoo, or LinkedIn," wrote Josh Constine for TechCrunch. "We are almost all part of experiments they quietly run to see if different versions with little changes make us use more, visit more, click more, or buy more. By signing up for these services, we technically give consent to be treated like guinea pigs."
Facebook may have crossed the line, according to James Grimmelmann, Professor of Technology and the Law at the University of Maryland. “If you are exposing people to something that causes changes in psychological status, that’s experimentation,” Grimmelmann told Slate. “This is the kind of thing that would require informed consent.”
What's more, in Facebook's case, we actually did check a box to give consent when we agreed to the site's data use policy. But we were never told the parameters of any study Facebook ran on us. And, more importantly, we were never given the opportunity to opt out.
"My objection is the lack of awareness or permission," says Terry Manspeaker, D.C.-area business owner. "That's probably what creates the discomfort. I think people like to feel they've been given the option to participate in a particular study." But, Manspeaker admits, there is a risk of bias when participants are aware of the study -- and doing a blind study reduces that bias. One could argue that online social networking research simply can't be conducted reasonably if consent is needed from every potential participant.
"Of course we all read the use policy, or said we did by checking that box," says high school English teacher Ayres Stiles-Hall. "So we really only have ourselves to blame. It doesn't really sound ethical, but on the other hand, we're all spending hours and hours using a free product -- except that nothing is free. If we're really offended, it means we ought to be paying closer attention."
But anyone posting to Facebook or logging onto a website should be aware that their online behavior is traceable. "Whether it ought to be is both a moral question we as a society will have to answer through the legislative process, and a legal one based on our system of laws and liberties," says Dr. Adam Irish, Mellon Postdoctoral Teaching Fellow in Political Science and International Relations at Wheaton College. "Lawyers on both sides of the internet privacy issue have primarily focused on our rights -- many of which are, primarily, protections from government snooping."
"But we are still determining what is reasonable, and in the meantime, if people use a company's free services online, they should be aware that their data is likely being gathered, used, and possibly distributed."
Why Are Some Experiments on Users Okay While Others Aren't?
It might be easy to point fingers at Facebook because its experiment explicitly manipulated human emotions -- as in, the word "emotion" was in the title of the resulting academic paper: "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks." But what about all the other social and A/B tests Rudder and Constine were talking about? Don't they alter our emotions in one way or another?
"We tend not to think about how much influence a website can have on our feelings," says Dan Ritzenthaler, HubSpot Product Designer and UX expert. And something as simple as your website copy or design can influence visitors' behavior. "Changing a 'Buy' button to a 'Buy Now' button will influence how people interact with your shopping website," says Ritzenthaler. "It might make some people more likely to buy since it's more direct and demanding. It might also come off as bossy and rude to other people. Saying 'Buy Now, Or Else' might make too many people uncomfortable and reduce the amount of purchases."
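A basic button-copy A/B test like the one Ritzenthaler describes boils down to two steps: bucket each visitor into a variant consistently, then compare conversion rates per bucket. Here's a hedged sketch — the variant names and event data are invented for illustration, not drawn from any real test:

```python
import hashlib

# Hypothetical button-copy variants being tested.
VARIANTS = ("Buy", "Buy Now")

def assign_variant(visitor_id):
    """Deterministically bucket a visitor so repeat visits see the same copy."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def conversion_rates(events):
    """events: iterable of (visitor_id, purchased) pairs, purchased in {0, 1}."""
    shown = {v: 0 for v in VARIANTS}
    bought = {v: 0 for v in VARIANTS}
    for visitor_id, purchased in events:
        variant = assign_variant(visitor_id)
        shown[variant] += 1
        bought[variant] += purchased
    return {v: bought[v] / shown[v] if shown[v] else 0.0 for v in VARIANTS}
```

Hashing the visitor ID (rather than flipping a coin on every page load) is what keeps the experience consistent per person. A real test would also run a significance check before declaring a winner, but even this skeleton makes the point: the mechanism is identical to what Facebook did -- only the stakes of what's being varied differ.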
But if certain words on buttons can change buying behavior, then why isn't everyone up in arms about the ethics of A/B testing?
Rebecca Corliss, Head of Customer Marketing at HubSpot, says it's because people perceive social networks like Facebook and OkCupid differently than they do most websites and products. "Facebook is technically the same as any other product, but people perceive it as a social space that's just for them and forget that a corporation is behind it. So the question is: Is that Facebook's fault or is it the user's oversight?"
And while most A/B tests push for business-oriented results like increased usage, clicks, or purchases, social experiments like Facebook's and OkCupid's weren't conducted to make user experiences better. Instead, they were psychological experiments.
The part of the Facebook study people were most upset about was that it affected users' moods in a negative way: "When positive expressions were reduced, people produced fewer positive posts and more negative posts," the study reads. But if Facebook had conducted a study to see whether positive posts alone led to positive moods, leaving out the negative half of the equation, people may have reacted more favorably to it.
All of this brings up a slew of ethical questions that anyone testing anything online needs to be aware of. Could the test you're about to run harm the people you're testing, whether emotionally, financially, or in some other way?
"Ideally, Institutional Review Board (IRB) review of any research is a good safeguard for individuals taking part in studies and our broader society," says Dr. Irish. "I do think companies engaged in this type of research should play by IRB rules because operating outside those safeguards creates a risk of allowing harmful research to go forward."
"That said, not every aspect of either study would have been rejected by an IRB review. It would have pressed the investigators to explain how their research would minimize any harm to participants and it probably would have required participants to signal their consent. Practically, Facebook and OkCupid are primarily constrained by their profit margins, so as long as people continue to opt in to using their services, we can expect future studies like the ones we've already seen."
While we wait for clearer legal guidelines for businesses, we marketers should extend our personalization strategies to our A/B testing efforts. It's up to us to treat the people who visit our websites as individuals, even when it comes to social experiments.