In the span of two days, the New York Times published two stories about Facebook's evolving authority in two somewhat unexpected areas.
The first concerned the social media giant's content moderation policies: a deep dive into documents obtained by the Times, comprising what it calls "1400 pages from ... rulebooks [provided] by an employee who said he feared that the company was exercising too much power, with too little oversight — and making too many mistakes."
The reason: The rulebooks contained content moderation judgment calls, often concerning global policies and international laws or conflicts, which Harvard University's online extremism expert Jonas Kaiser described to the Times as “extremely problematic,” as “it puts social networks in the position to make judgment calls that are traditionally the job of the courts.”
I spent months reporting on an aspect of Facebook that's far more significant than most realize: moderation.
Facebook, I found, is going further than it has acknowledged, exerting tremendous — and largely unseen — power over our politics and societies. https://t.co/LYpfEMdNde
The second story concerned Facebook's role in responding to users who live-stream severe acts of self-harm on its site -- bringing into question whether the company is "giving the appropriate response to the appropriate risk,” as Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, told the Times.
It raises the question: At what point can or should a line be drawn between "social media site" and international or medical authority?
Of particular interest is how the very users on the receiving end of these products and services perceive the power held by companies like Facebook -- along with its other Big Tech counterparts, such as Google, Apple, and Twitter.
We decided to ask users for their thoughts on the matter -- to learn how much power they believe tech giants hold, as well as how (un)safe certain types of tech make them feel.
Here's what we learned.
"Do You Think That Big Tech Companies Are Too Powerful?"
To start, we wanted to get a general impression of how much power users believe tech giants wield. So, we asked 864 people across the U.S., UK, and Canada: Do you think that Big Tech companies -- for example, Facebook, Google, and Apple -- are too powerful?
Nearly two-thirds of respondents answered that, yes -- they do think such Big Tech companies are too powerful.
Throughout 2018, we spent a lot of time giving thought to how users respond to their impression of tech companies -- for example, asking what it would take for them to delete or deactivate their Facebook accounts, or asking if they would stop using a certain company's products after a specific negative news item about it made headlines.
Much of the time, there was a bit of disparity among responses. While many said that some of these news items would be enough to make them stop using a company's products or services -- when asked if they actually had done so, respondents almost always indicated that they had continued using them.
To shed further light on that seeming gap, we asked a series of additional questions on which types of technology -- and which technology companies -- made users feel the safest, and least safe.
"I Don't Think Any of the Technology on This List Is Safe to Use"
After measuring whether or not users believe Big Tech companies are too powerful, we posed a series of four questions to a total of 3,412 people across the U.S., UK, and Canada.
The Technology Itself
First, we asked users how they feel about technology itself: "Which of the following types of technology do you think is the safest to use?"
The highest number of respondents -- 22% -- indicated that they do not believe any type of technology on the list is safe to use. The list contained 17 types of technology, ranging from social media to augmented reality.
It was the first of those categories -- social media -- that the second-highest number of respondents (17%) named as the type of technology they feel the safest using.
But the responses we received to our second question -- "Which of the following types of technology do you think is the least safe to use?" -- began to reveal the disparity we saw in previous survey results.
Here, the highest percentage of respondents -- 30% -- selected "social media" as the type of technology they feel the least safe using. The same 17 technology options listed in the previous question were also provided here.
The Tech Giants
Next, we wanted to measure user safety sentiment toward tech companies, so we posed a third question: "Out of the following tech companies, whose products or services do you feel the safest using?"
Again, Facebook was in the lead -- 23% of respondents named it the company whose products and services they feel the safest using -- but twice as many (46%) indicated that it's the company whose products and services they feel the least safe using.
It's what HubSpot VP of Marketing Meghan Keaney Anderson calls a "paradox of data" -- when the results show that, in a way, the same thing that people say makes them feel the safest is also what scares them the most.
But is there an explanation? Maybe -- and it could exist in the results for our final survey question.
Safety in Numbers
Lastly, we asked 837 people across the U.S., UK, and Canada: "How would you describe the connection you think there is between how powerful a company is, and how safely it can handle your personally identifiable data?"
The question comes after a tumultuous year for many tech companies when it comes to personal data -- including Facebook, which famously made headlines last spring when a voter profiling firm improperly harvested the data of 87 million users on the network.
A plurality of users see no connection between a company's size or power and its ability to protect user data, with a third of respondents answering as such.
However, 29% of users indicated that they believe the more powerful a company is, the more capable it is of safely handling their personally identifiable data -- despite the headline-making misuse of Facebook user data last year.
The results could reflect a possible lack of awareness of these issues that we've explored in previous research -- or, it could point to the "paradox of data" that Keaney Anderson mentioned.
How could it be that, not only after Facebook allowed such a misuse of data to take place on its network, but after numerous major corporations -- like Equifax and Marriott -- allowed sensitive consumer information to be compromised, so many users still believe a more powerful company is more capable of safely handling their personal data?
"A belief in the wisdom of crowds," Keaney Anderson explains. "People trust big companies because they trust the crowd. The more people who are using something the better the odds are that it is deserving of that use."
And it's those crowds, she says, that make people more "okay" with continuing to use these products and services.
There's "a belief in the protection of crowds," Keaney Anderson suggests. "On bigger platforms, should something go wrong, the brunt of that impact is likely going to be diffused across many, not a few, so the individual feels less at risk and less alone."
Could that also point to the disparity in what makes people feel secure -- and the indications that what they feel the safest using is also what might frighten them the most?
Possibly -- and that may have more to do with crowds flocking to these platforms because, well, there's nowhere else to find them.
"The dominance of these platforms means a lack of viable alternatives," Keaney Anderson says. "It would take years for another company to catch up with some of these giants. Many may believe that it is easier to reform existing big companies than jump ship to unknown ones."
We'll see how -- or if -- that reformation takes place in 2019.
Originally published Jan 17, 2019 7:00:00 AM, updated December 11 2019