Heaven knows there are reasons enough for anyone to feel miserable about Facebook: the mediation and commodification of ordinary human relationships, the mediation and commodification of every aspect of everyday life, the invasions of privacy, the ‘targeted’ adverts, the crappy photos, the asinine jokes, the pressure to like and be liked, the bullying, the sexism, the racism, the ersatz activism, the ersatz everything. I don’t think this only because I happen to be a miserable git: last year, researchers at the University of Michigan found that ‘Facebook use predicts declines in subjective well-being in young adults’; earlier studies suggested that people felt envious and left out of all the fun stuff their friends were up to. But nobody suspected Facebook was actually setting out to make people miserable on purpose, until a paper published in the Proceedings of the National Academy of Sciences last month revealed that they’d done exactly that.

Well, not exactly. For a week in January 2012, around 155,000 Facebook users had between 10 and 90 per cent of the ‘positive emotional content’ removed from their newsfeeds. The newsfeed is the selection of friends’ posts that you see on Facebook’s homepage when you log in. Another 155,000 had between 10 and 90 per cent of the ‘negative emotional content’ removed. Two control groups of 155,000 people each had an equivalent amount of random content removed. Two controls were required because there’s more than twice as much positive content as negative on Facebook.
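In outline the procedure is easy to sketch. What follows is my own illustration in Python of the logic as described, not Facebook’s code; the condition names and the baseline shares of emotional content are assumptions.

```python
import random

# Illustrative sketch of the four-condition design; condition names and
# share figures are assumptions, not Facebook's code. Positive posts are
# taken to be a little more than twice as common as negative ones, which
# is why each experimental group needs its own volume-matched control.
POSITIVE_SHARE = 0.47   # assumed share of posts containing positive words
NEGATIVE_SHARE = 0.22   # assumed share of posts containing negative words

def filter_feed(posts, condition, omission_rate):
    """Withhold a fraction of qualifying posts from a user's newsfeed.

    posts: dicts with boolean 'positive'/'negative' flags (see the
           word-counting sketch below).
    omission_rate: fixed per user, between 0.10 and 0.90.
    """
    def withhold(post):
        if condition == 'positivity_reduced':
            return post['positive'] and random.random() < omission_rate
        if condition == 'negativity_reduced':
            return post['negative'] and random.random() < omission_rate
        # The controls drop posts at random, regardless of content, at a
        # rate scaled to match the volume their experimental twin removes;
        # because the two shares differ, two separate controls are needed.
        if condition == 'control_for_positive':
            return random.random() < omission_rate * POSITIVE_SHARE
        if condition == 'control_for_negative':
            return random.random() < omission_rate * NEGATIVE_SHARE
        return False

    return [p for p in posts if not withhold(p)]

# e.g. filter_feed(posts, 'positivity_reduced', omission_rate=0.5)
```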

Posts were defined as either negative or positive depending on whether or not they included certain words, determined by a software package called Linguistic Inquiry and Word Count. Its dictionary, painstakingly compiled and refined over many years, contains 406 words associated with ‘positive emotion’, including love, nice and sweet, and 499 associated with ‘negative emotion’, including hurt, ugly and nasty. You can try it out online (www.liwc.net/tryonline.php). I just submitted my opening paragraph for analysis. It gets low scores for ‘self-references’, ‘social words’ and ‘positive emotions’, and high scores for ‘negative emotions’, ‘overall cognitive words’, articles and ‘big words’. Which seems quite accurate, and perversely gratifying (though ‘big words’ aren’t actually all that big: longer than six letters).
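The classification itself amounts to a dictionary lookup. Here is a toy version in Python, with three-word lists standing in for LIWC’s 406 positive-emotion and 499 negative-emotion entries (the real dictionary also matches word stems, which this sketch ignores):

```python
# Toy version of the LIWC-style classification: a post counts as
# 'positive' or 'negative' if it contains at least one word from the
# corresponding list. The example words are the ones cited above.
POSITIVE_WORDS = {'love', 'nice', 'sweet'}
NEGATIVE_WORDS = {'hurt', 'ugly', 'nasty'}

def classify(post):
    """Flag a post as positive and/or negative by dictionary lookup."""
    words = {w.strip('.,!?;:\'"').lower() for w in post.split()}
    return {'positive': bool(words & POSITIVE_WORDS),
            'negative': bool(words & NEGATIVE_WORDS)}

print(classify('What a nice day!'))
# {'positive': True, 'negative': False}
print(classify('That hurt, you nasty man.'))
# {'positive': False, 'negative': True}
```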

Anyway, the three researchers – one from Facebook, one from the University of California and one from Cornell – found that reducing the number of emotionally positive posts in someone’s newsfeed led to a very slight but statistically significant decrease in the number of positive words they used in their own status updates, and a very slight increase in the number of negative words they used. Similarly, seeing fewer negative posts made people slightly less likely to be negative themselves, and more likely to be positive. ‘These results suggest that the emotions expressed by friends, via online social networks, influence our own moods, constituting … the first experimental evidence for massive-scale emotional contagion via social networks.’

The paper anticipates some of the more obvious objections to the validity of its conclusions. One of many comment pieces in the Guardian said that ‘the study doesn’t seem to control for the possibility that people simply match their tone and word choice to that of their peers.’ It may not control for it, but the researchers argue that their results aren’t ‘a simple case of mimicry’ because of the ‘cross-emotional encouragement effect’: i.e. ‘reducing negative posts led to an increase in positive posts’ and vice versa. Still, if all your friends on Facebook seem less cheerful than usual, you may be more inclined to sound a little downhearted yourself, even if you’re actually feeling pretty chipper, so as not to rub it in.

And what about Schadenfreude, and its inverse? Do we really react to good and bad news so straightforwardly? But it isn’t about news. What was being investigated wasn’t the content of the posts, but the emotional state of the people posting. Interestingly, the ‘absence of negativity bias suggests that our results cannot be attributed solely to the content of the post’: ‘Friends’ response to the news … should be stronger when bad news is shown rather than good (or as commonly noted, “if it bleeds, it leads”) if the results were being driven by reactions to news. In contrast, a response to a friend’s emotion expression … should be proportional to exposure.’

What the researchers and their employers didn’t anticipate – at least, not openly – was the storm of negative publicity the paper would generate. The media coverage of the story bears out the adage about bleeding and leading; no one reported the story as ‘Facebook secretly makes 155,000 people happier.’ Russia Today had the best headline, claiming to have revealed ‘Pentagon links to Facebook “mind control” study’, but only because one of the researchers had once received funding from the Department of Defense for another experiment.

Facebook has apologised, after a fashion, using three incompatible excuses: we didn’t really do anything (‘the actual impact on people in the experiment was the minimal amount to statistically detect it,’ Facebook’s Adam Kramer, who carried out the study, wrote on his Facebook page); this is what we do all the time anyway (‘we did this research … because we care about the emotional impact of Facebook and the people that use our product’); and this isn’t what we normally do (‘it was a one-week, small experiment,’ Sheryl Sandberg, Facebook’s chief operating officer, said on Indian TV). No doubt they’re clocking responses and crunching the numbers to see which is the most effective.

The closest to the truth is that they do this sort of thing all the time anyway. The purpose of Facebook is to harvest, organise and store as much personal information about as many people as possible, to be flogged, ready-sifted and stratified, to advertisers: they’re the ones the company really provides a (highly lucrative) service to, while making a great show of providing a free service to the people it likes to call ‘users’. We aren’t Facebook’s customers; we’re its product.

The difference this time is that, rather than keeping the results of the experiment to itself and plugging them into its ever more sophisticated algorithms for monitoring and influencing the way people navigate the site, Facebook published them in a scientific journal. It wasn’t market research, it was a psychology experiment, and as such subject to rigorous ethical controls. The paper says that ‘no text was seen by the researchers’ and that ‘it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.’ Some angry Facebook users disagree. In the UK, the Information Commissioner’s Office is looking into whether or not the study broke any data protection laws.

A lofty piece in the Financial Times suggested that ‘much of the outcry appears to come from ignorance about the degree to which Facebook manipulates – or in their words, curates – the newsfeed every day.’ Your newsfeed is constantly being refined and adjusted to show you what you most want to see – so you’ll stay on the site for longer, hand over more information about yourself, and see more ads that you’re more likely to click on. And if you end up buying anything, some of the money you spend will be passed on to Facebook as advertising revenue. Anyone who’s shocked and appalled by the ‘secret mood experiment’ should be shocked and appalled by Facebook in toto. If only. The tickertape running across the top of the Guardian’s homepage as I write (shortly before 2 p.m. on Thursday, 3 July) has just shown a new story about social media giants. There’s no mention of Mark Zuckerberg as the new Stanley Milgram. But ‘Twitter will double its UK revenues this year to almost £100 million, while Facebook is expected to enjoy a 40 per cent boost to nearly £570 million.’

Letters

Vol. 36 No. 18 · 25 September 2014

Thomas Jones gives a clear account of the paper by Kramer, Guillory and Hancock, in which they manipulated people’s Facebook newsfeeds in order to demonstrate ‘emotional contagion via social networks’ (LRB, 17 July). However, neither he nor the authors emphasise the difference between statistical significance and importance. Statistically significant means that the observed relationship between two variables is due to cause and effect, rather than to chance, at a stated level of probability. One can find a relationship that is significant at a high level of probability by collecting a lot of data, as the authors did. Whether that relationship has an important effect on subjects is another matter. Importance is measured by, for example, Cohen’s effect size. An effect size of 0.2 to 0.3 is described as ‘small’, but the authors found much smaller effect sizes, ranging from 0.001 to 0.02. The conclusion must be that there is a statistically significant relationship between negative (or positive) Facebook posts and negative (or positive) status updates, but that relationship is not an important one.

Steve Lane
Bethesda, Maryland
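Lane’s distinction can be made concrete with a back-of-envelope calculation. This is a sketch only, not the paper’s analysis (which ran weighted Poisson regressions pooled over millions of posts, which is how even the smallest effects cleared the significance bar): with two groups of around 155,000 users, an effect far too small to matter can still be overwhelmingly ‘significant’.

```python
from math import sqrt
from scipy.stats import norm

# Back-of-envelope illustration of Lane's point, not the paper's actual
# analysis: at these sample sizes, statistical significance says almost
# nothing about whether an effect is big enough to matter.
n = 155_000               # users per group, per the article
for d in (0.001, 0.02):   # the range of effect sizes Lane cites
    z = d * sqrt(n / 2)   # two-sample z statistic for equal-sized groups
    p = 2 * norm.sf(z)    # two-sided p-value
    print(f'd = {d:<5}  z = {z:5.2f}  p = {p:.2g}')
# d = 0.001  z =  0.28  p = 0.78
# d = 0.02   z =  5.57  p = 2.6e-08
```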

Vol. 36 No. 20 · 23 October 2014

While I would join Steve Lane in distinguishing between ‘statistically significant’ and ‘important’, I hesitate to follow his mortal leap from ‘significant’ to ‘cause and effect’ (Letters, 25 September). A statistically significant regularity in a properly constituted sample merely enables us to say that we would probably find a similar regularity in the population from which that sample has been drawn. True, a regularity begs an explanation, and attributing it to one or more causes is a common move. However, as Hume and others point out, the empirical evidence (in this case the sample) shows us only the regularity, not the ‘cause’. In the social sciences, explanations of regularities often make no reference to ‘causes’, but rather to reasons, motives, power etc, usually in the context of a broader culture. Such cultural regularities are in principle not universal and eternal, but changeable, especially under forceful criticism – unlike (we suppose) the causes involved in physical sciences.

Mike Hall
University of Brighton
