Open for Debate

Social media: a viral promoter of social ills?

8 August 2022

Public discourse is the currency in which we exchange our attitudes and beliefs. Social media has proven a double-edged sword with respect to this exchange. On the one hand, it has enabled debate and contact between citizens on a scale previously unimaginable. On the other, this ability to exchange ideas has been all too easily subverted by anti-democratic forces. Indeed, the single greatest problem posed today to the democratic norms of free speech and tolerance is the manner in which, and the speed and scale at which, these forces exploit group-identity cleavages, manipulate prejudice and bias, stoke fear and hatred, spread propaganda and misinformation, and incite harassment and violence. These problems are by no means confined to the online world. However, by enabling them to proliferate and amplify, social media acts as a booster of social ills.

There is ample evidence that social media platforms have played a facilitating role in spreading hateful content and oppressive ideologies, thereby contributing to the recent upsurge of extreme nationalist and nativist ideology in mainstream politics. Social media use has been instrumental in stoking group divisions through the spread of hateful content based on group identity (race, gender, ethnicity, nationality, religion, sexual orientation, disability, immigration status, etc.). There has been a growth in misogynistic websites, blogs and forums promoting gender-based hate, and a surge in hate crimes related to sexual orientation and gender identity. Notably, the systematic use of propaganda on Facebook and Twitter was instrumental in the ethnic cleansing of the Rohingya in Myanmar.

Social media has also been instrumental in propagating oppressive ideological narratives. A rise in online harm has been found to correlate with a rise in violence and hate crimes. Equally, the widespread display of bigotry, bullying and harassment on Facebook and Twitter has been linked to social unrest. As Frances Haugen testified, Facebook has failed to stop the spread of hate speech. For example, the Unite the Right rally in Charlottesville (2017) originated as a Facebook event in which many fascist and racist hate groups used the platform to incite violence. Less mainstream platforms such as Gab, Parler and Telegram, and imageboards such as 4chan and 8chan, have been a playground for individuals with extreme right-wing views, who openly share hateful content and build communities of hate in discussion fora. This has led to a rapid propagation of extreme viewpoints and to radicalisation, which often correlates with violent action (e.g. the mass shootings in Charleston in 2015, San Diego in 2019 and Christchurch in 2019, and the 6 January 2021 US Capitol insurrection).

There is also evidence that social media platforms have been used as a launchpad for spreading propaganda, mis- and disinformation, fake news and conspiracy theories, and for fuelling political polarisation, with a view to shaping and steering public opinion, whether in the context of voting, as in the 2016 US presidential election campaign and the Brexit referendum, or in the context of the Covid-19 pandemic. This is damaging both to public health and to the moral fabric of society.

What are the conditions that turn social media platforms into such a turbocharged transmission machine?

One way to think about this is that the difference in the medium through which content is transmitted (online rather than offline) comes with certain factors that enable social harms to spread and proliferate at unprecedented speed and scale. In the past, bigotry spread through person-to-person conversation, through print media and through broadcast media. Each successive medium reaches an ever larger audience; hence the deployment of broadcast and print media by those seeking to spread hate. However, online media offers something extra. What?

First, online media levels speaker authority. Relative to traditional print or broadcast media, a Facebook post or a tweet has the same form and footprint whether it comes from a trusted news source or from a single person. In the past, a racist seeking wide circulation might have had to write a letter to a local newspaper; its publication was subject to editorial choice, and it was given a different status and prominence from the paper's own editorial. This inequality of publication power has been considerably levelled by the internet and social media. A single individual can author a post or video easily using widely available technology. This democratisation of broadcast speech is powerful: it levels speakers' authority and perceived trustworthiness, making it easier to drown out an authoritative information source. This promotes the viral spread of bigotry because it weakens the authority of, and the defence provided by, the institutions of civil society.

Second, online media, particularly that involving advertising, incentivises both platforms and actors in ways that prolong exposure to hate speech and increase its intensity. In the first case, the algorithms employed to promote content, making one speech act more prominent than another in your feed, promote content similar to that already consumed. This leads to content homogenisation: if someone has viewed hate speech a few times, their feed will rapidly come to be dominated by it rather than by opposing views. Platform algorithms select information that promotes longer engagement online and thus increases revenue from advertising; there is therefore an incentive on the part of platforms to increase exposure time to hate speech. In addition, posters themselves are incentivised to make speech acts that unleash outrage, because such speech acts, in particular those at the extreme end of the Overton window, tend to circulate faster through networks and so are viewed and shared more. On some platforms, such as YouTube, posters are also incentivised monetarily, since they are paid by the number and duration of views of their content.
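To make the homogenisation loop concrete, here is a minimal, hypothetical sketch in Python. Everything in it is an illustrative assumption rather than any platform's actual system: the topic labels, the higher baseline engagement assigned to "outrage" content, and the scoring rule that boosts posts resembling past views. The point is only to show how ranking by engagement plus similarity, fed back through viewing history, quickly concentrates a feed on one kind of content.

    from collections import Counter
    import random

    random.seed(42)

    TOPICS = ["news", "sport", "outrage"]  # "outrage" stands in for hateful/extreme content

    def make_post():
        topic = random.choice(TOPICS)
        # Assumption: outrage-style content attracts more baseline engagement.
        engagement = random.uniform(0.5, 1.0) if topic == "outrage" else random.uniform(0.0, 0.7)
        return {"topic": topic, "engagement": engagement}

    def score(post, history):
        # Rank by predicted engagement, boosted by similarity to past views.
        seen = sum(history.values()) or 1
        similarity = history[post["topic"]] / seen  # share of past views on this topic
        return post["engagement"] * (1.0 + 2.0 * similarity)

    history = Counter({"outrage": 2, "news": 1})  # the user clicked a few extreme posts
    for round_no in range(5):
        candidates = [make_post() for _ in range(200)]
        feed = sorted(candidates, key=lambda p: score(p, history), reverse=True)[:10]
        history.update(p["topic"] for p in feed)  # viewing the feed reinforces the loop
        print(f"round {round_no}:", dict(Counter(p["topic"] for p in feed)))

Running this, the feed is dominated by "outrage" posts within a round or two: the ranking rewards what was already viewed, viewing updates the history, and the loop closes, which is the feedback dynamic described above.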

Third, and most obviously, social media allows the rapid creation of communities that are not geographically based. The network infrastructure can create a hospitable environment for the spread of harmful content: it makes it easy to assemble the critical mass of people needed to establish a thriving community of bigots. This is important because bigotry does not thrive in isolation; it requires constant reinforcement from a community of bigots. These communities support individuals whose bigoted beliefs are questioned by others, so that they maintain adherence to the bigoted belief and attitude system.

Finally, the network structure of social media platforms satisfies our preference to belong to an in-group and to mix with like-minded others. However, this very feature has the effect of limiting the range of voices that people hear, alienating and excluding perceived out-groups, and creating a bubble effect in which perceived in-groups reinforce each other's beliefs in the absence of challengers and dissenters. This becomes particularly dangerous in the transmission of harmful content such as hate speech, propaganda and misinformation, because it is but a small step from ideas to action.

Picture: Coronavirus, from the CDC Public Health Image Library (PHIL ID #23354)