Open for Debate

Types of Social Influence and Manipulation Without Intention

1 April 2024

In the previous two posts of this series on online manipulation, I outlined three developments that warrant closer attention to digital influence (here), and argued that a peculiar result is the threat of unwitting problematic forms of influence due to the corrupting effect of the digital influence landscape (here).

In this blog post, I delve into the complex category of manipulation and why it deserves special attention in our understanding of social influence in the digital age. I begin by outlining two common but ill-conceived ways of thinking about social influence before I argue that the digital influence landscape harbours more and more influences that fall into the grey area between persuasion and coercion.

A Misconception Surrounding Influence

There’s a common misconception that all forms of social influence are inherently and categorically bad. For example, commerce or politics are sometimes dismissed wholesale as illegitimate because they are attempts to influence people.

But not all influence is bad per se. As Sunstein points out, for example, a passenger who warns the driver of an obstacle on the road is clearly influencing the driver, but not in a problematic way (Sunstein 2016). Similarly, a politician might influence you so that you vote for him, but that in itself is not an issue (it is, in a way, the point of democracy!). What matters is how the politician influences you to reach that goal.

Moreover, the wholesale dismissal of ‘influence’ is not only wrong but also dangerous because it suggests that there is no relevant difference, and thus no need to choose, between influence that empowers people and influence that harms them. The essential debate about what type of influence we want in our lives, as individuals and as a society, which I discussed in the previous blog post, would become obsolete in this bleak, nihilistic picture. This would be a mistake.

Therefore, to gain a more nuanced perspective, it’s crucial to draw boundaries that demarcate and categorise types of social influence. Moral concepts like manipulation, deception, and coercion provide a framework to distinguish between different types of influence and their relative (dis-)value.

Influence as a Broad Category

Instead of labelling all social influence as negative, we should view “social influence” as a broad, general category that encompasses various types of influence like deception, coercion, and manipulation (Coons and Weber 2014).

For example, it is better to re-focus from the general question of whether the politician tries to influence you to the more specific question of how he tries to influence you. Did he use coercive, manipulative, or deceptive influence, or a combination of those?

The Benefits of Talking about Types of Social Influence

If you are interested in the ethics of a specific, concrete instance of influence – for example, whether Peter’s misleading email to you yesterday was OK, or whether YouTube’s recommender system is ethically problematic – then attending to the nuances of different types of social influence may seem like an unnecessary distraction. After all, you might be able to describe and evaluate the case without worrying too much about whether the harm you are describing classifies as manipulation or deception (Barnhill 2022). What matters, it may seem, is that you uncover and describe the harm that arises from the interaction.

However, there are also clear benefits to paying close attention to types of social influence. Concepts like manipulation carry with them various moral and descriptive connotations. They are theoretically and practically helpful generalisations insofar as they immediately tell us what specific instances of social influence have in common in virtue of their type (Coons and Weber 2014). Suppose, for example, that you establish that YouTube’s recommender system is manipulative. We then immediately learn about some of its descriptive features and ethical status.

Moreover, concepts like manipulation can be useful generalisations to highlight descriptive and evaluative commonalities across concrete instances of influence that look widely different at first glance. Below, I list several examples of purportedly manipulative influence, and while they differ in many aspects, they seem to share their manipulativeness.

Therefore, identifying types of social influence allows us to formulate appropriate ethical and regulatory views that generalise beyond concrete instances, which are too many to evaluate individually. See, for example, the sprawling debates on the ethics of personalisation, nudging, microtargeting, framing, steering, incentivising, gaslighting, and so on. Dedicated investigations are essential, but there is also a place for a generalising view on the factors that unite these influences: their manipulativeness. The generalising perspective means we can navigate the complex landscape of social influence more efficiently.

No Accidents in Social Influence

We can more easily distinguish relevant types of social influence when we set aside merely accidental influence and focus on non-accidental social influence.

In the typical case of social influence, we look at the intentions of the would-be manipulator. What did Peter have in mind when he sent you that email? Did he consciously lead you astray? Why did the politician appeal to your irrational fear of foreigners to win your vote? And did the designer intentionally use a deceptive pattern to lure you into buying the product you don’t need (Brignull 2023)? The agent’s motives often determine whether we have a case of persuasion, manipulation, or coercion.

However, the rise of digital influence asks us to rethink this conceptualisation and narrow focus on intentions. While I stand by the point that we should focus on non-accidental influence, I suggest that we widen our perspective to recognise cases of non-intentional social or technical functions that explain the non-accidentality of the influence (Klenk 2022, 2024).

For example, the influence mediated by YouTube’s recommender system aims to maximise user engagement; in that sense, it is non-accidental. Metaphorically, we can say that the recommender system wants you to spend more time on the site. The metaphor is backed up by a functional explanation of what the system optimises for.

The focus on non-accidental, but not necessarily intentional, influence allows for a unified account of influence exerted by individuals, groups, and automated behaviour exerted by recommender systems or generative AI (Klenk 2020). What’s left is to segment the area of non-accidental influence into different types.

Conclusion and Outlook

The main points of this post are that not all influence is wrong, that we should distinguish between different types of social influence, like persuasion and coercion, and that, in doing so, we focus on non-accidental forms of social influence.

In the next post (link) of this series, I will argue that the grey area between persuasion and coercion seems to be expanding in the digital age. Insofar as manipulation is located somewhere in that terrain, there’s a greater risk of manipulation and a more pressing need to delimit its boundaries.

References

Barnhill, A. (2022). How philosophy might contribute to the practical ethics of online manipulation. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation (pp. 49–71). New York, NY: Routledge.

Brignull, H. (2023). Deceptive Patterns: Exposing the tricks tech companies use to control you. Harry Brignull.

Coons, C., & Weber, M. (2014). Manipulation: Investigating the core concept and its moral status. In C. Coons & M. Weber (Eds.), Manipulation: Theory and practice (pp. 1–16). Oxford: Oxford University Press.

Klenk, M. (2020). Digital Well-Being and Manipulation Online. In C. Burr & L. Floridi (Eds.), Ethics of Digital Well-Being: A Multidisciplinary Perspective (pp. 81–100). Cham: Springer.

Klenk, M. (2022). Manipulation, injustice, and technology. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation (pp. 108–131). New York, NY: Routledge.

Klenk, M. (2024). Ethics of generative AI and manipulation: a design-oriented research agenda. Ethics and Information Technology, 26, 1–15. doi:10.1007/s10676-024-09745-x.

Sunstein, C. R. (2016). Fifty Shades of Manipulation. Journal of Marketing Behavior. doi:10.2139/ssrn.2565892.

Picture: https://pixabay.com/vectors/cranium-head-human-male-man-2099119/