Types of Social Influence and Manipulation Without Intention
1 April 2024

In the previous two posts of this series on online manipulation, I outlined three developments that warrant closer attention to digital influence (here), and argued that a peculiar result is the threat of unwitting problematic forms of influence due to the corrupting effect of the digital influence landscape (here).
In this blog post, I delve into the complex category of manipulation and why it deserves special attention in our understanding of social influence in the digital age. I begin by outlining two common but ill-conceived ways of thinking about social influence before I argue that the digital influence landscape harbours more and more influences that fall into the grey area between persuasion and coercion.
A Misconception Surrounding Influence
There’s a common misconception that all forms of social influence are inherently and categorically bad. For example, commerce or politics are sometimes dismissed wholesale as illegitimate because they are ‘attempts to influence people’.
But not all influence is bad per se. As Sunstein points out, for example, a passenger who warns the driver of an obstacle on the road is clearly influencing the driver, but not in a problematic way (Sunstein 2016). Similarly, a politician might influence you to vote for him, but that in itself is not an issue (it is, in a way, the point of democracy!). What matters is how the politician influences you to reach that goal.
Moreover, the wholesale dismissal of ‘influence’ is not only wrong but also dangerous because it suggests that there is no relevant difference and thus no need to choose between influence that empowers people and influence that harms them. The essential debate about what type of influence we want in our lives, as individuals and as a society, which I discussed in the previous blog would become obsolete in this bleak, nihilistic picture. This would be a mistake.
Therefore, to gain a more nuanced perspective, it’s crucial to draw boundaries that demarcate and categorise types of social influence. Moral concepts like manipulation, deception, and coercion provide a framework to distinguish between different types of influence and their relative (dis-)value.
Influence as a Broad Category
Instead of labelling all social influence as negative, we should view “social influence” as a broad, general category that encompasses various types of influence like deception, coercion, and manipulation (Coons and Weber 2014).
For example, it is better to shift focus from the general question of whether the politician tries to influence you to the more specific question of how he tries to influence you. Did he use coercive, manipulative, or deceptive influence, or a combination of these?
The Benefits of Talking about Types of Social Influence
If you are interested in the ethics of a specific, concrete instance of influence – for example, whether Peter’s misleading email to you yesterday was OK, or whether YouTube’s recommender system is ethically problematic – then attending to the nuances of different types of social influence may seem like an unnecessary distraction. After all, you might be able to describe and evaluate the case without worrying too much about whether the harm you are describing counts as ‘manipulation’ or ‘deception’ (Barnhill 2022). What matters, it may seem, is that you uncover and describe the harm that arises from the interaction.
However, there are also clear benefits to paying close attention to types of social influence. Concepts like ‘manipulation’ carry with them various moral and descriptive connotations. They are theoretically and practically helpful generalisations insofar as they immediately tell us what specific instances of social influence have in common in virtue of their type (Coons and Weber 2014). So, suppose you establish that, say, YouTube’s recommender system is manipulative. In that case, we immediately learn something about its descriptive features and ethical status.
Moreover, concepts like manipulation can be useful generalisations for highlighting descriptive and evaluative commonalities across concrete instances of influence that look widely different at first glance. Below, I list several examples of purportedly manipulative influence; while they differ in many respects, they seem to share their ‘manipulativeness’.
Therefore, identifying types of social influence allows us to formulate appropriate ethical and regulatory views that generalise beyond concrete instances, which are too many to evaluate individually. See, for example, the sprawling debates on the ethics of personalisation, nudging, microtargeting, framing, steering, incentivising, gaslighting, and so on. Dedicated investigations are essential, but there is also a place for a generalising view on the factors that unite these influences: their manipulativeness. The generalising perspective means we can navigate the complex landscape of social influence more efficiently.
No Accidents in Social Influence
We can more easily distinguish relevant types of social influence when we set aside merely accidental influence and focus on non-accidental social influence.
In the typical case of social influence, we look at the intentions of the would-be manipulator. What did Peter have in mind when he sent you that email? Did he consciously lead you astray? Why did the politician appeal to your irrational fear of foreigners to win your vote? And did the designer intentionally use a deceptive pattern to lure you into buying a product you don’t need (Brignull 2023)? The agent’s motives often determine whether we have a case of persuasion, manipulation, or coercion.
However, the rise of digital influence asks us to rethink this conceptualisation and its narrow focus on intentions. While I stand by the point that we should focus on non-accidental influence, I suggest that we widen our perspective to recognise cases where non-intentional social or technical functions explain the non-accidentality of the influence (Klenk 2022, 2024).
For example, the influence mediated by YouTube’s recommender system aims to maximise user engagement; in that sense, it is non-accidental. Metaphorically, we can say that the recommender system wants you to spend more time on the site. The metaphor is backed up by a functional explanation of what the system optimises for.
The focus on non-accidental, but not necessarily intentional, influence allows for a unified account of influence exerted by individuals, groups, and automated systems such as recommender systems or generative AI (Klenk 2020). What’s left is to segment the area of non-accidental influence into different types.
Conclusion and Outlook
The main points of this post are that not all influence is wrong, that we should distinguish between different types of social influence, like persuasion and coercion, and that, in doing so, we focus on non-accidental forms of social influence.
In the next post (link) of this series, I will argue that the grey area between persuasion and coercion seems to be expanding in the digital age. Insofar as manipulation is located somewhere in that terrain, there’s a greater risk of manipulation and a more pressing need to delimit its boundaries.
Barnhill, A. (2022). How philosophy might contribute to the practical ethics of online manipulation. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation (pp. 49–71). New York, NY: Routledge.
Brignull, H. (2023). Deceptive Patterns: Exposing the tricks tech companies use to control you. Harry Brignull.
Coons, C., & Weber, M. (2014). Manipulation: Investigating the core concept and its moral status. In C. Coons & M. Weber (Eds.), Manipulation: Theory and practice (pp. 1–16). Oxford: Oxford University Press.
Klenk, M. (2020). Digital Well-Being and Manipulation Online. In C. Burr & L. Floridi (Eds.), Ethics of Digital Well-Being: A Multidisciplinary Perspective (pp. 81–100). Cham: Springer.
Klenk, M. (2022). Manipulation, injustice, and technology. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation (pp. 108–131). New York, NY: Routledge.
Klenk, M. (2024). Ethics of generative AI and manipulation: a design-oriented research agenda. Ethics and Information Technology, 26, 1–15. doi:10.1007/s10676-024-09745-x.
Sunstein, C. R. (2016). Fifty Shades of Manipulation. Journal of Marketing Behavior. doi:10.2139/ssrn.2565892.
Picture credit: https://pixabay.com/vectors/cranium-head-human-male-man-2099119/