
Open for Debate

The Rise of Digital Manipulation

30 March 2024

In the previous post of this series on Unpacking Manipulation in the Digital Age, I argued that problematic forms of influence can be unintentional but not accidental, and that the digital influence landscape encourages such forms of influence.

In this post, I will illustrate this claim with examples of problematic digital influence that do not fit neatly into the categories of persuasion and coercion. In effect, they show that we need a substantive account of manipulation to fully understand their significance.

Clarifying Persuasion, Coercion, and Manipulation

Since influence itself is not a problem but certain ways of influencing are, we need a way to demarcate the different types of influence. But how do we draw the boundaries around types like persuasion and manipulation?

A very influential model originates in the debates about advertising and medical interventions: the continuum model (Beauchamp 1984; Faden et al. 1986). The continuum model suggests that persuasion and coercion form two opposite poles and that manipulation falls on the continuum between them. For example, Beauchamp (1984) holds that manipulation is any social influence that does not use reasons yet leaves the influenced person's options open.

However, the continuum model is problematic because both persuasion and coercion are contested concepts, too unstable to anchor an understanding of manipulation in between. For example, psychologists view 'persuasion' descriptively, as a synonym for 'influence', while philosophers emphasise its rational and argumentative nature. Coercion, too, is a contested concept. With the goalposts at both poles shifting, the area between them becomes hazy and elusive, too.

Moreover, there is ample space between persuasion and coercion that, contrary to the continuum model, does not seem to be occupied by manipulation (Noggle 1996). For example, dressing up for a job interview to leave a good impression is neither persuasion nor coercion, nor – contrary to the continuum model – an act of manipulation (Noggle 2022). We might thus look at a particular instance of influence, such as a mental health AI assistant that appeals to social stereotypes to influence its patients, and find that it fits neither persuasion nor coercion. But because the grey area between persuasion and coercion is vast, and not always manipulation, we have not learned much about the ethical and descriptive features of the AI assistant's influence.

Therefore, the failure of the simple continuum model warrants more attention to the grey area between persuasion and coercion to determine the boundaries of manipulation.

Examples of Grey Area Influence

At this point, we can see why the digital age requires understanding manipulation better. It is not necessarily that online manipulation is a new phenomenon (Klenk and Jongepier 2022). Rather, the subset of social influence that doesn’t neatly fit the categories of coercion or persuasion has gained significance in the digital age. Here are some examples:

1. Dark patterns: User experience design choices like the so-called roach motel are neither persuasion nor coercion. The roach motel, or 'hard to cancel' design pattern, hinders users from performing certain actions by design, such as cancelling a subscription to an online service (Brignull 2023). In doing so, the design exploits their habits and unreflective decision-making, but users are neither persuaded with reasons, nor deceived, nor are their options curtailed in a way that seems sufficient for coercion (Bongard-Blanchy et al. 2021).

2. Microtargeting: Using persuasion profiles to tailor messages to an individual or group has raised several ethical concerns and worries about manipulation, and it is a form of influence in the grey area, being neither coercion nor persuasion (Jongepier and Wieland 2022).

3. Propaganda: Political messaging that conveys no false or misleading information can nevertheless seem tendentious and problematic. Forms of political communication like propaganda fit neither the persuasion nor the coercion category, and the rise of computational propaganda arguably increases the power and scale of this grey area influence.

4. Value-laden digital infrastructures: By encouraging or discouraging different mental states and behaviours online, designed artefacts co-shape our behaviour (together with human dispositions, existing social norms, etc.), even if the particular result of the influence was not intended by any individual involved in the design process. The ways in which extreme beliefs or crass behaviours are thus afforded by, for example, social media are not forms of persuasion, deception, or coercion (Klenk 2021), yet they raise moral concerns.

Recognising the Importance

This is not an exhaustive list of problematic forms of digital influence. However, these examples suggest that we must pay more attention to these types of grey influence. They are increasing in number and sophistication (see blog post 1), and they increasingly occupy a morally and theoretically grey zone (see this post).

As we navigate this intricate terrain between persuasion and coercion, it becomes clear that the subset of social influence outside these traditional categories is expanding. Recent legislation, particularly in the EU, Britain, and the US, underscores the importance of understanding and addressing these nuances in social influence (Faraoni 2023).

Conclusion and Outlook

I have shown that influence techniques that a) proliferate in the digital age and b) are a cause of concern do not fall neatly into the categories of persuasion or coercion, and that the grey area between the two categories is insufficiently charted. This warrants more attention to understanding manipulation itself.

The elephant in the room is, of course, the question of how to identify manipulation after all. In the next post, I will – in true philosophical fashion – first tell you why two leading approaches are problematic when applied to digital influence, before presenting an improved account in this series's sixth and final post.

References

Beauchamp, T. L. (1984). Manipulative Advertising. Business and Professional Ethics Journal, 3, 1–22.

Bongard-Blanchy, K., Rossi, A., Rivas, S., Doublet, S., Koenig, V., & Lenzini, G. (2021). "I am Definitely Manipulated, Even When I am Aware of it. It's Ridiculous!" – Dark Patterns from the End-User Perspective. In W. Ju, L. Oehlberg, S. Follmer, S. Fox, & S. Kuznetsov (Eds.), DIS '21: Designing Interactive Systems Conference 2021, Virtual Event, USA, 28 June – 2 July 2021 (pp. 763–776). New York, NY: Association for Computing Machinery. doi:10.1145/3461778.3462086.

Brignull, H. (2023). Deceptive Patterns: Exposing the tricks tech companies use to control you. Harry Brignull.

Faden, R. R., Beauchamp, T. L., & King, N. M. P. (1986). A history and theory of informed consent. New York, Oxford: Oxford University Press.

Faraoni, S. (2023). Persuasive Technology and computational manipulation: hypernudging out of mental self-determination. Frontiers in Artificial Intelligence, 6, 1216340. doi:10.3389/frai.2023.1216340.

Jongepier, F., & Wieland, J. W. (2022). Microtargeting people as a mere means. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation. New York, NY: Routledge.

Klenk, M., & Jongepier, F. (2022). Manipulation Online: Charting the field. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation (pp. 15–48). New York, NY: Routledge.

Noggle, R. (1996). Manipulative Actions: A Conceptual and Moral Analysis. American Philosophical Quarterly, 33(1), 43–55.

Noggle, R. (2022). The Ethics of Manipulation. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy (Summer 2022 Edition).

Picture: https://pixabay.com/illustrations/background-texture-grunge-paper-1587534/