Rethinking Manipulation: The Indifference View of Manipulation
15 April 2024

In the series ‘Unpacking Manipulation in the Digital Age’, the previous five posts covered the rise of problematic forms of digital influence (Posts 1, 2, and 3), the need to demarcate different types of ethically problematic influence (Post 4), and, in particular, the lacuna in our understanding of manipulation (Post 5).
In this sixth and final post, I will discuss my “Indifference View of Manipulation” (Klenk 2020, 2022a, 2022b, 2024). The indifference view defines manipulation as influence aimed at a particular goal, where the chosen means (the how of the influence) are not explained by the purpose of revealing reasons to the target.
I will briefly introduce the two main elements of the view before discussing its advantages and pointing to open questions in the final section.
Purposeful Influence
The first main element of the indifference view is the focus on purposeful influence. In blog post 3, I discussed the importance of focusing on non-accidental influence. At least prima facie, influences with a particular goal are ethically more relevant than those that occur accidentally.
The indifference view suggests a functional interpretation of the fact that manipulation is not an accident. Accordingly, any instance of manipulation is an instance of social influence explained by its function of affecting other agents.
For example, when I want you to hold the door for me, I shout out, ‘Hold the door!’ and my exclamation counts as non-accidental in virtue of being explained by my aim to have a particular effect on you. I want you to perform a particular action: hold the door for me.
However, we need to broaden our perspective beyond the narrow focus on human intention to account for the interesting non-intentional phenomena in the realm of digital influence. I suggest that many of the influences that strike us as problematic online, like microtargeting and others discussed in blog post 4 of this series, are non-accidental despite being non-intentional. They are non-accidental in virtue of having a particular function (see Pepp et al. 2022 for a related perspective, and their recent blog here).
For example, the ‘hard to cancel’ dark pattern, also known as the roach motel, which makes it hard for users to cancel a subscription, has the function of locking users in. That is the explanation of the design. Similarly, a recommender system may have the function of keeping users engaged to consume their attention and time. In some cases of manipulation, it may seem as if an individual or group specifically intended this outcome. For example, Sean Parker, Facebook’s ex-president, famously said they knew they were creating a product set to “exploit a vulnerability in human psychology.” Facebook’s user interface and the underlying algorithms were supposedly designed with the guiding question being “How do we consume as much of your time and conscious attention as possible?” (The Guardian 2017). But even in this case, the connection between Parker’s or ‘the company’s’ intention and the actual influence on an individual is tenuous and probably opaque. What can be said, however, is that the website design fulfils a certain function, and this can be the case even if no individual, group of individuals, or the technical artefacts themselves intended these outcomes (Klenk 2022a).
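To fix ideas, here is a deliberately simple Python sketch of the asymmetry behind a roach-motel design. Nothing here is drawn from a real product (the flow names and step counts are invented); the point is only that the lock-in function can be read off the design itself, without reference to anyone’s intentions.

```python
# Purely illustrative sketch: the asymmetry that gives a 'roach motel' design
# its lock-in function, independently of anyone's intention. Flow names and
# step counts are hypothetical.

SUBSCRIBE_FLOW = ["click 'Start free trial'"]  # one step in

CANCEL_FLOW = [  # many steps out
    "log in",
    "open account settings",
    "find the 'membership' sub-menu",
    "click 'cancel'",
    "sit through retention offers",
    "confirm by phone or chat",
]

def friction(flow: list[str]) -> int:
    """A crude proxy for how hard a flow is to complete: its number of steps."""
    return len(flow)

# The design fulfils a lock-in function whenever leaving is made much harder
# than joining, whatever any individual designer may have intended.
print(friction(CANCEL_FLOW) > friction(SUBSCRIBE_FLOW))  # True
```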
The indifference account includes these forms of influence in the pool of potentially manipulative social influences because it looks at non-accidentality in functional rather than intentional terms.
Effective but Indifferent
The second key element of the indifference view of manipulation is its focus on the reason-revealing quality of the influence. Many types of social influence are explained by the aim to be effective. But what distinguishes manipulation?
In contrast to other views of manipulation, the indifference view does not ask what manipulators do (e.g. aim to influence others behind their backs) but what they lack. My answer: Their chosen means of influence is not explained by the objective of revealing reasons to their target.
Consider what a typical manipulator does. They want something from you. And they shrewdly select whatever method works best. Now, philosophers have for some time recognised that we cannot distinguish manipulation based on the mere fact that someone aims to influence you (see blog post 3), nor on the value of the goal (there is both manipulation for good ends and for bad ends), nor on the specifics of the means of influence itself (there is both manipulation that uses emotion and manipulation that does not). We even know that manipulators may sometimes use very good reasons in their manipulative deeds (see post 4).
For example, Gorin (2014) describes the case of a politician who uses rational arguments to convince her voters, not because she genuinely cares about the truth of her arguments, but only because she reckons that this is the most promising – effective – strategy to win the election. This makes me think of a trickster like Trump suddenly discovering (counter to fact) that voters are rational and fully informed: he would surely, if he could, switch his tactics around. There is a lingering sense, however, that he would still be acting manipulatively insofar as he would only care about the effectiveness of his tactics (Klenk 2020).
What makes manipulation distinct, according to the indifference view, is the combination of striving for effective influence while being indifferent to the reason-revealing quality of the influence (Klenk 2022b). The indifference view thus portrays manipulation as a deviation from the ideal of persuasive influence, which is to influence others by showing them why they have reasons for a particular action, belief, desire, or emotion.
In particular, I suggest that manipulators want to influence others effectively, which, by itself, is not a problem (see blog post 3). The problem is that manipulators are completely indifferent to the way in which they achieve their goal. This is nicely illustrated by a recommender system like the ‘watch next’ algorithm on YouTube. The system optimises for a specific goal (it has the function of achieving effective influence and is thus non-accidental), and it is in an important sense unconstrained in how it achieves that goal. In particular, there is no sense in which the system’s output is explained by the goal of revealing reasons – it does not, metaphorically speaking, ‘stop to ask’ whether offering an extremist video to the user reveals any reasons to the user for watching that video.
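As a toy illustration (not a description of any actual system), consider the following sketch of an engagement-only ranker. The names `Video`, `predicted_watch_time`, and `rank_watch_next` are invented; the point is simply that the objective contains no term that asks whether a recommendation reveals reasons to the user.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Video:
    title: str
    predicted_watch_time: float  # a model's estimate of minutes watched


def rank_watch_next(candidates: list[Video]) -> list[Video]:
    # The chosen means of influence is explained only by effectiveness:
    # maximise expected watch time. A reason-revealing criterion would have
    # to appear here as an extra term or constraint, and it does not.
    return sorted(candidates, key=lambda v: v.predicted_watch_time, reverse=True)


if __name__ == "__main__":
    pool = [
        Video("Balanced explainer", predicted_watch_time=4.0),
        Video("Outrage-bait clip", predicted_watch_time=9.5),
    ]
    print([v.title for v in rank_watch_next(pool)])
    # -> ['Outrage-bait clip', 'Balanced explainer']
```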
The indifference characteristic of manipulation is also on display when we consider human-generated influence such as fake news. Sharing fake news is typically decoupled from any concern with communicating truth. But rather than being a form of deception – where you’d want your targets to believe a falsity – sharing fake news seems, on the indifference view, to be much more similar to manipulation. This is because sharing fake news often serves a signalling function – to demonstrate that you belong to a particular group (Bergamaschi Ganapini 2023). Choosing what news to share will thus not be explained by the aim of revealing reasons, but by the aim to effectively communicate group membership.
While I called this feature of manipulation ‘carelessness’ in my earlier work (Klenk 2020), that label too often evoked the misleading idea that manipulators are careless, lazy folks. This image starkly contrasts with the scheming, clever, insightful trickster who carefully lays out their plan to manipulate you.
What matters, however, is not whether the manipulator is committed to her goal (the target outcome of the influence) or whether she carefully selects the means of influence according to any criterion, but whether her chosen means of influence are explained by the goal to reveal reasons to the influence target. That is, whether the specific criterion of revealing reasons plays a decisive role in her choice of influence.
Indifference in the Digital Influence Landscape
The indifference view has two notable advantages when it comes to understanding manipulation in the digital age.
First, it allows us to recognise manipulation when detecting intentions is challenging. For example, a fraudster may use generative AI to fabricate a distressing message; what explains the choice of message is the goal of successful fraud rather than an intention to deceive. We rarely encounter what we may call Augustinian fraudsters: fraudsters who mislead and deceive for the sake of deceiving. There is thus a sense in which a fraudster does not really care about the deceptive effect of his message per se. What he cares about is the success of his scheme, which could be, for example, receiving the requested money from his victim. The indifference view neatly captures this.
Similarly, the ostensibly manipulative influence exerted by various recommender systems that optimise for user engagement can be classified as manipulative by the indifference view irrespective of inquiries into the intentions of the designers of said systems. What matters is that the system offers choices to the users that are not explained by the attempt to reveal reasons to users (Klenk 2020).
Second, the indifference view allows us to get a better understanding of unwitting and emergent manipulation. It frees attention from the allegedly bad intentions of the manipulator and allows us to shift perspective to the criteria that influencers, wittingly or unwittingly, omit in their choice of influence. The dynamics of the attention economy, for example, encourage users to pick a more crass, controversial image over another, which may be seen as manipulative even though they otherwise lack nefarious intentions.
Relatedly, the design of a website may be inspired by ill-conceived design principles that aim to hook users, but an accurate explanation of a particular designer’s behaviour may be better given in terms of their indifference to the ideals of good influence (specifically, the aim to reveal reasons) rather than their intention to mislead or harm the user.
This is a particular advantage if we are trying to understand the dynamics of digital influence, driven by the proliferation of social influence, the rise of informationally empowered influence and the competition it engenders, and the rise of AI-mediated influence.
Clarification and Future Work
Naturally, the indifference criterion also raises critical questions, including the need for a more detailed specification of the ‘ideal state’ manipulators are indifferent to. Investigating what it takes to reveal reasons to interlocutors offers a promising starting point for further research.
Operationalisation also requires further work. Counterfactual analysis emerges as an initial idea: comparing the method of influence actually chosen with the method that would have been chosen had the aim been to reveal reasons.
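As a rough, assumption-laden sketch of what such a counterfactual test might look like in code: `choose_influence` and the two objective functions below are hypothetical placeholders, and a serious operationalisation would need substantive accounts of both effectiveness and the reason-revealing quality of an influence.

```python
from typing import Callable, Iterable


def choose_influence(methods: Iterable[str],
                     objective: Callable[[str], float]) -> str:
    """Model the influencer as picking the method that maximises an objective."""
    return max(methods, key=objective)


def indifference_test(methods: list[str],
                      effectiveness: Callable[[str], float],
                      reason_revealing: Callable[[str], float]) -> bool:
    """Flag potential manipulation when the method actually chosen (under the
    effectiveness objective) differs from the method that would have been
    chosen had the aim been to reveal reasons to the target."""
    actual = choose_influence(methods, effectiveness)
    counterfactual = choose_influence(methods, reason_revealing)
    return actual != counterfactual


if __name__ == "__main__":
    methods = ["present balanced evidence", "exploit fear of missing out"]
    print(indifference_test(
        methods,
        effectiveness=lambda m: 0.9 if "fear" in m else 0.4,
        reason_revealing=lambda m: 0.8 if "evidence" in m else 0.1,
    ))  # True: the chosen method is not explained by the aim of revealing reasons
```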
While the indifference criterion thus offers notable advantages, it requires further clarification and operationalisation. Striking the balance between recognising manipulative behaviour and avoiding an overly broad categorisation of generative AI systems as manipulative remains a key challenge.
Conclusion
The indifference criterion provides a valuable perspective on manipulation, shedding light on situations where traditional criteria fall short.
In my ongoing work, I am exploring the fruitfulness of the indifference view for better understanding and addressing manipulation in our evolving technological landscape. Recent examples include the threat of manipulation in the pursuit of algorithmic transparency (Klenk 2023), a research agenda for manipulation in generative AI applications (Klenk 2024), and two ongoing projects on designing good influence with AI assistants in the health context, where I collaborate with technologists and scientists from various backgrounds, such as psychology, medicine, and human-computer interaction. I would love to hear from you if you want to discuss these topics further.
References
Bergamaschi Ganapini, M. (2023). The signaling function of sharing fake stories. Mind & Language, 38, 64–80. doi:10.1111/mila.12373.
Gorin, M. (2014). Towards a theory of interpersonal manipulation. In C. Coons & M. Weber (Eds.), Manipulation: Theory and practice (pp. 73–97). Oxford: Oxford University Press.
The Guardian (2017, November 9). Ex-Facebook president Sean Parker: site made to exploit human ‘vulnerability’. https://www.theguardian.com/technology/2017/nov/09/facebook-sean-parker-vulnerability-brain-psychology. Accessed 12 February 2024.
Klenk, M. (2020). Digital Well-Being and Manipulation Online. In C. Burr & L. Floridi (Eds.), Ethics of Digital Well-Being: A Multidisciplinary Perspective (pp. 81–100). Cham: Springer.
Klenk, M. (2022a). Manipulation, injustice, and technology. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation (pp. 108–131). New York, NY: Routledge.
Klenk, M. (2022b). (Online) Manipulation: Sometimes Hidden, Always Careless. Review of Social Economy, 80, 85–105. doi:10.1080/00346764.2021.1894350.
Klenk, M. (2023). Algorithmic Transparency and Manipulation. Philosophy & Technology, 36, 1–20. doi:10.1007/s13347-023-00678-9.
Klenk, M. (2024). Ethics of generative AI and manipulation: a design-oriented research agenda. Ethics and Information Technology, 26, 1–15. doi:10.1007/s10676-024-09745-x.
Pepp, J., Sterken, R., McKeever, M., & Michaelson, E. (2022). Manipulative machines. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation (pp. 91–107). New York, NY: Routledge.