
Open for Debate

The Dark Side of Digital Influence

18 March 2024

In the previous post, I outlined three reasons for paying closer attention to social influence in the digital landscape: its proliferation, its informational empowerment, and its increasing mediation by AI.

In this post, I will argue that these developments not only perpetuate the threat posed by existing bad actors but also significantly raise worries about problematic influence generated unwittingly, and without bad intentions, by individuals, groups, and technical systems. The dark side of digital influence is its corrupting effect, even on well-meaning actors.

Emergent Issues of Digital Social Influence

The three factors that individually account for the growing reach of manipulative influence (discussed in the previous post) are increasingly combined, and this combination gives rise to new concerns through both addition and emergence (Pham et al. 2022).

For example, efforts are underway to combine generative AI with user data to provide more effective influence (AI-mediated and informationally empowered influence), to pair generative AI with an optimisation algorithm akin to a recommender system to offer more impactful influence (AI-mediated predictive influence), and, of course, to make these powerful tools available to more users (the proliferation of AI-mediated, informationally empowered influence).

The combination of factors – widespread access, informational empowerment, and AI mediation – gives bad actors more power, allowing them to influence more people more efficiently.

However, the concern extends beyond nefarious intentions. It is important to emphasise that the AI-mediated automation of social influence demands a new perspective on how we understand and evaluate social influence. Fraudsters and criminals who use AI for their nefarious and manipulative schemes are problematic but, in a sense, nothing new. What is new is that the value of social influence is increasingly premised on the values embedded in the automatic curation of our digital influence landscape (Klenk 2021).

The peculiar problem with the combination of proliferation, informational empowerment, and AI mediation of social influence (in curation or generation) is that it raises worries about well-meaning influencers going astray and entering manipulative territory unwittingly and unintentionally (Klenk 2024). If that is right, then we are only beginning to fully grasp the scope of the digital manipulation problem and the related regulatory challenge.

So, we need to ask about the values embedded in the automatic generation and mediation of content, quite independently of human intentions. We need to look more closely at those actors who take to digital influence without nefarious goals but whose influence may yet take ominous forms by virtue of AI mediation (Klenk 2024).

Redefining Social Influence

The critical point is that social influence is increasingly becoming a design choice. Or, more precisely, what type of social influence we encounter is increasingly due to design choices we make at the technical and regulatory levels.

As social influence increasingly becomes a design choice, it becomes crucial to draw boundaries and consider what types of influence we should value as a society. In ‘The Age of AI,’ the late Kissinger and colleagues vividly point out the scope of the challenge (Kissinger et al. 2022). How we evaluate, regulate, and shape social influence will have an almost unfathomable impact on our lives: the types of web designs that billions of people encounter online, the choices they are offered and the ones they are denied, how those choices are presented, the information they receive, and the principles by which generative AI turns their prompts into means of influence. Nor is this only a problem for the rich world. As the Economist recently put it, (generative) AI holds a “tantalising promise for the emerging world,” and technical ecosystems are already in place to support the roll-out of generative AI applications to billions of people (The Economist 2024).

Social influence thus affects us all. In the digital age, much of it no longer comes from our close family members, friends, and colleagues but is mediated and orchestrated in and through a digital landscape. In one telling, if we count the time people spend on social media, where ‘intelligent software agents’ like YouTube’s recommender system shape what they see, those agents already influence us for more than two hours daily; that is more influence than we get from our spouses, kids, and close friends (Klenk 2019).

That social influence is becoming a design choice also draws attention to the fact that we can take matters into our own hands and work toward good influence, which is cause for celebration. It also motivates my academic work: to outline the contours of good influence from a philosophical perspective and, eventually, to derive appropriate design guidelines for technology and regulation.

However, if we don’t pay close attention, the design of social influence will soon enough become a cause for great concern. As decades of research in the philosophy of technology illustrate, the societal ills and bad effects of technology rarely stem from ill intent but rather from neglect and ignorance about the implications of one’s design.

The crucial question is what type of social influence we should strive for and develop in our digital life-world: we must consider the kinds of influence we want in our society and define the requirements of good influence.

Conclusion and Outlook

In this post, I hope to have shown why we must now pay acute attention to the unintended forms of problematic influence engendered by the digital influence landscape.

In the next blog post, I will explore why manipulation as a form of social influence is particularly important in our evolving digital landscape. Stay tuned for insights on navigating the complex world of digital influence.

References

The Economist (2024, January 25). AI holds tantalising promise for the emerging world. https://www.economist.com/leaders/2024/01/25/the-tantalising-promise-of-ai-for-the-emerging-world. Accessed 12 February 2024.

Kissinger, H., Schmidt, E., & Huttenlocher, D. P. (2022). The age of AI and our human future. London: John Murray.

Klenk, M. (2019). Are we being manipulated by artificially intelligent software agents? – 3 Quarks Daily. https://3quarksdaily.com/3quarksdaily/2019/09/are-we-being-manipulated-by-artificially-intelligent-software-agents.html. Accessed 25 January 2024.

Klenk, M. (2021). How Do Technological Artefacts Embody Moral Values? Philosophy & Technology, 34, 525–544. doi:10.1007/s13347-020-00401-y.

Klenk, M. (2024). Ethics of generative AI and manipulation: a design-oriented research agenda. Ethics and Information Technology, 26, 1–15. doi:10.1007/s10676-024-09745-x.

Pham, A., Rubel, A., & Castro, C. (2022). Social Media, Emergent Manipulation, and Political Legitimacy. In M. Klenk & F. Jongepier (Eds.), The Philosophy of Online Manipulation. New York, NY: Routledge.

Image by Victoria from Pixabay