
Open for Debate

Philosophical Foundations for Chatbot Regulation

9 December 2024

In an age where we increasingly converse with artifacts whose interfaces include AI, a pressing question emerges: who, or what, are we really talking with? As AI-based chatbots, often termed ‘conversational AIs’, become more sophisticated, the line between human and machine communication blurs, raising profound questions about the nature of knowledge, trust, responsibility, and values.

The roots of conversation with inanimate objects stretch far back. In 1637, René Descartes declared:

We can certainly conceive of a machine so constructed that it utters words, and even utters words which correspond to bodily actions causing a change in its organs (e.g., if you touch it in one spot it asks what you want of it, if you touch it in another spot it cries out that you are hurting it, and so on). But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do[1].

Fast forward to the 1950s, when Alan Turing proposed his famous test, challenging us to distinguish between human and machine responses. ELIZA, in the 1960s, mimicked a psychotherapist by merely rephrasing user inputs as questions. One insight from observing people chat with ELIZA is the “ELIZA effect” – the tendency to attribute human-like characteristics to computer products. Another is the ability of designers to “elicit trust”[2] – exploiting this effect to ‘trick’ users into trusting the system. A further example is the 1990s MS-DOS program Dr. Sbaitso – a digital “psychologist” that offered predetermined responses based on keywords. Early attempts at conversation with machines were extremely simplistic by today’s standards.

Fast forward again to November 2022 and the launch of the chatbot ChatGPT – with its array of new technical jargon such as ‘Large Language Models’ (LLMs) and ‘Generative AI’. ChatGPT began as a product that engages in human-like conversation. It is built on an LLM – an AI trained on vast datasets of text (such as Wikipedia and newspapers) to “understand” (i.e. statistically infer the relations between words) and generate outputs. The technology goes beyond chatting, introducing a broader category of systems that generate (hence ‘Generative AI’) new content beyond text – such as images, videos, voices, and music, but also drug discovery and more – all based on patterns learned from existing data.

While the collaboration of humans and AIs benefits scholarly work, creativity, and art, we do leave something behind. Consider the use of AI in writing: while AI might offer efficiency gains, improve quality, and even level the playing field for authors for whom English is a second (or third) language, in some cases it ultimately risks undermining the intellectual engagement and pleasure that come with writing. The act of writing – choosing words, carefully crafting sentences, structuring arguments – is a fundamental practice for deep thinking. By outsourcing this process to an AI-based product, we lose the invaluable cognitive exercise of working with the building blocks of thought – words – and the pleasure and insights that often come when we do it well.

While we lose some cognitive benefits in our personal work, the advancing capabilities of AIs are reshaping society. By now, generated text has a human-like voice, and soon perhaps a human-like body, rendering machine outputs indistinguishable from human ones. While there are arguably many positive applications of ‘humanizing’ products, such as enhancing user experience in customer service or providing companionship in elderly care, I leave the advocacy of such benefits to others. I have experienced an AI “eye-rolling” at my jokes, and have vowed to be more critical ever since.

Navigating the landscape of increasingly human-like AI interactions necessitates understanding their fundamental limitations and differences[3]. Despite AI-based products’ impressive outputs, these AIs don’t “know” or “understand” in the way humans do. They are pattern-matching statistical machines, extraordinarily good at predicting which words, pitch, or pixels should come next. These products are not sentient, do not have minds, and their linguistic outputs are not expressions of thought, nor are they grounded in intent. These products are not ethical reasoners[4].

As we interact more with artifacts in natural language, we face the “anthropomorphic trap” – the tendency to attribute human-like qualities to these products. This can lead to misplaced trust or emotional attachment, raising a whole new set of philosophical questions, such as: what is the nature of love between a human and a device? Can you be cruel to a machine?

In philosophical circles, I have suggested framing this issue as the distinction between “testimony-based beliefs” – knowledge from human sources – and “technology-based beliefs”, i.e. knowledge from AI sources[5]. This distinction underscores the need for a nuanced understanding of the digital source of information.

Guided by this philosophical framing, it is possible to inform policymakers and propose several key principles for regulating one-on-one human-machine interactions. For example:

  1. Just as AI-generated images can be watermarked, interactions with conversational AIs could begin with a clear acknowledgment that they are not humans. Alternatively, the AI could have a clearly non-human voice[6].
  2. Access to human support should be a fundamental right, not a paid luxury. Therefore, companies should not charge premiums for escalating issues to humans[7].
  3. Companies must be held liable for the outputs of their AI systems and always offer human oversight[8].

Ultimately, the philosophical question of who, or what, we are really talking with, isn’t merely about AI products but about ourselves. It challenges us to reflect on how we listen and interpret digital information.

As we integrate chatbots and automated decision-makers into our lives, we should ask more about the value of the human interaction we’re losing, consider the broader implications, and identify the relevant philosophical questions.

Asking philosophical questions could guide us towards a more self-reflective adoption and implementation of technologies, as our relationship with technology, the nature of knowledge, and communication, changes for the generations to come.

Picture created by DALL-E 3 in response to the prompt “A realistic black and white photograph of a person standing in front of a full-length mirror speaking with a robot reflection”

[1] Descartes, R. (1985) [1637]. “Discourse on the Method.”, p. 140, in: The Philosophical Works of Descartes, vol. I. (J. Cottingham, R. Stoothoff, and D. Murdoch, Translators). Cambridge University Press.

[2] Turkle, S. (2011). “Authenticity in the age of digital companions”, p. 63, in: Machine Ethics. M. Anderson & S. Anderson (eds.). Cambridge University Press. doi:10.1017/CBO9780511978036.008

[3] Freiman, O., and Miller, B. (2020). “Can Artificial Entities Assert?”, pp. 415-436, in: S. C. Goldberg (ed.) The Oxford Handbook of Assertion. Oxford University Press. https://academic.oup.com/edited-volume/34275/chapter-abstract/290604123

[4] Freiman, O. (2024). “AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony”, Social Epistemology 38(4): 476–490. https://doi.org/10.1080/02691728.2024.2316622

[5] Freiman, O. (2023). “Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs”, Episteme. doi:10.1017/epi.2023.12

[6] Nowak, E. (2024). “AI voices should sound weird”. https://blogs.cardiff.ac.uk/openfordebate/2075-2/

[7] Freiman, O. and Geslevich Packin, N. (2023). “Automation’s Hidden Costs: The Case Against a Paywalled Human Touch”, Forbes, May 22, 2023. https://www.forbes.com/sites/nizangpackin/2023/05/22/automations-hidden-costs-the-case-against-a-paywalled-human-touch

[8] Melnick, K. (2024). “Air Canada chatbot promised a discount. Now the airline has to pay it”, The Washington Post, February 18, 2024. https://www.washingtonpost.com/travel/2024/02/18/air-canada-airline-chatbot-ruling

