Open for Debate

Extended confirmation bias: When the mind leaks into algorithms

27 June 2022

It’s no secret that when we are online, websites (Google, Facebook, YouTube, etc.) often use algorithms to infer our preferences, interests, or attitudes from our digital footprints (our browsing, clicks, ‘likes’) and to personalize content accordingly (about 12 to 47% of search results are personalized). Consequently, the information we encounter online often matches our preferences and attitudes, and up to 20% of relevant but preference-incongruent information may be missed. It’s well known that this can promote political polarization, radicalization, and ‘filter bubbles’. How can we tackle this problem?

Changing our thinking about people’s interactions with websites might be a good start. Currently, website users are viewed as merely causally connected to personalization algorithms through their digital footprints. I think this overlooks key aspects of these interactions. Viewing the algorithms instead as sometimes literally part of website users’ minds avoids this and may provide useful resources for mitigating risks of online behavior. Specifically, I suggest that personalization algorithms are sometimes constitutive of people’s confirmation bias, their tendency to search for information that supports their favored beliefs and to ignore information that contradicts them.

Linking online personalization to confirmation bias isn’t new. Even Amazon founder Jeff Bezos acknowledged that the “Internet in its current incarnation is a confirmation bias machine”. However, the relationship between people’s confirmation bias and personalization algorithms is usually only construed as a causal one. Philosophers working on “extended cognition” and the Internet, too, haven’t yet argued that people’s confirmation bias might ‘leak into’ personalization algorithms. But building on previous work on extended implicit bias, I think common conditions for identifying extended cognition (introduced here and here) support this.

Consider an example. Suppose a pro-Brexit UK student, Nigel, routinely uses a laptop for online research, networking, news, and entertainment. The laptop is easily accessible to him, and he’s never altered his cookie settings. Websites he regularly visits (YouTube, Facebook, Google, etc.) thus accurately personalize content to his preferences and attitudes. One day, Nigel wonders whether Brexit will benefit UK academia. He gets on YouTube to learn more. YouTube’s algorithms previously inferred his pro-Brexit preferences from his past browsing and now filter the pool of potential recommended videos down to a subset that conforms to his preferences while disregarding content that doesn’t. Nigel, in turn, unintentionally clicks more on content supporting his (wishful) view that Brexit will benefit UK academia and avoids material challenging it. He displays a confirmation bias in his online search. Does it extend into the personalization algorithms he’s interacting with?

Since YouTube’s algorithms infer Nigel’s preferences from his browsing and serve them by selecting content that matches his views while omitting disconfirming material, they perform, to a significant extent, functionally the same kind of selective information processing he would carry out in his head if content on the topic appeared unpersonalized and he had to rank it himself. Clearly, if this filtering happened in his head, it would be an uncontroversial instance of confirmation bias. It isn’t unreasonable, then, to hold that when the process is instead partly realized by algorithms, it too counts as Nigel’s confirmation bias, because it is in the relevant respects functionally on a par with his bias.
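The filtering step described above can be sketched as a toy model. Everything here is an illustrative assumption on my part, not YouTube’s actual algorithm: I model each item’s political slant and the user’s inferred leaning as numbers on a [-1, 1] scale, score their match, and keep only sufficiently congruent items.

```python
# Toy sketch of preference-congruent filtering. The stance scale, the
# congruence score, and the 0.6 cutoff are all illustrative assumptions.

def congruence(item_stance: float, user_stance: float) -> float:
    """Score in [0, 1]: 1 means the item fully matches the inferred stance."""
    return 1.0 - abs(item_stance - user_stance) / 2.0  # stances lie in [-1, 1]

def personalize(items, user_stance, threshold=0.6):
    """Keep only items sufficiently congruent with the inferred user stance."""
    return [it for it in items if congruence(it["stance"], user_stance) >= threshold]

# Nigel's inferred stance: strongly pro-Brexit (near +1 on the scale).
catalog = [
    {"title": "Brexit will boost UK research funding", "stance": 0.9},
    {"title": "Balanced panel on Brexit and universities", "stance": 0.0},
    {"title": "EU grant losses threaten UK academia", "stance": -0.8},
]
recommended = personalize(catalog, user_stance=0.9)
print([it["title"] for it in recommended])
# → ['Brexit will boost UK research funding']
```

The point of the sketch is the functional parity: if Nigel ranked the same three items in his head and dismissed the incongruent two, we would not hesitate to call it confirmation bias.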

It might be objected that the algorithms are merely causally connected to Nigel and external to him, since (unlike in the ‘internal’ bias case) other actors (e.g., website operators) may also influence the filtering. However, this overlooks two points. First, YouTube’s personalization algorithms continuously respond to Nigel’s browsing, monitor his preferences (via his clicks, watch time, shares), and dynamically update content, which in turn influences (e.g., reinforces) his views, initiating new recommendation cycles. In these dynamic feedback loops, epistemically skewed information processing emerges as a systemic property: it can’t be attributed to Nigel, the laptop, or the algorithms alone but only to the system they form together.
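The feedback loop just described can be illustrated with a toy simulation: clicks update the algorithm’s estimate of the user’s stance, the updated estimate skews the next batch of recommendations, and the skewed batch shapes the next clicks. The learning rate, click rule, and noise level are all illustrative assumptions, chosen only to make the loop’s drift visible.

```python
# Toy simulation of a recommendation feedback loop. All parameters
# (learning rate 0.3, click window 0.5, noise 0.4) are illustrative
# assumptions, not a model of any real recommender system.
import random

random.seed(42)

def recommend(inferred_stance, n=10):
    """Draw item stances biased toward the currently inferred user stance."""
    return [max(-1.0, min(1.0, random.gauss(inferred_stance, 0.4))) for _ in range(n)]

def clicks(user_stance, items):
    """The user clicks only items close enough to their own leaning."""
    return [s for s in items if abs(s - user_stance) < 0.5]

inferred = 0.2        # algorithm's initial estimate of the user's leaning
true_stance = 0.8     # the user's actual (e.g., pro-Brexit) leaning
for _ in range(20):
    clicked = clicks(true_stance, recommend(inferred))
    if clicked:       # nudge the estimate toward the mean clicked stance
        inferred += 0.3 * (sum(clicked) / len(clicked) - inferred)

print(round(inferred, 2))  # the estimate drifts toward the user's own leaning
```

Note that the resulting skew belongs to neither component alone: the items the user sees depend on the estimate, and the estimate depends on which of those items the user clicked, which is the systemic property the argument appeals to.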

Moreover, there’s a dense interdependence between the interactants that makes it hard to decompose distinct inputs and outputs from one to the other: the algorithms’ effects on Nigel partly originate from, and so aren’t wholly exogenous to, his own ongoing browsing activity. These effects thus aren’t plausibly viewed as mere inputs to his biased cognition, since they are partly its products. Similarly, his bias’s effects on the algorithms via his browsing aren’t wholly exogenous to the algorithms either: they are partly determined by the algorithms’ own outputs. We thus can’t cleanly separate causes from effects between the interactants. This suggests that the interactants partly ontologically overlap, realizing Nigel’s extended confirmation bias.

We could have focused on personalization algorithms on websites other than YouTube (Facebook, Google, etc.) that people routinely use. Or we could have invoked someone with a different political orientation. Since accurate personalization and relevant content recommendations are vital for websites to keep users engaged (and so to make money from ads), the kind of feedback loops outlined here are likely pervasive online. Extended confirmation bias should thus be pervasive too.

This has interesting upshots. Confirmation bias is generally viewed as negative. If personalization algorithms interact with people such that these systems literally extend people’s confirmation bias, the unattractive prospect of acquiring a supersized confirmation bias may boost individuals’ motivation to reconsider which search engine to use (personalized or non-personalized; e.g., DuckDuckGo) or whether to adjust their privacy settings. Relatedly, if people’s confirmation bias is sometimes realized by personalization algorithms, then when website providers tweak the algorithms without users’ consent (e.g., to maximize ad revenue), this could be construed as their literally altering the users’ minds, akin to brain surgery. Since interfering with a person’s body without their consent is usually legally treated as personal assault, existing legal frameworks may offer safeguards against such tweaks by allowing us to view them as personal assault.

Specifically, surveys suggest that while most people approve of personalized entertainment or shopping, 71% find the use of political orientation for online personalization unacceptable. Yet, this information is currently often used for that purpose, which may contribute to political polarization. And website operators continuously adjust their algorithms to better respond to such sensitive attributes, thus modifying these systems in ways people may reject. If such modifications can be viewed as personal assault, this may help de-politicize algorithmic personalization.

Thus, while the notion of extended confirmation bias seems counterintuitive, exploring it in the context of people’s interactions with websites can be valuable. It may offer ways in which we can use existing legal frameworks to reclaim power over personalization algorithms from website operators. And it may help make the public more alert to the potential mind-changing effects of their interactions with websites.
