Extended confirmation bias: When the mind leaks into algorithms
27 June 2022

It's no secret that when we are online, websites (Google, Facebook, YouTube, etc.) often use algorithms to infer our preferences, interests, or attitudes from our digital footprints (our browsing, clicks, 'likes') in order to personalize content for us (about 12 to 47% of search results are personalized). Consequently, the information we encounter online often matches our preferences and attitudes, and up to 20% of relevant but preference-incongruent information may be missed. It's well known that this can promote political polarization, radicalization, or 'filter bubbles'. How can we tackle this problem?
Changing our thinking about people’s interactions with websites might be a good start. Currently, website users are viewed as merely causally connected to personalization algorithms through their digital footprints. I think this overlooks key aspects of these interactions. Viewing the algorithms instead as sometimes literally part of website users’ minds avoids this and may provide useful resources for mitigating risks of online behavior. Specifically, I suggest that personalization algorithms are sometimes constitutive of people’s confirmation bias, their tendency to search for information that supports their favored beliefs and to ignore information that contradicts them.
Linking online personalization to confirmation bias isn’t new. Even Amazon founder Jeff Bezos acknowledged that the “Internet in its current incarnation is a confirmation bias machine”. However, the relationship between people’s confirmation bias and personalization algorithms is usually only construed as a causal one. Philosophers working on “extended cognition” and the Internet, too, haven’t yet argued that people’s confirmation bias might ‘leak into’ personalization algorithms. But building on previous work on extended implicit bias, I think common conditions for identifying extended cognition (introduced here and here) support this.
Consider an example. Suppose a pro-Brexit UK student, Nigel, routinely uses a laptop for online research, networking, news, and entertainment. The laptop is easily accessible to him, and he has never altered his cookie settings. The websites he regularly visits (YouTube, Facebook, Google, etc.) thus accurately personalize content to his preferences and attitudes. One day, Nigel wonders whether Brexit will benefit UK academia and goes on YouTube to learn more. YouTube's algorithms have previously inferred his pro-Brexit preferences from his past browsing, and they now filter the pool of candidate videos down to a subset that conforms to those preferences while disregarding content that doesn't. Nigel, in turn, unintentionally clicks more on content supporting his (wishful) view that Brexit will benefit UK academia and avoids material challenging it. He displays a confirmation bias in his online search. Does that bias extend into the personalization algorithms he is interacting with?
Since YouTube's algorithms infer Nigel's preferences from his browsing and serve them by selecting content that matches his views while omitting disconfirming material, they perform, to a significant extent, functionally the same kind of selective information processing he would carry out in his head if content on the topic appeared to him unpersonalized and he had to rank it himself. Clearly, if this filtering happened in his head, it would be an uncontroversial instance of confirmation bias. It isn't unreasonable, then, to hold that when the process is instead partly realized by algorithms, it too counts as Nigel's confirmation bias, because it is in the relevant respects functionally on a par with his bias.
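To make the claimed functional parity concrete, here is a minimal Python sketch of preference-congruent filtering. It is purely illustrative and not a claim about YouTube's actual ranking systems (which are far more complex and not public); the `Video` class and the `infer_stance` and `personalize` functions are hypothetical names introduced only for this example. The point is just that the selection step, dropping stance-incongruent items, is the same step a biased searcher would otherwise perform in their head.

```python
# Toy sketch of preference-congruent filtering (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    stance: float  # illustrative score: +1.0 strongly pro-Brexit, -1.0 strongly anti

def infer_stance(click_history: list[Video]) -> float:
    """Crude preference inference from past clicks (hypothetical)."""
    if not click_history:
        return 0.0
    return sum(v.stance for v in click_history) / len(click_history)

def personalize(candidates: list[Video], click_history: list[Video]) -> list[Video]:
    """Keep only candidates whose stance agrees with the inferred preference."""
    preference = infer_stance(click_history)
    if preference == 0.0:
        return candidates  # no signal yet, so no filtering
    return [v for v in candidates if v.stance * preference > 0]

# A pro-Brexit click history filters out the disconfirming video.
history = [Video("Brexit boosts UK research", +1.0)]
pool = [Video("Why Brexit helps universities", +0.8),
        Video("EU funding losses after Brexit", -0.9)]
print([v.title for v in personalize(pool, history)])
# -> ['Why Brexit helps universities']
```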
It might be objected that the algorithms are merely causally connected to Nigel and external to him, since (unlike in the 'internal' bias case) other actors (e.g., website operators) may also influence the filtering. However, this overlooks two points. First, YouTube's personalization algorithms continuously respond to Nigel's browsing, monitoring his preferences (via his clicks, watch time, and shares) and dynamically updating content, which in turn influences (e.g., reinforces) his views and initiates new recommendation cycles. In these dynamic feedback loops, epistemically skewed information processing emerges as a systemic property: it can't be attributed to Nigel, the laptop, or the algorithms alone, but only to the system they form together.
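To illustrate how such a loop can produce epistemically skewed processing as a systemic property, here is a toy simulation of the recommend, click, and update cycle. The variables (`estimate`, `lean`), the click rule, and the update weights are all illustrative assumptions, not a model of any real platform.

```python
# Toy simulation of a personalization feedback loop (illustrative assumptions only).
import random

random.seed(0)

estimate = 0.2  # algorithm's estimate of the user's stance (mildly 'pro')
lean = 0.3      # the user's own, initially mild, attitude

for step in range(10):
    # Recommend: sample candidate items, rank them by fit with the current estimate.
    candidates = [random.uniform(-1, 1) for _ in range(20)]
    recommended = sorted(candidates, key=lambda s: -s * estimate)[:5]

    # Click: the user engages mostly with items congruent with their current lean.
    clicked = [s for s in recommended if s * lean > 0]

    # Update: clicks sharpen the algorithm's estimate, and the consumed content
    # nudges the user's own lean, so neither side acts independently of the other.
    if clicked:
        mean_click = sum(clicked) / len(clicked)
        estimate = 0.8 * estimate + 0.2 * mean_click
        lean = 0.9 * lean + 0.1 * mean_click

    congruent = sum(s > 0 for s in recommended)
    print(f"step {step}: estimate={estimate:+.2f}, lean={lean:+.2f}, congruent recs={congruent}/5")
```

In this toy set-up, the algorithm's estimate and the user's lean drift in the same direction over a few iterations, and the resulting skew in what gets recommended belongs to the loop rather than to either side alone.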
Second, there's a dense interdependence between the interactants that makes it hard to decompose their exchange into distinct inputs and outputs running from one to the other: the algorithms' effects on Nigel partly originate from, and so aren't wholly exogenous to, his own ongoing browsing activity. These effects can't plausibly be viewed as mere inputs to his biased cognition, because they are partly its products. Similarly, the effects of his bias on the algorithms via his browsing aren't wholly exogenous to the algorithms either: they are partly determined by the algorithms' own outputs. We thus can't cleanly separate causes from effects between the interactants. This suggests that the interactants partly ontologically overlap, jointly realizing Nigel's extended confirmation bias.
The same points could be made with personalization algorithms on websites other than YouTube (Facebook, Google, etc.) that people routinely use, or with a user of a different political orientation. Since accurate personalization and relevant content recommendations are vital for keeping users engaged (and thus for earning ad revenue), feedback loops of the kind just outlined are likely pervasive online. Extended confirmation bias should therefore be pervasive too.
This has interesting upshots. Confirmation bias is generally viewed as negative. If personalization algorithms interact with people in such a way that they literally extend people's confirmation bias, then the unattractive prospect of acquiring a supersized confirmation bias may boost individuals' motivation to reconsider which search engine to use (personalized or non-personalized, e.g., DuckDuckGo), or whether to adjust their privacy settings. Relatedly, if people's confirmation bias is sometimes realized by personalization algorithms, then when website providers tweak those algorithms without users' consent (e.g., to maximize ad revenue), this could be construed as literally altering the users' minds, akin to brain surgery. Since interfering with a person's body without their consent is usually treated in law as personal assault, existing legal frameworks may offer safeguards against such tweaks to algorithms by allowing us to view them as personal assault.
Specifically, surveys suggest that while most people approve of personalized entertainment or shopping, 71% find the use of political orientation for online personalization unacceptable. Yet this information is currently often used for exactly that purpose, which may contribute to political polarization. And website operators continuously adjust their algorithms to respond more effectively to such sensitive attributes, thereby modifying these systems in ways people may reject. If such modifications can be viewed as personal assault, this may help de-politicize algorithmic personalization.
Thus, while the notion of extended confirmation bias may seem counterintuitive, exploring it in the context of people's interactions with websites can be valuable. It may show how existing legal frameworks could be used to reclaim power over personalization algorithms from website operators. And it may help make the public more alert to the potential mind-changing effects of their interactions with websites.
[Image: AI illustration (Pixabay, free for commercial use, no attribution required): https://pixabay.com/illustrations/ai-artificial-intelligence-7114792/]