Online illusions of understanding
19 October 2020
An online intellectual paradise?
The internet and social media provide us with plenty of opportunities to educate ourselves, to learn new things, and to deepen our understanding. A world of knowledge at your fingertips, as the slogan goes. This might seem like an intellectual paradise. And in many ways, it is. Judicious use of the internet will typically get you more high-quality information than you can process, on almost any topic, no matter how outlandish or specialized.
But all is not well in paradise. The internet is brimming with fake news, conspiracy theories, and other forms of misinformation. Moreover, misinformation can easily be boosted to prominence through echo chambers and through the algorithms that determine what appears in our timelines, feeds, and recommendations.
This is bad enough, but I want to draw attention to a further way in which the internet and social media, and in particular the ways in which online information is presented, ordered, connected, and made accessible, can easily generate illusions of knowledge and understanding rather than real insight.
Good inquiry, which is to say inquiry that leads to knowledge and understanding, involves a number of meta-cognitive tasks. These tasks don’t contribute directly to finding answers, but they steer the process of finding answers in the right direction. First of all, one needs to ask good questions. This is often far from trivial, especially when it comes to complex scientific, philosophical, and social issues. Next, one needs to identify suitable strategies for answering one’s questions; to assess the quality of the input, information, and evidence one encounters during a process of inquiry; and, finally, to make sound judgments about when to stop inquiring.
The key reason why online inquiry is prone to generate illusions of knowing and understanding is that it’s easy – sometimes inevitable – to outsource aspects of the meta-cognitive tasks mentioned before to internet technology. Since most internet platforms aren’t designed to facilitate good inquiry, they can easily steer it in the wrong directions, even without our realizing it. Let’s consider each task in turn.
Asking poor questions
Try typing ‘Why are professors’ into Google search. Chances are you’ll see things like ‘lazy’, ‘overpaid’, and ‘weird’ among the autocomplete suggestions. If we leave the task of asking good questions partly to Google’s autocomplete, we will easily end up asking poor questions. This isn’t because Google is inherently biased against professors (or anyone else, for that matter). It’s just that its autocomplete feature reflects the most commonly entered questions back to us, along with whatever biases and mistaken presuppositions those questions contain.
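The dynamic is easy to see in miniature. Below is a toy sketch of frequency-based autocomplete – the query log and all its entries are invented for illustration, and real systems are vastly more sophisticated – but it shows how suggestions that simply mirror the most common queries will mirror those queries’ biases too.

```python
from collections import Counter

# Hypothetical query log (invented data). A real log would contain
# billions of queries, biases and mistaken presuppositions included.
query_log = [
    "why are professors lazy",
    "why are professors lazy",
    "why are professors overpaid",
    "why are professors important",
]

def autocomplete(prefix, log, k=3):
    """Suggest the k most frequent queries in the log that start with `prefix`."""
    matches = Counter(q for q in log if q.startswith(prefix))
    return [query for query, _ in matches.most_common(k)]

# The most commonly typed (and most loaded) question surfaces first.
print(autocomplete("why are professors", query_log))
```

Nothing in this ranking logic checks whether a question is fair or well-posed; popularity alone decides what gets reflected back.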
Internet search also does little to correct questions with false presuppositions or misleading framing. If you type in ‘earth 6000 years old’, the first couple of hits include as many creationist websites as websites with sound scientific information. Similarly, entering ‘black on white crime’ throws up various white supremacist and racist sources promoting the idea that this category makes up a huge chunk of total crime, whereas the real number is comparatively low.
Monopolizing strategies for inquiry
The internet economy has generated near-monopolies in various domains. One or a few companies or platforms are the default option, with few or no serious alternatives. Google dominates search; Facebook and Twitter are the main text-based social media platforms; YouTube is the go-to option for video; we buy our books (or almost anything, really) on Amazon; we use Tripadvisor or Booking.com for travel; and so on.
The result is that our choice of strategies for inquiry is narrowed, unless we actively avoid the default options and put a lot of effort into seeking out alternative sources. This would not be a problem if the default sources mostly offered reliable information. But that is clearly not the case. How Google orders its search results is strongly influenced by commercial interests, the popularity of websites, and several other things that have nothing to do with reliability and trustworthiness. It’s relatively easy for internet-savvy actors to manipulate YouTube’s or Amazon’s recommendations. And social media platforms let engagement, likes, or thumbs up determine what we see. Unless we proactively monitor for these distorting effects and correct for them on the basis of our own prior knowledge about which sources are trustworthy, none of this is particularly conducive to finding good information or producing genuine understanding.
Truth by popular vote?
The information that inquiry digs up must be checked for quality: is it reliable, trustworthy, unbiased, and fair? There are plenty of ways to do this: trace the sources, find independent verification, consult the underlying data yourself if you can, seek out expert judgment or advice, find certified institutions, and so on. But most of these things require prior knowledge and intellectual skills on the part of the inquirer.
The problem is that if we just go by how information is presented and ordered online, we are easily duped. We noted before that the order of search results is a poor reflection of the truthfulness of information. The same goes for what Facebook’s, YouTube’s, and Twitter’s algorithms put at the top of your feed or recommendations: that ordering is optimized for engagement, popularity, and commercial value, not reliability.
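A minimal sketch makes the point vivid. The posts and engagement numbers below are entirely invented, and real ranking systems use far richer signals – but any ranking keyed to engagement alone will behave this way:

```python
# Toy illustration (all data invented): when a feed is ordered purely by
# engagement, reliability plays no role in what ends up on top.
posts = [
    {"title": "Careful meta-analysis", "engagement": 120, "reliable": True},
    {"title": "Outrage-bait conspiracy", "engagement": 9800, "reliable": False},
    {"title": "Expert explainer", "engagement": 450, "reliable": True},
]

# Sort descending by engagement, as an engagement-optimized feed would.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)
print([p["title"] for p in feed])
```

The unreliable but highly engaging post tops the feed; nothing in the sort key could have prevented that.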
The general picture is that online inquiry runs the risk of letting popularity and commercial interests stand proxy for what is reliable and true. But truth isn’t decided by popular vote or buying power, so we should not let these features be the judge of the quality of our information.
When to stop inquiring?
When should one stop inquiring when there are millions of further search results, never-ending recommendations, and endless scrolls? It’s hard to say. This is by design: much of the internet, and certainly social media platforms, are built to keep people engaged and hence offer no indication that an inquiry has been successfully completed. There are always more links to click or Twitter threads to chase.
This conundrum is exacerbated by the ubiquity of misinformation. Even when you find reliable information quickly and should stop inquiring, you may notice a suggestive piece of misinformation, leaving you wondering whether you ought to dig deeper. Even worse: once you decide to start digging, you might get drawn into a rabbit hole of questionable sources or outright conspiracy theories.
As a result, you may spend a lot of time trying to answer your questions. Once you’ve read various websites, watched YouTube videos, and combed through branching discussion threads on Twitter or Reddit, you’ll probably feel like you’ve done your research. But nothing about the online environment by itself makes it likely that your inquiry was in fact completed successfully – you may just as well have been consuming lots of poor-quality information that happened to be popular and promoted by search and recommender systems. In other words, you can easily end up with an illusion of knowledge and understanding.
None of this should be taken to imply that online inquiry is hopeless. It is no part of my argument to claim that the internet necessarily leads to illusions of knowledge and understanding. Whether it does so depends on design choices, business decisions, and government policies. This means the knowledge-potential of the internet can be improved. This is not an easy job. It needs the concerted efforts of computer scientists, psychologists, communication scientists, and philosophers, in addition to better business incentives, policies, and law-making.
And even as it is, the internet remains an unparalleled source of knowledge and understanding – but, and this is the crucial point, only as long as you know where and how to look. Online inquiry requires well-honed cognitive skills and intellectual virtues. Meta-cognition should not be outsourced entirely to the online environment. Good inquiry requires hard work, and this is as true online as it was offline.
(This post is based on my essay “Algorithm-Based Illusions of Understanding”, Social Epistemology Review and Reply Collective 8 (10): 53–64. https://wp.me/p1Bfg0-4ws)