
Open for Debate

Choosing a future for humanity: effective altruism and longtermism: Part 2

29 November 2021

In discussions among the charity evaluators whom I described in the previous post, Derek Parfit is often quoted as saying that we live at ‘the hinge of history’. What he meant is that we live at a time when we have at our disposal the means to do greater good than ever before, and the means to do greater harm. With the advent of nuclear power, gene-editing, and advanced artificial intelligence, among other technologies, the same knowledge and ability can be put to either end. When we talk of the hinge of history, we often focus on two things: our capacity to create a wonderful, emancipated, just, happy future for all, in which everyone has what they need to pursue their chosen way of life; and our capacity to end all possibility of this future by destroying ourselves. But we less often talk in philosophy of our ability to create a future of unrelenting misery, enslavement, alienation, and pain for numbers of people many, many times greater than have inhabited the Earth so far (though of course science fiction has walked this beat for many years). Yet many of the technologies that might bring us great happiness in a long-term human future, or cause our extinction in the near future, might equally bring great misery in the long-term future (again, a trope of science fiction). As I said in the previous post, the charity evaluators are aware of this and advocate for work to make the deployment of these technologies safer and more secure, to tip us towards the wonderful future. But the long-term effects of such research are hard to predict. It is quite possible that unintended negative consequences are just as likely as the positive consequences they intend; indeed, it's quite possible that they are more likely.

We see something like this with so-called gain-of-function research on viruses, which has recently been in the news because of the ‘lab leak’ hypothesis about the origin of the novel coronavirus. In that sort of work, virologists engineer more dangerous versions of a virus than have so far occurred naturally, in order to understand possible future mutations and develop protections against them. But doing so of course opens the possibility that these enhanced microbes escape the lab where they're created, and the possibility that a bad actor will exploit them to devastating effect, or hold societies to ransom with the mere threat of deploying them. And the same might happen as we research AI safety, for instance, or nuclear safety. By dreaming up the horrific uses to which these technologies might be put in order to guard against them, we might create blueprints that make those uses more likely.

So one concern about the charity evaluators’ claim that it is better to fund activities that seek to make our long-term happy survival more likely is that those same activities make our long-term unhappy survival more likely as well. And if that’s the case, maybe such funding has no greater expected utility than doing nothing at all. Perhaps the possible benefits and possible harms, colossal though they are, are equal in magnitude and opposite in sign, and our interventions increase the chance of one only by increasing the chance of the other by the same amount. If that’s true, the expected utility of such an intervention is zero, and you’d do better to donate to AMF, which at least has a positive expected utility.
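
To put the worry in the simplest possible numbers (toy figures purely for illustration, not anything the charity evaluators have calculated): suppose the vast happy future and the vast miserable future are exactly as good and as bad as one another, and the safety donation raises the chance of each by the same tiny amount.

```python
# Toy figures, purely to illustrate the cancellation worry.
U_HAPPY, U_MISERABLE = +1.0, -1.0    # assume the two vast futures are equally good and bad
shift = 1e-6                         # the tiny probability the donation adds to each future

change_in_expected_utility = shift * U_HAPPY + shift * U_MISERABLE
print(change_in_expected_utility)    # 0.0: the two shifts cancel exactly

amf_expected_benefit = 1e-9          # hypothetical: a small but positive expected benefit
print(amf_expected_benefit > change_in_expected_utility)   # True
```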

But even if we think our efforts make the vast happy future for humanity more likely than the vast miserable one, there are reasons to think we should work to ensure neither comes about. Let’s see why.

Suppose, for a moment, that you are a keen birdwatcher. Every time you see a new species it gives you great pleasure. What’s more, the amount of extra pleasure each new species brings is the same no matter how many you’ve seen before. Your first species—a blue tit in your grandparents’ garden as a child—adds as much happiness to your stock as your two hundredth, a golden eagle high above Glenshee when you’re 30. Now suppose I offer to take you on a birding trip. I offer two possible locations: in one, you’re sure to see 10 new species; in the other, you might see 20 or you might see none, depending on whether the migration has started yet, and looking back at records from previous years, I conclude that there’s a 60% chance you’ll see 20. Which do you choose? The decision rule used by the charity evaluators would have you calculate the expected value of each option: 10 species for sure, or a 60% chance of 20 and a 40% chance of 0—an expectation of 12 species. Nonetheless, in such situations, many people will choose the risk-free option. For them, the 60% chance of 20 new species isn’t enough to outweigh the 40% chance of none, when there’s a 100% chance of 10 in the first location. What’s more, this doesn’t seem unreasonable. And indeed there is a suite of alternatives to expected utility theory that permit such aversion to risk. Roughly speaking, each says that the risk-averse person should give greater weight to the worse outcomes than expected utility theory would advise, and less weight to the better ones.
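
For readers who like to see the numbers, here is a minimal sketch in Python of one such alternative (a rank-dependent, risk-weighted expected utility rule) applied to the birdwatching choice. The risk function r(p) = p**2, which dampens the weight given to the better outcomes, is just one illustrative way of being risk-averse, not anything argued for here.

```python
# A toy comparison of two decision rules: standard expected utility and one
# rank-dependent, risk-weighted alternative. The risk function r(p) = p**2 is an
# illustrative choice; any convex r builds in some degree of risk-aversion.

def expected_utility(gamble):
    """gamble: a list of (probability, utility) pairs."""
    return sum(p * u for p, u in gamble)

def risk_weighted_eu(gamble, r=lambda p: p ** 2):
    """Order outcomes from worst to best, then weight each step up in utility
    by r(probability of doing at least that well) rather than by the raw probability."""
    outcomes = sorted(gamble, key=lambda pair: pair[1])   # worst outcome first
    value = outcomes[0][1]                                # start from the worst utility
    for i in range(1, len(outcomes)):
        prob_at_least = sum(p for p, _ in outcomes[i:])   # chance of doing at least this well
        value += r(prob_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return value

safe  = [(1.0, 10)]               # ten new species for sure
risky = [(0.6, 20), (0.4, 0)]     # 60% chance of twenty, 40% chance of none

print(expected_utility(safe), expected_utility(risky))    # 10.0 and 12.0: the gamble wins
print(risk_weighted_eu(safe), risk_weighted_eu(risky))    # 10 and about 7.2: the sure thing wins
```

With r(p) = p, this rule just is expected utility; the more sharply r bends below the diagonal, the more heavily the worse outcomes weigh.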

The problem for the charity evaluator is that you might take this approach not only to your choice of bird reserve, but also to your choice of recipient for your £100 donation. To simplify considerably, suppose there are three possible futures:

  1. painless extinction within a generation;
  2. a trillion happy future human lives;
  3. a trillion miserable future human lives.

We might donate £100 in two different ways. If we give it to the Quiet End Foundation, this will increase the chance of (1) by 0.0002 percentage points and decrease the chance of each of (2) and (3) by 0.0001 percentage points. If we give it to the Happy Future Fund, that will increase the chance of (2) by 0.0002 percentage points, thereby increase the chance of (3) by 0.0001 percentage points—since, as we have seen, it’s hard to promote (2) without also making (3) more likely—and decrease the chance of (1) by 0.0003 percentage points. Then, just as for our risk-averse birdwatcher, even those who are only slightly risk-averse will donate to the Quiet End Foundation instead of the Happy Future Fund. The chance of the terrible outcome increases by less than the chance of the good outcome, but risk-aversion tells us to give greater weight to bad outcomes and less to good ones, and that extra weight can easily outweigh the difference. The risk-averse philanthropist should choose extinction.
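
Continuing the illustrative sketch above (and stressing again that the baseline chances of the three futures, the utilities, and the risk function below are hypothetical numbers of my own, not figures anyone has defended), here is how the two donations compare under straightforward expected utility and under the same risk-weighted rule.

```python
# Hypothetical numbers throughout: the baseline chances of the three futures, the
# utilities (extinction 0, happy trillion +1, miserable trillion -1), and the risk
# function are illustrative choices, not figures from the post.

def expected_utility(p_ext, p_happy, p_mis):
    return p_ext * 0 + p_happy * 1 + p_mis * (-1)

def risk_weighted_eu(p_ext, p_happy, p_mis, r=lambda p: p ** 2):
    # The same rank-dependent rule as above, written out for these three outcomes
    # (worst to best: miserable -1, extinction 0, happy +1).
    return -1 + r(p_ext + p_happy) * 1 + r(p_happy) * 1

baseline = (0.30, 0.35, 0.35)      # hypothetical starting chances of (1), (2), (3)
step = 0.0001 / 100                # 0.0001 percentage points, expressed as a probability

quiet_end    = (0.30 + 2 * step, 0.35 - step,     0.35 - step)
happy_future = (0.30 - 3 * step, 0.35 + 2 * step, 0.35 + step)

for name, shifted in [("Quiet End Foundation", quiet_end),
                      ("Happy Future Fund", happy_future)]:
    print(name,
          expected_utility(*shifted) - expected_utility(*baseline),
          risk_weighted_eu(*shifted) - risk_weighted_eu(*baseline))

# Expected utility favours the Happy Future Fund (a gain of about 1e-6, against 0 for
# the Quiet End Foundation), but the risk-weighted rule reverses the ranking
# (about 6e-7 for the Quiet End Foundation, against about 1e-7 for the Happy Future Fund).
```

With these made-up numbers, the straightforward expected utility maximiser gives to the Happy Future Fund, while even this fairly mild degree of risk-aversion is enough to flip the ranking in favour of the Quiet End Foundation.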

Now, so far, I’ve been thinking only of the philanthropist’s attitudes to risk. But of course their decision affects many others: all the humans alive at the point of the painless extinction, if that happens, on the one hand; and all the humans who will come into existence in Earth’s long human future, on the other. You might think, therefore, that it is not only the donor’s attitudes to risk that are relevant here. But with eight billion people to consider on the one hand and a trillion on the other, it’s more than likely we’ll find a lot of variety in their attitudes to risk. How, then, are we to take these into account?

Consider the following tale: one day, as my friend and I are hiking in the Scottish Highlands, the mist comes down. To avoid getting separated, we rope ourselves to one another and agree that I’ll go ahead on the narrow path, taking the lead and making decisions about our route. At one point, I must choose whether to continue our ascent, giving us a chance of attaining the summit but risking very serious injury, or begin our descent, depriving us of any possibility of reaching the top but removing any risk of harm. My friend and I have been hiking together for years, and I know we value reaching the summit exactly as much as one another, and likewise avoiding injury; and we disvalue missing out on the summit exactly as much as one another, and likewise sustaining injury. But I also know that we have radically different attitudes to risk. While I am very risk-inclined, he is very risk-averse. Were he to face the choice instead of me, I know he’d choose to descend. It seems wrong, I think, for me to choose, on behalf of the two of us, to continue our attempt on the summit.

What this suggests (though of course does not establish) is that, when we choose on behalf of others, we should give particular weight to the preferences of those who are more risk-averse. But, as we saw when we were thinking only about the philanthropist’s preferences, those more risk-averse individuals will favour the Quiet End donation over the Happy Future one. Does this oblige us to use our money to hasten our extinction? For those, like me, who think not, the task is to identify where this argument falls down.