Open for Debate

Choosing a future for humanity: effective altruism and longtermism: Part 1

15 November 2021

At one time, not so long ago, it was the mosquito nets. If you wished to give £100 to charity, and you wanted your donation to do as much good as it could, the researchers who investigate the effectiveness of different modes of philanthropy concluded that you should donate to the Against Malaria Foundation (AMF), a London-based charity that distributes these nets for free in malarial regions of the world. And how did they come to this conclusion? Well, they treated the decision where to donate your money just as decision theorists in economics, philosophy, and psychology used to say we should treat every decision. That is, for each charity you might support, they calculated what decision theorists call the expected utility of giving your money to that cause, and they ranked the charities in order from the donation with the greatest expected utility to the donation with the least.

What is expected utility? In particular, what do we mean by the expected utility of giving £100 to AMF, say? Well, you begin by laying out all the possible outcomes of doing so. In one outcome, your money buys and distributes 20 nets and all the people who receive them are five-year-old children who would have contracted malaria and died from it without the net, but who will instead live until they’re seventy; in another, the same kids get the nets, but only half of them would have contracted malaria and died from it; in another, none of them would have died without the nets; and so on for all the possibilities in between. Next, you ask what the utility of each of those outcomes would be. This is a measure of how good or how valuable they are. One crude but feasible way to do this is just to add up how many extra years of life are enjoyed in the outcome compared with the status quo and take that to be the utility of the outcome. So if the nets save the lives of 10 five-year-olds who will now go on to live to seventy, that outcome is valued at 650 years of life. So now you’ve put a value on each of the outcomes. Next, you try to figure out how likely each outcome is if you make the donation. Perhaps it’s very unlikely that all twenty nets save lives (maybe the chance is 1%). Now, for each outcome, you take its utility (say, 650 life years), and you multiply it by how likely your donation is to bring it about (say, 1%) to give its probability-weighted utility (650 × 0.01 = 6.5 life years). And finally, you add up the probability-weighted utilities of all the possible outcomes of your donation, and that is its expected utility. You then do the same for the other charities you’re considering, rank those charities from greatest expected utility to least, and pick the top!
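To make the recipe concrete, here is a minimal sketch in Python of the calculation just described. The outcomes and probabilities are hypothetical numbers invented for illustration, loosely echoing the figures in the text; real charity evaluators work with far richer models.

```python
# Expected utility of a £100 donation: a minimal illustrative sketch.
# Each outcome pairs a utility (extra life-years gained over the status
# quo) with the probability that the donation brings that outcome about.
# All figures below are hypothetical.
outcomes = [
    (1300, 0.01),  # all 20 nets save a five-year-old who then lives to 70
    (650,  0.09),  # 10 of the nets save such a life
    (65,   0.40),  # 1 net saves such a life
    (0,    0.50),  # no lives saved
]

# Expected utility: the sum of the probability-weighted utilities.
expected_utility = sum(utility * probability for utility, probability in outcomes)
print(expected_utility)  # 13.0 + 58.5 + 26.0 + 0.0 = 97.5 life-years
```

Ranking charities is then just a matter of running this calculation for each candidate and sorting by the result.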

So, roughly speaking, that’s the methodology of the charity evaluators. It’s that methodology that led them to consistently recommend AMF in the past, and it’s that methodology that has led them to change their mind abruptly and recommend something very different now. To understand what they now recommend and why, it helps to think about a question that the ethicist Derek Parfit asks at the end of his 1984 book, Reasons and Persons. Think of these scenarios:

  1. Peace;
  2. A man-made biological weapon kills 99% of the world’s human population instantly and painlessly in 2022;
  3. A man-made biological weapon kills 100% of the world’s human population instantly and painlessly in 2022.

At first sight, (1) seems best, then (2), then (3). What’s more, you might initially think that (2) is vastly worse than (1), and that while (3) is worse than (2), it’s not as much worse than (2) as (2) is worse than (1). Parfit thinks this is a mistake. You err because you pay attention only to the effects on those of us who are currently alive, and ignore the effects on those still to come. And, as the ethicist Annette Baier pointed out a couple of years before Parfit’s book, we have obligations to future generations as strong as our obligations to the current generation. Now, (1) and (2) both leave open the possibility of a future for humanity beyond the current generation. What’s more, the future of humanity foreclosed by (3) might be very, very long indeed, with the population of future humans possibly numbering in the trillions, even if the current generation is reduced in size by 99%, to around 80 million. So a world in which these trillions of people go on to populate the future is astronomically better than one in which humanity ends next year. Or so Parfit reasons.

Now what does this mean for charity evaluations? It means that any donation that can increase the chance of (1) and decrease the chance of (3), even by a tiny amount, will have a vast expected utility, likely much greater than the expected utility of giving that same donation to something like AMF. Suppose, for instance, that there are a trillion (that is, a million million) future humans in scenario (1). And suppose your £100 can increase the chance of (1) by one millionth of a percentage point, that is, by a probability of 0.00000001. Then, since 1,000,000,000,000 × 0.00000001 = 10,000, your donation has the same expected value as a donation that saves 10,000 lives for sure. And there is just no other way of donating £100 that does that. So, they now say, donate to a charity that seeks to prevent (3) (or (2)); don’t donate to AMF.
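The arithmetic behind that comparison is easy to verify. Here is the same back-of-the-envelope calculation as a short Python sketch, using the post’s illustrative figures rather than anyone’s real estimates:

```python
# Expected value of a tiny reduction in extinction risk, using the
# illustrative figures from the text.
future_humans = 1_000_000_000_000  # a trillion future lives in scenario (1)
probability_shift = 1e-6 / 100     # one millionth of a percentage point = 1e-8

expected_lives = future_humans * probability_shift
print(expected_lives)  # 10000.0: in expectation, the same as saving
                       # 10,000 lives for certain
```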

Of course, you might think that, with the amount of money you have available to donate, even lowering the chance of (3) by one millionth of a percentage point is beyond your reach. But the charity evaluators make a good case that this isn’t true. The reason is that the current institutions tasked with preventing biological catastrophes are woefully underfunded. As the charity evaluators are fond of pointing out, the Biological Weapons Convention, the main international agreement aimed at preventing scenarios like (2) and (3), is supported by just four employees and has an annual budget smaller than that of the average McDonald’s restaurant. So £100 won’t solve everything; but it will have a much greater effect than it would if these institutions were already sufficiently funded; and it will have a greater expected utility than anything else you might do with £100.

A graver concern about the reasoning is the assumption that (3) is worse than (1) (or (2)). The problem is that there are ways in which the long-term future of humanity might develop that would be, on balance, tremendously good, and ways in which it might develop that would be, on balance, terribly, woefully, devastatingly bad. Yes, the trillion future humans might live long, fulfilling lives full of love, aided by the new technologies and medicines they’ll develop. But the technology that might unlock great potential might equally be used for extreme evil. The AI that might allow us to identify tumours early enough to treat them effectively might also be used to identify and suppress dissidents; the gene editing that allows us to develop crops that survive in our radically altered future climate might be used to enslave vast numbers of future humans for the enrichment of a wealthy few. Very recently, the charity evaluators have come to appreciate these possibilities, and they have started to think not only about how to avert human extinction, but also about how to make it more likely that the long human future they thereby ensure will be a good one rather than a bad one.

But there are reasons for thinking things aren’t quite so simple. In the next post, I’ll set out a few reasons for thinking that the charity evaluators’ methodology should in fact lead them to recommend donating to charities that are trying to hasten human extinction rather than avert it. I suspect no such charities exist; but if they did, the methodology says that’s where your money should go. Now, I don’t believe that’s right. But I do think it’s a natural conclusion of the methodology. And that, of course, suggests that the methodology is faulty.