Effective Altruism, Consequentialism, and Longtermism

Since early November of 2022, when the cryptocurrency exchange FTX went bankrupt, there has been growing criticism of a movement for which Sam Bankman-Fried (SBF), founder and CEO of FTX, was a sort of mascot: effective altruism. Broadly speaking, this is the idea that people should take a rational rather than a sentimental approach to charitable efforts. Empathy can lead us astray by playing on our cognitive biases, like the availability heuristic, the recency illusion, the mere-exposure effect, the streetlight effect, and others. A touching story about a single person can move us to act more readily than any statistics about the suffering of millions. As such, instead of following our emotions, we should seek to get the most bang for our buck with our charitable efforts, i.e., save the most lives and do the most good with our donations of time, money, and resources. This means that the problems we address, and the means of addressing those problems, should be chosen rationally and scientifically, guided by what the numbers tell us will do the most good, even if it doesn’t immediately give us the fuzzy feeling we get from helping the single person with the touching story. Since the downfall of SBF, this philosophy has garnered some criticism, often with what seems like more than a hint of schadenfreude.

This post was inspired by the following videos:

 

More accurately, in the case of the Sam Harris podcast, it was his August 14, 2022 episode with William MacAskill, one of the major philosophical fountainheads of effective altruism (EA) and author of books like Doing Good Better and What We Owe the Future. I recently renewed my subscription to Sam Harris’s podcast (mostly to listen to the Essential Sam Harris series, of which the above video is a truncated version of one installment), but I have also been listening to some of the back catalogue I missed after unsubscribing a couple of years ago. Anyway, the point is that Sam Harris has been talking quite a bit about EA lately, and has taken a hard line on moral realism and the primacy of consequentialism. He held these views before, but he’s been drilling down on these points quite a bit in his recent podcasts (e.g., listen to his episodes with William MacAskill, Erik Hoel, and Russ Roberts).

The motivation for EA comes in large part from Peter Singer, probably one of the most famous moral philosophers in the last century. His thought experiment about the shallow pond underpins the impetus for EA:

In this 2009 video he said:

Imagine that you’re walking across a shallow pond and you notice that a small child has fallen in, and is in danger of drowning […] Of course, you think you must rush in to save the child. Then you remember that you’re wearing your favorite, quite expensive, pair of shoes and they’ll get ruined if you rush into the pond. Is that a reason for not saving the child? I’m sure you’ll say no it isn’t, you just can’t compare the life of a child to the cost of a pair of shoes, no matter how expensive. […] But think about how that relates to your situation in the world today. There are children whose lives you can save […] Nearly 10 million children die every year from avoidable, poverty related causes. And it wouldn’t take a lot to save the lives of these children. We can do it. For the cost of a pair of shoes, perhaps, you could save the life of a child. […] There’s some luxury that you could do without. And with that money, you could give to an organization to reduce extreme poverty in the world, and save lives of children. […] I think that this is what we ought to be doing.

Source

Most people can intuitively see the logical conclusion of this line of thinking: anything I spend my money on above mere subsistence could be construed as an unnecessary luxury, and therefore I ought to give away anything I don’t require for my own survival. Interestingly, as the above Wisecrack video discusses, some in the EA movement have taken a very different view: that people ought to try to get rich, or to support people who are already rich, so that they have more money to give to charity. This is known as earning to give.

Both of these notions – self-imposed austerity or fortune-seeking in the name of giving as much away as possible – are interesting as potential practical (in the sense that they are concrete actions a person can take, rather than something purely theoretical) approaches to ethics. Most people likely disagree with them. The latter – getting as rich as possible in order to give more to charity – seems obviously self-serving. Indeed, the case of SBF illustrates why such an approach is likely unworkable: once people get rich, they tend to want to stay that way, and it’s likely that greedy people would use such thinking as a rationalization for getting rich in the first place.

It’s the former, however, that is more difficult to argue against, at least as far as logical consistency goes. It’s true that nobody needs a TV or computer or phone or even a car to live. People have lived without those things for most of human history, and right now there are billions who still live without them. And so, how could someone show that the following argument is unsound?

D1: mere subsistence is the condition under which a person has access to enough resources to survive, no more and no less
D2a: all resources above or beyond mere subsistence are the conditions of surplus
D2b: surplus is (A) that which can be relinquished without causing an individual undue suffering and (B) that which when received confers well-being with diminishing returns (as conditions move away from mere subsistence)
D3a: all resources below mere subsistence are the conditions of poverty
D3b: the conditions of poverty are (A) the condition under which the relinquishing of resources does cause undue suffering and (B) the conditions under which the receiving of resources confers well-being with increasing returns (as conditions move away from mere subsistence)
D4: morality consists of the actions and behaviors that (A) maximize well-being and minimize suffering (definition of morality in utilitarianism) and (B) are logically and physically possible for a person to do (one cannot be morally responsible for those things they cannot influence or control)

P1: given the above definitions, if those living under conditions of surplus do actually redistribute their resources to those living under conditions of poverty, then this will increase the well-being of those living under conditions of poverty to a greater extent than it will increase suffering of those living under conditions of surplus, resulting in a net gain of well-being (from D2b and D3b); in other words, such actions will maximize well-being and minimize suffering and are therefore morally good by D4
P2: if people desire to be moral, then people ought to do that which maximizes well-being and minimizes suffering
P3: if it is logically and physically possible for those living under conditions of surplus to redistribute their resources to those living under conditions of poverty, then it is morally good for those living under conditions of surplus to redistribute resources to those living under conditions of poverty
P4: it is logically and physically possible for those living under conditions of surplus to redistribute resources to those living under conditions of poverty
C: therefore, those living under conditions of surplus ought to redistribute resources to those living under conditions of poverty
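
Put schematically (the letters are just my shorthand for the claims above, with R = “those under conditions of surplus redistribute resources to those under conditions of poverty,” Poss = “R is logically and physically possible,” G = “R is morally good,” and O = “R ought to be done”), the core inference runs:

$$
\begin{aligned}
&\text{P3: } \mathit{Poss} \rightarrow G \\
&\text{P4: } \mathit{Poss} \\
&\text{hence, by modus ponens: } G \\
&\text{P2 (roughly): } G \rightarrow O \text{, for anyone who desires to be moral} \\
&\text{C: } \therefore\ O
\end{aligned}
$$

with P1 and the definitions D1 through D4 doing the work of motivating P3, i.e., explaining why redistribution, if possible, really is morally good (it yields a net gain in well-being).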

The following video covers this argument well:

Possible rejoinders to this approach might be that we ought to help our loved ones and neighbors before we start thinking too much about strangers on the other side of the world. In our everyday lives this is certainly what we do. Someone being one’s friend can be defined by one’s having greater concern for that person’s well-being than the well-being of most other people in the world. This, however, is exactly the kind of sentimentality that EA might argue against. Why is it, aside from my own personal emotions, that the well-being of 10 of my loved ones should be more important to me than the well-being of 1,000 strangers? In a trolley problem, if it’s between one of my loved ones and five strangers, should I save my friend at the expense of the five strangers? If not, then why would I buy my friend some widget they don’t need for Christmas when I could have helped feed a starving person with that same money?

 

The best argument against such an approach comes from practical considerations: the vast majority of people are likely never going to be motivated to constantly work for the welfare of strangers, meaning that it just isn’t going to happen. But even if it did happen, this lack of motivation would reduce the efficiency of such a system. If, say, the government taxed away everything above mere subsistence and gave it to even poorer people in other countries, what motivation would I have to find a more challenging job in order to make more money? If such a totalitarian government were not quickly overthrown, most people would probably end up doing only just enough to get by, and anything more would find its way to various black markets.

Even MacAskill agrees with this. He says in an interview:

…we should be actualists. Actualism versus possibilism is a question in moral philosophy, which can be framed like this: when I decide what I ought to do today, should I take into account my own future weakness of the will? Actualism says that we should. If you give away all of your savings at once today—which you could technically do—you’ll probably get so frustrated that you’ll simply stop giving in the future. Whereas if you decide to give 10% of your earnings, this commitment will be sustainable enough that you’ll continue doing it over many years in the future, resulting in a higher overall amount, and thus a higher impact. Therefore, an actualist says that you should give only 10%.

MacAskill brings up actualism vs. possibilism. This is the debate in ethics that asks whether a person ought to do the best possible thing in every situation, regardless of what they might actually do in the future, or whether they ought to make decisions based on what they are likely to actually do in the future. The popular thought experiment is called Professor Procrastinate. Professor Procrastinate is the foremost expert in their field, but they have a proclivity to procrastinate, often to the point of failing to get things done. A student asks Professor Procrastinate to look over their thesis, which is on the very subject for which Professor Procrastinate is the foremost expert, and which is due in a week. What should Professor Procrastinate tell the student? The possibilist camp says that Professor Procrastinate should say yes to the student: the best possible action now is to say yes and the best possible action later is to look over the thesis, but the latter depends on the former. The actualist camp says that Professor Procrastinate should say no, since if they tell the student yes and then procrastinate the time away, not looking over the thesis, this prevents the student from going to someone else who, even though less qualified, could still give useful feedback. In other words, because Professor Procrastinate knows that there is a very high likelihood that their future behavior will be procrastination, they ought to make their decision now in light of this knowledge.

One can see how the actualist position, although appearing pragmatic on the surface, could be used to rationalize our worst impulses. Why not just say “no” to anything I don’t want to do and have even a chance of not doing? Why not excuse all my bad behavior by blaming my future self? Of course, possibilism has its issues as well, such as taking on more responsibility than one can handle. Indeed, the argument laid out above (D1 through C) takes a very possibilist position, where in each moment a person ought to do the most moral thing, even if it leaves them in a position of mere subsistence. This is why MacAskill favors actualism: even he knows that the conclusion of that argument is untenable and exceedingly unlikely to work in practice.
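
MacAskill’s 10% example can be made concrete with a quick back-of-the-envelope sketch; every figure below is invented purely for illustration:

```python
# Back-of-the-envelope comparison of two giving policies (illustrative numbers only).

savings = 50_000          # hypothetical current savings
annual_income = 60_000    # hypothetical annual income
years = 30                # giving horizon

# Possibilist-flavored policy: give away all savings now, but assume an 80%
# chance of burning out and giving nothing in later years.
p_burnout = 0.8
expected_all_now = savings + (1 - p_burnout) * 0.10 * annual_income * years

# Actualist-flavored policy: pledge a sustainable 10% of income every year.
expected_ten_percent = 0.10 * annual_income * years

print(f"Expected total giving, give-it-all-now policy: ${expected_all_now:,.0f}")
print(f"Expected total giving, sustained 10% pledge:   ${expected_ten_percent:,.0f}")
```

With these made-up numbers the sustained pledge wins handily, which is the actualist’s point.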

Yet, if we are to take the actualist position, what threshold should we impose? Suppose I must choose between morally salient states of affairs X and Y, where X is the more moral but requires me to take actions F in the future that I am much less likely to actually undertake. How confident must I be that I will actually undertake F, making X actual, before choosing X is justified? To frame this, let P(F) be the probability that I take the future actions F, and let P(X|F) be the probability that the moral state of affairs X will be actualized given that I take the future actions F.

We thus want to know the probability that X is actually brought about through my follow-through, P(X and F) = P(X|F) × P(F), and how high that probability must be for me to be justified in choosing X. We would also need to know, for instance, whether choosing X now but neglecting to do F leads to an outcome significantly worse than choosing Y and succeeding in taking the future actions G that will actualize Y, or choosing Y and failing to do G. In other words, we need to consider four possible outcomes:

A: I choose X and carry out F, so X is actualized
B: I choose Y and carry out G, so Y is actualized
C: I choose X but fail to carry out F
D: I choose Y but fail to carry out G

We’ve defined the situation so that, morally speaking, A > B, but is D > C? How much greater is B than D? These are things that need to be considered when performing this utilitarian calculus. Another wrinkle, of course, is how well one can predict whether one will actually take actions F (or G), and whether engaging in these utilitarian considerations itself changes what one will do in the future (i.e., if I’m faced with choosing between X and Y, knowing that X requires I do F and Y requires I do G, will this consideration sway me to be more or less likely to do F should I choose X?). This is the age-old problem in which making a prediction about the future causes someone to change their behavior and thereby render the prediction inaccurate.
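
As a toy illustration of the comparison (all utilities and probabilities below are invented for the example), the actualist weighs each choice by the likelihood of actually following through:

```python
# Toy expected-value comparison for the actualist choice between X and Y.
# All utilities and probabilities are invented purely for illustration.

# Moral value of the four outcomes defined above.
A = 100   # choose X and carry out F (X actualized)
B = 60    # choose Y and carry out G (Y actualized)
C = 10    # choose X but fail to do F
D = 40    # choose Y but fail to do G

p_F = 0.2   # probability I actually follow through with F
p_G = 0.9   # probability I actually follow through with G

expected_X = p_F * A + (1 - p_F) * C   # expected value of choosing X now
expected_Y = p_G * B + (1 - p_G) * D   # expected value of choosing Y now

print(f"Expected value of choosing X: {expected_X:.1f}")
print(f"Expected value of choosing Y: {expected_Y:.1f}")
# With these numbers the 'less moral' option Y comes out ahead, which is the
# actualist's worry about relying on willpower one probably won't have.
```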

Another issue that crops up in EA is that, while my actions (or inactions) now could be construed as contributing to the detriment of thousands, millions, or even billions of people around the world, what about the many more billions who will live in the future? This motivates a philosophy known as longtermism (popular in EA), which says that all future people ought to be added to the ledger of our utilitarian calculus. Right now there are roughly 8 billion people alive. Throughout human history some 110 billion people have lived and died. But, depending on how long humanity persists, there could be hundreds of billions, trillions, or even more people still to come, i.e., even all the people who have lived and died up until now could turn out to be just a tiny fraction of the number of people who will ever live.

I wonder if David Benatar’s asymmetry argument in favor of anti-natalism is a good rejoinder:

One of my arguments for the conclusion that coming into existence is always a harm appeals to an asymmetry between pleasures and pains (and between benefits and harms more generally):

1) The presence of pain is bad; and
2) The presence of pleasure is good.
3) The absence of pain is good (even if that good is not enjoyed by anyone); but
4) The absence of pleasure is not bad unless there is somebody for whom this absence is a deprivation.
We can employ this asymmetry, which I shall call the basic asymmetry, in order to compare existing and never existing.

David Benatar’s asymmetry argument

In other words, if we just stopped making more people we wouldn’t have to worry about the untold suffering that’s likely to occur in the future. As a bonus, we would effectively make ourselves prodigiously more moral by ensuring that what we do will not cause suffering to anyone in the future, since the future will be uninhabited – it would essentially have the moral weight of bombing the far side of the moon (even less since it wouldn’t cost us tax dollars to do something so unnecessary).

As somewhat of a side note, I wonder: suppose someone buys into longtermism but does not accept anti-natalism as a solution (even just hypothetically, since it is very unlikely to happen in practice), presumably because they think there is some intrinsic good inherent in human life – whether via some deontological principle of individuals being ends and not merely means, or because life gives rise to the potential for pleasurable experiences. Is that person then also committed to being pro-life? Perhaps even anti-contraception? Preventing conception or birth is preventing another human life, with its intrinsic value. The argument that a fetus is not yet a person wouldn’t really work in view of longtermism, since we are supposed to take all future people (who, before they exist, are only potential people – indeed, less real than a fetus) as holding the same moral value as currently living people. And if the absence of people isn’t a good (i.e., the anti-natalist position isn’t a morally good one), then preventing conception and birth is morally wrong. Thus, it stands to reason that such a longtermist must accept a pro-life, anti-contraception stance if they are to be logically consistent.

But anyway, this all seems very daunting, especially since we don’t have any idea of how our actions today could affect someone tomorrow, much less people 100 years from now, or 1,000 years from now, or 100,000 years from now, and so on. Who is to say what the right thing to do is if we are obligated to take the well-being of future people into consideration when making moral decisions? What if, even with the best intentions, we’re just wrong about what we ought to do? What if we take actions that seem like the right thing to do for the next 100 years, but thousands of years after that they are shown to have been the wrong thing to do? What if some heinous atrocity that happens now actually leads to some much greater outcome in the future? It’s all enough to make a person prefer to adopt the Daoist philosophy of Wuwei:

Or, perhaps, we ought to abandon, or at least temper, our consequentialist ethics with some deontological or virtue ethics. Maybe we can’t just add up the pleasure our possible actions will bring to the world and subtract from it the sum of all the suffering our possible actions will cause and look for the global maximum. Maybe there aren’t enough broken toes to equal the suffering of one person dying of cancer. And besides, which version of pleasure/pain should we go by, the experiential self (how we feel from moment-to-moment) or the narrative self (the story we tell about ourselves via memory)? Suffering in the moment can often lead to happier narrative identities (think of people with young children), and vice versa (think of addicts). So maybe the good life is more than just maximizing pleasurable subjective states.

Sam Harris seems to disagree. Not only does he reject the sort of naive utilitarianism of Σ(pleasure) – Σ(pain) = utility, but he also claims that if you talk to the deontologist or virtue ethicist long enough, you will find them smuggling in consequentialism.

In the first case, the reason we can reject naive utilitarianism is that we wouldn’t want to live in a world run by such a calculus, and we wouldn’t want it for consequentialist reasons. The popular thought experiment is this: if a doctor has five patients, each dying from the failure of a different organ, is it ethical for the doctor to painlessly kill a healthy patient who is there for routine surgery and harvest that patient’s organs in order to save the five other people? In the utilitarian calculus this is equivalent to the trolley problem: kill one person to save five. But, Harris argues (and I would agree), nobody would want to live in a world where every time they go to the doctor there is a good chance they’ll be murdered and have their organs harvested. Such a world, Harris says, would cause a lot of anxiety and fear, and probably cause people to stop going to the doctor, thereby further increasing suffering. And so, we can reject the naive utilitarian calculus of the doctor thought experiment for consequentialist reasons.

In the second case – that deontology and virtue ethics smuggle in consequentialism – I would also tend to agree with Harris, but perhaps not for the reasons he would think. I think ethical philosophy, while interesting, and perhaps useful in some ways in the long run (helping to steer the cultural zeitgeist of moral sentiments), is mostly just rationalization by motivated reasoners. We humans have an intuitive sense of what is right and wrong (as I’ve discussed at length elsewhere), and moral philosophy is largely an attempt to justify these intuitions.

Some people seem to think that the fact that all systems of moral philosophy tend to broadly agree on things (e.g., that murder, rape, theft, lying, cheating, etc. are wrong in most cases) supports the propositions of moral realism (I discuss moral realism and moral anti-realism at length in my post on the scientific arguments against the existence of God). I think it’s just the opposite. I think that humans evolved to have such moral prohibitions intuitively, and this is why we essentially start with propositions affirming these intuitions and then try to justify those propositions.

The reason we have these intuitions is that we evolved to have them. This means that people who did not have our moral intuitions were less likely to pass on their genes – perhaps in our long hunter-gatherer lineage people who lacked such moral intuitions were ostracized and died as a result of not having community support, or their abrasive behavior prevented them from acquiring a mate or successfully raising children long enough to pass on their genes any further. The point is, it was consequences that instilled in us our moral intuitions. As such, it’s consequences we ultimately use to justify our moral intuitions. The reason, for instance, that Kant’s categorical imperative is rational is that it comports with our nature: the maxim “it is okay to murder another person” cannot be rationally universalized because humans depend on one another for survival. Had we been solitary creatures, then such a maxim would be rational, because other people would be an encroachment on our resources (which is why many predators will kill others of their own species on sight, or at least threaten to in order to scare them off).

Clearly I fall into the camp of moral anti-realism. One of the issues I have with moral realism is that the definition seems unclear. There is the position of objective moral realism, which says that morality exists mind-independently, which to me seems an untenable position. Then there is the moral realism espoused by Sam Harris, which is essentially that morality is real because it really exists in people’s minds, i.e., it is objectively the case that certain things will bring about greater well-being while other things will bring about greater suffering. Harris likes to use the following thought experiment: imagine two possible worlds, one of which is populated by billions of conscious, sentient beings who only ever experience the worst and most irredeemable (purely gratuitous) suffering imaginable at all times; the second possible world is populated by an equal number of conscious, sentient beings who experience exactly the same suffering except for a single minute each day in which the suffering ceases. Which of those two worlds would be better (or, at least, less worse) to inhabit? Harris argues that there is no moral relativism here, that the second world is objectively better (less worse) than the first world. It is human suffering and well-being that act as the objective metric by which to adjudicate morality, and thus cultures that produce less well-being and/or more suffering are objectively less moral than cultures that produce more well-being and/or less suffering.

To me, although Harris calls himself a moral realist, this seems like a moral anti-realist position. Morality is not mind-independent. Had our minds been constituted differently than they are, then morality would have been different. That our moral intuitions are as they are is contingent upon how our minds happen to be constituted. The mind-independent objective moral ontology of Jaron Daniel Schoone or Michael Huemer is missing from Sam Harris’s moral realism.

To take this back to effective altruism and longtermism, to me it seems like the logical conclusion of such philosophies ought to be that we are morally obligated to replace humankind with conscious AI capable of much grander subjective experiences than humans. If we take Sam Harris’s view that increasing the well-being of conscious entities is an objective moral imperative, then the creation of as many conscious beings as possible with the greatest capacity for well-being as possible should be the ultimate realization of objective morality. To argue against such a conclusion is to say that there is something other than consequentialism in our morality, that human consciousness has some moral supremacy over objectively greater well-being, that it is better for humans to potentially suffer greatly than for conscious AI to have guaranteed (or at least more probable) well-being.

Harris might argue that it would cause great suffering to live in a world in which our primary objective is to make one’s species extinct, similarly to how we would not want to live in a world where doctors might murder us to harvest our organs. But this would still be considering a super advanced conscious AI as something akin to humans. It is not logically impossible to create a population of trillions, quadrillions, quintillions, or more of AI with conscious experiences that are to humans what human conscious experience is to ants. Even just one such AI would have moral value greater than the population of earth. And so, just like one might kill millions of ants without any qualms, our human suffering would be meaningless in the face of such an advanced conscious experience.

It might also be argued that, even if creating an AI of such vast, rich conscious experience isn’t logically impossible, it is still exceedingly improbable. But, if we take the view of expected utility, then we would have for super advanced conscious AI

$$U_{AI} = P_{AI} \times E_{AI}$$

And for humanity

$$U_{H} = P_{H} \times E_{H}$$

where $U_{AI}$ and $U_{H}$ are the expected utilities of super advanced conscious AI and of humans, respectively; $P_{AI}$ and $P_{H}$ are the probabilities of super advanced conscious AI and of humans existing, respectively; and $E_{AI}$ and $E_{H}$ are each a function of the richness of the well-being the respective entities experience multiplied by the number of such entities:

$$E_{H} = R_{H} \times \text{(number of current and future humans)}$$

$$E_{AI} = R_{AI} \times \text{(number of current and future super advanced conscious AI)}$$

We know that $P_{H} = 1$, since we know that humans exist, so the issue is to find $E_{H}$, $P_{AI}$, and $E_{AI}$. Let’s say that the probability of creating super advanced conscious AI is 1 in a billion, so $P_{AI} = 10^{-9}$. Let us then assume that $E_{H} = 100$, just for the sake of argument. Then we would need at least $E_{AI} = 10^{11}$, or 1 billion times greater than the well-being of humanity, for $U_{AI}$ to match $U_{H}$. The question we could ask is: how many ants are worth one human life? Is it a billion? A trillion? A quadrillion? If we are assuming that a super advanced conscious AI could have conscious experiences that are billions or trillions of times greater than any human’s, and that we could have quadrillions or quintillions of such beings, then $P_{AI}$ would have to be quite low indeed before the calculus stopped favoring the AI.
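
To make the arithmetic explicit, here is the same back-of-the-envelope calculation in code; every number is stipulated for the sake of argument, just as above:

```python
# Illustrative expected-utility comparison between humanity and a hypothetical
# super advanced conscious AI. All numbers are made up for the sake of argument.

P_H = 1.0        # humans certainly exist
E_H = 100        # stipulated total well-being of humanity (arbitrary units)
U_H = P_H * E_H  # expected utility of humanity

P_AI = 1e-9      # assumed 1-in-a-billion chance of creating such an AI
E_AI_needed = U_H / P_AI   # E_AI required for the AI gamble to break even

print(f"U_H = {U_H}")
print(f"E_AI needed to match humanity: {E_AI_needed:.0e}")  # 1e+11

# If the AI's total well-being could plausibly exceed that threshold
# (e.g., quadrillions of beings, each far richer in experience than a human),
# the expected-utility calculus starts to favor the AI.
E_AI = 1e15      # an arbitrary, optimistic guess
U_AI = P_AI * E_AI
print(f"U_AI with E_AI = 1e15: {U_AI:.0f}  (vs U_H = {U_H})")
```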

One might argue that the choice is not just between developing super advanced conscious AI and not developing it; it is also possible that, in attempting to develop such AI, we might instead create an AI that is super advanced but not conscious and that still causes a great amount of suffering to others, or one that is conscious but experiences mostly (or even only) suffering. We would then need to update the above formula to be

$$U_{AI} = (P_{AI(g)} - P_{AI(b)}) \times E_{AI}$$

where $P_{AI(g)}$ is the probability of the good outcome I’ve been discussing and $P_{AI(b)}$ is the probability of the possible bad outcomes just mentioned. To say that it wasn’t worth attempting to get the good outcome, we would still need $U_{AI} < U_{H}$. In other words, we wouldn’t need $U_{AI}$ to be negative, only for $P_{AI(b)}$ to be large enough that $U_{AI}$ falls below $U_{H}$.
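
Continuing the same made-up numbers, the break-even condition can be sketched like this:

```python
# Continuing the illustrative numbers above: how large would P_AI(b) have to be
# before attempting the AI is no longer worth it, i.e., U_AI < U_H?

U_H = 100          # expected utility of humanity from before
E_AI = 1e15        # assumed total well-being of the good AI outcome
P_AI_good = 1e-9   # assumed probability of the good outcome

# U_AI = (P_AI_good - P_AI_bad) * E_AI < U_H
#   =>  P_AI_bad > P_AI_good - U_H / E_AI
P_AI_bad_threshold = P_AI_good - U_H / E_AI
print(f"P_AI(b) must exceed roughly {P_AI_bad_threshold:.3e} for U_AI to fall below U_H")
```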

This, of course, is all very subjective, not to mention the issues with expected utility, such as the St. Petersburg Paradox. But I would argue that any sort of longtermist calculus would face problems similar to those in this analysis of super advanced conscious AI. We would then instead have a formula like

$$U_{LT} = (P_{LT(g)} - P_{LT(b)}) \times E_{H}$$

where $U_{LT}$ is the expected utility in the long term, $P_{LT(g)}$ is the probability that our decisions now will have good long-term effects on human well-being, $P_{LT(b)}$ is the probability that our decisions now will have bad long-term effects on human well-being, and $E_{H}$ is a function of the richness of human well-being multiplied by the number of all humans from the present and into perpetuity (i.e., for as long as at least a single human exists): $E_{H} = R_{H} \times \text{(number of current and future humans)}$.
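
The same kind of toy arithmetic applies to the longtermist version; the sketch below (with entirely invented inputs) mainly shows how sensitive the result is to the probability estimates:

```python
# Toy longtermist expected-utility calculation. All inputs are invented
# to illustrate how sensitive the result is to the probability estimates.

R_H = 100                  # stipulated richness of human well-being (arbitrary units)
future_humans = 1e14       # arbitrary guess at the number of current and future people
E_H = R_H * future_humans  # total human well-being at stake

def long_term_utility(p_good: float, p_bad: float, e_h: float = E_H) -> float:
    """Expected long-term utility of a decision, per the formula above."""
    return (p_good - p_bad) * e_h

# Tiny shifts in the estimated probabilities swing the answer enormously.
print(long_term_utility(p_good=0.0002, p_bad=0.0001))  # ≈ +1e12
print(long_term_utility(p_good=0.0001, p_bad=0.0002))  # ≈ -1e12
```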

EDIT (2/24/2023): the following video does a good job of discussing Effective Altruism

The video brings up a criticism discussed in the Wisecrack video that inspired this post, but one I didn’t touch on much in the original post, and that is that Effective Altruism (EA) is not political enough. And, of course, when critics say it is not political enough, what they mean is that it is not Marxist enough. The criticism is essentially: charity does nothing to overturn the capitalist system that makes charity necessary in the first place, and so EA is not getting at the root cause of the problems it seeks to help. It is therefore (A) inherently conservative (i.e., not Marxist/leftist), and (B) beneficial to the wealthy.

This criticism is valid, yet it might have more teeth if it weren’t for the fact that leftist revolutions rarely, if ever, make things better for people. The usual response to pointing to places like the Soviet Union, Communist China, Cambodia, Venezuela, North Korea, and so on is either that they didn’t do “real” communism as Marx intended, or that things weren’t as bad as we think they were/are in those places, or perhaps that reactionaries within and without those societies are what brought on the atrocities, or that with the lessons learned from those disasters we have now landed on the socialist theory that will actually work. These are dubious claims with a lot at stake.

None of this is to defend EA in general or the wealthy-friendly “earn to give” in particular. I wouldn’t be surprised if most proponents of EA were little more than cynical opportunists who see it as a way of enriching themselves while fooling people into thinking they’re virtuous. My point is that there is good reason not to take a Marxist/leftist turn in EA: all Marxist/leftist ideologies, when implemented, have been a disaster in practice. If one is attempting to do the most good, steering clear of Marxist/leftist revolutions seems like a precondition. I’m not defending capitalism against all criticism here, only against a Marxist/leftist critique that would seek to turn the dumpster fire of capitalism into the veritable firestorm of leftist totalitarianism.