Book Review of “Rationality: What It Is, Why It Seems Scarce, Why It Matters” by Steven Pinker

Rationality: What It Is, Why It Seems Scarce, Why It Matters by Steven Pinker; Viking (September 28, 2021); 432 pages

Steven Pinker is a great public intellectual and proponent of Enlightenment thinking. I genuinely enjoy reading his work, even though his optimism clashes with my pessimism. His newest book, Rationality: What It Is, Why It Seems Scarce, Why It Matters, is another treatise in the vein of Enlightenment optimism. I find little to disagree with in the bulk of the book, particularly the first nine chapters. Indeed, anyone who wishes for a clear, concise course on rationality should read this book, particularly chapters three through nine. It’s difficult to find treatments of these subjects – formal logic, Bayesian probability, game theory, and so on – that are comprehensive without being overly complicated.

In the first chapter Pinker lays out what I take to be his overarching thesis: humans are not irrational (at least not in the way most people would accuse them of being); in fact, many of the things that appear irrational are actually quite rational. The issue, Pinker argues, is that when psychologists like Daniel Kahneman and Amos Tversky conclude that humans are hopelessly irrational, it is because the questions asked in their experiments do not appeal to the sorts of situations our hunter-gatherer ancestors would have encountered. When a logic puzzle like the following is posed, for example, people get it wrong:

Suppose the coinage of a country has a portrait of one of its eminent sovereigns on one side and a specimen of its magnificent fauna on the other. Now consider a simple if-then rule: “If a coin has a king on one side, then it has a bird on the other.” Here are four coins, displaying a king, a queen, a moose, and a duck. Which of the coins do you have to turn over to determine whether the rule has been violated?

Page 12

When most people are given this question, they say to flip over the king, or maybe the king and the duck, even though the correct answer is the king and the moose. The reason has to do with the way conditionals (if-then statements) work: a conditional is violated only when the antecedent (the “if” part) is true while the consequent (the “then” part) is false. Flipping the king checks whether a true antecedent hides a false consequent (the other side must be a bird); flipping the moose checks whether a false consequent hides a true antecedent (the other side must not be a king). It’s the second case that mostly confuses people: why not flip over the bird coin instead? Because “if king, then bird” does not imply the converse, “if bird, then king,” and so finding out what is on the other side of the duck tells you nothing.
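The logic of which coins are worth flipping can be checked with a quick brute-force sketch (my own illustration, not Pinker’s; the face names and helper functions are assumptions for the example):

```python
# Each coin has a sovereign on one side and fauna on the other, so the
# hidden side of a sovereign face is some animal, and vice versa.
# A coin must be flipped only if some possible hidden face could
# violate the rule "if king on one side, then bird on the other".

SOVEREIGNS = {"king", "queen"}
FAUNA = {"moose", "duck"}
BIRDS = {"duck"}  # the birds among the fauna

def violates(sovereign_side, fauna_side):
    """The rule is violated iff the sovereign is a king and the fauna is not a bird."""
    return sovereign_side == "king" and fauna_side not in BIRDS

def must_flip(visible_face):
    """Flip a coin only if some possible hidden face would violate the rule."""
    if visible_face in SOVEREIGNS:
        return any(violates(visible_face, hidden) for hidden in FAUNA)
    return any(violates(hidden, visible_face) for hidden in SOVEREIGNS)

print([face for face in ("king", "queen", "moose", "duck") if must_flip(face)])
# prints ['king', 'moose']
```

The queen and the duck drop out for the same reason: no hidden face behind either of them can produce a king paired with a non-bird.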

Pinker argues, however, that this is a “peculiar” task, and that is why people are bad at it. If people are given the same sort of logic puzzle about something useful, something they might encounter in the real world, they do much better. He gives this example:

Suppose the Post Office sells fifty-cent stamps for third-class mail but requires ten-dollar stamps for Express Mail. That is, properly addressed mail must follow the rule “If a letter is labeled Express Mail, it must have a ten-dollar stamp.” Suppose the label and the stamp don’t fit on the same side of the envelope, so a postal worker has to turn envelopes over to check to see if the sender has followed the rule. Here are the four envelopes:

[Express] [3rd Class] [50 cents] [10 dollars]

Imagine that you are a postal worker. Which ones do you turn over?

Page 14-15

This is exactly the same problem as the coins, but most people get it right (you flip the [Express] and [50 cents] envelopes). People are more likely to succeed with this framing because it appeals to our sense of fairness: we do not want anyone to get away with enjoying the benefit (Express Mail) without paying the cost (paying only 50 cents instead of the “fair” 10 dollars everyone else must pay).

Many other examples like this are given in the first chapter, including but not limited to:

  1. Simple but deceiving math questions, like the one about spending $1.10 on two items, one of which costs $1.00 more than the other: how much does each cost? The answer: one costs 5 cents and the other $1.05.
  2. The famous Monty Hall problem. The answer: after the host reveals one door, switch your choice.
  3. What’s more likely: that Saudi Arabia acquires a nuclear weapon? Or that after Iran tests its own nuclear weapon, Saudi Arabia develops its own nuclear weapon? The answer: that Saudi Arabia acquires a nuclear weapon (one thing occurring is more likely than a conjunction of things occurring, even if the story about why Saudi Arabia acquires the nuclear weapon appeals to our sense of narrative).
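The Monty Hall answer in the list above is easy to verify with a Monte Carlo simulation (again my own sketch, not from the book):

```python
# Simulate the Monty Hall game many times: switching wins about 2/3
# of the time, while staying with the first pick wins about 1/3.
import random

def play(switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = rng.choice(doors)
        pick = rng.choice(doors)
        # The host opens a door that hides a goat and isn't your pick.
        opened = rng.choice([d for d in doors if d not in (pick, car)])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d not in (pick, opened))
        wins += (pick == car)
    return wins / trials

print(play(switch=True), play(switch=False))  # roughly 0.67 and 0.33
```

The intuition the simulation confirms: your first pick is right 1/3 of the time, so the switch wins in the remaining 2/3 of cases, because the host’s reveal funnels all of that probability onto the one door he leaves closed.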

Thus, Pinker argues, people are capable of being rational; it is just a matter of whether the problems they are handed resemble the problems they would face in the real world. After comparing (supposed) irrationality to visual illusions, like the shading illusion and an angle illusion, he says:

In the same way [as visual illusions], cognitive illusions like the ones in this chapter may arise from our setting aside the literal statement of a question as it comes into our brains and thinking through to what a speaker in the social world would reasonably ask. Doing arithmetic on deceptively conspicuous numbers, verifying a proposition about a handful of tokens, choosing from clues offered by a sly and omniscient master, and following a vivid character sketch to a literal but implausible conclusion are a bit like judging the angles and shades of gray in the printed page. They lead to incorrect answers, yes, but they are often correct answers to different and more useful questions. A mind capable of interpreting the intent of a questioner in context is far from unsophisticated. That’s why we furiously hit “0” and scream “Operator!” into the phone when a bot on a help line reiterates a list of useless options and only a human can be made to understand why we called.

Page 32-33

In the second chapter Pinker makes his argument for why people ought to be rational. He argues, for instance, that even asking the question “why should humans be rational?” has already won the argument in favor of rationality. Asking for a reason why we ought to do one thing (use reason) instead of another (go by gut feelings) has already ceded the debate to the side of rationality by virtue of asking for reasons.

To go further, though, Pinker says that we can define rationality in broadly two ways:

  1. Determining what goals a person ought to have
  2. Determining the proper course of action to attain those goals

He uses the analogy of Romeo trying to get to Juliet when a wall is placed between them, compared to iron filings trying to get to a magnet when a barrier is placed between them. The iron filings will simply smush up against the barrier. Romeo, on the other hand, will climb the wall, go around it, or take some other course of action that gets him to Juliet, because unlike the iron filings, Romeo has rationality and can therefore determine the proper course of action needed to attain his goal.

Pinker concedes that number 1 above, determining what goals a person ought to have, is not always strictly rational. Goals can come from non-rational things like wants and desires. Chapter 2 also goes into interesting areas such as rational irrationality (e.g., making yourself look like a madman so that people fear you and give you what you want, as dictators do in diplomacy); taboos (e.g., that some issues, like certain racial disparities, are not to be broached because doing so can lead to worse outcomes); and morality, which Pinker argues is a rational goal, since most moral codes can be distilled down to the Golden Rule: don’t do to others what you don’t want done to yourself. Thus, if you act kindly and generously toward other people, you can expect them to do the same for you, which increases your well-being.

I am not going to go into great detail on chapters 3 through 9 here, even though I think they are the best part of the book. They give brief, easy-to-follow introductions to topics in logic, probability, Bayesian reasoning, rational choice and expected utility, statistical decision theory, and game theory. These chapters alone make the book worth the price, despite my disagreements with Steven Pinker’s optimism (the primary area where I part ways with him). Pinker does an excellent job of introducing the reader to these topics and showing real-world examples of how they go wrong and how they ought to be used in order to think more rationally.

Chapter 10, as Pinker says in its very first line, is the chapter people probably bought the book for. It’s titled “What’s Wrong With People?” – a question just about everyone asks from time to time (often in the sense of “why do other people do things that baffle and frustrate me?”).

Pinker says early in the chapter that the irrational beliefs held by many people cannot be attributed to the logical and statistical fallacies covered in chapters 3–9, to social media, or to blaming one irrationality on another (such as holding a false belief because it brings comfort). His arguments for rejecting these explanations do not strike me as all that convincing. He supports his rejection of logical and statistical fallacies with a single line saying that nothing in cognitive psychology could have predicted QAnon, even though he had just spent a paragraph explaining how superstitions arise from these very failures of our rationality. He says we should reject social media as the cause of widespread irrationality because irrationality has been around for as long as humans have. This, of course, cuts against his own argument that the world is becoming more rational, a refutation that could go something like this:

  1. If the world is becoming more rational, then it is because people are becoming more rational
  2. People are not becoming more rational (as Pinker argues, we still suffer the same irrationality as our ancestors)
  3. Therefore the world cannot be becoming more rational (modus tollens)

Social media, with algorithms that reward the spread of irrational belief, has kept people just as irrational as they were before everyone had all of human knowledge at their fingertips and every opportunity to learn from millennia of human thought on how to think critically, despite progress in our overall understanding of rationality.

Pinker’s third rejection – the rejection of blaming one irrationality on another – seems bizarre given that doing so is precisely what he is about to do. Pinker lays the blame for the way rational people can be irrational at the feet of three phenomena: motivated reasoning, myside bias, and what he calls mythological thinking. All three are varieties of irrationality upon which Pinker blames the irrational beliefs held by many people (and he gives statistics early in the chapter on some of the irrational beliefs people do hold). The third, mythological thinking, is particularly striking, given that Pinker says it is exactly the sort of thinking we do to bring comfort to ourselves and a sense of understanding to the world – precisely what his third rejection names as an example of blaming one irrationality on another.

Although I disagree with Pinker’s rejected explanations for irrational human belief, I do agree with him on the three phenomena he does blame for irrational human belief: motivated reasoning, myside bias, and mythological thinking.

Motivated reasoning I have talked about in other posts. It is the propensity people have to reason like lawyers rather than scientists: we begin with a favored conclusion and then use our faculty of reason to justify that conclusion. This is similar to a lawyer who already has a conclusion – the innocence of their client, or the guilt of the person they are prosecuting – and then argues for that conclusion. This is unlike the (ideal) scientist, who gathers evidence and data, only forming a conclusion after all the data are in.

Myside bias is the propensity people have to be good critical thinkers when exposed to evidence that goes against a favored conclusion but credulous when exposed to evidence that supports a favored conclusion. This is observed, for instance, when Republicans and Democrats (in the sense of the Republican and Democratic parties in the United States) are exposed to some statistics on gun control. Pinker uses the following fake statistics:

With Gun Control: crime decreased in 223 cities; crime increased in 75 cities
Without Gun Control: crime decreased in 107 cities; crime increased in 21 cities

The big number, 223, sticks out, and the Democrat, who is in favor of gun control, will easily latch onto it. The Republican, however, is more likely to use rationality and see that the statistic actually supports the idea that gun control does not help with crime: it’s roughly a 3:1 ratio of decrease to increase with gun control and a 5:1 ratio without. The statistics are fake, though, and when experimenters flipped them around, they found the opposite: the Republican latched onto the big 223 number, while the Democrat was more likely to use rationality and see that, with the numbers flipped, there is a 5:1 decrease in crime with gun control and only a 3:1 decrease without. The point is that people are biased in favor of their own “side” of an argument (hence the name myside bias).
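The ratios in the paragraph above are easy to check directly (a trivial sketch of my own, just to make the arithmetic explicit):

```python
# Decrease-to-increase ratios from Pinker's (deliberately fake) table:
# with gun control, crime fell in 223 cities and rose in 75; without,
# it fell in 107 and rose in 21.
with_control = 223 / 75       # ~2.97, roughly 3:1
without_control = 107 / 21    # ~5.10, roughly 5:1
# The ratio is better *without* gun control, which is the detail the
# motivated reader misses when the big 223 catches their eye.
print(round(with_control, 2), round(without_control, 2))  # prints 2.97 5.1
```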

The mythological thinking phenomenon stands in contrast to factual thinking, at which most people are perfectly adequate. Factual thinking is how we navigate our everyday lives: we believe what we sense right in front of us, we believe the gas gauge in our car, we believe that if it is raining outside then we will get wet if we step outside, and so on. These are the kinds of things that, were we to doubt them, would make our lives extremely difficult.

Mythological thinking, however, is how we think about things more remote from our immediate concerns: what happened in the past, what might happen in the future, what is happening someplace far away, and so forth. This is where things like conspiracy theories thrive, because a person can formulate beliefs that have no effect on their everyday concerns. Thus, a person can nurture pet theories without it making their day-to-day life much different.

Pinker points out that we know these mythological beliefs are not factual because people rarely act on them: Christians don’t stone adulterers or persecute heretics in the town square; of the thousands who bought into Pizzagate, only one person had enough conviction to actually go in and attempt to rescue the children supposedly being victimized (everyone else who believed the conspiracy was apparently not outraged enough to do anything about a massive pedophile ring, content to sit by and let it continue); and the white anti-racists are not giving up their positions within institutions so that oppressed minorities can take their place. This mythological thinking, Pinker argues, is not meant to be factual in a literal sense; it acts as a sort of narrative that gives meaning to a person’s existence and produces a group of fellow believers among whom the person can experience a sense of belonging.

The last chapter of the book, titled “Why Rationality Matters,” is where Steven Pinker’s signature brand of Enlightenment optimism really comes in. A sizable portion of the chapter is dedicated to reiterating points he has made elsewhere about how life has become objectively better for more people than it was in the past: less poverty, starvation, war, and disease; more liberty for more people (women and ethnic minorities); and so on.

There are reasons to mistrust Pinker’s optimistic take.

I would point out, though, that even if we accept that, by most objective measures, the world is getting better for most people, this fails to take into account the psychological aspects. Pinker is eager to point out that our perception that things are getting worse is a canard manufactured by greater focus on the fewer things that are still bad: although crime is going down, there is more news coverage of crime; although there are fewer wars, we have more access to what is happening in those wars; and so on. The point is that we are hyperfocused on what is objectively a smaller problem.

What is telling, however, is that even though these things are objectively smaller problems, humans cannot help but focus on them. Indeed, when things get better, humans even invent new problems, such as an epidemic of racial bigotry within the objectively least racist countries in the world (not to say that racism is completely eradicated, only that the problem is smaller than the attention paid to it would lead one to believe). When I argue that humans are not evolved to live in the world we have created for ourselves, this is a major aspect of it: we have not evolved to live in a world where there are no dangers around every corner, where people are on average trustworthy, or where the lower tiers of Maslow’s Hierarchy of Needs are always fulfilled.

Facing and overcoming challenges to our safety all while viewing ourselves as plucky underdogs who can succeed despite being the victims of injustice is part of what makes us human. When those challenges are greatly reduced, or even eradicated, or when we are privileged beyond what our ancestors could imagine, this produces a crisis in our perceptions of ourselves. People either succumb to anomie and ennui, or they construct a mythology in which they and their in-group compatriots are facing down the horde, which leads to political extremism. Without having to worry about tangible resources, in-group esteem becomes the currency of the day, resulting in constant purity testing and virtue signaling.

I have to admit, however, that I envy Steven Pinker’s optimism. If I were rational instead of human, I would buy into the optimism: given that things are objectively better, if I hold the goal of being happy, then looking at the bright side would be an effective way of achieving that goal. Unfortunately, I am one of those pessimists whom Steven Pinker has been trying (unsuccessfully) for the past decade to convince that things are getting better. His argument can be criticized, but even if we accept his position at face value (that things are, in fact, getting objectively better), this does not entail that people’s subjective experience is better. And in the end, the reason to make things objectively better is to make things subjectively better – to increase pleasure and decrease suffering, which are subjective states. If making things objectively better fails to make things subjectively better, can we really count that as a success?