What is Morality?

Philosophy is, broadly speaking, divided into three general categories: metaphysics (what is the nature of existence and reality?), epistemology (what is knowledge and how is it possible?), and ethics (what is the nature of good and evil and how should people live their lives to accord with what is good?). It’s this latter one that tends to have the most practical impact on people’s lives. Indeed, things like business ethics, governmental ethics, medical ethics, bioethics, and so on are where the rubber really meets the road. Yet, they still fail to answer the very basic question of “what is good?” and “how should I live my life?” for our everyday, mundane situations.

In the philosophy of ethics, there is a further distinction between meta-ethics, normative ethics, and applied ethics. What I’ll be focusing on in this post will primarily fall into the realm of meta-ethics. Normative ethics has to do with the kinds of questions I posed above: how ought I behave? What kind of person should I be? What sense of “good” should I strive for? This is where the distinction between consequentialism and deontology arises. And then applied ethics tends to focus on specific cases, such as: what is the ethics of abortion or animal rights?

Where Does Morality Come From?

I’ve written before on the question of how we ought to live our lives (broadly speaking) and what the supreme principle of morality could be. But even those are quite general, and when it comes down to it, things like the utilitarian calculus or the categorical imperative aren’t techniques that a person employs in a moment of decision. Philosophies like these can certainly help orient the broader cultural conception of what is good and right. For example, Enlightenment and liberal values permeating the cultural milieu have engendered in people notions of individual dignity and respect, along with the intuition that freedom and equality are morally good. These ideological “superstructures” (to borrow a Marxist term) shape and mold the way we make decisions.

Yet, I would take a view similar to David Hume, that our moral choices are a “slave of the passions.” The Stanford Encyclopedia says:

Hume’s position in ethics, which is based on his empiricist theory of the mind, is best known for asserting four theses: (1) Reason alone cannot be a motive to the will, but rather is the “slave of the passions” (see Section 3). (2) Moral distinctions are not derived from reason (see Section 4). (3) Moral distinctions are derived from the moral sentiments: feelings of approval (esteem, praise) and disapproval (blame) felt by spectators who contemplate a character trait or action (see Section 7). (4) While some virtues and vices are natural (see Section 13), others, including justice, are artificial (see Section 9).

Our ethics are then based more on values than on some abstruse philosophical argument. Even deeper than the particular values a culture has is our evolutionarily instilled social cohesion. Marc Hauser contends in Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong that we have an innate “moral organ,” similar to the “language acquisition device” of Noam Chomsky’s theory of the innateness of language, which gives all humans a universal template for morality. The template is abstract, in the way that syntax and grammar rules are with language, but it allows us to take in the moral signs and signals from our culture and upbringing and formulate them into a concrete moral code. In his words: “[W]e are born with abstract rules or principles, with nature entering the picture to set parameters and guide us toward acquisition of particular moral systems” and “Once we have acquired our culture’s specific moral norms – a process that is more like growing a limb than sitting in Sunday school and learning about virtues and vices – we judge whether actions are permissible, obligatory, or forbidden, without conscious reasoning and without explicit access to the underlying principles.” This, Hauser argues, is why so many moral intuitions arise across all different cultures.

Patricia S. Churchland points out in Braintrust: What Neuroscience Tells Us about Morality that the universality of moral intuitions is consistent with Hauser’s thesis of an innate “moral grammar” in our genes/brain, but does not imply it. This is because (1) many moral intuitions are extremely contextual (drinking from a bedpan is considered disgusting while sitting in the comfort of our own home, but it wouldn’t seem like it if we were lost in a desert and dehydrated) and (2) many of our moral behaviors may simply be the best answer to common problems faced by all human cultures, similar to using wood to build boats (wood floats, is relatively common, and is modestly easy to work with, so it makes sense that many cultures would converge on using wood for making boats, but that doesn’t mean we have a “making-boats-from-wood grammar” that is innate in our genes/brain). Likewise, in the realm of morality, truth-telling as a moral intuition is the simplest solution to the problem that trusting and being trustworthy leads to greater chances of survival (and yet even this is contextual, since it is easy to think of counterexamples when lying is more prudent than telling the truth, such as deceiving enemies or even just telling white lies for the sake of maintaining social equanimity).

Churchland instead argues that morality stems from offspring and kin bonding being expanded to include more people (from family to friends to acquaintances to strangers). To oversimplify her thesis: we feel good when offspring/kin feel good and feel bad when they’re distressed; being in larger social groups has caused this sentiment to expand beyond offspring/kin by having the existing brain structures reworked to include more people.

Others, such as Robert L. Trivers, argue that altruism arises through what is called reciprocal altruism. This is essentially the notion that the reason I would help you, at some cost to myself, is that I could then count on you to help me (at some cost to yourself) should I be in need of aid in the future. This is observed, for instance, in reciprocal food sharing in vampire bats. Or, to use Trivers’ example, between a host and genetically unrelated cleaners, as with fish: the host fish will not simply eat the fish cleaning its mouth, even though it would amount to a free meal, and is even known to allow the cleaner fish to leave its mouth before fleeing from predators.

This conception of altruism is popular among evolutionary theorists for many reasons, one of which is that it explains how free-riders can be weeded out. Before the 1960s it was in vogue to talk about group selection in evolution. In the 1960s and 1970s, new theories of gene selection became popular and overtook group selection (with Richard Dawkins’ The Selfish Gene being something of a final nail in the coffin). One of the reasons group selection doesn’t work very well is that if genes arose that made people concern themselves with group (or even species) survival, it would be easy for free riders to take advantage of their altruism and thereby out-compete them.

It’s interesting, too, that humans are often very good at detecting when someone is cheating, free-riding, or not contributing their fair share. Not only that, as Churchland discusses in Braintrust, people are often willing to punish free-riders, even at a cost to themselves. It seems that reciprocal altruism must have co-evolved with strong free-rider detection so as to better distinguish between those whom we should help, knowing they will be willing to help us in the future should we need it, and those whom we should not help, since they are likely to take advantage of our good will. Indeed, it may even be the case that the rapid growth in human intelligence emerged as (or at least was driven in part by) an evolutionary arms race between new cheating/free-riding strategies and new mechanisms for detecting cheaters/free-riders (e.g., the evolution of guilt might have occurred in part as a way to signal to other would-be altruists that we, too, are altruists worthy of their generosity, since we are people who feel bad about not contributing our fair share) in a sort of intra-species Red Queen Hypothesis.
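The basic logic here can be sketched with a toy payoff simulation (my own construction for illustration, not a model from Trivers or Churchland): reciprocators help anyone not yet known to cheat, a free-rider accepts help but never gives it, and refusing to help exposes the free-rider to everyone.

```python
# Toy sketch of reciprocal altruism with free-rider detection.
# Helping costs the helper 1 but is worth 3 to the recipient, so
# mutual helpers come out ahead while exposed cheats are cut off.

def simulate(rounds=10, benefit=3, cost=1):
    agents = ["reciprocator", "reciprocator", "free-rider"]
    scores = [0.0] * len(agents)
    known_free_riders = set()
    for _ in range(rounds):
        # every ordered pair (i, j): agent i has a chance to help agent j
        for i, giver in enumerate(agents):
            for j in range(len(agents)):
                if i == j:
                    continue
                helps = giver == "reciprocator" and j not in known_free_riders
                if helps:
                    scores[i] -= cost      # helping is costly to the giver
                    scores[j] += benefit   # but worth more to the recipient
                elif giver == "free-rider":
                    known_free_riders.add(i)  # refusal exposes the cheat
        # (detection here is instant and public; in reality it is noisy
        # and co-evolves with ever-better cheating strategies)
    return scores

scores = simulate()
# After round 1 the free-rider is detected and cut off, so over many
# rounds the reciprocators' mutual aid (net +2 each per round)
# outweighs their one-time loss to the cheat.
assert scores[0] == scores[1] > scores[2]
```

The design choice mirrors the point in the text: without the detection step (drop the `known_free_riders` check), the free-rider out-scores the altruists every round, which is exactly the free-rider problem that undermines naive group selection.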

Trivers’ reciprocal altruism is not inconsistent with Churchland’s thesis and could even be a potential mechanism for why the range of others cared for would expand from offspring/kin to friends and even to strangers. But Churchland’s thesis also requires that empathy be the basis of morality (we feel good when others feel good and we feel bad when others feel bad). She discusses the ways in which the hormones oxytocin and vasopressin are highly involved in bonding and trust, with the effects differing by sex but broadly increasing in-group bonding and trust while potentially generating negative feelings toward out-group people.

This is why, for instance, Paul Bloom argues in Against Empathy: The Case for Rational Compassion that we shouldn’t depend on empathy for making moral decisions. Empathy can exacerbate things like racism and xenophobia as we cling to in-group well-being (and can resort to punishing or harming the out-group, even at a cost to ourselves). Furthermore, empathy can be easily hijacked: a single vivid anecdote about someone’s suffering is a lot more likely to raise our empathetic sentiments than numbers and figures (the so-called identifiable victim effect). In other words, a story about the suffering of a single person can move people to donate time and money to that one person, yet hearing that hundreds of people are suffering and dying elsewhere doesn’t have the same emotional impact and is therefore less likely to move us to action (even though our efforts may be better spent addressing larger issues).

What both of these views of morality have in common is a focus on what Jonathan Haidt calls care and fairness/equality – we want to help those in need (empathy) but we also want to do it in a way that is equitable (equality) or at least proportionate (fairness) to some criteria (e.g., merit). These are just two of the five Moral Foundations, the other three being loyalty, authority, and sanctity/purity (with liberty being a potential sixth). They are described as follows:

Moral Foundations Theory was created by a group of social and cultural psychologists (see us here) to understand why morality varies so much across cultures yet still shows so many similarities and recurrent themes. In brief, the theory proposes that several innate and universally available psychological systems are the foundations of “intuitive ethics.” Each culture then constructs virtues, narratives, and institutions on top of these foundations, thereby creating the unique moralities we see around the world, and conflicting within nations too. The five foundations for which we think the evidence is best are:

1) Care/harm: This foundation is related to our long evolution as mammals with attachment systems and an ability to feel (and dislike) the pain of others. It underlies virtues of kindness, gentleness, and nurturance.

2) Fairness/cheating: This foundation is related to the evolutionary process of reciprocal altruism. It generates ideas of justice, rights, and autonomy. [Note: In our original conception, Fairness included concerns about equality, which are more strongly endorsed by political liberals. However, as we reformulated the theory in 2011 based on new data, we emphasize proportionality, which is endorsed by everyone, but is more strongly endorsed by conservatives]

3) Loyalty/betrayal: This foundation is related to our long history as tribal creatures able to form shifting coalitions. It underlies virtues of patriotism and self-sacrifice for the group. It is active anytime people feel that it’s “one for all, and all for one.”

4) Authority/subversion: This foundation was shaped by our long primate history of hierarchical social interactions. It underlies virtues of leadership and followership, including deference to legitimate authority and respect for traditions.

5) Sanctity/degradation: This foundation was shaped by the psychology of disgust and contamination. It underlies religious notions of striving to live in an elevated, less carnal, more noble way. It underlies the widespread idea that the body is a temple which can be desecrated by immoral activities and contaminants (an idea not unique to religious traditions).

We think there are several other very good candidates for “foundationhood,” especially:

6) Liberty/oppression: This foundation is about the feelings of reactance and resentment people feel toward those who dominate them and restrict their liberty. Its intuitions are often in tension with those of the authority foundation. The hatred of bullies and dominators motivates people to come together, in solidarity, to oppose or take down the oppressor. We report some preliminary work on this potential foundation in this paper, on the psychology of libertarianism and liberty.

The moral foundations of care and equality tend to be more important to left-leaning people while loyalty, authority, and sanctity are more important to right-leaning people:

[Graph: endorsement of the five moral foundations across the political spectrum, from liberal to conservative]
“Liberal” in this graph is meant in the U.S. sense of left-leaning. (Source)

Liberalism in the wider sense (not the left-leaning sense used in the U.S.) has tended to emphasize care and fairness/equality over the other three. Indeed, authority is one of the main things to which liberalism was a reaction, and its more secular lean has tended to downplay sanctity/purity.

I think the Moral Foundations project is useful because it gets at our moral intuitions. Churchland, however, is skeptical of Haidt’s project, particularly his claim that these moral foundations are innate and selected for by evolution; she says this claim has no support from any known evidence in molecular biology, neuroscience, or evolutionary biology, and that his just-so story about how purity/sanctity became bound to religion has steep competition from other just-so stories about where religion comes from. Others have criticized the theory as perhaps getting cause and effect backwards: it may be that one’s political affiliation determines one’s endorsement of the five moral foundations. The point is, it’s not settled science, but you can see the list of publications on Moral Foundations Theory here if you are interested in exploring it further.

As I said above, I take a more Humean view of morality in that our moral thinking is driven by emotions and sentiments more than by reason and logic (the latter being a “slave of the passions,” as Hume says). Whether we take the Churchland, the Hauser, or the Haidt theory of where moral intuitions originate, a theory of moral attitudes that takes emotion into account does a better job of getting at the way people actually make moral decisions than any philosophical debate between consequentialism and deontology.

Indeed, I would argue that our moral intuitions tend to contain a mixture of consequentialism and deontology. If we use the Moral Foundations project as a guide, even if we don’t accept it as settled, I think it can still be helpful in orienting ourselves within the discussion. With that in mind, I would put care and fairness/equality (the left-ish moral foundations) further in the consequentialist camp, while authority, loyalty, and sanctity (the right-ish moral foundations) are more deontological, although in both cases there is crossover. For instance, that all humans are equal is a more deontological proposition. One could also make consequentialist arguments about, for instance, why authority and loyalty lead to better outcomes than their opposites by imposing order and trust on a society. The point being, the consequentialist/deontological dichotomy is interesting and useful when having philosophical discussions about ethics and morality of a more normative/applied variety, but in our everyday lives the distinction is blurry and doesn’t factor into our actual moral decision making.

Morality and Justice

The ancient Greek philosopher Plato was very concerned with what he called justice. In his work The Republic he relates a conversation between Glaucon and Socrates in which Glaucon contends that if someone were invisible, such as by using the Ring of Gyges, that person would do whatever they wanted regardless of whether it was a good (or Just) thing to do:

Suppose now that there were two such magic rings, and the just put on one of them and the unjust the other; no man can be imagined to be of such an iron nature that he would stand fast in justice. No man would keep his hands off what was not his own when he could safely take what he liked out of the market, or go into houses and lie with any one at his pleasure, or kill or release from prison whom he would, and in all respects be like a god among men.

Then the actions of the just would be as the actions of the unjust; they would both come at last to the same point. And this we may truly affirm to be a great proof that a man is just, not willingly or because he thinks that justice is any good to him individually, but of necessity, for wherever any one thinks that he can safely be unjust, there he is unjust.

For all men believe in their hearts that injustice is far more profitable to the individual than justice, and he who argues as I have been supposing, will say that they are right. If you could imagine any one obtaining this power of becoming invisible, and never doing any wrong or touching what was another’s, he would be thought by the lookers-on to be a most wretched idiot, although they would praise him to one another’s faces, and keep up appearances with one another from a fear that they too might suffer injustice.

— Plato, Republic, 360b–d (Jowett trans.)

This gets at the notion that people only act morally if they think there is a chance they could be caught committing a moral transgression.

Think of it this way: would you rather be the most truthful person in the world, but have everyone think you only tell lies? Or would you rather be the biggest liar in the world, but have everyone believe you are a paragon of truth? Plato would say that a person ought to desire being the former rather than the latter, since what matters is that you are in fact truthful, not that everyone thinks you are truthful. Glaucon would argue that all people would choose the latter.

Jonathan Haidt, in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion, says that studies indicate that every person is willing to lie and cheat a little bit some of the time, if they think they can get away with it. Indeed, people can lie or cheat a little bit, up to a threshold that will be different for different people, without altering their own conception of themselves as a moral person. Furthermore, Haidt discusses how even people who self-report that they don’t care what others think, or that they aren’t affected by others’ opinions of them, are shown in studies to care just as much as those who admit that they are concerned with how others see them. Haidt argues that “self-esteem” is essentially our internal gauge of how we think others will think of us (i.e., if you had never met another person in your life, you wouldn’t really have any notion of “self-esteem” because it comes from our theory of mind about what others think of us).

The point being, we humans care a lot more than many would like to admit about what other people think of us, and our moral intuitions are significantly determined by how we think our behavior will be seen and interpreted by others. In other words, Glaucon is right: if it were possible for someone to get away with anything, they likely wouldn’t feel compelled to inhibit some of their worst behaviors (though the level of depravity to which a person is willing to descend will likely vary depending on who the person is). This is why, for instance, absolute power corrupts absolutely: someone with absolute power is someone who can get away with anything.

The Moral Continuum

Another distinction to be made is between notions of etiquette, norms, ethics, and morality. I’ve listed these in what I would say is order of increasing importance, such that transgressions against them range from a faux pas (violation of etiquette) to a taboo (violation of norms) to an infraction that requires formal disciplinary action (violation of ethics, for example mishandling of funds at a business) to an act deemed reprehensible and usually criminal (violation of morality). There appears to be a continuum between these types of social rules. Indeed, Churchland says in Braintrust that the same areas of the brain are active when a person witnesses or thinks about violating a norm as when a person witnesses or thinks about committing a moral transgression.

To be more explicit, I would say etiquette has to do with things like manners and customs that are expected of a person in particular instances. For instance, it’s expected that at an office job we dress a certain way and talk to co-workers in a certain way; these things will be different for a construction worker. We would also see it as a violation of etiquette for a person to show up in sweatpants and a stained t-shirt at a funeral or wedding, or to chew with their mouth open at a dinner party, or to fart in a crowded elevator, or to show up uninvited to a party, and so on. The point is, people will find it annoying and distasteful to violate etiquette, but our own good manners might prevent us from even calling it out.

Norms are a step up insofar as violations of norms will be seen as worse than violations of etiquette, yet will often not reach the level of requiring formal disciplinary action. Norms tend to be much more based in a particular culture (e.g., that women aren’t “allowed” to expose their nipples in public but men are (even though in many situations doing so might be a violation of etiquette for men)) than something more universal (e.g., that murder is wrong). Violation of norms can rise to the level of invoking formal discipline (e.g., think of victimless crimes, like drug use or prostitution), but don’t necessarily have to (e.g., in the U.S. we don’t regularly eat horse or dog meat, but as far as I know there aren’t any laws against it).

Norms can often have a moral valence, even if we wouldn’t know how to explain it in words. For instance, if two siblings had sex, the male using a condom and the female being on birth control so that the chances of conceiving an inbred child were for all intents and purposes zero, a lot of people would think of this as immoral even if they couldn’t articulate why. But, since something like this only potentially rises to the level of a victimless crime, I would still put it under norms rather than ethics or morality. This, of course, reflects the fact that I tend to rate sanctity/purity quite low as a Moral Foundation (i.e., someone who rates sanctity/purity higher might see such an act as violating some other sense of morality than just whether it is causing any harm).

Ethics I’ve put as a separate thing from morality, even though the two are often used interchangeably, because here I will be using it in a more formal sense. Ethics, in the sense I’m using it here, is more like a formal code of conduct regulating things we know to be wrong but may not have the same visceral reaction towards that we would for moral transgressions. For instance, murder and rape are immoral while embezzlement and insider trading are unethical. The distinction is subtle, but I think it’s intuitive: there is something more monstrous and disgusting about murder, torture, rape, pedophilia, cannibalism, and so on than there is about someone who steals trade secrets or embezzles money. Robbing or mugging an individual probably falls more into a moral transgression, but pilfering money from one’s employer doesn’t have the same visceral feeling of disgust, even though we intuitively know that both of them are wrong. So, perhaps the distinction has to do with whether someone is causing direct harm against someone else (the moral) or whether the harm is more abstract or indirect (the ethical). The former case induces greater feelings of disgust and revulsion and so tends to be viewed as worse (perhaps due to our propensity for empathy, as opposed to Paul Bloom’s rational compassion?).

This typology (etiquette, norms, ethics, morality) is messy and imperfect, but in a way I think that sort of proves the point that Churchland argues: that these things are on a continuum rather than divided into separate discrete categories. But attempting to make these distinctions explicit can at least aid in clarifying our thinking on the subject.

Although morals are said to be normative and prescriptive, most of us will follow moral intuitions rather than some formal code of conduct. We know that rape and murder are wrong not because there are laws against them; we have laws against them because we know they are wrong. Where moral philosophy has helped our intuitions progress is in (gradually, throughout history) expanding the horizon of to whom our moral intuitions apply and shifting the Overton window of what counts as morality. We now recognize a universal humanity that extends to everyone, which has afforded us great strides in the rights of women, racial and sexual minorities, and different cultures.

Free Will and Morality

Another issue facing ethics/morality is that of free will and doxastic voluntarism. Our moral (and legal) judgements of people are predicated on the idea that a person could have done otherwise than transgress. If a natural disaster or some machine results in the injury, death, or loss of property for a person, we don’t tend to see that through a moral lens, because those things don’t have free will and could not have chosen to do otherwise. But if a person injures, kills, or robs another person, we see that as a moral transgression under the assumption that the perpetrator could have chosen not to commit those crimes. Yet there is evidence that humans don’t have nearly as much control over their actions (if any control at all) as these judgements assume.

Doxastic voluntarism, on the other hand, is the notion that people have some ability to choose their own beliefs. If people can choose what they believe, then they can be held responsible for choosing to believe immoral propositions. But if people cannot choose what beliefs they accept, then it becomes more difficult to hold them morally accountable (legality tends to be easier, but we’re talking about morality here). This gets into the sticky issue of moral relativism, which is a whole topic unto itself. For our purposes here, we are simply saying that if people do not have control over what they actually believe, then when they adhere to a belief that we find morally reprehensible, it is difficult to justify our judgement of them. For instance, if you are a conservative and you are confronted with a liberal (in the left-leaning sense) who thinks that gay marriage should be permitted (or even celebrated), you can justify disagreeing with them and thinking they are wrong, but it is more difficult to justify calling their belief morally reprehensible if the person lacks the free will to voluntarily change it.

Moral Duties and Moral Prohibitions

A further distinction could be made between moral prohibitions and moral duties. The former covers the “thou shalt not” issues: we are morally prohibited from murdering, raping, stealing, lying, causing harm and so on. Most people can usually agree on those kinds of things. Of course, there are also prohibitions that some will see as moral while others might simply see as taboo or even merely uncouth, such as swearing or homosexuality or getting tattoos or committing adultery and so on.

Moral duties, on the other hand, are probably going to be more divisive. For instance, some see it as a moral duty to heavily tax the rich (and redistribute to the poor) while others view this as government sanctioned theft (and would therefore say it is morally prohibited). Other contentious moral duties would be things like businesses having a duty to serve gay customers (e.g., with things like cakes and websites) even against their religious beliefs, or to pay reparations to historically marginalized minority groups, or to offer free healthcare or education, and so on.

But even on a personal day-to-day level, people see it as a moral duty to be productive (and not to leech or free-ride off other people’s work and effort), or a moral duty to care for their children (or for their elderly relatives), and so on. We tend to find that the law is often more concerned with moral prohibitions than with moral duties, which makes sense since it is the former that most people can come to some agreement on while the latter can be much more contentious when it comes to law and government policy.

The Is/Ought Dichotomy

Sometimes called Hume’s Guillotine, this dichotomy is often traced back to Book III, Part I, Section I of David Hume’s A Treatise of Human Nature:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, it’s necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

More recently it was made even more explicit by G.E. Moore in what is called the Open-Question Argument, which essentially says that the concepts of “happiness” or “pleasure” are not analytically equivalent to “good” and so we cannot derive “good” from the meaning or definition of “happy” such as in the proposition “happiness is good.” More discursively, it can be put like this (from the Wikipedia article):

The open-question argument is a philosophical argument put forward by British philosopher G. E. Moore in §13 of Principia Ethica (1903), to refute the equating of the property of goodness with some non-moral property, X, whether natural (e.g. pleasure) or supernatural (e.g. God’s command). That is, Moore’s argument attempts to show that no moral property is identical to a natural property. The argument takes the form of a syllogism modus tollens:

Premise 1: If X is (analytically equivalent to) good, then the question “Is it true that X is good?” is meaningless.
Premise 2: The question “Is it true that X is good?” is not meaningless (i.e. it is an open question).
Conclusion: X is not (analytically equivalent to) good.

The type of question Moore refers to in this argument is an identity question, “Is it true that X is Y?” Such a question is an open question if a conceptually competent speaker can question this; otherwise it is closed. For example, “I know he is a vegan, but does he eat meat?” would be a closed question. However, “I know that it is pleasurable, but is it good?” is an open question; the answer cannot be derived from the meaning of the terms alone.

Where we can substitute “happiness” or “pleasure” in for X.
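To make the logical form explicit, we can write $P$ for “X is analytically equivalent to good” and $Q$ for “the question ‘Is it true that X is good?’ is meaningless.” Moore’s argument is then a standard modus tollens:

```latex
\begin{align*}
&\text{Premise 1:} && P \rightarrow Q \\
&\text{Premise 2:} && \neg Q \\
&\text{Conclusion:} && \neg P \quad \text{(modus tollens)}
\end{align*}
```

Since modus tollens is formally valid, anyone who rejects the conclusion has to reject one of the premises, which is why critiques of the argument typically target premise 1.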

Ultimately, what all this boils down to is that one cannot derive any rule, guideline, law, or code of conduct purely from natural facts about the environment, or even about humans. This served to codify the asymmetric relationship between science and morality: the now-conventional wisdom dictates that science cannot tell us how we ought to live our lives or what values we ought to have, but morality can prohibit or mandate certain scientific practices and determine which questions science is allowed to investigate.

Patricia S. Churchland in Braintrust: What Neuroscience Tells Us about Morality addresses G.E. Moore’s argument:

Having created a mystical moat around moral behavior, Moore cheerfully expanded on what is fallacious in naturalism: if the property of being pleasurable were identical to the property of being good, then the meaning of “happiness” and “good” would be the same. It would be like saying that being a bachelor is the same as being an unmarried man. But if that were true, he said, then the statement “Happiness is good” would be equivalent to “Happiness is happiness” and it would be entirely uninformative. But saying happiness is good is informative, and is not trivial, says Moore. The fallacy, he concluded, could not be more evident. This, Moore figured, meant that any attempt to identify natural properties with what is valuable or good is wrecked on the shoals of the Naturalistic Fallacy.

Moore’s arguments, when examined closely, are strange. For example, his claim that to say that A is B requires synonymy of the terms A and B is utterly contrived. It clearly does not hold when the A and B are scientific terms. To see this, consider these scientifically demonstrated identifications: light (A) is electromagnetic radiation (B), or temperature (A) is mean molecular kinetic energy (B). Here, the A and B terms are not synonymous, but the property measured one way was found to be the same as the property measured another way. The claims are factual claims, ones in which a factual discovery is made. Consider a more everyday sort of case: Suppose I discover that my neighbor Bill Smith (A) is in fact the head of the CIA (B): are the expressions “my neighbor Bill Smith” and “the head of the CIA” synonymous? Of course not.

The upshot of all this is that if identifications in general do not require synonymy of terms, whyever should they in the domain of morality? And if they do not, then the wheels fall off Moore’s argument.

Had Moore merely pointed out that the relation between our nature and what is good is complex, not simple, he would have been on firmer ground. Analogously, the relation between our nature and health is complex. As with morals and values, no simple formula will suffice. Because one cannot simply equate health with, for example, low blood pressure or getting enough sleep, a Moore-on-health might argue that health is a non-natural property – unanalyzable and metaphysically autonomous. Using science to help figure out what we ought to do to be healthy will, on this Moorean view, be unrewarding, since that is an “ought” project – a normative, not factual project.

Sam Harris, in his book The Moral Landscape, has attempted to argue against this dichotomy by saying that science can inform us about human well-being. For instance, if robbery causes psychological and physical suffering (a fact), then we ought not rob people. Or, more discursively:

  • Premise: being robbed reduces the victim’s well-being in the following ways:
    • being robbed increases cortisol levels and thereby increases stress
    • being robbed stimulates parts of the brain responsible for fear and feelings of loss that can persist long after the event 
    • being robbed can often cause physical pain
  • Conclusion: therefore, people ought not commit robbery

Others have pointed out that this argument sneaks in an unstated premise that is not strictly factual, namely “one ought not do things that reduce well-being,” which would make the syllogism look like this:

  • Premise 1: being robbed reduces the victim’s well-being in the following ways:
    • being robbed increases cortisol levels and thereby increases stress
    • being robbed stimulates parts of the brain responsible for fear, shame, and feelings of loss that can persist long after the event 
    • being robbed can often cause physical pain
  • Premise 2: one ought not do things that reduce well-being
  • Conclusion: therefore, people ought not commit robbery

But premise 2 begs the question: what we are trying to prove is precisely that one ought not do things that reduce well-being, so using it as a premise makes the argument circular.

To fix this, we might insert a premise containing the conditional “if one wants to maximize well-being, then one ought not commit robbery,” so that the syllogism looks like this:

  • Premise 1: being robbed reduces the victim’s well-being in the following ways:
    • being robbed increases cortisol levels and thereby increases stress
    • being robbed stimulates parts of the brain responsible for fear, shame, and feelings of loss that can persist long after the event 
    • being robbed can often cause physical pain
  • Premise 2: if one wants to maximize well-being, then one ought not commit robbery
  • Premise 3: one wants to maximize well-being [in this situation]
  • Conclusion: therefore, people ought not commit robbery (modus ponens)
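In propositional notation, writing $W$ for “one wants to maximize well-being [in this situation]” and $R$ for “one ought not commit robbery,” the repaired syllogism reduces to modus ponens, with premise 1 serving only as empirical support for the conditional in premise 2:

```latex
\begin{align*}
&\text{Premise 2:} && W \rightarrow R \\
&\text{Premise 3:} && W \\
&\text{Conclusion:} && R \quad \text{(modus ponens)}
\end{align*}
```

The inference itself is valid; the philosophical work is all being done by the two premises, neither of which is a purely factual is statement.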

The conditional in premise 2 is not a discoverable fact and doesn’t offer new information; it is not an is statement. Thus, the above syllogism is not deriving an ought from an is, because it depends on this conditional. Furthermore, premise 3 is tied to particular situations and is therefore contingent (it didn’t have to be the case that one wants to maximize well-being, and there are certainly cases in which this isn’t true, such as not wanting to maximize the well-being of one’s enemies during a war, or the well-being of a convicted criminal).

A critic might also argue that premise 3 begs the question. Although I have tried to phrase it like a fact (that it is the case that one wants to maximize well-being), premise 3 is, as I said, a value judgement, and what we are trying to do is derive the value judgements that a person ought to employ when deciding how to act. As such, premise 3 would require its own justification, which wouldn’t be possible using only is statements.

These criticisms could apply to Churchland’s critique as well. If we look at her analogy with health, we are still either sneaking in a premise such as “one ought to do things that increase one’s health,” thereby making the argument circular, or sneaking in the conditional and situational premises, thereby making it not an example of deriving an ought from an is. This applies to morality as well.

This is not to say that science is completely divorced from morality. It is certainly the case that most people, in most situations, do desire the well-being of others and will therefore take actions (or neglect to take harmful actions) that increase well-being, to the best of their abilities. And science can inform us about the physiology, psychology, and sociology of well-being, thereby giving us a better sense of what it even means to say that we are oriented toward the maximization of well-being. For instance, it should be pretty clear that treating women as second-class citizens reduces overall well-being because it causes psychological, emotional, and physical distress.

What we should take away from this dichotomy is not that science has nothing to say on matters of morality, but that we need to be cautious about committing naturalistic fallacies. For instance, the fact that women, on average, are not as physically strong as men does not mean that women are lesser people than men. Attempting to draw such conclusions is where the is/ought dichotomy should be observed, because the syllogism would essentially be:

  • Premise 1: the demographic of men are on average physically stronger than the demographic of women
  • Premise 2: if society wants the average physical strengths of the different sex demographics to determine the moral value of that demographic, then society ought to treat the demographic with greater physical strength on average better than the demographic with lesser physical strength on average
  • Premise 3: society wants the average physical strengths of the different sex demographics to determine the moral value of that demographic
  • Conclusion: therefore, society ought to treat the demographic of men better than the demographic of women

Just like above, the conditional in premise 2 is not a fact to be discovered and is therefore not an is statement. Similarly, premise 3 is a value judgement, not an is statement. Indeed, most people (at least in western society) would find premises 2 and 3 reprehensible (and in my mind they would be right to do so), though there are many cultures that would likely agree with them. We can use science and its many “is” statements to determine that the above syllogism leads to reduced well-being – obviously for the women who are being mistreated, but also for society at large, which would be deprived of all the contributions women can make to the overall well-being of everyone. Yet nothing in science can tell us that well-being is what we ought to value.

The point, though, is this: the above is a valid syllogism and a legitimate statement with moral valence, even though its premises are not all is statements, and we also know from science (and, you know, just asking women what they want) that abiding by it reduces well-being. So we need to recognize when naturalistic fallacies are being employed, but also not ignore the is statements (the facts).

Concluding Remarks

This post is just a short introduction to the philosophical school of ethics. Needless to say, much ink has been spilled on the subject. A person could spend decades studying it and still not know all the different nuances and positions people have written and talked about.

Some may have noticed that in a post about morality I never brought up God, religion, objective moral ontology, or moral realism. These are obviously important aspects of morality, but the former two (God and religion) tend to have their own problems (the Euthyphro dilemma, the problem of evil, God’s omniscience and determinism, etc.), which would take us down an avenue that is a topic unto itself. The latter two (objective moral ontology and moral realism) are also a larger topic and tend to get out of the realm of the purely ethical, taking us down a metaphysical, ontological, and epistemological rabbit hole that falls outside the purview of this post.

Another issue I didn’t bring up, but which has been coming up in the news lately, is presentism, which Wikipedia defines as “…the anachronistic introduction of present-day ideas and perspectives into depictions or interpretations of the past.” In other words, judging the past by our own modern moral standards. This is an interesting topic, and one I’ll probably dedicate a post to at some point, but here I just thought it deserved a (dis)honorable mention.

My primary thesis in this post is that morality is something far more intuitive than a doctrine a person learns and then applies. Indeed, in 2013/2014 Eric Schwitzgebel found that ethicists are no more moral than average people, and a 2019 paper further confirms this, indicating that knowledge of ethical philosophy doesn’t do much to make someone more moral. And so, the answer to the question “what is morality?” is that it is an inner intuition about right and wrong instilled in us by evolution and by our particular culture. Ethical philosophy is still a worthwhile endeavor, certainly for more formal codes of conduct, but also as a way of gradually reorienting cultural conceptions of morality in hopes that future generations will become just a little more moral than we are.