Changing Our Minds: How Does It Happen and When Should We Do It?


It’s almost proverbial that it is difficult to win an argument, at least if we take successfully changing the opponent’s mind as the condition for victory. Most arguments end with all parties frustrated that their opponent is incapable of agreeing with them. Worse, both parties often walk away even more convinced of the beliefs they held when the argument began.

When it comes to changing our minds about some issue, the is/ought dichotomy once again comes into play. The ‘is’ question asks: what conditions actually obtain when a given person changes their mind? The ‘ought’ question asks: what conditions ought to obtain for a given person to change their mind?

The Is Question

I argue that it is difficult for a person to change their mind for three reasons: 1) our cognitive limitations, 2) our group identities, and 3) epistemic inaccessibility.

For #1: our brains did not evolve to be truth-seeking, but to be survival-seeking. Changing our minds is cognitively, emotionally, and energetically taxing. The human brain is a predictive processor, and prediction accuracy is contingent on homeostatic conditions. In other words, our brains are wired to prefer things to be consistent, reliable, and predictable. Changing our minds means we have to alter the mental model of the world upon which our predictions are made.

Piaget theorized that, when confronted with new information, a person either assimilates or accommodates the information. The former is when we can fit the new information into our existing cognitive schemas – taking something new and fitting it into our pre-established concepts. The latter is when we change our schema to fit the new information, such as engineering new concepts or modifying existing concepts by adding new differentia, removing old differentia, or combining/adjusting existing differentia.

A third option, of course, is simply denying the new information, as a result of what Jennifer Foster calls doxastic anxiety. We actively avoid thinking about something because it makes us uncomfortable, and it makes us uncomfortable because it threatens to upend our working mental model of the world, which would be mentally, emotionally, and energetically taxing.

For #2: the things we believe are important to our personal identities, but also to our social identities. I’ve known, for instance, religious people who have lost their faith and find the loss difficult to accept because their social identity is tied up in their religious beliefs. The same thing occurs when someone comes out of the closet as gay or transgender: it is difficult for the one coming out to have to change the way those around them see and conceptualize them, and it is difficult for those the person comes out to, because they now have to reconceptualize their relationship to the person.

This phenomenon can be observed in its complement as well: virtue signaling. When a person makes a tweet or Facebook post stating their position on some issue, they are doing it as much for (if not more than for) those who already agree with them as for those who may disagree. Virtue signaling could perhaps be called ‘social identity reaffirmation’ because its purpose might be not so much to show that “I’m one of the good people” as to say “my identity fits the mold of these propositions.” The consequence, obviously, is that if one were to change their mind on the issues at hand, they would incur a substantial social penalty.

The reason for this has to do with #1 above: once people have conceptualized person S as being A, and person S then rejects A, those around them must go through the trouble of reconceptualizing that person. To simplify matters, it is not uncommon for those around person S to conclude that person S was never actually an A in the first place. Not only does this make reconceptualizing person S easier (S’s past actions, which appeared genuinely A, can be retrodicted into the schema of S never having been A), it also makes it easier for someone to maintain their own belief in A. When S goes from accepting A to rejecting A, this acts as a threat to the belief in A for R, the other adherents of A. The motivated reasoning goes: if S rejects A, then there must be some compelling reason not to believe in A; however, R believe in A, which in itself acts as a justificatory condition for believing in A, so S must never have really believed in A in the first place; therefore, R are justified in continuing to believe in A.

For #3: epistemic inaccessibility is when some kind of knowledge is either difficult or impossible for a person to obtain. This ranges everywhere from Thomas Nagel’s famous “What Is It Like to Be a Bat?” to a person who has never eaten fish attempting to conceptualize what fish tastes like. In between there is what Miranda Fricker called hermeneutical injustice (a subcategory of epistemic injustice). For instance, can a man ever know what it feels like to be a woman being sexually harassed? Can a white person ever know what it is like to be a black person being racially profiled? Can a cisgender person ever know what it feels like to be transgender? If these things are epistemically inaccessible to someone, then there can be no concrete knowledge of the sort that could be either assimilated or accommodated. The beliefs of people who cannot access these experiences are only ever going to be based on knowledge acquired through testimony from those who do have access. This raises the question of what criteria a person who lacks access to certain types of experience can base their beliefs about those experiences on. Those criteria would have to be other beliefs the person holds: person S1 believes in A and therefore constructs the particular belief X about some experience they cannot access, while person S2 believes in B and therefore constructs the particular belief Y about that same experience. Thus, to change the mind of S2, one would have to actually change their mind about B and not about Y directly.

The question, then, is what conditions must obtain for someone to go through the cognitive gauntlet of actually changing their mind. I would imagine that it differs from person to person. If persons S1, S2, and S3 all believe A rather than not-A, then a defeater of A for S1 may not be a defeater for S2 or S3. Meanwhile, there may exist a defeater for S2, even if S2 has not yet been exposed to it, while for S3 there is no defeater of A that would cause S3 to believe not-A. On account of beliefs other than A but related to (surrounding) A, it may be that one defeater of A (call it D1) defeats A for S1 but not S2, that another (call it D2) defeats A for S2 but not S1, and that neither D1 nor D2 defeats A for S3.
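To make the structure of that scenario explicit, we might notate it roughly as follows. The shorthand is my own, not a standard formalism: D(d, A, S) abbreviates “d defeats S’s justification for believing A.”

```latex
% Rough shorthand (mine, not a standard notation); requires amsmath.
% D(d, A, S) reads: "d defeats S's justification for believing A".
\begin{align*}
  &D(D_1, A, S_1) \land \neg D(D_1, A, S_2) && \text{$D_1$ moves $S_1$ but not $S_2$} \\
  &D(D_2, A, S_2) \land \neg D(D_2, A, S_1) && \text{$D_2$ moves $S_2$ but not $S_1$} \\
  &\neg \exists d\, D(d, A, S_3)            && \text{nothing would move $S_3$ to not-$A$}
\end{align*}
```

The point of writing it this way is just to show that defeat is a three-place relation: whether something counts as a defeater depends not only on A but on the particular believer’s surrounding beliefs.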

Humans reason more like lawyers than scientists. We rarely, if ever, withhold judgement while accumulating evidence, settling on a belief only once some evidentiary threshold has been met. Instead, we use what is called motivated reasoning: we formulate beliefs and then seek evidence to support them (and that’s only if the pre-formulated belief is ever challenged; I would wager that all of us hold unchallenged beliefs we have never sought to justify, because attempting to justify every single belief would be extremely cognitively taxing). That brings us to the question of when we ought to change our minds.

The Ought Question

Doxastic conservatism is the position that simply holding a belief gives the belief justification. This is clearly too strong a claim. My belief that a flipped coin has landed tails, formed before I see the result, is unjustified, as would be a belief that the number of stars in the entire universe is prime. Merely believing these things does not make the beliefs justified.

Kevin McCain’s so-called Properly Formulated Epistemic Conservatism (abbreviated PEC) adds several constraints to epistemic conservatism:

PEC If S believes that p and p is not incoherent, then S is justified in retaining the belief that p and S remains justified in believing that p so long as p is not defeated for S.
Defeating Condition (C1) If S has better reasons for believing that not-p than S’s reason for believing that p, then S is no longer justified in believing that p.
Defeating Condition (C2) If S’s reasons for believing p and not-p are equally good and the belief that not-p coheres equally as well or better than the belief that p does with S’s other beliefs, then S is no longer justified in believing that p.

First, what is added is a coherentist epistemology: if the proposition p fits into my mental model of the world without contradiction, then I have justification for believing p. This contrasts with foundationalist epistemology, which says that belief in a proposition p is justified only if it can be built up from foundational or axiomatic beliefs. Coherentist epistemology can be further subdivided into holistic and local coherentism: the former says that one is justified in believing p only if it does not contradict any of one’s other beliefs, while the latter says that one is justified in believing p if it does not contradict the beliefs related to p (e.g. even if your belief in the Bible contradicts your belief in evolution, it is still possible to be justified in both beliefs, since they are situated within unrelated topics).
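One rough way to put that contrast, in my own informal gloss rather than a canonical definition, with B for the set of S’s beliefs and Rel(p) for the subset of B topically related to p:

```latex
% Informal gloss (mine), not a canonical definition; requires amsmath.
% B = the set of S's beliefs; Rel(p) = the beliefs in B related to p's topic.
\begin{align*}
  \textbf{Holistic coherentism:}\quad & S \text{ is justified in believing } p
      \text{ only if } \{p\} \cup B \text{ is consistent} \\
  \textbf{Local coherentism:}\quad    & S \text{ is justified in believing } p
      \text{ if } \{p\} \cup \mathrm{Rel}(p) \text{ is consistent}
\end{align*}
```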

The second thing McCain’s PEC does is add the two defeating conditions: if the preponderance of evidence points to not-p, then believing p is unjustified (C1); and if the reasons for p and not-p are equally good but not-p coheres equally well or better (globally or locally) with one’s other beliefs, then believing p is unjustified (C2).
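Put schematically, using symbols of my own choosing rather than McCain’s formulation, the whole package looks something like this:

```latex
% My own schematic rendering of PEC, C1, and C2; requires amsmath.
% B_S(p): S believes p.   J_S(p): S is justified in believing p.
% R_S(p): strength of S's reasons for p.
% C_S(p): how well p coheres with S's other beliefs.
\begin{align*}
  \textbf{PEC:}\;\; & B_S(p) \land \neg\mathrm{Incoherent}(p) \land
      \neg\mathrm{Defeated}_S(p) \;\Rightarrow\; J_S(p) \\
  \textbf{C1:}\;\;  & R_S(\neg p) > R_S(p) \;\Rightarrow\; \neg J_S(p) \\
  \textbf{C2:}\;\;  & R_S(\neg p) = R_S(p) \,\land\, C_S(\neg p) \ge C_S(p)
      \;\Rightarrow\; \neg J_S(p)
\end{align*}
```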

As a framework for when one ought to change their mind, PEC appears reasonable. It’s possible to find flaws in the framework, but I think most people, on an intuitive level, would agree with PEC. The problem is that people are unlikely to adopt it as a rubric for interrogating their own beliefs. As the section on the is question pointed out, people are resistant to changing their beliefs. On top of that is our bias blind spot: we weigh evidence in favor of one position over another without realizing we are doing so. If we are presented with evidence that contradicts (or even just weakens) the basis for some belief, we will, without even knowing we are doing it, discount that evidence relative to our innate doxastic conservatism: person S already believes p, and the justificatory weight of holding the belief outweighs other sources of justificatory evidence.

We can imagine a sort of ideal world in which humans are truth-seeking. In this world humans would be convinced by evidence and sound argument rather than personal sentimentality and group identity. People’s confidence in a belief would reflect the strength of the evidence and argument in its favor. An unsupported but also unopposed belief would be held provisionally, subject to change once one is exposed to new information. Yet we know that when humans lose the emotional part of their decision making, they actually become worse at making decisions. The point is that a species of such idealized rationality would likely be unable to function. Still, one would hope that people could at least be a little more discerning when formulating beliefs.