
So far in 2026, here in the United States, I’ve encountered a fair number of people calling for the impeachment, removal, and even arrest of various officials in the Trump administration. Kristi Noem, Stephen Miller, and Greg Bovino for the lies and mishandling leading up to, and in the aftermath of, the murders of Renee Good and Alex Pretti, and for the overall illegal occupation of Minneapolis. Pam Bondi and Kash Patel for aiding, abetting, and covering up the Epstein conspiracy. Trump himself for being the ringleader (or perhaps useful idiot) of this grotesque spectacle, all while enriching himself and threatening to “nationalize” the 2026 midterm elections. While impeachment and legal action are certainly in order, those actions, I contend, are the easy part. The hard part is to demonstrate to the ~40% of the U.S. population who either love what the regime is doing, or who are fine enough with it not to care, that all of this really is as bad as the detractors say. Such an undertaking – completely discrediting the fascist authoritarian project in the eyes of a significant majority of U.S. citizens – requires being able to change people’s minds. But how does that happen?
Changing minds is something I’ve written about before. In that post I discussed how people actually change their minds and under what conditions a person ought to change their mind. My views have shifted somewhat since I wrote that post (ironically enough), especially after reading How Minds Change: The Surprising Science of Belief, Opinion, and Persuasion (2022) by David McRaney.
As far as my thoughts about when people ought to change their minds, my views haven’t changed quite as much. It would still be nice if people considered evidence and sound argument more than personal sentiments and social identity. But what I didn’t consider much at the time of writing the previous post was just how much people’s perceptions and cognition can vary. In other words, I was still under the assumption that there was a reality to which people had at least some level of access, and that anyone who couldn’t perceive that reality either consciously refused to see it (in order to enrich themselves or gain power or esteem, i.e., for cynical reasons) or had something wrong with them (they were crazy or had been brainwashed or something). Although I don’t know if I would have thought of it this way at the time, I probably still held onto something like a foundationalist theory of knowledge: what a person knows (or, at least, what they believe) is built up from a foundation of self-evident and indubitable axioms, namely the shared reality to which everyone has some level of access. What I’ve come more and more to accept is that a person’s epistemology is much more coherentist, where beliefs must fit within a network of other beliefs in a way that is locally consistent (but not necessarily globally consistent). This coherentist theory of knowledge much better accounts for why people seem to disagree about what appear to be base-level facts about reality, and better explains the conditions under which people change their minds. People don’t change their minds because they realize a belief they have is inconsistent with some bedrock truth to which we all have some level of access, but because they realize the belief is inconsistent with other beliefs they hold. Yet most people are blind to the contradictions among the things they hold to be true, and so changing a person’s mind amounts to getting them to realize (and to fully accept) that they are holding contradictory beliefs. There are some complications to this picture, which I’ll get into later, but this is the gist of it.
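To make the local-versus-global distinction concrete, here is a toy sketch (my own illustration, not anything from the book or my earlier post): a belief web can pass a purely local consistency check, where each belief is only ever compared against its immediate neighbors, while a chain of implications it contains still harbors a contradiction.

```python
# A toy coherentist belief web (my own sketch): beliefs are checked only
# against their direct neighbors, so the web can be locally consistent
# while hiding a global contradiction down a chain of implications.

implications = {
    "A": ["B"],  # the agent grants: if A then B
    "B": ["C"],  # and: if B then C
}
held = {"A", "not C"}  # the beliefs actually held and rehearsed

def closure(beliefs: set[str], rules: dict[str, list[str]]) -> set[str]:
    """Chase implications to a fixed point (the global view of the web)."""
    derived = set(beliefs)
    frontier = list(beliefs)
    while frontier:
        b = frontier.pop()
        for c in rules.get(b, []):
            if c not in derived:
                derived.add(c)
                frontier.append(c)
    return derived

def contradictory(beliefs: set[str]) -> bool:
    """True if the set contains both some belief b and its negation."""
    return any(f"not {b}" in beliefs for b in beliefs)

# Local check: only the directly held beliefs, no chains -> looks consistent.
print(contradictory(held))                         # False
# Global check: chasing A -> B -> C collides with "not C".
print(contradictory(closure(held, implications)))  # True
```

On this picture, persuasion is less about supplying bedrock facts and more about walking someone through the closure of their own web until the collision becomes visible to them.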
I’ve known for a long time (indeed, it’s something I harp on a lot on this blog) that human beings are teeming with cognitive biases. We are carried around by emotion, with reason only stepping in to justify those emotions. As Hume famously said: “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” This has been restated in other ways (with some modifications) more recently: Daniel Kahneman’s ‘system one’ and ‘system two’ (the emotions and reason, respectively); Jonathan Haidt’s elephant (emotions) and rider (reason); or my favorite (although where I first heard of it escapes me): that humans think like lawyers instead of scientists, i.e., we start with a conclusion and then find evidence and argue for that conclusion (like a lawyer) rather than withholding judgement while accumulating data and basing our conclusions on what the data support (like a scientist). Yet, as much as I understood this at an intellectual level, I don’t think I’d fully appreciated its causes and consequences. I still wanted to see reason as something universal, believing that everyone was capable of coming to agreement if they could identify and neutralize their various biases. In recent times, and especially after reading How Minds Change, I’ve come to see things differently.
This reality of our human condition brings good news, bad news, and very bad news. The very bad news is something that’s become glaringly obvious in the past decade, but is reaching a breaking point here in the U.S.: people do not seem to live in a shared reality. Two people can watch, for instance, the murder of Alex Pretti and come to very different conclusions. And this does not seem like a situation where people are hearing different things through their filter bubbles. All the evidence is there for everyone to see, from multiple angles. Yet people are not seeing the same thing. The murder of Alex Pretti (I’m using him as an example; the murder of Renee Good could also be used, though there is slightly more ambiguity in that case, if not much) is the 2026 version of The Dress from 2015. Two people look at the same thing, see something completely different, and marvel at how wrong the other is.
The good news is that it is still possible to change people’s minds. The bad news is that it is a slow and laborious process. How Minds Change discusses this process (or, really, processes, but all of them work in much the same way).
In How Minds Change, McRaney discusses how our minds are prediction machines. We have a Bayesian brain with priors bestowed by a lifetime of experiences. Different people, having had different lifetimes of experiences, will have different priors, and those different priors mean that when we perceive something, we interpret it differently. I brought up The Dress earlier because, while it was a fascinating meme back in 2015, it also puzzled neuroscientists and led to interesting experiments. McRaney’s book goes into a bit more detail about the experiments inspired by The Dress (like this one and this one), but the take-home message is that people can literally see different colors based on prior experience. In other words, it doesn’t matter what wavelength of photon is actually impinging on our retinas; the brain, having had different experiences in the past, can see things differently. The author of the two linked studies used the following figure in a blog post to illustrate this point:

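To make the “same evidence, different priors” point concrete, here is a minimal Bayesian sketch (my own toy example with made-up numbers, not an experiment from the book): two observers receive the identical ambiguous signal, but their different priors about the illumination push them to opposite conclusions about the color.

```python
# Toy Bayesian update for The Dress (illustrative numbers only).
# Both observers see the same pixels; only their priors about the
# lighting differ, yet they reach opposite verdicts about the color.

def posterior_warm(prior_warm: float, lik_warm: float, lik_cool: float) -> float:
    """P(warm light | evidence) via Bayes' rule over two hypotheses."""
    prior_cool = 1.0 - prior_warm
    return (lik_warm * prior_warm) / (lik_warm * prior_warm + lik_cool * prior_cool)

# The photo is nearly ambiguous: the pixels are only slightly more
# consistent with warm (yellowish) light than with cool (bluish) light.
LIK_WARM = 0.55  # P(evidence | warm illumination)
LIK_COOL = 0.45  # P(evidence | cool illumination)

observers = {
    "night owl (lives under warm artificial light)": 0.80,   # prior P(warm)
    "early bird (lives under cool natural daylight)": 0.20,  # prior P(warm)
}

for name, prior in observers.items():
    p_warm = posterior_warm(prior, LIK_WARM, LIK_COOL)
    # Concluding "warm light" means discounting yellow -> seeing blue/black;
    # concluding "cool light" means discounting blue -> seeing white/gold.
    verdict = "blue and black" if p_warm > 0.5 else "white and gold"
    print(f"{name}: P(warm) = {p_warm:.2f} -> sees {verdict}")
```

Same likelihoods, same arithmetic, opposite percepts: the only thing that differs is the prior, which is roughly the point the experiments on The Dress make about lifetime light exposure.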
This model works not only for raw perception but also for conceptual interpretation. The murder of Alex Pretti, for instance, is not interpreted differently by different people because we perceive the face masks of the CBP agents as being different colors. It’s because we have different concepts of what counts as an instance of murder. It may be that some people don’t think of it as murder when government agents kill someone, or they may have conceptualized what Alex Pretti was doing as defying the law and/or resisting lawful government operations. Or they may simply have in their minds that members of the political out-group are fair game to be shot by members of their political in-group. The point being, it’s not that they see that it’s definitely an unjustified murder but repress those feelings and say what they think their team ought to say; it’s that they can watch the video and literally not conceptualize what they’re seeing as state-sanctioned murder. To them, it just isn’t murder, in the same way that, to some people, The Dress just is white and gold in that famous picture. Thus, changing a person’s mind about whether Alex Pretti was murdered isn’t a simple matter of getting more videos from different angles, or arguing that the CBP agents hadn’t mistaken his phone for a pistol, or any other facts about the case. Getting someone to change their mind about Pretti’s death would require interrogating the deeper assumptions, those Bayesian priors, through which a person is interpreting what they see in the videos.
When discussing how people change their minds, McRaney uses the same Piagetian framework I did in my previous post: people can either assimilate new information or accommodate it. In the coherentist epistemology, assimilation is essentially altering the new incoming information (i.e., changing our interpretation of it based on our priors, i.e., based on beliefs we already hold) so that it fits within our web of locally consistent beliefs. Accommodation is when we change our beliefs (i.e., update our priors) in order to fit the new information into the web of locally consistent beliefs. What McRaney discusses that I did not in my previous post, however, is that there are experiments showing that people assimilate new information only up to a certain point. McRaney invokes Kuhnian paradigms in science, where inconsistencies in some model begin showing up, but the model is still too useful (and, frankly, people still have a sentimental attachment to it), and so those inconsistencies are noted but set aside. But there comes a point at which the inconsistencies have piled up enough that they can no longer be ignored, resulting in what Kuhn called a paradigm shift. Similarly, people can assimilate new information and ignore incongruities and inconsistencies with already-held beliefs for some time, but eventually some threshold is reached at which the cognitive dissonance cannot be sustained. This threshold (the affective tipping point) differs across people and across beliefs, but it is the point at which someone must switch from assimilation to accommodation, and it is here that people change their minds.
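As a toy illustration of this assimilate-until-threshold dynamic (a sketch of my own, with made-up parameters rather than anything from the book or the experiments it cites):

```python
# Toy model of the affective tipping point (illustrative numbers only).
# Contradictory evidence is discounted and assimilated until accumulated
# dissonance crosses a threshold, at which point the agent accommodates.

def run_agent(evidence_stream, tipping_point=5.0, discount=0.7):
    dissonance = 0.0
    for step, anomaly in enumerate(evidence_stream, start=1):
        # Assimilation: the anomaly is explained away to fit existing
        # beliefs, so only a fraction of its weight accumulates.
        dissonance += anomaly * (1.0 - discount)
        if dissonance > tipping_point:
            print(f"step {step}: tipping point crossed -> accommodate (update priors)")
            dissonance = 0.0  # the revised web absorbs the anomalies
        else:
            print(f"step {step}: assimilate (dissonance = {dissonance:.2f})")

# Identity-laden beliefs behave like a higher threshold and a heavier
# discount: the same stream of evidence takes far longer to tip.
run_agent([2.0] * 10)                                   # tips at step 9
run_agent([2.0] * 10, tipping_point=8.0, discount=0.9)  # never tips here
```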
If only it were that easy. Unfortunately, as a chapter in McRaney’s book is titled, the truth is tribal. For issues important to our social identity and status, this affective tipping point has a much higher threshold. Our motivated reasoning, confirmation bias, and in-group bias work overtime to ensure that we continue to assimilate, rather than accommodate, information that runs counter to our group’s positions. Costly signaling and credibility-enhancing displays shore up group beliefs and ease cognitive dissonance. We uncritically believe the testimony of people within our in-group while exhibiting skepticism, even hostility, toward anything said by those deemed part of the out-group. McRaney puts it this way: “If the brain assumes the risks of being wrong outweigh any potential rewards for changing its mind, we favor assimilation over accommodation, and most of the time that serves us well.”
But, McRaney reminds us, we do not exist amidst a web of locally consistent beliefs tethered to nothing. We humans also have deep values that can be appealed to. He says:
Throughout the 2000s, research into what psychologists call identity maintenance found that reputation management is the glue that binds us to our peer groups. When we feel as though accepting certain facts could damage our reputation, could get us ostracized or excommunicated, we become highly resistant to updating our priors. But the threat to our reputation can be lessened either by affirming a separate group identity or reminding ourselves of our deepest values.
He briefly discusses a couple studies that show that appealing to deeper values can help and then says:
If we feel we are falling short of our values, if we are not good people by whatever standards we consider important, we become motivated to signal otherwise by publicly endorsing beliefs that will re-integrate us to our peers. But if we feel affirmed, accepting challenging evidence or considering new perspectives poses less of a threat. … If we realize our groups fall short of those values, like Megan [Phelps-Roper] and Charlie [Veitch] had, we can feel justified in leaving. We can feel safe to change our minds.
Near the end of the chapter, McRaney quotes psychologist Tom Stafford as saying that “truth is social.” This got me wondering: if “truth is social,” does that mean the rise of conspiratorial thinking and loss of trust in institutions is because society has become less social (i.e., our schizoid society), or because we’ve become so social that what would otherwise have been fringe beliefs can now reinforce and perpetuate themselves through social interactions that previously could not have existed? I tend to think it’s the former, but of course I would, since I wrote a blog post contending just that. Stafford goes on to say that online conspiracy theory groups are “social groups that are not social.” Within my thesis, that our society has exalted individualism to the point that we’ve all collectively developed a sort of schizoid personality disorder, these non-social social groups make a kind of sense. It’s something unlike either sociality or parasociality, perhaps something like cosociality (codependency, but with a group), such that the group does not make one better than one is by oneself, but instead subsumes one and makes one dependent on the group for one’s sense of self. What someone gets out of the group is that which they lack by refusing to let go of their exalted individuality, with the promise that conformity is individuality, a contradiction obfuscated by the contrarian nature of the group, i.e., since the group casts itself as the plucky rebels raging against the establishment, it follows that the group must be populated by mavericks who refuse to succumb to the consensus trance of the “official story” promulgated by the elites.
One of the problems with our schizoid society, and with the cosociality that attempts to fill the void left by radical individualism, is that counterpoints to our beliefs are supposed to come from others. If we remain encased in our echo chambers, cutting out of our lives anyone who might infringe on our “boundaries” and “self-care” and our sense of identity, or even just people we find inconvenient, our beliefs go unchallenged. As individuals, we have only ourselves to argue with, and in such a contest we will always win. Our beliefs go uncontested. McRaney says:
With no one to tell you that there are other points of view to consider, no one to poke holes in your theories, reveal the weakness in your reasoning, produce counterarguments, reveal potential harm, or threaten sanction for violating a norm, you will spin in an epistemic hamster wheel. In short, when you argue with yourself, you win.
Mercier and Sperber call this the “interactionist model,” which posits that the function of reasoning is to argue your case in a group setting. In this model, reasoning is an innate behavior that grows more complex as we mature, like crawling before walking upright. We are social animals first and individual reasoners second, one system built on top of another through biological evolution, and individual reasoning is a psychological mechanism that evolved under selective pressures to facilitate communication between peers in an environment where misinformation is unavoidable. In an environment like that, confirmation bias turns out to be very useful. In fact, bias itself becomes very useful.
What he means here is that confirmation bias is a useful prophylactic against misinformation. If we just accepted any bit of information presented to us, it would be easy to be taken in by misinformation, and so epistemic conservatism evolved to guard us against being too gullible. McRaney continues:
As part of a group that can communicate, every perspective has value, even if it is wrong; so it’s best that you produce arguments that don’t run counter to your point of view. And since the effort is best saved for group evaluation, you become free to make snap judgements and quick decisions based on good-enough justifications. If others produce counterarguments, you can then refine your thinking and update your priors.
…
Reasoning is biased in favor of the reasoner, and that’s important, because each person needs to contribute a strongly biased perspective to the pool. And it is lazy, because we expect to off-load the cognitive effort to a group process. Everyone can be cognitive misers and save their calories for punching bears, because when it comes time to disagree, the group will be smarter than any one person thanks to the division of cognitive labor.
In other words, one does not need to be able to argue against one’s own beliefs, because we are the product of an evolutionary process that favored reasoning socially, where one could depend on others to argue against one’s beliefs. But in our hyperindividualized society, we have exalted individual reasoning far beyond what is warranted, making us overly skeptical of others’ reasoning rather than viewing it as a necessary asset for our own thinking.
Thus, to persuade someone, it is not enough to simply give reasons why your interlocutor ought to believe some proposition. They will come back with their own biased reasoning – toward which they feel an inflated sense of confidence due to the exaltation of their individual ability to reason – as to why they are justified in continuing to reject the proposition. What is required is a change in attitude. This is done most effectively by prompting central or systematic thinking, i.e., by increasing how much thought a person is motivated to put into interrogating a particular belief. In other words, to change our minds, we need to examine the priors from which we formulate the belief, not the belief itself.
Doing this effectively requires an immense amount of patience. The techniques that McRaney identifies in How Minds Change all do something very similar, which is a kind of technique rebuttal as opposed to topic rebuttal. The latter is how most debates are conducted: ideally, people offer some thesis or proposition and then use facts and logic to support it; more frequently, people instead hurl assertions and insults at one another in hopes of shaming or embarrassing each other into changing their minds. Neither tends to be very effective. Technique rebuttal, however, attempts to interrogate how an interlocutor even came to their conclusions, rather than attempting to rebut the conclusions themselves. Because people are so prone to confirmation bias and motivated reasoning, we are very bad at thinking about (and even worse at questioning and testing) how we came to hold the beliefs we have. Quite often we hold beliefs for poor reasons (e.g., in-group biases) or for no reason at all (e.g., the mere exposure effect), and we simply continue to hold those beliefs without questioning them (or having our cosocial group question them). In our hyperindividual society, we view others with suspicion and exalt our own ability to reason, and so we lose the asset of considering others’ points of view while maintaining utmost confidence in our own reasoning. Yet confirmation bias and motivated reasoning ensure that we do not question our own beliefs in any meaningful way.
The persuasion techniques utilizing this approach that McRaney discusses are deep canvassing, street epistemology, and the technique used by Smart Politics. All three have empirical evidence attesting to their efficacy, which McRaney elaborates on at length in the book; these are not untested hypotheses. They were built through trial and error on the ground (through political canvassing and face-to-face interactions in the street, hence the names given to these approaches). They not only work better at getting a person to change their mind on the spot, but have been shown to do so more enduringly than topic rebuttal approaches. McRaney gives a step-by-step for each of these techniques, but I will quote him on just deep canvassing, since all three are quite similar:
- Establish rapport. Assure the other person you aren’t out to shame them, and then ask for consent to explore their reasoning.
- Ask how strongly they feel about an issue on a scale of one to ten.
- Share a story about someone affected by the issue.
- Ask a second time how strongly they feel. If the number moved, ask why.
- Once they’ve settled, ask, “Why does that number feel right to you?”
- Once they’ve offered their reasons, repeat them back in your own words. Ask if you’ve done a good job summarizing. Repeat until they are satisfied.
- Ask if there was a time in their life before they felt that way, and if so, what led to their current attitude?
- Listen, summarize, repeat.
- Briefly share your personal story of how you reached your position, but do not argue.
- Ask for a rating a final time, then wrap up and wish them well.
The first thing to notice here is that, in using this technique, you do not try to rebut a person’s beliefs. Instead, you try to get them to think about what brought them to hold those beliefs in the first place, i.e., you try to get them to interrogate their priors. You try to get them to think about the beliefs with high elaboration (elaboration likelihood model) or systematically (heuristic-systematic model) in order to get them to convince themselves that they are misinformed or incomplete in their thinking. Trying to argue against their beliefs will make a person defensive, but if you can get them to realize, on their own, that they hold those beliefs for weak reasons, or that those beliefs actually conflict with their deeper values, that can lead to enduring changes in their beliefs.
Essentially what you want to do is help the interlocutor realize that the belief you are trying to convince them of is more consistent with their web of locally consistent beliefs or with their values. A. J. Ayer said of morality:
If our opponent concurs with us in expressing moral disapproval of all actions of a given type t, then we may get him to condemn a particular action A by bringing forward arguments to show that A is of type t.
Similarly, to convince someone to reject some proposition P, they need to be reminded of their values V and shown how P is inconsistent with V (or that some other proposition Q is more consistent with V than P is). We could restate Ayer this way: if our opponent concurs with us that holding values V commits us to rejecting all beliefs of type t, then we may get them to reject a particular proposition P by bringing forward arguments to show that P is of type t.
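Schematically (my own notation, not Ayer’s or McRaney’s), the structure of the move looks something like this:

```latex
% T(b): "belief b is of type t";  Accept(b): "the interlocutor accepts b"
% Premise (already granted): holding values V commits one to rejecting
% everything of type t. The persuasive work is all in establishing T(P).
\forall b\,\bigl( T(b) \rightarrow \lnot \mathrm{Accept}(b) \bigr),
\qquad T(P) \;\vdash\; \lnot \mathrm{Accept}(P)
```

The universally quantified premise is one the interlocutor already holds; all of the persuasive effort goes into establishing T(P), which is exactly what the questioning in these techniques tries to surface.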
When reading about these “techniques” to persuade people, I couldn’t help but think that this seems manipulative, though I can’t say for sure why. I would guess it has to do with the Enlightenment values in which those of us living in W.E.I.R.D. societies are steeped, where emotions are seen as lesser: animalistic and uncontrollable, the lizard brain, the lesser Aristotelian soul subordinate to reason. On that view, appealing to emotions is in some way an attempt at mind control, rather than sitting down like dignified and reasonable people and reaching some agreement about facts to which everyone has voluntarily and consciously assented.
Yet, I think everyone knows, at some level, that all persuasion is emotional to some degree. Most people are aware of things like clickbait and ragebait (even if we are still suckered in by these tactics) as ways of appealing to emotions to convince us to watch a video or read an article. But changing our minds is emotional even when it’s not so obvious. While we all (at least in W.E.I.R.D. societies) like to think of ourselves as only ever being convinced by evidence and sound reasoning, quite often we are not. Depending on how important some belief is to our social identity, we may need to be exposed to the evidence multiple times, and hear the same arguments over and over again, before the inconsistencies in our thinking pile up enough for us to reach our affective tipping point, where we finally give in and accommodate the information rather than continuing to assimilate it.
The second thing to notice in the deep canvassing technique enumerated above (and which is also a major aspect of the other techniques) is the importance of trust. In our hyperindividual culture, the reasoning of others is viewed with suspicion; they are trying to sell you something or convince you of untruths for their own cynical purposes. This must be true, says the individualist interlocutor, because if what you were arguing were true, they would already have reached that conclusion through their own reasoning (which is why these techniques stress getting the person to see the shortcomings in their reasoning on their own, rather than having you point them out). Being shown to be wrong by someone else is viewed by the individualist as a sign of weakness, of feeble-minded reasoning, and of insufficient dedication to the cosocial in-group. Thus, an individualist interlocutor is not going to allow their mind to be changed unless they first reach some level of trust in you. Discussing vaccine hesitancy, McRaney says:
Researchers say, in short, it was about trust – we don’t live in a post-truth world, but in a post-trust world. A general distrust of media, science, medicine, and government makes a person very unlikely to get vaccinated no matter how much information you throw at them, especially when the people they do trust share their attitudes.
While facts may not care about anyone’s feelings, a person’s willingness to accept facts depends greatly on their feelings. That might seem less than ideal – we’d hope that people would change their minds to fit the facts, not the other way around – but it is undeniably the situation in which we find ourselves. This is why trust is an issue of paramount importance. There is nothing more damaging to a society than low trust, and polarization is a feedback loop that only erodes trust further.
While it may feel cathartic to call people we disagree with evil or sick, this only serves to further erode trust. And when someone finally realizes the error of their ways, our knee-jerk reaction may be righteous indignation, to reject and denigrate them for having been in the wrong; but doing so only disincentivizes people from changing their minds. Getting people trapped in flawed or harmful thinking to see the error of their ways is, in my view, the primary goal of civil society. As I mentioned at the outset, exacting justice against wrongdoers is the easy part. Convincing people that the wrongdoers were, in fact, doing wrong is the only way to ensure durable and sustained progress toward a just and functioning society. Locking up the ringleaders without discrediting them will, at best, drive the cult underground to fester and re-emerge in the not-too-distant future to inflict further harm on everyone around it. As difficult and frustrating as it may be, being patient, trusting, and accepting of people is the only way to even begin the difficult task of reconciliation within a rapidly polarizing society. The process will not feel as good as dunking on ideological adversaries, but it is the only way to ensure durable progress toward something less horrifying than where we are currently heading.
Concluding Remarks
In addition to being interested in how minds change (and when they ought to) for the obvious reason that we live in a post-trust world that is rapidly disintegrating, I’m also interested because I’m someone who has had their mind changed on big issues multiple times throughout my life. As a teenager I believed in the Christian God, but I no longer do. In my late teens and early twenties I voted Democrat, only to become a libertarian from about 2010 to 2020, and then to become whatever I am today (a political pessimist who still has some libertarian leanings but maybe with more left-wing thought? I don’t quite know yet). In a way, being willing to change my mind about things can be a strength. I’m not beholden to any partisan interests, and I’m willing to part ways with people I come to disagree with (i.e., I’ve never been susceptible to cults of personality).
The weakness of this, however, is that I tend not to have strong convictions. While I have certain core values that have never really wavered, I don’t have any animating beliefs. I wallow in a state of anhedonia, not motivated one way or the other, a sort of underground man. And it’s not as though my “ability” to change my mind fairly easily when presented with new information stems from some strength of character that other people lack. It has a lot more to do with the fact that I’m a nobody (i.e., I do not have a large audience and thus no audience capture) and have little to no interest in social interaction, meaning that I do not pay much of a price for “flip-flopping” on my views. In other words, I embody the schizoid society that is one of the roots of the current predicament facing the United States.
As someone with very little social interaction, I am a victim of my filter bubble, with the vast majority of my information served to me via algorithm. I do not have people in my life to question how I arrived at my beliefs, or to remind me that the information that passes through my filter bubble may not be the same information the rest of the world is being fed. Something I’ve noticed about myself, particularly within the past year or so, is just how bad I am at making predictions. I predicted that the political left was a bigger threat than the political right. I thought it most likely that Russia would easily defeat Ukraine. I thought people were overreacting to Trump. The point is, I have an abysmal track record when it comes to political predictions. While I may simply lack talent in this arena, I think at least some of the blame can be attributed to my dependency on algorithms to serve me information, reflecting my own biases back at me and amplifying my weird little idiosyncratic beliefs, all while I lack the social connections to call me out on my own bullshit. These are problems writ large on society, but nobody is more guilty of them than I am.