
A common refrain in the news media during these COVID years has been to “trust the science.” This is also a popular mantra when it comes to climate science. Yet, in the United States at least, trust in experts and institutions is at an all-time low. The political right is skeptical of climate science, COVID vaccines, and scientific institutions like the NIH and CDC, seeing them as a means for the government to take away rights and for liberals to impose their will. The political left views science as a white colonialist means of subjugating those with other “ways of knowing” and upholding white, male privilege. So the question is: should we trust the science?
Introduction
This post was inspired by the above video, a discussion of the 2021 movie Don’t Look Up.
The movie they discuss is a satire about climate change and climate change denial. It proffers a pretty ham-fisted thesis: people need to “trust the science” on climate change, lest the world come to an end. One of the people in the video argues that science is just another way of looking at the world, one that excludes things like indigenous ways of knowing – an idea popular within critical theory circles under the names epistemic oppression and positionality.
I think this view conflates science with scientism. It’s easy to see why someone would make this conflation: science popularizers (e.g. Neil deGrasse Tyson) often advocate scientism. Scientism, in a nutshell, is the belief that scientific truths (i.e. subjects within the purview of the scientific method – things that can be measured and quantified) are the only truths, or at least the only worthwhile ones, with things like philosophy, ethics, and religion dismissed as so much gobbledygook.
Scientism is a philosophical position, making metaphysical assumptions like materialism and reductionism, although most proponents of scientism would probably not like to admit as much. While I am of the opinion that the world supervenes on the physical, that does not make science the proper means to answer any and all important questions (at least in practice if not in principle).
A major error in conflating science with scientism is thinking that, because scientism is a poor philosophy, science itself must also fall short in its own area of inquiry. Contrary to what Sam Harris might argue, it is certainly true that science falls short when applied to areas beyond its scope, such as answering the questions “what is the good life?” or “how should people treat one another?”
Of course, knowledge gained from science can act as a guide in these questions, but science will not supply the final answer. When we are talking about properly scientific questions, however – climate change, COVID, an incoming comet – then we will certainly want to turn to science for answers.
This, I think, leads to another conflation made by the same person in the video: that science is inherently capitalistic and exploitative. In other words, that taking a scientific point of view means automatically seeing people and the natural world as resources to be manipulated and used – that any mineral discovered must be exploited for technology, that fossil fuels must be exploited for energy, that land must be exploited for industry, and so on. Certainly science has aided in such exploitation, but the exploitation is not an intrinsic part of science; it comes because humans are going to act human, and science has merely been a tool used along the way. People like the person in the video would almost certainly disagree, taking the critical theory view that there is no such thing as objectivity or impartiality, and that such exploitation is inherent in science. I disagree that the scientific method itself is subjective and partial, although I would agree that scientists often are – but more on that below.
This brings us to the issue at hand: should we “trust the science?” I can see two overarching issues when it comes to trusting science: 1) is the scientific method an effective way of discovering truths? and 2) are scientists, the ones actually doing science and communicating it to the public, trustworthy? I’ll examine these questions in turn.
Trust the Scientific Method?
There are broadly three schools of thought in the philosophy of science. First is verificationism, which says that scientific knowledge is gained by testing a hypothesis against the world and gathering observations that confirm it. There are problems with this, the canonical one being that of the black swan. For a long time, Europeans figured that all swans were white: the hypothesis “all swans are white” was taken to be true because every swan anyone came across was, indeed, white. Upon landing in Australia, however, Europeans discovered that there were, in fact, black swans. This is the weakness of inductive reasoning, where people come to conclusions based on experience. The Europeans had only ever experienced white swans, so they took it as truth that all swans are white – a truth assumed from limited data, by people looking to have their hypothesis confirmed. If we take verification as the justification for knowledge, we will always have insufficient data.
Next came Karl Popper’s falsifiability. This is how science is usually taught at the high school level: a person has a hypothesis, but instead of going out looking to verify it, they design experiments, or use theories to make predictions, and then try to falsify the hypothesis. This has much more rigor, because a theory must stand up to constant attempts to knock it down rather than simply accumulate instances where it holds. The issue, of course, is that a theory is not always jettisoned when it’s found to be incorrect. Newtonian gravity was known to be insufficient well before a replacement existed – the anomaly in Mercury’s orbit – but it was correct so often that it remained useful. It’s still used today, even though physicists know it’s not “true” insofar as it doesn’t correspond perfectly with reality; and it’s only known not to be “true” in this way because we have a better theory: Einstein’s General Theory of Relativity.
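To put a number on the Mercury example (a standard textbook result, included here only for concreteness): general relativity predicts an extra advance of Mercury’s perihelion, beyond what Newtonian mechanics plus the known planetary perturbations can account for, of

$$\Delta\phi = \frac{6\pi G M_\odot}{c^2 a (1 - e^2)}$$

per orbit, where $M_\odot$ is the solar mass, $a$ is Mercury’s semi-major axis, and $e$ its orbital eccentricity. This works out to roughly 43 arcseconds per century – almost exactly the anomaly astronomers had measured but that Newton’s theory left unexplained.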
The third school of thought, proposed by Thomas Kuhn, centers on the paradigm shift. A theory, even when it is known to be flawed, is maintained so long as it remains useful. Only once some better theory comes along – one that explains all the relevant evidence and accurately predicts the aberrations from the old theory – do we discard or downgrade the previous one. The General Theory of Relativity makes spectacularly good predictions and explains the aberrations seen in Newton’s theory of gravity. We know already, however, that it has shortcomings of its own: namely dark matter and unification with quantum field theory. But it remains useful, so it has not yet been discarded or downgraded.
This leads into another issue with science: what is the ontological status of a theory? In science, as opposed to everyday parlance, a theory is essentially an abstract set of principles that explains the data. The theory of evolution by natural selection, for instance, explains observations of modern biodiversity as well as those found in the fossil record. The theory has been updated and confirmed by further advances in genetics, leading to what is sometimes called the neo-Darwinian or modern synthesis. The theory is an abstraction that explains all the data acquired by biologists, geneticists, paleontologists, etc. over the centuries. But what is the ontological status of that abstraction? The same could be asked of General Relativity, which predicts a curvature of space and time – but what does that mean, ontologically speaking? And if our theory isn’t completely true (i.e. we are still on a useful, but not totally “true,” paradigm), then what ontological status does it have? Can we ever say that a theory is true, or only that it is useful?
These are the sorts of questions that proponents of scientism don’t like to ask. They implore us to “trust the science,” which we certainly ought to do if the science has proved useful – but how much stock should we put in a theory’s correspondence with ontological reality? Granted, this is a somewhat academic question. Most people are fine, for instance, using the phone that science and engineering made possible, because the phone is useful, without asking whether the quantum mechanics that made the phone possible is in perfect correspondence with ontological reality. But there are instances, such as with climate change, where the models and predictions made can be scrutinized. The below video is a great example of that.
The flip side of whether a theory is true is whether the raw data, and the analysis and interpretation built on it, can be trusted. Experimental design is incredibly important, yet it is almost never mentioned when the media communicates science. From sample selection, to method and control validation, to things like p-hacking (aka data dredging), which can make statistically insignificant results look significant (see the sketch below), these details are almost always left obscured when scientific studies are presented to lay audiences.
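To illustrate why p-hacking works, here is a minimal sketch in Python (a toy simulation, not drawn from any real study): run enough comparisons on pure noise and some will cross the conventional p < 0.05 threshold by chance alone. A dredger then reports only those “hits.”

```python
# Toy demonstration of p-hacking: compare noise against noise many times
# and count how often the difference looks "statistically significant."
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05         # conventional significance threshold
n_comparisons = 100  # e.g. 100 outcome variables, subgroups, or re-runs

false_positives = 0
for _ in range(n_comparisons):
    # Both groups are drawn from the SAME distribution, so any
    # apparent "effect" between them is pure chance.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_comparisons} null comparisons were 'significant'")
# Expect roughly alpha * n_comparisons ≈ 5 spurious hits. Reporting only
# those, while hiding the other ~95 tests, is data dredging in a nutshell.
```

Corrections for multiple comparisons (e.g. Bonferroni) exist precisely to guard against this, which is one reason methodology deserves as much attention as headline results.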
The blame for a lot of these issues can be laid at the feet of the scientists and science communicators, which brings us to the next set of questions.
Trust the Scientists? The Science Communicators? The Institutions of Science?
This is, I think, where the biggest issue arises. Scientists are human, subject to the same egoism and bias as any other person. The scientific method is humanity’s best attempt at removing such biases from knowledge acquisition, but its implementation is far from perfect. Ideally, a scientist gathers data and comes to a conclusion only once a high evidentiary threshold has been reached. This ideal is, at best, only ever imperfectly achieved.
Scientists, like all humans, care about things like prestige and esteem from peers, and about not looking like an idiot for being wrong. They are subject to incentives, both monetary and social, and will always be tempted by them. It doesn’t help that the institutions of science run on a publish-or-perish incentive, under which reproducing experiments and doing much of the necessary but boring work of science is disincentivized.
And then, of course, when politics gets involved, it opens up a whole new can of worms. Did Dr. Fauci lie about the origins of the coronavirus for the sake of “international harmony”? Is the removal of content questioning the narrative around COVID vaccines done for political reasons rather than because of sound scientific evidence? How can we know whether a particular extreme weather event can be attributed to climate change? What does the science say about transgender people? Is it better to err on the side of affirming people’s gender identity? Should transgender women be allowed to compete against cisgender women in sports?
Most importantly, if politics is inserting itself into science, should we “trust the science”?
I think two big issues with “trusting the science” are two sides of the same coin: science communication and scientific literacy (or the lack thereof). Most people aren’t anywhere near experts in science; many could even be considered scientifically illiterate. This is partly the fault of the people themselves, but I think the bigger problem lies with education and with science communicators. One such problem, already discussed, is the propensity of science communicators toward scientism. There can be a sort of condescension in it: if a person doesn’t accept the scientistic worldview, they’re treated as superstitious or as simply talking nonsense.
I think there are two things to point out as the trouble with science communicators. The first, discussed above, is that methodology is rarely, if ever, mentioned when the media covers topics in science. The second – probably more salient, insofar as it leads to greater mistrust of science – is the infiltration of politics into science communication. The latter is the issue I’ll discuss further.
The vast majority of science communicators are politically center-left, of a neoliberal persuasion. The usual argument from that cohort, of course, is that “facts have a liberal bias.” That’s not necessarily true, especially given the anti-science stances many on the radical left take. This political skew, though, poses two problems: 1) science communicators let politics into their science communication, and 2) a certain percentage of the population is put off by it. The former not only creates a sort of echo chamber in science, it allows more extreme left-wing ideas to creep in, because science communicators are much less likely to challenge ideas from “their own side.” The latter, admittedly, is largely on the people who believe or disbelieve science because they don’t like the politics of the communicator. But when science communicators are willing to mislead, or even outright lie, for political reasons (or for more base financial or prestige incentives), one cannot blame people for being skeptical.
People’s scientific illiteracy is a problem that will never be fully done away with. There is simply far too much information for anyone, even a scientist, to know all of it (or even enough of it to formulate an informed opinion on everything). This is why science communicators need to be trustworthy, just as much as scientists (if not more so). For instance, I am by no means an expert on climate change. I know quite a bit about chemistry, since that is my background, and so perhaps have a slightly better grasp of the underlying mechanisms than the average person, but for the most part I accept climate change on trust. I “trust the science” on it. Or, maybe a better way to put it is that I “trust the scientists and science communicators.”
This is the epistemological problem of testimony: how am I justified in taking someone’s word for something? Experts – whether in science or finance or foreign policy or whatever else – are supposed to be the people whose testimony we can trust, our trust justified by their credentials, which are meant to certify that very expertise. That’s practically the definition of an expert: if person A is an expert in some field X, then I can trust what person A says about X. The corollary is that if person B is not an expert in X and contradicts person A, then I should give more weight to what person A says about X than to what person B says.
This is why it is so corrosive to our society when the experts are shown to be wrong a lot (e.g. all the foreign policy experts being wrong about Afghanistan), when they have bad incentives (political, monetary), or when they’re caught lying. If I can’t trust the testimony of experts, but I also don’t have the time, energy, or means to become an expert myself, then what am I supposed to do?
Most people, of course, come to conclusions for reasons other than knowledge of the subject: to fit in with peer groups, to make themselves feel good, and so on. It’s difficult, if not impossible, for someone to remain agnostic about something; more difficult still if there is social pressure to believe one way or the other; even more difficult to remain agnostic about many things at once; and most difficult of all to remain agnostic when some action, grounded in whichever belief one comes to, must be taken.
Concluding Remarks
The original question was: should people “trust the science”? I haven’t come to any firm conclusion in this post. What I hope to have accomplished is to help anyone reading this think more critically about science and science communication. It’s difficult enough to navigate our own day-to-day lives; this is why, like it or not, we need experts if we want to maintain our modern society. But we also need experts who are trustworthy.