Expertise, Meritocracy, Pseudo-Intellectualism, and the Problem of Testimony

The vast majority of what people “know” about any given subject, they know only because someone told them, or they read it in a book or online, or heard someone talk about it. In other words, we get our knowledge from what philosophers call the testimony of others. This worked out well in our hunter-gatherer past, when a member of a tribe knew everyone else in the tribe: person A knows that person B is capable of or knowledgeable about X, and so person A can trust person B to tell them about X.

In much larger societies, such as in modern nation states, many, if not most, of the people we run into online and in our daily lives will be strangers, or at the very least not people we are intimately familiar with. This means that much of what we are told comes from strangers, who are usually themselves relaying information they were told from yet other strangers. Since we don’t know if these strangers know what they’re talking about, humans have come up with various means of lending their testimony legitimacy and trustworthiness.

In the past, this legitimacy was said to come either from a higher power (such as the gods) or through force (might makes right). A monarch could appeal to the divine right of kings or the mandate of heaven for why what they say is trustworthy; a clergyman could appeal to divine revelation or scripture or tradition for why what they say is correct. A warlord could simply say that you’ll believe them, or at least behave as if you do, or they’ll inflict violence.

Since the Enlightenment, merit has become one of the primary ways that authority and social stratification are justified. For instance, Article VI of the French Revolution’s Declaration of the Rights of Man and of the Citizen (1789) says:

All citizens, being equal in the eyes of the law, shall be equally eligible to all dignities, public positions and occupations, according to their ability, and without distinction except that of their virtues and talents.

And so, more recently (though, of course, this has been true in times and places throughout the past – think of the Chinese state bureaucracy), we’ve adopted a system of credentials and licensure. A credential is something a person can present to demonstrate that they know what they’re talking about within a given domain – such a person has acquired expertise, where expertise can be thought of as the possession of merit within the relevant domain. Credentials are therefore a shorthand way of establishing what our hunter-gatherer ancestors would have learned through years of interacting with a person – person A can have confidence that person B possesses a particular level of knowledge and know-how about X, and therefore that person B has expertise in X. We need this, or something like it, because we count on so many strangers for so much of our lives. There would simply be no feasible way to vet everyone on whose expertise we depend.

Imagine calling a plumber, and then having to vet them to ensure they know what they’re doing. And then having to vet the people that manufacture the plumber’s tools, and then the people who transported those tools to the plumber, and the people who mined the materials needed to make the tools, and the people who transported the raw materials for making those tools, and then the people who built the various transportation vehicles needed to do all that transporting, and so on. This was illustrated by Leonard Read in his 1958 essay “I, Pencil,” in which he showed that no single person knows how to make a pencil (Read, 1958).

Trust is the most important currency within any society. We, as social animals utterly dependent on one another for our survival and comfort, need to be able to trust the testimony of all the strangers we depend on. Once a society grows beyond Dunbar’s number – past the point where the threat of social ostracization by everyone a person knows is enough of a deterrent against being untrustworthy – new ways of assessing and demonstrating trust must be implemented. There are at least two major ways, in my estimation, that our modern, increasingly globalized society has attempted to maintain trust. The first is through the aforementioned system of credentials and licensure. The second is through the methods and institutions of science.

Credentials and Meritocracy

The first major way modern society has attempted (however imperfectly) to address this issue of trust is with our system of credentials and licensure. I can trust the plumber because of their licensure, and since I can trust the plumber with understanding how to do plumbing, I can also trust that they know which plumbing tools to use and how to obtain them. And I can trust this because I understand that to get credentials or licensure a person must have demonstrated some level of expertise in the relevant domain. This system of credentialing as an indication of expertise is the institutionalization of what is known in the philosophy of distributive justice as meritocracy.

Merit is a somewhat slippery term. There is no single consensus definition with necessary and sufficient conditions for a society to be deemed meritocratic. Some theories of the merit-based distribution of scarce resources hold that it should be enforced by law; others hold that, while not enforced by law, it is enforced by a Marxian superstructure; still others posit that merit-based distribution is merely a good idea. For our purposes, I will use meritocracy to mean the general idea that the individual who satisfies some set of relevant criteria – criteria demonstrating a proficiency that qualifies them for a given position in society, in either the public or private sector – is the one who ought to be given that position. This can be meant in a normative sense (that we ought to live in a meritocracy) or a descriptive sense (whether or not we do, in fact, live in a proper meritocracy). Someone’s merit, then, is assessed without regard to any irrelevant factors, which can include race, sex, religion, socioeconomic background, and so on (thus employing a principle of colorblind equal opportunity). This idea broadly describes how most businesses and bureaucracies in western liberal democracies at least claim to operate, and it is in the broad context of western liberal democracy that I will be discussing these issues.

Even within the western world, the context of merit can be further broken down. One’s merit as, say, a mathematician does not necessarily translate into merit as a pro-football player. And because subjective forms of merit, such as personability or morality or wisdom, are difficult to quantify and can also slide into difficult ethical territory, only certain criteria (usually ones deemed to be blind to things like race, sex, gender, and so on – the colorblind ideal) are used as metrics for merit. In school, things like grades, denoted by GPA, and the number of extracurricular activities, can be measures of merit. The university one attended, different fellowships and internships, employment history and publication history, and so on, can be semi-quantifiable or quantifiable metrics of merit in the professional world.

Meritocracy has become somewhat of a heated topic lately. A subset of the population, primarily (but by no means exclusively) on the left, has come to think that meritocracy is bad.

One major reason for this is that merit, as discussed above, must function within a given context and uses only a select few (ideally) quantifiable criteria. Since the context for merit in western liberal democracies, according to the left, is capitalism, and the capitalists determine the criteria for what counts as merit, all such measures of merit are, the argument goes, simply ways of determining how good someone is at upholding the Marxian superstructures of capitalism. Further, these criteria are reified in such a way that a person’s value as a person (and not just their fit for some position at a job, for instance) is measured by how good they are at upholding these superstructures. Someone is less valuable, for instance, if they are not a “productive” member of society (where “productive” is defined by how good someone is at increasing shareholder value for the firm employing them, or consuming goods and services to add to profits, or by reproducing to generate a new stock of “productive” humans that can also increase shareholder value for the firms where they will one day work).

Plenty of other criticisms of meritocracy have been levied. For instance, whether merit-based distribution of scarce resources should be considered through deontological or consequentialist ethics; what the ultimate ends of society are or should be (Liberty? Security? Justice? Prosperity?) and whether and what flavor of meritocracy can even achieve those ends; or how much luck that has nothing to do with merit is involved in ostensibly merit-based distribution (such as being born rich, or having preferences or natural talents for things valued by one’s given social context); or the extent to which merit determines what people actually deserve (or if certain people are entitled to certain things, perhaps as a general human right or based on something besides quantifiable measures of merit); or so-called reaction qualifications (for instance, if an attractive woman can make a lot of money on Only Fans but I cannot, does that mean a person’s sex and their attractiveness are merits? And how do we measure such things?); or where or when there ought to be exceptions to meritocracy (for example, affirmative action). Perhaps most important, of course, is whether meritocracy is even practically possible, especially given people’s personal biases and various systems of racial and sexual discrimination. For more, I recommend the Stanford Encyclopedia of Philosophy entry on meritocracy (SEP, 2023) and Zoe Bee’s video essay on the problems of meritocracy embedded above (Zoe Bee, May 31, 2025).

While the leftist critique of meritocracy raises valid points, this proposition – that meritocracy is bad – can, as far as I can discern, have one of two different meanings.

The first is to take this sentiment at face value: meritocracy is bad categorically, with the implication that something other than meritocracy would be better (by some metric most likely based on the political goals of the detractor of meritocracy). Meritocracy was, however, not just a way of ensuring that a person understands what they’re talking about, but also a way of removing the systems of privileges, nepotism, and patronage afforded to nobility and landed gentry. So, when someone says that meritocracy is bad, the implication seems to be that there should be some new system of privileges, with the groups awarded certain privileges delineated by some other criteria, such as race or sex or gender identity (or some mix of a number of such things). This, I would argue, is not getting rid of meritocracy per se, but simply changing what criteria count as merit in certain fields. A truly anti-meritocratic system would be random, filling positions by lottery.

Think about it this way: if white people in the U.S. want to know what it’s like being black in the U.S., then of course being black is a merit, because only someone who is black has the relevant knowledge and understanding of what being black in the U.S. is like. Which means, I think, that even the people who proclaim that meritocracy is bad don’t really believe that meritocracy is categorically bad.

The second way people mean that meritocracy is bad, then, is that we have a bad implementation and/or execution of meritocracy. Usually, this would mean that we are counting things as meritorious that should not be counted, while not counting things that should be counted; or that certain measures of merit don’t take important context into account (for example, whether the applicant for a job was born to a poor or rich family – in other words, whether the ‘race’ was run with everyone starting from the same position). The implication here, from what I can tell, seems to be that the criteria ought to be based on things that the detractor of meritocracy finds important. And it isn’t just the left that thinks this, as the current anti-DEI crusade seems to take it as an assumption that being non-white and non-male is a demerit since, the argument seems to go, such people could only ever gain their position through social coercion (something like “give me this job or you’re sexist”) or through law (for example, things like affirmative action or quotas).

Now I tend to agree that meritocracy is imperfectly implemented and executed, but I also think that meritocracy of some sort is the ideal worth striving for. Almost certainly there are ways that meritocracy ought to be adjusted. In science, for instance, a prominent criterion of someone’s merit is impact, which is frequently measured by how often they are cited by other scientists – most famously via the H-index, defined as the largest number h such that a researcher has h papers each cited at least h times. The H-index makes a certain amount of sense. The raw number of publications someone has can be inflated by pumping out a bunch of unimportant papers through paper mills, or simply by living a long time and publishing over many decades, even if none of the papers are very important to the field. But if other people are citing your work, then that, at least hypothetically, indicates that your work is important to the field, and it’s not (at least ideally, though coercive citation can occur) something that a person can manipulate. There are plenty of problems with the H-index which I won’t go into in depth here, except to say that it does not always end up demonstrating someone’s impact in their field.
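The H-index definition is simple enough to compute in a few lines. Here is a minimal sketch in Python (the function name and the sample citation counts are my own, purely illustrative):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    h = 0
    # Sort citation counts from highest to lowest, then find the last rank i
    # at which the i-th paper still has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have >= 4 citations,
# but not five papers with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))   # → 4

# One blockbuster paper doesn't help much: h = 3 here despite the 25 citations.
print(h_index([25, 8, 5, 3, 3]))   # → 3
```

Note how the second example illustrates the metric’s intent: a single highly cited paper cannot inflate the score the way a body of consistently cited work can.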

What tends to happen when there are clear and quantifiable criteria for what counts as merit is that satisfying these criteria becomes the goal, rather than attaining expertise. This is summed up in Goodhart’s Law, which says “When a measure becomes a target, it ceases to be a good measure.” The result is that, if increasing one’s H-index becomes the goal, then doing science is only an intermediate goal in the pursuit of increasing H-index and so doing good science is only of secondary importance.

As an interesting aside, problems like this crop up in AI research, where the phenomenon is called reward hacking. When training an AI, certain reward functions need to be defined and implemented, and so the AI gets good at satisfying those reward functions. But this does not always mean that the AI is doing the thing the researcher wanted. For instance, a self-driving vehicle can be trained by giving it “rewards” for passing through certain checkpoints in a training course it’s driving. So, why not just drive in a tight circle around a single checkpoint in order to keep getting the “reward” for passing that checkpoint? This is similar to what it’s like for a scientist attempting to increase their H-index, where people search out ways of doing this without having to do the hard work of real, actual science.
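The checkpoint example can be sketched as a toy simulation. Everything here – the checkpoint positions, the naive reward function, the two trajectories – is hypothetical and vastly simplified compared to a real reinforcement-learning setup, but it shows how a reward that merely counts checkpoint visits can be maximized without ever finishing the course:

```python
# Toy illustration of reward hacking (hypothetical setup, not a real RL environment).
CHECKPOINTS = {1, 2, 3, 4, 5}  # positions along a 5-checkpoint course

def total_reward(positions):
    """Naive reward: +1 every time step the agent sits on any checkpoint."""
    return sum(1 for p in positions if p in CHECKPOINTS)

# Intended behavior: drive the course once, passing each checkpoint in order.
honest_run = [0, 1, 2, 3, 4, 5]

# The hack: shuttle back and forth over checkpoint 1 and never finish.
looping_run = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]

print(total_reward(honest_run))    # → 5
print(total_reward(looping_run))   # → 6 (more reward, course never completed)
```

Given enough time steps, the looping agent’s reward grows without bound while the honest run’s reward is capped at five, which is exactly why the naive metric fails as a proxy for “drive the course well.”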

Meritocracy, and the systems of credentials and licensure used to demonstrate one’s merit in a given field, is certainly imperfectly implemented and executed, keeps people out of domains in which they might deserve to be and vice versa, and is prone to creating bad incentives. However, throwing it out would be infeasible in our modern global society where we depend on untold numbers of strangers every day, and unwise at the smaller scale. When we go over a bridge, we want to know that the engineers were good at designing it and the construction workers understood how to build it. When we go to an auto repair shop, we need to know that the mechanics have some level of proficiency in fixing cars. When we call a plumber, we want to know that the plumber knows how to fix and install pipes. When we go to the doctor, we want to know that the doctor understands physiology and medicine and can competently treat our ailments. You will be hard pressed to find someone who, when they or a loved one is diagnosed with cancer, doesn’t want the best doctors money can buy, instead saying “meritocracy is bad” and then going out and finding a random person off the street to treat them.

The Scientific Method

The second major way of maintaining trust is through the methodologies and institutions of science. Ideally, science is something accessible to anyone, regardless of background. In science, the data are supposed to speak for themselves. You don’t have to take the word of the scientist for it, you just need to understand the data. People might lie or be influenced by biases or money or other such things, but data (ideally) don’t lie. Science is of course not a straight path toward the truth. It might get stuck in a lot of misguided dead-ends, the argument goes, but on a long enough timescale these mistaken assumptions will be exposed and corrected, thus righting the course. The strength of this approach is why many fields outside the physical sciences have adopted scientific practices.

As with meritocracy, science is far from perfect. There will always be unquestioned assumptions: methodological materialism and reductionism, scientific realism vs. anti-realism, the reliability of our sense organs, and so much more.

Science is also not completely value-neutral. For instance, there should be constraints on what science is allowed to do: we would not want to live in a world where scientists can experiment on humans without informed consent or regard for their safety. But more than that, scientists must decide what is worth pursuing, a decision often influenced by other incentives like funding, cultural assumptions, personal pride, spectacle (will this make a big splash?), and so on.

In the end, science is just human beings. Experiments are designed, conducted, and interpreted by people. Theories are proposed, analyzed, criticized, and refined by people. Human beings are lousy with cognitive biases and neuroses, and are driven by a complex array of incentives, both good (like curiosity) and bad (such as money and career advancement). The esteem of peers can become a perverse form of merit, where someone merely being well-known is interpreted as greater expertise.

The institution of science is designed to safeguard against these imperfections and flatten out inconsistencies in the long run as new data either confirm or falsify previous results, but it is inescapable that science is an all too human endeavor. This is why Max Planck observed that “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it …” (Planck, Scientific Autobiography and Other Papers, 1950, p. 33). Or, more pithily, science progresses one funeral at a time.

This also brings us back to the problem of testimony. Scientists cannot function if they cannot trust each other. Likewise, policymakers and the public must be able to trust scientists for society as a whole to function. This becomes especially pressing as science plunges into ever greater specialization, requiring vast amounts of prerequisite knowledge to understand and analyze, and ever more expensive and sophisticated instruments – each demanding ever greater expertise to operate – in order to perform experiments.

A sort of converse to the success of scientific practices is the rise of pseudo-intellectualism, which cloaks itself in the mantle of science without observing the standards and rigor that actually lend science its credibility. The goal is to take on the authority of science – to associate with the track record of trustworthiness science has earned over centuries – while maintaining one’s preferred beliefs. For instance, creationists want the story of Genesis to be true, yet the only reason to believe the Genesis account of events is that it was written down by anonymous Biblical authors over two thousand years ago. Thus, creationists seek to borrow some of the trust people have in science by inventing so-called “creation science” (or intelligent design) to give their presupposed conclusions a veneer of scientific legitimacy. This type of credibility theft can also be observed in flat earth, electric universe, quantum woo, astrology, anti-vax, and other such realms of pseudoscience.

When the Experts Lose Trust

But what happens when the experts are wrong? Or certain bad actors have a financial or social incentive to cast doubt on experts? This leads to a loss of trust in expertise. This, of course, mainly applies to scientists and academics and bureaucrats and the news media, as most people in the western world still maintain trust in more everyday things, like vehicle manufacturers and civil engineers.

From what I can tell, there are three major ways in which an expert can be judged to be wrong: the expert is mistaken, lying, or misunderstood.

The first case is that the expert is simply mistaken. Or they could have failed to take something into account in their models or analysis. Or the expert is leaving out information (perhaps due to time constraints in conveying information) or is missing some level of nuance and context. A problem that comes with this is that it can lead people to think that the expert doesn’t actually know what they’re talking about, leading to a loss of trust in the expert (and in expertise in general). This is why some experts can end up doing a lot of hedging and qualifying their statements, but this can also result in people concluding that the expert simply doesn’t know what they’re talking about, or that the information they are conveying is not all that useful (just think of the hack joke that “last week X is good for us and this week X is bad for us, who knows what it’ll be by next week!”).

The second is that the expert is caught consciously lying or unconsciously saying untrue things. The former is commonly known as disinformation, the latter as misinformation. This is perhaps the most pernicious of ways that an expert can be wrong and is the one that causes the most outrage, and rightfully so. But, as with all three ways an expert can be judged as untrustworthy, it also tends to make expertise itself appear untrustworthy.

The third is that others are lying or are misinformed about the expert, or otherwise do not understand (or do not wish to understand) what the expert is saying. It could be because they’ve heard from some detractor that this expert is untrustworthy. This has the interesting and somewhat ironic twist that it amounts to simply trusting a different person’s testimony about the expert – we still have not escaped (and will never escape) the problem of testimony. It could also simply be that the person hearing the expert has misunderstood what the expert said, or that the expert is saying something that the hearer has a negative emotional reaction towards and so is disposed to disbelieve the expert. It could also just come from simple miscommunication on the expert’s part – not all experts are good at conveying information to a lay audience, and so might misspeak, be less clear than is ideal, or even allow certain unconscious assumptions to creep into their message. And so, in this third way an expert can be deemed untrustworthy, it is not that the expert is wrong or lying, but that people have judged them to be for whatever reason. There may be legitimate reasons for this if, for instance, this particular expert has a track record of lying or being wrong in the past (even if they are not lying or wrong in this instance). This is probably less pernicious in an absolute sense than the second way, but made all the more pernicious by virtue of being the most common way (in my estimation) that experts are judged to be untrustworthy.

Regardless of how or why people lose trust in particular experts, the upshot is that people tend to lose trust in expertise in general (Ortiz-Ospina 2024; Wikipedia). This results in people either succumbing to the siren song of pseudo-intellectualism – bullshit wrapped up to look like expertise.

Popular pseudo-intellectuals Jordan Peterson and Terrence Howard

Or simply resigning themselves to anti-intellectualism – decrying any and all expertise, perhaps in favor of some self-serving conception of “common sense.”

Popular anti-intellectuals Andrew Tate and Candace Owens

People might see the experts being wrong or outright lying, or might themselves misunderstand the expert, and as a result decide that someone else (who is less qualified) is more trustworthy. If expertise is viewed as categorically compromised, then credentials might be seen as demerits rather than merits – unless the expert buys into pseudo-intellectualism themselves (this inconsistency in what credentials mean for a person’s credibility tends to go unaddressed). Think, for instance, of people like Eric Weinstein or James Tour, who are legitimately intelligent and credentialed, which only makes their skepticism of the scientific establishment and criticism of widely accepted science seem all the more legitimate to their pseudo-intellectual acolytes. These sorts of grifters can also appeal to the popular underdog narrative, casting themselves and their acolytes as plucky rebels resisting the empire of the academy, twisting their deficiencies into a type of merit within pseudo-intellectual circles.

Demagoguery and deceit are also threats to trust in expertise. Bad actors can have a financial or social incentive to cast doubt on experts. This is illustrated by widespread skepticism of climate change (at least in the United States), where campaigns of disinformation and misinformation are being waged to sway public opinion and bribe politicians to vote against initiatives to curb emissions and switch over to renewable energy.

As somewhat of an aside, a modern and increasingly pressing issue for trust is artificial intelligence. This is an enormous topic unto itself that I will not cover here, suffice it to say that the spread of misinformation and disinformation is only going to accelerate in the years to come.

Regardless of how people react to their loss of trust in expertise, they are not completely off base in being skeptical of experts. Experts are human beings working within a given political and economic framework, and are just as disposed to biases and vices and perverse incentives as anyone else. Yet, the conundrum is that we live in a world where we have to trust other people for society to function. So, what’s the answer to this dilemma – the dilemma that we must trust people who are not always going to be trustworthy (or, at least, that we cannot always know if they are trustworthy)?

Possible Solutions?

While there will be no silver bullet, what might some potential solutions to this problem look like? One is to help the public become more scientifically literate or media literate (media literacy here meaning the ability to be discerning in what media one consumes, to tell when someone is trying to sell you something, to analyze and verify claims made by media personalities, and so on). People like me, who professionally teach and tutor chemistry and biochemistry to college students, and occasionally make videos about science and philosophy on YouTube or this blog, attempt to do just that.

But then, of course, the problem of testimony once again rears its ugly head – how do you know you can trust me? Besides, being literate in these domains does not guarantee that someone is immune to being deceived or misled. Just look at the pseudo-intellectuals who come to prominence, some of whom do, in fact, have legitimate credentials and expertise. There is also just the simple fact that many people don’t have the time or inclination to gain some base level of familiarity in these areas (pseudo-intellectuals often count on the fact that their audience doesn’t understand even the fundamentals in the relevant field).

Things like large language models and further developments in AI will probably make this problem worse as attention spans diminish and more of our cognitive load is handed off to computers. We’ll then have the problem of AI testimony to grapple with (and indeed we already do, with so-called hallucinations, or with the companies that operate the LLMs installing biases).

Left: the AI chatbot Grok (X aka Twitter) was programmed for a time to constantly bring up the (untrue) “white genocide” in South Africa.
Right: the AI chatbot Gemini (Google) was programmed for a time to insert non-white people more often, even in places where it was not appropriate or accurate.

Yet, despite these problems, I would contend (perhaps because I’m biased and my own livelihood depends on it) that greater literacy in the sciences and media is a necessary, though likely not sufficient, condition for us to regain a high-trust society.

The other possible solution is to ensure that the experts are more trustworthy. Just yelling at them to stop lying, or to stop being wrong, is obviously not going to work (at least in the long run). There would need to be some kind of change in incentives and greater forms of accountability. Improving or replacing the metrics of meritocracy (like the H-index) is one potential avenue to explore in changing incentive structures. Organizations like Retraction Watch are one way to hold scientists more accountable. Plenty of other approaches might work as well, but there isn’t the time to discuss them all in depth in this article.

But what about in domains outside the hard sciences, such as news media or online influencers? There would have to be some kind of monetary incentive to change this behavior – right now, outrage is much more lucrative. This, to me, seems like an even more intractable problem, since there is even less incentive to actually change the current business model. I’m also less qualified to speak on possible solutions to this problem.

My own pessimism, and perhaps lack of imagination, has me hard-pressed to propose many actionable solutions for our low-trust society. Nor do I predict conditions will change for the better anytime soon (and in fact are likely to get worse, especially with the advent of LLMs). But there is still a minuscule remnant of idealism, buried deep beneath my copious layers of cynicism and fatalism, that stubbornly clings onto hope that some steps can be taken in delivering humanity into a higher trust society.