
Metamodernism: The Future of Theory by Jason Ananda Josephson Storm, University of Chicago Press; First edition (July 20, 2021), 374 pages
Introduction
Jason Ananda Josephson Storm (hereafter referred to as Storm) is a professor of Religion at Williams College in Williamstown, Massachusetts. Storm’s interests vary widely, but this particular book is concerned with the overall project of the humanities and social sciences – what Storm calls the human sciences – in light of postmodernism. Storm wants to acknowledge, and even accept, the criticisms and deconstructions levied by postmodernism and poststructuralism, but then move past that to something more fruitful. This, Storm says, is what is dubbed metamodernism – a sort of dialectical synthesis of modernism and postmodernism. And so, Storm says of this book:
This is a book for people who are sick of “Theory.” People who are tired of gratuitous namedropping; anti-authoritarian arguments from authority; shallow insights masked in obscurantism; self-loathing humans claiming to represent the agency of microbes; Americanization masquerading as diversification; and most crucially, theory that is merely jargon overlaid on predetermined political judgements. This a book [sic] for people who wish that more scholars in the humanities and social sciences chose theories based on empirical adequacy instead of prior ideological commitments. For those people, this book provides a set of empirically testable theories in the philosophy of the human sciences. If taken seriously, it will provide new methodological openings that should lead to fresh scholarly inquiries into the nature of our human and nonhuman environment.
Additionally, the author says:
This is also a book for people who are content with the current fragmented theoretical landscape – people who shrug their shoulders at the question of where theory (or philosophy) goes from here; people who are not kept up at night by the putative impasse between modernism and postmodernism; people who perhaps even feel liberated from the need to think about questions of epistemology since a broader shift toward phenomenology opens up space for them to focus on questions of identity and politics in ethically motivated subdisciplines. This work draws on and allies itself with current works of feminist theory, critical Black studies, postcolonial theory, science studies, queer theory, and environmental studies. Indeed, it can help further many of the ambitions behind these movements… This work thus proffers a re-theorizing of the social sphere, including a fresh theory of the formation of social categories (applicable to race, religion, gender, art, and so on).
Also:
In addition, this book provides a big picture in which various theoretical subdisciplines can situate themselves. Given that postmodernism and allied theories are waning, metamodernism aims to provide a new grand synthesis.
The author says that postmodernism, poststructuralism, “French theory”, and deconstruction are all often used to refer to the same school of thought. As such, Storm will use postmodernism for all of them (although deconstruction is sometimes used for what postmodernism does). What characterizes postmodern thought, according to the author, is
- antirealism (e.g., that scientific theories don’t refer to anything that actually exists)
- an emphasis on endings, which often includes disciplinary autocritique
- an extreme “linguistic turn” that views the world in terms of texts (death of the author, arbitrariness of the sign, power of discourse, concepts socially constructed by language)
- general skepticism (e.g., of metanarratives)
- ethical relativism/nihilism
The author points out that many of these things existed before the golden age of postmodernism in the 1970’s and 1980’s, pointing, for instance, to Rudolf Carnap and the logical positivists. Storm says that metamodernism is an “anti-system” that examines other systems for where they break down and, in doing so, takes its own obsolescence into consideration. What metamodernism will do is: “by working through each of them seriously and dialectically produces [sic] something new” which will be:
- metarealism
- process social ontology and social kinds
- hylosemiotics
- Zeteticism [and abductive reasoning]
- a revaluation of values converging on Revolutionary Happiness
Storm, broadly speaking, makes the following arguments:
- The critiques and deconstructions of postmodernism are all valid and true to some degree, but languishing in epistemological and ontological anarchy is unproductive and untenable
- To get past the devastation that postmodernism has caused to the human sciences, these disciplines need to switch from a substance (in the Aristotelian sense) view of natural kinds and reorient toward processes (what Storm calls process social kinds)
- This “process turn” can be achieved with a theory of language that focuses on homeostatic property clusters. The theory that Storm proposes as a candidate is hylosemiotics
- With a focus on process social kinds, the human sciences can be better served by using abductive reasoning as opposed to deductive or inductive reasoning
- Instead of trying to eradicate values from the human sciences in the impossible pursuit of neutrality and objectivity, we should instead recognize where values inevitably enter our scholarship and turn them toward human flourishing
These things are addressed in the following 8 chapters, split into four parts. The first part contains only chapter 1 and examines the realism vs antirealism debate between modernism and postmodernism. Part II contains the next three chapters and covers Storm’s theory of social kinds. Part III contains one chapter and covers Storm’s theory of language (hylosemiotics). Part IV contains three chapters and covers an approach to epistemology (Zeteticism) and a consideration of ethics and moral relativism.
As usual, keep in mind that this is 1) a summary and 2) interpreted by me, and so a lot of the nuance and supporting arguments for claims will be significantly truncated. Thus, this summary and review is not to be taken as a substitute for reading the book; it will work as a supplement and perhaps to whet your appetite for actually reading the book. Additionally, while the book is 374 pages, the actual body of the text is only 285 (and it’s in fairly small print), with the remainder being notes and references. As such, if one wished to dig into every reference from this book, one could easily make the reading much, much longer.
Chapter 1: How the Real World Became a Fable, or the Realities of Social Construction
In chapter 1 (which follows after “Opening” as kind of a chapter 0), Storm addresses the debate between realism and antirealism, where modernism is often conceived of as falling in the former camp while postmodernism is thought to inhabit the latter. Storm argues that this is a misreading of postmodernism and that the realism and antirealism positions are internally incoherent anyway. Storm thus wants to move past these ideological camps toward metarealism (the title of Part I).
First Storm examines what is meant when people talk about mind-dependent (socially constructed) things. Mind-dependence tends to emphasize group instead of individual construction and the contingency of the construction (i.e., “X” might not have existed if not for the social forces). Storm gives the following four ways that mind-dependence can be conceived:
- Ontologically Mind-Dependent: qualia, imaginary friends, money (value is dependent on minds)
- Causally Mind-Dependent: things built by humans (on purpose (motorcycle) or inadvertent (ozone depletion))
- Classificatorily Mind-Dependent: grouping things into categories; how classifications are made is subjective, but that a classification is made or exists is objective
- Universal Mind-Dependence: the entire external world is in the mind (idealism, e.g., George Berkeley); detractors often conflate this with a sort of voluntarist idealism (e.g., if everyone stopped believing in gravity, it would cease to exist)
The author notes that idealism (anti-realism) and realism are more similar than different insofar as both claim that people have direct access to what actually exists; idealists simply say that there is no intermediate step by which some material “out there” (i.e., “a second order real world behind that of appearance”) must cause a sensation, but that material is the sensation (“to say that something is matter is just to say that it has an appearance.”)
Storm goes on to say that postmodernists are accused of being anti-realists because of the notion that language is indefinitely deferred (the poststructuralist notion that words only ever reference other words – what Storm calls “methodological suspension of linguistic reference”) and does not have a referent “out there” in the real world. The author says that postmodern deconstructionists did not deny the external world. For instance:
- Derrida: “the other is ‘the real thing'” that disrupts discourse
- Lacan: made the Real part of his tripartite registers along with the Imaginary and the Symbolic – said “the Real” is “that which resists symbolization absolutely.”
The author calls this “traumatic realism” in that the Real is glimpsed when discourse breaks down and that the Real troubles discourse itself.
Storm says that “apocalyptic realism” – the notion of realism that the Real refers to what would continue to exist if humans ceased to exist (hence why it is apocalyptic) – is not suitable for the “human sciences” (humanities and social sciences) since by definition their domains of inquiry are those things that depend on human existence in order to be real. Thus, the charge that something is “socially constructed” cannot be construed as saying that something isn’t real (e.g., race – “But when thinkers like [Ashley] Montagu argue that ‘race’ is socially constructed, they are suggesting it is a product of culture not biology [sic]. The unstated premise that culture is in some sense illusory is a clue.”).
Saying that something is socially constructed, Storm argues, does not therefore have any bearing on whether it’s real. Storm uses the example of the Satanic ritual abuse panic of the 1980’s being both socially constructed but also not real, while LaVeyan Satanism is socially constructed yet also real.
Storm argues that “real” is contrastive insofar as something is “real” in that it is not not-real. Thus, being “real” doesn’t say anything about the thing in question unless it is contrasted with the not-real: with what is illusory, fictitious, fraudulent, etc. To say that something is not-real can mean a number of different things:
- It is a hallucination or optical illusion
- It is a simulacrum (like a statue of someone or a forgery of something)
- It is a look-alike, impersonation, misidentification, or resemblance
- It is a dream (that someone dreamed is real, but the events within the dream didn’t happen in waking life)
Furthermore, Storm says we need to distinguish between something being real and something existing. Does exist mean made of material, like a house or a rock? Then what does it mean that the Catholic Church exists (is it a set of buildings and people? A set of ideas held by some set of people? And if so, in what way does a set exist?). Additionally, something can be real but non-existent, like fictional characters, numbers, absences (shadows, silences, holes), destroyed objects, and dead people. Existence itself can have different modes, such as the existence of the present moment compared to the existence of some physical object; or the existence of absences, which exist in a parasitic mode (they depend on the existence of other things for their existence).
There is also the issue, the author says, that in English the word being has a double meaning that other languages like German, French, and Japanese distinguish into two different words. To be can be a predicate, represented by the copula is. A proposition like “the sun is hot” could be analyzed as “object X being hot and being the sun is true”. There is also the way that being, in English, is an existential: that the sun is means that the sun exists. But we can predicate things of non-existent subjects without committing ourselves to their actual existence, such as “the current king of France is bald.”
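That last example is Bertrand Russell’s famous one, and his theory of descriptions – my addition here, not something Storm walks through – makes the two senses of “is” explicit by analyzing the definite description away:

```latex
% Russell's analysis of "the current king of France is bald", where
% K(x) = "x is currently king of France" and B(x) = "x is bald".
% Existence is carried by the quantifier, predication by B(x), so the
% sentence comes out false (not meaningless) when nothing satisfies K.
\exists x \,\bigl( K(x) \land \forall y\,(K(y) \rightarrow y = x) \land B(x) \bigr)
```

On this analysis, the existential work is done by the quantifier rather than the copula, which is one way of regimenting the double duty that the English “is” performs.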
And so, instead of asking if something is “real” people should ask: “if a specific social phenomenon is real, [then] what is it real in respect to?” This is how the author synthesizes so-called “metarealism” from the incoherent realism vs antirealism dichotomy: metarealism is the recognition that realism comes in modes and that it only makes sense in light of a contrast class.
Among these modes of realism I would offer another: cultural-amnesia realism. This is something I talked about in my post “Is Sex a Social Construct?” The idea is that the real is what would remain, or be recapitulated, if tomorrow everyone got up in the morning with all cultural knowledge forgotten. The thought experiment could be done with various things included or excluded from the realm of cultural knowledge, but for now let’s assume an intuitive, commonsense definition of what cultural knowledge entails. Thus, we could include things like the role of men and women in society, or indeed the very categories “man” and “woman”. Surely, there are culturally contingent things, such as how a person wears their hair and clothes. But the question is: would the categories of “man” and “woman” be recapitulated by our amnesic selves? And if so, how closely would those categories map onto our current categories of “man” and “woman”? If sex is a social construct, then the answer to whether such categories would be recapitulated by our amnesic selves would have to be no; but if sex is something that is culturally-amnesically real, then the categories of “man” and “woman” would not only be recapitulated by our amnesic selves, but they would be drawn along almost exactly the same lines that these categories are currently instantiated.
Let’s set aside that actually performing an experiment like this would be impossible, or at least radically unethical. The question, then, is why we ought to accept this cultural-amnesia realism. One reason, as I argued in the post linked above, is as a pragmatist epistemology: maintaining categories that are culturally-amnesically real provides us with a means of making more accurate predictions about things. For biological sex, this has to do with things like who can (or should be able to) become pregnant, or what sorts of ailments one is more likely to face (breast cancer, ovarian cancer, prostate cancer, osteoporosis, male-pattern balding, etc.).
Another reason is to avoid bad predictions. The doctrine of equity – equal outcomes instead of equal opportunity – would predict that, if all sexism was abolished, then there would be a roughly equal number of men and women in every position in society (and conversely that any deviation from this parity must be a result of sexism). But if differences in preferences along lines of biological sex are culturally-amnesically real, then things like career choice disparities among the sexes would have (at least) some amount of biological determination (i.e., if everyone woke up without any cultural knowledge, men and women would still demonstrate a difference in the distribution of preferred activities among the two populations), and so policies that evaluate success based on equal outcome of the sexes would be misguided.
Additionally, if cultural-amnesia realism was accepted as one legitimate definition of what is real, then scientific inquiry into the biological underpinnings of gender (subjective self-identity) would be less taboo. This would not just be a call to the radical Queer Theorists that dismiss biological sex (and any science around it) to reinterpret their position on the biological realism of sex, but also to the social conservatives who dismiss transgender people as being “not real” in some way (i.e., arguing that transgender people are perverted in some way and behave as the opposite sex as a sort of paraphilia, or that they are mentally ill and delusional, or even that they are evil and motivated by the corruption of the culture). Surely, if being transgender is culturally-amnesically real, then conservatives would be forced to accept it as a real phenomenon, even if they disagreed on how society ought to treat transgender people (not just medically, but in policy and social interactions).
We might also apply this to scientific theories. If tomorrow the entire world awakened with all knowledge of biological taxonomies, for instance, completely erased from memory, it is likely that humans would reconstruct the categories in very much the same way that they are constructed now (especially given DNA relatedness). Likewise, if Einstein’s theories of relativity were forgotten, but all of the means of measuring their effects still existed, humans would likely recapitulate the theory in a way that is the same in all important ways, even if, say, the metaphor of a bowling ball on a trampoline representing the curvature of spacetime in response to matter would not become a part of the popular pedagogy for general relativity.
One could also consider the idea of electric charge, where the assignment of “positive” and “negative” is a sort of socially constructed nominalism – which sign is assigned to which kind of charge is an arbitrary accident of history. Yet, if this assignment was forgotten, humans would still recapitulate some way of determining the two properties of charge. What would likely not be recapitulated, however, would be the pseudoscience surrounding it, such as people wanting “positive ions” to heal themselves or whatever passes for science in the holistic healing community. In other words, the analogical or synonymous meanings of the words “positive” and “negative” (as assigned to electric charge) within our culture may disappear to our amnesic selves and are therefore not real in the sense of not mapping onto reality “out there”, even though the names “positive” and “negative” refer to real phenomena.
In the end, what I mean to convey here is that concepts do map onto reality “out there” in some important way (even if the only importance is pragmatism).
Chapter 2: Concepts in Disintegration & Strategies for Demolition
Storm illustrates how analytic philosophy and continental philosophy, although often viewed as being at odds, agree on one thing: the failure to adequately define our concepts. Both agree that no concept has necessary and sufficient conditions that define it in a way that is universally agreed upon and is neither too strong (leaves out things that ought to be included) nor too weak (allows in things that ought to be excluded). Think of the concepts of religion, or art, or Wittgenstein’s famous explication of the concept of games.
“There is no characteristic that is common to everything that we call games; but we cannot on the other hand say that ‘game’ has several independent meanings like ‘bank’. It is a family-likeness term (pg 75, 118). Think of ball-games alone: some, like tennis, have a complicated system of rules; but there is a game which consists just in throwing the ball as high as one can, or the game which children play of throwing a ball and running after it. Some games are competitive, others not (pg 68). This thought was developed in a famous passage of the Philosophical Investigations in which Wittgenstein denied that there was any feature — such as entertainment, competitiveness, rule-guidedness, skill — which formed a common element in all games; instead we find a complicated network of similarities and relationships overlapping and criss-crossing. The concept of ‘game’ is extended as in spinning a thread we twist fibre on fibre. ‘What ties the ship to the wharf is a rope, and the rope consists of fibres, but it does not get its strength from any fibre which runs through it from one end to the other, but from the fact that there is a vast number of fibres overlapping’ (pi, i, 65–7; bb 87).
This feature of ‘game’ is one which Wittgenstein believed it shared with ‘language’, and this made it particularly appropriate to call particular mini-languages ‘language-games’. There were others. Most importantly, even though not all games have rules, the function of rules in many games has similarities with the function of rules in language (pg 63, 77). Language-games, like games, need have no external goal; they can be autonomous activities (pg 184; z 320).”
Storm examines the deconstruction of religion as a concept and of art as a concept as two case studies for how such deconstruction is done. For religion, Storm follows Wilfred Cantwell Smith’s The Meaning and End of Religion. Smith essentially says that the concept of religion wasn’t created until the Protestant Reformation and has since then taken on new meaning, and thus historians are being anachronistic when talking about religions in the past (the early Christians or Hindus would not have conceived of themselves as adhering to a religion as religions are now conceptualized). Furthermore, the concept of religion was created in Europe and then imposed on other cultures by European explorers and colonizers. Storm concludes the summary of Smith’s argument like this:
Finally, Smith argues, individuals and their social relations disappear behind the concept “religion” as part of a process of “reification” [the process of making something contingent appear natural or objective or “just how things are” (or even “just how things ought to be”)]. Religion is produced by means of a double abstraction: first, heterogeneous beliefs, practices, and institutions are subsumed under the concept of a particular religion, which is then imagined to be a repository of a coherence [that is] alien to it; second, a set of religions is incorporated into the category “religion” as such to prescribe what is purported to be a universal to humankind. Each phase of this movement results in the loss of resolution, the blurring of distinctions, and the compression of the diversity of elements. Thus, it encourages scholars to blur normative and empirical registers; by listing, for example, the “beliefs of Catholics,” they are in effect obscuring the difference between what Catholics ought to believe and what they do believe. Moreover, this process seems to be asymmetrical since the supposed features of “religion” as such (e.g., emphasis on faith, the transcendent, the divine, exclusivity) are largely an abbreviated Protestantism that does not really apply well to other “religions.” In sum, religion is an abstraction of an abstraction, and each phase in this process results in an un-recouped remainder.
The concept of religion, Storm says, is a secularization of religion. By packaging all these disparate things together under the concept of religion, it allows people to view religion from the outside, from a place not within the beliefs, and therefore makes the various beliefs into something optional. This is somewhat of a tricky notion, but I might think about it this way: if we have no concept of air, then we don’t have an opposing concept of not-air or vacuum. Without the concept of air, there isn’t something in which we are constantly submerged, but which we could also, in principle, step out of; the air just is. Likewise, if there is no concept of religion, then the beliefs one has just are, because there is no not-belief to oppose it. At the same time, this secularization arises from a religious relativism, where all religious beliefs are viewed as equally valid and other things are defined into the realm of religion (think of the charge that people have “faith in science” or even people calling Critical Race Theory a religion (myself included)).
Thus, religion, according to Storm, lacks an agreed-upon intension and extension – respectively, the meaning or definition of religion (the Fregean sense) and the objects or specifics referred to by the concept of religion (the Fregean reference). A book from 200 years ago that mentions religion would be using the word in a different way than people use it nowadays. When we read it, however, we will import our own biases and anachronisms about what religion means. Likewise, the imposition of the concept of religion onto peoples who had previously not possessed such a concept has altered the way in which they conceptualize their own beliefs and therefore altered the very discourse of religion (adding to the evolution of the concept’s meaning).
Art undergoes a deconstruction on primarily three fronts: the definitional, the historicist, and the relativist/avant-garde. The definitional, given most famously by Morris Weitz in “The Role of Theory in Aesthetics”, points out that there are no necessary and sufficient conditions for what could be included as a work of art that wouldn’t be too strong or too weak. This deconstruction concludes that the concept of art is an “open concept” in that it is always mutating and expanding to accommodate the ways that artists push and transgress the notion of art. The historicist deconstruction shows that the grouping of painting, sculpture, music, poetry, dance, and architecture under a single concept of art was an invention of 18th-century Europeans, and therefore is culturally (and temporally) contingent. It is through this historically contingent conceptualization of art that the term is often used normatively – when someone says “that’s not art” what they mean is that it is not pleasing to them in the right way, not that it fails an impartial judgement about whether the work satisfies the necessary and sufficient conditions for being art; conversely, when someone says “now that’s art” what they mean is that it does please them in the right way. The relativist/avant-garde (or Weberian) deconstruction says that calling something art imbues it with a sort of “transcendence” above everyday things (think of Marcel Duchamp’s Fountain, which was just a urinal, but by calling it art it was elevated above a typical urinal). The avant-garde’s attempt to reunify art and everyday life then had to dissolve both concepts.
After these two case studies, Storm then gives a step-by-step guide for how to deconstruct anything using the Immanent Critique, the Relativizing Critique, or the Ethical Critique:
Immanent Critique:
- Collect competing definitions of the concept and show that they are incompatible
- Expose the internal contradictions in the concept by demonstrating how any set of given necessary and sufficient conditions are either too strong or too weak
- Disaggregate the concept by recasting it as defined by family resemblance or as an “open concept,” but then demonstrate that family resemblance is circular (resemblance to what? The concept?) and that the “open concept” essentially makes the concept vacuous
- Collapse the implicit binary: if a concept is defined by what it is not, then show that this is untenable (e.g., the “culture” vs “nature” concepts have many overlaps, and so “culture” cannot be defined in contrast to “nature”)
- Introduce nominalist skepticism: the objects of inquiry are names but do not exist – “There is no such thing as ‘the economy,’ which cannot be studied as such. There are only…millions of irreducibly distinct instances of production and exchange…”
Relativizing Critique:
- Historicize: show the shifts in meaning of the concept over time and space
- Genealogical Critique: show that the concept was created to oppress a certain people and/or to uphold the power of a certain people
- Relativize the cultural context: show that other cultures do not have that concept, or that the concepts in other cultures are unable to be translated into the language being used
Ethical Critique: show that the inclusion or exclusion of things into the concept is motivated by (personal, social, cultural, political) values
Storm makes the point that although this deconstructing process has been routinized, that does not make it any less serious – these methods of deconstruction point to real philosophical problems. Additionally, these different techniques can be mixed and matched and some may apply to one field of inquiry but not another.
One way that people have tried to address these issues of fuzzy definitions within our concepts is through family resemblance/prototypes, and polythetic classification (a group of characteristics that need not all be present, but when enough of them are, the thing falls under the concept; thus, one thing might have properties p1 and p2 while another has p2 and p3 but both fall under the same concept).
Prototypes are, cognitive scientists have discovered, the way that people actually think about concepts: by comparing things to a prototypical exemplar of a category. This is why, for example, a sparrow seems more like a prototypical bird than a penguin (if someone asked you to draw a bird, you probably wouldn’t draw a penguin unless you were trying to show how quirky you are) even though both are definitely birds.
Storm points out, however, that family resemblance runs into problems. One issue is composition: goldfish don’t seem like the prototypical fish or the prototypical pet, but they seem like the prototypical pet fish. Thus, if prototypes were a stand-in for meaning, knowing what “pet” and “fish” are would not tell you what “pet fish” is. Additionally, the prototype view doesn’t tell you how the prototype was inducted into the category.
There is also the problem that two things could have many polythetic similarities and yet clearly not be in the same category (Storm gives the example of a skull and the moon sharing some common characteristics). Indeed, Storm says, one could find similarities between any two things such that every category, if left as an “open concept,” could eventually include everything in existence. Nor would this view tell us which characteristics are significant or defining for a certain prototype, nor which ones should be considered important in different situations (is the shape, the weight, or the color of some object important in a given situation?). And in the end, Storm notes, family resemblance presupposes similarities and then goes looking for them, such as the family resemblance of urinals to toilets and other bathroom fixtures when in the bathroom, but then the urinal to works of art when in an art museum (e.g., Duchamp’s Fountain).
Polythetic classification, Storm argues, also has problems. Polythetic classes can often be broken down into monothetic classes: for example, if we have a class defined by properties p1, p2, p3, and p4, and some object S1 in the category has properties p1 and p2 while another object S2 has p3 and p4, then in what way would these two things be in the same class? We would also have an exponentially expanding hierarchy of polythetic classes, since each property would itself invoke concepts that themselves would need to be polythetic classes, which themselves would contain concepts that are polythetic classes, and so on. There is also the issue of how to determine how many of the properties are required for inclusion into the category – if it has at least two? Maybe three? Polythetic classes also make it difficult to project the abstraction onto new things since none of the properties are generalizable to all members of the class. And finally, polythetic classification is once again presupposing a group and then looking for those properties that bind them together under a single category.
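To make the decomposition worry concrete, here is a minimal sketch (mine, not the book’s) of a polythetic rule in Python. With a four-property class and a two-property threshold, S1 and S2 both count as members while sharing no properties at all:

```python
# Toy polythetic classification: membership requires at least
# `threshold` of the class's properties, but no single property
# is necessary, and no single property is sufficient on its own.
CLASS_PROPERTIES = {"p1", "p2", "p3", "p4"}

def is_member(thing: set, threshold: int = 2) -> bool:
    return len(thing & CLASS_PROPERTIES) >= threshold

s1 = {"p1", "p2"}
s2 = {"p3", "p4"}

print(is_member(s1), is_member(s2))  # True True -- both are members
print(s1 & s2)                       # set() -- yet they share nothing
```

The threshold itself is exactly the arbitrariness Storm points to: nothing in the scheme tells us whether two shared properties should suffice, or three, or why.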
—
Storm says that the above routinized critiques can be applied to the physical sciences as well as to the human sciences. This may be true to some degree. However (and perhaps this is my bias as a scientist showing through), the physical sciences – physics, chemistry, biology, geology, astronomy, etc. – have something that the humanities and social sciences lack: predictive (and retrodictive) power. A scientific theory, though perhaps not a perfect mapping of concept onto reality, offers the ability to make predictions (and retrodictions) about future observations. Fields like religious studies or anthropology or the arts or economics or political science and so on are mostly descriptive and interpretive. Analyzing the concept of religion will not allow someone to make a prediction about the future of a particular religion or a retrodiction of what that religion was like in the past. Anthropology couldn’t predict a rite of passage for entering adulthood among a particular people even if it could perfectly describe their marriage and funerary ceremonies. The arts could never have predicted dadaism or yellowism (although in the arts, one could argue that unpredictability is a feature and not a bug). Economics and political science are notoriously bad at predicting what will happen to the economy, or the consequences of a new policy, or the effectiveness of diplomacy.
In science, however, physicists can predict new Standard Model particles decades before they’re observed; chemists could predict many of the chemical properties of elements that hadn’t yet been discovered; biologists can retrodict phylogeny in evolutionary theory and predict etiology from symptomology in medicine; geology can retrodict tectonic plate movements and thereby predict the presence of similar rock formations and fossils in what are now distant regions; astronomy is famously good at predicting things like eclipses, but now also has the predictive power of general relativity. The point is, regardless of whether a scientific theory posits real entities (e.g., quantum fields), or whether the word “science” has the same meaning now as it did in the past, or whether science has had tragic missteps (eugenics), and so on, the ability to make predictions means (1) that science simply works, in both a practical everyday sense and in a pragmatist epistemological sense, (2) science has a mechanism for self-correction when it does inevitably get things wrong beyond conceptual analysis and debating the necessary and sufficient conditions of what counts as a proper scientific theory, and (3) it is (in principle) reproducible, regardless of the epistemological “standpoint” of the experimenter.
—
Regardless, at the end of this chapter, it appears as if at least the humanities and social sciences have been deconstructed into oblivion. But Storm says that the deconstruction itself tells us something: because all these fields – religion, art, anthropology, sociology, economics, etc. – are vulnerable to the same critiques enumerated above, it tells us that there is something similar about the concepts used in all these fields. Thus, Storm believes, though the deconstruction is devastating, it is also important, even necessary; but also, there is a way forward through “the structure of the critiques themselves.”
Chapter 3: Process Social Ontology
Because of the collapse of major concepts in the human sciences under the withering assault of deconstruction, Theorists went about a program of anti-essentialism. To do this, they turned nouns into verb forms to emphasize that the things named were made that way rather than that the words describe something inherent. For instance:
- Minoritized for minorities
- Enslaved for slaves
- Male-identified for male
- Racialized for race
- Unhoused for homeless
- Standardized English for standard English
This project of anti-essentialism, Storm says, has led to two main issues: first, that anti-essentialism has become a strict orthodoxy, such that accusations of essentialism can be levied to silence people; and second, that anti-essentialism is a sort of universal solvent that dissolves every realm of inquiry exposed to it, thereby washing away any possibility of useful discourse, or even politics (e.g., if “women” don’t exist, then how can they be emancipated?). To work through this, Storm wants to bring process ontology and social ontology together into process social ontology.
Process ontology is characterized in the following way by the Stanford Encyclopedia of Philosophy:
Process philosophy opposes ‘substance metaphysics,’ the dominant research paradigm in the history of Western philosophy since Aristotle. Substance metaphysics proceeds from the intuition—first formulated by the pre-Socratic Greek philosopher Parmenides—that being should be thought of as simple, hence as internally undifferentiated and unchangeable. Substance metaphysicians recast this intuition as the claim that the primary units of reality (called “substances”) must be static—they must be what they are at any instant in time. In contrast to the substance-metaphysical snapshot view of reality, with its typical focus on eternalist being and on what there is, process philosophers analyze becoming and what is occurring as well as ways of occurring. In some process accounts, becoming is the mode of being common to the many kinds of occurrences or dynamic beings. Other process accounts hold that being is ongoing self-differentiation; on these accounts becoming is both the mode of being of different kinds of dynamic beings and the process that generates different kinds of dynamic beings. In order to develop a taxonomy of dynamic beings (types and modes of occurrences), processists replace the descriptive concepts of substance metaphysics with a set of new basic categories. Central among these is the notion of a basic entity that is individuated in terms of what it ‘does.’ This type of functionally individuated entity is often labeled ‘process’ in a technical sense of this term that does not coincide with our common-sense notion of a process. Some of the ‘processes’ postulated by process philosophers are—in agreement with our common-sense understanding of processes—temporal developments that can be analyzed as temporally structured sequences of stages of an occurrence, with each such stage being numerically and qualitatively different from any other. But some of the ‘processes’ that process philosophers operate with are not temporal developments in this sense—they are, for example, temporal but non-developmental occurrences like activities, or non-spatiotemporal happenings that realize themselves in a developmental fashion and thereby constitute the directionality of time. What holds for all dynamic entities labelled ‘processes,’ however, is that they occur—that they are somehow or other intimately connected to time, and often, though not necessarily, related to the directionality or the passage of time.
Social ontology is characterized in the following way by the Stanford Encyclopedia of Philosophy:
Social ontology is the study of the nature and properties of the social world. It is concerned with analyzing the various entities in the world that arise from social interaction.
A prominent topic in social ontology is the analysis of social groups. Do social groups exist at all? If so, what sorts of entities are they, and how are they created? Is a social group distinct from the collection of people who are its members, and if so, how is it different? What sorts of properties do social groups have? Can they have beliefs or intentions? Can they perform actions? And if so, what does it take for a group to believe, intend, or act?
Other entities investigated in social ontology include money, corporations, institutions, property, social classes, races, genders, artifacts, artworks, language, and law. It is difficult to delineate a precise scope for the field (see section 2.1).
…
Social ontology also addresses more basic questions about the nature of the social world. One set of questions pertains to the constituents, or building blocks, of social things in general. For instance, some theories argue that social entities are built out of the psychological states of individual people, while others argue that they are built out of actions, and yet others that they are built out of practices. Still other theories deny that a distinction can even be made between the social and the non-social.
A different set of questions pertains to how social categories are constructed or set up. Are social categories and kinds produced by our attitudes? By our language? Are they produced by causal patterns? And is there just one way social categories are set up, or are there many varieties of social construction?
From this Storm wants to show that if we think of our disciplinary categories – religion, culture, art, economy, gender, race, etc. – as process kinds (as opposed to natural kinds), this will allow us to work through the dissolution of these categories caused by the deconstructions from the previous chapter.
What the deconstructions demonstrate, Storm says, is that dynamic change is the norm and stability is what is in need of explanation. Inductive generalization fails because we have had things backwards, presuming that stability of a concept is the norm and that any change to it is what requires explanation. Memetic mutation and recombination, driven in large part by the fact that individuals and groups attach different meanings and connotations to words and concepts, is in fact the typical mode in which social categories exist.
Storm wants to compare natural kinds with process social kinds. To do this Storm first enumerates ten of the things often found in definitions of natural kinds:
- Essence: there are a bundle of “essential” properties that determine membership in the category
- Definability: one can clearly determine what things fall within the category and what things do not
- Necessity and Sufficiency: there are properties that something must have to be a member of the category and properties that, if all of them are mutually present, automatically inducts something as a member of the category
- Mind-Independence: the basis of classification exists in the structure of the world itself, not merely in human minds
- Intrinsicality: the properties that determine membership are possessed by the members intrinsically (if tigers are defined by having stripes, painting stripes on a lion does not make it a tiger)
- Microstructure: sharing of common “microstructures”
- Modal Necessity: members of the category possess the properties in all possible worlds
- Law of Nature: invariant laws of nature are said to apply to natural kinds
- No-Crosscutting: something cannot be a member of multiple natural kinds except in the case of nested hierarchies (e.g., being in the natural kind “mammal” and in the natural kind “dog” since the latter is subsumed under the former)
- Discoverability by Science: natural kinds can be discovered by science (they exist out there in the world to be discovered, not invented)
While these things work for some objects within physics and chemistry, Storm notes, they do not work in the human sciences. Meanwhile, process social kinds contain some of the following (provisional) properties:
- High Entropic: significant variation (regional and temporal), diverse instantiation, consistently changing, lacking in equilibrium, and few properties possessed by all members of a social kind
- Undefinable: no necessary and sufficient conditions that are not too strong or too weak
- Interdependent: the properties of social kinds arise due to interactions with other social kinds
- Crosscutting: objects and actions can be members of multiple social kinds (e.g., something can be both political and religious)
- Abstractions/Reifications: social kinds are abstractions and generalizations, but then through reification are often made to appear as if they are natural kinds
- Historically/Culturally Contingent: if history had been different, then the social kind would have been different in some important way; different cultures and languages will have different social kinds
- Normative: social kinds are often laden with values and moral dimensions
- Mind-Dependence: social kinds exist because of ongoing attitudes and beliefs
Thus, instead of the Aristotelian substance or static object view of categories (X is A in a timeless sense) we instead have processes (X does A, or X becomes A, or X became A, or X A‘s in the sense of a verb). This is why things like race have been turned into verbs like racialized – “…the object is just the continuous possibility of the activity” and “social objects, such as governments, money, and universities, are in fact just placeholders for patterns of activities” (John Searle as quoted by Storm). And so, Storm says, the real question shouldn’t be, for instance “is this object art?” but instead “when and how is this object art?”
Storm says that this process ontology could solve the riddle of Menander’s Chariot (the Indian version of the Ship of Theseus, as it is more commonly known in the West). In this version, we can think of a chariot, or a car to make it more modern, in a 100-lap race. Say that the car has to go to the pit stop every lap, wherein 1% of the car is exchanged for new parts. Each lap, a different 1% is exchanged such that by the end of the race, 100% of the car’s parts have been replaced. Is it still the same car as the one that began? If, like many people, you want to say yes, then consider this: what happens if all the original parts are then put back together into a car? Do we then have two of the same car?
This paradox is meant to get at the heart of essentialism: what is important, the matter that the car is made from (i.e., the original parts) or does some “essence” stay with the vehicle that is running the race? If the former, then at what point did it stop being the car that began the race (after 1 part replacement? After the majority is replaced? After 99% is replaced?); and if the latter, then is the reconstructed car made from all the original parts a different car?
Storm says that what is important is the causal history: the vehicle in the race is the same because it is a process (being in the race) with a causal history (the process began when the race began). Similarly, the famous dictum by Heraclitus that you cannot step into the same river twice (because it will be composed of different water the second time) overlooks the processual ontology of the river, where the river “Tigris” has a causal history that is unique to that river (i.e., the river Tigris and the river Mississippi are not the same river, even if somehow all the water currently in the Tigris migrated over to the Mississippi in a short period of time). This means, too, that process ontology allows for something to split, like a torch lighting another torch, with both sharing the same flame because the flame has a shared causal history.
One place where this view of things becomes important is self-identity. The atoms, molecules, and cells that compose a human being are constantly being exchanged for new ones, but there is still a sense in which a person at ten years old persists in some important way in who they are at forty years old. A person, therefore, is a process, and maintains identity by virtue of having a causal history (who I am now is caused by who I was a minute ago, which was caused by who I was a minute before that, and so on back to when I began).
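As a way of fixing the idea (a minimal sketch of my own, not anything Storm formalizes), one can model identity as membership in a causal chain rather than as a set of parts:

```python
# Identity modeled as causal history (process lineage), not part composition.
import itertools

_process_ids = itertools.count()

class Car:
    def __init__(self, parts, caused_by=None):
        self.parts = set(parts)
        # A car caused by a prior state inherits that state's identity;
        # a car built "from scratch" starts a new causal chain.
        self.process_id = caused_by.process_id if caused_by else next(_process_ids)

    def pit_stop(self, old_part, new_part):
        """Swap one part; the resulting state inherits the causal history."""
        return Car(self.parts - {old_part} | {new_part}, caused_by=self)

original_parts = {f"part_{i}" for i in range(100)}
racer = Car(original_parts)

for i in range(100):  # replace 1% of the car on each of 100 laps
    racer = racer.pit_stop(f"part_{i}", f"new_part_{i}")

rebuilt = Car(original_parts)  # all the original parts, but a fresh causal chain

print(racer.parts.isdisjoint(original_parts))  # True: none of the original matter remains
print(racer.process_id == rebuilt.process_id)  # False: different causal histories
```

On this model the racer keeps its identity through total part replacement because each state is caused by the one before it, while the car rebuilt from the original parts begins a brand-new causal chain – which is just Storm’s resolution of the two-cars puzzle.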
—
I largely agree with what Storm says in this chapter about the ephemeral nature of our categories. But I think it’s important to understand why humans prefer stable categories over dynamic ones. One reason is that, for most of humanity’s evolutionary past, change was much slower than it is now. Even just a few hundred years ago, a peasant could by and large figure on their life being the same as their parents’ lives, which were the same as their grandparents’, and so on; likewise, they could predict that conditions would be much the same for their children and grandchildren. They interacted with the same relatively small group of people every day; the objects they came in contact with (their shelter, tools, clothes, etc.) would remain constant throughout life; they ate the same food day after day. Most people, until fairly recently, would never go more than a few kilometers from where they were born. The pace of change would have been quite slow going all the way back to hunter-gatherer times – there is obviously a difference in how hunter-gatherers lived compared to sedentary societies, but the evolution of culture would have occurred much more slowly than it does now.
But even aside from the origins of this epistemic sensibility that prefers stability over change, there is good reason to prefer it: stability allows for predictability, routinization, and reliability. Indeed, one could view political philosophy, philosophy of law, and bureaucratization as projects toward rendering the world predictable even as the rate of social, cultural, political, economic, etc. evolution accelerates exponentially. We want it to be the case that if you are accused of a crime now, you can expect largely the same treatment that you would have received ten years ago or that you would receive ten years in the future; we would not want to live in a world where the law was upheld in an arbitrary fashion or based on whim, or where accidental traits conferred differential treatment (even if this was poorly adhered to in the past, it is at least held up as the ideal). We want to know that the road system will continue to function, that we can count on our food being the same today (as far as food safety and quality) as it was yesterday, that when we get old we can count on the same (if not better) medical treatment, and so on.
It’s precisely because of its ability to predict things that science is held up as the paragon of knowledge. It’s reliable. The phone or computer on which you are reading this review works largely the same today as it will tomorrow, and it doesn’t matter if it was produced in Taiwan or Mexico or the U.S., because the scientific principles upon which it functions remain the same. In science, it is the differences that need explanation: one needs to account for why the hypothesis diverges from the null hypothesis.
This is an obstacle facing this process social ontology, though perhaps not an insurmountable one: why bother studying something today if it will be different tomorrow (and how it will differ is unpredictable)? It’s akin to the problem of producing some new technology whose payoff won’t come for a long time, such as a space probe meant to explore exoplanets – if we start building it now and fire it off in five years, but it takes 50 years to get to Alpha Centauri, then there is the possibility that ten years from now we will have the technology to fire one off that takes only 30 years to get there, beating the first one; but then 15 years from now we have the technology to build one that takes only 10 years, and so on. The point is, the pace of change will always be outrunning the current state of knowledge. Although I suppose this has the benefit of job security for those in the humanities and social sciences, it means that a book written on the topic even just a year earlier could be seen as wildly out of date. (I’m aware that the same issue is present in science, where the rate of scientific discovery often renders older things outdated or obsolete, but science is more foundationalist than many of the humanities and social sciences in that new knowledge is predicated on the old knowledge and builds off of it as often as, or more often than, it completely refutes it.)
Regardless, Storm is optimistic about his process social ontology, and despite my criticisms I am not discounting it. In the next chapter Storm goes into more detail about it.
Chapter 4: Social Kinds
This chapter begins by examining what social kinds actually are. Storm says that methodological individualism – that all social kinds supervene on individual psychologies and their aggregates – was most popular until very recently. Lately a kind of holism has come into vogue, though Storm says that this is inadequate as well: “…the social is not reducible to individuals or amorphous social forces, but consists in a range of different social kinds, best understood as temporary zones of stability in unfolding processes, which are instantiated in their materialization.”
Perhaps it’s my own reductionist bias as someone trained in the sciences, but I disagree that there isn’t an account of social kinds that is supervenient on individual psychologies, even if approaching the subject from that angle is impracticable. I think of how thermodynamics (holistic) is supervenient on statistical mechanics (individualist), which is analogous to social forces being supervenient on individuals. When social kinds can’t be explained purely in terms of individual psychologies, this is more because of how unfeasible it would be to acquire all the information about every individual’s psychological states (like a sort of psychologist version of Laplace’s demon in statistical mechanics) than it is about the impossibility in principle.
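To make the analogy concrete (my example, not Storm’s): the temperature of a monatomic ideal gas is a holistic-seeming property that is nonetheless fixed entirely by the micro-level facts, via the equipartition relation

```latex
% Temperature supervenes on the microstates: it is (up to constants)
% just the mean kinetic energy of the individual molecules.
\frac{3}{2} k_B T = \left\langle \tfrac{1}{2} m v^2 \right\rangle
```

No single molecule has a temperature, just as no single person is a traffic jam; but fixing all the molecular states fixes the temperature, and the suggestion is that fixing all the psychological states would likewise fix the social kinds.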
Storm, however, says that this isn’t possible, since individuals and social kinds are not reflective of one another – hammers and traffic jams are social kinds, but they share no properties with any individual humans. Additionally, the same set of individuals can occupy multiple social kinds: think of a bowling league made up of all the people on a city council. Both are composed of the same people, but the bowling league cannot pass ordinances while the city council can.
Yet this, to me, seems as if it is simply focusing too narrowly, or defining mind-dependence in a different way than I understand it. A hammer is just an object without the right psychological states to conceptualize it as a hammer – its hammerness exists in virtue of people thinking of it that way. The city council members, when in the city council building during regular city council meeting hours, are vested with powers they do not have outside of those social kinds, by virtue of people believing this is so; those social kinds – the building, the time, etc. – are vested with these powers via the psychological states of people.
Regardless, the next question examined is: what is a social kind? Storm says that a social kind is “…a fairly high-level abstraction, encapsulating everything from artifacts, social roles, and institutions to norms, events, and the like…” which includes “…cultural, political, artifactual, economic, [and] symbolic kinds” and can even “…include some sorts of nonhuman agents” i.e., animals, which can have their own social kinds (elephant matriarchs, queen bees, worker ants, etc.)
Storm wants to base a theory of social kinds on homeostatic property clusters. A homeostatic property cluster is a cluster of traits whose co-occurrence is maintained by causal mechanisms (in biological species these would be mechanisms like genetic inheritance and selection pressures). Thus, birds have a cluster of properties that are currently shared, such as having wings and feathers; the stable presence of these properties right now is the homeostatic part, but this homeostasis can change. But the condition that the properties have to be due to shared causal mechanisms is why, for instance, a bat is not a bird, even though it shares many properties with a bird (the most obvious of which is having wings).
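A toy sketch of the idea (my gloss on homeostatic property clusters, not Storm’s formalism): membership requires both enough of the clustered properties and the right anchoring causal mechanism, so property overlap alone never settles the question:

```python
# Homeostatic property cluster, toy version: the cluster is a set of
# co-occurring properties, and the anchor is the causal mechanism
# (for species, inheritance within a lineage) that keeps them clustered.
BIRD_CLUSTER = {"wings", "flight", "warm_blooded", "feathers", "beak"}

def is_bird(properties: set, lineage: str, threshold: int = 3) -> bool:
    enough_properties = len(properties & BIRD_CLUSTER) >= threshold
    right_anchor = lineage == "avian"  # the causal-mechanism condition
    return enough_properties and right_anchor

print(is_bird({"wings", "flight", "warm_blooded", "feathers", "beak"}, "avian"))  # True (sparrow)
print(is_bird({"wings", "warm_blooded", "feathers", "beak"}, "avian"))            # True (penguin: no flight)
print(is_bird({"wings", "flight", "warm_blooded", "fur"}, "mammalian"))           # False (bat: enough overlap, wrong anchor)
```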
Using this theoretical scaffold, Storm describes social kinds as follows:
- Social kinds are socially constructed [socially constructed in the ontological, causal, or classificatory modes discussed in chapter 1]
- dynamic clusters [descriptions of patterns with more or fewer exceptions] of powers [as opposed to properties, social kinds possess “a capability or a pattern of likely activity in given circumstances” or, in other words, their causal position in a network of powers]
- Powers can be potential (an unspent dollar) or actual (the spending of the dollar)
- Powers can be a privilege or a liability
- Powers can be things like freedoms or obligations
- Powers are defined in contexts (the dollar requires a certain set of conditions to have its power actualized, such as the presence of buyer and seller)
- Powers can be transient (e.g., when you are a customer at a restaurant you have certain powers, but only while in the restaurant)
- Clusters of powers can have homogeneous and/or heterogeneous stability
- which are demarcated by the causal processes that anchor the relevant clusters
- Causal processes that anchor clusters contain, but are not limited to, the following:
- Dynamic-Nominalist Processes: something is named as a member of the social kind (or excluded from it) and then its membership (or exclusion) is either adopted (e.g., someone taking on the role of whatever social kind they have been assigned) or enforced (culturally, socially, institutionally, legally, etc.)
- Mimetic: reduplication of behaviors, performativity, beliefs, and goals through custom, tradition, imitation (e.g., of celebrities or other “influencers”), law, or simply because something is so entrenched that change would be too difficult (such as the U.S. changing to the metric system or keyboards moving away from QWERTY)
- Ergonic Convergence: different things that are meant to serve the same function often converge on a single solution (similar to how the eye has evolved several times independently, as have the wings of birds, bats, and bugs); think of spears, which were independently invented by multiple cultures as solutions to similar needs
While also (from the previous chapter):
- social kinds are the products of unfolding processes and thus tend to be high-entropic or varied both temporally and spatially (and hence tend to be historically contingent)
- they are interdependent insofar as their properties emerge via their relationships to other social kinds
- social kinds crosscut each other so that the same entity can be the intersection of different kinds
- what makes them “social” kinds is that they are mind-dependent
—
I might add to Storm’s anchoring causal processes that there are neuropsychological anchors as well. Some of these might be the following:
- Spatial Thinking: humans have a tendency to think about things spatially – things that share similar characteristics are “closer” to each other while things that share fewer characteristics are “further away” from each other; in philosophy, the idea of “possible worlds” says that those possible worlds that are more like our own are “nearby” while those much different are “far away”; we can have “distant” memories; people can emotionally “distance” themselves from each other; when stressed out we can feel like the walls are “closing in” on us (our “space” feels small, constrained); our lives can feel “cluttered” (the “space” in which we live is occupied by too many things); things we regret can “hang over” us; progress means we’re “moving forward”; in the sciences we map things like sound pitch and volume, colors and light intensity, and all manner of degrees of freedom in spatial terms; indeed, fields of mathematics like functional analysis are meant to generalize and make rigorous the notion of space so that it can be applied to abstract mathematical functions. Effectively, this causes us to give a spatial meaning (or at least a spatial representation) to many things, such that both natural and social kinds will be ascribed spatial properties; additionally, things that can be more easily described spatially will have greater reproducibility.
- This kind of spatial thinking isn’t necessarily the same everywhere. There are some cultures that think in terms of the cardinal directions while most Westerners tend to think in relative terms like “right and left”; people who read and write from left to right have different spatial conceptualizations than people who read and write from right to left.
- Anthropocentric Thinking: humans tend to think in terms of agency and intent. Everything from subatomic particles (whose charge makes them “want” to come together or move away) to history (which is usually told as a narrative driven by human intentions, or in a Hegelian/Marxist sense as almost having an intention or teleology of its own) is given humanlike attributes. Indeed, a large part of reification is in ascribing intentional, personal, or teleological properties (or powers) to the abstractions we invent. But this is also seen in the way that events caused by humans are more salient than things that are accidental or probabilistic: many people in the U.S. find terrorism more frightening than COVID-19 (and in fact were eager to search for human intentions behind COVID-19) or car accidents, even though the latter two have killed many more U.S. residents than terrorism. Effectively, this makes humans add another dimension (there is that Spatial Thinking) to all natural and social kinds: the dimension of how close (more Spatial Thinking) something is to being humanlike, which has epistemological and ethical consequences; conversely, those things which can be personified will have greater reproducibility.
- As a subcategory of Anthropocentric Thinking I might also add that humans often think in terms of sex and sexuality, putting a sexual dimension into the way they think about things. Many cultures have origin myths that have to do with deities giving birth to the world or to the people, and almost all theistic religions ascribe a sex or gender to their deities (even in Judaism). Additionally, attempting to find sexual partners for oneself, or attributing sexual motivations to others (think of how quick people are to attribute sexuality to ASMR videos upon discovering their existence), are thought processes that permeate the psychology of many (arguably most) people. This sexualized thinking often takes on moral characteristics as well, and vice versa (morals can take on sexed and sexual characteristics). Thus, sexed or gendered concepts have some power to be more reproducible.
—
Storm goes on to explain how this theory of process social kinds can be used to do postmodernist deconstruction more helpfully and then to reconstruct our concepts. A big part of what Storm’s theory of social kinds offers is the notion of anchoring, which is how similarities and differences can be distinguished. Having similar powers does not entail that two social kinds obtained those powers in a similar way. For instance, it has become common among the detractors of Critical Race Theory (myself included) to accuse it of being a religion (to religify it, if we want to make it a verb) because it shares similar power-clusters with Christianity. But under Storm’s theory of social kinds, this comparison would be surface-level at best, since CRT and Christianity acquired those power-clusters in different ways (they have different anchoring processes).
Examining the humanities and social sciences through the lens of process social kinds, Storm uses several pages to give advice about how this approach could help resolve some of the problems raised by deconstruction. This includes ways to recognize the spatiotemporal contingency of one’s own concepts, how to compare power-clusters, aggregate and disaggregate social kinds, and demarcate around the hazy edges of different social kinds (especially when language falls short, such as when we have one word with multiple meanings or multiple words with one meaning, or when people might assign different meanings or connotations to certain words).
There is a lot of material here, and all of it is sensible (though done mostly in the abstract), but I’m not going to go through all of it here (I do have to leave something for those who buy the book). One thing I wanted to mention, however, is that Storm’s project seems to call for a renewed empirical turn within the humanities and social sciences. In order to rebuild the social kinds around which the different disciplines congregate, the property-clusters and anchoring processes will need to be (re)discovered. This, I think, affords these disciplines an opportunity to examine reproducibility – if different groups can analyze the same empirical data and come up with property-clusters and anchoring processes that are sufficiently similar to one another, this would further justify the reconstructed concepts. If the property-clusters and anchoring processes determined by the different groups are too dissimilar, then the people working in the disciplines would need to interrogate their own assumptions and biases. This is a way in which the cultural-amnesic realism I discussed earlier could be put into practice.
Chapter 5: Hylosemiotics: The Discourse of Things
This is the longest and perhaps the most difficult chapter in the entire book. It also requires a lot more background knowledge than the other chapters in order to understand it. My summary of chapter 5 will therefore be leaving out even more of the nuance than in the other chapters. I will only be highlighting what (to me) are the main takeaways from this chapter.
Storm gives a preview of the hylosemiotic theory being submitted in the usual jargon before moving on to explain what the theory means. Storm says:
…it might be best to think of hylosemiotics as an expansion of C.S. Peirce-influenced biosemiotics to include the semiotics of nonbiological symbolic systems (e.g., robots/computers) combined with an account of meaning drawn from hybridizing Ruth Millikan’s teleosemantics, Deirdre Wilson and Dan Sperber’s relevance theory, and a repurposed version of Richard Boyd’s notion of accommodation to explain reference magnetism. Readers from a continental perspective will also see the influence of Martin Heidegger’s hermeneutics, stripped of its anthropocentrism. After naturalizing philosophy of language around nonhuman models, the project then emphasizes the materialized representations that mediate cognitive processes to produce extended minds.
That’s a mouthful. I’ll give a little explanation before moving on to the summary and review of Storm’s work. If you understood what the above quote said or you just want to skip this part, scroll down to where I have in bold “Hylosemiotics.”
First, before we can say what hylosemiotics is, what is semiotics? To give an oversimplified description, semiotics is a theory in linguistics (sometimes also called semiology, and most often associated with the structuralism of Claude Lévi-Strauss), most famously explicated by Ferdinand de Saussure and Charles Sanders Peirce. In broad strokes, semiotics says that language is structural in that each term depends on the rest of the language for its meaning (a single word doesn’t have any intrinsic meaning outside the context of the rest of a language; in other words, meaning is holistic). Saussure puts it like this: “Language is a system of interdependent terms in which the value of each term results solely from the simultaneous presence of the others … concepts are purely differential and defined not by their positive content but negatively by their relations with the other terms of the system.”
Semiotics is also known for its signifier and signified (the “sound-image” and “concept” respectively) distinction, where the former is the “symbol” (usually meaning words, though depending on the scholar it can be interpreted more broadly) and the signified is what the signifier points to (it is the meaning of the signifier, i.e., the concept). Notably what is missing in the signifier and signified (particularly in Saussurean semiotics) is the physical referent “out there” in the real world; all we have is the “symbol” or “sound-image” and the concept, or the way that the “sound-image” is understood. Whether Saussure thought a language could be understood without referent at all or whether he was simply bracketing reference for methodological purposes is a point of debate, but Storm takes the interpretation that Saussure was methodologically bracketing reference in order to study the structure of language itself.
In Saussurean semiotics, the signifiers tend to be words belonging only to humans. Biosemiotics brings in signifiers (and the corresponding signifieds) from plants, animals, and other non-linguistic sources. It is now well-attested that many animal species use vocalizations and gestures to communicate. Additionally, organisms like ants use pheromones, bees use a kind of “dance”, and plants communicate through chemicals as well. Biosemiotics even has its own journal, which gives as its mission statement:
Biosemiotics is dedicated to building a bridge between biology, philosophy, linguistics, and the communication sciences. Biosemiotic research is concerned with the study of signs and meaning in living organisms and systems. Its main challenge is to naturalize biological meaning and information by building on the belief that signs are fundamental, constitutive components of the living world. The journal is affiliated with the International Society for Biosemiotic Studies (ISBS).
Biosemiotics has triggered rethinking of fundamental assumptions in both biology and semiotics. In this view, biology should recognize the semiotic nature of life and reshape its theories and methodology accordingly while semiotics and the humanities should acknowledge the existence of signs beyond the human realm.
Teleosemantics – a portmanteau of telos (purpose or function) and semantics (meaning) – is concerned with how meaning is formed by the proper functioning of mental states and meaning-forming apparatuses (e.g., the brain and its different regions), and with how this proper functioning can go awry. Justine Kingsbury puts it this way:
Teleosemantic theories provide an account of the content of mental states in terms of the proper functions of either mental states themselves or the mechanisms that produce them. The proper function of something is (roughly) what that thing is supposed to do. The function of my heart is to pump blood: the function of my can-opener is to open cans. Something may have a proper function that it fails to perform – my can-opener continues to have the function of opening cans even if it is so badly damaged that it cannot do so. The thought that lies behind teleosemantics is that misrepresenting involves the failure of something, perhaps a representation or perhaps a representation-producing mechanism, to perform its proper function.
If teleological theories of content are to be naturalistic, as they are intended to be, they need to come with a naturalistic account of what it is for something to have a function. Most teleosemanticists adopt an etiological account of functions, according to which the function of something is (roughly) what earlier things of its type have done which has contributed to their survival and reproduction, the doing of which thus explains the current presence of the thing. The function of my heart is to pump blood because pumping blood is what the hearts of my ancestors did which contributed to the survival and reproduction of my ancestors, and thus contributed to the persistence of hearts of that type in the population, and which thus explains my possession of such a heart.
Relevance Theory, according to Wikipedia, says:
The theory takes its name from the principle that “every utterance conveys the information that it is relevant enough for it to be worth the addressee’s effort to process it”, that is, if I say something to you, you can safely assume that I believe that the conveyed information is worthwhile your effort to listen to and comprehend it; and also that it is “the most relevant one compatible with the communicator’s abilities and preferences”, that is, I tried to make the utterance as easy to understand as possible, given its information content and my communicative skills.
Other key ingredients of relevance theory are that utterances are ostensive (they draw their addressees’ attention to the fact that the communicator wants to convey some information) and inferential (the addressee has to infer what the communicator wanted to convey, based on the utterance’s “literal meaning” along with the addressee’s real-world knowledge, sensory input, and other information).
Inferences that are intended by the communicator are categorised into explicatures and implicatures. The explicatures of an utterance are what is explicitly said, often supplemented with contextual information: thus, “Susan told me that her kiwis were too sour” might under certain circumstances explicate “Susan told the speaker that the kiwifruit she, Susan, grew were too sour for the judges at the fruit grower’s contest”. Implicatures are conveyed without actually stating them: the above utterance might for example implicate “Susan needs to be cheered up” and “The speaker wants the addressee to ring Susan and cheer her up”.
Richard Boyd’s accommodation theory says, basically, that our words and concepts refer to things in the world, but as we learn new things about the world, the concepts change (or can be rejected outright, as the case may be, as with the luminiferous aether) to accommodate the new information. Thus, the words and concepts are justified by virtue of being at least approximately correct insofar as they map onto the real world to some degree sufficient for people to meaningfully talk about and study the phenomena. You can read more about it here.
Hermeneutics is the study of interpretation. Heideggerian hermeneutics is concerned with the interpretation of the self and its place in the surrounding world. Heidegger said that the world isn’t presented to us as a manifold of objects, but as meaning – “The lived world is present not as a thing or object, but as meaningfulness.” The world appears to us as if the meaning is already there, rather than our examining an object and then imbuing it with meaning a posteriori. A stick on the ground is presented to us as a club for beating someone, or as firewood, or as a hindrance, etc., rather than simply as an object which we can then choose to interpret in some way.
Hylosemiotics
Storm begins by comparing poststructuralism and new materialism. The former essentially takes the Saussurean view of semiotics and deconstructs it (while still mostly presuming it). The short version of poststructuralism is that there is no meaning (or, at least, that it is infinitely deferred), that translation between languages is impossible, and that therefore any interpretation of a text (or speech, etc.) is equally valid (the so-called “death of the author”: authorial intent can never be discerned, and therefore all interpretations are valid). Or, as Storm puts it, according to poststructuralism, the world is “…unknowable hyper-chaos or mutually non-overlapping linguistic universes…” For new materialism (you can read more about it here and here or this book), the structuralist (or poststructuralist) way of thinking remains, but it is now applied to physical things (objects and organisms). Objects in the world can take on meaning as “assemblages” (new materialists prefer to avoid the word structure, since new materialism is defined by its opposition to the focus on language and the bracketing of the material world found in structuralist theories) which have “agency” (i.e., causal power) and meaning in virtue of their matter and form.
Both of these theories – poststructuralism and new materialism – have profound shortcomings, according to Storm. The poststructuralist notions that there is no “true” meaning and that all interpretations are equally valid are claims everyone intuitively knows to be wrong, unhelpful, impossible to universalize, and self-refuting, which is why, outside the fringes of academia, they have never been taken seriously as a scholarly methodology or theory of meaning. New materialism is mostly filled with vacuous truisms that don’t offer anything beyond pointing out the causal powers of different objects and organisms.
Storm wants to synthesize what strengths these two theories do possess by formulating a theory of semiotics that doesn’t bracket the material world. Thus, hylosemiotics (hylo: relating to matter; semiotics: theory of language). In other words “semiotics and ontology have to be done side-by-side, as it is a mistake to try to formulate a theory of language by completely bracketing off meaning from the physical world in which meaning occurs…”
Storm says that because humans and other organisms are able to function in the world, this has important ontological implications. These implications are:
- The world must consist in rough property-clusters (using properties instead of powers to emphasize physical actualization); “…at a minimum, the world must consist in rough clusters, which while they often have vague boundaries can nonetheless at least provisionally be roughly distinguished from one another.”
- The world must have limited cross-temporal stability or minimal causal regularities (relative, local stability rather than absolute chaos – there must be some level of predictability and reliability in the natural processes we encounter in our day-to-day lives)
Furthermore, Storm says, humans and other animals can at least provisionally track these property-clusters through space and time using multiple senses. And even though we might not know what it is like to be a bat, bats must be able to track the same property-clusters with echolocation that humans can with sight or other senses.
Storm goes on to say that humans don’t share identical concepts, but rather notions of overlapping property-clusters: while we may disagree on borderline cases, people can have a decent idea of what I mean when I say the word “cat”, for instance (even if someone might disagree that a particular animal qualifies as a cat). The acquisition of property-clusters is done by reference fixing, usually with ostensives and demonstratives (when teaching a child what a cat is, one doesn’t go to the dictionary; instead one points at a cat or a picture of a cat and says “that is a cat”). Storm then invokes relevance theory by saying:
…if we think of concepts in terms of mental-representations everyone has their own concepts. But inasmuch as we are capable of tracking overlapping property-clusters, we can coordinate or share what we are talking about. Against much of analytic semantics, reference is necessarily conceptually mediated, but meaning is not reducible to shared concepts. Rather, utterances are used to guide inferences. I communicate because I want you to infer something about what I am talking about and ostensive meaning is just one of the possible things I might be expecting you to infer.
…reference is not something that words do in themselves, but rather reference is something that people do with words (or more precisely … reference emerges from the coordinated voluntary sign-making activities of communities of sentient beings). … Nor is reference limited to speech. I could also signal something similar by pointing, showing you a photograph, or just leading you [to the referent].
These mediating concepts are where Heideggerian hermeneutics enters the picture, though Storm applies it to animals as well as humans (a la Jakob Johann von Uexküll).
Storm notes that there is a difference between meaning-making and meaning-interpretation; between voluntary signs (e.g., uttered words) and involuntary signs (e.g., the smell I give off, which is important as a sign to animals like dogs and cats); and between intentional meaning (the thing I’m trying to convey) and unintentional meaning (perhaps some wording that betrays underlying biases, or slurred speech that signals intoxication).
Furthermore, all signs have different meanings in different contexts. Saying “yeah, right” can be interpreted as “correct” or as “I don’t believe you” in different contexts. Voluntary sign production is mostly stable, but sign-interpretation has high flexibility and adaptability.
In the hylosemiotic theory Storm is proposing, it’s more useful to judge statements as successful or unsuccessful rather than true or false (did the meaning of the word or phrase get successfully interpreted?). Because ambiguity is the rule and not the exception, there isn’t a good way to judge whether the statement “it is raining” is true (how many raindrops, at what rate of precipitation, count as raining?), but we can judge whether the statement is successfully interpreted by the listener(s).
Storm invokes Boyd’s accommodation theory with the following:
- Voluntary sign reference emerges dialectically from a community of signaling organisms’ use of signals to navigate their environment successfully
- The signs’ reproduction is motivated by success, here explained in terms of its “accommodation” or accuracy in picking out the relevant features of the world
And so “…voluntary signs themselves are social kinds, so their meaning tends to shift, but signs are capable of reference insofar as they are weakly constrained by an accommodation between signaling, their reason for being reproduced, and the relevant features of the world.”
To sum up this first part of the chapter, we can think about it this way: everyone is what Storm calls a sign-consumer (this is the Heideggerian hermeneutics). When we experience a sign within a given environment, we make inferences about its meaning in the context of everything around the sign (context-dependence). When signs are produced through utterances, they have a voluntary meaning (what the sign-producer intended, i.e., the ostensibility) and an involuntary meaning (other things the sign-consumer can infer from the sign). Voluntary signs are intended to influence the sign-consumer’s behavior and the inferences the sign-consumer makes – it is an attempt by the sign-producer to steer the inferences of the sign-consumer to the relevant property-clusters (referents) intended by the sign-producer (this is the relevance theory).
Voluntary signs are social kinds, which can change and mutate over time in response to the relevant anchoring processes (i.e., they are process social kinds). Thus, the meaning of a voluntary sign is tied to these anchoring processes (Dynamic-Nominalist Processes, Mimetic, and Ergonic Convergence) and how these lead to the production and reproduction of the voluntary signs.
The referents of voluntary signs are loose property-clusters (rather than Aristotelian substances) that are overlapping (not identical) between the sign-producer and the sign-consumer. Different sign-consumers can bring their own assumptions and background premises that lead to misunderstandings, especially when co-referring expressions have differing anchors (e.g., “animals with hearts” vs “animals with kidneys” refer to the same set of animals, but focus on different things). This could happen, too, if someone doesn’t have knowledge of one of these anchors.
—
There are three sign types in what I will call the Peircean-Stormian Typology of Signs. These can overlap in some cases (indeed, one of Storm’s contributions was splitting up the Index and Correlation signs from the way Peirce had formulated it) and different ones can occur in different contexts and/or for different people from the same sign-stimuli.
- The Symbol: relationship between signifier and signified is arbitrary (nothing about the signifier indicates the signified except by being assigned to reference it – the word “tree” doesn’t have any tree-like qualities itself but it references what we know of as trees since that is what was assigned to it by convention)
- Other examples: pheromones released by plants and animals; tail wagging by animals; the red and green of traffic lights
- The Icon: the signifier and the signified share a likeness (assignment of the signifier to the signified is non-arbitrary – a good example would be a photo)
- Likeness can be task-dependent, such as sorting things by color (color is the icon) or by shape (shape is the icon)
- Other examples: camouflage; maps; recorded voice
- The Index: ostensive references by indicating (indexing) a spatiotemporal position (my pointing at a cat and saying “that is a cat”)
- Other examples: wolf howls indicating current location; microwave beeping to indicate finished heating
- Self-Signs: a subtype of the Index – labels on things; introducing yourself by name
- Correlation: causal correlations (smoke signifying a fire by virtue of the smoke being caused by the fire)
- Other examples: the darkness of clouds correlated with chances of precipitation; smell of a skunk correlated with presence of a skunk; symptoms correlate with particular diseases
- Can be context-dependent – if you see bear tracks, then that might mean a grizzly bear in one place but a polar bear in another
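To keep the typology above straight, here is a minimal sketch of the four sign types as data – the glosses and example classifications are my own illustrations, not Storm’s definitions:

```python
# The Peircean-Stormian sign types, glossed as data for quick reference.
SIGN_TYPES = {
    "symbol":      "signifier-signified link is arbitrary/conventional",
    "icon":        "signifier resembles the signified",
    "index":       "signifier indicates a spatiotemporal position",
    "correlation": "signifier is causally correlated with the signified",
}

# Illustrative classifications drawn from the examples above.
examples = [
    ("the word 'tree'",   "symbol"),
    ("a photograph",      "icon"),
    ("pointing at a cat", "index"),
    ("smoke from a fire", "correlation"),
]

for sign, kind in examples:
    print(f"{sign}: {kind} -- {SIGN_TYPES[kind]}")
```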
—
Three ways our minds and environments recursively shape one another
- Knowledge emerges from exploratory manipulation of the physical world. We don’t learn just by passively observing, but by interacting with things.
- Matter (and energy) are used to store information and alter cognitive complexity. We can write things down and search for information on Google; ants leave scent trails.
- Public semiosis permits collective representation. People can talk to each other, brainstorm together, and rely on each other’s expertise or memories. There are also things made by people in our environment, like propaganda, advertising, fashion, brands, etc.
—
I don’t find anything objectionable in Storm’s theory of hylosemiotics. There are, however, a few things that I think the theory, as stated in this chapter, could address. One thing that Storm emphasizes is that human semiotics is on a continuum with that of non-human organisms – that our sign-production and sign-consumption co-evolved with our physical biology. As a sort of anchoring process for this fact, the theory might be strengthened by accounting for why organisms are entities for which signs are important. Why, for instance, didn’t organisms on earth evolve not to use signs? (Or, perhaps, would it be impossible for sentient beings not to use signs?)
My own view is that signs are a way of simplifying, which is a necessary part of being sentient. An object (say, a rock) is its own best model; a consciousness can only approximate that rock by virtue of not being that rock (which would be the only way for a model of the rock to map perfectly onto the rock, i.e., to solve the standard model Lagrangian for every fermion and boson in the rock). Thus, signs are necessary as a way of modeling the world, since signs are a way of simplifying and turning complex objects into abstractions in the same way that consciousness is an approximation or model of the real world.
The reason a sentient entity would want to model the world is as a way of making predictions, which are necessary for survival (see the Bayesian brain (1, 2, 3, 4), predictive processing (1, 2, 3), and the free energy principle (1, 2, 3)). Signs, I submit, convey information, in an information-theoretical sense of reducing uncertainty, about the external world. A sentient being must take in that information, which is imperfect by virtue of not just being the things in the environment (it is a sort of sample rather than a population, to use statistical jargon) and the sentience must make some prediction about how best to respond to those stimuli (the signs). Deciduous trees respond to temperature and changes in daylight; mushrooms respond to precipitation; animals respond to the presence of other animals; and so on.
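To illustrate the information-theoretic point, here is a toy calculation of a sign as uncertainty-reduction – the weather example and all the probabilities are made up by me, not taken from Storm:

```python
# Information conveyed by a sign = reduction in Shannon entropy (in bits)
# between beliefs before and after consuming the sign. Numbers are invented.
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Prior beliefs about the weather before any sign is consumed.
prior = {"rain": 0.3, "clear": 0.6, "snow": 0.1}

# Beliefs after consuming a sign (say, dark clouds on the horizon).
posterior = {"rain": 0.8, "clear": 0.15, "snow": 0.05}

info_gain = entropy(prior) - entropy(posterior)
print(f"information conveyed by the sign: {info_gain:.2f} bits")  # ~0.41 bits
```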
Furthermore, an account of why Alvin Plantinga’s semantic epiphenomenalism is avoided by hylosemiotics might be advantageous to the model. It would also be helpful to have an account of over- and under-interpretation of signs and how that might be connected to hyperactive agency detection (maybe hyperactive sign detection?).
Another interesting avenue would be the connection between sign-consumption/interpretation and the evolution of qualia (i.e., different qualia as signs). One could make the connection that, since signs require a sentient interpretant, and qualia require a sentience, then perhaps qualia arose as a robust or incorrigible sort of sign-consumption.
It would also be interesting to connect sign-producing, sign-consumption, and the directing of inferences (relevance theory) to theories of attention. Is a potential sign a sign if nothing is paying attention to it, even if information about the sign is being taken in? For instance, when I’m in a state of flow while reading or writing, then do the things in my office around me still exist as signs for me? And then what about if someone knocks on my door and draws my attention away? And what about with selective attention – is the object not being paid attention to still a sign?
I’m also curious about just how arbitrary a symbol type of sign is. Humans, for instance, have rhymes, homophones, homonyms, puns, and so on that play on certain aesthetic and/or semantic associations that two (or more) words can have, giving them a sense of similarity even if conceptually they are very distinct. We also connect things by shared properties, like the color red in both blood and fire giving other red things a particular association (which is perhaps why red was chosen to mean stop at traffic lights and stop signs). Also, different words, just because of how they sound, can take on a sort of aesthetic or euphonious quality – for instance, why do some people hate the sound of the word “moist”? Or why do drug companies name their products with lots of X’s, Y’s, and Z’s? I also wonder whether and how the creation of signs can be shaped and influenced by generative grammar.
I think another phenomenon that has become especially relevant with the advent of social media is the making of ourselves and other people into signs (e.g., becoming a “brand” or having a social media presence). This could get into the area of empathy and theory of mind (how one sees oneself through how one thinks others perceive or conceptualize them); for instance, a symbol taking on a different meaning depending on who else we are with. It could even get into the territory of digital avatars and how those are interpreted (and the ethics of online interactions) as both sign and person.
As I said, none of these things I brought up are to show that the hylosemiotic theory is weak, but only to suggest other avenues which the theory could explore in future developments.
Chapter 6: Zetetic Knowledge
Storm says that, even if it is not justified on the basis of what the original postmodern thinkers had intended, postmodernism has become associated with radical forms of skepticism within academia (and by the general public). Storm says (put into list form by me):
All that said [about Cartesian skepticism], “postmodern skepticisms” have had a distinctive profile. For the last several decades, to be a skeptical academic has generally meant agreement with some subset of the following propositions:
- Essentialism is a kind of violence
- Science is illegitimate and suspect
- Scientific facts are constructed by extratheoretical interests
- Knowledge is just an expression of power
- Power is domination
- No truth claims can be grounded
- There are no facts, only interpretations
- Every perspective is equally legitimate
- All knowledge is relative to an individual’s standpoint
- If a term or concept was formulated in a colonial context, it must be false, and deploying it is a kind of violence
- Classification is a form of conceptual imperialism
- All binaries are violent hierarchies
- Every system or structure is established on the grounds of something that it both excludes and presupposes
- Concepts are fundamentally fraught
- Every abstraction is a loss
- Everything is discourse
- Meaning is differential
- Meaning is constantly deferred and can never be stabilized
- Language determines thought
- Being is always already before language
- Philosophy is phallocentric or logocentric
- Logic is merely the codification of heteronormative, white, male thinking
- There are no metanarratives
- History is over
- Knowledge is impossible
Storm says that these are not doubts but declarations. Of the three groups of philosophers that Sextus Empiricus mentioned – Dogmatists, Negative Dogmatists, and True Skeptics – these claims belong to the Negative Dogmatists. A True Skeptic, according to Sextus Empiricus, is one who suspends judgement and continues searching for the truth. Those who abide by the above (mutually incongruent) declarations have ceased to suspend their judgement and have settled down on an orthodoxy or dogma. A True Skeptic will even doubt their own doubts rather than taking “knowledge is impossible” as the only knowledge (as in the case of the Negative Dogmatists). Thus, the True Skeptic doesn’t accept the impossibility of knowledge, but instead they doubt the possibility of knowledge (but also, one could say, they doubt the impossibility of knowledge).
The indubitable knowledge that Descartes set out to acquire, Storm says, is too high of a bar. We will never have absolutely certain knowledge of anything. But the Negative Dogmatism can be turned on itself, making it crumble against the onslaught of its own machinations. And since nothing (except perhaps authoritarianism, i.e., might-makes-right) is possible without some level of knowledge, wallowing in Negative Dogmatism is untenable. And so, Storm says “…we need a form of knowledge that has learned the lesson from critiques of both dogmatism and negative dogmatism alike.” For this, Storm proposes Zeteticism:
Zetetic. ze-‘tet-ik, adj or n (Greek zetetikos, from zeteein to seek) Proceeding by inquiry; a search or investigation; a skeptical seeker of knowledge … it has come to mean both the process of inquiry and one who so proceeds. A zetetic is thus a sort of intellectual agnostic who, while seeking greater truths, is always wary of falsehood.
The Zetetic, according to Storm, abides by two principles:
- Pluralism: there are multiple possible descriptions for anything
- Epistemic Humility: one must always consider that, on any given knowledge claim, they have some possibility of being wrong
The Zetetic, then, does not go looking for indubitable or certain knowledge, but instead knowledge that is most probable, or the best explanation given current facts, or knowledge that is good enough to operationalize. We are thus in the humble realm of degrees of confidence, not in search of the impossibly high bar of absolute certainty. As such, the Zetetic is always open to new information and, if need be, changing their mind. Additionally, the amount of evidence adequate for a particular belief ought to be weighed by the implications or potential consequences of the belief – as Storm says, “More significant consequences lead toward higher practical standards of confidence.” The list of postmodern skepticisms above, Storm says, should not be taken as dogma, but as operative warnings – for instance, it’s not that everything is about dominance, but if a claim made by someone seems to neatly coincide with that person’s interests, then it should be viewed with due suspicion. Furthermore, Storm argues, we ought to think of epistemic communities rather than personal knowledge, given that an individual knows much less than a collective of people who all know a little bit. Thus, knowledge should be viewed as collective and cooperative rather than individualistic.
The methodology of the Zetetic, according to Storm, is not deduction (arriving at a conclusion that follows from premises) or induction (generalizing based on samples of data), but instead abduction. This is the type of reasoning that seeks the best explanation from available data. Storm puts it this way:
- D is a collection of data (evidence, observations, givens)
- Hypothesis H explains D
- No other available hypothesis explains D as well as H does
- Therefore, H is probably correct
You can also think about it as a sort of reverse modus ponens:
- If H1 is true, then D&W will be observed
- If H2 is true, then D&X will be observed
- If H3 is true, then D&Y will be observed
- . . .
- If Hn is true, then D&Z will be observed
- We observe D&X
- Therefore, H2 is probably true
Where H1, H2, H3, …, Hn are candidate hypotheses (the Pluralism principle of Zeteticism), or explanations, and D is a set of data or observations shared by all the hypotheses and W, X, Y, …, Z are sets of data or observations unique to the specific hypothesis (i.e., are much more likely to be true if the specific hypothesis obtains). A detective or scholar doing an investigation will be searching for the W, X, Y, …, Z data in order to determine which hypothesis is most likely (and/or to rule out certain hypotheses). So, for instance:
- If it rained, then it will be wet outside and it will be wet everywhere outside
- If someone ran the sprinkler, then it will be wet outside and it will only be wet on my property
- If a watermain broke, then it will be wet outside and water will be coming up from the ground
- It is wet outside
- Go searching and discover:
- that water is not coming up from the ground (rule out 3)
- it is wet everywhere outside (confirm 1)
- Therefore, it probably rained
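This rule-out pattern is easy to make explicit. Here is a minimal sketch using the hypotheses and observations from the example above (the set-based structure is my illustration, not Storm’s):

```python
# Each hypothesis predicts a set of observations; a hypothesis survives only
# if everything it predicts matches what was actually found on inspection.

hypotheses = {
    "rain":      {"wet outside", "wet everywhere"},
    "sprinkler": {"wet outside", "wet only on my property"},
    "watermain": {"wet outside", "water coming up from the ground"},
}

observations = {"wet outside", "wet everywhere"}

surviving = {
    name for name, predicted in hypotheses.items()
    if predicted <= observations  # all predicted observations actually seen
}

print(surviving)  # {'rain'} -- the best available explanation
```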
The advantage of abduction over induction is that induction can only generalize from phenomena that are actually observed, whereas abduction allows for unobserved phenomena (in the example above, the person didn’t actually see the rain coming down while it was raining, but inferred it from what is observable). Additionally, there are no good criteria by which to assess how good an induction is – how many instances of an occurrence count toward our confidence in the induction?
Deduction is in a sense tautological or “truth preserving” and so therefore no new information that wasn’t already implicit in the premises is actually learned.
Abduction, on the other hand, is ampliative, i.e., it goes beyond what is known to some new knowledge (such as that it rained, which cannot be deduced from the premises “it is wet outside” and “it is wet everywhere outside”).
Abduction can be used to strengthen and justify induction by offering an explanation or mechanism for the observations. The induction that “the sun has risen in the east on every morning I’ve witnessed” is strengthened and justified by the abduction that such observations are caused by the way gravity and orbital mechanics work, as well as the idiosyncrasies of Earth’s rotation. Storm also notes that in the limit, abduction becomes deduction. For instance:
- Some explanation must be true
- All possible explanations are considered
- All except one are ruled out
- That one must be true
With the word “must” indicating that this is a deduction (where the conclusion must be true if the premises are true).
Storm says that, since both induction and deduction can fall under abduction in certain ways, we should replace the induction-deduction dichotomy with an abduction-prediction split, since prediction is the reverse of abduction. An abduction takes observations and infers a hypothesis; a prediction takes a hypothesis and infers (potential) observations. Abduction and prediction can work hand-in-hand: a hypothesis is abduced from observations (perhaps the posterior probability, in Bayesian terms), the hypothesis can then be used to make predictions about observations (perhaps the likelihood, in Bayesian terms), and the success or failure of those predictions can then be used to confirm or rule out the hypothesis.
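Here is a minimal Bayesian sketch of that loop – the priors and likelihoods are invented for illustration, and this is my gloss rather than anything Storm formalizes:

```python
# Abduction as a Bayesian update: priors over hypotheses, likelihoods of the
# observation under each hypothesis, posteriors via Bayes' rule. Toy numbers.

priors = {"rain": 0.5, "sprinkler": 0.3, "watermain": 0.2}

# P(observation | hypothesis) for "it is wet everywhere outside".
likelihoods = {"rain": 0.95, "sprinkler": 0.05, "watermain": 0.20}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
evidence = sum(unnormalized.values())
posteriors = {h: p / evidence for h, p in unnormalized.items()}

# Prediction step: the leading hypothesis tells us what to look for next
# (e.g., water welling up from the ground would re-elevate "watermain").
best = max(posteriors, key=posteriors.get)
print(best, round(posteriors[best], 3))  # rain 0.896
```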
—
I think Storm’s entreaties in this chapter are sensible, though I don’t know that anything groundbreaking was said (although sometimes things bear repeating). Certainly the ideal in scholarship (and in everyday life) is to be a sort of open-minded skeptic; or to have an open mind, but not so open that your brain falls out. At least when being thoughtful and reflective, I don’t think anyone would disagree that all (or at least most) knowledge should be considered provisional and judged by degrees of confidence grounded in the quantity and quality of supporting evidence. Storm’s explication of the virtues of abduction is interesting, yet I think in most cases it’s more descriptive than prescriptive.
As sensible as these exhortations for people to become Zetetic and to emphasize the use of abduction may be, they also seem to me overly optimistic. Motivated reasoning, confirmation/myside biases, the availability heuristic, and the many other ways that thinking is less than optimal tend to make following the Zetetic principles difficult, if not impossible, even for those who are painfully aware of these cognitive shortcomings.
One way that the harm of these biases might be at least partially reduced would be if academic journals adopted some kind of Zetetic checklist that a potential article would have to satisfy before being published. This is already what things like peer review are supposed to be, but the issue is that journals tend to have their own agendas and won’t publish anything that hinders or casts doubt on those agendas. Likewise, peer review is often done by people who are likely to agree with an article’s overall aims, even if the reviewers disagree with some of the emphases. Perhaps the peer review process could require that a draft article be reviewed by people outside the discipline, or by people who disagree, in order to satisfy the first Zetetic principle (a plurality of ways to describe something); the article would then have to spend time going through why the alternative conclusions offered by outsiders and dissenters are incorrect (ruling out the competing hypotheses so that the one being reported stands).
Another issue I see facing Zeteticism, however, is the way that subjects are taught in schools and colleges. It is not done abductively, but usually with the presumption of some theory or methodology. Subjects tend to be taught by having the theory itself described before moving on to things like conceptual analysis within the framework of that theory, and sometimes going on to learn (often memorizing) the evidence to support it. This gets to the heart of the issue about “teaching Critical Race Theory” in school – it’s not that elementary school students are reading Derrick Bell or Kimberle Crenshaw, it’s that the lens of CRT is presumed and then the subject matter is taught from that perspective.
Abductive teaching might instead be more like this: teach students all the available evidence, without referencing the theory, and then see if the students can abduce the most likely explanation for the evidence. So, for teaching about race in the United States, one would first give a sober rendering of the events that occurred during slavery, the Civil War, Reconstruction, Jim Crow and segregation (the sociology, economics, and case law that go with them), the Civil Rights movement of the 1950s and 1960s (examining what different leaders in the movement thought), and then race relations and the differential outcomes of the different races, supplemented with personal stories from those affected – and only then discuss how the different theories (CRT, in both the Marxist/materialist and the cultural/intersectional versions; liberalism; socioeconomics; personal/cultural attitudes; etc.) interpret and explain all this. The students would then be able to assess which of these ways of interpreting the available data best fits said data.
Chapter 7: The Revaluation of Values
Storm begins by admonishing those who accuse postmodernism of being without values, or of being morally relativistic or morally nihilistic; or of having begun as morally relativistic and only in recent times having pivoted to being overly moralistic (e.g., overly concerned with Social Justice). In fact, Storm says, postmodernism has been a moralistic program from the very beginning. Storm says, however, that this moralizing is often negative (pointing out things that people ought to stop doing) rather than positive (describing those things people should be doing). Of course, Critical Theories are designed for negativity:
What Lindsay discusses in this chapter, he says, is the various tactics used for accomplishing Step 1. The reason, Lindsay says, is that Marxism (and CRT) doesn’t know how. This is the notion that nobody knows what the end (racial justice) actually looks like, only that they know what it doesn’t look like, and those are the things that must be Critiqued into oblivion, leaving standing only the shining Utopia that is contained inside (there is the dialectical thinking, that everything contains its own contradictions), like the statue in the block of marble.
Thus, Storm’s exhortations for a more positive ethics are likely to fall on deaf ears when activists view the obliteration of all opposition to their orthodoxy as the apotheosis of their evangel. When people have been taught to think in an inverted version of cognitive behavioral therapy, everything begins to look like violence, and, as is the natural human tendency, the bad must first be subdued before one can worry about the good.
After talking about Max Weber’s notion that value-neutrality is not a call to expunge values from the humanities and social sciences, and Franz Boas’ call for value-neutrality being misinterpreted as moral relativism, Storm talks a bit about Hume’s distinction between is and ought. Storm references a 2005 paper by Allan Gibbard when claiming that the is/ought dichotomy is “incoherent.” The issue, though, aside from Gibbard simply asserting that correct belief is normative, is that all paths from is to ought smuggle in a conditional whose antecedent is an a posteriori value statement. In other words, if we want to say:
- The belief in X is a correct belief
- Correct beliefs are ethically good
- Therefore the belief in X is good
But how can we justify saying that premise 2 is categorically true? Premise 2 holds only if we have the goal of holding correct beliefs. Thus, what we’re actually saying is the following:
- The belief in X is a correct belief
- If someone thinks having correct beliefs is ethically good, then one ought to hold correct beliefs
- Therefore the belief in X is good
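Put schematically (my notation, not Storm’s or Gibbard’s), the inference only ever delivers a conditional conclusion:

$$C(X),\qquad G \rightarrow \forall b\,\big(C(b) \rightarrow \mathrm{Good}(b)\big) \;\;\vdash\;\; G \rightarrow \mathrm{Good}(X)$$

where $C(X)$ reads “the belief in X is correct” and $G$ reads “one values holding correct beliefs.” The unconditional $\mathrm{Good}(X)$ follows only if $G$ is independently granted, and $G$ is an a posteriori fact about preferences rather than an analytic truth.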
In other words, we have a hypothetical imperative and not a categorical imperative to accept that holding correct beliefs is good. Gibbard, however, says:
Alternatively, we might try construing the ‘ought’s in the statements that worried us as implicitly hypothetical: ‘‘If truth is to be the only object, then this is what one ought to accept.’’ And in a way, indeed, that will be my explanation of the paradox. There’s no special is/ought problem when the ought is hypothetical; hypothetical oughts can follow analytically from the facts. Think of oughts as equivalent to imperatives of a special kind; then hypothetical oughts are equivalent to hypothetical imperatives. And the validity of a hypothetical imperative can be fully a matter of fact. For example, from the is statement
If you were to climb out the window, you would escape the fire, and otherwise you wouldn’t,
follows the hypothetical imperative,
¡If you want to escape the fire, climb out the window!. (1)
Hare has taught us to see a hypothetical imperative like (1) as a conditional with imperatives both in the antecedent and in the consequent:
If ¡Escape the fire! then ¡Climb out the window!.
(I indicate imperatives with a fusion of German and Spanish language punctuation.)
This kind of imperative does follow from an is. Anyone who accepts the facts must accept it, since sufficient norms to apply, given the facts, are introduced hypothetically in the antecedent. The logic of the concepts involved requires accepting the inference from the is premise to the hypothetical imperative conclusion no matter what substantive normative views are to be accepted.
If, then, the oughts of correct belief are hypothetical in this way, that might explain why some of them are equivalent to non-normative statements. Take this line, and we then don’t need to give up an is/ought gap in general. We can keep it and still admit that the form of a complex, hypothetical ought statement might make it follow from an is.
[Bold added]
But in what way do hypothetical oughts follow from the facts (i.e., from an is)? How is it analytic that a hypothetical about my preferences for avoiding harm is a priori determinable from the fact of the fire? My wanting to escape the fire is not something “contained within” or self-evident from the fact of a fire. The preferences of whoever is present at the fire are going to vary; it is merely inductive (a posteriori) that most people will prefer to avoid harm.
The same argument can be made for the other ways that Storm tried to defend the collapse of the is/ought dichotomy:
Epistemic Values and Scientific Spirit: that validity or accuracy is held up as valuable to people again requires an a posteriori induction for the conditional to hold.
Values Based on Facts and Thick Ethical Concepts: of course people base their values on facts, but that doesn’t mean that the facts entail what those values ought to be.
Bridging Notions: Storm says that the formula “you want to achieve E; doing M is the one and only way to achieve E; therefore, you should do M” is analytic. But how? How is someone’s wanting to achieve E an analytic proposition? Person X and wanting to achieve E are not synonymous or “contained within” one another.
Values Masked as Facts: interestingly, this actually seems to argue for the is/ought dichotomy. People looking for facts that fit their values is exactly why value-neutrality is important. For instance, Storm uses the IQ test as one example. Well, it’s a fact that certain groups and individuals will be better at taking the IQ test, because the IQ test was testing for the things its designers wanted to find. The issue was people saying that IQ was what was important to know about cognitive function. But what is important to people about cognitive function is a value judgement; the fact that some people are good at IQ tests and others are not doesn’t tell us what values we ought to take away from that fact.
Storm uses the notion that there is no is/ought dichotomy to argue that it’s impossible for scholars not to bring values into their work (and perhaps in some instances it is even appropriate to do so). I don’t think Storm is right about the collapse of the is/ought dichotomy, but I take the point that wanting to be correct and accurate are values, and that values determine what research projects we even pursue. I also don’t see a problem with making value judgements in certain circumstances. It’s only an inductive, a posteriori judgement that most people (indeed, I would argue, the vast majority of people) prefer to maximize their well-being, but that is good enough a reason to denounce and take actions to reduce the occurrences of lying, stealing, bigotry, murder, rape, war, genocide, and so on (and no, you don’t have to tell me, I already know I’m very wise for pointing this out).
—
Storm finishes this chapter with a positive vision of what the human sciences ought to promote: a virtue ethics geared toward a capital-H type of Happiness based on eudaimonia. This Happiness can be justified based on four things:
- Most of us want to be happy.
- Most of us want psychological and physical well-being.
- Most of us do not want to suffer unnecessarily.
- Most of us also want to live a life worth having lived, a meaningful life.
The way that the capital-H version of Happiness is distinguished from the normal, everyday lowercase-h sense of happiness is as follows:
- Happiness is not something that one finishes as if reaching a point of saying “this is Happiness and I need not continue working toward it”; rather, it is something one achieves by working toward it over the span of a whole life.
- Happiness is not primarily an emotion (i.e., hedonistic pleasure)
- Happiness is not constantly being cheerful, or never unhappy or disconnected. When suffering does occur, though, people can learn from it and not be broken by it.
- Happiness is living the kind of life our future selves (perhaps even our death-bed selves) can look back on with fondness and with as little regret as possible.
- Happiness is striving to reach our full potential and develop ourselves as human beings.
—
There are, I think, some important questions that need to be answered for a virtue ethics to work. Just to name off some of those questions:
- Does virtue ethics require anarchism (or some form of minimal state) to be truly virtuous (i.e., one cannot cultivate virtue if one is forced to perform virtuous actions; taking taxes from someone to give to the poor does not make the taxpayer virtuous)? Or is a virtuous society the sort of Platonic society run by philosopher kings who take measures not to scandalize the virtues of the plebs by allowing them access to unsavory knowledge? Don’t policy and law require consequentialist justification?
- How does one judge which virtues ought to be prioritized (or even which ones ought to be adopted at all)? How does one judge that a virtue is being adequately achieved? Is it better to try and fail at striving toward a greater virtue than to try and succeed at a lesser one? What if the virtue is adhered to, but doing so causes some other harm to self or others? Aristotle had the so-called “Golden Mean” in his virtue ethics, but how is this mean determined (and achieved)?
- Can virtues be contradictory to one another? For instance, if compassion is a virtue (as Storm says), should we have compassion for evil people? And should our compassion lead us to do things that are bad for ourselves in other ways?
- And what about when people take on political aspirations for the fulfillment of their own Happiness, but in so doing make things worse for everyone else (think of the Bolsheviks, the Khmer Rouge, the Maoists, etc.)? And what about those whose political tactic is to convince or indoctrinate people into believing themselves miserable?
- Also, if a society is prosperous, does that reduce virtue, since there aren’t as many obstacles to overcome? And if a lot of good art comes from people speaking to their struggle (think of the Spirituals, Blues, Jazz, Rock-and-Roll, and Hip-Hop that have grown out of the struggles of black people in the United States), then wouldn’t a virtuous society (one with maximal human flourishing) miss out on a lot of that art?
—
Who gets to decide whether someone lived a life worth living? If people who worked in a factory their entire lives, and had family and friends, decide at the end that it was a good life, who are revolutionaries to say that it wasn’t a life worth living? Who gets to decide that the status quo, or incremental improvement, isn’t good? What about those who do find Happiness in the status quo – is their Happiness less important merely because it’s not Revolutionary Happiness?
I’m not saying that nothing should change or improve. I doubt even the most conservative-minded people would argue that there aren’t ways things could be improved. My point is that revolutionary anything is, by its very nature, disruptive and will inevitably reduce Happiness for some portion of the population. The reason incrementalism works is that the cultural zeitgeist, or perhaps the Overton window, can change at the same pace as the social, political, and economic conditions. It’s similar to the so-called Planck’s Principle, which says that “science progresses one funeral at a time” – we might say instead that “social, political, and economic progress comes one funeral at a time.” In other words, for a new set of social, political, and economic conditions to allow for maximal Happiness, the people who would find those conditions antithetical to Happiness must grow old, become obsolete, and fade into history (as opposed, hopefully, to simply having anyone who disagrees with the revolution murdered or imprisoned, as communist regimes have often preferred to do).
Storm says:
So I want to try to imagine a nation dedicated not toward GDP, but toward national Happiness and toward facilitating its citizens’ pursuit of meaningful lives. I want to imagine a form of good government that works for the people instead of commanding them and that not only functions as a guarantor of democratic self-governance and collective autonomy, but also works for the promotion of virtue and deeper psychological flourishing. I want to call for a politics dedicated toward compassion, so that injustice can truly be overcome.
This rather chilling vision – a state whose function is to tell you that you’re experiencing false consciousness about your own Happiness, and that you should (or indeed must) pursue some other, state-sanctioned form of Happiness instead – reminds me of what the Catholic integralists are hoping for, just with their own vision of what Happiness is for everyone.
I’m of a mind that people often don’t know what will make them small-h happy, much less capital-H Happy. But I’m also pessimistic enough to think that no committee of administrators or bureaucrats or philosopher kings will be able to come up with the secret sauce of Happiness for everyone, either. As every communist country that has ever existed attests, revolutions don’t produce any more Happiness than does liberal capitalism, with all its suffering, socioeconomic inequality, and rampant ennui (or anomie, or disenchantment). Humankind is almost certain to be miserable in one way or another no matter how the state (or lack thereof) is organized.
Chapter 8: Becoming Metamodern
Chapter 8 is pretty much just a summary and restatement of what the book already said, so I am not going to dwell on it here.
Conclusion
Jason Ānanda Josephson Storm’s Metamodernism: The Future of Theory is a well-written, thoughtful, and sensible monograph calling for a peace between modernism and postmodernism. Working through the deconstructions of postmodern thinkers is imperative if the human sciences (the humanities and social sciences) are to have a fruitful future, and charting that path forward is what Storm attempts in this book.
As Storm admits, this book is a propaedeutic or prolegomenon to what he christens metamodernism. Much of it remains quite abstract, without getting into the details of how its principles could be applied to particular disciplines in the human sciences. Working out the practical application of the metamodern program would have to be done by those within the particular disciplines.
Storm’s book functions well as a propaedeutic to the necessary project of rescuing the human sciences from the quagmire of postmodern skepticisms. Metamodernism, however, is still a young movement, and with the cynicism of postmodernism and the fanaticism of Critical Theory both rampant in the human sciences, I fear there will be a lot of resistance to these metamodernist ideas. The epistemological relativism and ethical absolutism that postmodernism and the Critical Theories have become offer a heady mix of pseudo-solipsistic self-assurance and moralistic self-righteousness that is very attractive to a post-religious society.
Although I disagree with Storm on some points, I find Metamodernism: The Future of Theory to be both erudite and lucid, thorough yet not weighed down by the jargon-laden, complicated diction one comes to expect from literature in the humanities. It’s a cool glass of water in the postmodern desert, offering real solutions to difficult problems (even if I’m not necessarily on board with some of those solutions). This book is for people like me, who find the postmodern impasse untenable yet cannot ignore the very real issues it has brought to the fore. So, if you think you’re anything like me, then I offer you both my sincere sympathy (nobody should have to be like me) and my recommendation of this book.