Consciousness, the Brain, and Josh Rasmussen’s Counting Problem

Consciousness is one of the biggest philosophical questions we know of. David Chalmers distinguishes two problems of consciousness: the easy problems and the hard problem. The former, while not easy per se, are much more tractable than the latter. The easy problems are the ones that neuroscience, cognitive psychology, and related fields already pay attention to, with questions such as: what are the neural correlates of perception, memory, belief, cognition, emotion, intuition, behavior, etc., and how can they be manipulated? How do thought and perception occur, and how do they go wrong (e.g., cognitive biases, perceptual illusions)? How do thoughts and perceptions influence behavior? How are thoughts and behaviors shaped by biology and culture? Is the brain like a computer? And so on. These are all questions that we are relatively confident can be answered within the purview of these fields. Even if the answers are difficult to find, we can be confident that they exist and will eventually be discovered. The hard problem, however, is essentially this: how is it that non-conscious physical matter can give rise to conscious experience?

When contemplating the hard problem, we are asked to think about two possible worlds. Both of these worlds are exactly like our own in all physical ways. Every single subatomic particle has evolved according to the Schrödinger equation in exactly the same way, collapsing to the exact same observable state upon every “measurement” undertaken. Indeed, one of these two worlds could just be our actual world. The only difference between them is this: in one, which we can call W1, all of the humans (and perhaps other non-human organisms, but this is unimportant for our analysis) are conscious. In the other world, which we can call W2, all of the humans are not conscious. They are what are called philosophical zombies – these humans in W2 talk and behave in exactly the same way as the humans in W1, even claiming to be conscious (indeed, the humans in both possible worlds have spoken aloud all the exact same words). The people in W2 have the lights on, but nobody is home.

This thought experiment is meant to interrogate issues of consciousness, two important ones being:

  1. Is it even possible that the zombie world could exist?
  2. What is actually different about the two worlds?

To the first question, it appears that it is logically possible for the two worlds to exist (although there are issues with what is called the conceivability argument; see the SEP entry on philosophical zombies for more on that). In other words, no contradiction can be found in the conjunction “all physical descriptions are exactly the same and consciousness does not exist,” no contradiction in saying that both W1 and W2 are possible, and no contradiction in saying that either one of the two possible worlds is the actual world WA. The issue that arises, then, is why W1 = WA instead of W2 = WA? (Here, and throughout this article, I am using the equal sign “=” to mean “exactly alike in all ways and in no ways different” and not necessarily that the two are one and the same.)

To the second question, we have already stipulated that, physically speaking, ω1 = ω2. (I will use the symbol ωi when discussing the full physical description of a world Wi). But, since W1 has consciousness C (where I will use C to mean “the complete description of everything that is necessary and sufficient for consciousness to exist and consciousness actually exists”) and W2 does not have consciousness, we know that, in fact, W1 ≠ W2 (but that W1 = W2 + C, or conversely that W1 − C = W2). And thus, if it really is the case that the two worlds are logically possible, and that physically speaking ω1 = ω2, then it must be that consciousness does not supervene on the physical. Or, at the very least, that consciousness must be some sort of epiphenomenon, an incidental and non-causal “nothing” just sort of coming along for the ride.

There are, broadly speaking, four approaches to rectifying this problem (of the conjunction ω1 = ω2 ∧ W1 ≠ W2).

  1. The physical descriptions of W1 and W2 must be incomplete descriptions of everything that is necessary and sufficient for consciousness. Or, put simply, there must be some non-physical (perhaps spiritual or panpsychist) state of affairs σ such that ω1 + σ = W1 while ω2 = W2 with ω1 = ω2.
    • We might conversely think that, even if ω1 is a complete description of the state of affairs for W1, then there must be something λ missing from ω2, i.e., ω2 − λ = W2 with ω1 = ω2 (I use λ to indicate that it is not necessarily the case that λ = σ). This approach would be equivalent to either 1 or 2 in this list, so I will not consider it as a separate approach.
    • It might be argued that panpsychism is not adding some additional σ onto our description of the physical world – that, in fact, a complete description of any physical ωi is not (physically) possible without panpsychism, in the same way that a complete description of our actual physical world ωA would not be possible if we left out, say, electric charge (i.e., having λ be electric charge in any ωA − λ description of our physical world). This may be the case physically speaking, but it is not logically impossible (as far as we know) to say that any ωi can be complete without consciousness (however, see 2 and 3 below). As such, I am saying that panpsychism is a candidate for our σ.
  2. It must be that ω1 ≠ ω2. In other words, consciousness is supervenient on the physical, but we are just wrong that we can have ω1 = ω2 while simultaneously W1 ≠ W2. This could perhaps be because we are missing some key aspect about the physical such that, if we took it into consideration, we would then see that it is, in fact, impossible for ω1 = ω2 while simultaneously W1 ≠ W2.
    • Panpsychism could be another candidate here, such that ω1 “has” panpsychism (whatever that might mean) and ω2 does not, and so it must be that ω1 ≠ ω2 since it just is the case that ω1 = ω2 + C.
    • Identity theory might also be a candidate here, since this says that C is an (improper) subset of ωi (i.e., C ⊆ ωi) for any ωi (possibly improper because, conceivably, it could be the case that physical stuff just is mental stuff, and so we maintain the identity). Thus, the only explanation for W1 ≠ W2 is that ω1 ≠ ω2.
    • Orchestrated objective reduction may also be a candidate, which says that we are wrong in our thinking about ω1 since we are missing a key fact about quantum mechanics. In other words, some quantum effect is the overlooked C component of ω1.
    • Property dualism would also be a candidate for this, where the “mental property” is the overlooked C component of ω1.
    • Integrated information theory might also be a candidate, where Φ is the overlooked C component of ω1.
  3. It must be that, given ω1 = ω2, W2 is not possible (i.e., ¬◇W2 ∀(ωi, ωj | ωi = ωj)). Put another way, there is some logical contradiction in the conjunction ω1 = ω2 ∧ W1 ≠ W2, but we are perhaps too limited in some way to discern this contradiction, or have not fully considered every possible inference that can be made from the conjunction to see if it leads to a contradiction.
    • We might say that panpsychism is necessary, i.e., that for all complete physical descriptions of a possible world ωi it must be the case that consciousness C is part of that description (i.e., that ∀◇ωi ⇔ ◻(C ⊆ ωi), again using the improper subset symbol because, conceivably, consciousness could be the only thing that exists).
  4. It must be that if ω1 = ω2, then (perhaps necessarily) W1 = W2 (i.e., ω1 = ω2 ⇒ W1 = W2). This seems like a contradiction, since it is essentially saying that “a world in which consciousness exists is a world in which consciousness does not exist” (since we defined W2 as a world in which consciousness does not exist), but this is essentially the thesis of eliminative materialism. I think this position has serious flaws, but we could modify the claim to say that our conception of consciousness is flawed or incorrect in some important way, a way that would make W1 and W2 either indistinguishable from one another, or else make both descriptions of the worlds incorrect in some way as it pertains to what we mean by consciousness, as opposed to our description of the physical (i.e., what we call consciousness is not actually consciousness, or we can only describe consciousness analogically or equivocally rather than univocally, or something of this sort).
    • New mysterianism might be an example of this.
    • As mentioned, eliminative materialism is also a candidate.
    • Now, it may seem strange putting new mysterianism in a category adjacent to eliminative materialism, but I think the latter is similar to the former insofar as eliminative materialism says that our “qualia” are some sort of illusion and therefore inexplicable (because who is under such an illusion?). Indeed, the Stanford Encyclopedia entry on eliminative materialism says:
      • “Modern versions of eliminative materialism claim that our common-sense understanding of psychological states and processes is deeply mistaken and that some or all of our ordinary notions of mental states will have no home, at any level of analysis, in a sophisticated and accurate account of the mind.” [bold added]

What we are looking at, then, is whether the conjunction ω1 = ω2 ∧ W1 ≠ W2 is incomplete in some way (some σ must be added or λ removed to make it complete), or whether ω1 = ω2 must be wrong in some way we have not discovered or considered, or whether it must be that ¬(ω1 = ω2 ∧ W1 ≠ W2) is the case for some logical reason we have not thought of or considered yet, or whether we are simply wrong about what we think consciousness even is.

The first approach is one that commits itself to dualism (or pluralism of some degree), and so I will refer to it as the dualist position. One of the issues facing this approach is that it must postulate a greater ontological complexity, i.e., that there are more substances than just the physical that we are all intimately familiar with. The proponent of dualism will say that we are intimately familiar with the non-physical as well, in the form of qualia, perhaps by pointing to (for instance) the knowledge argument. Another problem faced by this position is the interaction problem: how is it that the two (or more) different substances interact? For instance, how does the mental substance interact with the physical substance? And if this interaction were to occur, it would have to break physical causality, which is something that is in principle measurable (as some ghostly F = ma), yet has never been measured. Another problem is that the mental appears to be dependent on the physical – changes in the brain cause changes in the mental, but any changes in the brain can be accounted for in a purely physicalist description (there is no ghostly F = ma that causes neurons to fire; they fire only in response to physical stimuli).

The physicalist approaches (approaches two, three, and four from above, which I will call the incomplete physicalism or IP, logical necessitarianism or LN, and new mysterianism or NM approaches, respectively) have their own issues. One of the biggest is one I’ve already mentioned, which is the knowledge argument. The Stanford Encyclopedia entry on it puts the problem this way:

Frank Jackson (1982) formulates the intuition underlying his Knowledge Argument in a much cited passage using his famous example of the neurophysiologist Mary:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’.… What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.

The argument contained in this passage may be put like this:

(1) Mary has all the physical information concerning human color vision before her release.

(2) But there is some information about human color vision that she does not have before her release.

Therefore

(3) Not all information is physical information.

Most authors who discuss the knowledge argument cite the case of Mary, but Frank Jackson used a further example in his seminal article: the case of a person, Fred, who sees a color unknown to normal human perceivers. We might want to know what color Fred experiences when looking at things that appear to him in that particular way. It seems clear that no amount of knowledge about what happens in his brain and about how color information is processed in his visual system will help us to find an answer to that question. In both cases cited by Jackson, an epistemic subject A appears to have no access to particular items of knowledge about a subject B: A cannot know that B has an experience of a particular quality Q on certain occasions. This particular item of knowledge about B is inaccessible to A because A never had experiences of Q herself.

Much ink has been spilled on these thought experiments. I am not going to go into objections and refinements of the argument here (if you are interested, definitely check out the link to the Stanford Encyclopedia provided above). Instead, I wanted to focus on a newer and less well known problem raised against any sort of physicalist approach: the counting problem. Josh Rasmussen, in the linked paper, puts it this way:

P1: For any class (or plurality) of physical properties, the ps, there is a mental property of thinking that the ps are physical.
P2: There are more classes of physical properties than there are [individual] physical properties.
P3: Therefore, there are more mental properties than physical properties. (1, 2)
C: Therefore, not every mental property is a physical property.

We can think about it this way: I can think that p is physical and I can think that q is physical. But I can also think that p and q together are physical. Thus, there are more ways to think about the two particular physical things p and q than there are physical things p or q. This is Cantor’s theorem, which says that for any set A, the cardinality of A is strictly less than the cardinality of the power set of A, i.e., card(A) < card(𝒫(A)). In our example, we have the set A = {p, q} and the power set 𝒫(A) = {∅, {p}, {q}, {p, q}} (the empty set is essentially the mental state of not thinking about either p or q), where we can see that 𝒫(A) contains more members (has a higher cardinality). As it turns out, for any finite set of cardinality n, the power set will always have cardinality 2^n.
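To make this concrete, here is a minimal sketch in Python (the three labels p, q, r standing in for physical properties are purely illustrative) showing that the power set of a finite set always outnumbers the set itself:

```python
from itertools import chain, combinations

def power_set(items):
    """Return all subsets of `items` (including the empty set)."""
    items = list(items)
    return list(chain.from_iterable(
        combinations(items, k) for k in range(len(items) + 1)
    ))

# Three stand-in "physical properties" (purely illustrative labels).
physical = {"p", "q", "r"}
subsets = power_set(physical)

# Cantor: card(A) < card(P(A)); for a finite A, card(P(A)) = 2**card(A).
print(len(physical))                        # 3
print(len(subsets))                         # 8
print(len(subsets) == 2 ** len(physical))   # True
```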

We can think of this as a mapping Ψ from the set of all the possible physical brain states X to the set of all the possible mental states Y

Ψ : X → Y

And so, if card(X) < card(Y), then Ψ could at best be injective and could not be surjective: there would be some Y0 = Y \ Ψ(X), with Y0 ⊂ Y, and whatever lies in Y0 would be the non-physical (where Y = YI ∪ Y0, with YI ⊆ Y being those mental states with pre-image elements in the domain X and Y0 ⊆ Y being those without pre-image elements in the domain X, such that YI ∩ Y0 = ∅). But are we justified in thinking that card(X) < card(Y)?
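As a toy illustration of this set-up (the states and the mapping below are invented stand-ins, not claims about the real Ψ), we can see how a map from a smaller X into a larger Y necessarily leaves a remainder Y0 with no physical pre-image:

```python
# Toy brain states and mental states (labels are purely hypothetical).
X = {"x1", "x2", "x3"}
Y = {"y1", "y2", "y3", "y4", "y5"}

# A hypothetical Psi: each brain state is assigned one mental state.
Psi = {"x1": "y1", "x2": "y2", "x3": "y3"}

image = {Psi[x] for x in X}   # Psi(X): the mental states that are "reached"
Y0 = Y - image                # mental states with no physical pre-image

print(image)  # contains y1, y2, y3
print(Y0)     # contains y4, y5 -- the "non-physical" remainder if card(X) < card(Y)
```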

Let’s think about the cardinality of Y. How can we quantify it? First, let us say that a mental state is a complete description of conscious experience over some time interval Δt (because mental states occur in finite, nonzero time; more on this later), including all senses, emotions, thoughts, and so on. We can therefore say that, if you are in a brain state xi ∈ X that corresponds to a particular set of conscious sensations, emotions, thoughts, etc., then changing even one of those things in any consciously noticeable way (say, instead of smelling coffee you are smelling tea, but all else is the same) gives us some xj such that xi ≠ xj and xi, xj ∈ X, since

Ψ : xi → yi  ∧  Ψ : xj → yj,   where   yi ≠ yj ⇒ xi ≠ xj

Even a small change in, say, vision would yield a different mental state. For instance, if just one part of your visual field (the “smallest part” of the visual field you can resolve) is off by just the smallest amount that can be noticed (it is said that humans can detect about a million different colors), then that is a different mental state.

Human visual acuity is calculated by

s = d×tan(θ)

Where s is the size of objects we can distinguish, d the distance from our eyes, and θ one arcminute (1/60 of a degree). If we say that, on average, a human is looking at things about 3 meters away, then the size we can distinguish is about 0.0009 meters (about 0.9 millimeters), which we can round up to 1 millimeter. With a visual field of around 180 degrees, when looking at things 3 meters away, we then have the hemispheric surface 2πr^2 = 2π(3 m)^2 ≈ 57 m^2 as our average visual field size. We can thus see around 57 m^2 / (1×10^-6 m^2 per thing) = 57×10^6 things up to 3 meters away. Now, the minimum number of photons needed to register something is 2 to 7 (we’ll call it 5 on average), and so we will say that it is 5 photons per thing that we see (which is around 285×10^6 ≈ 3×10^8 photons total). With a million different colors that the human eye can detect, we can therefore have a total possible number of visual states of 1×10^6 × 57×10^6 ≈ 6×10^13 different visual states. Of course, this is leaving out different intensities of light, which can range over all the 57×10^6 possible things we can see. Let’s say, for the sake of argument, that this increases the number of possible visual states to 1×10^15 different visual states (about 1 quadrillion, which I think is being very generous).
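The back-of-the-envelope arithmetic above can be reproduced with a short script; the inputs (3 m viewing distance, one arcminute of acuity, a million distinguishable colors) are the same rough assumptions made in the paragraph:

```python
import math

d = 3.0                                  # viewing distance in meters (assumption)
theta = math.radians(1 / 60)             # one arcminute of visual acuity
s = d * math.tan(theta)                  # smallest distinguishable size at 3 m
print(f"{s * 1000:.2f} mm")              # ~0.87 mm, rounded up to ~1 mm above

area = 2 * math.pi * d ** 2              # hemispheric visual field at 3 m, ~57 m^2
things = area / (1e-3 ** 2)              # ~5.7e7 distinguishable ~1 mm^2 patches
colors = 1e6                             # ~1 million distinguishable colors
visual_states = colors * things          # the (very conservative) product estimate
print(f"{things:.1e} patches, {visual_states:.1e} visual states")
```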

We then have to do the combinatorics with all other perceptions, feelings, thoughts, and so on. For instance, we have to range over all 1×10^15 different visual states while smelling coffee and then all 1×10^15 different visual states while smelling tea, and so on (not to mention all the intensities at which we can smell). The total number of distinguishable smells is also apparently about 1 trillion. If we then account for different smell intensities, let’s also call this an even 1×10^15 different smell states. With just vision and smell, then, we are up to 1×10^30 different mental states. Let us assume, for the sake of argument, that all five senses have around 1 quadrillion possible states each, meaning that the number of possible combinations of different perceptual states is (1×10^15)^5 = 1×10^75 perceptual states.

The adult brain has around 100 trillion (1×10^14) neural connections. If we assume each neural connection has, at any given time, two possible states (firing and not firing), we then have 2^100,000,000,000,000 ≈ 10^30,000,000,000,000, i.e., roughly 10^(10^13) (ten to the power of ten to the power of thirteen) possible brain states. Of course, not every neural connection is necessary for someone to have a conscious state (parts of the brain can be damaged or removed and someone can still be conscious), but even if we reduce this down to a billion neural connections (down by five orders of magnitude) that are necessary (and maybe sufficient) for consciousness, we still have roughly 2^1,000,000,000 ≈ 10^300,000,000, i.e., roughly 10^(10^8) (ten to the power of ten to the power of eight) different brain states that are necessary (and maybe sufficient) for consciousness.
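Numbers like 2^(10^14) cannot be written out directly, but their sizes as powers of ten follow from a logarithm; a quick sketch, assuming (as above) one binary firing/not-firing state per connection:

```python
import math

log10_2 = math.log10(2)                          # ~0.30103

connections_total = 1e14                         # ~100 trillion neural connections
connections_conscious = 1e9                      # assumed subset needed for consciousness

# Exponent of ten for the number of binary firing patterns:
exp_total = connections_total * log10_2          # ~3.0e13: 2^(1e14) ~ 10^(3x10^13)
exp_conscious = connections_conscious * log10_2  # ~3.0e8:  2^(1e9)  ~ 10^(3x10^8)

print(f"2^(1e14) ~ 10^{exp_total:.3g}")
print(f"2^(1e9)  ~ 10^{exp_conscious:.3g}")
```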

If we take this 10^(10^8) to be the cardinality of our domain of possible physical states X, then this, to me, seems like many more than is necessary for all our possible mental states. Even if, upon accounting for all other components of conscious mental states like emotions, thoughts, and so on, we get our possible mental states much, much higher – let’s say, for the sake of argument, 1×10^10,000 = 10^(10^4) possible mental states once everything is considered (given that with just perceptual states we had 1×10^75, I think this is being quite generous) – this is still much less than our 10^(10^8) possible brain states that are important for consciousness.

Indeed, if anything, our function Ψ might even need to be surjective, with many possible brain states (on average roughly 10^(10^8)/10^(10^4), which is still about 10^(10^8)) mapping to each mental state.

Ψ : {x1, x2, …, xN} → {y1, y2, …, yM},     N ≫ M

This would mean that any two given people (or the same person at different times) could possibly have the exact same mental state (sensing all the exact same things while experiencing the exact same emotions, thoughts, and so on). Even if such a thing is possible, it may well be improbable.

What this means is that we then need to account for all the possible conscious beings that could ever exist. It is predicted that the universe will last about 100 trillion (10^14) years. It’s thought that there may be as many as 10^30 planets in the universe. Let’s say that an average planet lasts for about 10 billion (10^10) years before being destroyed or becoming completely uninhabitable. That means, over the lifespan of the universe, around 10^30 × (10^14/10^10) = 10^34 planets will come into existence. Estimates for the number of possible future humans run around 625 quadrillion (6.25×10^17). Let’s say that every planet that will ever exist in the universe will have this many conscious beings – in fact, for the sake of argument, let’s go up an order of magnitude and call it an even quintillion (1×10^18). We then have 10^34 × 10^18 = 10^52 conscious beings that will ever exist in the universe. Let’s also assume, for the sake of argument, that any given mental state lasts for a millisecond (the next mental state might be extremely similar to the prior one, but we would still count it as a separate mental state), and so, if we assume that the average human lives 100 years, we end up with approximately 10^12 mental states per person over their entire lifetime. And so, if we multiply the number of possible conscious states for a single person by the number of beings that will ever exist and by the number of mental states each will have over their life, we get 10^10,000 × 10^52 × 10^12 = 10^10,064 possible conscious states. This still pales in comparison to our 10^(10^8) possible brain states that are necessary for conscious experience.
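Because everything here is a product of powers of ten, the bookkeeping reduces to adding exponents. A sketch using the same rough assumptions as the paragraph above (every input below is one of those assumptions, not an established figure):

```python
# Work with base-10 exponents throughout, since the raw numbers overflow floats.
planets_at_a_time = 30       # 10^30 planets (assumption from above)
universe_lifetime = 14       # 10^14 years
planet_lifetime = 10         # 10^10 years per planet "generation"
beings_per_planet = 18       # 10^18 conscious beings per planet (generous assumption)
states_per_lifetime = 12     # ~10^12 millisecond-scale mental states per life
possible_mental_states = 10_000   # 10^10,000 possible mental states (generous assumption)

planets_ever = planets_at_a_time + (universe_lifetime - planet_lifetime)  # 10^34
beings_ever = planets_ever + beings_per_planet                            # 10^52
total = possible_mental_states + beings_ever + states_per_lifetime        # 10^10,064

brain_states = 3e8           # exponent for 2^(10^9) ~ 10^(3x10^8) brain states

print(planets_ever, beings_ever, total)   # 34 52 10064
print(total < brain_states)               # True: 10^10,064 << 10^(3x10^8)
```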

The upshot here is that, even if it is impossible for two brain states to map onto the same mental state – such that no two (or more) people (or the same person at different times) can occupy the same location within the space of all possible mental states, i.e., even if we need a bijective mapping between brain states and mental states – it is still extremely unlikely that all such possible brain states will be realized in the lifetime of the universe, and thus every mental state can conceivably be unique. But this itself would be a fascinating result, as it would mean that the age-old question “is your blue the same as my blue?” would likely need to be answered in the negative. In fact, “my blue” at time t1 and “my blue” at time t2 might not even be the same, since it is unlikely that I would revisit the same brain states while experiencing those (presumably) same, or very similar, mental states. In other words, if Ψ is bijective instead of surjective, then if qb is the qualia of blue and qb ∈ yi for some mental state yi, then

Ψ : xi → yi  ∧  Ψ : xj → yj,   with   xi ≠ xj ⇔ yi ≠ yj

Which means that the two mental states are not (and cannot be) the same. Certainly this could just mean that what differs lies in the other perceptions, emotions, thoughts, etc. – what we’ll call the complement of the blue qualia, r ⊂ yi, with qb ∪ r ⊆ yi and qb ∩ r = ∅ – but it could also be that the qualia for blue in each mental state, qb,i ∈ yi and qb,j ∈ yj, are not the same: qb,i ≠ qb,j.

Now, if Ψ is a surjection, then we can have

Ψ : xi → yi  ∧  Ψ : xj → yj,   with   xi ≠ xj ⇒ (yi ≠ yj ∨ yi = yj)

And so, two different people (or the same person at two different times) with two different brain states xi ≠ xj can have the same mental state (including qualia) yi = yj. It would not necessarily have to be the case, but it would be far more likely than if Ψ were bijective.

The space of all possible brain states can be thought of as a μ-dimensional discrete topological space with topology τ, so that we can say T = (Xμ, τ) ⊗ Δtmin. Every point in the space, then, is a member of our set, xi ∈ Xμ ⊗ Δtmin, and can be given coordinates

xi = xi(x1, …, xμ) ⊗ xi(Δtmin)

We’ll say that the μ dimensions of this space are binary, where xk = 0 when the kth neural connection is in the “not firing” state and xk = 1 when it is in the “firing” state. We then see that the dimensionality of T depends on the exact number of neural connections in a brain, and thus the dimensionality will be different for different people (and for the same person at different times during their life). This adds a great deal of complexity, but for simplicity we can say that μ = 10^9 (the billion neural connections we have said are necessary for consciousness) for all humans.

You will notice that there is also a tensor product with xi(Δtmin) in our space of all possible brain states. This is because of something I’ve neglected to talk about up until now, which is that an instantaneous brain state is almost certainly not mapping onto an instantaneous mental state. The human brain is of finite, non-zero size, and so cannot communicate with itself faster than the speed of light (and in fact communicates much slower than the speed of light). As such, a brain state xi must occur over some time span greater than or equal to some interval that I am calling Δtmin, i.e., it must be that Δt ≥ Δtmin for a brain state xi to be mappable to a mental state yi:

Ψ : xi(x1, …, xμ) ⊗ xi(Δtmin) → yi

The other thing to notice is that there are brain state neighborhoods such that xi can be close to (or far away from) some other brain state xj. We can also stipulate that if xi and xj are close and

Ψ : xi → yi

Ψ : xj → yj

Then yi and yj are also close (i.e., the mapping is continuous). What this means is that two brain states that are close in T result in two mental states that are close in Y (note that if Ψ is a surjection, then this does not necessarily have to hold). This would make sense since, between some time t1 and another time t2, the evolution of our mental state is nearly continuous (nearly, because T is discrete, but it is very dense and so it could “appear” continuous). In other words, as Δt = t2 − t1 approaches zero (or at least approaches Δtmin), we also get that Δx = ||xj − xi|| approaches zero, and so likewise Δy = ||yj − yi|| approaches zero.
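As a toy model of this continuity claim (the weighted-sum Ψ below is an invented stand-in, not a proposal about how the brain actually works), we can represent brain states as binary vectors, measure closeness by Hamming distance, and check that flipping a single connection only nudges the output:

```python
import random

mu = 20                                   # toy dimensionality (stands in for ~10^9)
random.seed(0)
weights = [random.random() for _ in range(mu)]

def psi(x):
    """Toy stand-in for Psi: a weighted sum over which connections are firing."""
    return sum(w for w, bit in zip(weights, x) if bit)

def hamming(a, b):
    """Number of connections in which two brain states differ."""
    return sum(1 for p, q in zip(a, b) if p != q)

x_i = [random.randint(0, 1) for _ in range(mu)]
x_j = list(x_i)
x_j[3] ^= 1                               # flip one connection: a "close" brain state

print(hamming(x_i, x_j))                  # 1
print(abs(psi(x_i) - psi(x_j)) <= max(weights))  # True: close inputs, close outputs
```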

It is possible to have sudden changes in Δy = ||yj − yi|| such that Δy/Δt becomes large (think of being surprised or startled by something), but could it be so sudden that Δt is effectively zero and so Δy/Δt blows up? I would say no because, again, for a mental state to occur we need Δt ≥ Δtmin. Thus, any Δt < Δtmin would not register as a mental state.

The continuity of Ψ is just a consequence of determinism. Any theory of free will would need to postulate that Ψ is discontinuous (the discontinuity of Ψ is a necessary, but not sufficient, condition for free will). It might be argued that quantum probability would make Ψ discontinuous. This, I would argue, would only make Ψ locally stochastic – if, say, the threshold of some kth connection going from the “firing” state xk = 1 to the “not firing” state xk = 0 (or vice versa) is within some range on the order of a quantum uncertainty, this might cause a non-deterministic (nd) change Δxnd = ||xj − xi|| with xj, xi ∈ U for some neighborhood U ⊂ T, where card(U) ≪ card(T), since it is astronomically improbable that for every (or even a great many) k there is a non-deterministic change in xk during Δtmin (or, indeed, even in some Δt ≫ Δtmin on the scale of a human life). We can thus think of the course of a human life tracing out some “blurry” path in T, where the “blurriness” of the path is perhaps some ±δψnd(t) such that

Ψ : [xi(x1, …, xμ) ⊗ xi(Δtmin)] ± δψnd(t) → yi ± ε

The counting problem is still not defeated. We can define a mapping A from each yi to zi, the aboutness or intentionality of the mental state yi.

A : yi → zi

We can think of it this way: a mental state yi is about something; it has subjective contents. This is a hallmark of consciousness. And so, intuitively, it must be that A is bijective, i.e., each mental state has its own intentionality. This means that we can argue for card(Y) ≥ card(T) in the following way.

For each physical brain state xi ∈ T there is a possible mental state yi ∈ Y that is about xi. In other words, there is a mental state yi ∈ Y that has a thought

gli ∈ yi ∈ Y

where gli can be a single thought gl that ranges over multiple fi such that yi = fi + gl (a single thought gl can accompany multiple perceptual and emotional states fi = yi − gl).

We can consider a thought gli that is about the physical brain state xi ∈ T. Thus each of the 10^(10^8) possible brain states has at least one associated thought gli and therefore its own associated mental state. To think about it more concretely, we would say that there is a gl1 that is the thought “all 10^9 neural connections necessary for consciousness are in the not-firing xk = 0 state,” and then a gl2, “10^9 − 1 neural connections necessary for consciousness are in the not-firing xk = 0 state with connection 1 in the firing xk = 1 state,” and then a gl3, “10^9 − 1 neural connections necessary for consciousness are in the not-firing xk = 0 state with connection 2 in the firing xk = 1 state,” and so on through each combination, up to the 10^(10^8)-th thought, “all 10^9 neural connections necessary for consciousness are in the firing xk = 1 state,” which takes us through all 10^(10^8) of our possible brain states.
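The enumeration described here is just the enumeration of all binary strings of length μ, with one descriptive thought composed per string. A sketch for a toy μ (the argument above uses μ ≈ 10^9):

```python
from itertools import product

mu = 3   # toy number of connections (the argument above uses ~10^9)

def thought_about(state):
    """Compose the descriptive thought gl_i for one complete brain state."""
    return " and ".join(
        f"connection {k + 1} is {'firing' if bit else 'not firing'}"
        for k, bit in enumerate(state)
    )

states = list(product([0, 1], repeat=mu))   # all 2^mu possible brain states
thoughts = [thought_about(s) for s in states]

print(len(states), len(set(thoughts)))      # 8 8: one distinct thought per state
print(thoughts[0])                          # "connection 1 is not firing and ..."
```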

We could use this same thinking on the real numbers as well, since a mapping α (I will use the Greek symbol alpha when talking only about the mapping of thoughts gli to their aboutness zi, rather than complete mental states yi) can take thoughts to real numbers (where ℝ is the range of our zi ∈ Z):

α : gli → ℝ     ∀ i ∈ ℝ

And so there must be an uncountably infinite (2^ℵ0) number of thoughts gli (which would blow up our number of mental states yi, since there can be multiple yi = fi + gl associated with each gl). And it is easy to see that 2^ℵ0 ≫ 10^(10^8). However, it is with the numbers that we can see where this runs into problems. This is because each number, even transcendental numbers, is represented by repeated applications (or concatenations) of elements from a set with finite cardinality, namely the set

N = {n | n = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

The reason is that it would be (at least physically) impossible, as I will argue below, to simultaneously hold ℵ0 elements in a single thought (much less 2^ℵ0 elements; and, as discussed below, even finite numbers over a given size), and so such large thoughts must be constructed from spatial and/or temporal combinations of smaller thoughts (such as the number 69,420,666,800,842 being made from elements of our set N).

And so, even if we are to think about, say, the entirety of the number π (assuming that thinking of an infinitely large object is possible; more on this in a bit), this would be done by repeated application of each n ∈ N, where each application of, say, n = 4 is an identical gli such that

(α : gli → 4 ∧ α : glj → 4) ⇒ gli = glj   ∀ i, j

This is because it is impossible to simultaneously think about π in its entirety (at all, but especially in some suitable Δtmin in order to register it as a thought) without abstraction to a simpler thought. In other words, it is impossible to simultaneously hold every digit of π in one thought without simply abstracting it and calling it “π”.

Similarly, it is impossible to hold every single xk = 0 ∨ 1 at the same time in a single thought gl in any suitable Δtmin. We would therefore need to perform repeated applications of the thought “connection xk is in state 0” or “connection xk is in state 1” for all 1 ≤ k ≤ 10^9 in order to obtain a thought about the entirety of a single brain state xi. Each such thought gl is identical, such that (with b being either 0 or 1)

(α : gli → (xk = b) ∧ α : glj → (xk = b)) ⇒ gli = glj   ∀ i, j

These thoughts about an entire brain state are therefore not unique. There are two reasons this works. The first we already encountered: any thought must occur within some Δt ≥ Δtmin. In other words, we cannot think infinitely fast, and so any thought about a complete brain state must be done in finite time, with each simple thought (composing the complex thought) taking time Δt ≥ Δtmin. The more important reason, however, is that no brain has an infinite capacity to hold a single thought. What this means is that, to think about something beyond the capacity of a single thought, it must be done by repeated application of a finite number of simpler thoughts.

Indeed, by one estimate the human brain can store, in total, up to about 2.5 petabytes (2.5×10^6 gigabytes, or 2×10^16 bits) of information. This means that we have an upper limit on the information H that can be held in a single thought of Hmax = 2.5 petabytes, which is far short of how much information would need to be in a single thought in order to hold a simultaneous thought about a complete brain state, as opposed to repeated applications of simpler thoughts gli with

H(gli) ≤ Hmax for every gli,   and   H(gl1) + H(gl2) + … + H(glM) ≤ Hmax for any M thoughts held at once

But, if instead we say that a single thought is constrained by working memory, which is often said to be able to hold around 7 “chunks” of information, then we have H'max < Hmax. If each chunk were a single binary digit, that would be only 7 bits, i.e., 2^7 = 128 possible contents for a single, simultaneous thought. Even if we argue that our working memory is not in binary, and say that each of those 7 chunks can hold a digit between 0 and 9, that gives 10^7 possible contents (about 23 bits). Further, if each of the 7 chunks can hold a letter as well as a digit, that gives 36^7 ≈ 8×10^10 possible contents, or roughly 36 bits, for H'max.
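The chunk arithmetic can be checked directly; the alphabet sizes per chunk are the same illustrative assumptions as above, and the final comparison is against the roughly 10^9 bits needed to specify one complete brain state of 10^9 binary connections:

```python
import math

chunks = 7                                    # ~7 "chunks" in working memory

for label, symbols in [("binary", 2), ("digits", 10), ("alphanumeric", 36)]:
    contents = symbols ** chunks              # distinguishable contents of one thought
    bits = chunks * math.log2(symbols)        # information such a selection carries
    print(f"{label:>12}: {contents:>12} contents ~ {bits:5.1f} bits")

# Specifying one complete brain state of 10^9 binary connections takes ~10^9 bits,
# vastly more than even the most generous per-thought estimate above.
print(1e9 / (chunks * math.log2(36)))         # ~2.8e7 times larger
```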

This is all being very generous to the size of a single thought in working memory, but the point is that, even with a generous overestimate of H'max, we are still falling far short of being able to hold a simultaneous thought about an entire brain state (which, at one bit per connection, would require on the order of 10^9 bits just to specify), forcing us to think about it through repeated application of simpler thoughts, each instance of which is identical to the others. The upshot of this is that there is no mapping

α : gli → zi

For any gli such that H(gli) > H'max, since any such thought would need to be thought using M repeated applications ⊕ of simpler thoughts gsj with H(gsj) ≤ H'max:

gli = gs1 ⊕ gs2 ⊕ … ⊕ gsM

Where card({gsj}) ≪ card(T).

In the limit, as H(gsj) approaches H(gli) (in other words, going from H(gli) > H(gsj) to H(gli) = H(gsj)) for some thought gsj, the thought gsj would have to become identical with the object Θ that the thought is about, zj(Θ). In other words, to think about Θ fully and simultaneously would simply be to be the object Θ. But then it ceases to be a thought about the object, zj(Θ), and simply becomes the object Θ. What this entails is that thoughts are always models or approximations of the objects they are about, and thus contain less information than those objects, meaning that H(gli) < H(Θ).

The question then becomes: what is the nature of Ψ? It might be alleged that even calling this a mapping is committing us to dualism – the domain and codomain are two separate “entities” or “substances” of some kind. This need not be the case, though, since Ψ could be something like a transformation (e.g., a Fourier transform). But either way, it seems like answering what Ψ is just is the hard problem of consciousness: how can it be that something completely physical like T = (Xμ, τ)⊗Δtmin gives rise to (seemingly non-physical) mental states Y = {yi}?

Approach 1 (dualism) from above would likely say that T is incomplete. We would need, for instance, some σ such that

Ψ : [xi(x1, …, xμ) ⊗ xi(Δtmin) ⊗ σ] ± δψnd(t) → yi ± ε

The other approaches (approaches 2, 3, and 4; or, as I am calling them, the incomplete physicalism or IP, logical necessitarianism or LN, and new mysterianism or NM approaches, respectively) would attempt to avoid having to append σ to our domain. IP might simply say that T fails to capture something we do not yet understand about the physical: think of Penrose and Hameroff’s orchestrated objective reduction, which says some mysterious aspect of quantum mechanics explains how Ψ can take non-conscious physical T and map it to a mental yi. LN would argue that Ψ is necessary, i.e., that given T = (Xμ, τ) ⊗ Δtmin, it is logically necessary that yi exists, and so

yi ⇔ ∃(Ψ : T → yi)    ∀ xi ∈ T ∧ ∀ yi ∈ Y

Which means that

T ⇔ ∃(Ψ : T → yi)    ∀ xi ∈ T ∧ ∀ yi ∈ Y

The NM approach would say, perhaps, that we do not, and perhaps cannot, know the nature of any yi ∈ Y. We might know, for instance, that T is sufficient (and perhaps necessary) for any yi ∈ Y, but we might instead need some σ appended to Y rather than to T, such that

Ψ : [xi(x1, …, xμ) ⊗ xi(Δtmin)] ± δψnd(t) → yi ⊗ σ ± ε

Where σ is unknown, and possibly unknowable. Or it may just be that whatever Ψ is doing is unknowable, like a solution to a differential equation that cannot be solved.

Any of these three non-dualist approaches might, as I suggested earlier, treat Ψ as a transformation, sort of like a Fourier transform. What I mean by this is that Y might just be some “other way” of thinking about T: it is just T in the “consciousness domain,” or T in some new basis. This would certainly gel with identity theory (that mental states just are brain states).

[Figure: Approaches to the Hard Problem of Consciousness]

Concluding Remarks

I’m certainly not going to claim that I have solved the hard problem of consciousness here. I am not even so bold as to claim that I have ruled out or narrowed down any of the four approaches to thinking about the hard problem of consciousness. If there is one main takeaway from this analysis, it is that Josh Rasmussen’s counting problem is not a defeater of any strictly physicalist approach to the hard problem of consciousness. The issue then is that, if my rebuttal here is successful, it only means that the hard problem remains as hard as it ever was.