Is the Human Brain a Computer?


The popular, even ubiquitous metaphor in cognitive neuroscience is that the brain can be likened to a computer. The similarities seem obvious: neuronal activity looks binary (a neuron is either depolarized (ON) during an action potential or polarized (OFF) when inactive); our vision and hearing have many similarities to a computer’s display and speakers (indeed, the monitor is made exactly to fit the human experience of colors, shapes, etc.); humans process information (we can sit down and think through a math problem, for instance). So on and so forth. But is the “brains are computers” metaphor accurate? And if not, is adherence to this metaphor slowing down progress in neuroscience?

This post was inspired by this article.

The author of the linked article, Robert Epstein, argues that the computer metaphor, or Information Processing (IP) metaphor, ought to be abandoned. The human brain is not a computer, but a biological system that works in ways not yet understood. He uses the issue of our imperfect memory to show that we are not like computers, which possess perfect recall. A person trying to draw a U.S. dollar bill from memory is unlikely to reproduce a perfect copy of one. The drawing will probably be missing many details, will be simplified, and will likely contain inaccuracies.

I think two more issues highlight a problem with the IP metaphor for brains. The first is one that I find interesting, but also frustrating. It happens to me a lot: there is something I know, but I can’t remember it. This happens often with words, for instance. I’ll know that I know the word, and I know what it means, but I can’t recall the word. For some reason the word “arbitrary” is one for which this happens to me quite often. Stranger still, without any external stimulus, the word can sometimes all of a sudden come back to me. “Oh, yeah!” I think to myself, “the word is arbitrary!” Where, exactly, was the word when I couldn’t remember it? Clearly it was ‘there’ in some sense, because I was able to remember it without looking it up, meaning it wasn’t gone. This is different from a computer, which either has the information or it does not. A computer can’t sit there, absent a document that was lost, and recover the document without any prompt. The human brain can, though.

The second thing is a phenomenon I’m sure most people have also experienced. It’s the phenomenon of expert taste. I don’t mean just flavor, but taste in many things: art, music, math, language, and even, yes, food. Music is one that comes to my mind quite often. I’m sure most people have a genre or two that, when they listen to it, they think “how can people like this? It all sounds the same!” Many people have also probably had the experience of someone saying that to them about their own taste in music: “you like [insert genre]? It all sounds the same! And it’s all noise!” Yet, the person ensconced in a particular genre of music will be able to discern the differences and hear how each song is different, how each artist/band is different, how each album by an artist/band differs from previous ones, and even start slotting things into different subgenres, sub-subgenres, and so on. The example of music comes to my mind because I fairly recently got into extreme metal music (after being a hip-hop head my entire life) and I made sure to pay attention to my experience of being able to hear the music differently over time. Now, after about three years, I’ve gotten quite good at being able to hear a song and think “yup, that’s black metal alright.” When I first started getting into extreme metal, I wouldn’t have been able to make that distinction. A computer, on the other hand, would be great at picking out different sounds right away, if that’s what it was programmed to do.

These examples, as well as the ones offered in the linked article, tell us that there is something importantly different between the human brain and (at least current) computers. I think the IP metaphor gets pushed past its breaking point and ought to be revised, but it does not need to be thrown out altogether.

One of the things that makes a brain differ from (current) computers is that brains are plastic (in the sense of being able to change over time). When a computer chip is made, the transistors do not change over time. A particular transistor will always give the exact same output when given the exact same input. Indeed, the lithography of computer chips must happen in extremely clean environments so that the transistors are not altered by even the smallest defects. A brain, on the other hand, is recursive. When neurons receive an input, they give an output. Those outputs, which are what we would call behavior, generate new inputs. The way these inputs are processed to generate outputs is itself altered by the computation, via the strengthening or weakening of neural connections (and even the formation or pruning of connections altogether). This means that a person or animal will not give the exact same behavior in response to the same experience in every situation.
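To make the contrast concrete, here is a minimal sketch in Python (not a biological model; all names and numbers are illustrative assumptions). A fixed, transistor-like unit returns the same output for the same input every time, while a “plastic” unit whose connection weights are strengthened a little each time it is used gives a drifting response to the very same stimulus.

```python
import numpy as np

# A fixed, "transistor-like" unit: the same input always yields the same output.
def fixed_unit(x, w=np.array([0.5, -0.2, 0.8])):
    return float(w @ x)

# A "plastic" unit: each use strengthens the connections that were active
# (a crude Hebbian rule), so the response to the identical stimulus drifts.
class PlasticUnit:
    def __init__(self, n_inputs, learning_rate=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=n_inputs)
        self.lr = learning_rate

    def respond(self, x):
        y = float(self.w @ x)
        self.w += self.lr * y * x   # "fire together, wire together"
        return y

stimulus = np.array([1.0, 0.0, 1.0])
unit = PlasticUnit(n_inputs=3)

print([round(fixed_unit(stimulus), 3) for _ in range(5)])    # identical every time
print([round(unit.respond(stimulus), 3) for _ in range(5)])  # drifts as the weights change
```

The point of the sketch is only that, in the plastic case, the “hardware” and the “memory” are not separable: using the unit changes the unit.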

One of the places I think the IP metaphor is still helpful, though, is that, even if it is not happening in exactly the same way as in a computer, networks of neurons are still processing information. The brain’s network does not process the experience of an object through a one-to-one mapping between the object and some piece of its topology; instead, it performs a form of data compression. This is why our memories are inaccurate, why unfamiliar things (such as new genres of music) are experienced at a lower resolution, and how bits of information can be “lost” in the compression and then recovered (something akin to compressed sensing).
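As a toy illustration of that compression (and nothing more), consider keeping only the most prominent features of an “experience” and reconstructing from those: the gist survives, the fine detail does not, much like the from-memory drawing of a dollar bill. This is plain sparsification rather than actual compressed sensing, and the vectors are arbitrary stand-ins.

```python
import numpy as np

def compress(signal, k):
    """Keep only the k largest-magnitude components; drop everything else."""
    kept = np.argsort(np.abs(signal))[-k:]
    trace = np.zeros_like(signal)
    trace[kept] = signal[kept]
    return trace

rng = np.random.default_rng(1)
experience = rng.normal(size=100)      # a "detailed" experience
memory = compress(experience, k=10)    # the compressed trace that gets kept

# The dominant features survive; the fine detail is gone.
lost = np.linalg.norm(experience - memory) / np.linalg.norm(experience)
print(f"fraction of detail lost: {lost:.2f}")
```

A recollection rebuilt from a trace like this would necessarily be simplified and partly inaccurate.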

The brain needs to be able to change in order to learn. A computer stores what it learns by altering magnetic polarities on a hard disk (among other methods). The read/write heads read those polarities as 0’s and 1’s, which tell the computer whether or not to send a current. The current then runs into the transistors, which compute based on the input of charge. The brain could be seen as storing information not on some hard disk as “ON” and “OFF” instructions for neurons, but in something more like the topology of the transistors (i.e. how the transistors are connected).

It would be like having a computer that stores data by changing the way the transistors are connected to one another. Imagine a computer that, when you installed a new program, altered the way its transistors were connected, thereby changing the way it processed information. One could imagine how this would change the way other programs worked, since multiple programs would be using the same network of transistors. This would be akin to how new experiences change how we perceive and experience memories; it would explain how listening to a genre of music alters the very experience of listening to that genre of music.
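Here is a minimal sketch of that thought experiment, assuming a single shared weight vector standing in for the rewirable transistors: “installing” program B nudges the same connections that program A relies on, so A’s behavior shifts afterward. The inputs, targets, and learning rule are all made up for illustration.

```python
import numpy as np

def install(w, x, target, lr=0.5, steps=50):
    """'Install a program' by nudging the shared weights until input x maps to target."""
    for _ in range(steps):
        w = w + lr * (target - w @ x) * x / (x @ x)
    return w

rng = np.random.default_rng(2)
w = rng.normal(scale=0.1, size=4)   # one shared set of "connections"

x_a, target_a = np.array([1.0, 0.0, 1.0, 0.0]), 1.0    # program A
x_b, target_b = np.array([1.0, 1.0, 0.0, 0.0]), -1.0   # program B reuses one of A's connections

w = install(w, x_a, target_a)
print("A's output after installing A:", round(float(w @ x_a), 3))   # close to 1.0

w = install(w, x_b, target_b)
print("A's output after installing B:", round(float(w @ x_a), 3))   # shifted by B's installation
```

Because both “programs” run through the same connections, installing the second one changes how the first behaves, just as getting into a new genre changes how the old ones sound.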

I would be surprised if brains turned out to use the same Boolean algebra of digital logic gates in order to do computations. We don’t seem to have cracked the “neural code” the way we have the genetic code. How a certain topology of neural connections could give rise to a certain kind of thought (even if we ignore consciousness and think of the thought in a purely informational sense) has not yet been determined.

The important similarity between a computer chip with its transistors and a brain with its neurons is this: given a snapshot of a brain state, stimulated by a particular input, the output will always be the same. Brains are dynamic and therefore will not remain the same, being altered by the experiences themselves, but in this thought experiment we can think of the network of neurons as processing information in a way analogous to a network of transistors. One of the main differences, as I said, is that there is no separate hard drive on which memory is stored. When we experience a memory, it is because information is processed by the brain in a manner similar to how it was processed during the initial experience. Indeed, the evidence seems to support the model that recalling a memory is, essentially, a re-experiencing of it. The memory did not exist in some place, waiting to be shunted into the brain’s information-processing network the way memory works in a computer. It exists as a particular topology of neural connections, which allows information to be processed in a way that reproduces (in a compressed, imperfect way) the experience had when the memory was formed. The imperfection comes from the data compression itself (a brain is not the thing it is thinking of; it only generates a model or approximation) and from the fact that other experiences since the formation of that memory have reworked the particular topology produced by the original experience.
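A classic toy model of this idea is an associative network in the style of a Hopfield network; I offer it only as an analogy, not as a claim about how brains actually store memories. The pattern lives nowhere except in the matrix of connection strengths, and “recall” means re-running the network from a partial or degraded cue until it settles back into (something close to) the stored pattern.

```python
import numpy as np

def store(patterns):
    """Store +/-1 patterns purely in the connection weights (outer-product rule)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Recalling = re-processing the cue through the network until it settles."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state

rng = np.random.default_rng(3)
original = rng.choice([-1.0, 1.0], size=(1, 32))   # the original "experience"
w = store(original)                                # stored only as connection strengths

cue = original[0].copy()
flipped = rng.choice(32, size=8, replace=False)
cue[flipped] *= -1                                 # a degraded, partial reminder

recovered = recall(w, cue)
print("bits matching the original:", int(np.sum(recovered == original[0])), "/ 32")
```

There is no separate address where the memory sits; the act of processing the cue through the connections is the recall.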

It would not be all that helpful for the brain to store information like a computer. With a computer, we want the information processing to work the same way, reliably, every time. When I run a program, I want it to work exactly as it did the last time I ran it, and I want to know that when I run it in the future it will work the same as it does now. The inputs that I give that program will be the same, and I want the outputs to be reliable and predictable. A program could be defined as a set of instructions that takes one type of input and gives one type of output; programs are useful because they perform a particular function efficiently and reliably.

The brain, however, must constantly adapt to new inputs. No two experiences are exactly alike; the number of possible experiences is large enough to be considered functionally infinite. Change one tiny (though still perceptible) thing in an experience and it becomes a different experience. Additionally, the brain is not simply meant to take inputs and give outputs. It has evolved to predict possible future inputs. A program on a computer doesn’t open up and try to predict what inputs you will give it; it does not have to adapt to anything new. The brain does. Its plasticity and ability to learn is one of its defining features. If we stored memories the way a computer does, then a similar but still different experience would be like a completely new one, never experienced before, because it would not reliably map onto the memory of the earlier, similar experience. The imperfection of memory allows for greater adaptability.
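The contrast can be put in toy terms: an exact-match store (how a file or hash table behaves) finds nothing when an experience differs in one small detail, whereas a similarity-based recall still retrieves the nearest stored experience. The stored “experiences” and labels below are invented for illustration.

```python
import numpy as np

stored = {
    (1.0, 0.0, 1.0, 1.0): "walk to the corner cafe",
    (0.0, 1.0, 1.0, 0.0): "ride the bus downtown",
}

# A new experience that differs in one small (but perceptible) detail.
new_experience = (1.0, 0.0, 1.0, 0.9)

# Computer-style exact match: the key is not identical, so nothing comes back.
print(stored.get(new_experience, "no memory found"))

# Similarity-based recall: retrieve the closest stored experience instead.
keys = np.array(list(stored.keys()))
distances = np.linalg.norm(keys - np.array(new_experience), axis=1)
nearest = tuple(float(v) for v in keys[np.argmin(distances)])
print(stored[nearest])
```

An imperfect, similarity-based memory generalizes; a perfect, exact one would treat every slightly new situation as wholly unfamiliar.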

The important takeaway here is that although a computer is not a perfect analogue for the brain, the brain is still an information processor. The way the brain processes information is different from the way a computer processes information. The brain stores information via the topology of its connections rather than as strings of binary on a hard drive. Even though we don’t know the “code” or “language” of the brain’s information processing, it is still processing information.