Unsolvable Problems: Hyperobjects and Cognitive Closure

*[Image: Kardashev scale]*

Possibly the guiding principle of modernity is that any problem can be solved if people just put their minds to it. Science and liberalism have been astonishing successes in raising the standard of living, in an objective sense, for more people than at any other time in history. People like Steven Pinker love to wax optimistic about how Enlightenment values and scientific progress have made the world an objectively better place to live than ever before, with the implication that things will only get better. But is this really true?

I’ve made no secret on this blog that I am a philosophical pessimist. I’m sure my outlook is much gloomier than most people’s. But let’s try examining things in an objective way. Are there problems that are impossible to solve, or at least so difficult as to be functionally impossible, that humans will eventually have to face up to?

I see two ways in which a problem can be unsolvable. The first is due to what Timothy Morton calls a hyperobject – an object so massively distributed in time and space as to transcend spatiotemporal specificity. Morton gives as examples of hyperobjects: “the biosphere, climate, evolution, capitalism.” There is also a Vsauce video that discusses this concept of a hyperobject.

The other sort of unsolvable problem is what Owen Flanagan and Colin McGinn would call cognitive closure. This is the notion that there are simply some things our human minds – evolved as they are for survival rather than truth-finding – will remain constitutionally incapable of understanding. These mysterious things include: “consciousness, the self, meaning, free will, the a priori, and knowledge.”

To me it seems that hyperobjects are problems humans, given our evolved intelligence and the trend of our technological and scientific progress, might in principle be able to solve, even if it turns out we can’t in practice. Cognitive closure, on the other hand, concerns problems that may be impossible in principle for humans to understand with just our evolved intelligence and the trend of technological and scientific progress.

One of the issues with hyperobjects is that they require the ability to think in terms of complexity and dynamical systems, where sensitivity to initial conditions and multiple, non-linear variables are at play. Science is great at isolating individual variables and measuring the outcome of manipulations of those variables. But as our failures to understand the climate and the economy attest, science, in its current form, ends up falling short when faced with these hyperobjects.

Another issue that may hinder humankind’s ability to address hyperobjects is human nature itself. Our irrationality and incentives may be such that the problems will never be completely fixed, or that our solutions will be such that they only generate new hyperobjects.

With cognitive closure this same non-linear, chaotic behavior may also be at play, as in consciousness, but the problem is likely more ontological than it is one of complexity. Even if we map the entire connectome of the brain, solve whatever “algorithm” it uses to form and prune connections, come up with a sort of “neural code” in the same sense as the genetic code, and perhaps even recreate something on a computer that (at least acts like it) is conscious, there would still remain the perennial hard problem: what actually is consciousness? Why is it that matter organized in a way that satisfies the “neural code” and connectome becomes conscious? What, ontologically speaking, are qualia (e.g. what, ontologically speaking, is the experience of the color red)? What is it like to have some other kind of consciousness – things like the umwelt and Nagel’s “What Is It Like to Be a Bat?”

With cognitive closure, it is not that no answer exists. It’s simply that the human brain is constituted in such a way that it will never be able to understand the phenomena (or, perhaps more accurately, the noumena).

I suppose I don’t have much hope for either of these two types of insoluble problems. I find the problems that run up against cognitive closure more interesting to think about than hyperobjects, although I’m aware that addressing hyperobjects has far greater practical stakes for human survival and well-being. I see extropianism as the only avenue for even approaching the possibility of truly solving either type of problem. Until then, thinking about them offers a nice distraction from what will likely be the downfall of our species.