Part of maturing intellectually is realizing that our typical understanding of the world is most likely completely flawed.
I saw this when I began my first philosophy units at university.
When I learnt about formal logic, syllogisms and critical thinking, I was struck by how bad I was at actually evaluating and truly understanding another person’s perspective without unintentionally “straw-manning” them or yielding to common cognitive biases.
I also learned about Berkeley, Descartes, Aquinas, and many more. But the one that really stuck out to me was David Hume. My confidence in my own inductive reasoning was shattered after I learned about Hume’s argument against induction. The Principle of the Uniformity of Nature (PUN), as it is usually labelled in discussions of Hume, cannot be rationally justified, and so the question becomes: how can I know that when I get on a plane, it won’t crash? More generally, how can I rationally know that any inductive inference I make is valid? I couldn’t justify it given our typical understanding of knowledge. 1
The only justification I could give was that induction is simply the best method we have. This was the turning point in my enquiry towards a pragmatist leaning: a more probabilistic approach to inference seemed appropriate.
Introducing Bayes’ Theorem and Bayesian Inference.
Bayesian inference is a method of statistical inference in which Bayes' theorem is used to calculate a probability of a hypothesis, given prior evidence, and update it as more information becomes available. - Wikipedia
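For reference, the theorem itself, with H the hypothesis and E the evidence:

$$
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
$$

Here P(H) is the prior (what you believe before seeing the evidence), P(E | H) is the likelihood, and P(H | E) is the posterior, your updated belief.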
I found this to be the most fitting way to describe how I make decisions. But the more I understood Bayesian reasoning, the more I found that it didn’t really make sense in the context in which I was using it.
Reconciling Bayesianism
For one, a purely frequentist account of probability doesn’t work here, because we don’t come into the world with zero assumptions and no pre-conceived ways of operating in it, and we don’t simply use the frequency with which something has occurred to determine the probability of it occurring. 2 In fact, the Bayesian-like way that humans make decisions must rely on things called priors, or prior probabilities.
Fundamentally, Bayesian inference uses a prior distribution to estimate posterior probabilities.
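To make the role of the prior concrete, here is a minimal sketch using a hypothetical coin-flip example with a Beta-Binomial model (the scenario and numbers are my own illustration): two agents see exactly the same data, but because they start from different priors they reach different conclusions.

```python
from scipy.stats import beta

# Both agents observe the same evidence: 7 heads out of 10 flips.
heads, tails = 7, 3

# Agent A starts from a flat ("uninformative") Beta(1, 1) prior.
# Agent B starts from a strong built-in prior that the coin is fair, Beta(50, 50).
priors = {"blank slate": (1, 1), "built-in prior": (50, 50)}

for name, (a, b) in priors.items():
    # With a Beta prior and a Binomial likelihood, the posterior is again a
    # Beta distribution (conjugacy), so no intractable integral is needed here.
    posterior = beta(a + heads, b + tails)
    print(f"{name:>15}: posterior mean P(heads) = {posterior.mean():.3f}")
```

The blank-slate agent lands at about 0.67, while the agent with the strong prior only moves to about 0.52. Same observed frequencies, different inferences - which is why priors, and where they come from, matter.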
Given that a frequentist account is inadequate, these priors must come from structures that are built into the way that humans reason. In other words, we don’t come into the world with uninformative prior distributions. But why?
Well, a blank-slate organism would not be very efficient. For example, it would be quite problematic if mothers had to learn how to give birth. Walking, speaking and learning itself would not be possible without some prior cognitive configuration that can actualize these processes in a quick and efficient manner. 3
Similarly, having a baseline cognitive mode of being allows us to “pre-order” ways of navigating the world without paying the cost, in time and effort, of building these cognitive structures from scratch. Babies can swim, mothers give birth, we all have a fight-or-flight instinct. It is not a stretch to say that this “pre-ordering” of cognition also applies to how we perceive the world. 4
The Survivability-Perception Argument
Say I had a population and, as God, I selected out of it anyone who didn’t believe in Leprechauns. After many generations, I would likely get a population the majority of which believed in Leprechauns.
Now this might take an extraordinarily long time, and I might need an extremely large population and birth rate, but if the cognitive structures in the brain are able to configure themselves to come to the conclusion that Leprechauns exist, then it is possible for these structures to evolve so that the organism fits its environment. 5
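Here is a toy version of that thought experiment in code - entirely my own illustration, treating “belief in Leprechauns” as a single heritable trait with arbitrarily chosen survival odds and mutation rate:

```python
import random

random.seed(0)

POP_SIZE = 1_000
GENERATIONS = 30
MUTATION_RATE = 0.01  # chance a child's belief flips at birth

# Start with only 5% believers.
population = [random.random() < 0.05 for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # "God" culls the population: believers always survive,
    # non-believers survive only half the time.
    survivors = [believes for believes in population
                 if believes or random.random() < 0.5]

    # Survivors repopulate back to POP_SIZE; each child inherits its
    # parent's belief, with occasional mutation.
    next_generation = []
    for _ in range(POP_SIZE):
        parent = random.choice(survivors)
        child = parent if random.random() > MUTATION_RATE else not parent
        next_generation.append(child)
    population = next_generation

print(f"Believers after {GENERATIONS} generations: {sum(population) / POP_SIZE:.0%}")
```

Nothing about whether the belief is true enters the simulation; only the selection pressure does.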
In this example, God acts as the environment and the selection mechanism. Despite this, the population doesn’t evolve to perceive God; it evolves to perceive Leprechauns. In the same sense, we don’t evolve to perceive our environment; we evolve to perceive the things that will give us a fitness payoff.
If this is true, it implies that cognition, knowledge formation, and perception are functions of survivability and reproduction, ultimately serving to maximize a fitness payoff. 6
Donald Hoffman: Conscious Agent Theory
Donald Hoffman’s theory builds on this line of thought. Remarkably, he seems to have had the same idea and has taken it much further: in The Case Against Reality, Hoffman argues that our perceptions do not reflect objective reality; instead, they are adaptive illusions shaped by evolution to maximize fitness rather than to reveal the true nature of the world.
This theory is called Multimodal User Interface (MUI) theory.
“For any generic structure that you might think the world might have - a total order, a topology, a metric - the probability is precisely zero that natural selection would shape any sensory system, of any organism, to see any aspect of objective reality.”
This isn’t just conjecture either. Hoffman and his team’s simulations and mathematical work back this conclusion thoroughly.
Fitness beats Truth as it pertains to perceiving objective reality.
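To give a flavour of what a “Fitness-Beats-Truth”-style result looks like, here is a toy Monte Carlo game of my own construction (it is not Hoffman’s actual model, whose results come from evolutionary game theory): a resource has some true quantity, but the fitness payoff it yields is non-monotonic in that quantity, because too little or too much of something like water or salt is bad. A “Truth” strategy perceives the quantities and picks the larger; a “Fitness” strategy perceives only the payoffs and picks the larger.

```python
import random

random.seed(0)

def payoff(quantity):
    # Fitness payoff is non-monotonic in the true quantity:
    # moderate amounts are best, extremes are worth little.
    return quantity * (1.0 - quantity)

def simulate(rounds=100_000):
    truth_total, fitness_total = 0.0, 0.0
    for _ in range(rounds):
        # Two patches with true resource quantities drawn uniformly from [0, 1].
        a, b = random.random(), random.random()
        # The "Truth" perceiver sees the quantities and takes the larger one.
        truth_total += payoff(max(a, b))
        # The "Fitness" perceiver sees only the payoffs and takes the larger one.
        fitness_total += max(payoff(a), payoff(b))
    return truth_total / rounds, fitness_total / rounds

truth_avg, fitness_avg = simulate()
print(f"Average payoff, Truth strategy:   {truth_avg:.3f}")
print(f"Average payoff, Fitness strategy: {fitness_avg:.3f}")
```

The fitness-tuned perceiver never does worse and on average does better; Hoffman’s actual theorems make a far more general version of this point using evolutionary game dynamics.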
“Space and time, the thing we usually think of as our fundamental reality is just the format of our 3D desktop. Instead of a flat desktop, we have a 3D space-time desktop. And objects in 3D are merely the icons in our desktop. They’re not pointers to objective reality in any sense.”
The reality you see on your desktop interface is completely different to the reality of the actual computer generating that interface: the electron flows, resistors, capacitors and so on.
And so this implies that space-time, energy, quantum physics and so on are simply projections that are useful for our perception. This would help explain the deep discordance we see between quantum theory and relativity. According to Hoffman, rather than quantum theory and relativity emerging from space-time, space-time and these other theories emerge from something more fundamental.
But what does Hoffman think is at the fundamental level?
According to Hoffman, consciousness and conscious agents—interacting networks of subjective experiences—are the foundational elements from which the structure of our perceived reality emerges. 7
More interestingly, Hoffman describes each conscious experience as “the Universe experiencing itself through a straw”. The universe has infinitely many such traces, where each trace loops back on itself to experience a very narrow subset of reality through a different observer perspective.
Each conscious experience, then, is a partial view of an expansive "consciousness field" or network where individual conscious agents are nodes. In this framework, interactions among these conscious agents give rise to what we perceive as the material world, including the illusion of separate objects, time, and space.
I think there’s something to this for several reasons:
1. It explains various psychedelic phenomena and maybe even supernatural phenomena.
2. There is strong mathematical, philosophical and scientific backing; i.e., this is not pseudoscience.
3. It greatly comports with other Theories of Everything being developed, such as Stephen Wolfram’s “Ruliad”. You can see the two theories converge on each other at the end of this discussion.
4. It provides metaphysical backing to idealist and other consciousness-first theories developed by philosophers such as Bernardo Kastrup and David Chalmers.
5. Starting from the assumption of natural selection, we end up with a solution to the “hard problem” of consciousness.
A problem with Hoffman’s ideas is that the proofs for them aren’t very accessible. I don’t actually understand any of the mathematics underlying what Hoffman is talking about, and I’ve got a degree in engineering science. For me, and especially for the average person, you kind of have to take him at his word and trust that most other people with a highly advanced understanding of mathematics, such as Stephen Wolfram, can’t find much fault with what he is putting forward.
Chris Langan’s CTMU
I just recently stumbled across Chris Langan’s metaphysical theory, the Cognitive-Theoretic Model of the Universe (CTMU). This theory is even more puzzling than Hoffman’s, but it holds the same, if not a higher, level of intrigue.
Chris Langan reportedly has the highest IQ (between 195 and 210) in America, or even the world. This of course has no bearing on the truth of his claims, but it does increase my curiosity about them.
Here is an outline of the CTMU:
Core Premise: Reality as a Self-Processing Language. CTMU posits that reality is a self-referential, self-configuring system that can be understood as a kind of "language" capable of processing information about itself. In CTMU, reality is not separate from the rules governing it; instead, it includes a set of structural and logical rules that can dynamically evolve within itself.
One of the foundational ideas of CTMU is that reality functions analogously to a mind, in which cognition, perception, and action are interdependent and integrated. Thus, the universe is seen as a kind of “self-aware” system, capable of observing, reflecting upon, and interacting with itself in a way similar to human consciousness.
CTMU suggests that everything in reality adheres to a syntactic structure, akin to grammar in language, that determines how objects and events can relate and interact. Reality is thus seen as both the process of generating information and the information itself, which are inseparable.
Langan introduces a lot of conceptual baggage to iron out the issues in his theory, which I haven’t included here, but the overall approach is interesting.
Chris claims that the CTMU is completely logically coherent and therefore logically necessary.
But here are two problems I currently see with Langan’s model:
1. I imagine that there is more than one possible metaphysical theory that is logically self-contained and coherent, and which would therefore, by this reasoning, necessarily exist. This undermines the CTMU’s claim to exclusivity.
2. Langan’s theory relies on logic itself but provides only a self-justification for it (which is the whole point of the argument). The problem with this is that I can conceive of a world where evolution (as laid out in Hoffman’s theory) has produced logic as a fictitious perceptual tool for survival. Maybe logic does point to objective and invariant structures in reality, but maybe it doesn’t. Maybe a different kind of logic is better at pointing to these structures. Maybe the idea of a structure is fictitious in and of itself, and reality is actually completely transient in form. Or maybe there is no external reality at all and we should adopt solipsism, i.e., only my own phenomenological experience exists; logic doesn’t seem to play a necessary role here.
So, while I think that the CTMU is certainly intriguing, and definitely plausible - I am unconvinced of his strong claim that the theory is NECESSARY. Logically it might be so, but philosophically and conceptually, it isn’t.
TLDR & Conclusion
Engagement with thinkers like David Hume challenged my trust in empirical reasoning and revealed the limits of induction and the pull of cognitive biases.
Donald Hoffman's Multimodal User Interface (MUI) theory posits that our perceptions are adaptive illusions shaped by evolution rather than reflections of objective reality.
Chris Langan's Cognitive-Theoretic Model of the Universe suggests that reality may be a conscious, self-contained, self-referential system rather than an independent entity. Langan’s claim that this theory is necessary is dubitable.
These ideas raise profound questions about existence and knowledge: if our beliefs are constructed for survival, what does that imply about the true nature of reality?
Footnotes
1. Going even further, what justification do I have that the laws of nature won’t suddenly change tomorrow?
2. If there is one instance of X occurring, that doesn’t mean there is a 100% chance of X occurring in the future and a 0% chance of not-X occurring; that isn’t how our brains work. Further, calculating any kind of posterior distribution requires a highly complex integral, particularly in computing the marginal likelihood or “evidence”. Doing this calculation explicitly would be incredibly inefficient for actual cognitive inference, so we must be relying on some kind of cognitive heuristic to arrive at suitable posterior inferences.
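Concretely, for observed data D and hypothesis parameters θ, the evidence is the denominator below, an integral over every possible parameter value:

$$
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta')\, p(\theta')\, d\theta'}
$$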
3. Or even reason morally.
4. To take it out of the fantastical: it is more easily imagined if I selected out anyone who could see the color red; I would get a “color-blind” population. But really, we’re already color-blind - we can’t see ultra-violet or infra-red, for example.
5. But let’s not get ahead of ourselves here.
6. It’s clear in these cases that we don’t always believe in true things. And so survivability and fitness are at least not always correlated with truth; truth is really an emergent property of optimizing for these fitness payoffs. This raises the question: if everything we see might just be constructed by our minds in order to maximize our fitness payoffs, and if we can’t trust our senses, or possibly even our reasoning, then what actually exists metaphysically? And can we ever know?
7. This is in contrast to Stephen Wolfram, who thinks consciousness emerges from cellular automata and simple computational rulesets.