
On the Computational Nature of Reality

The Matrix (1999)

I explain the experimental results surrounding Bell’s theorem through superdeterminism, and follow with insights into how such a universe may arise and still be compatible with the subjective experience of free will.

Preface

How is the world arranged, and what is the meaning of life? Are our fates predetermined, or do we have complete control over every action? Is there a God? These questions have puzzled philosophers for centuries. Recently, a beautiful scientific theory has emerged that can explain everything.

I’ll talk about the theory later, but first a spoiler. In the last chapter of my book (sorry, but it is in Russian only), I came to the conclusion that our universe is an evolving information system that emerged from nothing. I didn’t know exactly how that would work; I only had an intuition. Now, over ten years later, I’m ready to write a sequel. It is based on some recent discoveries in physics and mathematics, so please be patient.

Riddles of the Micro World

In 2022, the Nobel Prize in Physics was awarded for experiments demonstrating the violation of Bell’s inequalities. What are those?

Almost a hundred years ago, shortly after the founding of Quantum Mechanics, the effect of “entanglement” was discovered. Two identical particles born in the same process always acquire opposite properties. For example, a beta-barium borate crystal is capable of splitting a light photon into two with random and mutually perpendicular polarizations. Let’s send both photons to different laboratories, where Alice and Bob will pass them through horizontal filters. What will they see?

Only one of the two photons — the one that turns out to have horizontal polarization — will pass through the filter. It is impossible to predict in which laboratory this will happen. But if the experiment is repeated many times, it is easy to observe that the probability will be 50–50. Alice and Bob will see opposite results each time. Either the photon is visible to Alice, but not to Bob, or vice versa. The “disagreement rate” will be 100%.

Let’s move Bob’s laboratory to the Moon. Alice has already seen her photon (it turned out to be horizontal), and its entangled brother still has a whole second to fly to Bob. But it is already clear that his photon will not pass through the filter. And vice versa, if Alice’s photon is absorbed by her filter, then Bob’s photon will definitely get through. If we move Bob even further, to another galaxy, his photons will still somehow know exactly what Alice saw on Earth and take on the orthogonal polarization.

Experiment with entangled photons

Einstein hated such instantaneous transmission of information over any distance, and he was convinced till the end of his days that photons were “pre-arranged.” He was a determinist, believing there must be some hidden layer of reality below the scale of quantum particles. We simply do not see these “hidden parameters” and conclude that the polarization of both photons arises spontaneously at the moment of the first measurement.

Northern Irish physicist John Bell proposed a way to confirm or refute the hypothesis of hidden parameters in 1964, after Einstein’s death.

Bell’s Theorem

A vertically polarized photon is completely absorbed only by a filter orthogonal to it, i.e., a horizontal one (0°). Through a filter at an angle of 45° it passes with a probability of 50%, and at an angle of 30° with a probability of 25%. The larger the angle, the greater the probability of passing, up to 100% at 90°. In general, the probability of passing is cos² of the angle between the photon’s polarization and the filter axis (Malus’s law applied to a single photon).
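Here is a minimal Python sketch of this rule (the function name is mine, chosen for illustration):

```python
import math

def pass_probability(photon_angle_deg, filter_angle_deg):
    """Malus's law for a single photon: P(pass) = cos^2 of the angle
    between the photon's polarization and the filter axis."""
    delta = math.radians(filter_angle_deg - photon_angle_deg)
    return math.cos(delta) ** 2

# A vertically polarized photon (90 degrees) meeting filters at various angles:
for filter_angle in (0, 30, 45, 90):
    print(filter_angle, round(pass_probability(90, filter_angle), 2))
# prints: 0 -> 0.0, 30 -> 0.25, 45 -> 0.5, 90 -> 1.0
```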

Let’s make the experiment described above a bit more complicated. Alice and Bob will randomly rotate their filters before each photon’s arrival. That is, in addition to the horizontal position of 0°, we add filter angles of 60° and 120° from the horizontal. We will calculate the total probability of disagreements in such measurements.

Possible angles of polarization filters

Let’s assume that Alice set her filter at an angle of 0° and detected the first photon (i.e., it turned out to be horizontally polarized), and Bob set his filter at an angle of 60°. What is the probability that he will also see his (vertically polarized) photon? It is not zero: his vertical photon meets the 60° filter at a relative angle of 30°, and cos²(30°) = 75%. Therefore, the probability of not seeing it and getting a disagreement is 25%. Let’s calculate this probability for all combinations of angles:

Probabilities of disagreement without hidden parameters

The overall probability of disagreement is (100 * 3 + 25 * 6) / 9 = 50%.
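In this simplified model, the probability of a disagreement for filter angles a and b works out to cos²(a − b): 100% when the angles coincide and 25% when they differ by 60° or 120°. A short sketch (my own, following the counting above) that averages it over the nine combinations:

```python
import math

angles = [0, 60, 120]

def disagreement_probability(a_deg, b_deg):
    # Simplified model from the text: the probability that Alice and Bob
    # get opposite outcomes is cos^2 of the angle between their filters.
    return math.cos(math.radians(a_deg - b_deg)) ** 2

average = sum(disagreement_probability(a, b) for a in angles for b in angles) / 9
print(round(average, 3))   # 0.5, i.e. the quantum prediction of 50%
```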

Now let’s look at the situation where photons decide in advance which filters to pass through and which not to. Let’s assume that Alice’s first photon decided to pass through 0° but not through 60° or 120°, which we can write as (✓0°, ×60°, ×120°). Bob’s entangled photon, on the other hand, decided to do the opposite: (×0°, ✓60°, ✓120°). Alice again chooses to set her filter at 0°, and Bob at 60°. What will they see? That’s right, each of them will see one photon. There is no disagreement.

Here are all possible combinations of hidden parameters (there are 2³ = 8 of them, each paired with the opposite plan for Bob’s photon):

And for each row, a disagreement table can be created depending on the position of the filters. For example, for the rule (✓0°, ✓60°, ×120°) / (×0°, ×60°, ✓120°), the table looks like this:

The disagreement rate here is 5/9 ≈ 55.6%. It is possible to continue for all combinations of parameters, but most often we get exactly 5/9. On the diagonal (equal filter angles) there will always be a disagreement, just as in the case without hidden parameters. The final result: the hypothesis of hidden parameters predicts a frequency of disagreements of at least 5/9, i.e. more than 55%.
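To double-check this counting argument, here is a small self-contained sketch (my own, not taken from the Brilliant course) that enumerates all eight pre-arranged pass/block plans and computes the resulting disagreement rates:

```python
from itertools import product

angles = [0, 60, 120]
rates = []

for plan in product([True, False], repeat=3):        # Alice's photon: pass/block per angle
    alice = dict(zip(angles, plan))
    bob = {a: not alice[a] for a in angles}          # Bob's photon is pre-set to the opposite
    disagreements = sum(alice[a] != bob[b]           # exactly one of them sees a photon
                        for a in angles for b in angles)
    rates.append(disagreements / 9)
    print(plan, round(disagreements / 9, 3))

print("minimum disagreement rate:", round(min(rates), 3))   # 5/9, about 0.556
```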

Which theory corresponds to reality? Nobel laureates Alain Aspect, John Clauser, and Anton Zeilinger conducted just such experiments and convincingly showed that the frequency of disagreements is close to 50%. Bell’s inequalities are violated; it seems there are no hidden parameters.

For those who struggle to understand the above, I recommend Sabine Hossenfelder’s interactive course on Brilliant, from which I borrowed the illustrations. Or watch this video: https://youtu.be/ZuvK-od647c

The Loophole of Superdeterminism

The violation of Bell’s inequalities confirms the generally accepted Copenhagen interpretation of Quantum Mechanics, which postulates that elementary particles exist in all possible states at once (superposition) until we observe them and randomly assume definite parameters only at the moment of the first measurement. This interpretation violates the principle of locality, which states that the influence of one object on another must be mediated by something and cannot propagate faster than light.

The Copenhagen interpretation sharply contrasts with the previously discovered and well-functioning deterministic laws of nature, such as classical mechanics, thermodynamics, and the theories of relativity. Most scientists had accepted this state of affairs even before the experimental verification of Bell’s inequalities. Quantum entanglement acts instantly at a distance, Einstein was wrong, and God plays dice. Is the question settled?

Nobel laureate Gerard ‘t Hooft does not think so. In the early 1980s, he proposed a loophole that allows the hidden-variables hypothesis to survive even when Bell’s inequalities are violated. Testing these inequalities relies on the statistical independence of the filter settings and the polarization of the emitted photons. But what if such independence is impossible in principle?

Even if Alice’s filter is controlled by photons from one distant galaxy and Bob’s filter is controlled by photons from another, there is a possibility of their correlation with each other and with the measured entangled photons. Can this happen in reality? Perhaps, if all the elements of our universe were initially entangled from the moment of the Big Bang!

To illustrate the fundamental mechanism of his “superdeterminism” theory, Gerard ‘t Hooft used the idea of cellular automata. The simplest example, which many are familiar with, is John Conway’s “Game of Life.”

This cellular automaton demonstrates the evolution of patterns on a two-dimensional field as a result of applying simple rules to each cell. Some initial states lead to rather complex structures:

Gosper’s glider gun
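For readers who want to play with it, here is a minimal Game of Life step function (a common textbook sketch, not ‘t Hooft’s or Wolfram’s code), storing live cells as a set of coordinates:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life for a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a glider
for _ in range(4):                                  # after 4 steps it reappears, shifted diagonally
    state = step(state)
print(sorted(state))
```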

It is not possible to base our real universe on a similar automaton, even a three-dimensional one: it presupposes space a priori, in the form of cells that can change their value. We need an information structure of a more general form, not limited in advance by any parameters. Such a structure exists in mathematics and is called a hypergraph.

Wolfram Physics

The name of Stephen Wolfram is well known to many thanks to his computational software packages Mathematica and Wolfram Alpha. He is a brilliant physicist, mathematician, and programmer all in one. Stephen published his first paper on Quantum Field Theory at the age of 15 and received his PhD in theoretical physics from Caltech by the age of 20, mentored by Richard Feynman himself. The two later worked together on the Connection Machine, one of the first massively parallel computers.

Like Gerard ‘t Hooft, Wolfram began studying cellular automata in the early 1980s and dedicated 20 years of his life to the topic. In 2002, Stephen published his research in the book A New Kind of Science. Its main conclusion: repeated application of simple computational rules can generate systems of great complexity. This echoes Alan Turing’s universal machine, capable of computing any algorithm.

In April 2020 Stephen launched the Wolfram Physics Project, with the aim of explaining the structure of our universe as an infinite process of simple calculations — the evolution of a hypergraph. Full information about this theory is available on the website, and progress can be tracked through social networks, YouTube, and even Twitch. I will only mention the basic principles and the first successes.

The main cornerstone is the concept of the Ruliad, an abstract space of all possible rules that can be applied to an equally abstract hypergraph. Here is an example of one such rule:

Rule: {{x, y}, {x, z}} -> {{x, y}, {x, w}, {y, w}, {z, w}}

It involves finding two edges that share a vertex, removing one of them, adding a new vertex, and connecting the new vertex to the three existing vertices with new edges. Below is an example of applying this rule to the simplest graph five times in a row:

The complexity grows very quickly, and after just 15 iterations, our graph looks like this:

Here we have an ordinary graph as a special case of a hypergraph, in which every edge connects exactly two vertices rather than an arbitrary number of them. I will use the term “graph” from now on for brevity. Visualizing the graph as dots and arrows is not necessary; it is easier to manipulate sets of edges and vertices directly. Here is an example of one state (the numbers do not mean anything and can be replaced with any tags): {{1, 3}, {1, 5}, {2, 5}, {3, 5}, {1, 4}, {1, 4}}.
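As an illustration, here is a minimal sketch (mine, not the project’s actual code) that stores the graph as a list of edges and applies the rule above at the first matching pair of edges; the real project applies rules at all non-overlapping matches:

```python
from itertools import count

def step(edges, fresh):
    """Apply {{x, y}, {x, z}} -> {{x, y}, {x, w}, {y, w}, {z, w}} once,
    at the first pair of edges that share their first vertex."""
    for i, (x1, y) in enumerate(edges):
        for j, (x2, z) in enumerate(edges):
            if i != j and x1 == x2:
                w = next(fresh)                      # brand-new vertex
                rest = [e for k, e in enumerate(edges) if k not in (i, j)]
                # keep {x, y}, drop {x, z}, connect w to x, y and z
                return rest + [(x1, y), (x1, w), (y, w), (z, w)]
    return edges                                     # no match: nothing changes

fresh = count(2)                                     # labels for new vertices
state = [(1, 1), (1, 1)]                             # seed: one vertex with two self-loops
for _ in range(5):
    state = step(state, fresh)
print(state)
```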

According to Wolfram’s idea, these vertices are the building blocks of our physical space. Just as water consists of discrete molecules, space is discrete and consists of graph vertices. Wolfram calls them “atoms of space.” The distance between two atoms corresponds to the number of graph edges that separate them (not their length, which is arbitrary in the visual representation of the graph). And just as stable vortices can form in water, ever more complex stable patterns can form and move in such a discrete space: from photons and quarks to atoms and molecules.
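This notion of distance is simply graph distance, i.e. the length of the shortest chain of edges between two atoms. A small sketch (breadth-first search over the example state from above; the function name is mine):

```python
from collections import defaultdict, deque

def distance(edges, a, b):
    """Number of edges on the shortest path between two 'atoms of space'."""
    neighbours = defaultdict(set)
    for u, v in edges:
        neighbours[u].add(v)
        neighbours[v].add(u)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in neighbours[node] - seen:
            seen.add(nxt)
            queue.append((nxt, d + 1))
    return None   # the two atoms are not connected

state = [(1, 3), (1, 5), (2, 5), (3, 5), (1, 4), (1, 4)]
print(distance(state, 2, 4))   # 3: the path 2 -> 5 -> 1 -> 4
```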

Another important concept of Wolfram Physics is called “computational irreducibility”. Each step in the evolution of the graph of the universe requires computing over all points in space, adding atoms and modifying bonds. This is how the arrow of time appears. For an observer existing inside the system, it is impossible to accurately compute this evolution faster than it happens. It is also impossible to “roll back” all changes and return to the past. Laplace’s demon can’t exist.

Does this mean that the laws of the universe are unknowable? Not at all. Wolfram cites gas dynamics as an example. We cannot measure and calculate the trajectory of each molecule, but we have learned to predict the aggregate properties of a gas, such as temperature and pressure, with sufficient accuracy. It is the same with all the laws of physics: we find regularities in the simple but frequent irreducible computations of the universe and learn to predict them with the help of complex and smooth formulas and theories. Jonathan Gorard, a young math prodigy on the project’s team, has already published papers on its compatibility with both special and general relativity, as well as with quantum mechanics.

A skeptic would say, “Well, such a simple object cannot be at the foundation of everything!” This simplicity is deceptive. In the illustration above, I showed the application of one of the rules in one of the possible ways. Already at the first step, either the right or the left edge can be erased. Mathematics cannot make decisions or flip a coin. The process goes deterministically in both directions, branching the universe into two possibilities, but not losing the connection between them (unlike the hypothesis of a multiverse), because branching sometimes converges back to a common state. And this happens at each step of the computation.

Now imagine that not just one, but all possible rules are applied in all possible ways to the hypergraph. We will get a very, very, very rapidly growing universe, with all possible topologies of space and resulting laws of physics. In some small sector of the Ruliad, patterns have evolved into intelligent life. This is a version of the Anthropic Principle for an absolutely deterministic universe.

I am not trying to say that some branch of the graph just randomly created the Earth and all its biosphere. There was a bootstrapping process of more and more complex “subroutines” (see the definition of consciousness in the next chapter) that bred, multiplied, and propagated the process. And yes, one of the rules can be: when there is nothing, create a vertex with two self-loop edges to start the process. Or some other seed rule.

Fun fact: a hypergraph can also be represented as an incidence matrix, with a row per vertex and a column per edge.
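A minimal sketch of building such a matrix for the example state from above (my own helper, for illustration):

```python
def incidence_matrix(edges):
    """Rows correspond to vertices, columns to edges; an entry is 1
    when the vertex belongs to that edge."""
    vertices = sorted({v for e in edges for v in e})
    return vertices, [[1 if v in e else 0 for e in edges] for v in vertices]

state = [(1, 3), (1, 5), (2, 5), (3, 5), (1, 4), (1, 4)]
vertices, matrix = incidence_matrix(state)
for v, row in zip(vertices, matrix):
    print(v, row)
```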

If you are completely skeptical about Wolfram’s ideas, you are not alone. Please continue reading, because this article does not depend on a particular implementation of the deterministic universe. A hundred years ago we would have been talking about a mechanical clockwork underlying all of reality; today we use IT vocabulary to describe the same idea. The main pushback has always been that such a universe deprives us of free will, a claim I set out to disprove.

On Randomness and Predictability

Determinism is often defined as “everything has a cause, and hence everything that will happen is predictable”. The “hence” here is wrong: if nothing can compute the events faster than they unfold, they are truly unpredictable.

There is another philosophical definition of determinism: would we make all the same decisions if the whole universe were replayed again from scratch? A positive answer implies the absence of free will. In practice, there is no point in such a “replay,” just as there is no external observer standing over the process. All elements of nature, plants, animals, and humans, advance the process of evolution with their actions along an unknown trajectory.

Another common misconception is that under determinism there is no room for chance. Also not true. The universe can be based on probabilities as a whole, yet individual elementary particles will not have independent distributions. This adds a minor detail to the Copenhagen interpretation: all stochastic processes of elementary particles are interconnected.

Any programmer knows how difficult it is for a computer to generate random numbers. Algorithms for such calculations are complex, yet randomness is still impossible without information from the outside world. A “seed” value, such as the computer’s hardware clock, is needed to launch a pseudo-random sequence.
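A toy illustration (a classic linear congruential generator, not any particular library’s implementation): the output looks random, but the seed fully determines it.

```python
import time

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A classic linear congruential generator: deterministic once seeded."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

clock_seeded = lcg(int(time.time()))            # the "outside world" supplies the seed
print([round(next(clock_seeded), 3) for _ in range(5)])

# Re-seeding with the same value replays exactly the same "random" sequence:
g1, g2 = lcg(42), lcg(42)
print(all(next(g1) == next(g2) for _ in range(1000)))   # True
```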

Likewise, it is not possible to achieve true randomness in a deterministic universe. Our world is one huge clockwork mechanism, where the states of all matter particles are correlated. One giant generator of pseudo-randomness. Without an unentangled conscious observer, such a universe would be dead.

On Consciousness and Free Will

Consciousness is a difficult concept to grasp. It relates to the ability of an isolated system to perceive patterns in the surrounding environment and respond to them. Human consciousness perceives the external world through our various senses to animate our material body. While the matter in our bodies is entangled with the rest of the world, our minds are not.

It was a real feat of engineering to enable such a setup. Our brains and nerves work much more slowly than the irreducible computations of the universe, so we form a completely unique and independent model of the world inside our minds in order to comprehend it. All our senses are involved in building this picture, and we constantly calibrate our models, among other things, by communicating with other people.

Not only is our timescale different; the physical principle of our brain’s operation is different too. Therefore, thoughts inside our minds, themselves deterministic computations, are isolated from the true universe in which we were born. In IT terms, our material brain acts like a firewall, separating an internal network (our mind) from the external one (the hypergraph). Each person really exists in their own separate internal world, and this is not a metaphor.

Cartesian theater

The matter of our body/brain is entangled only at the quantum level. At the scale of neurons, quantum fluctuations average out and do not affect our consciousness. Such a structure enables free will in a superdeterministic universe.

There is an objective reality available to us in sensation. There is an independent conscious observer. There is no predictability of events, no predetermination of fate. Everything that happens is the cause and effect of our actions. And all that can be said about probabilities is that they are drawn from a finite space. On average, everyone is lucky and unlucky in the same way, and that does not depend on anything (or, equivalently, depends on everything at once).

Imagine your life as playing chess with the universe. Every move you make is influenced by all the available information. You know that in this chess program there is a pseudo-random element. Does that make it less interesting for you? No. When you can’t choose between two good moves you flip a coin (using the same pseudo-random generator). Does that restrict your freedom? No. You flip a coin when you want to take chances!

This explains the paradox with Alice and Bob in Bell’s experiments. Despite having complete free will, they adjust the filters in such a way that after numerous measurements the disagreement rate converges to 50%. In other words, their actions are correlated in just the right way with the polarization of the photons arriving at each filter. Numerous similar experiments have been conducted, both with live lab assistants and with random number generators based on quantum effects. All results, with varying degrees of reliability, indicate a violation of Bell’s inequalities. The quantum entanglement of a person with the rest of the world determines their seemingly random actions.

Is this the Theory of Everything?

No. Despite the plausibility of Wolfram’s theory, I don’t think it can become the Theory of Everything, nor even a theory of Quantum Gravity in the scientific sense. For that, it would need to evolve to make precise predictions about reality, but its own principle of computational irreducibility makes this practically impossible. This is what sets Wolfram’s theory apart from the rest of science: mathematics here does not describe reality, it is reality!

To create even one electron in this model, the team needs to guess the actual rule(s) of our segment of the Ruliad and simulate the first moments of the Big Bang. This will require a very large computer, because an electron is about 10²⁰ times larger than the Planck length (the resolution of discrete space in Wolfram Physics). The team does study the properties of the theory to come up with “coarse-graining” methods of predicting reality, which may yet bear fruit.

The computational nature of reality is not a scientific theory in the strict sense of the word, as it is not falsifiable. It is impossible to propose an experiment that would provide numerical results to refute it. In this sense, it is similar to the belief in a God who created our Universe. You can believe it or not, but it is impossible to prove conclusively.

However, there is at least one scientific experiment that, with a high degree of reliability, shows that everything in the world is correlated. This is the Global Consciousness Project at Princeton University. Since 1999, random number generators based on quantum effects, scattered around the globe, have been collecting statistics on their mutual correlation. And this correlation inexplicably jumps during periods of emotional events, such as September 11, 2001, or the onset of the COVID-19 pandemic.

Although not a scientific theory in the strict sense, Wolfram’s theory agrees well with experiments and explains the fundamental causes of many mysterious phenomena in physics. We have already talked about quantum entanglement’s “spooky action at a distance,” which it explains. The Big Bang and inflation appear as the hypergraph’s growth at the beginning of the evolution of the universe, followed by a slowdown of this growth as space collapsed into matter, and an acceleration again today (dark energy), when most of the stars are burning out but the space between them keeps growing.

Wolfram’s physics explains dark matter as a “proto-substance” that has mass but does not yet interact with anything else, and vacuum energy as the hypergraph’s underlying ability to turn empty space into fields and particles. The slowdown of time near massive bodies is explained by the higher local density of computations relative to empty space. This dilation of time causes gravity, and not vice versa.

It is worth mentioning the observer effect in quantum mechanics experiments. Since the experimenter or measuring device is entangled with the experimental object in a certain way, it is not surprising that the observed picture ceases to be as stochastic as it is when no one is looking.

Is this God?

That depends on what you mean by the word. Yes, if you mean an evolving process that created our world and gave birth to everything, including people with free will; this process continues today through our hands and minds. No, if you mean an omnipotent being that sees everything and can grant wishes.

At the same time, one can consider God as the immutable laws of probability operating in our world. The theory of the computational nature of reality, where the entire world is one big pseudo-random number generator, easily explains stochastic experiments.

Galton Board
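A minimal simulation of the Galton board pictured above (my own sketch): a fully deterministic, seeded pseudo-random generator still reproduces the familiar bell-shaped statistics.

```python
import random

def galton(rows=12, balls=10_000, seed=42):
    """Each ball makes `rows` left/right choices; bin counts approach a
    binomial (roughly normal) distribution despite the deterministic PRNG."""
    rng = random.Random(seed)
    bins = [0] * (rows + 1)
    for _ in range(balls):
        bins[sum(rng.randint(0, 1) for _ in range(rows))] += 1
    return bins

for position, n in enumerate(galton()):
    print(f"{position:2d} {'#' * (n // 50)}")
```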

On the Meaning of Life

The hypergraph has no memory and knows no future. It changes with each computational step and exists only in one moment: now. Our role is to be an integral part of this process, this game called “Life”: to observe and study its regularities, to directly influence its further evolution, and to gain various experiences while gradually advancing towards an unpredictable future.

The only truly random and astonishing thing in this process is how we, humans, come into existence. From nothing, a new self-awareness. Since it happened to you once, it must be happening all the time. “Reincarnation” should be understood exactly like this: to spontaneously appear in the world and realize that it is happening again and again.

Imagine an experiment. In the future, humans have learned to clone people and completely replicate consciousness. You wake up after such an operation and see your doppelgänger. Who is the original and who is the copy? You are indistinguishable, and there is no one nearby to tell you. You must admit that the second person is also you. You just cannot see the world through their eyes. But you know for sure that their perspective perfectly matches yours. Now, imagine that your consciousness has been transplanted into the body of a dissimilar clone, replicating all neural connections. Again, it’s still you!

Everything that distinguishes us from another person is the content of the memory accumulated since birth: experience. There is innate specificity, but in general, people are the same fruits of a single nature. Fruits of the Earth’s biosphere. In IT language, this is called parallel computational processes in a common operating system. Fragments of a single consciousness. Like a colony of ants.

We are constantly born into this world and live simultaneously in different bodies. Each “I” experiences its own perspective. In this sense, our life is eternal, and karma is collective. We constantly reap the fruits of what we ourselves have sown. What world are we leaving for our future selves? Unlike ants, we have learned selfishness and are incapable of perceiving ourselves as one with the universe. But we still continue to fulfill our main task: the evolution of Life.

In conclusion, the main question. In which “computer” is the real code of our universe executed? This is the same paradox as with the turtles that held up the ancient Earth: an infinite stack of them all the way down. Like Wolfram, I am certain that a pure mathematical abstraction needs nothing to run on.
