The unsolvability of the mind-body problem enables free will


Jan Scheffel, Professor at KTH Royal Institute of Technology, Sweden, argues that the unsolvability of the mind-body problem enables free will

In 1976, an astounding book was published in which the author claims that humans are mere vehicles for their genes in the genes’ perpetual pursuit of reproducing themselves. You might believe that it is your own or your group’s survival that you are striving for, but the author finds that the basic units of your DNA use your body and your mind for their reproduction. In “The Selfish Gene”, evolutionary biologist Richard Dawkins dramatically outlined the state of the art of evolutionary research [1].

The view of humans as vehicles or machines raises the classical question of free will. In the course of evolution, some sophisticated species have developed consciousness, and quite a few human minds feel that they can make free choices. Obviously, we are not machines in the traditional sense – electromechanical, deterministic robots with pre-programmed brains. But we are, it seems, slaves to the properties and laws of nature – at least if we take a reductionistic, scientific view of things.

Consciousness and freedom of the will are classical problems that have attracted many philosophers, dating back to Descartes and beyond. Why, then, have these problems not been solved yet? Clearly, we first need some sort of understanding of the mind, of consciousness, to make progress. But consciousness itself – in particular, subjective first-person experiences like pain, lust and longing – is extremely elusive, and no theory of consciousness has yet seen the light of day.

The mind-body problem concerns how the mental capacities of the mind can arise from, and interact with, purely physical matter in the brain. It also relates to religion, which typically assumes a dualistic view in which the mind is distinct from matter.

Interestingly, the mind-body problem is of even greater interest today. As we move irreversibly into an era of artificial intelligence, we wonder whether the robots we construct will have cognitive properties equal, or superior, to our own.

Consciousness cannot be understood

In my research, I have found that a reductionistic theory of subjective consciousness cannot be constructed. What is it, then, that prevents neurobiology from explaining the high-level conscious and subjective experiences that we all know to be real, in terms of the low-level neuronal activity among the brain’s 80 billion neurons?

One should not in any way diminish the immense historical efforts of philosophers to explain consciousness and to assess whether its will is free. However, to reach my conclusions, it was necessary to employ recent advances in neurophysiology, information theory applied to the quantum-mechanical description of reality, nonlinear dynamics and the concept of emergence.

Subjective, first-person conscious experiences cannot be reduced to the low-level neural phenomena on which they supervene, since consciousness is ontologically emergent. This means that features of consciousness, such as thoughts and feelings, have properties that are “more complex than the sum of their constituents”. Let me illustrate with the thought experiment “The Jumping Robot”. Assume that a number of robots of a particular kind are deployed on an isolated island. All robots are programmed to walk freely around the island and perform certain tasks. If a robot becomes more efficient by performing a certain action, it should ‘memorise’ it and ‘teach’ it to the other robots.

Assume now that the robots’ movements are partially initiated by sequences of numbers generated by so-called discrete logistic equations. Algorithmic information theory says that although the numbers are generated deterministically, there is no way of predicting them beforehand. As the sequence develops, these numbers eventually become emergent. At this point, quantum mechanics shows that not even a computer the size of the universe could predict them; they appear as a “surprise” to nature. Assume also that we want the robots to be able to jump without falling. The complexity involved is substantial; each robot consists of a large number of joints, muscles and other body parts that must be coordinated. The coordination must also take into account that part of the robot’s actions are related to emergent numbers generated internally by the logistic equation. After numerous unsuccessful attempts, including numerical modelling on high-performance computers, the task is given up; the robots cannot be taught to jump.
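The practical unpredictability of the discrete logistic equation is easy to see for oneself. The sketch below iterates the standard logistic map, x(n+1) = r·x(n)·(1 − x(n)); the choice r = 4 (the fully chaotic regime) and the initial values are illustrative assumptions, not taken from the article. Two trajectories that start a millionth apart soon differ by an amount of order one, so no finite-precision measurement of the present state lets you forecast the sequence far ahead.

```python
def logistic_sequence(x0, r=4.0, steps=60):
    """Iterate the discrete logistic map x -> r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two almost identical starting points (difference: one part in a million).
a = logistic_sequence(0.200000)
b = logistic_sequence(0.200001)

# Largest separation reached over the 60 steps: despite the fully
# deterministic rule, the tiny initial difference is amplified until the
# two trajectories are effectively uncorrelated.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)
```

This is only the epistemic half of the story, of course – deterministic chaos shows why prediction fails in practice, while the article's stronger claim of ontological emergence rests on the quantum-mechanical and information-theoretic arguments in [2].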

Having left the robots on the island to themselves for some time, we return. To our surprise, we now find that some robots also make their way by jumping over obstacles. However, no theory can predict this emergent behaviour at a high level. There is no magic involved; rather, we see similarities to random mutations in the genome of an individual organism, producing improved characteristics through evolution. Reasoning similar to that for The Jumping Robot can be transferred to the neural processes of the brain [2]. Consciousness, like jumping for the robot, is too complex to be understood at a low level.

Free will – a consequence of the mind as an ontologically open system

If, on the other hand, a detailed theory of consciousness could be designed, then its behaviour would in principle be computable, or could be simulated deterministically. There would thus be little room for free will.

To provide a scientifically based, and testable, answer to the free will problem, the common but vague characterisation “to be able to act differently” needs improvement [3]. Consciousness needs to be “ontologically open”. By this, we mean a causal high-level system whose future cannot, even in an a posteriori sense, be reduced to the states of its associated low-level systems, not even if the system is rendered physically closed. The independence conferred by emergence is not in itself sufficient to grant conscious processes the required degree of independence.

Ontological openness leads to downward causation; there are irreducible processes of the mind that are not controlled by low-level neural activity. This enables us to make choices that we, from a low-level perspective, are not forced to make. Does this sound strange? How can we possibly free ourselves from the determinism that science shows prevails at the low level? This can be understood by returning to The Jumping Robot. If we were to attempt to predict what a particular robot on the island would do next, even if we had access to all information about the robot’s present state, we would likely fail badly, since jumping is not in our model.

This epistemologically emergent property evades any theoretical or numerical modelling. A sufficiently complex robot would also have ontologically emergent features, in which case there is no one-to-one relation between its downwardly caused actions and the bits and pieces it is made of. But wouldn’t this robot then also have free will? Well, there are a few other conditions that need to be satisfied; I can only refer you to [2-3] for the details.

In summary, we have found that a reductionistic scientific understanding of subjective conscious processes is not possible; emergence stands in the way. Ontological emergence, in turn, leads to ontological openness, which enables downward causation and free will. The theory can, owing to the proposed downward causation, be tested experimentally. With these tools, we may now move on to explore some really interesting implications: for example, robot ethics, the status of dualism and the relation between science and religion.

References

[1] Richard Dawkins, “The Selfish Gene”, Oxford University Press, 1976.

[2] Jan Scheffel, “On the Solvability of the Mind-Body Problem”, Axiomathes (2020) 30:289–312.

[3] Jan Scheffel, “Free Will of an Ontologically Open Mind”, PhilPapers (2020), https://philpapers.org/rec/SCHFWO-4.

*Please note: this is a commercial profile

Contributor Profile

Professor
Division of Fusion Plasma Physics, KTH Royal Institute of Technology, Stockholm, Sweden
Phone: +46 8790 8939
Email: jans@kth.se

1 COMMENT

  1. Great post. However, I don’t quite agree with the conclusion that no theory can predict the robots’ emergent behavior. I don’t think the right question is how likely it is that the robots’ complexity would arise from the initial conditions they were given to work with, but rather whether it can. Very, very unlikely events are almost impossible to model and predict because, from an algorithmic information-theoretic viewpoint, finding a representation/model that is both shorter than and descriptive of that event or state is extremely hard.

    So I think it’s better to ask whether it’s possible that the robots learn to jump given their initial state, before asking how probable it is that they do. The answer to the first question tells you whether a high-level system can be reduced to its associated low-level components, while the answer to the second question gives you a measure of the complexity of a model of that high-level system.
