The unsolvability of the mind-body problem enables free will


Jan Scheffel, Professor at KTH Royal Institute of Technology, Sweden, argues that the unsolvability of the mind-body problem enables free will

In 1976, an astounding book was published in which the author claims that humans are mere vehicles for their genes in the genes’ perpetual pursuit of reproducing themselves. You might believe that it is your own survival, or your group’s, that you are striving for, but the author finds that the basic units of your DNA use your body and your mind for their own reproduction. In “The Selfish Gene”, evolutionary biologist Richard Dawkins dramatically outlined the state of the art of evolutionary research [1].

The view of humans as vehicles or machines raises the classical question of free will. In the course of evolution, some sophisticated species have developed consciousness, and many human minds feel that they can make free choices. Obviously, we are not machines in the traditional sense – electromechanical, deterministic robots with pre-programmed brains. But we are, it seems, slaves to the properties and laws of nature – at least if we take a reductionist, scientific view of things.

Consciousness and freedom of the will are classical problems that have attracted many philosophers, dating back to Descartes and beyond. Why, then, have these problems not been solved yet? Clearly, we first need some understanding of the mind, of consciousness, to make progress. But consciousness itself, in particular subjective first-person experiences like pain, lust and longing, is extremely elusive, and no theory of consciousness has yet seen the light of day.

The mind-body problem concerns how the mental capacities of the mind can arise from, and interact with, purely physical matter in the brain. It also relates to religion, since religion typically assumes a dualistic view in which the mind is distinct from matter.

Interestingly, the mind-body problem is of even greater interest today. As we move irreversibly into an era of artificial intelligence, we wonder whether the robots we construct will have cognitive capacities equal, or superior, to our own.

Consciousness cannot be understood

In my research, I have found that a reductionist theory of subjective consciousness cannot be constructed. What is it, then, that prevents neurobiology from explaining the high-level conscious and subjective experiences that we all know to be real, from the low-level activity among the brain’s roughly 80 billion neurons?

One should not in any way diminish the immense historical efforts by philosophers to explain consciousness and to assess whether the will is free. However, to reach my conclusions, it was necessary to employ recent advances in neurophysiology, information theory applied to the quantum-mechanical description of reality, nonlinear dynamics and the concept of emergence.

Subjective, first-person conscious experiences cannot be reduced to the low-level neural phenomena on which they supervene, since consciousness is ontologically emergent. This means that features of consciousness, such as thoughts and feelings, are “more complex than the sum of their constituents”. Let me illustrate with the thought experiment “The Jumping Robot”. Assume that a number of robots of a particular kind are deployed on an isolated island. All robots are programmed to walk freely around the island and perform certain tasks. If a robot becomes more efficient by performing a certain action, it should ‘memorise’ the action and ‘teach’ it to the other robots, as in the sketch below.
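As a deliberately simplistic reading of this set-up, the following Python sketch shows the learn-and-teach loop the thought experiment describes. The action repertoire, class names and the efficiency measure are illustrative assumptions, not taken from the article: each robot tries actions, keeps those that improve its efficiency, and broadcasts them to the others.

```python
import random

# Toy sketch of the island protocol described above. All names, the
# action repertoire and the efficiency measure are illustrative
# assumptions, not from the article.

ACTIONS = ["walk", "turn", "lift", "carry"]   # pre-programmed repertoire

class Robot:
    def __init__(self) -> None:
        self.known = set(ACTIONS)      # actions this robot can perform
        self.efficiency = 1.0

    def try_action(self, action: str) -> bool:
        """Attempt an action; return True if it improved efficiency."""
        gain = random.uniform(-0.1, 0.1)   # stand-in for real-world feedback
        if gain > 0:
            self.efficiency += gain
            return True                    # 'memorise' the beneficial action
        return False

    def teach(self, others: list["Robot"], action: str) -> None:
        """Broadcast a beneficial action to the other robots."""
        for other in others:
            other.known.add(action)

robots = [Robot() for _ in range(10)]
for robot in robots:
    action = random.choice(sorted(robot.known))
    if robot.try_action(action):           # ... and 'teach' the rest
        robot.teach([r for r in robots if r is not robot], action)
```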

Assume now that the robots’ movements are partially initiated by sequences of numbers generated from so-called discrete logistic equations. Algorithmic information theory says that, although the numbers are generated deterministically, there is no way of predicting them beforehand. As the sequence develops, these numbers eventually become emergent. At this point, quantum mechanics shows that not even a computer the size of the universe could predict them; they appear as a “surprise” to nature. Assume also that we want the robots to be able to jump without falling. The complexity involved is substantial: the robot consists of a large number of joints, muscles and other body parts that must be coordinated. The coordination must also take into account that part of the robot’s actions are related to emergent numbers generated internally by the logistic equation. After numerous unsuccessful attempts, including numerical modelling on high-performance computers, the task is given up; the robots cannot be taught to jump.
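To see why even deterministically generated number sequences can be unpredictable in practice, here is a minimal sketch of the discrete logistic map. The growth rate r = 4, the seed values and the step counts are illustrative assumptions (the article does not specify parameters), and the demo shows only the sensitivity aspect; the article’s stronger claims rest on algorithmic information theory and quantum mechanics.

```python
# Minimal sketch of a "discrete logistic equation": the logistic map
#   x_{n+1} = r * x_n * (1 - x_n)
# For r = 4 the map is chaotic: two trajectories that start a tiny
# distance apart diverge exponentially, so the sequence cannot be
# predicted far ahead from an imprecisely known state.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)   # the robot's "true" internal state
b = logistic_trajectory(0.400000001)   # the same state, known to 9 decimals

for n in (0, 10, 20, 30, 40, 50):
    print(f"n = {n:2d}   |a - b| = {abs(a[n] - b[n]):.6f}")
# Within a few dozen iterations the two trajectories are completely
# decorrelated, although every step is strictly deterministic.
```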

Having left the robots on the island to themselves for some time, we return. To our surprise, we now find that some robots also make their way by jumping over obstacles. However, no theory can predict this emergent high-level behaviour. There is no magic involved; rather, we see similarities to random mutations in the genome of an individual organism producing improved characteristics through evolution. The reasoning behind The Jumping Robot can be transferred to the neural processes of the brain [2]. Consciousness, like jumping for the robot, is too complex to be understood at a low level.

Free will – a consequence of the mind as an ontologically open system

If, on the other hand, a detailed theory of consciousness could be designed, then its behaviour would in principle be computable, or could be simulated deterministically. There would thus be little room for free will.

To provide a scientifically based, and testable, answer to the free-will problem, the common but vague characterisation “to be able to act differently” needs improvement [3]. Consciousness needs to be “ontologically open”. By this we mean a causal high-level system whose future cannot, even in an a posteriori sense, be reduced to the states of its associated low-level systems, not even if the system is rendered physically closed. The independence conferred by emergence alone is not sufficient to grant conscious processes the required degree of independence; a schematic formalisation of openness is sketched below.
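As a schematic illustration only (this formalisation is an editorial gloss, not notation from the article), the openness condition can be written as the non-existence of any derivation from complete low-level states to future high-level states:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Editorial gloss, not notation from the article:
% $L_t$ = complete low-level (physical) state at time $t$,
% $H_t$ = high-level (conscious) state supervening on $L_t$.
Reducibility, even in an a posteriori sense, would require some derivation $f$:
\[
  \exists f \;\; \forall t' > t :\quad H_{t'} = f(L_t).
\]
Ontological openness is the negation of this claim, holding even when
the system is rendered physically closed:
\[
  \neg\,\exists f \;\; \forall t' > t :\quad H_{t'} = f(L_t).
\]
\end{document}
```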

Ontological openness leads to downward causation: there are irreducible processes of the mind that are not controlled by low-level neural activity. This enables us to make choices that we, from a low-level perspective, are not forced to make. Does this sound strange? How can we possibly free ourselves from the determinism that science tells us prevails at the low level? This can be understood by returning to The Jumping Robot. If we attempted to predict what a particular robot on the island would do next, even with access to all information about the robot’s present state, we would probably fail badly, since jumping is not in our model.

This epistemologically emergent property evades any theoretical or numerical modelling. A sufficiently complex robot would also have ontologically emergent features, in which case there is no one-to-one relation between its downwardly caused actions and the bits and pieces it is made of. But wouldn’t this robot then also have free will? Well, there are a few other conditions that need to be satisfied; I can only refer you to [2, 3] for the details.

In summary, we have found that a reductionist scientific understanding of subjective conscious processes is not possible; emergence stands in the way. Ontological emergence, in turn, leads to ontological openness, which enables downward causation and free will. Because of the proposed downward causation, the theory can be tested experimentally. With these tools, we may now move on to explore some truly interesting implications: for example, robot ethics, the position of dualism and the relation between science and religion.

References

[1] Richard Dawkins, “The Selfish Gene”, Oxford University Press, 1976.

[2] Jan Scheffel, “On the Solvability of the Mind-Body Problem”, Axiomathes 30 (2020): 289–312.

[3] Jan Scheffel, “Free Will of an Ontologically Open Mind”, PhilPapers (2020), https://philpapers.org/rec/SCHFWO-4.



Contributor Profile

Jan Scheffel
Professor
Division of Fusion Plasma Physics, KTH Royal Institute of Technology, Stockholm, Sweden
Phone: +46 8790 8939
Email: jans@kth.se

2 COMMENTS

  1. Great post. However, I don’t quite agree with the conclusion that no theory can predict the robots’ emergent behavior. I don’t think the right question is how likely it is that the robots’ complexity would arise from the initial conditions they were given to work with, but rather whether it can. Very, very unlikely events are almost impossible to model and predict because, from an algorithmic information-theoretic viewpoint, finding a representation or model that is both shorter than and descriptive of such an event or state is extremely hard.

    So I think it’s better to ask whether it is possible that the robots learn to jump given their initial state, before asking how probable it is that they do. The answer to the first question tells you whether a high-level system can be reduced to its associated low-level components, while the answer to the second gives you a measure of the complexity of a model of that high-level system.

  2. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata, created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
