The idea that machines or supercomputers will become so intelligent that they eventually surpass human beings and ultimately enslave us is one I have heard often in conversation and seen popularized in quite a few movies. Just think of Terminator. It always makes for a great movie plot, I must admit, but I find the idea problematic. In a Scientific American article, Artificial Intelligence Is Not a Threat–Yet, the author quotes Elon Musk and Bill Gates expressing concern that superintelligence poses a bigger threat than nuclear weapons. Stephen Hawking is quoted in the same article saying, "The development of full artificial intelligence could spell the end of the human race." It is often unclear to me what justifies the idea that computers and artificial intelligence (AI) pose an existential and apocalyptic threat to humanity. My guess is that it involves more than the fact that they can out-compute us and solve certain problems better than we can. Think of how IBM's Deep Blue beat world champion Garry Kasparov at chess, or how Google's AlphaGo beat the world's top Go player. If that is all there is to the problem of artificial intelligence dominating humanity, then it is an odd one. For starters, it is a basic fact of technological progress that machines were invented precisely because they could do certain tasks more efficiently than humans. Even a calculator can add numbers with far greater accuracy than a human, yet for some reason we don't think our calculators will be revolting and enslaving us some day, so why think supercomputers will? I suspect it has to do with the implicit assumption that artificially intelligent systems will eventually become autonomous, conscious, intelligent agents acting on their own reasons and desires.
I do think we have nothing to worry about in terms of a machine uprising or revolution – AI is simply not intelligence in the strong sense. Contemporary philosophy of mind divides the features of mental activity roughly into three categories:
1) Qualia – this concerns our qualitative, sensory and perceptual experience: our capacity to taste, smell, hear, see and touch. It encompasses consciousness, awareness and subjectivity; the sense that we are selves or subjects distinct from objects.
2) Intentionality – our mental states have content or "aboutness"; they are about other things. We can have thoughts about other objects, whether concrete or abstract: thoughts about yesterday, our spouses, a favourite song, and so on. Our mental states, such as thoughts, have content and meaning.
3) Rationality – the capacity to grasp concepts, which are abstract, universal and determinate, as well as the capacity to form judgments and combine them logically.
These three categories pose a tremendous challenge to materialism (the idea that all that exists is physical or is reducible to physical mechanisms) and, for the same reasons, pose a challenge to artificial intelligence (machines made from physical components) being a true display of human intelligence.
The modern conception of what is physical is inherited from Descartes and is based on a mechanistic view of nature. The world is at bottom made of physical particles such as atoms or quarks, which serve as the basic building blocks from which all other objects emerge. The key traits of the physical are that it must occupy space, having some length, width and height, and that it should be detectable with our senses: it can be touched, seen, tasted and so on. We can add to that conception other physical features being discovered by the hard sciences of physics and chemistry, such as charge, mass, spin and electronegativity.
In this post I will discuss only the problem of qualia and consciousness, and how it poses a problem both for materialism and for claims that artificial intelligence will surpass human intelligence.
Frank Jackson, in his essay Epiphenomenal Qualia, asks us to imagine Mary, an expert neurophysiologist kept in a black-and-white room all her life, dedicated to understanding everything there is to know about the physics and neurophysiology of vision. She knows everything there is to be known about colour, light, frequency and wavelength. She has all the physical facts about colour, how it behaves when it strikes the eye, and the cellular mechanisms involved in producing vision. If Mary were released from her black-and-white room into the world of full colour, filled with red tomatoes and apples, would she learn something new? Does her knowledge of the physical facts about the colour red mean that she knows what red looks like? The answer is no: she has no idea what redness looks like, even though she has all the physical facts about colour. She has never experienced redness in her vision. The same argument could be made about ice cream: someone who has never tasted ice cream, yet knows all the physical facts about its chemical composition, density and melting point, would not know what ice cream tastes like – what it is like. The point, therefore, is that the physical, objective facts cannot and do not determine qualia and consciousness. Qualia cannot be reduced to physical mechanisms. Complete knowledge of the physical mechanisms of colours like red, and of vision, does not give us knowledge of what redness actually looks like. In other words, there is more in the world than just the physical interactions of parts. A complete physical description of the world would leave out knowledge of how things feel, look and smell, and therefore materialism is false.

Figure 1: A. G. Petzoldt, "Nanocolumns at the heart of the synapse," Nature 536, 151–152.
The same problem applies to the brain itself, which is composed of complex physical parts such as neurons. Indeed, we know there is a correlation between certain brain states and mental states; however, it remains a complete mystery how physical brain states could produce mental states of perception, qualia and consciousness. Figure 1 above describes the mechanisms involved when a synapse transmits a signal between neurons in the brain. It is clear that an exhaustive description of the neurotransmitter molecules, the scaffold proteins and their chemical interactions across nanocolumns would leave us completely clueless about which particular mental state, perception or quale it actually is, and what it feels like. The conclusion we are led to, then, is that the brain itself, being a physical object, cannot determine or cause consciousness.
Physical objects, as conceived within the modern mechanistic view of nature, simply cannot determine or cause qualia and consciousness. The implication for artificial intelligence and supercomputers, which are made of physical parts, is that they cannot ever produce subjectivity, consciousness and qualia, because those phenomena are irreducible to physical parts and mechanisms. If supercomputers cannot possibly attain qualia and consciousness, then clearly we do not have to worry about them being truly intelligent, since the first step to being an autonomous intelligent agent is possessing consciousness, qualia and subjectivity – the hallmarks of autonomy.
For further reading, see my essay on the mind-body problem and its origins in the philosophy of René Descartes: The Cartesian ghost in the machine: The origins of the mind-body problem
What an excellent post! Thank you so much. As I was reading, a couple of things came to mind.
1) I agree that AI will never be able to experience things as humans do. However, I think that's a separate issue from the threat of AI to humanity. I could easily see computers becoming so advanced that they pose a threat to human life (and even to our species), if we're not already at that stage.
2) Totally agree with your points about materialism. I have often argued that looking at matter (e.g. the brain) does not hold the answer for helping us to understand experience. I believe that the spiritual aspect of experience is often neglected by scientists.
Looking forward to more writing from you on these matters.
Best wishes,
Steven
Thanks for the generous comments Steven. In what specific way do you see AI threatening human life?
I find this topic a very interesting one, filled with all sorts of nuances. I can understand why a brain scientist would exclude experience from their enquiry; science necessarily excludes certain features and focuses on others. The problem, however, lies in thinking that's all there is to reality – only what science can describe. Indeed, experience cannot be explained in mechanistic terms.
I see the threat, for instance, in automated weapons systems. If we are already moving swiftly towards fully automated vehicles, it's not difficult to see how technical errors could lead to massive disasters in the area of, for instance, nuclear weapons. I'm no expert, but I am worried about this, and I believe those who work in these areas are going to have to be incredibly careful.
Agreed, as Spider-Man’s uncle famously said, “with great power comes great responsibility”!