In a previous post, Is Artificial Intelligence an Existential Threat to Humanity? Making sense of artificial consciousness, I began looking at the claim that artificial intelligence is a threat to humanity. That claim, I realized, assumes that artificially intelligent objects will eventually become autonomous, conscious, intelligent agents able to rival humans and act on their own reasons. I argued that this assumption is flawed, drawing on Frank Jackson’s thought experiment about the problem of qualia and consciousness. In this post I look, albeit indirectly, at what contemporary philosophy of mind calls the problem of intentionality, and at the hurdles it raises for artificial intelligence and materialism.

John Searle devised the Chinese Room thought experiment to demonstrate that syntax is distinct from semantics; that following a formal program is not the same as understanding the content of that program. He asks us to imagine ourselves, unable to speak or understand Chinese, locked inside a room with a box of Chinese symbols and a set of instructions written in English explaining how to match the symbols. Suppose that outside the room are Chinese speakers who pass in a certain set of symbols, and our job is to pass out symbols matched to their input using the instructions. Imagine that I get so good at matching inputs with outputs that, to a person outside, it appears they are speaking to someone who understands Chinese perfectly. According to the Turing Test, then, I would be considered intelligent, because an expert Chinese speaker can converse with me in Chinese.

Something has clearly gone wrong here, because I simply do not understand Chinese: I do not understand what the symbols mean or what their arrangements mean. I did not even understand that the input symbols were questions and that my output symbols were answers to them. And yet, according to the Turing Test, I am intelligent. We could imagine that there is no one in the room at all, just a computer (like Siri) programmed to produce certain output sounds based on input sounds: if the input is sound X, then the output is sound X1. The program will run perfectly and pass the Turing Test, and yet it has no understanding of the meaning of the content. Searle concludes,
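The room’s procedure can be sketched as a simple lookup table. The symbol strings below are invented placeholders, not real Chinese; the point is that the program operates purely on the shape of its inputs, never their meaning.

```python
# A hypothetical rulebook: input symbol strings mapped to output symbol
# strings. The entries are invented placeholders; the program would
# behave identically if the keys were arbitrary opaque tokens.
RULEBOOK = {
    "symbol-A": "symbol-B",
    "symbol-C": "symbol-D",
}

def chinese_room(input_symbols: str) -> str:
    """Match input to output purely by form (syntax), with no
    interpretation of what either string means (semantics)."""
    return RULEBOOK.get(input_symbols, "symbol-unknown")

print(chinese_room("symbol-A"))  # prints symbol-B
```

Nothing in the lookup requires, or even permits, access to what the symbols are about; the mapping is all there is.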

What this simple argument shows is that no formal program by itself is sufficient for understanding, because it would always be possible in principle for an agent to go through the steps in the program and still not have the relevant understanding.

The presence of an agent carrying out a formal program does not necessarily mean that the agent understands the content and meaning of the program.

However, I think Searle does not go far enough in his critique, as his argument rests on a number of presuppositions that are themselves problematic. Fortunately, in the article Is the Brain a Digital Computer?, he addresses the concerns I have.
The question we could pose is: do computers follow instructions? In the case of the man who does not know Chinese but was able to follow instructions to simulate speaking it, (1) he still had to understand and follow the instructions he was given; and (2) he had to be able to recognize given symbols and match them with certain other symbols. In the case of a computer, does it follow instructions the same way a person does? Does it understand the instructions given to it? Does it recognize symbols?
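To make the contrast concrete, here is a minimal sketch (the opcodes and their effects are invented for illustration) of what “following instructions” amounts to at the machine level: each instruction is an opaque number that merely triggers a state transition. Calling opcode 1 “increment” is our interpretation, not the machine’s.

```python
def run(program, value=0):
    """A toy machine: apply each opcode to the value in turn.
    The machine never 'understands' an instruction; an opcode is
    just a number causally wired to a state change."""
    ops = {
        1: lambda x: x + 1,  # we describe this as "increment"
        2: lambda x: x * 2,  # we describe this as "double"
    }
    for opcode in program:
        value = ops[opcode](value)
    return value

print(run([1, 1, 2]))  # prints 4
```

The man in the room grasps what the English instructions ask of him; the toy machine simply undergoes whichever transition each number is wired to.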

Money provides a good illustration of this general principle that symbolic meaning is not a physical property: it is neither derived from nor intrinsic to the physics. Whether an object is a kind of money is not determined by its physical properties. Rather, whether an object is a kind of money is relative to us humans, which is why metal, coins, cigarettes, gold, silver, and paper can all function as money. There is nothing in the physical properties of the object that determines that it has this symbolic property of being money. The symbolic meaning is imposed on the physical object.

When you are at a traffic light and it turns red, that means you must stop. The meaning of the red traffic light is derived and imposed on it rather than being intrinsic to the red light: it is derived because we humans decided it would be that way, not because of the physical properties of the light. We could just as easily have chosen a red light to mean go or slow down. The meaning of the traffic light is not intrinsic to its physical nature but is derived and relative to us humans. Physical objects have no intrinsic meaning as a property.

The same applies to symbols: they have no intrinsic meaning; their meaning is derived and relative to us humans. This is why a single set of symbols, such as the word “run”, can mean different things. The meaning is not determined by the symbols but is imposed on them by us. What this tells us is that meaning in general, and symbolic meaning in particular, can neither be reduced to nor determined by physical properties, or even by the symbols themselves. No physical object can possess intrinsic meaning; meaning is relative to a human agent.

Intentionality is a broad phenomenon describing a feature that minds possess: the idea that mental states have “aboutness”, that they can be about other states of affairs and objects. We can have thoughts about yesterday, or the future, or atoms and cats. Meaning, content, and representation are particular instances of this general mental phenomenon. The Stanford Encyclopedia of Philosophy defines it as follows: “Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs”. The Maverick Philosopher, in his post Intentionality, Potentiality, and Dispositionality: Some Points of Analogy, gives a wonderful introductory description of the term:

 ‘Intentionality’ is Brentano’s term of art (borrowed from the Medievals) for that property of mental states whereby they are (non-derivatively) of, or about, or directed to, an object. Such states are intrinsically such that they ‘take an accusative.’ The state of perceiving, for example is necessarily object-directed. One cannot just perceive; if one perceives, then one perceives something. The idea is not merely that when one perceives one perceives something or other; the idea is that when one perceives, one perceives some specific object, the very object of that very act. The same goes for intending (in the narrow sense), believing, imagining, recollecting, wishing, willing, desiring, loving, hating, judging, knowing, etc. Such mental states refer beyond themselves to objects that may or may not exist, or may or may not be true in the case of propositional objects. Reference to an object is thus an intrinsic feature of mental states and not a feature they have in virtue of a relation to an existing object. 

What does this mean for computers then?

A computer is a machine that computes: it carries out logical operations, algorithms, and calculations. The binary system is an integral feature of digital computers; it is a language which uses the binary symbols 0 and 1 to represent information. Recall Searle’s argument that no physical part is intrinsically a symbol with meaning; the meaning is always relative to a human. This means the binary system is a binary system precisely relative to humans: we assign it to, and impose it on, physical objects and processes. For example, we can stipulate that 10 volts in an electronic circuit represents 1 and 5 volts represents 0. The meaning is not determined by the properties of the electrons. As Searle eloquently puts it:
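The point can be illustrated with a short example: one and the same bit pattern “represents” different things depending entirely on the reading we bring to it.

```python
# The same physical bit pattern has no intrinsic meaning; what it
# represents depends on the interpretation we impose on it.
bits = 0b01000001

as_integer = bits               # read as an unsigned integer
as_character = chr(bits)        # read as an ASCII character code
as_flags = format(bits, "08b")  # read as eight on/off switches

print(as_integer)    # 65
print(as_character)  # A
print(as_flags)      # 01000001
```

Nothing about the voltages in a register settles whether they are a number, a letter, or a row of switches; the hardware is the same in every case.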

“…The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena which an observer can interpret as symbols with a syntax and a semantics”

A computer is a physical object made up of physical parts which have no intrinsic symbols or meaning. Therefore a computer could never possess intrinsic symbols and meaning, and so could never possess a mental state, which is the hallmark of agency and intelligence; a mental state intrinsically possesses meaning and content. Concerns that artificial intelligence will soon surpass human intelligence therefore rest on deeply flawed assumptions. The ability of a calculator to perform arithmetic operations with greater accuracy and speed than any human is relative to us humans. The intelligence of a calculator is relative to us humans. The calculator in itself has no understanding or knowledge of its functions and “intelligence”, any more than a light bulb knows that its function is giving light. A computer is not following instructions any more than a light bulb is following instructions when it gives light. A computer simply possesses no intentionality and therefore no true mental state.

Computers are simply no threat; the greatest existential threat to humanity remains humanity itself.