Can Computers Think? – 04/04/2016

Alan Turing opened his 1950 paper “Computing Machinery and Intelligence” with the question ‘Can machines think?’ only to quickly replace it with the question ‘Are there imaginable digital computers which would do well in the imitation game?’ His reasons for replacing the former question with the latter are understandable: ‘thinking’ is hard to define. Turing deemed it impossible to establish whether machines can really think, so he considered the latter question not only less ambiguous, but also – as opposed to the former – answerable. Turing’s answer is ‘yes’: there are imaginable computers that could imitate human behaviour to such an extent that a human would be unable to distinguish them from a real human. To test whether a computer meets this criterion, Turing devised a test. In the Turing Test, a human interrogator puts questions to unseen interlocutors and tries to establish which of them is a machine and which a human. A computer that the interrogator cannot reliably tell apart from a human passes the test, and can be said to imitate human intelligence successfully.
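To make the structure of the imitation game concrete, here is a minimal sketch in Python. Everything in it – the function names, the idea of passing in `interrogate`, `human_reply` and `machine_reply` as callbacks, the single fixed round of questions – is my own illustrative assumption, not anything from Turing’s paper; it only shows the shape of the protocol.

```python
import random

def imitation_game(interrogate, human_reply, machine_reply, questions):
    """One round of a Turing-style imitation game (illustrative sketch).

    Two hidden respondents answer the same questions; the interrogator
    sees only the labelled transcripts and must guess which label
    hides the machine.
    """
    # Hide the identities behind anonymous labels A and B.
    labels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        labels = {"A": machine_reply, "B": human_reply}

    # Collect each respondent's answers to every question.
    transcripts = {label: [reply(q) for q in questions]
                   for label, reply in labels.items()}

    guess = interrogate(questions, transcripts)  # returns "A" or "B"
    return labels[guess] is machine_reply        # did the guess find the machine?
```

Over many rounds, a machine ‘does well in the game’ when the interrogator’s guesses are right no more often than chance.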

In the film Blade Runner (1982), the blade runner Deckard interrogates Rachael using a version of the Turing Test (the film’s Voight-Kampff test) and concludes that she is a replicant: a machine and not a human.

You can do the Blade Runner test yourself here.

But can a machine think?

Let’s say Turing is correct, that it is imaginable to build a computer that can imitate human intelligence in such a way that it is indistinguishable from real human intelligence. Can we then conclude that the computer indeed thinks?

One difficulty in answering this question is: what do we mean by ‘thinking’? What do we need for genuine thinking to occur? A mind? Consciousness? Understanding? Note that these are not the same things. I’m not always conscious of what happens in my mind, and when I am conscious of things, it may well be that I don’t understand anything. According to philosopher John Searle, it is understanding that we’re after. Let’s follow Searle in this respect, because we’re not asking whether a machine can feel or experience (however interesting those questions are), but more specifically whether it can think, whether it can be said to have intelligence. For something to be intelligent, it must be able to understand something. So let’s take ‘thinking’ to mean ‘understanding’, for current purposes.

Searle, in his 1980 paper “Minds, Brains, and Programs”, argues that Turing has failed to show that machines can think. He goes a step further and argues that no computer can think merely by running a program. To make his point, he develops what has become known as the Chinese Room thought experiment.

Imagine you speak English but understand no Chinese whatsoever. However, you have the following job: you are placed in a room where, through a gap in the wall, Chinese people from outside the room hand you cards with Chinese writing on them. Using an elaborate sorting system in the room, you can correlate the cards with other Chinese symbols in such a way that you are able to produce a card of your own and give it back to the people outside. The sorting system is so elaborate that you are guaranteed to produce cogent responses to the questions on the cards you were given, without having to understand Chinese. The responses you produce are indistinguishable from those a fluent Chinese speaker would produce. The people outside think you are fluent in Chinese and understand the questions they give you. But in reality you don’t understand Chinese at all; you can only simulate an understanding of it.

According to Searle, computers work in much the same way. A computer is an elaborate system that produces certain outputs when given certain inputs, and in doing so it appears to an outside observer to be intelligent, but there is no genuine understanding going on inside the machine.
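As a toy illustration of Searle’s point, consider a ‘Chinese Room’ written as a program. The rulebook entries and the default reply below are invented for illustration, and a system that could actually pass the test would need vastly more rules than a lookup table, but the philosophical structure is the same: the output is produced by matching symbol shapes, not by understanding them.

```python
# A toy 'Chinese Room': replies are produced by rule-following alone.
# The rulebook below is a made-up illustration, not real dialogue data.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(card: str) -> str:
    """Return a reply card by pure symbol manipulation.

    Nothing here 'understands' the symbols: the function only matches
    the shape of the incoming card against the rulebook, exactly like
    the person in Searle's room.
    """
    return RULEBOOK.get(card, "请再说一遍。")  # "Please say that again."
```

To the people outside, `chinese_room` may look fluent; inside, there is only lookup.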

Searle’s argument is directed against a philosophical position known as functionalism: the view that mental states are defined by their functional roles, that is, by how they relate to inputs from the world, to behaviour, and to other mental states. Following the saying ‘if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck’, functionalists argue that if something talks and behaves like an intelligent mind, then it probably is an intelligent mind.

Functionalists argue that the hardware is irrelevant to the question whether a machine – or, indeed, a human – can think. Whether your brain is made of organic cells or of chips and wires is unimportant. What matters is the software. If the mental processes of a human mind – note: mind, not brain – can be mimicked with computer software, then the conditions for consciousness and understanding are met, says the functionalist.

Searle, however, does not accept this. He advocates biological naturalism: there is something specific to the biology of the brain that gives rise to consciousness – a requirement for understanding – and that cannot be reproduced in machines, even if every other behavioural mark of intelligence can be.

Other minds

But how do I know that you have a mind, or that you are conscious or intelligent? All you can offer is behaviour. Your consciousness is immediately available only to yourself. How do I know I’m not the only conscious person in existence, interacting with machines (organic or otherwise) that behave in a very sophisticated manner? All I have that might lead me to attribute consciousness to other persons is their behaviour. And if behaviour is sufficient to attribute consciousness to other humans, animals, or extraterrestrials, then why not machines? Perhaps, after all, the Turing Test is sufficient to conclude that a machine can think?

Creativity, evolution, socialisation

Suppose we follow the argument further and accept that some version of a Turing Test might be sufficient to determine that a machine can indeed think (in the sense of ‘understand’). Is it imaginable that we could actually produce such a machine? Of course, Artificial Intelligence is increasingly clever and can do things human minds are incapable of. There is no end in sight for technological advancement, but that doesn’t mean there are no limits. How does one code creativity in a piece of software? ‘Creativity’ is an even vaguer concept than ‘thinking’, but one could argue it is necessary for human-like intelligence. Creativity relies on wild and unusual connections, imaginative leaps, and associations, many of which are learned through experience. Humans aren’t born with intelligence; they aren’t born with the ability to think. They acquire and develop that ability over time, and they need other humans to do so: we need socialisation for our mental abilities. Even if it is imaginable that a machine can have a mind, is it imaginable that we can create a machine that develops an intelligence over time, through socialisation with humans, or indeed with other machines?

According to Prof. David Deutsch, AI research is doomed to stagnate if it doesn’t face the inevitable philosophical questions it raises (see this article), something that AI departments have not always acknowledged. However, Aarhus University in Denmark is an exception to that rule with Pensor, its robophilosophy project on social robotics.

A final thought experiment

I have a confession to make. Last night, I sneaked into a student’s house. While he was asleep, I anaesthetised him, downloaded all the information (memories, patterns, habits, thoughts…) stored in his brain, and converted it to code. I then uploaded the code onto a computer chip and placed it, with a long-lasting battery, in his skull, along with some adapters to connect it efficiently to what remained of his central nervous system. I destroyed the brain, because my student will have no use for it any more. This morning, he woke up and went about his ordinary life. He’ll come to my philosophy class and contribute to our discussion as usual. Is he aware that he is doing so? And if so, do you think he has noticed any change? And do I still owe him respect and consideration?

Resources

A chapter from Stephen Law’s Philosophy Gym presents a philosophical dialogue between a robot and a human. Who do you agree with?

If you duplicate yourself on the 11th of April, you can send one of you to the marvelous Science Fiction Theater evening in Dalston: a screening of the film Robot & Frank (2012), including a talk by a very intelligent academic!

Free will – 11/01/2016

Do we have free will?

Ali is a free man, or so he thinks. What he doesn’t know is that the evil Professor Klatz has planted a chip in his brain. The chip allows Professor Klatz to exercise full control over all of Ali’s decisions. Ali is unaware of this: he experiences his decision making as if he were making the decisions himself. Ali is free to go wherever he wants and do whatever he chooses to do. Nobody, not even the professor, curbs Ali’s freedom of action. It’s just that the professor controls Ali’s brain in such a way that when Ali is faced with the choice between porridge and toast for breakfast, the professor makes him choose porridge rather than toast, regardless of whether Ali would have chosen the same thing if he hadn’t been manipulated.

Meanwhile, Babs is imprisoned in a small cell. She is physically constrained in her freedom of action: she can’t go anywhere and she can’t do many of the things she would choose to do, but this doesn’t stop her fantasising about what she would do if she weren’t confined in a prison cell. All to no avail: she can’t even arrange her own breakfast. This morning, the prison guard asked her: “What would you like for breakfast, Babs? Porridge or toast?” Babs considers both options and chooses porridge. There is no chip in Babs’ brain and no evil professor controls her decisions; the choice is hers. Yet it has no influence whatsoever on the actions of the guard: he is just teasing her, and he has already made her toast.

Who has free will, Ali or Babs?

A: Babs has free will, but Ali hasn’t. The will is free if you can choose between more than one option and nothing but you determines which one you choose. The ability to choose makes the will free. The fact that the physical world Babs lives in makes it impossible for her to act on those choices is irrelevant.

B: Both Babs and Ali have free will. The will is always free; whether you can cause your own decisions has nothing to do with it.

C: Neither Babs nor Ali has free will. Even Professor Klatz has no free will. Free will is an illusion.

D: Ali has free will, but Babs hasn’t (or at the very least the freedom of her will is severely limited). Regardless of how a decision is caused, one can only truly speak of freedom of the will if the decision, once it is made, can be acted upon. Babs’ fantasies are not real exercises of a free will.


Philosophers have distinguished freedom of the will from freedom of action for centuries. But do we have a free will? How we answer that question depends on what we understand by freedom of the will. Furthermore, however we answer this question, it will have consequences for how we think about morality, personal identity and a wide range of other philosophical, legal and psychological questions.

Resources

A lecture by Daniel Dennett on free will, in which he uses this Dilbert comic strip. He disagrees with Sam Harris, who gives a lecture presenting his side of the argument. But these are just two of many possible positions on the matter.

This ‘Mind over Masters’ debate between a philosopher, a neuroscientist, a psychologist and a developmental psychologist shows that there are still quite a few conceptual misunderstandings between the interlocutors! (Is the developmental psychologist talking about the same kind of freedom as the philosopher?) But it also shows that a belief in a free will has real, immediate implications beyond the realm of philosophy.