Can Computers Think? – 04/04/2016

Alan Turing opened his 1950 paper “Computing Machinery and Intelligence” with the question ‘Can machines think?’, only to quickly replace it with the question ‘Are there imaginable digital computers which would do well in the imitation game?’ His reasons for replacing the former question with the latter are understandable: ‘thinking’ is hard to define. Turing deemed it impossible to establish whether machines can really think, so he considered the latter question not only less ambiguous, but also – as opposed to the former – answerable. Turing’s answer is ‘yes’: there are imaginable computers which would be able to imitate human behaviour to such an extent that a human would be unable to distinguish them from a real human. To test whether a computer meets this criterion, Turing devised what is now known as the Turing Test: a human interrogator puts questions to both a machine and a human without knowing which is which; if the interrogator cannot reliably tell them apart, the machine passes. A computer that passes the test can be said to imitate human intelligence successfully.

In the film Blade Runner (1982), the blade runner Deckard interrogates Rachael using the Voight-Kampff test, a version of the Turing Test, and concludes that she is a replicant: a machine and not a human.

You can do the Blade Runner test yourself here.

But can a machine think?

Let’s say Turing is correct, that it is imaginable to build a computer that can imitate human intelligence in such a way that it is indistinguishable from real human intelligence. Can we then conclude that the computer indeed thinks?

One difficulty in answering this question is: what do we mean by ‘thinking’? What do we need for genuine thinking to occur? A mind? Consciousness? Understanding? Note that these are not the same things. I’m not always conscious of what happens in my mind, and when I am conscious of things, it may well be that I don’t understand anything. According to philosopher John Searle it is understanding that we’re after. Let’s follow Searle in this respect, because we’re not asking whether a machine can feel or experience (however interesting these questions are), but more specifically, whether it can think, whether it can be said to have intelligence. For something to be intelligent, it must be able to understand something. So let’s take ‘thinking’ to mean ‘understanding’, for current purposes.

Searle, in his 1980 paper “Minds, Brains, and Programs”, argues that Turing has failed to prove that machines can think. He goes a step further and argues that machines cannot think. To make his point, he develops a thought experiment known as the Chinese Room thought experiment.

Imagine you speak English but understand no Mandarin whatsoever. However, you have the following job: you are placed in a room where, through a gap in the wall, Chinese people from outside the room hand you cards with Mandarin writing on them. Using an elaborate sorting system in the room, you can correlate the cards with other Mandarin symbols in such a way that you are able to produce a card of your own and hand it back to the people outside. The sorting system is so elaborate that you are guaranteed to produce cogent responses to the questions on the cards given to you, without having to understand Mandarin. The responses you produce are indistinguishable from those a fluent Mandarin speaker would produce. The people outside think you are fluent in Mandarin and understand the questions they give you. But in reality you don’t understand Mandarin at all; you can only simulate an understanding of it.

According to Searle, computers work in much the same way. A computer is an elaborate system that produces certain outputs when it is given certain inputs, and in doing so it functions to an outside observer as an intelligence, but there is no genuine understanding going on in the machine.
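The point can be made concrete with a toy sketch (entirely illustrative: the rule book and its entries are invented for this example, and any symbol-matching system of this shape would make the same point). A lookup table maps input symbols to output symbols and produces sensible-looking replies, yet nowhere in the system is there any understanding.

```python
# A toy "Chinese Room": a rule book that maps input cards to output cards.
# The entries are invented examples for illustration only.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然，没有问题。",  # "Do you speak Chinese?" -> "Of course, no problem."
}

def room_operator(card: str) -> str:
    """Match the incoming card against the rule book and return a reply.

    Pure symbol manipulation: nothing here grasps what the symbols mean.
    """
    return RULE_BOOK.get(card, "对不起，请再说一遍。")  # fallback: "Sorry, please repeat."
```

From outside, replies like these may be indistinguishable from those of a fluent speaker; inside, there is only matching.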

Searle’s argument is directed against a philosophical position known as functionalism: the view that the mind is reducible to how it functions in relation to itself and the outside world. Following the saying ‘if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck’, functionalists argue that if something talks and behaves like an intelligent mind, then it probably is an intelligent mind.

Functionalists argue that the hardware is irrelevant to the question whether a machine – or, indeed, a human – can think. Whether your brain is made of organic cells or of chips and wires is unimportant. What matters is the software. If the mental processes of a human mind – note: mind, not brain – can be mimicked with computer software, then the conditions for consciousness and understanding are met, says the functionalist.

Searle, however, does not accept this. He advocates biological naturalism and argues that there is something specific to the brain which gives rise to consciousness – which is a requirement for understanding – that cannot be produced in machines, even if all other behavioural characteristics of intelligence can be produced.

Other minds

But how do I know that you have a mind, or that you are conscious or intelligent? All you can offer is behaviour. Your consciousness is only immediately available to yourself. How do I know I’m not the only conscious person in existence, interacting with machines (organic or otherwise) that behave in a very sophisticated manner? All I have that might lead me to attribute consciousness to other persons is their behaviour. And if behaviour is sufficient to attribute consciousness to other humans, animals, and extraterrestrials, then why not machines? Perhaps, after all, the Turing Test is sufficient to conclude that a machine can think?

Creativity, evolution, socialisation

Suppose we follow the argument further and accept that some version of a Turing Test might be sufficient to determine that a machine can indeed think (in the sense of ‘understand’). Is it imaginable that we might indeed produce such a machine? Of course, Artificial Intelligence is increasingly clever and can do things human minds are incapable of. There is no end in sight for technological advancement. But that doesn’t mean there are no limits. How does one code creativity in a piece of software? ‘Creativity’ is an even vaguer concept than ‘thinking’, but one would argue that it is needed for human-like intelligence. Creativity relies on wild and unusual connections, imaginings, and associations, many of which are learned through experience. Humans aren’t born with the ability to think; they acquire and develop that ability over time, and they need other humans to do so: we need socialisation for our mental abilities. Even if it is imaginable that a machine can have a mind, is it imaginable that we can create a machine that develops an intelligence over time, through socialisation with humans, or indeed with other machines?

According to Prof. David Deutsch, AI research is doomed to stagnate if it doesn’t face the inevitable philosophical questions it raises (see this article), something that AI departments have not always acknowledged. However, Aarhus University in Denmark is an exception to that rule with its robophilosophy project Pensor about social robotics.

A final thought experiment

I have a confession to make. Last night, I sneaked into a student’s house. While he was asleep, I anesthetised him, downloaded all the information (memories, patterns, habits, thoughts…) that was stored in his brain, and converted it to code. I then uploaded the code onto a computer chip and placed it, with a long-lasting battery, in his skull, along with some adapters to connect it efficiently to what remained of his central nervous system. I destroyed the brain, because my student will have no use for it any more. This morning, he woke up and went about his ordinary life. He’ll come to my philosophy class and will contribute to our discussion as usual. Is he aware that he does so? And if so, do you think he has noticed any change? And do I still owe him respect and consideration?


A chapter from Stephen Law’s Philosophy Gym presents a philosophical dialogue between a robot and a human. Who do you agree with?

If you duplicate yourself on the 11th of April, you can send one of you to the marvelous Science Fiction Theater evening screening the film Robot & Frank (2012) including a talk by a very intelligent academic in Dalston!


Zera Yacob (1599-1692), an Ethiopian philosopher – 21/03/2016

How many sub-Saharan African philosophers from before the 20th century do you know? I know two: Zera Yacob and his student Walda Heywat. Each of them wrote a single treatise of only a couple of dozen pages. It is believed that the written history of sub-Saharan philosophy begins and ends there. Why is this the case? Why is there not more written philosophy around? There are several factors that might explain this, one of them being the dominance of oral history in Africa until colonial times, which means that wisdom was passed on in the form of stories from generation to generation, rather than in the form of academic treatises. Whatever the explanation, Zera Yacob’s treatise is one of the few we’ve got, so if you want to know what African philosophy was about back in the day, this is where you should start.

Zera Yacob lived in Ethiopia in the 17th century, and those of you who are familiar with European early modern philosophy (Descartes, for instance) will notice, when you read Zera Yacob’s treatise, that the philosophical questions he addresses are not particularly new. Like his European and Islamic colleagues, Zera Yacob wondered whether we can know God, and establish God’s existence, by means of reason rather than faith or revelation. As in Europe and the Islamic world, he chose the method of rational introspection over reliance on authority or tradition.

Due to the very short length of the treatise and the familiarity of its philosophical message, Zera Yacob’s work may not be the first place to turn for philosophers looking for depth or original philosophical questions. Yet it is interesting to note that similar philosophical questions were important in Ethiopia, Europe, and the Islamic world simultaneously. This offers the humbling realisation that modernity is not a distinctly Western achievement.

A further interesting characteristic of Zera Yacob’s work is that it is not only a philosophical treatise, but also an autobiography. He reports how he fled from the king after refusing to take sides in a religious conflict between the local Copts and the Jesuit missionaries who had managed to convert King Susenyos. Zera Yacob subsequently lived in exile in a cave for about two years, which he describes as a pretty nice time, away from violent and ignorant people, meditating on God and humanity in welcome solitude. Philosophy, for Zera Yacob, was not an academic interest but a matter of urgency and immediate relevance.

You can read the entire treatise, along with some comments, on this blog.

Prisoner’s Dilemma, Bonnie and Clyde – 18/01/2016

Imagine the following scenario:

You, Bonnie, and your associate Clyde have robbed a bank. You have been arrested and locked up in separate cells, unable to communicate with each other. After a while, the sheriff enters your cell to interrogate you. When it becomes clear that you are not going to confess, the sheriff admits that he has enough evidence to get both of you convicted for possession of the illegal substances he found in your car, but not enough evidence to get either of you convicted for armed robbery. So he offers you the following deal, and informs you that he will offer Clyde the same deal:

1. If you confess and rat on Clyde, but Clyde remains silent, your account will be enough to lock Clyde up for 10 years. In return for this favour, you get your freedom.
2. If you confess and rat on Clyde, and Clyde also confesses and rats on you, both of you get a prison sentence of 7 years.
3. If you remain silent, but Clyde rats on you, you will face 10 years in prison, whilst Clyde goes free.
4. If you both remain silent, all the sheriff has is the illegal substance, which will land both of you a prison sentence of two years.

Then he leaves you to consider your options.

It is immediately clear that the best joint outcome for you and Clyde is for both of you to remain silent.



But then you start to doubt. Although you love Clyde, you are not willing to sacrifice 10 years of your life for him. Knowing him, you suspect that he won’t do that for you, either! What’s more, you don’t trust him one bit. Come to think of it: he’s a self-interested bastard, and cunning too! You decide to take the option that will leave you with the shortest possible prison sentence, assuming that Clyde will do the same.

When the sheriff knocks on your cell door, will you rat or remain silent? How long will each of you spend in prison?

And whatever you choose, what does that tell us about cooperation and rationality?
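If you want to check the arithmetic, the sheriff’s four options can be sketched as a payoff matrix (the move names and the table layout below are just one way of encoding the deal; sentences are in years, and lower is better):

```python
# The sheriff's deal as a payoff matrix: each cell gives
# (your sentence, Clyde's sentence) in years for a pair of moves.

SENTENCES = {
    ("rat", "silent"): (0, 10),      # you confess, Clyde stays silent
    ("rat", "rat"): (7, 7),          # both confess
    ("silent", "rat"): (10, 0),      # you stay silent, Clyde confesses
    ("silent", "silent"): (2, 2),    # both stay silent
}

def best_response(clyde_move: str) -> str:
    """Your move that minimises your own sentence, given Clyde's move."""
    return min(["rat", "silent"], key=lambda me: SENTENCES[(me, clyde_move)][0])

print(best_response("silent"))  # ratting gets you 0 years instead of 2
print(best_response("rat"))     # ratting gets you 7 years instead of 10
```

Whatever Clyde does, ratting leaves you better off; and since Clyde faces the same table, the same logic drives him to rat too. Two individually rational choices land you both in prison for seven years, when mutual silence would have cost only two.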