University of Bristol

Searle’s "Chinese Room" thought experiment

Alan Turing said, in effect, that if something looks conscious, it is conscious. Is this a good way to decide whether a robot should be responsible for its actions?

Even though something may appear to be conscious, it may not be conscious at all. The philosopher John Searle told a story that demonstrates this.

The Chinese Room

Imagine yourself in a small locked room. Through one slot in the wall come pieces of paper with questions written in Chinese, a language you do not speak or understand. Your job is to write answers on pieces of paper, also in Chinese, using a special book of rules. This book is so complete that it tells you exactly which Chinese characters to write in response to any combination of Chinese characters you see on the question sheet. Once you have written the answer, which you still don't understand, you put it through another slot, where a Chinese-speaking person outside receives it.

To the person outside, written questions go in through one slot and perfectly sensible answers come back out of the other. Yet the person inside the room understands neither the questions nor the answers.

Searle argued that this is just what happens in a computer program. A very good program may give answers that make it appear to have a mind, but it could equally be just a very good book of rules, and not conscious at all.
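The "book of rules" can be sketched in a few lines of code. This is only an illustrative toy, not anything from Searle's argument itself: the dictionary entries below are invented question/answer pairs, and a real rule book would be vastly larger, but the point is the same — the program matches symbols to symbols without interpreting either side.

```python
# A toy "book of rules" for the room's occupant: questions are mapped to
# answers purely by symbol lookup. The occupant (this function) never
# understands the Chinese; it only matches shapes on the page.
# The question/answer pairs are invented examples for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "The weather is fine."
}

def room_occupant(question: str) -> str:
    """Return whatever answer the rule book dictates for this question.

    No meaning is involved anywhere: if the book has no matching entry,
    fall back to a stock reply ("Sorry, I don't understand.").
    """
    return RULE_BOOK.get(question, "对不起，我不明白。")

# To the Chinese speaker outside, the slot produces sensible answers:
print(room_occupant("你好吗？"))
```

To the person posting questions through the slot, the output is indistinguishable from understanding; inside, there is only lookup.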

