University of Bristol

Robot responsibilities

Asimov’s Three Laws of Robotics

If robots were ever to become completely autonomous, they would need some form of guidance so as not to cause harm by accident. The science fiction writer Isaac Asimov famously devised a set of laws that might be used to constrain robot actions.

The laws

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
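The three laws form a strict priority ordering: each law applies only where it does not conflict with the laws above it. As an illustration only, that ordering can be sketched in code. All the flags below (`harms_human`, `disobeys_order`, and so on) are hypothetical inputs; as the next paragraph notes, actually evaluating them would demand very subtle judgement from the robot.

```python
def permitted(harms_human=False, disobeys_order=False,
              order_harms_human=False, endangers_self=False,
              serves_higher_law=False):
    """Illustrative sketch: would an action be allowed under the Three Laws?

    The boolean flags are hypothetical; a real robot would first have to
    judge, e.g., whether an action truly harms a human.
    """
    # First Law (highest priority): never harm a human.
    if harms_human:
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if disobeys_order and not order_harms_human:
        return False
    # Third Law: protect its own existence, unless self-sacrifice serves
    # the First or Second Law.
    if endangers_self and not serves_higher_law:
        return False
    return True
```

For example, `permitted(disobeys_order=True, order_harms_human=True)` returns `True`: the robot may refuse an order that would hurt someone, because the First Law outranks the Second.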

However, to follow these rules the robot would need to be capable of some very subtle judgements, such as telling whether one person was genuinely threatening another or just joking.

Do you think a robot could ever be capable of such fine distinctions? And would such a degree of intelligence indicate it should also be worthy of some kind of "robot rights"?



