Asimov’s Three Laws of Robotics
If robots were ever to become completely autonomous, they would need some sort of guidance so as not to cause harm by accident. The science fiction writer Isaac Asimov famously devised a set of laws that might be used to restrain robot actions.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
However, to follow these rules the robot would need to be capable of some very subtle judgements, such as telling whether one person was genuinely threatening another or just joking.
Do you think a robot could ever be capable of such fine distinctions? And would such a degree of intelligence mean it should also be granted some kind of "robot rights"?