We generally hold people responsible for their actions:
- if they were aware of what they were doing
- if they knew the possible consequences of their actions at the time
- and if they could have chosen to act differently
Could a robot ever meet these conditions? Would it need to be conscious, aware of itself and of the existence of others, to be able to make such choices?
If we are going to decide whether robots should have rights and responsibilities, we need to know whether a robot could ever be more than just a machine. Could it become a person? And how could we possibly tell?
How do you know if anyone is conscious?