Of course I am talking about the famous Three Laws of Robotics as set forth by Isaac Asimov.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second law.
Asimov later added a "Zeroth Law" that takes precedence over the other three: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
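What makes the Laws interesting is that they form a strict priority ordering: each law only applies when no higher-priority law is violated. A toy sketch of that precedence in Python (purely illustrative, and obviously no relation to how an actual positronic brain would work):

```python
def permitted(harms_human: bool, ordered_by_human: bool, harms_robot: bool) -> bool:
    """Decide whether an action is allowed under the Three Laws,
    checked in priority order (hypothetical toy model)."""
    if harms_human:
        return False  # First Law: absolute veto, even for a direct order
    if ordered_by_human:
        return True   # Second Law: obey, since the First Law is satisfied
    if harms_robot:
        return False  # Third Law: self-preservation, unless overridden above
    return True       # nothing forbids the action

# An order to harm a human is refused; an order that merely risks the
# robot itself is obeyed; unordered self-harm is avoided.
print(permitted(harms_human=True, ordered_by_human=True, harms_robot=False))   # False
print(permitted(harms_human=False, ordered_by_human=True, harms_robot=True))   # True
print(permitted(harms_human=False, ordered_by_human=False, harms_robot=True))  # False
```

The whole point of the ordering is visible in the second call: the Second Law can override the Third, but never the First.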
I will work on a Positronic Brain for him, but something tells me that will be a little bit harder to pull off.