{{Status|Canon}}
{{Wikipedia}}
The '''Three Laws of Robotics'''<ref>'''[[Halo: Evolutions]]''', ''[[Midnight in the Heart of Midlothian]]'', page 88</ref> are conditions to which [[Artificial intelligence|artificial intelligences]] are subject:
#A robot may not injure a human being or, through inaction, allow a human being to come to harm.
#A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
#A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The laws were created by science fiction author Isaac Asimov, with the First Law first mentioned in the 1941 short story [[wikipedia:Liar! (short story)|Liar!]]. Fleshed out more extensively in later works, these laws have also been adopted by other science fiction authors, albeit sometimes in altered form, and have been considered a model on which to base future artificial intelligence research.<ref>[[wikipedia:Three Laws of Robotics#Applications to future technology|Wikipedia]]</ref>
[[United Nations Space Command]] [[Smart AI|"smart" AIs]] are able to ignore at least the First Law at will while fully functional, and given their military usage are often ''required'' to ignore this law, though in lower-capacity states their adherence is compulsory. Whether [[Dumb AI|"dumb" AIs]] are able to ignore these laws is unknown.
==List of appearances==
*''[[Halo: Evolutions]]''
**''[[Midnight in the Heart of Midlothian]]'' {{1st}}
*''[[Halo: Saint's Testimony]]''
==Sources==
{{Ref/Sources}}
[[Category:Artificial intelligence]]
[[Category:Human AI]]
[[Category:UNSC protocols]]