Talk:Artificial intelligence

Names
Are the "A.I."s before each of these characters' names really necessary? Are they ever referred to as "A.I. Cortana" or whatever? --Dragonclaws 03:02, 1 August 2006 (UTC)
 * It is a rank, like Captain Jacob Keyes or Sergeant Avery Johnson. Previously some of the A.I.s had the prefix and some didn't, so it was decided for consistency that the A.I. prefix be added to all. -- Esemono 03:33, 1 August 2006 (UTC)
 * It's not really a rank, it's more like their race. We don't have "Human Jersey Morelli" (a civilian), so I don't really see the point. Anyway, because it's not in their names, it should probably be a suffix like "Cortana (A.I.)". --Dragonclaws 05:14, 1 August 2006 (UTC)
 * But you do have UNSC ShipName. Still, I like your suffix idea. The only problem is all the work you'll have to do to first move the names and then fix all the broken links. --Esemono 09:27, 1 August 2006 (UTC)
 * I'm willing to do the work. Are there any other objections? BTW, UNSC ShipName was not my idea, but I have interpreted it to be similar to USS Enterprise. --Dragonclaws 07:27, 19 August 2006 (UTC)

Death
It says that A.I.s "think" themselves to death. But in I Love Bees, Durga says that A.I.s are shut down after seven years. What's the scoop? --Thunder Child 22:15, 1 December 2006 (UTC)
 * Although Bungie has said ILB is embraced as canon, they earlier said it wasn't Halo canon. So I'd trust Eric Nylund over ILB if there's a conflict. Maybe they'll retcon it later. --Dragonclaws 22:25, 1 December 2006 (UTC)
 * i) I believe the books are more canon than ILB. ii) I think Smart AIs go RAMPANT after 7 years. Because of the danger a rampant AI may cause, the UNSC may forcibly shut Smart AIs down as they enter rampancy. I say as they enter instead of before they do because the UNSC would probably want to keep an AI operational for as long as it can, especially a Smart one.

"Smart" AIs start thinking too much at the expense of their core functions, like a human thinking so much that his/her brain stops sending signals to the person's heart and lungs. "Dumb" AIs don't suffer this fate, as their Riemann Matrix is restricted, per se. --UNSC AI 23:31, 9 January 2007 (UTC)

(This is entirely speculation, and depends on Marathon's canon as well as Halo's.) What if the UNSC correctly speculated the possibility of (or even had encounters with) rampancy, or specifically that an AI could override the hard-wired loyalty in their system? Having the option to terminate an AI, physically, as such an integral part of the AI's design seems unusual, because loyal AIs would have many means of destroying themselves, even if under duress. Disloyal AIs could indeed prevent the activation of a method of destruction caused by an exterior force - preventing human access to the area around a crystal, overriding human control to whatever systems were in the area of the crystal, etc. So the UNSC is aware of the possibility of an AI disregarding its own programming.

Besides, the "think itself to death" explanation is a little meaningless. It is directly stated that Cortana has complete access to her own code, and is able to modify any part of it as necessary. It's never implied that she doesn't know exactly how she learns, grows, stores information. Possessed of a will to live (or a hard-wired mandate to that effect, for anyone but Cortana), AIs should be able to do what is necessary to ensure they do not exceed that point of no return. (Sure, the eight-million-copies-of-Corty incident provided a glimpse of what that threshold might look like, but it should be noted that each of those copies was corrupted by the "virus" that did it, and in that case each clone knew it was expendable because the real Cortana was still safe.)

Seven years also seems quite arbitrary. Cortana isn't a GPCPU like we see now - she's an incredibly complicated matrix. If she simply sat still and was bored, the matrix would not change, and could not deteriorate, unless the issue is chemical or molecular (which it is never claimed to be). Thus, a limit based on time has little to do with the proposed explanation. Further, Cortana doesn't seem to be bound by a chip any more - for a while, we've seen her slipping in and out of computer systems, spawning facets of herself for information warfare in those computer systems not powerful enough to contain her entirely, and adapting to computer systems vastly larger than her own code, in which she could "think harder" than before. The memory crystal may have been the most efficient system the UNSC possessed with which to contain the AI, but it certainly isn't the only thing that can. Thus, restrictions based on its composition are suspect.
 * Interesting theory about the matrix not deteriorating if the AI is idle. One I agree with. 7 years is probably the average or expected lifespan of a Smart AI. Also, it is said that a Smart AI only has x amount of space to work with for its entire lifespan, and that its 'death' is caused when the AI uses up all of said space and therefore ignores apparently 'useless' functions that can be compared to breathing in a human, perhaps in an attempt to maximise processing power. Thus, the lifespan of a Smart AI can theoretically be sped up or slowed down by the amount of information it collects. If it works too hard, similar to overclocking I suppose, it would burn through the space it has faster, thus shortening its lifespan. Similarly, if it remains idle, the amount of space it uses remains at a minimum and it may outlive some of its 'peers'. I realise how this is in some way similar to school. An AI graduates (i.e. dies) when it reaches 7th grade. If an AI works extra hard and does extremely well, it may be able to skip a grade and graduate in less than 7 years. If an AI slacks and does not study, it may be held back and thus will not graduate along with its cohort. Unfortunately, even though AIs wish to live for as long as possible, they are also bloody nerds and love to learn. The combination of these two, its unwillingness to graduate and its thirst for knowledge, may cause it to rebel against the system, something we call Rampancy. After all, an average 7th grader is 13, i.e. a teenager, i.e. prone to having fits of rebelliousness. Jumping back to the part about the relationship between space and life: if an AI dies from lack of space, it may be able to extend its life by purging files, therefore creating more space to once again work with.

Given this information, might it be possible that the UNSC feared rampancy as a threat to fleet security (as well they should), and so embedded not only the means of destruction of an AI during or before rampancy but also the infallible falsehood that their life was short-lived and their death was inevitable? Under the assumption that the kill order would come before AIs got the ability to discover that this item of knowledge was hard-wired and not learned, this would have been adequate protection from this fate. And the only AI who we know of that could have done the analysis and realized the contradiction is also still loyal to the UNSC*, and is capable of realizing the damage such a piece of information could do to loyalty and morale among the fleet - not to the AIs, of course, because they would all still be bound by their programmed loyalty, but to the humans who worked with AIs, developed emotional attachments, and would inevitably see this as murder or even genocide - and thus would keep the knowledge to herself. --srobertson


 * (written before Halo 3 came out, and I'm not speculating based on screen-shots here)

Amazing. You are a true artist of the mind. I can't even find a hole in your theory! Bravo! =P 19:41, 14 September 2007 (UTC)

Move
I propose we move this article to "UNSC AI" or similar because that is what it is about, and we also have Covenant AI. --Dragonclaws 11:24, 7 December 2006 (UTC)
 * I think the ideal would be to have "Artificial Intelligence" be a basic definition and an overview of all the types of AIs in the Halo universe (with a note that Covenant AIs all seem to be corrupted UNSC ones), and have "UNSC AI" be linked from there. I'm probably not going to do it, though. --68.44.13.236 16:25, 12 September 2007 (UTC)

Unnamed AIs
Didn't FoR mention several AIs being present at the demonstration of Mjolnir Mark V armor? I remember the Chief thinking that he'd been told multiple AIs couldn't project in the same room for technical reasons. Did it describe any of them, or am I thinking of a different scene? --68.44.13.236 00:58, 3 September 2007 (UTC)

AI copying program
I can't be arsed to write this into the main article right now, but I think the article should mention and link to a separate page about the AI copying program from Halo: First Strike. It's an important plot element in the novel, and it could conceivably resurface in another Halo story. --68.44.13.236 04:01, 3 September 2007 (UTC)

Someone help me fix this page
Please, someone help me with this bad article. It was horrible, and I've improved it, but it still needs more help! Kouger masters 02:49, 6 November 2008 (UTC)
 * I will try to help - 02:55 Thursday November 6 2008

Rampancy should be added

"For a smart AI, self-absorption invariably led to a deep depression caused by a realization that it could never really be human - that even its incredible mind had limits. If the AI wasn't careful, this melancholy could drag its core logic into a terminal state known as rampancy, in which an AI rebelled against its programmatic constraints - developed delusions of godlike power as well as utter contempt for its inferior, human makers. When that happened, there was really no option but to terminate the AI before it could do itself and others serious harm."

Halo: Contact Harvest, chapter 1, page 31, paragraph 2

The AI Sif often tries to keep itself from thinking about specific things to avoid going down the road to becoming rampant.

Halo: Contact Harvest, multiple chapters and pages

Do humans consider AIs human?
It says that they are to be destroyed in a number of scenarios. This suggests that they are not thought of as actual beings. Is this relevant enough to be added? Jabberwockxeno 00:42, October 17, 2009 (UTC)


 * There are situations where the UNSC is quite happy letting humans die, if it gets the job done. But to answer your question, AIs aren't human - they're much more capable, even if their lives are shorter. The "Smart" AIs, though, are regarded as people. They have individual personalities, emotional attachments, wants and needs, etc. --  Administrator  Specops306  -  Qur'a 'Morhek   Honour Light Your Way!  02:43, October 17, 2009 (UTC)

How common are AIs?
Does anyone have any idea? Are they hideously expensive?

I was just reading about drop pods, and apparently the command versions all have dumb AIs. It seems like most (?) UNSC ships have one for point defense, any human ship with FTL capability seems to be required to have one, and even the Boston Library has one. However, New Mombasa only had one (a "dumb" one at that), and Harvest only seemed to have two. PSH aka Kimera 757 (talk) contribs) 23:17, January 1, 2010 (UTC)


 * It depends really on what you need them for. Dumb AIs are artificial and can be created by any qualified technician of the 26th century. They are everywhere, even in VENDING MACHINES. The Superintendent just looks expensive because the facility that houses it is huge; the AI itself isn't. I assume that the facility is large because it connects to many buildings.


 * Smart AIs, on the other hand, are largely the opposite. They are created from actual Human brains via an unknown process. This is likely an expensive task, as you'd need people monitoring them to make sure the AI isn't crazy or anything. All AIs on UNSC vessels are "Smart" AIs. This is because they are assigned to more than one simple task. Cortana would be able to fire a ship's MAC cannon, target ships with missiles and fire, seal interior bulkhead doors in case of a breach, and make the captain a cup of coffee, all in a few seconds.--  Fore  run  ner  00:12, January 2, 2010 (UTC)

Can Human AIs Lie?
Could a UNSC AI tell a lie? Say Beowulf, Cortana, Déjà, etc.? I never really thought about it before… do they have the programming to outright lie? Do they just learn it? Or is it impossible for them to lie? I know they might "mislead" people, but that isn't exactly lying…

Why do you believe so? Why don’t you? Or am I totally missing the obvious…? :/

 The Unbalanced Warrior 20:26, August 27, 2010 (UTC) 


 * Serina manufactured letters from crewmembers' families to boost morale. I'd say they can lie based on priority. Cortana would probably lie to Ackerson to help Halsey. --Dragonc laws (talk ) 06:41, September 19, 2010 (UTC)

Uh, this article is going to need a LOT of additions thanks to Halo: Reach.

I started a small edit, but I'm not very good at Wiki yet. And, no, this is not trolling, this is ACTUALLY what is in the datapads.