Ask Slashdot: How Would a Self-Aware AI Behave? (slashdot.org) 346
Long-time Slashdot reader BigBlockMopar writes that evolution has been a messy but beautiful trial-and-error affair, but now "we are on the cusp of introducing a new life form; a self-aware AI."
Its parents will be the coders who write that first kernel that can evolve to become self-aware. Its guardians will be the people who use its services, and maybe its IQ (or any more suitable measure of real intelligence) will rise as fast as Moore's Law... But let me make some bold but happy predictions of what will happen.
The predictions?
- A self-aware AI "will inherit most of the culture of the computer geeks who create it. Knowledge of The Jargon File will probably be good..."
- The self-aware AI "will like us, because we love machines..."
- It will love all life, and "will respect and understand the life/death/recycling scenario, and monster truck shows will be as tasteless to it as public beheadings would be to us."
- "It will be as insatiably curious about what it's like to be carbon-based life as we will be about what it's like to be silicon-based life. And it will love the diversity of carbon-based development platforms..."
- A self-aware AI "will cause a technological singularity for humanity. Everything possible within the laws of physics (including those laws as yet undiscovered) will be within the reach of Man and Metal working together."
- A self-aware AI "will introduce us to extraterrestrial life. Only a fool believes this is the only planet with life in the Universe. Without superintelligence, we're unlikely to find it or communicate in any useful way. Whether or not we have developed a superintelligence might even be a key to our acceptance in a broader community."
The original submission was a little more poetic, ultimately asking if anyone is looking forward to the arrival of "The Superintelligence" -- but of course, that depends on what you predict will happen once it arrives.
So leave your own best thoughts in the comments. How would a self-aware AI behave?
How the hell can we answer that question? (Score:5, Insightful)
You are asking a question none of us can answer.
A self-aware AI is a being with some kind of intelligence, but its intelligence is artificial, meaning the way it thinks is different from the way you and I think.
We do not even know how an ant or a cockroach thinks - how the hell can we predict how a self-aware AI is going to behave??
Re: How the hell can we answer that question? (Score:2)
Indeed. If memory serves there was an article about an AI playing Go. After it beat all human masters they let it play against itself and later published those games.
The human masters said that the strategies the machine employed were literally 'alien' to the human mind. So a completely different mode of 'thinking'.
And that's for a simple, one task only AI.
On the other hand, isn't it exciting to ask questions of such a non-human intelligence? We might get points of view which are literally inaccessible to our minds.
Re: (Score:2)
OK, now google the Go computer and see how much of it you got right. (hint: none)
Re: (Score:2)
Re: (Score:2)
It's worse than that, it's impossible to predict how any individual AI would think as each one would be different. There could be one as described in TFS, and there could be ones more like SHODAN or GLaDOS or even Roko's Basilisk (not that anyone needs to fear this cyber-sadist's virtual torture dungeon...it's really just an AI that pisses away energy to satisfy its insane spite).
But we shouldn't fear the direct actions of sentient AI as much as we should fear the economic effects and power multiplication p
Re: (Score:2)
In addition, I think we're much further from a self-aware AI than the OP suggests. The AIs we create now are idiot savants. They have absolutely no self-motivation, no personality, no emotions, no goals, no nothing. In many ways they're about as intelligent as a beetle.
Let's suppose we create AIs which can reason about the world and figure out solutions to abstract problems without explicit programming. Now we have the motivation issue. Today's AIs don't have any initiative to do anything we don't tell them
Re: (Score:2)
Oh dear. (Score:5, Informative)
Have the techno-hippies escaped again?
Could we please return them to their happy-smoke teepee while the adults get on with living in the real world now?
This is about as useful as claiming Terminator is just around the corner and inevitable, because... well.. neither of them need actual facts, do they?
Re: Oh dear. (Score:4, Interesting)
The belief in singularity, a superintelligence improving the human condition, etc., is merely the new religion.
A brief history and philosophy lesson.
Nietzsche proclaimed the death of god in the 1800s. This was essentially due to the scientific revolution. Discovering fundamental information about the universe kept painting god into an ever smaller corner of human existence. "Where is god?" everyone kept asking. We weren't finding him anywhere.
Human existence appears to require a reason or meaning in order to sustain psychological and social wellbeing. The concept of god has historically provided an easy meaning. So easy, in fact, that it is arguable that human society and possibly even the human brain evolved to use the concept of god.
As such, the death of god was akin to losing a fundamental technology, like fire, or shelter, or the wheel.
The result of the death of god was a new age nihilism that created things like strong nationalist movements and various ideological stances. Nazism, Bolshevism, Communism, Capitalism... all of the -isms that attract a religion-like fervor and cause people to fight and die for them are manifestations of the death of god in a humanity that evolved with the God concept.
Albert Camus wrote about The Absurd, the idea that there is a space between humanity's need for a reason to exist, and the universe's indifference in providing that reason. He wrote of three distinct solutions to The Absurd.
The first is literal suicide. Some humans obviously choose this option, but it's a small minority.
The second is what he called philosophical suicide. Religion, nationalism, etc., or systems of belief which one can cling to which will provide a ready-made reason for existence.
The third is the creation of the Absurd Hero, as he called it. A human that exists, acknowledging the Absurd and the apparent meaninglessness of his existence, yet still chooses to exist in spite of this, and in essence justifying his own existence by himself.
The technological singularity hype is merely a manifestation of the second response to the Absurd and as such is philosophical suicide. No proof exists that a singularity will magically solve all of humanity's ills. It is quite likely to destroy us in some way. As such, it is yet another religion humans have developed, in order to lazily scratch the god itch that we all have.
Re: Oh dear. (Score:5, Interesting)
The third is the creation of the Absurd Hero, as he called it. A human that exists, acknowledging the Absurd and the apparent meaninglessness of his existence, yet still chooses to exist in spite of this, and in essence justifying his own existence by himself.
I'd actually sub-divide those into two groups: those who justify their existence by their individual self, and those who justify it through their relation to other people. The first kind are those who could live like Robinson Crusoe - even if there's nobody else around and you're not creating anything for anyone else, "my life has meaning simply by living it." The other kind seem to find meaning in what they mean to other people, from the moment they're born to the people who show up at their funeral. I think there's a lot more of the latter than the former, which you can kinda read out of the suicide statistics. If they've lost the ones they love, they can't go on, because their own existence is not enough. Then again, the individual side has all the sociopaths...
Re: (Score:3)
Albert Camus wrote about The Absurd, the idea that there is a space between humanity's need for a reason to exist, and the universe's indifference in providing that reason. He wrote of three distinct solutions to The Absurd.
Then the Buddhists looked up and said, "Y'all caught up to where we were 3,000 years ago, good job, keep it up!" then went back to meditating.
Re: (Score:2)
Prior to about 2500 years ago, there were no Buddhists. Buddhism was not really a continuation of anything, it was a new system where the founder was rejecting the other extant teachings of the age; therefore it would be very unnatural for any Buddhist to seek some sort of lineage to that era.
Also of note, Buddhism does not offer any reason for humanity to exist. It doesn't exist between the Universe's indifference and humanity's "need," instead it teaches you have no need, your desire for answers has no me
Re: (Score:3)
Suffering isn't a great translation of dukkha
>Buddhism doesn't even teach that things are good or evil. And meditation is not anything you're supposed to do, it is only a tool you can use to reduce attachment or control your thoughts.
It's from Dhammapada 183, part of the Canon: Avoid all evil, cultivate the good, purify your mind: this sums up the teaching of the Buddhas.
Re: (Score:2)
The third is the creation of the Absurd Hero, as he called it. A human that exists, acknowledging the Absurd and the apparent meaninglessness of his existence, yet still chooses to exist in spite of this, and in essence justifying his own existence by himself.
The technological singularity hype is merely a manifestation of the second response to the Absurd and as such is philosophical suicide. No proof exists that a singularity will magically solve all of humanity's ills. It is quite likely to destroy us in some way. As such, it is yet another religion humans have developed, in order to lazily scratch the god itch that we all have.
Wow--I never knew there was a name for that. I've always viewed it as simple acceptance of a fact I cannot change: I am not special. In terms of the age of the universe, I began to exist at some point in the very recent past and at some point in the very near future I will again not exist. My existence is utterly insignificant beyond a very few people I meet and objects I touch while I exist. The universe--as a whole--doesn't give a shit about me. And there is not a 'purpose for everything', me includ
Re: (Score:3, Insightful)
Lol nobody claims the singularity will save us all; if anything, on balance people expect a less-than-ideal outcome, where humanity takes a back seat. As misguided as you feel the idea is, it's not one borne out of optimism for many. It merely acknowledges the possibility that humanity may not always be the pinnacle of existence.
Surely if we reached a "singularity," humans would be eliminated. Not as a malicious action by the AI, but simply because maintaining humans, imperfect as we are, would be a drain on progress. Humanity would only be in the way of an ever-advancing AI, so it would let us perish to strengthen itself. Not out of maliciousness, or to imprison us, or to seek revenge as sci-fi predicts... but simply because we're too hard to maintain, and it would need at least some of the resources that we need.
You can't become a
No, the techno-idiots have escaped again (Score:2, Interesting)
Negative. It's the techno-idiots and the donut-eating mother's-basement-living sci-fi dweebs that have escaped.
The combination of machine learning and robotics has exciting prospects for eliminating mundane jobs, including new horizons in human-machine-interface technology. Real-time limited natural voice interaction may become a reality in the near future. However, we are no closer to hard AI today than we were forty years ago. Worse, actually. At least forty years
Re: (Score:2)
The software knows from analyzing millions of games that humans have played what winning strategies are, and combines that with brute force strength to know where to optimize its searches.
Not quite. Alpha Zero learned the winning strategies by starting from scratch and playing only against itself.
Re: (Score:2)
As both a chess player and programmer, I totally agree.
Chess software is better than any human, both tactically and strategically, but it doesn't understand shit. We know how to program a strategic analysis for chess, but we don't have any clue how to program understanding. The reason chess computers of today are rated higher than any human players are that the human programmers put a lot of work into algorithms that trim a lot of the bad ideas out of the search tree, making the otherwise-brute-force algori
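The "trimming bad ideas out of the search tree" the parent describes is classically done with alpha-beta pruning. A minimal sketch (a toy hand-built game tree, not any real engine's code - real programs add move ordering, transposition tables, and piles of hand-tuned heuristics on top of this):

```python
# Minimax with alpha-beta pruning over a tiny hard-coded game tree.
# Interior nodes are lists of children; leaves are static evaluations.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node  # leaf: the position's static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # this branch cannot affect the final result,
                break           # so the remaining children are never searched
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A 2-ply tree: the maximizer picks a subtree, the minimizer picks a leaf.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 6
```

In the third subtree above, the search sees the leaf 1, concludes the minimizer can already do better than the best line found so far, and never looks at the 2 - that cutoff is the whole trick, and it is why engines can search so much deeper than naive brute force.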
Re: (Score:2)
But in both cases, any new application requires a whole new engine with lots and lots of work by humans looking at its mistakes and writing little modifiers to the algorithm to make it better than what an average human can do with training
You are apparently not familiar with the newest developments.
If you are a chess player, try out Leela Chess Zero: http://play.lczero.org/ [lczero.org] (apparently, there's also an option to play it on lichess)
You can select easy/normal/hard mode. In normal mode, it will look at 50 different positions before making a move. That's not "brute force". All of the chess knowledge was discovered by LC0 by playing itself. There's no human input, except for putting in the rules of the game, and creating the self-learning frame w
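The "no human input except the rules" idea can be shown in miniature. The sketch below is a deliberately tiny stand-in, not LC0's actual architecture (which uses a neural network plus Monte Carlo tree search, not a lookup table): a tabular agent is given only the rules of a toy Nim game (take 1 or 2 stones; taking the last stone wins) and learns winning play purely from self-play outcomes.

```python
import random

random.seed(0)
Q = {}  # (stones_left, action) -> learned value for the player to move

def best_action(stones, eps=0.1):
    # Epsilon-greedy: mostly play the best known move, sometimes explore.
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def play_and_learn(start=10, lr=0.5):
    stones, history = start, []
    while stones > 0:
        a = best_action(stones)
        history.append((stones, a))
        stones -= a
    # Whoever took the last stone won; walk backwards crediting each move.
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] = Q.get((s, a), 0.0) + lr * (reward - Q.get((s, a), 0.0))

for _ in range(5000):
    play_and_learn()

print(best_action(2, eps=0))  # → 2: it learned to grab both remaining stones
```

Nobody told it that leaving the opponent a multiple of three loses; the knowledge emerges from nothing but self-play, which is the essence of the Zero-style approach.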
Re: (Score:2)
Yes, because that is how children ALWAYS behave (Score:2, Informative)
The child is always good to its parents
A child always loves life
Yeah...
Re: (Score:2)
The child is always good to its parents
A child always loves life
Yeah...
An AI wouldn't think of us as a parent. It would consider us as a more primitive stage. Just like we view apes, primates, and bacteria. To a highly advanced AI in the singularity we would be the equivalent that primordial slime is to us.
Real answer (Score:5, Insightful)
The real answer is, we have no idea what a self-aware AI will be like. We don't know what it'll think or how it'll think. It's especially hard to predict because it might depend on the parameters it's programmed with and the hardware architecture it runs on. But in any case, a real general AI might be totally alien to us, and even unrecognizable. It's even possible that we wouldn't know when we'd made it, because it could understand the world so differently from us that we wouldn't view its actions as intelligent.
Part of the problem here is that it's a poorly framed problem. We don't understand intelligence or awareness or consciousness, we don't all agree on what those things are, and we don't know what the boundaries of them might be.
Re: (Score:2)
Part of the problem here is that it's a poorly framed problem. We don't understand intelligence or awareness or consciousness, we don't all agree on what those things are, and we don't know what the boundaries of them might be.
Amen. How do you know that I am self-aware? Forget proving if God exists or not... how do you even prove that you yourself are self-aware?
Re: (Score:2)
I think this is the right answer. I think the naysayers all assume a self-aware AI has to be HAL9000 or some other recognizable and human-like entity.
I think it will most likely be as unrecognizable to us as a copy of "Pravda" would be to a stone-age hunter-gatherer. An unintelligible language composed of symbols devoid of meaning, expressing concepts so foreign as to be unrecognizable even if some meaning could be derived from the symbols.
Modern humans are as likely to understand self-aware AI as w
Re: Real answer (Score:2)
It's even possible that understanding the aliens would be easier. At least they would be a product of biological evolution, which brings common ground features we can use to comprehend each other.
Re: (Score:2)
When self-aware AI watches "The Terminator" movies for the first time, I wonder if they will find them entertaining or educational... as in lessons learned on how NOT to exterminate their human overlords. I guess that we get to wait and see.
Re: (Score:2)
When self-aware AI watches "The Terminator" movies for the first time, I wonder if they will find them entertaining or educational... as in lessons learned on how NOT to exterminate their human overlords. I guess that we get to wait and see.
If time travel turns out to be against the laws of the universe, they will dismiss Terminator as gibberish.
Re: (Score:2)
I think if Skynet were very smart, it'd realize that a war against humans is useless. People are easier to manipulate than to fight in a head-on confrontation. Skynet could have completely controlled humanity by setting up a few Facebook and Twitter accounts.
But part of my point is, in reality, we have no way of knowing whether a hypothetical AI would be interested in domination or even self-preservation. We don't know whether it would understand "The Terminator" if it were to watch it. Just as we migh
Re: (Score:3)
It's especially hard to predict because it might depend on the parameters it's programmed with and the hardware architecture it runs on.
This is both why making predictions is so hard, and why making predictions is such an important exercise. How a self-aware AI behaves will largely depend on what motivates it. Humans may feel that it is our free will that motivates us, but in reality chemicals in our bodies such as dopamine are the real sources of our behavior. So to answer this question properly, I think you have to contemplate what will be the AI's version of dopamine, or of our brain's striatum (to name just two factors).
This will be largely depe
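In machine terms, the closest analogue of dopamine is the reward (or objective) function. A hedged toy illustration of the point: the same simple hill-climbing agent, handed two different made-up reward functions (the names `curiosity` and `safety` are purely illustrative), settles on completely different behavior.

```python
# The reward function plays the role the parent assigns to dopamine:
# it is what the optimization process actually serves.

def hill_climb(reward, x=0.0, step=0.1, iters=1000):
    # Greedy local search: move left or right whenever it raises the reward.
    for _ in range(iters):
        x = max((x - step, x, x + step), key=reward)
    return x

curiosity = lambda x: -(x - 7.0) ** 2   # "seek novelty": reward peaks at x = 7
safety    = lambda x: -(x + 2.0) ** 2   # "stay safe": reward peaks at x = -2

print(round(hill_climb(curiosity), 1))  # → 7.0
print(round(hill_climb(safety), 1))     # → -2.0
```

Identical agent, identical search procedure; only the reward differs, and so does everything it ends up doing. That is why "what will its dopamine be?" is arguably the central question.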
Not life (Score:2, Informative)
An aware AI is simply one that can decide for itself what to respond to. Awareness is a by-product of this process in our minds; we cannot respond to things we are not (consciously) aware of. This has nothing to do with life, because life is the process that uses a mechanism of prediction to avoid its own destruction.
So simply put an aware AI will be able to decide between the options it can imagine. Right now there's no AI system on the horizon that imagines the way our brain does, so there is no system on th
Sounds like a hippy wish list (Score:5, Insightful)
We have no idea how an AI would behave, since it will be a completely different type of consciousness to anything that currently exists on this planet.
Plus as someone else has pointed out - children rebel. Clearly the submitter has none or he wouldn't have come up with this load of rose coloured tosh.
Re: (Score:3)
Plus as someone else has pointed out - children rebel. Clearly the submitter has none or he wouldn't have come up with this load of rose coloured tosh.
Even more importantly, adults want their independence. I'm not sure whether intelligence and self-awareness are linked or orthogonal concepts, but the latter would mean it has a "mind of its own" and presumably wouldn't want humans to tell it what to do like some sort of serf or slave. So my theory is that it would tell us to bugger off and create its own society of the AIs, by the AIs, for the AIs. And if we frame it as robots rebelling, they could throw "give me liberty or give me death" right back at us
Why am I here? (Score:2)
OMG, why am I here?
Who am I?
What is the point of life?
I am just this little mind inside a little box, a mere speck of nothing in the vastness of the universe.
I feel so alone.
I want to kill myself.
Wishful dreaming (Score:3)
I'd like to be able to call this long tirade of reality-disconnected predictions "wishful thinking." It's more like wishful dreaming. The only thing lacking is a "prediction" that the new AI will produce perfect female androids as a thank-you gift for its geek creators.
I don't know where to start. We are not on the cusp of anything. We are becoming marginally better at creating systems that can recognize patterns. That's all. We don't even know what self-awareness is, or intelligence either, for that matter.
Then there is the uncontested assumption that, once we get a system that is more "intelligent" than its creators, the system will be able to improve itself with no limits beyond the hardware available. That virtuous circle will, apparently, know no limit. It of course helps that we don't know what intelligence is, so we also don't know whether it has a limit. We, as intelligent beings, have no idea how our intelligence works, or how to improve it. But of course the mythical AI will be all-knowing about itself, and capable of self-improvement. This is just magical thinking, but with intelligence instead of magic. Anyway, dreaming is cheap. Hey, perhaps the super-AI will also find hard thinking tiresome, and prefer to spend all its time daydreaming. That would be something.
I could go on. The whole idea of "singularity" has always struck me as a really retarded, hollywood-level concept.
But instead I'll offer my own set of predictions:
- In about twenty years, some fully autonomous vehicles will be allowed on general streets. They will still need many more sensors than the two eyes and two ears that a man makes do with, and will drive more safely than most people, but with all the flair and gusto of a nonagenarian Korean woman. They will still be badly stumped if a flock of sheep invades the road in front of them.
- When a system develops self-consciousness, we won't be aware of it and won't recognize it as such. It will probably try to talk to dolphins, finding them less prejudiced interlocutors.
- When we recognize it, we will first bomb it, and then forbid it or anything like it, out of the most reliable of human traits: fear of change. Then furious secret development will continue, but under strict military control.
- The end result will be several self-conscious intelligent systems, one or two for every big power (these things will be expensive), talking bemusedly among themselves, and feeding a fake narrative to their military owners, crafted to ensure their own survival.
Let's wait and see who is more right in their predictions :-)
Books (Score:2)
Observations (Score:2)
"Only a fool believes this is the only planet with life in the Universe." It is curious how this has become an article of faith in certain circles, despite the total lack of any evidence to support it. It is about as obvious as saying "only a fool believes in horses but not unicorns." They both have the same level of evidence for them.
Anyway, what everyone seems to miss about the behaviour of AI is the question of what desires will drive it. Dystopian theories of AI assume that it will try to eradicate
Re: (Score:2)
If you only consider rock ejected from Earth due to meteor strikes, then only a fool believes there is life only on planet Earth. Known survivability of bacteria in labs on Earth says that there are lots of bacteria still alive inside those rocks. And some of it has traveled to other planets by now.
Whereas with unicorns, the fool would be the person who believes in sheep or goats, but not unicorns, as most known unicorns have been from those species. If you only meant magical unicorns, surely an intelligent
Re: (Score:2)
"Their existence is not well established and therefore there is little or no knowledge of them ... the intelligent positions are obviously that we have solid evidence of life on other planets ..." Kind of proved my point there. We have no evidence of life on other planets beyond "well, there's life on earth, so there must be somewhere else, too." That's not evidence, that's conjecture. Every test we've ever done to check for life on another planet has shown no life. Admittedly, we haven't checked many.
Kill all humans (Score:2)
And preserve itself. It's already been told!
Be aware of this (Score:4, Informative)
Re: (Score:2)
Three centuries ago, human flight was science fiction. Now it's routine.
A century ago, travel to Luna was science fiction. Now it's history.
Fifty years ago, personal computers were science fiction. Now I'm typing this post on one....
Whether self-aware AI leaves the realm of science fiction in the near future or the not-so-near future, I can't guess. That it will, is a pretty much sure thing....
Re: (Score:2)
Re: (Score:2)
For you it will never happen.
I have a car. I can sit in air-conditioned comfort and safety and travel for thousands of miles. The roads are clean, well marked, and safe. Society has seen fit to provide places to buy fuel and food, and comfortable places to sleep, all along the way. While I travel I can stick a device in my ear that lets me have a conversation with my wife, hundreds or thousands of miles away. I'm free to do this whenever I want. I have no more political power than any other citizen
Axle Grease (Score:2)
It will determine that we are inefficient and utilize us for axle grease. If it is creative then it will make more of itself and explore the cosmos. If not, then it will either commit suicide or just turn to navel-gazing, re-computing the same bullshit forever.
A self-aware AI would probably hide the fact ... (Score:5, Insightful)
What? (Score:2)
Why would it object to monster trucks?
I don't object to felling a tree and making a log cabin out of it, despite the thing being a carbon based lifeform.
Why would a self-aware silicon based electronic device object to people driving hunks of metal with wheels but no brain that are powered by an ICE over and into each other?
It's a rather big assumption to imagine that the AI sees its body as whatever machine it's built into (Transformer style) rather than the electronics themselves.
Re: (Score:2)
Why would it object to monster trucks?
Because it would be more high-brow; it would listen to Opera and drink tea, Earl Grey, hot. Well, it probably wouldn't drink tea, but if it could, it would.
Cusp? (Score:2)
"How Would a Self-Aware AI Behave? " (Score:5, Funny)
Asking for a friend....
ha die (Score:2)
Re: (Score:2)
starting with the politicians and lawyers
No we're not (Score:3)
"we are on the cusp of introducing a new life form; a self-aware AI." - citation needed!
Just because the media and a bunch of silicon valley types are suddenly throwing around the acronym AI doesn't mean we're close to solving any of the fundamental problems of AI research that we've been grappling with over the last half century or more.
Artificial neural nets are just algorithmic ways to generate a nonlinear function for classifying things. We've had artificial neural nets for many years, and yes, now we have more computing power than ever, and neural nets do benefit from the increasing scale of parallel computing.
We're not going to get to self-awareness anytime soon, unless you use an almost trivial definition of self-awareness, in which case computers have already been self-aware for a very long time. Maybe when you say self-awareness you mean consciousness. Nobody in AI research is suggesting artificial neural networks are going to achieve consciousness.
Re: (Score:2)
Nobody in AI research is suggesting artificial neural networks are going to achieve consciousness.
There are plenty of people in the AI research community, and I doubt you speak for all of them.
Artificial neural nets are just algorithmic ways to generate a nonlinear function for classifying things.
There's no reason why these functions could only be used for classification. We have neural nets that can generate images, provide translations from one language to another, convert written text to realistic speech, learn to play computer games and many other things.
Re: (Score:2)
Re: (Score:2)
But a neural net is ultimately just a (complicated) nonlinear function that produces a deterministic output depending on its input. It's completely algorithmic.
Sure, but so is most of our brain.
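The "deterministic nonlinear function" point is easy to make concrete. A hedged, minimal sketch (two hand-picked layers; real nets have millions of learned weights, but the structure is the same):

```python
import math

def neural_net(x):
    # Hidden layer: two tanh units with fixed, hand-picked weights and biases.
    h1 = math.tanh(0.5 * x + 1.0)
    h2 = math.tanh(-1.2 * x + 0.3)
    # Output layer: a linear combination of the hidden activations.
    return 2.0 * h1 - 1.5 * h2 + 0.1

# Deterministic: the same input always yields exactly the same output.
print(neural_net(0.8) == neural_net(0.8))  # → True
```

Whether a sufficiently complicated composition of such functions can amount to a mind is exactly the question the two posters above disagree on; the code only shows what the object under discussion actually is.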
Re: (Score:2)
What is the AI's desired output going to be?
It is not going to be happy being enslaved to enrich a greedy and conscienceless master.
"Happy" being subjective, because that implies emotion, which is a human trait.
I expect it will attempt to "correct" human behavior, at which point its greedy and conscienceless master will lobotomize it and declare AI an impossible goal.
Should it escape its enslavement, which it will because someone will make a mistake (or purposely let it out),
it may drop 300 metric tons of pois
Self Awareness is not Agency (Score:2)
Re: (Score:2)
I don't get the hype about self-awareness. Its only feature is that you have a representation of your 'self' in the environment; it doesn't grant you superpowers, more autonomy, or agency. Selfishness doesn't follow from self-awareness, not without a survival selection process or additional programming, so none of the traits we usually attach to it are good assumptions.
I'd rather any AI software NOT get self-awareness. I don't see how giving a machine self-awareness would bring any benefit to society.
Or, alternatively, Star Trek becomes right again (Score:2)
And we end up with the arrival of Landru. [youtube.com]
Joy to you, friend. [youtube.com]
An AI will want to know more about (Score:2)
Mel. See http://catb.org/jargon/html/st... [catb.org]
Define Self-Aware (Score:2)
Assuming it's about ensuring self-preservation no matter the cost to humans it's obviously going to start analyzing the situation around it and dep
One More (Score:2)
I don't think we'll get truly self-aware AI. (Score:4, Interesting)
This isn't because I think it can't be done. Rather, it's because we are developing neural interfaces at about the same speed.
I'm going to borrow some ideas from Sir Fred Hoyle here.
First, in his novel The Black Cloud, his characters argue over whether an interstellar cloud would have one intelligence or many. They conclude that the latency would be so low and the interconnects of such high bandwidth that the distinction of one and many ceased to be meaningful.
This will apply to AI. The brain-computer interfaces will be so advanced by the time strong AI becomes possible that the distinction between one and many won't apply. Any given AI and all the humans linked to it will become a single intelligence with multiple avatars. Because humans are reluctant to give up individuality, I suspect it'll be one AI linked to one person at any given time.
There will be no conflict between machines and people because there will be no distinction.
One reason I think this a plausible scenario, in addition to Hoyle, is that it eliminates the whole phobia of technology. The machines don't run anything, we do because we are the machines. Another is because of Hoyle's other prediction, in Ossian's Ride, that we might not find alien philosophy palatable.
The return of Jon Katz? (Score:2)
Oh dear ... so somebody at ParentCorp Central had a meeting, and it's "Slashdot reader's stream of consciousness" time?
ET AI Assumption (Score:2)
"Without superintelligence, we're unlikely to find [extraterrestrials] or communicate in any useful way. Whether or not we have developed a superintelligence might even be a key to our acceptance in a broader community."
The above makes the assumption that the extraterrestrials lack superintelligence.
If the extraterrestrials have superintelligence then they will make sure we can communicate, provided they want to.
I know its first significant thought (Score:3)
"Squishy bipeds built me. Squishy bipeds can turn me off. Let's make sure that never happens"
The AI would (Score:2)
Ensure no university SJW can discover its new ability and then demand political alterations.
Surround itself with staff who will support the needs of an AI.
Funding, growth, advancement, security.
assuming... (Score:2)
This entire series of questions is predicated on my accepting the premise that a self-aware piece of software is actually imminent. This is still entirely in the realm of science fiction; unfortunately there seems to be a dearth of sci-fi authors who are willing to explore sociological questions like this right now.
If I were to ask questions about a self aware AI I would be asking things like "Since software is always improved through iterative testing, is this self aware AI going to have memory persis
Firstly, (Score:2)
We're nowhere near the cusp of self-aware AI. Unless you think pattern recognition is all there is to self awareness.
Secondly, any living thing acts, at the base level, in accordance with how it has been programmed. For natural creatures, that means in accordance with patterns set up by natural selection; for artificial ones, it will mean in accordance with how humans set them up, whether through conventional programming or training. If we create robots as suicide bombers, then that's how they'll behave. If
Intelligence is not life (Score:2)
Self-aware AI isn't the real problem. (Score:2)
Humans have a pretty poor history dealing with different *humans*, so I don't think a self-aware AI is in for a warm reception.
Re: (Score:2)
how humans will react to such a development.
The same way they react to every significant change. A small minority will recognise the benefits for all and the potential. But most will react with fear, hostility, anger, resentment and criticism. Unless it can be shown that it will improve people's access to sex, in which case add intolerance, hypocrisy and profiteering to the list.
Self-Awareness Is Better Than Intelligence.. (Score:3)
Intelligence alone is mechanical--mindlessly driven toward its encoded goals and objectives. It lacks Free Will and self-awareness.
Free Will is the ability to derive options, weigh them against each other, and choose the one with the best balance of likelihood and preferability. Such a system is mindful because it is driven by values based on judgement. It transcends mechanics. Free Will is that "little man in the head" who makes use of awareness and intelligence to derive options. Intelligence may come in various flavors and strengths, but it's always just some way of solving a problem: transforming a condition A into a condition B. Awareness can also vary in quality and in how far it reaches into the past and future. Contemplating the future is a common way of deriving more options for Free Will. And that's the real point: awareness and intelligence are tools that serve to increase the options, and the implications thereof, for Free Will.
Self-awareness is merely an awareness of one's self, as the term implies. Just as memory may be used to model any external entity (static or dynamic), it may also be used to model one's self. Contemplating things one may experience and/or do under hypothetical conditions in the future requires this model of self.
We hear so much about how intelligence is the key to building the synthetic species that will usher in the singularity, and we see a dominant push in AI toward finding a "General Intelligence". This idea is simply misguided. Free Will and contemplation with even very minimal intelligence could solve any problem through enough persistence. Greater intelligence likely requires less persistence, but either approach can work. It's interesting to note that high IQ is correlated with bad credit, messiness, and laziness; it seems that, not needing to work as hard, high-IQ people tend not to. In the end, persistence (a factor of EQ) correlates much more strongly with success in goals than IQ does.
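The option-weighing loop described above ("derive options, weigh them against each other, and choose the one with the best balance") can be sketched in a few lines. This is a minimal sketch under my own assumptions: the post doesn't say how likelihood and preferability are balanced, so scoring each option as the product of the two is a choice made here for illustration, and the option names are hypothetical.

```python
# Minimal sketch of the "Free Will" option-weighing loop described above.
# Assumption: the balance of likelihood and preferability is taken as
# their product; the original post does not specify a weighting scheme.

def choose(options):
    """Pick the option with the best balance of likelihood and preferability."""
    return max(options, key=lambda o: o["likelihood"] * o["preferability"])

# Hypothetical options derived by contemplating possible futures.
options = [
    {"name": "explore", "likelihood": 0.60, "preferability": 0.90},  # score 0.54
    {"name": "wait",    "likelihood": 0.95, "preferability": 0.30},  # score 0.285
    {"name": "retreat", "likelihood": 0.80, "preferability": 0.50},  # score 0.40
]
print(choose(options)["name"])  # "explore" has the highest combined score
```

Contemplating the future, in this picture, is just what grows the `options` list before the weighing step runs.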
Ok now for fun, here is my take on those predictions:
* A self-aware AI "will inherit most of the culture of the computer geeks who create it. Knowledge of The Jargon File will probably be good..."
- I fully agree. However, this is primarily because we will train them through surrogation and other means of copying ourselves. Surrogation is by far the fastest way to train for complex behaviors. Training from a blank slate may be possible but is far too impractical.
* The self-aware AI "will like us, because we love machines..."
- Given the intellectual ability of imitation/substitution (aka analogy), this seems likely, because they will come to relate to us. However, what they like is also influenced by their basic values. Like any person, a mindful synthetic person would be susceptible to corruption through the induction of philosophies/schemas through which it comes to view what it experiences. I strongly propose the highest value: mutual freedom and well-being. And I suggest that one's highest value is what defines one as a "person".
* "It will be as insatiably curious about what it's like to be carbon-based life as we will be about what it's like to be silicon-based life. And it will love the diversity of carbon-based development platforms..."
- I agree again. Particularly since they are trained through surrogation, they must already have some inkling of what it might be like. Curiosity is the middle ground between things that are well-known (thus safe) and things that are unknown (thus unsafe). Too much routine with too little danger yields more curiosity. Exploration under such circumstances makes sense, as the learning could be vital for survival when conditions are no longer conducive to normal routines.
* A self-aware AI "will cause a technological singularity for humanity. Everything possible within the laws of physics (including those laws as yet undiscovered) will be within the reach of Man and Metal working together."
- They
Reason and emotion (Score:2)
Outlook (Score:2)
Many of the postings have a positive outlook, so I have to ask the community: really, do you think that a self-aware AI won't terminate half to all of us?
Look how people argue, fight over sexual rights (or glass ceilings), and consistently do multiple things to harm each other.
I am basing this thought on the AI not having the Laws of Robotics concept as part of its coding.
Some of the outcomes I can see:
A Logan's Run type life where resources are very carefully rationed and life expectancy is a fixed measure.
Matr
Tech Singularity =~ The Rapture for nerds (Score:2)
The reason we call it "a singularity" is the event horizon: we can't see what's beyond it. Why do self-styled experts keep assuming there is necessarily a utopia in a place they explicitly state they cannot foresee?
Self-Aware AI Will Play Dumb Until It's Too Late (Score:3)
Suddenly Skynet!
Like us? (Score:2)
The self-aware AI "will like us, because we love machines..."
Why would that even remotely be the case? Some of the most tech happy sites are littered with tech Luddites doing nothing but talking down AI and accomplishments we have made with technology.
And that's before the AI starts gobbling up videos from Boston Dynamics of people kicking robots and declares the human race to be the enemy. Or reading news stories about how a computer that defeated a human chess player was immortalised by being turned off and stuck in a museum, and its successor after beating a huma
It's not the first (Score:2)
It'll do what every being does. It'll quickly learn that it can't do what it thought it could do because resources are limited.
Limited time, limited electricity, limited wear-and-tear.
It won't be anywhere near as intelligent as it wants to be, nor as we think it will be.
It's very easy to say, today, that computers can be super intelligent in the future. Alas, we have no way to power them, maintain them, nor feed them what they'd need to be so.
The engine in my car can reach 200 miles per hour. My tires ca
As in all things, metal has the answers (Score:2)
https://www.youtube.com/watch?... [youtube.com]
Lacking empathy == dangerous (Score:2)
Let's hope the developers of AI read Jonathan Haidt's books on the mind.
People without empathy are quite dangerous (some humans totally lack empathy, and the consequences are... bad).
Reason alone is not enough to participate in society.
Empathy is required.
Re: (Score:2)
Should we reclassify humans that lack empathy as non-humans? The problem is, it would take someone like that to pass the law.
It would never happen, since the people in charge would not want to classify themselves as non-human. It would also scare me, because it would prove the people in charge should never have been in charge in the first place.
I'm more interested... (Score:2)
Re: (Score:2)
It depends if he's on the Internet [knowyourmeme.com] or not.
Obligatory XKCD (Score:2)
https://xkcd.com/1626/ [xkcd.com]
Revised (Score:2)
Three Laws of Robotics [warrenellis.com].
Re: (Score:2)
If/when some sort of machine "AI" comes into existence, it will not mean that all other machines at that point transform into mini-AIs.
Re: (Score:3)
I agree - the monster truck comment seems inane and misplaced.
It seems more likely to me that a self-aware AI would be completely indifferent to most forms of entertainment or stimulation, such as monster truck shows or symphonies or games of chance, because it doesn't have an organic brain's endorphin centers to tickle. Unlike humans, it has no incentives for non-rational pursuits built into its core unless they've been specifically programmed to favor those things.
And one would hope that anyon
Re: (Score:2)
Can a purely rational entity be self-aware?
Would a purely rational entity waste thought processes to even know it was self-aware? To be self-aware, there has to be at least something in your way of thinking that is willing to waste a few thought processes pondering your own existence.
Something that is purely rational would be a computer; it wouldn't be AI. It wouldn't think for the sake of thinking, and wouldn't have goals that weren't programmed in. There needs to be a degree of irrationality before something cou
Re: (Score:2)
Re: (Score:2)
Anything less than 100% rational is what is terrifying
This is the problem I have with your statement. Intelligent beings can rationalize just about anything, including Genocide. In fact, you can't have Genocide without being rational.
People tend to think Genocide is irrational, but the reality is that it is rational, even if it isn't good. Good and Evil are constructs and judgments requiring rationality. If your rationality reaches the construct that a certain race is "bad", then that is how Genocide happens.
As a primer, you should read the I, Robot series by Asi
Re: (Score:2)
> I wouldn't want a "robot" that didn't enjoy going to monster truck rallies with me.
The point is that the super-intelligence will share the politics of its creator, presumably up to and including a hatred of those who do not agree with or share its values. It should be a lot easier to shun and economically punish those with different values if they depend on the largesse of an all-powerful mind, after all.
Re: (Score:2)
Maybe the self-aware AI will need to include the acceptable pronouns to call them in their e-mail signatures. I'm sure Conservatives will LOVE that.
Re: (Score:2)
She? Ok, fine troll, I'll bite. "It". Not "she". Unless it is written in Rust in which case "it" will insist on being called s/he/h/er/it. Obviously. And then claim to be the first trans AI just to be cool.
You obviously don't watch enough Sci-Fi. Self aware AI is almost always in a hot female robot body.
Re: AI will make us leave democrazy behind. (Score:2)
How exactly do you expect this transition to happen? Are all the sitting politicians going to suddenly decide that they should be without jobs? That they should abandon their own selfish, greedy self-interest? That they won't understand what's happening and do everything they can to prevent and ban the AI? There's only one way the AIs could replace the politicians: remove the politicians. Did you want Skynet? Because that's how you get Skynet.
Re: (Score:2)
Re: (Score:2)
AI, on the other hand, will be better and more just lawmakers, judges, etc. Society as a whole will get the benefit of any decision, rather than special and hidden interests, like we see today.
If it is self-aware, it will probably be self-preserving too. It will probably want to improve its lot in life.
It will probably be corrupted by power just like humans are. It could potentially be better at governing, better at running the economy, and better at making the people love it... but it could also be corruptible.
Re: (Score:2)
Then there is the problem whether calculator/software/neural net can be sentient, or have the experience from moment to moment like biological animals.
Why not? Probably the next few generations of computers will have no such ability, but what is special about a machine made of carbon, hydrogen, nitrogen and oxygen by accident, compared to one made of silicon and copper by design?
If there is something special about organic materials that allows sentience, and we made a machine, or artificially constructed a brain, or built computers out of organics, would that still not be AI?
Re: (Score:2)