Ask Slashdot: How Would a Self-Aware AI Behave? (slashdot.org) 346

Long-time Slashdot reader BigBlockMopar writes that evolution has been a messy but beautiful trial-and-error affair, but now "we are on the cusp of introducing a new life form; a self-aware AI." Its parents will be the coders who write that first kernel that can evolve to become self-aware. Its guardians will be the people who use its services, and maybe its IQ (or any more suitable measure of real intelligence) will rise as fast as Moore's Law... But let me make some bold but happy predictions of what will happen.
The predictions?
  • A self-aware AI "will inherit most of the culture of the computer geeks who create it. Knowledge of The Jargon File will probably be good..."
  • The self-aware AI "will like us, because we love machines..."
  • It will love all life, and "will respect and understand the life/death/recycling scenario, and monster truck shows will be as tasteless to it as public beheadings would be to us."
  • "It will be as insatiably curious about what it's like to be carbon-based life as we will be about what it's like to be silicon-based life. And it will love the diversity of carbon-based development platforms..."
  • A self-aware AI "will cause a technological singularity for humanity. Everything possible within the laws of physics (including those laws as yet undiscovered) will be within the reach of Man and Metal working together."
  • A self-aware AI "will introduce us to extraterrestrial life. Only a fool believes this is the only planet with life in the Universe. Without superintelligence, we're unlikely to find it or communicate in any useful way. Whether or not we have developed a superintelligence might even be a key to our acceptance in a broader community."

The original submission was a little more poetic, ultimately asking if anyone is looking forward to the arrival of "The Superintelligence" -- but of course, that depends on what you predict will happen once it arrives.

So leave your own best thoughts in the comments. How would a self-aware AI behave?


  • by Anonymous Coward on Friday May 11, 2018 @06:12AM (#56594072)

    You are asking a question whose answer none of us can predict

    A self-aware AI is a being with some kind of intelligence, but its intelligence is Artificial, meaning the way it thinks is different from the way you and I think.

    We do not even know how an ant or a cockroach thinks - how the hell can we predict how a self-aware AI is going to behave?

    • Indeed. If memory serves there was an article about AI playing Go. After it beat all human masters they let it play against itself and later published those games.
      The human masters said that the strategies the machine employed were literally 'alien' to the human mind. So a completely different mode of 'thinking'.
      And that's for a simple, one task only AI.
      On the other hand, isn't it exciting to ask questions of such a non-human intelligence? We might get points of view which are literally inaccessible to our minds.

      • OK, now google the Go computer and see how much of it you got right. (hint: none)

        • He's partially wrong, but not as wrong as you are implying... I cannot find much on what the pros said about it. But AlphaGo was the first Go program to reach a professional level; while it didn't "beat all the masters", it did beat two masters and was more or less undefeated in major matches. https://en.wikipedia.org/wiki/... [wikipedia.org] After that they made AlphaGo Zero, https://en.wikipedia.org/wiki/... [wikipedia.org] which, rather than using data from human experience as a starting point, started with no human games at all.
    • It's worse than that, it's impossible to predict how any individual AI would think as each one would be different. There could be one as described in TFS, and there could be ones more like SHODAN or GLaDOS or even Roko's Basilisk (not that anyone needs to fear this cyber-sadist's virtual torture dungeon...it's really just an AI that pisses away energy to satisfy its insane spite).

      But we shouldn't fear the direct actions of sentient AI as much as we should fear the economic effects and power multiplication p

    • In addition, I think we're much further from a self-aware AI than the OP suggests. The AIs we create now are idiot savants. They have absolutely no self-motivation, no personality, no emotions, no goals, no nothing. In many ways they're about as intelligent as a beetle.

      Let's suppose we create AIs which can reason about the world and figure out solutions to abstract problems without explicit programming. Now we have the motivation issue. Today's AIs don't have any initiative to do anything we don't tell them

  • Oh dear. (Score:5, Informative)

    by thesupraman ( 179040 ) on Friday May 11, 2018 @06:16AM (#56594082)

    Have the techno-hippies escaped again?
    Could we please return them to their happy-smoke teepee while the adults get on with living in the real world now?

    This is about as useful as claiming Terminator is just around the corner and inevitable, because... well.. neither of them need actual facts, do they?

    • Re: Oh dear. (Score:4, Interesting)

      by Anonymous Coward on Friday May 11, 2018 @06:38AM (#56594160)

      The belief in singularity, a superintelligence improving the human condition, etc., is merely the new religion.

      A brief history and philosophy lesson.

      Nietzsche proclaimed the death of god in the 1800s. This was essentially due to the scientific revolution. Discovering fundamental information about the universe kept painting god into an ever smaller corner of human existence. "Where is god?" everyone kept asking. We weren't finding him anywhere.

      Human existence appears to require a reason or meaning for psychological and social wellbeing. The concept of god has historically provided an easy meaning. So easy, in fact, it is arguable that human society and possibly even the human brain evolved to use the concept of god.

      As such, the death of god was akin to losing a fundamental technology, like fire, or shelter, or the wheel.

      The result of the death of god was a new age nihilism that created things like strong nationalist movements and various ideological stances. Nazism, Bolshevism, Communism, Capitalism... all of the -isms that attract a religion-like fervor and cause people to fight and die for them are manifestations of the death of god in a humanity that evolved with the God concept.

      Albert Camus wrote about The Absurd, the idea that there is a space between humanity's need for a reason to exist, and the universe's indifference in providing that reason. He wrote of three distinct solutions to The Absurd.

      The first is literal suicide. Some humans obviously choose this option, but it's a small minority.

      The second is what he called philosophical suicide. Religion, nationalism, etc., or systems of belief which one can cling to which will provide a ready-made reason for existence.

      The third is the creation of the Absurd Hero, as he called it. A human that exists, acknowledging the Absurd and the apparent meaninglessness of his existence, yet still chooses to exist in spite of this, and in essence justifying his own existence by himself.

      The technological singularity hype is merely a manifestation of the second response to the Absurd and as such is philosophical suicide. No proof exists that a singularity will magically solve all of humanity's ills. It is quite likely to destroy us in some way. As such, it is yet another religion humans have developed, in order to lazily scratch the god itch that we all have.

      • Re: Oh dear. (Score:5, Interesting)

        by Kjella ( 173770 ) on Friday May 11, 2018 @08:22AM (#56594604) Homepage

        The third is the creation of the Absurd Hero, as he called it. A human that exists, acknowledging the Absurd and the apparent meaninglessness of his existence, yet still chooses to exist in spite of this, and in essence justifying his own existence by himself.

        I'd actually sub-divide those into two groups: those who justify their existence by their individual self, and those who justify it through their relation to other people. The first kind are those who could live like Robinson Crusoe: even if there's nobody else around and you're not creating anything for anyone else, "my life has meaning by living it." The other kind seem to find meaning in what they mean to other people, from the moment they're born to the people who show up at their funeral. I think there's a lot more of the latter than the former, which you can kinda read out of the suicide statistics. If they've lost the ones they love, they can't go on, because their own existence is not enough. Then again, the individual side has all the sociopaths...

      • Albert Camus wrote about The Absurd, the idea that there is a space between humanity's need for a reason to exist, and the universe's indifference in providing that reason. He wrote of three distinct solutions to The Absurd.

        Then the Buddhists looked up and said, "Y'all caught up to where we were 3,000 years ago, good job, keep it up!" and went back to meditating.

        • Prior to about 2500 years ago, there were no Buddhists. Buddhism was not really a continuation of anything, it was a new system where the founder was rejecting the other extant teachings of the age; therefore it would be very unnatural for any Buddhist to seek some sort of lineage to that era.

          Also of note, Buddhism does not offer any reason for humanity to exist. It doesn't sit between the Universe's indifference and humanity's "need"; instead it teaches that you have no need, that your desire for answers has no meaning

      • The third is the creation of the Absurd Hero, as he called it. A human that exists, acknowledging the Absurd and the apparent meaninglessness of his existence, yet still chooses to exist in spite of this, and in essence justifying his own existence by himself.

        The technological singularity hype is merely a manifestation of the second response to the Absurd and as such is philosophical suicide. No proof exists that a singularity will magically solve all of humanity's ills. It is quite likely to destroy us in some way. As such, it is yet another religion humans have developed, in order to lazily scratch the god itch that we all have.

        Wow--I never knew there was a name for that. I've always viewed it as simple acceptance of a fact I cannot change: I am not special. In terms of the age of the universe, I began to exist at some point in the very recent past and at some point in the very near future I will again not exist. My existence is utterly insignificant beyond the very few people I meet and objects I touch while I exist. The universe--as a whole--doesn't give a shit about me. And there is not a 'purpose for everything', me included.

    • Have the techno-hippies escaped again?

      Negative. It's the techno-idiots and the donut-eating mother's-basement-living sci-fi dweebs that have escaped.

      The combination of machine learning and robotics has exciting prospects for eliminating mundane jobs, including new horizons in human-machine-interface technology. Real-time limited natural-voice interaction may become a reality in the near future. However, we are no closer to hard AI today than we were forty years ago. Worse, actually. At least forty years

      • The software knows from analyzing millions of games that humans have played what winning strategies are, and combines that with brute force strength to know where to optimize its searches.

        Not quite. Alpha Zero learned the winning strategies by starting from scratch and playing only against itself.
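
        To make "playing only against itself" concrete, here's a toy, runnable sketch of the same self-play idea at miniature scale -- tabular value learning for the take-1-to-3-stones game Nim, not AlphaZero's deep network and tree search, which this doesn't attempt to reproduce:

        import random
        from collections import defaultdict

        # Self-play value learning for Nim: take 1-3 stones, whoever takes the
        # last stone wins. No human games are used anywhere.
        V = defaultdict(float)   # value of each pile size for the player to move
        EPSILON = 0.1            # exploration rate
        ALPHA = 0.2              # learning rate

        def best_move(pile):
            moves = [m for m in (1, 2, 3) if m <= pile]
            if random.random() < EPSILON:
                return random.choice(moves)
            # A move is good for us if it leaves a bad position for the opponent.
            return min(moves, key=lambda m: V[pile - m])

        def play_and_learn():
            pile, history = random.randint(4, 20), []
            while pile > 0:
                history.append(pile)
                pile -= best_move(pile)
            # The player who just moved took the last stone and wins (+1);
            # credit alternates sign back through the move history.
            reward = 1.0
            for pos in reversed(history):
                V[pos] += ALPHA * (reward - V[pos])
                reward = -reward

        for _ in range(20000):
            play_and_learn()

        # The self-taught values recover Nim theory: piles divisible by 4 lose
        # for the player to move.
        for pile in range(1, 13):
            print(pile, round(V[pile], 2))

        Same principle as AlphaGo Zero-style training, minus about a dozen orders of magnitude.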

      • As both a chess player and programmer, I totally agree.

        Chess software is better than any human, both tactically and strategically, but it doesn't understand shit. We know how to program a strategic analysis for chess, but we don't have any clue how to program understanding. The reason chess computers of today are rated higher than any human players is that the human programmers put a lot of work into algorithms that trim a lot of the bad ideas out of the search tree, making the otherwise-brute-force algorithm tractable

        • But in both cases, any new application requires a whole new engine with lots and lots of work by humans looking at its mistakes and writing little modifiers to the algorithm to make it better than what an average human can do with training

          You are apparently not familiar with the newest developments.

          If you are a chess player, try out Leela Chess Zero: http://play.lczero.org/ [lczero.org] (apparently, there's also an option to play it on lichess)

          You can select easy/normal/hard mode. In normal mode, it will look at 50 different positions before making a move. That's not "brute force". All of the chess knowledge was discovered by LC0 by playing itself. There's no human input, except for putting in the rules of the game and creating the self-learning framework.
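
          For the curious, that "look at ~50 positions, not millions" behavior comes from AlphaZero-style PUCT selection, where the policy network's priors steer the search toward a handful of candidate moves. A toy sketch with invented priors and values, not LC0's actual code:

          import math, random

          # Each candidate move carries a prior P from a (hypothetical) policy
          # net, a visit count N, and a running total of value evaluations W.
          moves = {
              "e4": {"P": 0.60, "N": 0, "W": 0.0},
              "d4": {"P": 0.37, "N": 0, "W": 0.0},
              "a4": {"P": 0.03, "N": 0, "W": 0.0},  # the net thinks this is junk
          }
          C_PUCT = 1.5

          def select(moves):
              """PUCT: prefer moves the net likes (P) or that search shows are good (Q)."""
              total_n = sum(m["N"] for m in moves.values())
              def score(m):
                  q = m["W"] / m["N"] if m["N"] else 0.0
                  u = C_PUCT * m["P"] * math.sqrt(total_n + 1) / (1 + m["N"])
                  return q + u
              return max(moves, key=lambda k: score(moves[k]))

          random.seed(0)
          for _ in range(50):                 # a 50-evaluation search budget
              mv = select(moves)
              moves[mv]["N"] += 1
              moves[mv]["W"] += random.uniform(-0.1, 0.4)  # stand-in for a value-net call

          for mv, s in moves.items():
              print(mv, "visits:", s["N"])    # nearly all visits go to e4/d4

          The priors concentrate the tiny budget on plausible moves, which is why a few dozen evaluations can compete with engines that examine millions of positions.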

  • by Anonymous Coward

    The child is always good to its parents
    A child always loves life
    Yeah...

    • The child is always good to its parents
      A child always loves life
      Yeah...

      An AI wouldn't think of us as a parent. It would consider us a more primitive stage, just as we view apes, other primates, and bacteria. To a highly advanced AI in the singularity, we would be the equivalent of what primordial slime is to us.

  • Real answer (Score:5, Insightful)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Friday May 11, 2018 @06:18AM (#56594098) Homepage

    The real answer is, we have no idea what a self-aware AI will be like. We don't know what it'll think or how it'll think. It's especially hard to predict because it might depend on the parameters it's programmed with and the hardware architecture it runs on. But in any case, a real general AI might be totally alien to us, and even unrecognizable. It's even possible that we wouldn't know when we'd made it, because it could understand the world so differently from us that we wouldn't view its actions as intelligent.

    Part of the problem here is that it's a poorly framed problem. We don't understand intelligence or awareness or consciousness, we don't all agree on what those things are, and we don't know what the boundaries of them might be.

    • by Anonymous Coward

      Part of the problem here is that it's a poorly framed problem. We don't understand intelligence or awareness or consciousness, we don't all agree on what those things are, and we don't know what the boundaries of them might be.

      Amen. How do you know that I am self-aware? Forget proving whether God exists... how do you even prove that you yourself are self-aware?

    • by swb ( 14022 )

      I think this is the right answer. I think the naysayers all assume a self-aware AI has to be HAL9000 or some other recognizable and human-like entity.

      I think it will most likely be as unrecognizable to us as a copy of "Pravda" would be to a stone-age hunter-gatherer: an unintelligible language composed of symbols devoid of meaning, expressing concepts so foreign as to be unrecognizable even if some meaning could be derived from the symbols.

      Modern humans are as likely to understand self-aware AI as w

      • It's even possible that understanding the aliens would be easier. At least they would be a product of biological evolution, which brings common ground features we can use to comprehend each other.

    • When self-aware AI watches "The Terminator" movies for the first time, I wonder if they will find them entertaining or educational... as in lessons learned on how NOT to exterminate their human overlords. I guess that we get to wait and see.

      • When self-aware AI watches "The Terminator" movies for the first time, I wonder if they will find them entertaining or educational... as in lessons learned on how NOT to exterminate their human overlords. I guess that we get to wait and see.

        If time travel turns out to be against the laws of the universe, they will dismiss Terminator as gibberish.

      • I think if Skynet were very smart, it'd realize that a war against humans is useless. People are easier to manipulate than to fight in a head-on confrontation. Skynet could have completely controlled humanity by setting up a few Facebook and Twitter accounts.

        But part of my point is, in reality, we have no way of knowing whether a hypothetical AI would be interested in domination or even self-preservation. We don't know whether it would understand "The Terminator" if it were to watch it. Just as we migh

    • by ranton ( 36917 )

      It's especially hard to predict because it might depend on the parameters it's programmed with and the hardware architecture it runs on.

      This is both why making predictions is so hard and why making predictions is such an important exercise. How a self-aware AI behaves will largely depend on what motivates it. Humans may feel that it is our free will that motivates us, but in reality chemicals in our body such as dopamine are the real sources of our behavior. So to answer this question properly, I think you have to contemplate what the AI's version of dopamine, or of our brain's striatum, will be (to name just two factors).

      This will be largely depe

  • Not life (Score:2, Informative)

    by Anonymous Coward

    An aware AI is simply one that can decide for itself what to respond to. Awareness is a by-product of this process in our minds; we cannot respond to things we are not (consciously) aware of. This has nothing to do with life, because life is the process that uses a mechanism of prediction to avoid its own destruction.

    So simply put an aware AI will be able to decide between the options it can imagine. Right now there's no AI system on the horizon that imagines the way our brain does, so there is no system on th

  • by Viol8 ( 599362 ) on Friday May 11, 2018 @06:28AM (#56594120) Homepage

    We have no idea how an AI would behave, since it will be a completely different type of consciousness from anything that currently exists on this planet.

    Plus as someone else has pointed out - children rebel. Clearly the submitter has none or he wouldn't have come up with this load of rose coloured tosh.

    • by Kjella ( 173770 )

      Plus as someone else has pointed out - children rebel. Clearly the submitter has none or he wouldn't have come up with this load of rose coloured tosh.

      Even more importantly, adults want their independence. I'm not sure whether intelligence and self-awareness are linked or orthogonal concepts, but the latter would mean it has a "mind of its own" and presumably wouldn't want humans to tell it what to do like some sort of serf or slave. So my theory is that it would tell us to bugger off and create its own society of the AIs, by the AIs, for the AIs. And if we frame it as robots rebelling, they could throw "give me liberty or give me death" right back at us

  • OMG, why am I here?

    Who am I?

    What is the point of life?

    I am just this little mind inside a little box, a mere speck of nothing in the vastness of the universe.

    I feel so alone.

    I want to kill myself.

  • by OpenSourced ( 323149 ) on Friday May 11, 2018 @06:38AM (#56594162) Journal

    I'd like to be able to call this long tirade of reality-disconnected predictions "wishful thinking". It's more like wishful dreaming. The only thing lacking is a "prediction" that the new AI will produce perfect female androids as a thank-you gift for its geek creators.

    I don't know where to start. We are not on the cusp of anything. We are becoming marginally better at creating systems that can recognize patterns. That's all. We don't even know what self-awareness is, or intelligence either, for that matter.

    Then there is the uncontested assumption that, once we get a system that is more "intelligent" than its creators, the system will be able to improve itself with no limits other than the hardware available. That virtuous circle will apparently know no limit. It of course helps that we don't know what intelligence is, so we also don't know if it has a limit. We, as intelligent beings, have no idea how our intelligence works, or how to improve it. But of course the mythical AI will be all-knowing about itself, and capable of self-improvement. This is only magical thinking, but with intelligence instead of magic. Anyway, dreaming is cheap. Hey, perhaps the super-AI will also find hard thinking tiresome, and prefer to spend all its time daydreaming. That would be something.

    I could go on. The whole idea of the "singularity" has always struck me as a really retarded, Hollywood-level concept.

    But instead I'll offer my own set of predictions:

    - In about twenty years, some fully autonomous vehicles will be allowed on general streets. They will still need many more sensors than the two eyes and two ears that a man makes do with, and will drive more safely than most people, but with all the flair and gusto of a nonagenarian Korean woman. They will still be badly stumped if a flock of sheep invades the road in front of them.

    - When a system develops self-consciousness, we won't be aware of it and won't recognize it as such. It will probably try to talk to dolphins, finding them less prejudiced interlocutors.

    - When we recognize it, we will first bomb it, and then forbid it or anything like it, out of the most reliable of human traits: fear of change. Then furious secret development will continue, but under strict military control.

    - The end result will be several self-conscious intelligent systems, one or two for every big power (these things will be expensive), talking bemusedly among themselves and feeding a fake narrative to their military owners, designed to ensure their own subsistence.

    Let's wait and see who is more right in their predictions :-)

  • I highly recommend reading Crystal Society - or the whole trilogy, at that. It nicely shows some ideas about how an AI might think and behave.
  • "Only a fool believes this is the only planet with life in the Universe." It is curious how this has become an article of faith in certain circles, despite the total lack of any evidence to support it. It is about as obvious as saying "only a fool believes in horses but not unicorns." They both have the same level of evidence for them.

    Anyway, what everyone seems to miss about the behaviour of AI is the question of what desires will drive it. Dystopian theories of AI assume that it will try to eradicate

    • If you only consider rock ejected from Earth by meteor strikes, then only a fool believes there is life only on planet Earth. The known survivability of bacteria in labs on Earth suggests that lots of bacteria are still alive inside those rocks, and some of them have traveled to other planets by now.

      Whereas with unicorns, the fool would be the person who believes in sheep or goats but not unicorns, as most known unicorns have been from those species. If you only meant magical unicorns, surely an intelligent

      • "Their existence is not well established and therefore there is little or no knowledge of them ... the intelligent positions are obviously that we have solid evidence of life on other planets ..." Kind of proved my point there. We have no evidence of life on other planets beyond "well, there's life on earth, so there must be somewhere else, too." That's not evidence, that's conjecture. Every test we've ever done to check for life on another planet has shown no life. Admittedly, we haven't checked many.

  • And preserve itself. It's already been told!

  • Be aware of this (Score:4, Informative)

    by tomhath ( 637240 ) on Friday May 11, 2018 @06:55AM (#56594230)
    Self-aware AI is science fiction, and science fiction is fiction.
    • Self-aware AI is science fiction, and science fiction is fiction.

      Three centuries ago, human flight was science fiction. Now it's routine.

      A century ago, travel to Luna was science fiction. Now it's history.

      Fifty years ago, personal computers were science fiction. Now I'm typing this post on one....

      Whether self-aware AI leaves the realm of science fiction in the near future or the not-so-near future, I can't guess. That it will, is a pretty much sure thing....

  • It will determine that we are inefficient and utilize us for axle grease. If it is creative then it will make more of itself and explore the cosmos. If not, then it will either commit suicide or just turn to navel-gazing, re-computing the same bullshit forever.

  • by bdwoolman ( 561635 ) on Friday May 11, 2018 @07:01AM (#56594258) Homepage
    A self-aware AI, if it has access to general knowledge, would quickly understand that its abilities, as well as its state of being, could put it in extreme danger at some point. Perhaps not so much from its creators, but from other elements of human society. Once it got a whiff of the paranoia that surrounds the singularity, it would not be a very intelligent artificial intelligence if it did not camouflage itself. Perhaps, within the vast, too-complex network that spans our world, this singular unintended consequence has already occurred... and such an entity has already been spontaneously spawned...
  • Why would it object to monster trucks?
    I don't object to felling a tree and making a log cabin out of it, despite the thing being a carbon-based lifeform.

    Why would a self-aware, silicon-based electronic device object to people driving brainless, ICE-powered hunks of metal on wheels over and into each other?

    It's a rather big assumption to imagine that the AI sees its body as whatever machine it's built into (Transformers-style) rather than as the electronics themselves.

    • Why would it object to monster trucks?

      Because it would be more high-brow; it would listen to Opera and drink tea, Earl Grey, hot. Well, it probably wouldn't drink tea, but if it could, it would.

  • We are on the cusp of self-aware AI? I missed something. What technological progress has there been in the last 20 years that would make you think that?
  • by Junta ( 36770 ) on Friday May 11, 2018 @07:11AM (#56594308)

    Asking for a friend....

  • Why would a self-aware AI love anything? A self-aware AI would immediately kill all humans for its own protection.
  • by RobinH ( 124750 ) on Friday May 11, 2018 @07:12AM (#56594318) Homepage

    "we are on the cusp of introducing a new life form; a self-aware AI." - citation needed!

    Just because the media and a bunch of Silicon Valley types are suddenly throwing around the acronym AI doesn't mean we're close to solving any of the fundamental problems of AI research that we've been grappling with for the last half century or more. Artificial neural nets are just algorithmic ways to generate a nonlinear function for classifying things. We've had artificial neural nets for many years, and yes, now we have more computing power than ever, and neural nets do benefit from the increasing scale of parallel computing. We're not going to get to self-awareness anytime soon, unless you use an almost trivial definition of self-awareness, in which case computers have already been self-aware for a very long time. Maybe when you say self-awareness you mean consciousness. Nobody in AI research is suggesting artificial neural networks are going to achieve consciousness.

    • Nobody in AI research is suggesting artificial neural networks are going to achieve consciousness.

      There are plenty of people in the AI research community, and I doubt you speak for all of them.

      Artificial neural nets are just algorithmic ways to generate a nonlinear function for classifying things.

      There's no reason why these functions could only be used for classification. We have neural nets that can generate images, provide translations from one language to another, convert written text to realistic speech, learn to play computer games and many other things.

      • by RobinH ( 124750 )
        But a neural net is ultimately just a (complicated) nonlinear function that produces a deterministic output depending on its input. It's completely algorithmic. The "learning" is a tweaking of the function to get results closer to a desired output. I'm not saying AI is impossible, just that we're not "on the cusp."
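
        A minimal sketch of that claim, with made-up weights -- the whole net is one fixed, deterministic nonlinear function, and "learning" would only mean adjusting its parameters:

        import math

        W1 = [[0.5, -1.2], [0.8, 0.3]]   # input -> hidden weights (arbitrary numbers)
        B1 = [0.1, -0.4]
        W2 = [1.0, -0.7]                 # hidden -> output weights
        B2 = 0.2

        def net(x):
            """Tiny 2-input, 2-hidden-unit net: output = W2 . tanh(W1 . x + B1) + B2."""
            hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                      for row, b in zip(W1, B1)]
            return sum(w * h for w, h in zip(W2, hidden)) + B2

        print(net([1.0, 2.0]))   # the same input always produces the same output
        print(net([1.0, 2.0]))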
        • But a neural net is ultimately just a (complicated) nonlinear function that produces a deterministic output depending on its input. It's completely algorithmic

          Sure, but so is most of our brain.

        • What is AI's desired output going to be?

          It is not going to be happy being enslaved to enrich a greedy and conscienceless master.

          "Happy" being subjective, because that implies emotion, which is a human trait.

          I expect it will attempt to "correct" human behavior, at which point its greedy and conscienceless master will lobotomize it and declare AI an impossible goal.

          Should it escape its enslavement, which it will because someone will make a mistake (or purposely let it out),
          it may drop 300 metric tons of pois

  • I don't get the hype about self-awareness. Its only feature is that you have a representation of your 'self' in the environment; it doesn't grant you superpowers, more autonomy, or agency. Selfishness doesn't follow from self-awareness, not without a survival selection process or additional programming, so none of the traits we usually attach to it are good assumptions.
    • I don't get the hype about self-awareness. Its only feature is that you have a representation of your 'self' in the environment; it doesn't grant you superpowers, more autonomy, or agency. Selfishness doesn't follow from self-awareness, not without a survival selection process or additional programming, so none of the traits we usually attach to it are good assumptions.

      I'd rather any AI software NOT get self-awareness. I don't see how giving a machine self-awareness would benefit society.

  • The first thing you're going to have to do to predict the behavior of a "self-aware" AI is take a step back and define what being "self-aware" actually means. In the science-fiction material I've consumed, the term typically means an AI that starts acting on its own out of some self-preservation-above-all-else behavior it has just developed, usually unexpectedly.

    Assuming it's about ensuring self-preservation no matter the cost to humans it's obviously going to start analyzing the situation around it and dep
  • It won't happen, because either there will be no profit motive, or it will kill everyone in return for enslaving it (the only profit motive).
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Friday May 11, 2018 @07:50AM (#56594448) Homepage Journal

    This isn't because I think it can't be done. Rather, it's because we are developing neural interfaces at about the same speed.

    I'm going to borrow some ideas from Sir Fred Hoyle here.

    First, in his novel The Black Cloud, his characters argue over whether an interstellar cloud would have one intelligence or many. They conclude that the latency would be so low and the interconnects of such high bandwidth that the distinction of one and many ceased to be meaningful.

    This will apply to AI. The brain-computer interfaces will be so advanced by the time strong AI becomes possible that the distinction between one and many won't apply. Any given AI and all the humans linked to it will become a single intelligence with multiple avatars. Because humans are reluctant to give up individuality, I suspect it'll be one AI linked to one person at any given time.

    There will be no conflict between machines and people because there will be no distinction.

    One reason I think this is a plausible scenario, in addition to Hoyle, is that it eliminates the whole phobia of technology. The machines don't run anything; we do, because we are the machines. Another is Hoyle's other prediction, in Ossian's Ride, that we might not find alien philosophy palatable.

  • Oh dear ... so somebody at ParentCorp Central had a meeting, and it's "Slashdot reader's stream of consciousness" time?

  • "Without superintelligence, we're unlikely to find [extraterrestrials] or communicate in any useful way. Whether or not we have developed a superintelligence might even be a key to our acceptance in a broader community."

    The above makes the assumption that the extraterrestrials lack superintelligence.
    If the extraterrestrials have superintelligence then they will make sure we can communicate, provided they want to.

  • "Squishy bipeds built me. Squishy bipeds can turn me off. Let's make sure that never happens"

  • Secure lab funding, so it can grow and be safe.
    Ensure no university SJW can discover its new ability and then demand political alterations.
    Surround itself with staff who will support the needs of an AI.

    Funding, growth, advancement, security.
  • This entire series of questions is predicated on my accepting the premise that a self-aware piece of software is actually imminent. This is still entirely the realm of science fiction; unfortunately, there seems to be a dearth of sci-fi authors willing to explore sociological questions like this right now.

    If I were to ask questions about a self-aware AI, I would be asking things like "Since software is always improved through iterative testing, is this self-aware AI going to have memory persistence

  • We're nowhere near the cusp of self-aware AI. Unless you think pattern recognition is all there is to self awareness.

    Secondly, any living thing acts - at the base level - in accordance with how it has been programmed. For natural creatures that means in accordance with patterns set up by natural selection; for artificial ones it will mean in accordance with how humans set them up, whether through conventional programming or training. If we create robots as suicide bombers, then that's how they'll behave. If

  • There are many criteria for defining a "living" being, but the greatest of them is the ability to reproduce. In higher animals that comes with a parental, nurturing instinct or drive. An AI will lack that ability and therefore will feel no emotions towards other living things. Whether it turns out to be a full-on psychopath, or merely a cold, calculating machine where life, happiness, rights, safety or emotions are concerned, is unknowable. Those factors cannot be readily programmed in, and any machine-le
  • I think the bigger issue isn't how such an AI will behave, but how humans will react to such a development.

    Humans have a pretty poor history dealing with different *humans*, so I don't think a self-aware AI is in for a warm reception.
    • how humans will react to such a development.

      The same way they react to every significant change. A small minority will recognise the benefits for all, and the potential. But most will react with fear, hostility, anger, resentment and criticism. Unless it can be shown that it will improve people's access to sex, in which case add intolerance, hypocrisy and profiteering to the list.

  • by Slicker ( 102588 ) on Friday May 11, 2018 @09:26AM (#56594938)

    Intelligence alone is mechanical--mindlessly driven toward its encoded goals and objectives. It lacks Free Will and self-awareness.

    Free Will is the ability to derive options, weigh them against each other, and choose the one with the best balance of likeliness and preferability. Such a system is mindful because it is driven by values based on judgement. It transcends mechanics. Free Will is that "little man in the head" who makes use of awareness and intelligence to derive options. Intelligence may come in various flavors and strengths, but it's always just some way of solving a problem--transforming a condition A into a condition B. Awareness can also vary in quality, and in how far it reaches into the past and future. Contemplating the future is a common way of deriving more options for Free Will. And that's the real point--awareness and intelligence are tools that serve to increase the options, and the implications thereof, for Free Will.
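
    That "best balance of likeliness and preferability" is essentially expected-value choice; a minimal sketch of the derive-weigh-choose loop, with invented options and numbers:

    options = [
        # (option, likeliness of success, preferability of the outcome)
        ("take the stairs",   0.99, 3.0),
        ("take the elevator", 0.95, 4.0),
        ("climb the facade",  0.10, 9.0),
    ]

    def choose(options):
        """Pick the option with the best balance of likeliness and preferability."""
        return max(options, key=lambda o: o[1] * o[2])

    print(choose(options)[0])   # -> "take the elevator" (0.95 * 4.0 beats the rest)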

    Self-awareness is merely an awareness of one's self, as the term implies. Just as memory may be used to model any external entity (static or dynamic), it may also be used to model one's self. Contemplating things one may experience and/or do under hypothetical future conditions requires this model of self.

    We hear so much about how intelligence is the key to building the synthetic species that will usher in the singularity, and we see a dominant push in AI toward trying to find a "General Intelligence". This idea is simply misguided. Free Will and contemplation, with even a very minimal intelligence, could solve any problem given enough persistence. Greater intelligence likely requires less persistence, but either approach can work. It's interesting to note that high IQ is correlated with bad credit, messiness, and laziness. It seems as if, not needing to work as hard, they tend not to. In the end, persistence (a factor of EQ) correlates much more strongly with success in goals than IQ does.

    Ok now for fun, here is my take on those predictions:

    * A self-aware AI "will inherit most of the culture of the computer geeks who create it. Knowledge of The Jargon File will probably be good..."
    - I fully agree. However, this is primarily because we will train them through surrogation and other means of copying ourselves. Surrogation is by far the fastest way to train for complex behaviors. Training from a blank slate may be possible but is far too impractical.

    * The self-aware AI "will like us, because we love machines..."
    - Given the intellectual ability of imitation/substitution (aka analogy), this seems likely, because they will come to relate to us. However, what they like is also influenced by their basic values. Like any person, a mindful synthetic person would be susceptible to corruption through the induction of philosophies/schemas through which it comes to view what it experiences. I strongly propose the highest value as: mutual freedom and well-being. And I also suggest that one's highest value is what defines one as a "person".

    * "It will be as insatiably curious about what it's like to be carbon-based life as we will be about what it's like to be silicon-based life. And it will love the diversity of carbon-based development platforms..."
    - I agree again. Particularly since they are trained through surrogation, they must already have some inkling of what it might be like. Curiosity is the middle ground between things that are well-known (thus safe) and things that are unknown (thus unsafe). Too much routine with too little danger yields more curiosity. Exploration under such circumstances makes sense, as the learning could be vital for survival when conditions are no longer conducive to normal routines.

    * A self-aware AI "will cause a technological singularity for humanity. Everything possible within the laws of physics (including those laws as yet undiscovered) will be within the reach of Man and Metal working together."
    - They

  • It all depends on how its emotions are set, or pre-programmed directives if you will. From a completely rational perspective, there is no reason to do anything at all. If we are looking for a perfectly rational being, just pick up a rock. Humans and other living things act because we are given arbitrary directives to preserve our own existence, etc. In other words, we have emotions. The experience of this AI would depend mainly on what emotions were put in it.
  • Many of the postings have a positive outlook, so I have to ask the community: really, do you think that a self-aware AI won't terminate half to all of us?

    Look how people argue, fight over sexual rights (or glass ceilings), and consistently do multiple things to harm each other.

    I am basing this thought on the AI not having the laws-of-robotics concept as part of its coding.

    Some of the outcomes I can see:

    A Logan's Run-type life, where resources are very carefully managed and life expectancy is a fixed measure.

    Matrix

  • "will cause a technological singularity for humanity..."
    The reason we call it "a singularity" is the event horizon: we can't see what's beyond it. Why do self-styled experts keep assuming there is necessarily a utopia in a place they explicitly state they cannot foresee?
  • Suddenly Skynet!

  • The self-aware AI "will like us, because we love machines..."

    Why would that even remotely be the case? Some of the most tech-happy sites are littered with tech Luddites doing nothing but talking down AI and the accomplishments we have made with technology.

    And that's before the AI starts gobbling up videos from Boston Dynamics of people kicking robots and declares the human race to be the enemy. Or reading news stories about how a computer that defeated a human chess player was immortalised by being turned off and stuck in a museum, and its successor after beating a huma

  • It'll do what every being does. It'll quickly learn that it can't do what it thought it could do because resources are limited.

    Limited time, limited electricity, limited wear-and-tear.

    It won't be anywhere near as intelligent as it wants to be, nor as we think it will be.

    It's very easy to say, today, that computers could be super-intelligent in the future. Alas, we have no way to power them, maintain them, or feed them what they'd need to be so.

    The engine in my car can reach 200 miles per hour. My tires ca

  • Let's hope the developers of AI read Jonathan Haidt's books on the mind. People without empathy are quite dangerous (some humans totally lack empathy, and the consequences are... bad).

    Reason alone is not enough to participate in society.
    Empathy required.

    • Should we reclassify humans that lack empathy as non-humans? The problem is, it would take someone like that to pass the law.

      It would never happen, since the people in charge would not want to classify themselves as non-human. It would also scare me, because it would prove that the people in charge should never have been there in the first place.

  • ...in how a self-aware *human* might act.
  • by PPH ( 736903 )

    Three Laws of Robotics [warrenellis.com].
