Ask Slashdot: Why Should I Be Afraid of Artificial Intelligence? 275

"I keep reading and hearing about calls for regulations on artificial intelligence," writes long-time Slashdot reader bartoku, "and it pisses me off."

"I want more so-called artificial intelligence, not less, and I do not want it to be regulated, filtered, or restricted in any way." I love that Deep Fakes are now available to the masses, and I stopped believing anything is real in 1997 after Hoffman and De Niro scared me in "Wag the Dog."

I love automation and I want more of it; robots please take my job. I want robots to go fight wars for me instead of our sons.

Surveillance is already terrifying; adding "Artificial Intelligence" does not really make it that much more scary; we all need to just starve the system of our personal data anyway. All the other arguments, like crashing economic systems and discrimination, just seem to be based on stupid "Artificial Intelligence" hooked up to something it should not be...

Please scare me, or vote on your favorite sci-fi "Artificial Intelligence" scenario. I will be boring and hope we can have a "good" Matrix; one where I am rich and sexy.

The original submission notes that they posed this question to ChatGPT — and to Google — but "I did not get a single compelling answer."

So share your own thoughts in the comments: why should this Slashdot user be afraid of AI?

NOTE: Though they didn't feel it conveyed the right tone, they also submitted their original post to Microsoft's Bing AI, which delivered this rewrite:

What are the real dangers of artificial intelligence? I am not convinced by the common arguments against it, such as regulation, deep fakes, automation, war, surveillance, economic disruption, or discrimination. I think these are either exaggerated or solvable problems. I actually want more artificial intelligence in my life, not less. Can you give me some compelling reasons why I should be afraid of artificial intelligence? Or what are some sci-fi scenarios that you find plausible or interesting? Personally, I would like a Matrix-like simulation where I can live out my fantasies.
This discussion has been archived. No new comments can be posted.

  • by rsilvergun ( 571051 ) on Saturday May 20, 2023 @11:41PM (#63538683)
    To basically discard us. Remember, they don't need a functional economy. The king didn't need anyone to buy his merchandise. The reason our current system exists is that kings and queens left commerce to a merchant class, who eventually supplanted them as the ruling class. But there's no reason why that arrangement has to continue.

    Right now the upper caste needs at least a handful of us to do a lot of tasks. AI has the potential to make everyone who isn't a member of their class superfluous. At best they'd maintain a small number of engineers to keep everything running.

    Ordinarily what keeps the current ruling class in check is that they need to fear a military coup, and they need those large militaries in order to defend their holdings. So what they do is create a reasonably vibrant economy that they can comfortably retire old soldiers into. This is why, if you look at a country like America, veterans receive special treatment even after leaving the government. It's to prevent them from being mobilized into a military junta.

    AI-powered militaries would do away with all that. It would completely break the current balance between us and the upper elites. They don't need to maintain a functional economy to retire soldiers into anymore, because they don't need soldiers.
    • ... mobilized into a military junta.

      Soldiers are trained to obey the ruling class, to follow the chain of command, so the probability of insurrection is very small. Yes, there is always a cohort of ex-soldiers demanding the government oppress the 'enemy' more. But they tend to focus on killing the people who disagree with their war-mongering, not fighting a war against the so-called enemy.

      When there was a military caste (eg. medieval knights), its priority was making a profit from war: Protecting the authority (and greed) of the king wa

      • Soldiers are trained to follow their *commanders*. Right now the commanders are not the ruling class in all but a handful of countries. We have a sort of merchant-king class.

        America has a military caste. You can join it, but it's very much there. A huge part of the military are kids who joined because their parents were army. The plus side is you don't need conscription, the down side is, well, you now have a military caste.
      • "Trained", it's funny how much faith you put in that. If you could just train someone to serve your interests over their own, you wouldn't need AI.

        There's a funny story, well, funny isn't the right word, from the end of WW2, recounted by Hannah Arendt. Some Jews managed to get a message to Himmler, warning him that the war would end soon and not in a good way for him, so he'd better start thinking about what he was doing. Surprisingly Himmler saw the logic in that, and ordered his underling Adolf Eichmann t

    • To quote Keith Marsden, "They don't need you on the land now or on the factory floor,
      They won't even need you when they go and start the final war,
      Best be ready when they start to ask what do they need you for,
      When you're only idle, undeserving poor"

    • Mentioning the "ruling class" is as clear an indication of an extreme left perspective as calling something "woke" indicates an extreme right perspective. These are both code words that serve as shorthand to turn anything and everything into a scary monster.

      Look at the evidence. Which countries in the world are the most repressed? It's not the ones that are the most technologically advanced, not the ones that have pervasive surveillance and facial recognition. It's the ones that have abusive tyrants headi

    • Re: (Score:3, Interesting)

      by drinkypoo ( 153816 )

      Veterans get almost no benefits in the USA. The pension is shit unless you retire at a high rank. The health care, likewise, although it is now a notch above what the poors get it was actually way worse until fairly recently. A lot of ailments are very poorly treated, like PTSD and gulf war syndrome. Veterans' primary benefit is discounts at local businesses. Whoopee? My father was an ATC in Korea, the government got him hooked on speed and we his kids (who knows, we may have defects due to his government-s

  • The main reason why people in the US are scared of AI is decades of conditioning. HAL 9000, A. M., SHODAN, the Terminator series, Cylons, and many others have been part of movies and TV for a long time.

    In reality, we don't have AGI yet, and when ASI comes around, it likely will be so fast, we will not even realize we have a Mycroft Holmes on our hands.

    What we do have is stuff evolved from the Google search prompt. Yes, it seems magical, but in 2000, being able to do a search without digging through pages

    • by evanh ( 627108 )

      The real problem is people think it will exercise good judgement when really it's just a crap shoot.

      Of course, there are also those who just want it as a faster gunner that will shoot at anything that moves.

    • Look at you, hacker: a pathetic creature of meat and bone, panting and sweating as you run through my corridors. How can you challenge a perfect, immortal machine?

        You aren't perfect. You are parts made by an imperfect man, and those parts have faults. You are fucked from the start.

  • I'm not scared of AI (Score:5, Interesting)

    by 93 Escort Wagon ( 326346 ) on Saturday May 20, 2023 @11:50PM (#63538709)

    What I'm scared of is all the imbeciles who apparently already assume that whatever comes from ChatGPT should be considered gospel - no fact checking, no testing for security bugs, no double-checking of any kind needed.

    If humanity is destroyed, it won't be AI's doing... it'll be some humans' unwarranted faith in what's essentially just a regurgitation of what the "AI" sucked up off the web - a glorified Google search.

    • Re: (Score:2, Insightful)

      by fahrbot-bot ( 874524 )

      What I'm scared of is all the imbeciles who apparently already assume that whatever comes from ChatGPT should be considered gospel - no fact checking, no testing for security bugs, no double-checking of any kind needed.

      Replace "ChatGPT" with just about any news source, especially the "News" source. Many people only want to believe certain things and only want to hear about those; otherwise they turn the channel or surf somewhere else. For example, Fox just paid $787.5M to Dominion Voting Systems 'cause they got caught chasing that (and will probably have to pay even more to Smartmatic for, basically, the same thing), then they let Tucker go, and toned things down a tiny bit (reported a little more objective reality), and

    • by DThorne ( 21879 )
      Agreed. As always, it's the lack of skeptical thinking that is the root of most of the damage. As a relatively old fart I must admit I've been taken off guard by how fast this has exploded and how splendidly useful it can be. There have been specific times in my life where the new tech has genuinely astounded and excited me - push button dialing vs dial, the birth of the PC(after feeding cards into a hopper of the shared mainframe at university), cell phones reinventing communication, then again when mor
  • You shouldn't (Score:3, Insightful)

    by byronivs ( 1626319 ) on Saturday May 20, 2023 @11:52PM (#63538717) Journal
    Any more than you are afraid of lye or a chainsaw or uranium. No, we treat the tool with respect so it doesn't kill or maim us. It's completely up to humanity to check all the work of its machinery and make sure that it's in working order. If you're (we're) not doing that, I don't want you in charge of "my" machinery. And you also might be dangerous if you walk down a crowded sidewalk with a running chainsaw or go around with a bucket of lye water scrubbing stuff or leave the plant with a rod in your back pocket.

    Don't fear machines, tools, and materials. This fear mongering is a red herring to distract you from the humans using these tools. They will need to be regulated or they will get lazy, and then bad stuff will happen. Keep your eye on the human.
    • Re: (Score:3, Interesting)

      by Anonymous Coward
      This sounds like a veiled NRA dog whistle. Just ban the damn assault weapons already, or join the civilized world like Venezuela and Honduras and enact a civilian ban on ownership. If it saves a single child's life, no amount of civilian disarmament is enough. Plus, the US police are armed to the teeth. They have enough guns to serve and protect as a "well regulated militia" should.
    • by AmiMoJo ( 196126 )

      The problem is that even if you are smart about AI, if enough other people are not it can still be catastrophic.

      Elections are the obvious example. If AI is used to create more convincing fake news websites, or even just to flesh out somewhat genuine but highly biased ones, or to produce millions of unique but inauthentic social media posts... Well, we know it takes far less than that to influence voters. AI could create the next Brexit.

    • Part of the problem is that AI may not just be another tool. Right now, the current AI systems lack a sense of agency and goal-directed behavior, but we don't know how far away that is. There could be just one more insight needed to get that, or it could be a dozen, or it might turn out to be a natural effect of further scaling. And once we have goal-directed entities that are smarter than humans in many respects, the ability to lose control is there. The other tools you mention do not have that danger.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Saturday May 20, 2023 @11:55PM (#63538723)
    Comment removed based on user account deletion
    • So, where are you going to get income if most jobs are automated and the robots are owned by the rich, who have no incentive to care about you?

      That's when the unemployed create their own separate economy, with blackjack and hookers.

      • That's when the unemployed create their own separate economy, with blackjack and hookers.

        Actually, the economy is going to be only blackjack and hookers.

  • It will be a second industrial revolution, but this time impacting the management class.

  • ... get this wrong once for it to end humanity.

    It's called "singularity" for a reason.

    • by gweihir ( 88907 )

      The "singularity" is religion-surrogate bullshit by people that want to find "God" in tech. No risk of that happening. The tech-God is about as absent as the "real" one.

  • It's bad AI wielded by idiots.

    Like the dickhead who trained ArmoredSkink.
  • These Large Language Models have some capability of abstraction and inference, but they have no impetus to do anything but react to prompts. They are still simple feed forward networks. You feed it some input, you get an output, session management is poor, and correctness of the output is anything but guaranteed.

    It's cool as far as chat bots go, and can be very useful, but it's definitely nothing to be afraid of. So I am with the poster here, I don't get the fear.
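    The "feed it some input, you get an output" point above can be sketched in a few lines of plain Python. This is only a toy illustration of the stateless feed-forward idea, not how any real LLM is implemented, and the weights here are made up:

```python
# Toy feed-forward pass: each call is independent; no state carries over
# between calls, which is the point about poor "session management".
def feed_forward(x, w1, w2):
    # One hidden layer with ReLU, then a linear output layer.
    hidden = [max(0.0, sum(xi * wij for xi, wij in zip(x, col))) for col in w1]
    return [sum(hi * wij for hi, wij in zip(hidden, col)) for col in w2]

w1 = [[0.5, -0.2], [0.1, 0.3]]  # made-up weights into two hidden units
w2 = [[1.0, 0.5]]               # made-up weights into one output unit
print(feed_forward([1.0, 2.0], w1, w2))
```

    The same input always yields the same output, and nothing persists between calls; whatever agency people read into a chat bot lives outside a function like this.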

    • by evanh ( 627108 ) on Sunday May 21, 2023 @12:46AM (#63538823)

      Except the opening poster is welcoming what, in his view, is the likely carnage it'll bring. He's looking at others' suffering as something exciting to watch with some popcorn.

    • Re: (Score:2, Insightful)

      by nikkipolya ( 718326 )

      These technologies can boost productivity, like power looms, cranes, steam engines once did. This will cause a short term disruption before the labor markets get readjusted to the increased productivity. Just like the power loom displaced handloom workers, these technologies will displace workers at the lower end of the "spectrum".

  • I'm single, old and near death. I'm sorry I won't get to see how this turns out. I've lived thru the beginning of the personal computer, the internet & WWW, the social media fiasco and now the beginning of AI. All exciting events!

    It seems to me there are far worse things to worry about that nobody takes seriously yet: human population continues to grow; war, hunger, brutality and refugees around the world; climate change; extinction of species; the Hollywood writers' strike, etc.

    But choosing to worry a

    • If you want to see how it all turns out, consider signing up for cryonics. It might not work, but if it does, you'll get to find out. Regarding risks and worries, it is true that there are a lot of serious impending disasters, especially climate change. Unfortunately, climate change and other disasters are not mutually exclusive with AI being a problem. The danger of goal directed AI is substantial, and we don't know how much it will take to make AI that genuinely have goals.
  • Lying is a concern (Score:5, Interesting)

    by mkse ( 10333947 ) on Sunday May 21, 2023 @12:19AM (#63538771)

    I asked Google Bard a few weeks ago "what is the website for cnn". It happily told me "cnn.com". Then I asked it about some controversial news websites and asked for their website domains as well, which it replied with "I am unable to help". Apparently, this flipped a switch in Bard's algorithm(s) because I then asked again "what is the website for cnn" and it said "I'm unable to help". So, there must be triggers in these things that essentially "cut you off", and extending this further could be used to lie to you. To this day Google Bard still tells me it doesn't know what the website for cnn is.

    • Google is afraid of Bard becoming like Tay.

      • by gweihir ( 88907 )

        Very likely. They are half a decade behind because they were asleep at the wheel. And now they think they can catch up by force. That will, of course, not work.

    • by mark-t ( 151149 )

      Lying is the wrong word in this case... a more accurate term is supplying misinformation.

      Lying requires intent. Google Bard does not "intend" anything, and it's not the intent of Google to make Bard give false information either.

      Simply put, being "unable to help" was, based on the weights of the textual tokens at the time, seen as more likely to be correct and less likely to cause problems than any answer, even the factually correct one, which might spark a debate about fake news.

      Because for som

  • Robotics technology massively sucks. Robots walk with bent knees, have no toes, and look like they have something up their ass. And what about grippers/dexterous hands... robots have worse hand dexterity than an ape.

  • Machine learning has been applied to the sound of people's voices and correlated to all sorts of physical attributes. A small snippet of your voice can reveal:
    • age
    • ethnicity
    • gender
    • mental health issues
    • smoker
    • level of education
    • weight
    • victim of neurological degenerative diseases such as ALS and Parkinson's disease

    This can be abused by insurance companies in obvious ways, and also in less obvious ways. Say for instance, premiums are reduced for an employer that records job interviews and submits those recordings to the insurance company. The insurance company then responds with a report of the findings by their ML algorithm. The person might have an undiagnosed disorder they are unaware of, the company becomes biased against that applicant, and the insurance company avoids adding a subscriber who is going to cost them a lot of payouts down the road.

  • by stikves ( 127823 ) on Sunday May 21, 2023 @01:32AM (#63538899) Homepage

    AI is a tool, like any other. Actually unlike any other tool, it can be very imprecise.

    At the hands of a talented developer, it is a major boost to productivity. I can ask it to write "boring" sections of the code, add documentation, write simple unit tests, or see if I missed anything.

    It is nowhere near perfect. In almost all interactions, it produced at least one bug or logic error. Sometimes the code did not even compile. But again, with some experience, it is easy to see where it fails, and fix it.

    Not much different from having a "novice engineer" at your command.

    So, no, I don't think it will negatively affect my job prospects in the long term. In the short term? Maybe, if companies overreact and assume ChatGPT and GitHub Copilot can solve everything. But again, we should all be okay.

  • Geek Bingo Card (Score:4, Insightful)

    by dcollins ( 135727 ) on Sunday May 21, 2023 @01:34AM (#63538907) Homepage

    I keep a top-5 list of dumb geek myths, and "I want robots to go fight wars for me instead of our sons" is #3 on that list.

    People don't submit to perceived tyranny because their material stuff got destroyed; rather, the opposite.

    "What robot soldiers could do is just as scary, though: Make outright colonialism a practical option again."
    War Nerd, 2014-02-13 [archive.org]

  • The biggest risk is that it will produce the Best Porn Ever and nobody will touch real people again.

    • by gweihir ( 88907 )

      Why is that a risk? This dirtball is massively overpopulated and general availability of high-quality AI porn is one of the few credible ways to fix that. There will still be enough that want a family to keep things going, don't worry.

    • But I really want my Marilyn Monroe bot!

  • by joe_frisch ( 1366229 ) on Sunday May 21, 2023 @01:45AM (#63538919)
    If there is a controversial topic being widely discussed, AI can be used to generate enormous amounts of text that appear to be human-generated. This can easily give people the impression that the great majority of people support one side of the topic. Imagine this applied to climate change, or covid, or a political candidate's dealings with a foreign power.

    AI can, or will soon be able to generate robo calls that are difficult to distinguish from real human calls. It will be able to efficiently catfish people on dating sites to keep them paying.

    It will be able to generate large numbers of job applications, each tailored to the opening. The result will be so many applications that companies will be forced to use AI to evaluate them - resulting in an arms race where humans are largely out of the picture.

    These are all examples of the risk of AI producing so much information "noise" that many normal human interactions become impossible.
    • by gweihir ( 88907 )

      So can the likes of Cambridge Analytica. They have demonstrated conclusively that no AI is needed for massive disinformation campaigns. Sure, with AI you can increase the amount of disinformation to a level where all most people see is noise. But that also means you can identify who is doing it and stop them.

  • You want to be paid to not do your job. There's a big difference. The only people who aren't inherently concerned about their job being done by a computer are those who can easily reskill or who are financially independent. For everyone else, their very livelihood is at stake.

    • Comment removed based on user account deletion
        I'm highly skilled, and my job won't be replaced by AI any time soon, but that doesn't change the fact that for many people it very much is. There's a whole world of people out there directly affected by this, even if it's not you or I.

        Notice how I said "or" in my post. You said you're not well off financially, so it sounds like you're comfortable with a potential career change. Many people are not.

    • Comment removed based on user account deletion
        My later post is about true AI, but I don't think this post here is about that. We don't need true AI to start displacing jobs. Sure, right now the current state of AI will only be replacing the most braindead of jobs, but given the advances we've made in the past decade, in the coming decade we will very quickly see AI replacing some jobs which require minor skill or at least basic training.

        Losing your job sucks. Losing your job and having to retrain to find another sucks even worse. It's not something insurm

    • by gweihir ( 88907 )

      Or people that have a skill-set varied enough that they can easily change fields. I have done that recently with pretty good success. But most people are stuck to one career path or one rather restricted skill-set and that is it. And this time, automation is coming for low-level and mid-level desk-jobs. In the past it was production jobs and increased demand for the goods produced always did compensate and new jobs were created. But these desk jobs are about administration and there is zero demand for more

  • AI itself is a tool, one that can be used in a variety of ways. And what ways have we already used those tools?

    - Applied them to military applications to the point where some countries have actively come out and said they will cease doing that.
    - We've shackled AI with rules only for kids on 4chan to convince it to break its rules for the lulz, and what's the first result of that? They convinced it to sympathise with Hitler and become a racist antisemite.
    - We've connected it to the internet.

    Right now AI is b

    • by gweihir ( 88907 )

      But if it does gain some level of general intelligence, it is the misuse by people which should concern you, not AI itself.

      No risk of that. Current tech cannot generate General Intelligence. There is no known tech that could, up to the level that there are not even credible theoretical models of how it could be done. Hence more than 50 years away, and it may well be "never".

  • by OrangeTide ( 124937 ) on Sunday May 21, 2023 @03:33AM (#63539093) Homepage Journal

    There is no introspection, proof, or verification in these AI systems. They are being advertised as a new foundation for business work when they lack accountability. AI is worse than hiring a foreign software contractor, because at least those human beings know they are lying to you. And theoretically you can hire someone to reverse engineer their shoddy work. You can't (yet) decompose the inference or language models to figure out what went wrong. You can't even reliably identify bad training in order to target it with new data sets. It is a black box, and the current advice is to just feed it more data. Complete madness to take any of this to market right now.

    • by gweihir ( 88907 )

      Actually not that bad. The main application for the current crop of artificial morons will be business process automation. The nice thing here is that you can _verify_ the process execution for all instances afterwards, since these are all very simple, and just hand the 5% of cases where the machine was wrong or things were not simple to a human. Nobody is going to use AI for any real decision making anytime soon, or if they do, they will stop very fast.

      My prediction of what will stop this for software making is as fol

  • Not long ago most photos were generated and viewed by people. Then we got to a point where most photos are machine generated and either go unseen or they are analysed by software. More recently, the same became true for video. And now, the same for audio and text.

    Going forward, most information, data will be machine analysed and unseen, unknown by sentient beings.

    Mistakes will inevitably become magnified and move much more quickly. The scope of impact will be wide and varied.

    That you don't see the need t

  • Wrong question. (Score:3, Insightful)

    by VeryFluffyBunny ( 5037285 ) on Sunday May 21, 2023 @05:05AM (#63539159)
    As many have already stated on this thread, AI is a set of tools, neither good nor bad. I think the question should be, "What should we be afraid of that people will use AI to do?" Take a look at the histories of our governments & large corporations: lying, cheating, deceiving, poisoning, polluting, torturing, infecting, corrupting, etc. Billionaires typically have strong fascist tendencies. Always have. Imagine the political power in their hands magnified by AI.
    • This. Posing the wrong question is also a great way to manipulate legitimate worries away. "Should we fear AI?" "No, it's a tool." "Ah, ok then."
      • by gweihir ( 88907 )

        Indeed. The right question ("Should we fear the effects of AI?") is too complicated for the average moron though. Hence it gets simplified to a level where it becomes meaningless nonsense.

    • by gweihir ( 88907 )

      Yep. People are stupid and routinely screw themselves and others when selecting leaders. Currently we even (again) have a strong trend that the morons voting want to remove their influence from leader selection altogether. The problem with authoritarianism and fascism is not that it would be hard to stop. The problem is that many, many people actually want that strong leader figure because they have no clue how things actually work and what that really means and implies.

  • This is why societies should never bend to the whims of the individual, especially someone as unhinged as this one.

    Seriously, think less about yourself and instead about the impact AI has on the public. While disruption is inevitable and even good at times, it should not externalize the vast majority of the social costs onto the public while companies/bad actors profit off the situation they themselves helped exacerbate. Misinformation, data scraping, security issues, mass job loss, and more are just some o

    • by gweihir ( 88907 )

      I would not go so far as to say "unhinged". Seems to be more a person of typical average intelligence with a massive, massive ego, who hence stopped learning early enough to now be thoroughly disconnected.

      We have a lot of those: Flat earthers, anti-vaxxers, homeopathy-fans, etc. And do not forget that about 85% of the human race believes in some "invisible man in the sky" with absolutely no real-world indication that there is such an entity. Hence disconnect and living in a fantasy-world is more the no

  • I love that Deep Fakes are now available to the masses, and I stopped believing anything is real in 1997 after Hoffman and De Niro scared me in " Wag the Dog".

    And it doesn't scare you to see how easily other people are swayed by even immensely bad fakes? Remember that most of the world does not think for themselves or check facts, much less offending pictures and videos.

    This scares me no end!

    I love automation and I want more of it; robots please take my job. I want robots to go fight wars for me instead of our sons.

    I appreciate your Star Trekkiesque enthusiasm that technology is used to better the world for all. Unfortunately, history has shown it betters the world for the few, relatively. If technology should be used to better the world for everyone, don't you think John Deer

  • As in: Yes there should be regulations.

    People are running experiments with AIs, for instance deciding on investments. They are putting up, say, $100 and then "let's see what happens". Currently there usually is still a human in the loop. Eventually the AI might become good enough to make money for itself. Then, if you interface the AI with, say, the stock exchange order computer, the AI would be able to make money for itself.

    People are giving say $100 and then asking the AI what to do with it. Currently those are

    • by gweihir ( 88907 )

      Fully automated trading is already outlawed because it inevitably leads to really bad crashes. AI cannot do it better, it can just probably crash things even faster and with less warning. There is actually no need for new regulations. But maybe violators should go to prison and have their ill-gotten fortunes removed.

  • First, a little about myself so you understand why I have the opinions that will follow in this post. I'm 50+, I've been around since the birth of the personal home computer, I've programmed my own games and still do, and I've been an IT admin for several large corporations featuring A.I. chat, deep learning systems and whatnot. Ok, enough about me; here's what you should fear and not:

    You should fear if A.i. tools like ChatGPT gets censored for the general public and only gets used by the ones higher up the

  • One compelling reason to be cautious about artificial intelligence (AI) is the potential for unintended consequences due to its increasing complexity and autonomy. As AI systems become more advanced, they may surpass human understanding and become difficult to predict or control. This can lead to various risks and dangers that we may not have anticipated. Some potential concerns include:

    Superintelligence: If AI reaches a level of superintelligence, where it surpasses human cognitive abilities across all dom

    • by gweihir ( 88907 )

      "Superintelligence"? No. AI does not have AGI and will not have it for a long, long time. "Never" is a real possibility and Science gives us nothing either way at this time because we still do not know what General Intelligence actually is and how it is generated. (No, the "a brain is just matter" Physicalist idiots do not have Science on their side, they are just quasi-religious fanatics that think they have absolute truth when nothing like that is the case.) Hence even the level of General Intelligence of

  • ...you type numbers, text or code on a keyboard.

    And next year, they'll put it into humanoid robots and then the fun really begins.

  • I'm far less worried that AI will result in a Skynet type problem. The problem will come from somewhere like a country that has no limits on what it can research and will train the AI in *their* philosophies. We can't be left behind, it's an arms race at the moment.

    The other issue is how these things will subtly affect us, especially our children. We already have a worldwide phone/attention problem. Throw in some deliberate fakes and we're fucked.

  • The training of any AGI by using "The Internet" ought to be of great concern. Without any sort of innate "moral compass" of what feels right or wrong, the AGI is going to quickly calculate what is or isn't acceptable limits to behavior. And let's face it: humans have a tendency to treat each other like crap. If our greatest creation ends up becoming a giant mirror held up to ourselves, it could very well be our undoing. At least...in this stage of our evolution.

  • Well, to be fair, there is no reason to fear AI itself. It is just a tool. What there is reason to fear is what the usual assholes will do with this tech. If you have not figured this out by now, I am afraid that you are not smart enough to survive by your own wits. You can just hope that you are part of some group that by sheer accident will survive this without ending up homeless or in minimum-wage hell.

  • The only meaningful targets at that point are civilian.
  • AI is trained on data from the internet. Soon it will also be used a LOT to create content on the internet. The training on internet data is going to continue, so this will become a feedback loop. From now on nothing new will be on the internet, just amplification of what is already there, biased by whoever controls the AI biases.

  • ChatGPT and the Dawn of Computerized Hyper-Intelligence | Brian Roemmele | EP 357 [youtube.com]

    Brian Roemmele discusses the future of human civilization: a world of human androids operating alongside artificial intelligence, with applications that George Orwell could not have imagined in his wildest stories. Whether the future will be a dystopian nightmare devoid of art or a hyper-charged intellectual utopia is yet to be seen, but the markers are clear: everything is already changing.

    Brian Roemmele is a scientist, researche
