
Ask Slashdot: What Types of Jobs Are Opening Up In the New Field of AI? 133

Qbertino writes: I'm about to move on in my career after a "short rethink and regroup break," and for quite some time now I've been thinking about getting into a new programming language and technology, like NodeJS or Java/Kotlin or something. But I have the seriously growing suspicion that artificial intelligence is coming for us programmers and IT experts faster than we might want to admit. Just last weekend I heard myself saying to a friend who was a pioneer on the web, "AI is today what the web was in 1993" -- and I think that's very true. So just 20 minutes ago I started wondering what types of jobs there are in AI. Is anything popping up in the industry from the AI hype, what are these positions called, what precisely do they do, and what skills are needed to do them? I suspect something like an "AI Architect" for planning AI setups and clearly defining the boundaries of what the AI is supposed to do and explore. Then I presume there are requirements for something like an "AI Maintainer" and/or "AI Trainer," which would probably resemble an admin of a big data store, looking at statistics and making educated decisions on which "AI training paths" the AI should continue to explore to gain the required skill, and deciding when the "AI" is ready to be let loose on the task. As you can see, we -- AFAIK -- don't even have names for these positions yet, but I suspect, just as in the internet/web boom 20 years ago, that is about to change *very* fast.

And what about TensorFlow? Should I toy around with it, or are we past that stage already, and will others do AI setup and installation better than me before I know how this thing really works? I also suspect most of the AI work for humans will be closely tied to services and providers such as Google. You know, renting "AI" the way you rent webspace or subscribe to bandwidth today. Any services and industry vendors I should look into -- besides the obvious Google, that is? In a nutshell: what work is there in the field of AI, and how do I move into it? Like, now. What should I maybe get a degree in if I want to be on top of this AI thing? And how would you go about gaining skill and knowledge in AI today -- and I mean literally today? I know, tons of questions, but insightful advice is requested from the educated Slashdot crowd. And I bet I'm not the only one interested in this topic. Thanks.
This discussion has been archived. No new comments can be posted.

  • by tietokone-olmi ( 26595 ) on Friday June 09, 2017 @05:45PM (#54588441)

    But you gotta bring your own "battle-proven" rod.

  • Doing AI is much harder than being an application developer. I doubt most of us would be able to switch. Good luck.
    • Doing AI is much harder than being an application developer.

      Indeed. I know plenty of companies with openings, but they are looking for PhDs in machine learning and data science from top tier universities. Designing and training a deep ANN is a lot harder than learning how to edit HTML. It is not something you are going to learn in a 21 day "bootcamp".

  • ... It's just more accessible these days and more practical due to the huge increase in computing power. Back in the day, it was required to reduce the data (images, etc) to a much smaller set of features that could be fed into the AI algorithms. Now there's enough computing power for the AI engines to determine good features on their own.

    Even though you can train a neural net to recognize e.g. hotdogs by just feeding it a series of pictures of hotdogs, and, of course pictures of things that might be mist

    • by Anonymous Coward

      Here's the insidiously disruptive part (based on my limited understanding) -- how the Terminator (a portable, general AI) had such a compact computer, if you will: once you use a rented server or GPU farm to train the AI, actually running the trained neural network is not nearly as computationally intensive.
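The cost asymmetry described above (expensive training, cheap inference) can be made concrete with back-of-the-envelope arithmetic. The network shape, sample count, and epoch count below are invented purely for illustration:

```python
# Rough illustration of the training-vs-inference cost gap.
# All numbers here are made up for the demo.

def dense_flops(layer_sizes):
    """Multiply-accumulate count for one forward pass through dense layers."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

layers = [1024, 512, 512, 10]   # a small, hypothetical network
fwd = dense_flops(layers)       # cost of a single inference

samples, epochs = 1_000_000, 50
# Backprop costs roughly 2x a forward pass, so ~3x per training sample.
training = 3 * fwd * samples * epochs

print(f"one inference : {fwd:,} MACs")
print(f"full training : {training:,} MACs ({training // fwd:,}x more)")
```

Under these toy assumptions the full training run costs eight orders of magnitude more multiply-accumulates than a single inference, which is why the expensive part can live in a rented GPU farm while the trained network runs on modest hardware.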

  • There is no 'AI' (Score:1, Interesting)

    by segedunum ( 883035 )
    Like 'self-driving' cars, AI is a scam, and my eyes glaze over when I hear anyone mentioning it. It's usually a clueless investor or someone with something to peddle. All we have are systems with an ever increasing number of if statements. There is no true AI and won't be for a very long time.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Like 'self-driving' cars, AI is a scam, and my eyes glaze over when I hear anyone mentioning it. It's usually a clueless investor or someone with something to peddle. All we have are systems with an ever increasing number of if statements. There is no true AI and won't be for a very long time.

      If you think self-driving cars are a scam, then yes, AI is definitely a scam. Maybe you should wait a year or so and re-evaluate.
      You can't ignore that computers can now beat us at chess, Go, or pretty much any game based on strategy.
      Applications such as face-recognition, speech-recognition are already used in every-day life, and are increasingly based on deep-nets.
      I'm curious to know how many if statements are in the code?

    • Self-driving cars are not driven by an AI.
      There are plenty of self-driving cars on the roads right now. Many have millions of miles of driving history.
      I would suggest googling a bit instead of spreading your hubris.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Self-driving cars are powered by artificial neural networks, which rely on multiplication/convolution of 16 bit floating point matrices. The decision tree / expert system stuff you're talking about with "if statements" is ancient. Nobody is doing image segmentation, object detection, or object recognition with that.
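For what it's worth, the "multiplication of 16 bit floating point matrices" above really is the entire forward pass of a dense network layer. A minimal numpy sketch (shapes and random values are made up for the demo):

```python
# A dense layer's forward pass is literally a float16 matrix multiply
# plus a cheap nonlinearity. Sizes here are arbitrary demo choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 128)).astype(np.float16)   # input activations
W = rng.standard_normal((128, 64)).astype(np.float16)  # learned weights
b = np.zeros(64, dtype=np.float16)                     # learned biases

h = np.maximum(x @ W + b, 0)   # ReLU(xW + b): one layer of the network
print(h.shape, h.dtype)        # (1, 64) float16
```

Stack a few dozen such layers (or convolutions, which are structured matrix multiplies) and you have the kind of network the comment describes; none of it is a decision tree of if statements.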

      • So multiplication of 16 bit matrices is AI? Wow. amazing stuff. OP is right: there is no "AI". Just algorithms. Just because you call them "neural nets" doesn't change a damn thing.
        • So multiplication of 16 bit matrices is AI?

          Just as much as connecting 3 neurons together is I.

        • by Kjella ( 173770 )

          So multiplication of 16 bit matrices is AI? Wow. amazing stuff. OP is right: there is no "AI". Just algorithms. Just because you call them "neural nets" doesn't change a damn thing.

          This is flawed deconstructionist logic. Humans are an intelligent carbon-based life form; a lump of coal is neither alive nor intelligent, but it is still made of carbon. Whatever intelligence is, I doubt it is any magic that can't be implemented in silicon, even though the building blocks are equally unintelligent.

        • So multiplication of 16 bit matrices is AI?

          Well, yep, math is how we solve pretty much everything. What's more interesting than just saying it's a bunch of matrix math is understanding at a more abstract level what the math functions represent.

          The reason neural nets end up being interesting is because they are essentially universal function approximators that can be adjusted/tweaked to move closer and closer to a desired function (based on input/output data) without actually knowing the function in advance.
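The "universal function approximator" point above can be demonstrated in a few dozen lines: a one-hidden-layer net fitted by plain gradient descent learns to mimic sin(x) from samples alone, without ever being told the formula. All sizes, seeds, and learning rates below are arbitrary demo choices:

```python
# A tiny MLP trained by full-batch gradient descent to approximate
# sin(x) from (input, output) samples only. Demonstration sketch.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                          # the "unknown" target function

H, lr = 32, 0.1                        # hidden units, learning rate
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5
b2 = np.zeros(1)

for _ in range(8000):
    h = np.tanh(x @ W1 + b1)           # hidden layer
    pred = h @ W2 + b2                 # output layer
    err = pred - y                     # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)     # backprop through tanh
    gW1 = x.T @ dh / len(x);  gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")         # should be small after training
```

The net is never given sin(x); it is only nudged, sample by sample, toward whatever function the data implies, which is exactly the "adjusted/tweaked to move closer and closer to a desired function" behavior described above.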

    • All we have are systems with an ever increasing number of if statements.

      You must be completely unaware of what's happening in the field, if you believe that. Here's an example of how it works: https://www.youtube.com/watch?... [youtube.com]

    • by dbIII ( 701233 )
      Sadly it's just like "hoverboards" and "nanotechnology" - something that looks only slightly like what the term used to mean is now wearing the label.

      Then again I could be wrong because I think people might have been calling the Eliza bot "A.I." some time back. Current stuff called "A.I." is no more intelligent than that bot that would throw out different outputs based on the input. The pattern matching is just a bit more elaborate and there are now databases behind it to improve the pattern matching ove
      • providing an appearance of intelligence by eventually getting things right via brute force matching.

        That's all we need. That's how our brains evolved. We have 20 billion neurons in our cerebral cortex, with thousands of connections per neuron. That's also brute forcing. The appearance of intelligence will get better and better as we find better algorithms, and bigger networks.

  • I'm sorry, but you're too late. All of the programming jobs are already gone. :-)

  • It's been around for a while.

    If you really want to head in the right direction, talk to IT recruiters and ask them what they see trending. Keep a regular job while you mentally re-tool. Find a job that leverages your existing knowledge that also needs someone with the skills you're looking to acquire.

    Often you may find yourself picked up by people that are willing to overlook or even foster your learning curve based on demonstrating an ability to learn.

    Good Luck.

  • by Anonymous Coward

    It's mostly hype. Yes there will definitely be new jobs created, but mostly they will go to people with a strong academic background in the field. We are pretty much at the peak of the hype cycle and it is still very competitive to find jobs in the field. Don't think every man and his dog will be doing AI / machine learning. In fact the most likely long term scenario is that as the tooling gets more advanced, there is a consolidation of roles so that fewer people will be needed. There is no harm familiarizi

  • by jmcbain ( 1233044 ) on Friday June 09, 2017 @06:38PM (#54588857)

    for quite some time now thinking about getting into perhaps a new programming language and technology, like NodeJS or Java/Kotlin or something

    You are in for disappointment if you think that getting into a career in AI / ML is anything like a programming hobby, such as picking up NodeJS or Java over a few weekends. First, let me make clear the terminology I'm using. Artificial Intelligence is a broad field. Although the public perception of AI is of software/robots like HAL with which humans can talk or interact, the field also encompasses knowledge representation, reasoning, and learning. Machine learning is then an important subfield of AI, and it involves supervised learning, unsupervised learning, reinforcement learning, etc. Over the last several years, many people have started to conflate AI and ML, but for someone knowledgeable in these fields, the distinction is clear. ML is the practical application of algorithms toward taking inputs and producing an output prediction, and it's this area that contains the vast majority of jobs in "AI". Some basic applications of ML include spam filtering, face detection and recognition, product recommendations, fraud detection, revenue forecasting, gait and step detection, voice recognition, etc. If you look at that list and think about it, you'll come to realize that you've probably been consuming ML results for the last five years or much longer. If you want to work as a "ML engineer" in this area, you'll have to be knowledgeable about ML algorithms, setting up data pipelines, running experimentation, and using ML software, such as scikit-learn, R, Caffe, Tensorflow, etc.

    I manage a ML team at a large company. Let me make clear: Unless you have a strong academic background in this field, no one will take you seriously. I recently applied to be a principal engineer working on an AI/ML personal assistant, and the recruiter told me straightforwardly that the hiring managers are not interviewing anyone unless they have a recent PhD related to deep learning. I'm a bit elitist about this as well: I tend to turn away candidates that don't have at least an MS or PhD in a field related to ML. Why is this so? Because you need a rigorous background to understand why and how ML works, and this involves understanding loss functions, gradients, training vs. validation error, decision boundaries, optimization, and other things. You need to understand these things because a lot of current ML involves choosing the right knob settings (hyperparameters) that make your ML work best. If your ML algorithm isn't working well, how do you fix it? That's where this rigorous background comes in handy.
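The train-versus-validation discipline mentioned above can be sketched with a toy hyperparameter search: pick a model's "knob setting" (here, polynomial degree) by validation error, not training error. The data and degrees below are invented for illustration:

```python
# Toy model selection: training error alone cannot choose the
# hyperparameter; held-out validation error can. Demo data only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(60)  # noisy target
x_tr, y_tr = x[:40], y[:40]                        # training split
x_va, y_va = x[40:], y[40:]                        # validation split

def fit_mse(deg):
    """Fit a degree-`deg` polynomial on the training split and
    return (training MSE, validation MSE)."""
    coeffs = np.polyfit(x_tr, y_tr, deg)
    tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    return tr, va

for deg in (1, 5, 15):
    tr, va = fit_mse(deg)
    print(f"degree {deg:2d}: train MSE {tr:.3f}  val MSE {va:.3f}")
# Training error keeps falling as the model gets more flexible; only
# the held-out validation error exposes when extra flexibility hurts.
```

Knowing *why* the high-degree fit looks great on training data yet generalizes worse is exactly the kind of rigor (loss functions, decision boundaries, overfitting) the comment argues you need.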

    Now, there are many things related to ML that you can still work on if you don't have a strong background. As opposed to a ML specialist, there are plenty of positions related to data engineering (e.g. setting up and maintaining huge data pipelines), infrastructure administration (e.g. installing and mastering all aspects of Hadoop and Spark), visualization (e.g. creating dashboards that take fresh data and display it), among many others.

    • by SlashTom ( 593611 ) on Friday June 09, 2017 @07:51PM (#54589217)
      Wow. That is pretty much the only serious answer so-far ;-)

      I think you are right in saying that AI/ML is not something you pick up over a long weekend. Still, it should be possible for someone to give it a shot without doing a PhD.

      There is a really nice Coursera course I can recommend: https://www.coursera.org/learn... [coursera.org] If you follow it and do all the assignments, it will take you some months (assuming you don't do it fulltime), but you'll end up knowing a lot more about ML than a lot of others. Also, there are tons of books that cover machine learning basics. Then, if you want to get your hands dirty, you can go to Kaggle and participate. I think if you do that for a while, and prove that you are really good at it, a company/recruiter looking for an ML person might be interested even if you don't have a degree in an ML-related field.

      And as to the original question, what kind of job titles? I noticed that many times "Data Science" openings have something to do with some kind of machine learning. But the term is *very* broad and is used for pretty much everything from toying around with Excel sheets to doing really cool ML. As to "toying with Tensorflow", this is of no use whatsoever, IMHO, if you don't know anything about machine learning. So, first learn more about that, and then see how Tensorflow fits in, is what I would suggest.

      Anyway, long story short: go for it!! You won't be an expert in a month, but you are in for a treat ;-)

      • But the term is >>very

        But the term is *very* broad and is used for pretty much everything from toying around with Excel sheets to doing really cool ML.

        As to "toying with Tensorflow", this is of no use whatsoever, IMHO, if you don't know anything about machine learning. So, first learn more about that, and then see how Tensorflow fits in, is what I would suggest.

    • and tend to agree with your post based on my "non-expert but considerable time spent" experience.

      My story:
      I wanted to create intelligence via artificial life and evolution. I didn't want to create human intelligence, just tiny little creatures trying to survive type intelligence. I provided them basic sensor inputs and motor/movement raw materials but didn't program in any usage of those things, they need to figure out through evolution (how to see, move, find food, avoid getting eaten, etc.). They st
    • Unless you have a strong academic background in this field, no one will take you seriously

      I'm sure that's true today, but how much longer can that hold? There are only so many PhDs to go around.

      Just as there are a lot of programmers today who do not have CS degrees, in the future, as use of machine learning ramps up, companies will rapidly get less picky about a piece of paper that says you know ML, and be more than OK with practical examples of what you have DONE with ML.

  • Is there really anything other than Software Engineer? (for non-research roles)

    As others have said, AI isn't new. But it is a fast growing field, so I'd caution against getting too specialized. In a field that is growing fast and changing quickly, it's important to be adaptable. You only tend to see highly specialized roles in established fields, and AI is not one of those.

    • I agree with this. There's a lot of theory and experimentation going on in Machine Learning currently, but that's all research, not something you just hop into. Although there are many attempts at technology transfer and commercial exploitation of ML, I don't think there are currently any established tools and practices for that. In that sense, there is no ML equivalent of web-designer, or system-administrator.
  • There is no 'AI field', it's still just computers running code. Nothing to see here, move along..
    • That's like saying, "there's no story, just a book full of letters", or "there's no intelligence, just a skull full of chemicals", or "there's no weather forecast, just computers running code".

    • There is no 'AI field', it's still just computers running code. Nothing to see here, move along..

      When I was doing job interviews in 1999 and mentioned AI, I had several interviewers say "You can't say you're interested in AI, it's a dead field". Today, it's anything but a dead field. Yes, we're nowhere near having true AI, but we are continuing to replace more and more complex human jobs with computers. The HBO show "Westworld" has a great line where one of the programmers questions the logic of continuing to make the androids in that show more lifelike. We don't want "true AI". A truly intel

      • You might have had my attention for a moment, then you quoted something from a television show, which just reinforces an opinion I already had: People don't know the difference between actual 'artificial intelligence' and the fantasy of television and movies; not even necessarily ostensibly educated, professional people. Then to make matters worse you misspelled 'conscious' (although I could give you a pass on that, could be you're on a phone and the autocorrect did that). I'm far from impressed by 'machine
  • It'd be nice to know what's available going forward, but is any of this readily available for the displaced/long-term jobless (but still have an interest in IT/CS)?

  • Once we stop calling it AI it will work.
  • New Field of AI? MIT was doing AI in 1963, and Marvin Minsky set up a dedicated AI lab in 1970. While he wrote many books on the subject, Society of Mind is a good one to start with.

    AI gets to the point where it solves a set of previously unsolvable problems, the algorithms are then researched, and better non-AI solutions are then used to solve the same problems. Then AI falls out of fashion for a while and computing power increases thanks to Moore's law. Then it all repeats.

    • look ma, no code! (Score:5, Insightful)

      by epine ( 68316 ) on Friday June 09, 2017 @09:07PM (#54589521)

      AI gets to the point where it solves a set of previously unsolvable problems, the algorithms are then researched, and better non-AI solutions are then used to solve the same problems. Then AI falls out of fashion for a while and computing power increases thanks to Moore's law. Then it all repeats.

      This old story is such a crock.

      I highly recommend the following to anyone who wants a different perspective on modern ML:

      * Talking Machines: Remembering David MacKay with Philipp Hennig [thetalkingmachines.com] — 21 April 2016

      * Probabilistic-Numerics.org [probabilis...merics.org]

      This is plain old numerical methods, optimization, and search viewed through a Bayesian inference filter. I would never have termed any of this "artificial intelligence".

      It took the recent large advances in unsupervised learning, the kitty classifier (and progeny), and the LSTM machine translation models to finally justify rethinking academic labels. Programs like SHRDLU [wikipedia.org] from 1968 were perhaps explorations in AI, if our baby-step microscope is especially well focused. But this was closer to natural philosophy than what later became physics. Even our shiny new LSTM language models remain weirdly proximal to Searle's Chinese room [wikipedia.org]. What have we really learned from watching our machines learn? Not a whole damn lot.

      I'd nominate a term such as I-cubed: inexplicable inductive inference, or perhaps MIII: massively inexplicable inductive inference.

      Even so impeded with an appropriate name, MIII is pretty mind-blowing. But it still ain't AI. It might be a viable building block to proceed in that direction, sooner rather than later, as we begin to erect dynamical systems upon this foundation. To drive the point home, it remains way overblown to call it MIIR: massive inexplicable inductive reasoning.

      An Alberta AlphaGo Pioneer Is in China to Watch the AI Wallop Human Opponents [vice.com]

      "Before AlphaGo, much of the fundamental games and machine learning research was done here," Muller wrote in an email. "If you look through the references list of the AlphaGo paper in the journal Nature, over 40% of these references have a University of Alberta (co-)author. Then, DeepMind greatly surpassed all of these previous efforts with their new ideas."

      I haven't waded through this yet, but I suspect even the vaunted AlphaGo has a backbone of techniques that I personally wouldn't have classed as "AI" (or even AI-ish) by my own standards.

      For decades, the big idea in AI was supposed to be recursion. Perhaps human language is recursive in theory, but it's only barely recursive in practice (nest more than three levels, your accurately attentive audience grows thin). Winograd is not completely wrong about this, but my long suspicion is that recursion is not going to enter the AI building through the ground floor.

      Lately, we really have hit home runs with distributed representation, and to a lesser degree with convolutional image recognition. These are actual AI-ish ideas. However, two solid take-home techniques do not a field make.

      Here's another possible intermediate term: generalized gradient exploitation (GGE). Plus there's tons of great mathematics about overfitting and regularization. But should we really call all this math "AI"?
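As a small taste of that overfitting/regularization math: ridge regression adds a penalty term lam*||w||^2 whose closed-form solution visibly shrinks the fitted weights as lam grows. Everything below (data, lambda values) is an invented demo:

```python
# Ridge regression in one line of linear algebra:
#   w = (X^T X + lam*I)^{-1} X^T y
# The penalty lam*||w||^2 shrinks the weights as lam grows.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]                       # sparse "true" signal
y = X @ w_true + 0.5 * rng.standard_normal(30)  # noisy observations

def ridge(lam):
    """Closed-form ridge solution for penalty strength `lam`."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for lam in (0.0, 1.0, 100.0):
    print(f"lambda={lam:6.1f}  ||w|| = {np.linalg.norm(ridge(lam)):.3f}")
# Bigger lambda -> smaller weights -> a smoother, less overfit model.
```

Whether shrinkage like this deserves the label "AI" is exactly the question the comment raises; it is certainly well-understood mathematics.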

      In practice what AI ends up meaning is "look ma, no code!" Hey, we just built an impressive system without hiring rooms full of code monkeys, so we must be doing something right.

      AI is not the moving target of lore. It's mainly our long AI pretension that fits the bill.

      How many legs d

  • If you want to be at the forefront of AI you should first invent it and to do that, you need to define intelligence and individualism. I think brain research would be your field to try.

    You seem to want to go into statistical classification and database systems, neither of which are new and have well defined paths via mathematical statistics, programming and systems design. The fact that you think learning a particular language will help shows that you are missing the point of modern IT systems but if anythi

    • If you want to be at the forefront of AI you should first invent it and to do that, you need to define intelligence and individualism. I think brain research would be your field to try.

      Before you try to invent anything, you should first learn the stuff that's already out there that others are working on.

      If you want to go into commercial AI, make an app

      Most AI requires more computation than fits in an app.

  • First of all, if you are talking about true AI, and not just basically the same sort of connectionist statistical learning algorithms that have been around for many years, you are perhaps better off getting into electrical engineering rather than machine learning, because we don't yet really have the hardware to properly run the software on.

    AI is all about research and research is still at a very early stage. When it comes to AI we are cavemen with crude stone axes. We need some fundamental breakthroughs and th

    • Actually I suspect we may find that the best approach is by cheating with a bit of biology. We may find it easier to grow our own biological neurons that map themselves than trying to figure out what a brain does and emulate it with electronic circuits.

      Doubtful. Biological neurons are slow and inflexible. Artificial neural nets can already outperform human brains on various subtasks, and it's much easier to experiment with different network parameters in software. Looking at real brains is useful for learning some new tricks, but it's unlikely that brains have the perfect implementation for the task, given the huge constraints imposed on them by nature.

      My suggestion is to get at least 3 PhDs

      I would pick one, and work together with people who have the other 2.

  • Thanks for the replies and the lively discussion thus far.

    Lots of sentiment here.

    First of all, on a sidenote: People should drop the notion that professional web development is trivial. It isn't. Yes, there is a disproportionate amount of eternal n00bs and dolts in the field and truckloads of throw-away code straight from Nightmare on Elm Street. Which is why I'd like to leave it and will only take jobs with teams that know CI, are building a product I'm interested in or pay obscene amounts of money. But bad

    • I am also an 80's computer kid, still coding. Recently I assessed an AI startup, with a team of young ML PhDs, on behalf of a potential investor. While apparently brilliant at the science, that team was surprisingly inexperienced in software development, e.g. seriously struggling with web UIs. So perhaps you should look into job opportunities on ML teams, helping them bridge their ML magic with the outside world?
  • Since when is AI new?
  • By the time you get that degree, the whole field could have popped. Even if that's not the case, it's pretty likely that things will have shifted within the field from one specialisation to another. On top of that, if something's hot now, chances are there'll be oversupply in four years' time, because everybody's a genius and they all had the same idea as you.

    Plus, it might have helped to mention what qualifications & experience you already have.

  • I successfully retrained myself for AI. What you’re proposing is possible, but it was extremely difficult and required a sustained and disciplined effort over a period of years. It was like trying to do a whole graduate degree through self-imposed study. In brief, here’s my story. I graduated with a PhD in linguistics in 2001. I was also a competent programmer on the side. As a grad student, my studies were focused on pure linguistics, but I kept picking up odd jobs which fell in the overl
