Reading Guide To AI Design & Neural Networks?

Raistlin84 writes "I'm a PhD student in theoretical physics who's recently gotten quite interested in AI design. During my high school days, I spent most of my spare time coding various stuff, so I have a good working knowledge of some application programming languages (C/C++, Pascal/Delphi, Assembler) and of how a computer works internally. Recently, I was given the book On Intelligence, in which Jeff Hawkins describes numerous interesting ideas on how one would actually design a brain. As I have no formal background in computer science, I would like to broaden my knowledge in the direction of neural networks, pattern recognition, etc., but don't really know where to start reading. Given my background, I figure that the 'abstract' theory would suit me best, so I would like to ask for a few book suggestions or other directions."
This discussion has been archived. No new comments can be posted.

  • PDP (Score:5, Informative)

    by kahizonaki ( 1226692 ) on Tuesday December 02, 2008 @06:56AM (#25957457) Homepage
    Parallel Distributed Processing (both books) by Rumelhart, McClelland, and the PDP research group, 1986. "THE" classic neural network resource--and still somewhat relevant.
    • Re: (Score:3, Informative)

      by agravier ( 1411419 )
      For a somewhat more up-to-date and complementary book, I recommend Computational Explorations in Cognitive Neuroscience by Randall C. O'Reilly and Yuko Munakata (The MIT Press). The accompanying simulator is intended to extend and replace PDP++ and is quite pleasant to use. It is at http://grey.colorado.edu/emergent/index.php/Main_Page [colorado.edu]
    • Re:PDP (Score:4, Interesting)

      by babbs ( 1403837 ) on Tuesday December 02, 2008 @07:52AM (#25957757)
      I prefer James Anderson's "An Introduction to Neural Networks". I think it is better suited for someone coming from the physical, mathematical, or neuro- sciences.
      • Re: (Score:2, Interesting)

        by kahizonaki ( 1226692 )
        The great thing about the PDP books is that they make almost NO assumptions about the reader's background. There's no code, a bunch of pictures, and something in there for everyone. Each chapter is written with a specific goal in mind, and by leaders in the field--there are chapters on the mathematics of the networks, on their dynamical properties (i.e. how they can be thought of as Boltzmann machines), as well as lots of ideas for applications and specific studies of how real experiments worked.
    • Re: (Score:3, Informative)

      Cosma Shalizi is also a Physicist. I don't think he is actually doing research in machine learning or AI but he likes to read a lot and he tends to have fairly extensive reading lists.

      Machine Learning [umich.edu]

      AI [umich.edu]

      You may also want to get familiar with Geoffrey Hinton's current work in neural networks [youtube.com].
  • by Anonymous Coward on Tuesday December 02, 2008 @07:00AM (#25957483)

    Due to the possibility of a robot army rising up, I refuse to help.

    • Before The Terminator, there was JP:
      http://www.youtube.com/watch?v=jac80JB04NQ [youtube.com]
    • I, for one, welcome our new artificially intelligent overlords.
  • AIMA (Score:5, Informative)

    by omuls are tasty ( 1321759 ) on Tuesday December 02, 2008 @07:01AM (#25957487)
    Artificial Intelligence: A Modern Approach by Russell and Norvig is more or less the standard AI textbook and the book I'd suggest to get an overview of AI and its different methodologies. Mind you, it's over 1000 pages, but a very interesting read.
    • Re: (Score:3, Interesting)

      by xtracto ( 837672 )

      I must second that; the Russell and Norvig book is one of the most important books in the field.

      I would also recommend:

      Artificial Intelligence: A New Synthesis [google.com] by Nils J. Nilsson [wikipedia.org], who is considered one of the founders of A.I.

    • If it's the book I think it is, it gives a good overview of 'traditional' AI (rules, logic systems, planning) but not really anything about 'soft' approaches like neural nets. I found it rather disappointing. Read any of the classic Rodney Brooks papers. If nothing else, they are certainly inspiring - they always make me want to build robots.
      • Re: (Score:3, Informative)

        by hoofinasia ( 1234460 )
        Nope. It's got neural networks (section 20.5). Try walking into any cog sci / AI faculty office without seeing this book. Don't let anyone tell you it's dry (it's got math! gasp!). It's accessible and thorough.

        Also:
        Statistics!

        ...learn it, love it. That's mostly what AI is under all the gloss. That sound is a thousand Cog Sci students screaming in terror; ignore them.
        • Too true.

          For someone ready to face this fact, Christopher Bishop's _Neural Networks for Pattern Recognition_ is a nice read, and Hastie/Tibshirani's _Elements of Statistical Learning_ is a modern classic.

          Bishop also has a newer more accessible book called _Pattern Recognition and Machine Learning_. I haven't read it, but it looks a bit like Duda/Hart's book.

      • by Goaway ( 82658 )

        I think disappointment is the feeling any bright-eyed young man wanting to work with AI is going to feel in any case.

        Welcome to the AI winter.

      • The second edition of AIMA has much more content about "soft" AI methods than the first edition did; it's almost as if a whole other book were added. The second edition really is a great survey of all the various subfields of AI, from traditional logic systems to neural networks to Bayesian reasoning and decision theory. I'd say it's definitely worthwhile.
    • Re: (Score:2, Informative)

      by Anonymous Coward

      I'd like to add to this. AIMA gives you a very broad and moderately deep overview of the state of AI ten years ago. As such, it is a truly excellent introduction to the subject.

      If you want a more recent, much more thorough and narrow introduction to neural networks in particular and machine learning in general, I'd recommend Chris Bishop's book: Pattern Recognition and Machine Learning (http://research.microsoft.com/~cmbishop/prml/), which focuses on learning rather than searching and planning.

    • Re: (Score:3, Informative)

      by Yvanhoe ( 564877 )
      Agreed. All the basic knowledge about the field is in this book. Parts of it are freely available online; you can judge for yourself: http://aima.cs.berkeley.edu/ [berkeley.edu]
    • Re:AIMA (Score:4, Informative)

      by six11 ( 579 ) <johnsoggNO@SPAMcmu.edu> on Tuesday December 02, 2008 @09:42AM (#25958453) Homepage

      Also seconded. Russell & Norvig's Artificial Intelligence: A Modern Approach [berkeley.edu] is a good book, well illustrated, and generally lacks the undecipherable academia-speak that pervades lots of AI literature.

      Here's an article that was particularly influential on me and some of my friends: Brooks, R. 1991. Intelligence Without Reason. MIT AI Memo No. 1292 [mit.edu]. Even though it is 'just' a tech report, it is frequently cited. He had another one, Intelligence Without Representation, which is also good.

      Somebody else mentioned the McClelland and Rumelhart PDP (neural networks) book, and it is also still quite good in spite of its age.

      The interesting thing about AI (to me) is the funny mix of domain expertise. You have philosophers, sociologists, cognitive scientists, psychologists, computer scientists, and mathematicians. That's not a complete list---I'm in human-computer interaction and design research.

      But because of the motley crew of domains you have a hundred people speaking a hundred different dialects. Some people put everything in really mathy terms, and their journal articles look (to me) like they are written in Klingon. Then you have others who write in beautiful prose but don't give any specifics on how to implement things. Still others express everything in code or predicate logic.

      The oldest school of AI holds that you can reduce intelligence to a series of rules that can operate on any input and make some deterministic and intelligent sense of it. That works to a degree, but it falls apart at some point partly because of the computational complexity (e.g. the algorithm works if you have a million years to wait for the answer). Another reason it falls apart is because there are some kinds of intelligence that can't be reduced to rational computation (e.g. I love my wife because of that thing she does...).

      There's a newer kind of AI that is based on having relatively simple computational structures that eat lots of data, "learn" rules based on that data, and are capable of giving fairly convincing illusions of smartness when given additional data from the wild. Neural nets fall into this category.
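
      For flavor, here is about the simplest possible learner of that kind, sketched in C. The toy data and constants are mine and purely illustrative: a perceptron that "learns" the weights for logical AND from labelled examples rather than being handed a rule.

      #include <stdio.h>

      /* Perceptron learning rule on a toy dataset (logical AND).
         The weights are nudged whenever the prediction disagrees
         with the label; after a few epochs they encode the "rule". */
      int main(void)
      {
          double x[4][2] = {{0,0},{0,1},{1,0},{1,1}};
          int y[4] = {0, 0, 0, 1};
          double w[2] = {0, 0}, bias = 0, rate = 0.1;

          for (int epoch = 0; epoch < 20; epoch++) {
              for (int i = 0; i < 4; i++) {
                  int out = (w[0]*x[i][0] + w[1]*x[i][1] + bias > 0) ? 1 : 0;
                  double err = y[i] - out;
                  w[0] += rate * err * x[i][0];
                  w[1] += rate * err * x[i][1];
                  bias += rate * err;
              }
          }
          printf("learned: w = (%.2f, %.2f), bias = %.2f\n", w[0], w[1], bias);
          return 0;
      }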

      A third kind of AI brings these two schools together in the belief that there are fundamental computational structures like Bayesian Networks that can model intelligence* but those structures by themselves are insufficient and must be able to adapt based on exposure to real data. So instead of having a static BN whose topology is defined at the start and remains the same throughout the life of the robot, we can have a dynamic BN whose structure changes based on the environment.
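
      In the static case, the underlying computation is just Bayes' rule over the network's conditional probability tables. A minimal two-node sketch in C, with made-up numbers (the dynamic part is then about changing which nodes and edges exist, which this toy doesn't attempt):

      #include <stdio.h>

      /* Tiny two-node Bayesian network, Cause -> Effect, with toy CPTs.
         Infers P(Cause | Effect observed) via Bayes' rule. */
      int main(void)
      {
          double p_cause = 0.2;           /* prior P(C)  */
          double p_eff_given_c = 0.9;     /* P(E | C)    */
          double p_eff_given_not_c = 0.1; /* P(E | ~C)   */

          double p_eff = p_eff_given_c * p_cause
                       + p_eff_given_not_c * (1.0 - p_cause);
          double posterior = p_eff_given_c * p_cause / p_eff;

          printf("P(Cause | Effect) = %.3f\n", posterior); /* ~0.692 */
          return 0;
      }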

      I remember reading a recent article by John McCarthy arguing that all this statistical business is hogwash, and that the old school positivist, reductionist approach will eventually win. He's a smart guy, inventor of LISP and a Turing Award recipient. It seems his view is in the minority, but I'm not one to say he's wrong. However, my inclination is that the third hybrid group is probably going to be the one to make the most progress in the years to come.

      The reason for my preference for the hybrid school could probably be best explained by Lucy Suchman's Plans and Situated Actions [wikipedia.org]. I can't really do her thesis justice in a few sentences, but the short version of her argument is that there are plans (the sequence of steps that we think we are about to carry out before performing some task) and actions, the set of things we actually do. In my mind, a plan corresponds roughly with the underlying computational mechanism, but the actions correspond with how that mechanism executes and what happens when the underlying structure is insufficient, wrong, misleading, or fails.

      Hope that helps.

      Gabe

      * None of this is to say that computational structures that we implement with software/hardware ar

      • by starm_ ( 573321 )

        These are good tips. I would also suggest reading Eliezer Yudkowsky's posts on the Oxford-based blog http://www.overcomingbias.com/ [overcomingbias.com]. Read them in chronological order; they'll make more sense that way.

        He writes criticism of the different AI approaches that is really worth reading. He'll tell you that you should read books by E.T. Jaynes and Judea Pearl. I highly recommend reading Jaynes before doing any probabilistic modeling. There is even a free draft of his book online.

  • AI != design brain (Score:5, Insightful)

    by Kupfernigk ( 1190345 ) on Tuesday December 02, 2008 @07:03AM (#25957499)
    There is a very big difference between AI - which is based on guesses about how "intelligence" works - and studies of brain function. I'm going to make a totally unjustified sweeping generalisation and suggest that one reason AI has generally been a failure is that we have had quite wrong ideas about how the brain actually works. That is to say, the focus has been on how the brain seems to be like a distributed computer (neurons and the axons that relay their output), because up till now nobody has really understood how the brain stores and organises memory in parallel - which seems to be the key to it all, and is all about the software.

    So my feeling is that the first people to really get anywhere with AI will either work for Google or be the neurobiologists who finally crack what is actually going on in there. If I weren't close to retirement and wanted to build a career in AI, I'd be looking at how MapReduce works, and at the work being done building on it, rather than at robotics. I'd also be looking at seriously parallel processing.

    So my initial suggestion has nothing to do with conventional AI at all - look at Programming Erlang, and anything you can find about how Google does its stuff.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      The human brain does not use anything that even remotely resembles software. The brain is hardwired.

      Software in brains... that's a paddlin'

      • Re: (Score:3, Funny)

        by dmbasso ( 1052166 )

        The universe is software; the workings of the brain are just a tiny side-effect, but they can still be considered software.

        From universe.c:

        int main()
        {
              [...]
              return 42;
        }

    • Re: (Score:3, Informative)

      by Dan East ( 318230 )

      http://www.databasecolumn.com/2008/01/mapreduce-a-major-step-back.html [databasecolumn.com]

      As both educators and researchers, we are amazed at the hype that the MapReduce proponents have spread about how it represents a paradigm shift in the development of scalable, data-intensive applications. MapReduce may be a good idea for writing certain types of general-purpose computations, but to the database community, it is:

      1. A giant step backward in the programming paradigm for large-scale data intensive applications

      • 1. A giant step backward in the programming paradigm for large-scale data intensive applications

        *blink* [slashdot.org]

        4. Missing most of the features that are routinely included in current DBMS

        TCP/IP is missing those same features. Oh noes!

    • by Viol8 ( 599362 ) on Tuesday December 02, 2008 @08:10AM (#25957853) Homepage

      .. as applied to normal computers. In this case it's simply speeded-up serial computation - i.e. the algorithm could be run serially, so Programming Erlang is irrelevant. With the brain, parallel computation is *vital* to how it works - it couldn't work serially; some things MUST happen at the same time, e.g. different inputs to the same neuron - so studying parallel computation in ordinary computers is a complete waste of time if you want to learn how biological brains work. It's comparing apples and oranges.

      • No it isn't (Score:3, Interesting)

        by Kupfernigk ( 1190345 )
        You've just reinforced my point by not understanding how the brain works. Neuron inputs and outputs are known to be pulse-coded, and as you would expect with chemically based transmitters, the pulse frequency is low (it evolved; it didn't get designed from first principles!). So it is perfectly possible to represent a neuron by a time-slicing parallel system, because it is extremely low bandwidth and its output varies very slowly, but is NEVER DC. As a result, the output of the neuron does not need to be con
        • Re: (Score:3, Interesting)

          by Viol8 ( 599362 )

          And you've missed my point. Parallel computing on a von Neumann computer raises issues of race conditions, deadlocking, etc. These are the sort of things you have to worry about with parallel silicon systems. None of these issues apply to brains (as far as we know), so what is the use in learning about them? You're talking about simulating a neural system, which is not the same thing - a simulation of anything can be done serially given enough time, never mind in parallel. But it will never be an exact represe

          • a simulation of anything can be done serially given enough time, never mind in parallel. But it will never be an exact representation of the real physical process and in the case of brains

            But it may be close enough. You've only got so many inputs and outputs, so just roll through every single neuron in your ANN and simulate what it does at that given step. At time t+1, do the same thing again.
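
            A minimal sketch of that synchronous update in C (the three-neuron net, weights, and inputs are made up, purely illustrative): every neuron's next state is computed from the current state vector, then the buffers are swapped, so the serial visiting order can't introduce race conditions.

            #include <math.h>
            #include <stdio.h>

            #define N 3 /* toy network size */

            /* One synchronous time step: the state at t+1 is a function
               of the full state at t, so iteration order doesn't matter. */
            static void step(const double w[N][N], const double in[N],
                             const double cur[N], double next[N])
            {
                for (int i = 0; i < N; i++) {
                    double sum = in[i];
                    for (int j = 0; j < N; j++)
                        sum += w[i][j] * cur[j];
                    next[i] = tanh(sum); /* squashing activation */
                }
            }

            int main(void)
            {
                double w[N][N] = {{0, 0.5, -0.3}, {0.2, 0, 0.7}, {-0.6, 0.1, 0}};
                double in[N] = {1.0, 0.0, 0.0};
                double a[N] = {0}, b[N];

                for (int t = 0; t < 5; t++) {
                    step(w, in, a, b);
                    for (int i = 0; i < N; i++) a[i] = b[i]; /* buffer swap */
                    printf("t=%d: %.3f %.3f %.3f\n", t + 1, a[0], a[1], a[2]);
                }
                return 0;
            }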

            I've seen a number of neural networks that do this, and yes, there's always a little less stochasticity when compa

          • Actually I don't think you are correct, except insofar as the analogy is not precise, but there are several instances of what look like race conditions in the brain causing problems. Some of them have to do with optical illusions where one part of visual processing handles the field one way, another handles it another way, and a static field appears to oscillate, have abnormal brightness, etc. Another is the phenomenon of dyslexia. I do not pretend to be any kind of expert on this and my little knowledge is prob
    • You may be right, but it's never been a major goal of AI researchers to duplicate how the brain works. AI has been steadfastly interested in building machines that do what the brain does, but not how the brain does it. So while I'm sure that many AI researchers keep an eye on these things, I don't think that "wrong ideas about how the brain actually works" is the problem, since ideas about how the brain works have relatively little influence on AI.

      As an aside, MapReduce is not that complicated, nor is it

    • There is a very big difference between AI - which is based on guesses about how "intelligence" works - and studies of brain function. I'm going to make a totally unjustified sweeping generalisation and suggest that one reason AI has generally been a failure is that we have had quite wrong ideas about how the brain actually works. That is to say, the focus has been on how the brain seems to be like a distributed computer (neurons and the axons that relay their output), because up till now nobody has really understood how the brain stores and organises memory in parallel - which seems to be the key to it all, and is all about the software.

      A lot of the brain's function is architectural, rather than merely a matter of 'software'.

      I don't know if you can say "AI has generally been a failure", but traditional AI has actually been guided by the non-biological notion of a "physical symbol system" rather than by conceptions of how the brain actually works. And even on the biologically inspired side of the field, only the most ignorant would think that artificial neural networks have much in common with the brain.

      The field of AI, with few exceptions,

    • If I wasn't close to retirement, and wanted to build a career in AI, I'd be looking at how mapreduce works...

      Why not do this stuff during your retirement? What else are you going to do with the time between now and your death?

    • by Xest ( 935314 )

      "There is a very big difference between AI - which is based on guesses about how "intelligence" works, and studies of brain function."

      Yes, there most certainly is. AI is a far broader topic than the study of the brain, for starters; it extends to the study of swarm intelligence and emergent properties in evolution, for example. The field of AI generally uses nature as inspiration and builds useful techniques from there. The human brain is but one of the things that has been studied for inspiration, and has led to

    • by TapeCutter ( 624760 ) on Tuesday December 02, 2008 @09:20AM (#25958263) Journal

      "I'd also be looking as seriously parallel processing."

      If you haven't seen this [bluebrain.epfl.ch] it might interest you. Note that it's a simulation for use in studying the physiology of the mammalian brain, not an AI experiment. Any ghost in the machine would have to emerge by itself in pretty much the same way mind emerges from brain function.

    • by HiThere ( 15173 )

      Erlang has lots of nice features...but it's too bloody slow!

      Well, Erlang HiPE is fast compared to Python on the 2008 shootout, but it's still quite slow compared to Java. (And I haven't tested it recently for stability; I know that when I tested it a few years ago it was prone to flakiness in the example programs.)

      (I was surprised to see how much Erlang had sped up since I last checked it out. I wonder if its GUI has gotten any better.)

  • Heard of AGI? (Score:3, Informative)

    by QuantumG ( 50515 ) * <qg@biodome.org> on Tuesday December 02, 2008 @07:04AM (#25957503) Homepage Journal

    http://www.opencog.org/wiki/OpenCogPrime:WikiBook [opencog.org]

    Some interesting stuff.

    • Only philosophical bullshit. AI is making way too many simplifications in how the brain works, but this book contains even less material. It makes sweeping conclusions based on almost no data.

      It is very, very probably flat out wrong.

      • by QuantumG ( 50515 ) *

        "this book" .. by that do you mean "On Intelligence".. in which case I agree, but umm.. maybe you weren't trying to reply to me.

        Slashdot's comment system is fucked, I recommend you switch to "classic" view as soon as possible.

        It's a lot like Vista......

  • Russell & Norvig (Score:5, Interesting)

    by Gazzonyx ( 982402 ) <scott.lovenberg@gm a i l.com> on Tuesday December 02, 2008 @07:05AM (#25957517)
    In my AI class last semester, we used Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach, 2nd Ed. It's fairly dry, but good for theory nonetheless. If you're a physics geek, it should be right up your alley; they approach everything from a mathematical angle and then have a bit of commentary on the theory, but never seem to get to the practical uses for the theory.

    If you're in the US, send me an email and I'll send you my copy. They charge an arm and a leg for these books and then buy them back for 1/10 the price. I usually don't even bother selling them back.
    • Oh... yeah, my email is moc.liamg@grebnevol.ttocs (reversed for spam protection).
    • by stiller ( 451878 )

      It's fairly dry, but good for theory nonetheless.

      Dry? As far as AI/machine learning goes, it's a regular page-turner!

      Go read some dedicated NN book, that's dry!

    • by IICV ( 652597 )

      I don't get it. When I took AI, everyone in my class said the book was "dry" - but it's got all sorts of little jokes. Every chapter opens with a silly little quote along the lines of:

      Chapter 1: "In which we try to explain why we consider AI to be a subject most worthy of study, and in which we try to decide what exactly it is, this being a good thing to decide before embarking."

      The problem in the chapter is sometimes humorous, too; the chapter on probabilities is basically about whether or not the author

      • You make a good point, but the way I see it, the authors were tasked with taking a very hard subject and making it bearable, if not enjoyable. I think they did reasonably well. That being said, the topic of AI is just like file systems: unless you're the (special) kind of person who finds it to be a sexy topic, there just isn't anything that's going to make a textbook on the subject more than bearable. It kind of comes with the territory, I guess.

        FWIW, my current top 3 books are:
        • Code Complete
  • by Anonymous Coward on Tuesday December 02, 2008 @07:05AM (#25957521)

    The following books are must-haves for machine learning enthusiasts:

    Christopher Bishop
    http://research.microsoft.com/~cmbishop/prml/

    Richard Duda
    http://rii.ricoh.com/~stork/DHS.html

    There you will get an insight into how machine learning methods (like neural networks, SVMs, boosting, Bayes classifiers) work.

    For general AI (not so much in the direction of statistical learning as the books above, but more towards higher-level learning like inference rules), I can recommend the published work of

    Drew McDermott
    http://cs-www.cs.yale.edu/homes/dvm/

    • Re: (Score:2, Informative)

      by DocDJ ( 530740 )
      +1 for the book by Bishop (don't know about the others). In addition, have a look at Information Theory, Inference, and Learning Algorithms by David MacKay, which I found stunningly good. There is a free online version available, but you should buy it: http://www.inference.phy.cam.ac.uk/itprnn/book.html [cam.ac.uk]
    • Re: (Score:3, Informative)

      I'll second Duda and Hart, though I guess it's Duda, Hart, and Stork now.

      It's probably the most widely used pattern classification book that I've seen, and it covers most of the techniques that you'll find. The coverage of neural networks is limited to backprop, though, so you'll need to look elsewhere for more depth on those.

  • by MosesJones ( 55544 ) on Tuesday December 02, 2008 @07:07AM (#25957537) Homepage

    Question: Where can I find a Reading Guide to AI Design & Neural Networks

    Answer: Why do you want to AI design & Neural Networks?

    Question: Because I want to learn.

    Answer: Will learn AI design & neural networks make you happy

    Question: Yes

    There you go. Now the question is whether Slashdot beats the Turing test on this one.
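
    The trick behind that exchange is decades old. A minimal ELIZA-style reflector in C (the patterns and canned replies here are mine, purely illustrative) just matches a stock prefix and parrots the remainder back inside a question, grammar be damned - which is exactly where glitches like "Why do you want to AI design & Neural Networks?" come from.

    #include <stdio.h>
    #include <string.h>

    /* ELIZA-style reflection: match a stock prefix, echo the rest
       of the user's input back inside a canned question. */
    int main(void)
    {
        char buf[256];
        while (fgets(buf, sizeof buf, stdin)) {
            buf[strcspn(buf, "\n")] = '\0';
            if (strncmp(buf, "I want to ", 10) == 0)
                printf("Why do you want to %s?\n", buf + 10);
            else if (strncmp(buf, "Because ", 8) == 0)
                printf("Will %s make you happy?\n", buf + 8);
            else
                printf("Tell me more.\n");
        }
        return 0;
    }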

  • Adding another point to your feature space, I'll put in a plug for a technique called Stochastic Discrimination. It's not well known, but it is quite good at pattern recognition and avoids a lot of the weaknesses of neural networks, such as over-training. Since it's not so well known, you have to go to the few academic papers to read up on it, or visit the website http://kappa.math.buffalo.edu/ [buffalo.edu]. But it's got a very solid mathematical foundation (developed by a former math professor of mine) and isn't as "hacky"

  • On Neural Nets at least.. The only textbook that I can think of offhand which is decent is Duda, Hart and Stork [ricoh.com]

    Hawkins, like many others, has ripped off many of his ideas from Steve Grossberg [bu.edu] (in this case, the ART model). He's not very easy to read, though, especially if you start much earlier than, say, Ellias and Grossberg, 1975. You should also check out the work of people like Jack Cowan [uchicago.edu], Rajesh Rao [washington.edu], Christof Koch [caltech.edu], Tom Poggio [mit.edu], David McLaughlin [nyu.edu], Bard Ermentrout [pitt.edu], among many, many others. I think

  • by Gearoid_Murphy ( 976819 ) on Tuesday December 02, 2008 @07:41AM (#25957697)
    Be careful before committing to a large-scale neural network project. Aside from the intuition that the brain is a massively interconnected network, no one is really sure what aspect of neural network functionality is necessary for intelligence. My advice to you is to spend time coming to terms with the abstract nature of intelligence rather than coding up elaborate projects. This link [uh.edu] is a philosophical discussion on directed behaviour which I found quite interesting (if a bit vague, which is the mark of philosophy). Also, as you become familiar with the literature, you will see many examples of algorithms which claim to model certain aspects of intelligence. These algorithms work because they have a reliable and unambiguous artificial environment from which they draw their sensory information. The problem with practical artificial intelligence is that the real world is extremely ambiguous and noisy (in the signal sense). Therefore the problem is not creating an algorithm which can emulate intelligent behaviour, but taking the empirical information of the sensory input and producing from that data a reliable abstract representation which is easily processed by the AI algorithms (whatever they may be: neural networks, genetic programming, decision trees, etc.). Good luck.
    • My advice to you is to spend time coming to terms with the abstract nature of intelligence rather than coding up elaborate projects. This link is a philosophical discussion on directed behaviour which I found quite interesting (if a bit vague, which is the mark of philosophy).

      I wouldn't recommend for anyone to waste their time reading philosophers' opinions about AI research. Might as well read a used car salesman's treatise on automotive design.

      At least used car salesmen actually have cars to sell...

  • Christoph Adami's Introduction to Artificial Life. He's a closet physicist and it shows. Do at least read the TOC before you dismiss it.
    • I have read that book, and implemented/hacked with AVIDA-type stuff.

      I think it's even more off-topic than it sounds, even if artificial life is neato-keen (but generally useless).

  • Cognitive Psychology (Score:3, Interesting)

    by tgv ( 254536 ) on Tuesday December 02, 2008 @07:58AM (#25957783) Journal

    I would strongly recommend starting with a textbook on Cognitive Psychology, or reading one in parallel. AI tends to overlook the fact that intelligence is a human trait, not the most efficient algorithm for solving a logic puzzle. Anderson's book can be recommended: http://bcs.worthpublishers.com/anderson6e/default.asp?s=&n=&i=&v=&o=&ns=0&uid=0&rau=0 [worthpublishers.com].

    • by khallow ( 566160 )

      AI tends to overlook the fact that intelligence is a human trait

      That's incorrect, unless one wants to claim that other intelligent creatures, such as some cetaceans and octopi to give a couple of examples, are human. And once we develop actual artificial intelligences, are they human as well?

      • I think GP was trying to make the point that cognition is not optimal. The kind of AI used for Google strives to be the best solution to a problem. Humans, on the other hand, use (bad) heuristics, guesswork, and even superstition. When programming AI to try to understand "human intelligence", it's probably important to try to understand what "human intelligence" is.

  • These might seem a little old, but are still a couple of my favorites:
    Reinforcement Learning by Sutton & Barto [amazon.com]
    Machine Learning by Tom Mitchell [amazon.com]
  • You said you don't have any formal knowledge of CS. Then don't think about neural networks yet; you have to build from the ground up. You need to take algorithms (it doesn't matter that you're already a programmer) and language theory (languages, regex, ... Turing machines) at the very least. After that you can start experimenting with AI.

  • Haven't we had a number of stories recently questioning the validity of CS degrees, with lots of people (usually sysadmins) waffling on about how degrees are a waste of time and how anyone can pick up computer skills? OK, all you "I don't need no degree, I can do it all on my own" types: show us how you've conquered the world of AI where so many others doing BScs, MScs and PhDs have failed.

    What? Is that the sound of silence I can hear?

  • I think 'neural gas' is the area of neural networks research inspired by statistical physics. I don't know if there are any books about it, but you may find a chapter in an ANN textbook, and you can certainly find papers via Google.

    Contrary to what others are suggesting, you probably aren't looking for the Russell & Norvig book, which is in fact good and almost qualifies as "the standard AI textbook". I counter-recommend it only because it's about Good Old-Fashioned AI, which is interesting stuff, but compl

  • We seem to be reading a lot of Skynet related posts these days.

    I better get the drapes for the bunker finished!
  • Without knowing the details about where you stand with things, my advice would be to concentrate on finishing your PhD first. There's no limit to the number of distractions during that final push, but big new areas of study are usually a bad idea.

    Assuming that's not an issue (now or eventually): as a beginner in the field, you don't need to start with articles; there are books that will help for a while. But you may find quickly that you need to place yourself in one of two camps: people who want to devel

  • that is, it's complete bullshit, but as a dream forever out of reach, it drives a lot of important and accidental discoveries, like databases or optical character recognition

    so we need lots of bright minds working in AI. none of them will ever actually achieve the goal. but along the way, they will spin off fantastic new technology

    so i applaud your focus, but you should be aware that anything you do of any import will be orthogonal to your goals

  • By Prof Penrose.

    Your PhD should stand you in good stead for the math required.
  • Recent stuff I ran across that seemed very interesting: http://www.youtube.com/watch?v=AyzOUbkUf3M [youtube.com]

    Beyond that, neural networks are a dead field; they're cool, but you can't really do much with them.

  • I'd recommend "The Age Of Spiritual Machines: When Computers Exceed Human Intelligence" [amazon.com] by Ray Kurzweil. The first chapter is a bit dense, but it really picks up from there. It touches on a lot of highly technical issues, such as artificial intelligence and quantum computing, without being overly technical itself. It would be a good launch-point into some heavier reading, as it contains a very extensive bibliography and recommended reading list.

    Penguin has an excerpt from Chapter 6: Building New Brains [penguingroup.com]

  • Back in the late 1970s and early 1980s, Byte Magazine had several really good primer articles on AI, expert systems, and neural nets. I spent many an hour reading them back in my university days. They even had an entire issue dedicated to artificial intelligence, with articles like "The Brains of Men and Machines" and "A Model of the Brain for Robot Control".

    In one of the articles they look at the structure of the brain and nervous system in terms of motor control. A lot of processing gets done outside of th
  • The term AI is so nebulous that it doesn't really mean much of anything. It's more of a functional goal (computer-based human-like ability) than anything more concrete, and as anything that may fall under that general umbrella does become better understood or more concrete, then it tends to be no longer regarded as part of AI (e.g. machine learning, expert systems, speech recognition).

    It's also worth noting that natural intelligence is also a rather nebulous concept - you'll find many definitions offered (e

  • Hawkins is misguided (Score:3, Interesting)

    by joeyblades ( 785896 ) on Tuesday December 02, 2008 @10:30AM (#25959003)

    I read "On Intelligence", too. While Hawkins has some interesting thoughts, I was less than inspired. Probably because I read John Searle's "Rediscovery of the Mind" first. Actually, most of Searle's work, as well as the work of Roger Penrose has led me to the conclusion that the Strong AI tract is missing the boat. The Strong AI proponents, like Hawkins, believe that if we build a sufficiently complex artificial neural network we will necessarily get intelligence. Searle and Penrose have very convincing arguments to suggest that this is not the right path to artificial intelligence.

    Realistically, how could one build an artificial brain without first understanding how the real one works? And I don't mean how neural networks function; I mean how the configuration of neural networks in the brain (and whatever other relevant structures and processes that might be necessary) accomplish the feat of intelligence. We still do not have a scientific theory for what causes intelligence. Without that, anything we build will just be a bigger artificial neural network.

    Also, the thing that Strong AI'ers always seem to forget: an artificial neural net only exhibits intelligence by virtue of some human brain that interprets the inputs and outputs of the system to decide whether the results match expectation (i.e. it takes "real" intelligence to determine when artificial intelligence has occurred). Contrast this with the way your brain works and how you recognize intelligence from within, and you'll realize just how far from producing artificial brains we really are...

    I'm not saying that artificial intelligence is impossible, and neither is Searle (Penrose is still on the fence). I'm just saying, don't think you can slap a bunch of artificial neurons together and expect intelligence to happen.

    • Did you in fact read "On Intelligence"? I did. You're not describing anything I found in Hawkins's ideas, and I can certainly tell you he's not the type to say that intelligence magically happens once you get enough complexity. You are especially unfair to say,

      Realistically, how could one build an artificial brain without first understanding how the real one works? ... don't think you can slap a bunch of artificial neurons together and expect intelligence to happen.

      because Hawkins's main lament throughout the text is that, when he researched the problem, no one was coming up with theories for how the brain works. He specifically says something like (paraphrasing, since I don't have it with me), "It's not tha

  • I've never understood the draw behind "neural networks" ... it's a really cool-sounding term for an otherwise not-so-exciting algorithm.

    A neural network lets you determine an approximation to a function for which there may be no closed-form expression. It's basically a piecewise linear approximation with heuristic edge-weighting, where the edge weights are "trained" by inputting numerous samples to the "neural network".
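
    To make "trained" concrete, here's a toy sketch in C (the network size, initial weights, learning rate, and iteration count are all made up, purely illustrative): a 2-2-1 sigmoid net fitted to XOR by backpropagation, i.e. gradient descent on the squared error, which is exactly the sample-driven edge-weight adjustment described above.

    #include <math.h>
    #include <stdio.h>

    /* Tiny 2-2-1 feedforward net trained by backprop on XOR. */
    static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

    int main(void)
    {
        double x[4][2] = {{0,0},{0,1},{1,0},{1,1}};
        double y[4] = {0, 1, 1, 0};
        /* small fixed asymmetric initial weights ("random" stand-ins) */
        double w1[2][2] = {{0.5,-0.4},{0.3,0.8}}, b1[2] = {0.1,-0.2};
        double w2[2] = {-0.6, 0.7}, b2 = 0.05, rate = 0.5;

        for (int epoch = 0; epoch < 20000; epoch++) {
            for (int i = 0; i < 4; i++) {
                /* forward pass */
                double h[2], sum = b2;
                for (int j = 0; j < 2; j++) {
                    h[j] = sigmoid(w1[j][0]*x[i][0] + w1[j][1]*x[i][1] + b1[j]);
                    sum += w2[j] * h[j];
                }
                double out = sigmoid(sum);
                /* backward pass: gradient of squared error */
                double d_out = (out - y[i]) * out * (1.0 - out);
                for (int j = 0; j < 2; j++) {
                    double d_h = d_out * w2[j] * h[j] * (1.0 - h[j]);
                    w2[j] -= rate * d_out * h[j];
                    w1[j][0] -= rate * d_h * x[i][0];
                    w1[j][1] -= rate * d_h * x[i][1];
                    b1[j] -= rate * d_h;
                }
                b2 -= rate * d_out;
            }
        }
        for (int i = 0; i < 4; i++) { /* show the learned mapping */
            double h[2], sum = b2;
            for (int j = 0; j < 2; j++) {
                h[j] = sigmoid(w1[j][0]*x[i][0] + w1[j][1]*x[i][1] + b1[j]);
                sum += w2[j] * h[j];
            }
            printf("%g XOR %g -> %.3f\n", x[i][0], x[i][1], sigmoid(sum));
        }
        return 0;
    }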


    • It sounds as if you're describing a feed-forward network. Things get much more interesting once you bring feedback paths into the picture. Try googling "Adaptive Resonance Theory (ART)" for one particular architecture, or consider your own grey noodle as the ultimate proof of concept of the power of neural nets!

  • Well, this is an easy one. You should read books on how the brain is built. I would read "On Intelligence" by Jeff Hawkins to start. The idea is that you want to see how the brain functions so that we can emulate it. That means you need to understand the functions of both brain hemispheres. The left generally handles linear sequential pattern stream processing while the right handles visual simultaneous pattern stream processing. In short, the left handles language, the right images. The brain functions the
  • Get Valentino Braitenberg's book "Vehicles: Experiments in Synthetic Psychology". "Vehicles" is his term for robots. It starts very simple and builds to great stuff. It's a great book, and amazingly short, though it took me months to read because I'd read a page or two and then spend a few days thinking about it. I can't recommend it enough.
  • I don't know if this is still the case, but 10+ years ago California State University Stanislaus [csustan.edu] had a very well respected AI/NN program. Maybe look around their site or email them to get some suggestions?
  • AI can work from one of two "ends". I think it is clear the brain is built with neurons, so you might think to study neural networks. But that is like saying computers are built with many interconnected transistors, and "I want to design a web site, so I'll study the physics of semiconductors." No, if your goal is a web site you need to work at a higher level of abstraction, maybe at the level of PHP or JavaScript. Likewise, the brain almost certainly organizes networks of neurons into higher-level structures

    • To answer my own post: what I'm saying is that the brain likely does NOT store information the way modern computers do. In a computer you can point to the physical place where any bit is stored. It will live inside a cell of RAM or a spot on a disk.

      But if I were to compute a Fourier transform of what I'm looking at right now and then transmit it into a feedback loop that is rigged to decay with a 4-second half-life, you could not point to where the picture of my coffee cup is stored.
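
      A sketch of that arithmetic in C, with made-up numbers (just a decay constant for a 4-second half-life): the "stored" trace fades exponentially, so at any instant it exists only as a level of activity, not at an addressable location.

      #include <math.h>
      #include <stdio.h>

      /* Toy decaying trace: whatever was "stored" halves every 4 s. */
      int main(void)
      {
          double trace = 1.0;           /* signal strength at t = 0 */
          const double half_life = 4.0; /* seconds */
          const double dt = 1.0;        /* step size, seconds */

          for (double t = 0.0; t <= 12.0; t += dt) {
              printf("t = %4.1f s  trace = %.4f\n", t, trace);
              trace *= pow(0.5, dt / half_life);
          }
          return 0;
      }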

      Neurons have a long

  • by ja ( 14684 )
    Russell/Norvig has already been mentioned many times above, and if you can follow that one, you should be fine regarding basic computer science. While you wonder where to go next, take a detour into digital signal processing, so that your machines will end up having sensors tailor-made for the job they are supposed to do, and will be able to easily transform a dataset into some format that actually makes sense.
  • There's an obscure old book by Edward de Bono (now a creativity and problem-solving guru) called "The Mechanism of Mind" that I found fascinating. It's very much non-academic and non-computer-oriented, but it has an interesting take on pattern recognition and decision making in the human mind (as opposed to the human brain). If you liked "On Intelligence", this is a similar kind of thing, but at a much more abstract level, and without, well... any real academic basis. I think it is out of print now, but maybe

  • Before I'd ever read anything about computer-science neural networks, I read Steve Grand's book about making a robot chimp, with a very basic explanation of how neurons work. On that basis I wrote Democracy [positech.co.uk], a computer game based on a neural network.
    Obviously you will learn a hell of a lot from good books, but there's something to be said for just jumping in and coding it "your way", to see what happens. It will likely make the (somewhat dry) textbooks on the subject seem much more relevant when you have al

  • One of the arguments made in the "On Intelligence" book concerned the inadequacy of using the von Neumann architecture [wikipedia.org] - with the standard fetch, decode, execute, store paradigm that is at the heart of all modern computing - to construct a true human-like intelligence. You either have to build hardware that approximates the human brain (i.e. lots of neuron-like devices) OR you have to simulate such a device on the above-mentioned von Neumann-type computers (which are inefficient at simulating a brain and req
  • I'm surprised this hasn't been mentioned yet, but Kenneth Stanley did some interesting work at the University of Texas on NEAT, NeuroEvolution of Augmenting Topologies. He and others have expanded this in several directions, including things like Compositional Pattern Producing Networks (CPPNs) that can be joined together into a larger network.

    I actually just signed up for Safari to read the chapter in AI Techniques for Game Programming on NEAT and some other approaches.

    I also found that the books by G

  • Well, once you have read a bit and want to play, may I suggest you look into Breve [spiderland.org] for your experimenting. Think of it as your AI-simulation Expert Lego set. Lots of tools to visualize your algorithms. Cheers.

  • If you come from theoretical physics, I recommend the following two books to you:
    • David MacKay, Information Theory, Inference, and Learning Algorithms free online version available as PDF [cam.ac.uk] (Written by a Cambridge physicist who hails from the Cavendish Lab.)
    • Pawel Lewicki and Thomas Hill, STATISTICS: Methods and Applications, order here [statsoft.nl] (Despite its title, it contains statistical machine learning methods like decision tree induction, Bayes classifiers and neural networks)

    Good luck with your studies! ~ Joc

  • ... walk over to the CS department and talk to the chair. Explain what you want, and (s)he'll point you to the best person in the department to give you the answers you want, if that isn't the chair him- or herself.

    Seriously, why the hell would you ask here when you have far more reputable people a few steps away?
