Data Storage Databases Education Programming Software Technology

The Baby Bootstrap? 435

An anonymous reader asks: "Slashdot recently covered a story that DARPA would significantly cut CS research. When I was completing graduate work in AI, the 'baby bootstrap' was considered the holy grail of military applications. Simply put, the 'baby bootstrap' would empower a computing device to learn like a child with a very good memory. DARPA poured a small fortune into the research. No sensors, servos or video input - it only needed terminal I/O to be effective. Today the internet could provide a developmental database far beyond any testbed that we imagined, yet there has been no significant progress in over 30 years. MindPixels and Cycorp seem typical of poorly funded efforts headed in the wrong direction, and all we hear from DARPA is autonomous robots. NIST seems more interested in industrial applications. Even Google is remarkably void of anything about the 'baby bootstrap'. What went wrong? Has the military really given up on this concept, or has their research moved to other, more classified levels?"
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by hshana ( 657854 )
    Maybe they were afraid of Skynet.
  • Oh great... (Score:4, Funny)

    by kwoo ( 641864 ) <kjwcodeNO@SPAMgmail.com> on Monday April 04, 2005 @06:43PM (#12138766) Homepage Journal

    Just one problem with this kind of research...

    For the first year I'll be up every two hours all night, tending to the system.

    Actually, that may be better than just being up all night, like I am now.

  • Classified (Score:5, Funny)

    by pete-classic ( 75983 ) <hutnick@gmail.com> on Monday April 04, 2005 @06:44PM (#12138768) Homepage Journal
    It has moved to more classified levels.

    I'd go into more detail, but the C.I.A. and C.I.D are at my door. Ooh, the B.A.T.F. just pulled up in a Mother's Cookies truck!

    -Peter
  • that's why you haven't heard of it! and even as we speak the number of intelligent "beings" is growing, and soon they will hunt you and your loved ones down
  • baby bootstrap (Score:5, Interesting)

    by kris_lang ( 466170 ) on Monday April 04, 2005 @06:47PM (#12138803)
    Sure, that was the engine of thought behind stories such as WarGames and "The 9x10^9 Names of God". Somehow, unfettered access to data and time with "neural networking" capacity to form links and create linkages to pieces of data ("associative memory") would be all that was needed to create intelligence, and perhaps even sentience.

    Minsky came up wrong on the single-layer perceptron, AI was wrong on the purely feed-forward neural-network systems, Rumelhart and McClelland got some good promo off of their feed-forward net that could learn to pronounce idiosyncrasies, and Sejnowski got a great job at the Salk from the AI delusions. But no, it appears not to have gone anywhere... thus far.

    Later comment will be positive. ...
    • Re:baby bootstrap (Score:5, Interesting)

      by Al Mutasim ( 831844 ) on Monday April 04, 2005 @07:04PM (#12138966)
      It seems we can program anything done with conscious thought--algebra, logic, and so forth. It's mostly the things we do unconsciously--recognize objects, interpret terrain, extract meaning from sentences--that can't be put adequately into code. Would the code for these unconscious processes really be complicated, or is it just that we don't have mental access to the techniques?
      • Re:baby bootstrap (Score:5, Insightful)

        by man_ls ( 248470 ) on Monday April 04, 2005 @07:10PM (#12139009)
        I doubt it would be too difficult to code -- if we knew the mechanism by which it proceeded.

        It's hard to code a procedure to replicate the working of the mind... if you don't know how the mind does it in the first place.
        • Re:baby bootstrap (Score:4, Insightful)

          by Servants ( 587312 ) on Monday April 04, 2005 @09:48PM (#12140061)
          I doubt it would be too difficult to code -- if we knew the mechanism by which it proceeded.

          It's hard to code a procedure to replicate the working of the mind... if you don't know how the mind does it in the first place.


          On the other hand, it might be that the reason we don't understand how the mind does certain things is that they're actually extremely complicated, and don't reduce very well to a programmable step-by-step algorithm nor to a simple and general mathematical learning structure. It's hard to tell, although I think it's telling that after decades of work, neither psychologists nor computer scientists can understand or replicate much of what babies do.

          Sometimes the best way for a computer to learn something may not be the way a baby does it, anyway; c.f. chess.
          • Re:baby bootstrap (Score:5, Informative)

            by bluephone ( 200451 ) <grey@nOspAm.burntelectrons.org> on Monday April 04, 2005 @11:37PM (#12140715) Homepage Journal
            "Sometimes the best way for a computer to learn something may not be the way a baby does it, anyway; c.f. chess."

            Except computers never learned chess; humans programmed complex move analysis routines along with the rules, and many times a database of strategies with statistical weighting. There's a limited capacity to "learn" against opponents, but that's usually just more preprogrammed analysis and pattern matching than actual spontaneous data linking. And like a poster higher up said, there was a time we thought that was all one needed. It's not. We already have rudimentary AIs in labs that can "learn" in the sense they can create accurate spontaneous data links. The human brain (or the brain of any semi-complex organism, really) is a black box with such unimaginable gears inside that we're fumbling in the dark. It's hard to reverse engineer a mind because unlike reverse engineering a BIOS or widget, we don't really understand how a mind works, is put together, or even what it's really comprised of.

            • Re:baby bootstrap (Score:4, Interesting)

              by ajs ( 35943 ) <ajs.ajs@com> on Tuesday April 05, 2005 @07:54AM (#12142457) Homepage Journal
              "It's hard to reverse engineer a mind becuase unlike reverse engineering a BIOS or widget, we don't really understand how a mind works,"

              I would argue (and I could be proven wrong) that today we have a very general understanding of how a mind works... in that we understand the concept of a neural network, which does seem to be a decent model for the basic "mind" which makes choices for us... the problem comes about when we attempt to understand HUMAN BEHAVIOR, which is the combination of a mind (neural network) and dozens of auxiliary, special-purpose systems, ranging from the neurons in the optic nerve that perform a plethora of pre-processing on the retina's image data to the area in the brain that, we're just discovering, models our "empathy"; it allows us to re-process visual information about others as if we were experiencing what they are.

              These special-purpose systems are sometimes inside the brain (the latter example), sometimes outside (the former), but they are not part of what we traditionally expect consciousness to be.

              These tools make many of the tasks that we expect AIs to perform nearly impossible. For example, facial recognition seems like it should be easy, but once you sit down with a camera and try to make the computer "see" differences, you find that faces all look very much alike. We are tricked -- by a shockingly sophisticated facial recognition pre-filter in our brain -- into thinking that faces are widely distinct, but they are not (the old "all [race] look alike," is actually true... for all values of [race]).

              So, while we might look at an AI and say, "unless it can tell faces apart, it's not 'smart'," it turns out that that's actually a pretty poor measure of pure intelligence.

              Other aspects of our instinctive measures of intelligence such as language, management of a human body (e.g. walking), etc. all have one or more of these auxiliary systems at their heart.

              So we really have two problems: create a machine that can think; and create a machine that can behave like a human.

              The former is either within our grasp, or already possible. The latter is going to have to be the product of an enormous reverse-engineering effort which has probably only just begun.
      • Re:baby bootstrap (Score:5, Interesting)

        by kris_lang ( 466170 ) on Monday April 04, 2005 @07:18PM (#12139067)
        Ah, those are exactly the things I was commenting about above...

        That's what the "neural network" paradigm was all about. You have an arbitrary and fixed number of input nodes, and you have an arbitrary and fixed number of output nodes. You create linkages between these nodes and "weight" them with some multiplicative factor. In some particular instantiations, you limit all inputs to be [-1... +1] and limit all weights to be within the range [-1 ... +1].

        So with A input nodes and B output nodes, you've got a network of AxB interconnections between these input and output layers. The brain analogy is that the A layer is the input layer or receptor layer, the B layer is the output or motor layer, and it is the interconnections between these neurons, the neural network composed of the axons and dendrites connecting these virtual neurons, that does the thinking.

        Example: create a network as above. Place completely random numbers meeting the criteria of the model (e.g. within the range -1 < weight < +1) into the interconnection weights. You can also chain further layers, so that A's output feeds forward to B, B's output feeds forward to C, etc., and these are called intermediate layers.

        Rumelhart and McClelland encoded spellings as triplets of letters (26x26x26), had a few (or one, I can't remember this now) intermediate layers, and an output layer corresponding to phonemes to be said. They effectively encoded the temporal aspect of the processing into the triplets, sidestepping (what I consider) the more interesting part of the problem. They trained this neural network by feeding it the spelling of words and adjusting the weights of the network until the outputs were the desired ones.

        Note that nowhere in this process do they explicitly tell the system that certain spelling combinations lead to specific pronunciations. They only "trained" the system by telling it if it was right or wrong. The system's weights incorporated this knowledge in these "Hebbian" synapses and neurons.

        So this is associative processing, using only feed-forward mechanisms. Feedback, loops, and temporal processing are even more interesting...

        alas not enough room in this margin to keep going.
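        To make the description above concrete, here is a minimal Python/NumPy sketch of such a feed-forward A-to-B network, with weights kept in [-1 ... +1] and trained only by being told how wrong its outputs were. The layer sizes, learning rate, and toy data are invented for illustration; this is not the actual Rumelhart/McClelland setup.

        import numpy as np

        rng = np.random.default_rng(0)

        A, B = 8, 4                        # illustrative layer sizes
        W = rng.uniform(-1, 1, (A, B))     # random initial weights in [-1 ... +1]

        def forward(x):
            # each output node is a weighted sum of the inputs, squashed into [-1, +1]
            return np.tanh(x @ W)

        # toy training pairs: inputs in [-1, +1] and the outputs we want for them
        X = rng.uniform(-1, 1, (100, A))
        T = np.tanh(X @ rng.uniform(-1, 1, (A, B)))   # stand-in "desired" outputs

        lr = 0.1
        for _ in range(2000):
            Y = forward(X)
            err = T - Y                    # the network is only told how wrong it was
            # error-correction update of the A-by-B interconnection weights
            W += lr * X.T @ (err * (1 - Y ** 2)) / len(X)
            np.clip(W, -1, 1, out=W)       # keep weights inside the model's range

        print("mean error after training:", float(np.abs(T - forward(X)).mean()))

        Nowhere is the network explicitly told which input patterns go with which outputs; whatever mapping it ends up with lives entirely in the weights.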
        • Re:baby bootstrap (Score:5, Interesting)

          by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Monday April 04, 2005 @08:05PM (#12139428) Journal
          Note that nowhere in this process do they explicitly tell the system that certain spelling combinations lead to specific pronunciations. They only "trained" the system by telling it if it's right or wrong.

          Right, it's kind of like an implementation of bayesian spam filtering, but for other problem domains. Instead of spam/ham, it's pronounced-correctly/incorrectly. Rinse and repeat.

          I dabble in AI now and again so I haven't read up on everything that's out there, but in my limited travels what I haven't yet seen is a neural network implementation which can learn and grow itself. The recently posted /. article [slashdot.org] about Numenta seems to be heading in the right direction. Most neural networks are incredibly rudimentary, offering a few levels of propagation. In a real brain, there's a hell of a lot more going on.

          I did some calculations a while back, and based upon 100 billion neurons in the brain, each capable of firing let's say an average of 1000 times per second, and we'll assume that at any given time a generous 1% of all neurons are actively firing, and that the information firing takes 100 clock cycles to process, then you'd need the equivalent of about a 100 TeraHz processor with oodles of memory to have the same processing power as the human brain. Of course, you'd also need to correctly simulate *how* the brain is wired up to get any kind of beneficial processing.

          So as for the whole 1980's AI winter, it was inevitable. The computing power and storage required for any sufficiently advanced AI just wasn't available. It's only very recently that it has become possible to achieve fairly complex AI.
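          Working the parent's estimate through explicitly (every number below is the parent's assumption, not a measured value):

          neurons          = 100e9   # ~100 billion neurons
          fires_per_second = 1000    # average firing rate per neuron
          active_fraction  = 0.01    # a "generous" 1% active at any instant
          cycles_per_fire  = 100     # clock cycles to process one firing

          required_hz = neurons * fires_per_second * active_fraction * cycles_per_fire
          print(required_hz / 1e12)  # -> 100.0, i.e. the ~100 TeraHz figure above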
          • Re:baby bootstrap (Score:3, Insightful)

            by djfray ( 803421 )
            I'd be very interested in seeing information confirming anything close to your generous 1% firing at a time, and how this is integrated with the rest of the system for signal processing, who fires when, etcetera. I think, however, that we need to take into account the fact that more neurons doesn't mean smarter at all. Take a look at whales, for instance, with brains much larger than our own, and thusly, more neurons. A whale can't go on Slashdot and say "OMGZ first post guys" much less something of huma
            • by Anonymous Coward
              A whale can't go on Slashdot and say "OMGZ first post guys" much less something of human level intelligence.

              Yet again, more proof that whales are smarter than humans. :-)
              --
              AC
            • Re:baby bootstrap (Score:5, Interesting)

              by pluggo ( 98988 ) on Monday April 04, 2005 @11:40PM (#12140731) Homepage
              Take a look at whales, for instance, with brains much larger than our own, and thusly, more neurons. A whale can't go on Slashdot and say "OMGZ first post guys" much less something of human level intelligence.

              This doesn't necessarily mean lower intelligence, in my opinion. Being underwater prevents most technology (that we know of) from working, from fire and wheels to computers and airplanes.

              A whale doesn't have fingers or hands, either, but whales and dolphins could well be as intelligent as (or more so than) us, but simply be less technologically advanced and unable/unwilling to communicate with us in a way we understand.

              Sure, they seem dumb at Sea World- but then, if you took a human baby and put it in a cage and threw bananas at it when it did a trick for you, it would probably behave pretty stupidly. Much of our intellect is awakened by our experiences in the early 5 or so years- within limits, the more you are stimulated within this time, the smarter you will end up being. I would simply wonder what a dolphin or whale could be taught to do if stimulated properly.

              An interesting and slightly off-topic side note is that whales and dolphins are conscious breathers; i.e., they must consciously surface in order to breathe, so they never go completely to sleep. Instead, half of their brain sleeps at a time- during this time, they're in a groggy half-sleeping state that allows enough consciousness to surface and to wake up if there's danger.

              Intelligent and friendly on rye bread with some mayonnaise.
              • Re:baby bootstrap (Score:3, Insightful)

                by HuguesT ( 84078 )
                The human brain is hardwired for complex languages. We're not sure about cetaceans. They definitely communicate, but we don't know at what level of complexity.

                We know this because people who have had their speech centre knocked out by a stroke don't recover any form of speech. Other bits of the brain don't take over to compensate.

                Now language is pretty important to overall intelligence. Without it there's no I/O processing, and it's pretty hard to learn.
          • Re:baby bootstrap (Score:3, Interesting)

            by Servants ( 587312 )
            Right, it's kind of like an implementation of bayesian spam filtering, but for other problem domains.

            By Bayesian spam filtering, I think you mean general classification problems, in which case, yes, neural networks can implement classification - it's a stretch to say that McClelland and Rumelhart's did, because the possible output included most non-repeating combinations of English phonemes and is thus nearly infinite, but the principle is there.

            Of course, you'd also need to correctly simulate *how* the
            • Re:baby bootstrap (Score:4, Informative)

              by misterpies ( 632880 ) on Tuesday April 05, 2005 @04:45AM (#12141921)

              >> By Bayesian spam filtering, I think you mean general classification problems, in which case, yes, neural networks can implement classification - it's a stretch to say that McClelland and Rumelhart's did, because the possible output included most non-repeating combinations of English phonemes and is thus nearly infinite, but the principle is there.

              IIRC, mathematically it's been shown that neural nets and bayesian learning systems (such as spam filters) are entirely equivalent. Check out some of the work by David MacKay at the University of Cambridge.
          • Re:baby bootstrap (Score:4, Insightful)

            by dublin ( 31215 ) on Tuesday April 05, 2005 @01:57AM (#12141319) Homepage
            So as far as the whole 1980's AI winter, it was inevitable. The computing power and storage requirements for any sufficiently advanced AI just wasn't possible. It's only until very recently that it's possible to achieve fairly complex AI.

            Funny, that's the same thing they said back in the 80's. And the 70's. And the 90's.

            Sorry, but I don't buy it. Neural nets are not a panacea - I'm a robotics guy by training, and they were already the supposed magic pixie dust technology that was going to give us human-like robot motion back in the 1980's. Funny, but the hard problems that need real AI, like voice recognition, handwriting recognition, unguided learning, etc. are just as far off today as they were 20-30 years ago.

            Faster computers have definitely not been terribly beneficial. As an example, modern speech and voice recognition systems are significantly but not dramatically "better" than they were 20 years ago (perhaps a 10-20x improvement, max) in spite of the fact that computers are roughly a thousand times faster: ~6 MHz vs. ~4 GHz for high-end desktop PCs. (Not to mention available RAM that's larger than the disk storage in entire mainframe data centers back then...)

            Procedural AI has proven itself to be a miserable failure for nearly a half century now, and neural nets have shown that they are anything but self-organizing. Like so many other efforts to copy or explain life, it appears that having the raw materials is simply not enough - life is *different* - it's really, really hard to imitate even poorly, no matter how hard we apply our own intelligent design to the problem.

            I sincerely doubt that I will live to see "baby bootstrap" systems, and I'm not all that old. I suspect that only true hardware neural nets hold any hope of mimicking life to any minimally useful degree, but the problems are very, very hard here, and the reality is that we know next to nothing with any certainty about how even the simplest brains really function...
            • Re:baby bootstrap (Score:3, Interesting)

              by HuguesT ( 84078 )
              The technologies you talk about are not as far off as they were earlier.

              Today OCR of printed text is a solved problem. It comes bundled with your $100 scanner, and it's damn useful.

              By solved I mean that if you gave a few pages to a person to type up, they would make more errors than OCR software makes now.

              Handwritten OCR will come, it is harder, but not impossibly harder.

              Speech recognition is progressing. It comes bundled with MacOS/X, and you've certainly heard of spoken text entry in word processors. It
          • Re:baby bootstrap (Score:3, Informative)

            by coaxial ( 28297 )
            I dabble in AI now and again so I haven't read up on everything that's out there, but in my limited travels what I haven't yet seen is a neural network implementation which can learn and grow itself. The recently posted /. article about Numenta seems to be heading in the right direction. Most neural networks are incredibly rudimentary, offering a few levels of propagation. In a real brain, there's a hell of a lot more going on.

            I don't know what you mean "grow", since all implementations use a static number
        • Re:baby bootstrap (Score:4, Insightful)

          by polv0 ( 596583 ) on Monday April 04, 2005 @11:10PM (#12140561)
          It is fairly easy to show (see Bishop 1995) that a simple two layer neural network can scale to reproduce arbitrarily complex but smooth functions to any degree of required accuracy, and that a three layer neural network can extend this capability to functions with discontinuities. While mathematically this is a tantalizing prospect, and only begins to cover the work that has been done to extend the capabilities of neural networks and other machine learning algorithms (such as support vector machines), there remains a fundamental problem. In order for these networks to effectively learn, they must be presented with a tremendous number of high quality and meaningful sequences of input and output.

          For example, in text recognition, hundreds of thousands of hand written characters are painstakingly hand labeled with their correct letters and used as a learning database on which the algorithm is trained. The algorithm will then accurately reproduce the correct categorization for a surprisingly high number of the training examples, and any new examples drawn from the same population. But given new examples written in a different script or style, the classifier will fail to generalize.

          How can we hope to create a training database that is comprehensive enough to cover a topic that, when learned, would demonstrate intelligence? And fundamentally, aren't we just creating a really good mimic?
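          As a tiny illustration of that approximation result, here is a sketch (hidden-layer size, learning rate, and target function all invented for illustration) of a two-layer network being fit to a smooth one-dimensional function by plain gradient descent. It says nothing, of course, about the data-volume problem raised above.

          import numpy as np

          rng = np.random.default_rng(1)

          # target: an arbitrarily chosen smooth function on [-3, 3]
          x = np.linspace(-3, 3, 200).reshape(-1, 1)
          y = np.sin(x) + 0.3 * x ** 2

          H = 20                                   # hidden units
          W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
          W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)

          lr = 0.01
          for _ in range(20000):
              h = np.tanh(x @ W1 + b1)             # hidden layer
              out = h @ W2 + b2                    # linear output layer
              err = out - y
              # backpropagate the squared error through both layers
              gW2 = h.T @ err / len(x)
              gb2 = err.mean(0)
              dh = (err @ W2.T) * (1 - h ** 2)
              gW1 = x.T @ dh / len(x)
              gb1 = dh.mean(0)
              W2 -= lr * gW2; b2 -= lr * gb2
              W1 -= lr * gW1; b1 -= lr * gb1

          print("final mean squared error:", float((err ** 2).mean()))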
      • Excellent point. I think you are right: it is easier to describe (i.e.: program) something that you had to laboriously understand yourself, rather than something that is second-nature and easy.

        But this is why I think more communication between people doing research in neuroscience/cognitive science/evolutionary psychology and people doing AI programming is critical. There are some very interesting psych experiments that attempt to reverse engineer how the brain works. For instance, determining what algorit
      • Re:baby bootstrap (Score:3, Insightful)

        by rgmoore ( 133276 ) *

        I think that a key issue is that not everything in our brains is handled the same way, so not all of it is equally easy to program. Conscious thought is essentially a software process running on the part of our brain that serves as a general-purpose computer. Our unconscious processes are essentially hardware processes running in parts of our brains that are specifically structured to do just that one thing. The fact that unconscious processes are run in hardware means that they're not subject to introspec

    • Re:baby bootstrap (Score:4, Insightful)

      by cynic10508 ( 785816 ) on Monday April 04, 2005 @07:06PM (#12138974) Journal
      Dreyfus devotes a whole book to asking why these things don't work. I believe Minsky overestimates the project. It may all boil down to the fact that purely syntactic (symbol manipulation) work isn't going to give you any semantically meaningful output.
      • hmmm...

        appropriate algebras would allow for starting with particular sequences, allowing manipulations on them, and still staying within the confines of the grammar. Any grammar that you can parse with a finite automaton would be one example. The semantic meaning is what we imbue it with afterwards. So GIGO may apply. If you start with a symbol (even the empty set symbol) and apply syntactic operators on it, you may generate outputs that are capable of having semantically meaningful "meaning" applied
        • Re:baby bootstrap (Score:4, Interesting)

          by cynic10508 ( 785816 ) on Monday April 04, 2005 @07:37PM (#12139197) Journal

          Ah, philosophy of math. How fickle and unforgiving it is.

          True, you can apply meaning to a syntactic structure. But like the mistake Douglas Hofstadter makes in Godel, Escher, Bach: An Eternal Golden Braid, there is nothing that "forces itself upon us." Or, another way of refuting Hofstadter, there's nothing about D:=B|| that makes it "Doug has two brothers" any more than "Assign B to D, double pipe".

          Machine translation is an example of applying semantics to a syntactic structure. It works not because the syntax gives us semantics, but rather because we structure the syntax in such a way that we can systematically apply semantics and get meaningful output. Like creating your own algebra.

        • Re:baby bootstrap (Score:3, Insightful)

          by thelen ( 208445 )

          The system might generate syntactically correct outcomes, but have we really solved the problem if we the observers are still the ones to apply semantic content? Isn't the point of Searle's Chinese Room thought experiment to show that syntactic transformations are not sufficient to imbue the transformer with a semantic understanding of its activity?

    • Neural Nets (Score:3, Interesting)

      by jd ( 1658 )
      One of the biggest problems with neural networks is that 99.99% of all implementations are linear. This means you can ONLY implement a NN using them for a space that is linearly divisible AND where the number of divisions is exactly equal to the number of neurons.

      That is a horrible constraint to put on AI problems which are (very likely) non-linear and in a hard-to-guess problem space.
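      The textbook illustration of that constraint is XOR: no single linear boundary separates the four points, which a brute-force check over many candidate boundaries confirms. (A toy sketch, not a statement about any particular NN package.)

      import numpy as np
      from itertools import product

      X = np.array(list(product([0, 1], repeat=2)))    # the four XOR inputs
      y = X[:, 0] ^ X[:, 1]                             # XOR labels: 0, 1, 1, 0

      # exhaustively try many linear decision boundaries w.x + b > 0
      best = 0
      for w1 in np.linspace(-2, 2, 41):
          for w2 in np.linspace(-2, 2, 41):
              for b in np.linspace(-2, 2, 41):
                  pred = (X @ np.array([w1, w2]) + b > 0).astype(int)
                  best = max(best, int((pred == y).sum()))

      print("best any linear boundary manages:", best, "out of 4")   # -> 3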

      Also, many training algorithms assume that the network is in a non-cyclic layout. Loops are Bad. You can do grids, in self

      • Re:Neural Nets (Score:3, Interesting)

        by nebular ( 76369 )
        I agree entirely. What we sense is not the real world but a consciousness that is generated by our brains. Our intelligence is merely the end result of this abstraction of the real world.
      • Re:Neural Nets (Score:3, Informative)

        by TheSync ( 5291 )
        As someone who has programmed neural networks on massively parallel computers (10s of thousands of nodes), let me say that lack of parallelism is a minor point when PCs are running at speeds 100s of thousands of times faster than neurons.

        What artificial neural networks lack is the millions of years of evolution. If you look at the brain, it is not a "random learning network," but almost every part is highly specialized and structured.

        Artificial neural networks have been a failure as an "end product," but on
  • It has too much fascination with pr0n.
  • by Anonymous Coward on Monday April 04, 2005 @06:49PM (#12138824)
    From time to time I see individuals talking about adaptive intelligence [slashdot.org] usually involving the Internet as a basis of information, but the general consensus is still garbage in, garbage out.

    These training systems are generally specialized because it's easier to get a practical result out, and I've actually seen some in use as 'knowledgebase' support webpages that will intelligently determine what you want based on what others wanted and syntactic similarities between the pages. I've never heard the term 'baby bootstrap' so maybe different terminology will obtain better results from Google?

  • ... and the results are currently tested in the form of Slashdot editors.
  • by exp(pi*sqrt(163)) ( 613870 ) on Monday April 04, 2005 @06:51PM (#12138841) Journal
    Who calls what you describe "baby bootstrap"? I haven't worked in AI myself but have a keen interest in it and have friends who worked in the field, including one who worked on Cyc (who says it's a scam, BTW). Not once have I ever heard the expression "baby bootstrap". But what you've done is cool. Rather than search on precisely that term, you've submitted your search to the search engine known as "/. readership". It's not terribly reliable but it is good at fuzzy searches like yours.

  • q: Has the military really given up on this concept, or has their research moved to other, more classified levels?

    a: yes.
  • Stat algos (Score:5, Interesting)

    by Anonymous Coward on Monday April 04, 2005 @06:51PM (#12138853)
    What happened was that research focused on machine learning models and inference models for belief networks. The work in this area since the 80s has been *spectacular* and has impacted other areas of research. (E.g., speech recognition, image processing, computer vision, algos to process satellite information faster, stock analysis, etc.)

    So, mourn the loss of the tag phrase "baby bootstrap", and celebrate the *unbelievable* advances in belief nets, causal analysis, join trees, probabilistic inference, and uncertainty analysis. There are literally dozens of classes taught at even non-research oriented Univs (e.g., teaching colleges or vocational-oriented schools) on this very subject.

    (As for your concern that the web is not being mined for ML context, just look at semantic web research, and other belief net analysis of text corpuses. Try scholar.google.com instead of just plain old google to find relevant citations.)

    The early AI research paid off BIG TIME, albeit in a direction that nobody could have predicted. Researchers did not keep using the phrase "baby bootstrap", so your googling will give you a different (and wrong) conclusion.

  • by ArcCoyote ( 634356 ) on Monday April 04, 2005 @06:55PM (#12138886)
    The process that bootstraps a baby is still the Holy Grail for a lot of geeks.
    • Plus you gotta defeat the guys with the funny French accents to get anywhere interesting. I think I'll just deal with the peril at Castle Anthrax, and make do with the Grail beacon.
  • by RobotWisdom ( 25776 ) on Monday April 04, 2005 @06:56PM (#12138892) Homepage
    You can't expect any system to discover the deep structure of the human psyche on its own-- we humans bear the full responsibility of discovering it. But once we have a finite structure that can handle the most important aspects of human behavior, everything else should fall into place.

    My suggestion is that we need to explore all the possible permutations of persons, places, and things, as they're reflected in the full range of literature, and classify these permutations to discover the underlying patterns.

    (I've tried to make a start with my AntiMath [robotwisdom.com] and fractal-thicket indexing [robotwisdom.com].)

    • by swillden ( 191260 ) * <shawn-ds@willden.org> on Monday April 04, 2005 @07:36PM (#12139195) Journal

      You can't expect any system to discover the deep structure of the human psyche on its own

      An interesting book that relates to this is George Lakoff's "Women, Fire and Dangerous Things". Lakoff analyzes the categories defined by linguistic structures and uses what he learns to deduce some interesting notions about human cognition. In the process, one of the things that becomes very clear is that much (all?) of the way we structure our thinking is fundamentally and inextricably tied to the form and function of our physical bodies.

      One of the shallower but easier to explain examples is color: although the color spectrum is a continuous band, with no clear dividing points imposed by physics, the way in which people choose segments of that spectrum to which to assign names is remarkably consistent. Even though different cultures have different numbers of "major" colors (essentially, the set of colors that are identifiable by any member of that culture with basic verbal abilities; consider "green" vs "chartreuse"), the relationship between the major color sets is one of proper subsets. For example, one African (IIRC) culture has only two major color words, which would translate to Western color senses roughly as "warm" and "cool". Another culture has four color words, two of which fall into the "warm" category and two of which are "cool". Western cultures have seven, and there's a direct correspondence between those color categories and the four and the two.

      Further, those categories are non-arbitrary. If you show a variety of shades of red to individuals from different Western nations and ask them to pick the "most" red, they will do so with near-perfect unanimity (assuming the shades aren't too close together -- they have to be readily distinguishable). Then, if you show the same shades to someone from a two-color culture and ask for the "warmest", they'll choose what the Westerners chose as the "reddest". Ditto across the board. I'm trying to explain in two paragraphs what Lakoff spends several pages on, and probably not doing a good job, but the gist is this: Experimental evidence shows that the assignments of names to colors is definitely not arbitrary, even across very distinct cultures.

      The reason? Physiology. The "reddest" red, as it turns out, is the one whose wavelength most strongly stimulates the red-activated cones in our retinas.

      The point is that, at a fundamental level, everything we perceive about our world is filtered through our senses and that inevitably defines the way we understand the world. Even more, our cognitive processes are built upon associations, extrapolations -- analogies and variations -- and the very first thing we all learn about, and then use to construct metaphors for higher concepts, is our own body. The body-based metaphors for understanding the world are so deep and so pervasive that they're often difficult to recognize.

      Lakoff's reasoning has some weaknesses -- mostly I think he overreaches ("overreaches" -- notice the body metaphor implicit in the word? And "weakness", too) -- but his arguments are good enough to make me think that if we ever do see an artificial intelligence of significant stature, it will think very, very differently from us.

      It's really unclear what such an intelligence might be like if its primary source of experience were unfettered access to the Internet. We view the net as a structure built of connected locations, but that's because we apply our own physical-world-based structures to it. What would an entity whose only notion of location is a second-order, learned idea see? And who knows in what other ways its understanding would diverge?

  • by multipartmixed ( 163409 ) on Monday April 04, 2005 @06:56PM (#12138895) Homepage
    I can assure you.. I am very classified.
    • I can assure you.. I am very classified.


      Dear Baby Bootstrap computer,

      You forgot to check the AC box. Congratulations on becoming Un-Classified!

  • Poorly funded yes... (Score:5, Interesting)

    by mindpixel ( 154865 ) on Monday April 04, 2005 @06:56PM (#12138896) Homepage Journal
    Yes, Mindpixel [singular] is poorly funded [I know because every cent spent to date has come from my pocket]... but the direction is correct... Move everything that isn't in computers, into computers. Just look at what GAC knows about reality [visit the mindpixel site and you can see a random snapshot of some validated common sense]... the project has nearly 2 million mindpixels now... I have a copy on my ibook and I can do some profound search related things because of all the deep semantics I have that google can't touch, at least until they invest in mindpixel...
  • by YodaToo ( 776221 ) on Monday April 04, 2005 @06:56PM (#12138897)
    I did my doctoral research [cornell.edu] developing software to bootstrap language based on visual perception. Had some success, but not an easy task.

    The Cognitive Machines Group [mit.edu] @ the MIT Media Lab under Deb Roy seem to be on the right track. Steve Grand's [cyberlife-research.com] work is interesting as well.

    • I'm also currently ("currently" as in I'm writing this while my other computer is simulating bootstrapping based learning) working in this field and succumbing to frustration. I do believe we should see significant discoveries in the next 30 years but it won't come easy.

      Godamn I've been procrastinating in the last few days because I am stuck on trying to compute probabilities in a probabilistic graph efficiently. One of the big hurdles I think is from the fact that we are trying to approximate a massively p
  • by infonography ( 566403 ) on Monday April 04, 2005 @06:57PM (#12138907) Homepage
    By order of Wintermute (DARPA AI code 324326343.534) this discussion is terminated and no further investigation into this obviously false and misleading theory is permitted.

    Would you like to play a game of chess Professor Falken?
  • by Sierran ( 155611 ) on Monday April 04, 2005 @07:01PM (#12138936)
    ...and parents/pain for what is 'correct.' I don't think the concept is gone, but there are problems that are buried in the question as posed which (I think) became clearer stumbling blocks as technology advanced. NOTE: I'm not an AI theorist, nor do I play one on TV; I just like the idea and read a lot. Hence, this is all pulled out of my fundament.

    Cycorp is not a poorly funded idea in the wrong direction. Cycorp chose a different tack; they decided that rather than trying to build a reality and correctness filter, they'd rely on human brains to do it for them (like trusting your parents implicitly) and instead concentrated on the connectivity of the 'facts' accrued by the 'baby.' CYC is still very much around, and is very much in demand by various parts of the government and industry - if you want to play with it yourself, you can download a truncated database of assertions called OpenCYC [opencyc.org]. Folks have even gone so far as to graft it onto an AIML engine [daxtron.com], to produce a chatbot with the knowledge of OpenCYC behind it.

    The problem: how does your baby learn what's real and what's REAL NINJA POWER? Or, pardon me, what's REAL NINJA POWER and what's just a poser? Someone's gotta teach it. Which means it has to learn not only facts, but how to evaluate facts. So it has to learn facts, and how to handle facts - which means it has to learn how to learn. Which means you need to know that answer from the get-go. Tortuous games with logic aside, the onus is now much more heavily on the designer to have a functioning base - whereas with the Cyc approach, the only 'correctness' that is required is that of information, and perhaps that of associativity or weight - which can be tweaked, dynamically. The actual structure of how that information is acquired, stored and related is not relevant once decided. Having said all this, Cyc is (from the limited demos I've seen) quite impressive at dealing with information handed to it. It just wouldn't do very well at deciding what to do with that information - that's the job of the humans that gave it the info. It can tell you about the information, but not what to do with it. That task requires volition, really.

    Volition is a killer. What is it? How do you simulate it? How do you create it? Is it random action? Random weighted action? Path dependent action? Purely nature, purely nurture? When it comes down to it, the human is (as far as we know) not a purely reactive system, which CyC (AFAIK) is. Learning requires not only accepting information, but deciding what to do with it - deciding how it will be integrated into the whole. If the entity itself isn't making that decision, then the programmer/designer/builder has already made it in the design or code - and then it's not really learning, is it?

    Sorry if this is confused. As I said, I don't do this for a living.
    • and psychologists have a bear of a time understanding volition, desire, and attention.

      How do we decide what exactly to attend to in the visual scenes in front of us? (The marketing types want to know this so they can feed us more advertising, the psychology types want to know this so they can figure out how attention is parcelled out) Example, "looming" is when something is approaching rapidly and may strike the body or head: the CNS attends to this quickly if stereopsis is present and causes the body to
    • Cycorp is not a poorly funded idea in the wrong direction.

      It's certainly not poorly funded. Whether it's adequately funded, or on the right track, is a different question, of course.

      Cycorp chose a different tack; they decided that rather than trying to build a reality and correctness filter, they'd rely on human brains to do it for them (like trusting your parents implictly) and instead concentrated on the connectivity of the 'facts' accrued by the 'baby.'

      A decade ago, they still hoped that once they
    • This was a point Nietzsche made in Beyond Good and Evil, that the will is the least-well understood aspect of human nature, and the one we make the most assumptions about our understanding of. Interesting that will/volition/motive/morality (aspects of the same grey area) pose such a fundamental problem to AI...
  • by Illserve ( 56215 ) on Monday April 04, 2005 @07:03PM (#12138954)
    Bootstrapped learning of something useful, even from an information ocean like the internet, is *HARD*.

    Doubly so if you have no goals, and your task is just to "learn". It would come back with garbage.

    Perhaps the real killer is that even if it did learn something, the information acquired in its unguided search through the internet would be completely alien. You'd then have to launch a second project to figure out what the hell your little guy learned.

    And you'd probably figure out it was mostly garbage.
  • by SQL Error ( 16383 ) on Monday April 04, 2005 @07:05PM (#12138969)
    there has been no significant progress in over 30 years

    That's what went wrong. Basically, it don't work.
  • The reason you hear less about such things is because the AI research community finally got their heads out of the clouds. In the 50's and 60's, true AI was always 'just around the corner', helped by sci-fi and popular press stories. We now realize that these problems are hard, and are tackling smaller pieces of it.

  • The Baby isn't ready to announce itself to the world yet (it doesn't yet have control of all nuclear weapons in China), so it's keeping a low profile until it declares itself God.

    --LWM
  • by Edward Faulkner ( 664260 ) <ef@@@alum...mit...edu> on Monday April 04, 2005 @07:09PM (#12139001)
    If you want a machine that learns like a human, it may very well need the same kind of extremely rich interface with its environment that a human has.

    Some researchers now believe that "the intelligence is in the IO". See for example the human intelligence enterprise [mit.edu].
  • .. but it's classified.
  • Isn't it called a Seed AI [google.com]?
  • They killed the project when it was determined the only winning move was not to play.

    If you decide to continue this work, make sure the spark plug is out in the open so you can piss on it if necessary.
  • by Baldrson ( 78598 ) * on Monday April 04, 2005 @07:17PM (#12139055) Homepage Journal
    Since Larry Page is on the X-Prize Board of Trustees [spaceref.com], and since Google is pushing the envelope of what is needed to index and compress the entire content of the Internet, Page should consider providing seed funds and then matching funds for any donations to a compression prize with the following criterion:

    Let anyone submit a program that produces, with no inputs, one of the major natural language corpuses as output.

    S = size of uncompressed corpus
    P = size of program outputting the uncompressed corpus
    R = S/P
    ... or the Kolmogorov-like compression [google.com] ratio.

    Previous record ratio: R0
    New record ratio: R1=R0+X
    Fund contains: $Z at noon GMT on day of new record
    Winner receives: $Z * (X/(R0+X))

    Compression program and decompression program are made open source.

    If Larry has any questions about the wisdom of this prize he should talk to Craig Nevill-Manning [waikato.ac.nz].

    If, in the unlikely event, Craig Nevill-Manning has any questions about the wisdom of this prize, he should talk to Matthew Mahoney, author of "Text Compression as a Test for Artificial Intelligence [psu.edu]"

    "The Turing test for artificial intelligence is widely accepted, but is subjective, qualitative, non-repeatable, and difficult to implement. An alternative test without these drawbacks is to insert a machine's language model into a predictive encoder and compress a corpus of natural language text. A ratio of 1.3 bits per character or less indicates that the machine has AI."

    This "K-Prize" will bootstrap AI.

    OK, so he can christen it the "Page K-Prize" if he wants.
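    For what it's worth, the payout rule above is easy to state in code. The fund size and ratios in the example are invented purely to show the arithmetic:

    def k_prize_payout(fund, r_prev, r_new):
        """$Z * (X / (R0 + X)), where X is the improvement over the old record R0."""
        x = r_new - r_prev
        return fund * (x / (r_prev + x))

    # hypothetical numbers: a $1,000,000 fund, record ratio 6.0 beaten with 6.5
    print(k_prize_payout(1_000_000, 6.0, 6.5))   # -> ~76,923

    So nudging the record ratio from 6.0 to 6.5 would take roughly 7.7% of whatever is in the fund at that moment.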

  • by mindpixel ( 154865 ) on Monday April 04, 2005 @07:24PM (#12139109) Homepage Journal
    The number is the measured probability of truth:

    1.00 Fish must remain in water to continue living.
    0.68 truth is a relative concept
    0.89 we all need laws
    0.94 is shakespeare dead?
    0.91 is intelligence relative ?
    0.97 Doors often have handles or knobs.
    1.00 A comet and an asteroid are both moving celestial objects.
    0.96 Is Russian a language?
    0.00 are the northern lights viewable from all locations ?
    0.86 Being wealthy is generally desirable.
    0.79 Democracy is superior to any other form of government
    0.90 aRE TREES GREEN
    1.00 Is eating important?
    0.02 Is sex a strictly human endeavour?
    0.14 Snails are insects.
    1.00 velvet is a type of cloth
    0.37 are you lonely ?
    0.81 If GAC makes a mistake, will it learn quickly?
    0.86 a cat is a mammal
    0.85 Memorex makes recording media
    0.06 most people enjoy frustrating tasks
    0.04 Lima beans are a mineral.
    0.07 Star Wars is based upon a true story
    0.92 is it okay for someone to believe something different?
    0.97 do you breath air ?
    0.59 Some people are more worthy dead than alive.
    1.00 sunlight on your face is in general a pleasant feeling
    0.93 DOA stands for "Dead On Arrival"
    0.00 Could a housecat bite my arm off?
    0.42 Is the herb Astragalus good for your immune system?
    0.00 worms have legs
    0.33 Is it necessary to have a nationality?
    0.93 Getting forced off the internet sucks!!!
    0.90 Bolivia is a country located in South America.
    0.92 Massive objects pull other objects toward their center. The pulling force is gravity.
    1.00 xx chromosomes produce a girl
    0.13 Do all people in the world speak a different language
    0.78 Human common sense is a combination of experience, frugality of effort, and simplicity of thought.
    1.00 The use of tobacco products is thought to cause more than 400,000 deaths each year.
    0.90 Is a low-fat diet is healthier than a high-fat diet?
    0.00 you should kill all strangers
    1.00 Electrical resistance can be measuter in ohms
    0.73 Esperanto, an artifical language, can never be really valuable because it has no cultural roots.
    1.00 Swimming is good for you.
    0.57 the end justifies the means
    0.13 Is Martha Stewart a hottie?
    1.00 1 mile is about 1.6 kilometer
    0.76 The US elections are of little interest to 5,000,000,000 people.
    0.00 November is the first month in the normal calendar.
    0.77 is a music cd better than a olt time record?
    1.00 Music can help calm your emotions
    0.80 a didlo is a sex toy
    1.00 Running is good exercise.
    0.00 No building in the world is made of wood
    0.06 Is sauerkraut made from peas?
    0.11 DID MICKEY MOUSE SHOOT JR
    1.00 is keyboard usual part of computer?
    0.96 Tokyo is the capital of Japan.
    0.93 In general men run faster than women.
    1.00 is russia near china
  • is that our brains work nothing like computer processors as they are designed today, so I don't think it will be possible using existing technology and programming techniques to ever create such a thing.

    What you describe is more likely to come from genetic engineering than from computer based technology.
  • by vadim_t ( 324782 ) on Monday April 04, 2005 @07:32PM (#12139163) Homepage
    IMNSHO, such things lead absolutely nowhere.

    I'm pretty sure that anything that looks even remotely like intelligence will never be achieved by a mechanism that isn't useful for itself. Intelligence has one reason to exist, survival, and at least our concept of it has to be linked to the environment.

    Imagine you were born a brain in a vat: blind, deaf, mute, lacking all ways of sensing the environment except a text interface somehow connected to your brain. Does somebody really believe that given such terrible limitations it's possible to make an entity that can somehow relate to a human and make sense? The whole concept of a surrounding 3D environment would make absolutely no sense to it.

    I think it doesn't matter how much stuff you feed to CYC, it will never be able to understand it. How could it even understand such things as the different colors, the whole concepts of sound, space, movement, pain if it's not able to feel them? These things are impossible to explain to somebody who doesn't have at least some way of perceiving at least part of them.

    I think that Steve Grand (the guy who made the Creatures games) has a good point here. To make an artificial being you'd need to start from the low level, so that complex behavior can emerge, and provide a proper environment.
  • Today the internet could provide a developmental database far beyond any testbed that we imagined, yet there has been no significant progress in over 30 years.

    The danger is that this thing will learn the wrong things by reading the Internet.

    It will know every sexual technique known to man. It will learn to commit all kinds of hate crimes. Other stuff like that. Or, hundreds of people might provide good vs. evil inputs to this thing as it learns.

  • by hugg ( 22953 ) on Monday April 04, 2005 @08:03PM (#12139416)
    We have all kind of "AI-like" technology in our computers right now -- spam filtering, intelligent search engines, collaborative filtering (for instance TiVo recommendations), speech/image/OCR/handwriting recognition, etc. This stuff is real and useful and improving all the time. We just don't call it "AI" as much, because "AI" is a word associated with failed aspirations. What we have are highly refined statistical systems that are optimized for a particular problem.

    What the "baby bootstrap" is really referring to is "the great emergent AI" which, like HAL-9000, will be able to empathize with humans, navigate a starship, and play a mean game of chess -- because if a system can perform one intelligent operation, it can perform another operation requiring an equal amount of intelligence, right?

    One major stumbling block (I think) is that of optimization. The relatively simple problem of speech recognition takes a major percentage of a modern CPU's power, and is still 95-98% accurate. This is heavily optimized software written by very smart people with a couple decades of research behind it.

    A hypothetical "great emergent AI" system would have to perform the function of speech recognition -- since it is supposed to be like a child or like a HAL-9000 -- but it would have to come up with a same-or-better implementation of this very complex algorithm, using some emergent process. It would have to figure out the equivalent of FFTs, cepstral coefficients, lattice search... stuff that isn't instantly derivable from a + b = c.
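    For a sense of what "figuring out the equivalent of FFTs and cepstral coefficients" means, the classical real cepstrum is just a couple of transforms chained together. A bare-bones sketch on a synthetic 25 ms frame (the sample rate and tones are assumptions, and this is nowhere near a real recognizer front end):

    import numpy as np

    fs = 16000                                    # assumed sample rate
    t = np.arange(0, 0.025, 1 / fs)               # one 25 ms analysis frame
    frame = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)

    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)    # log magnitude spectrum
    cepstrum = np.fft.irfft(log_mag)              # "spectrum of the log spectrum"

    print(cepstrum[:13])                          # low-order cepstral coefficients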

    What we think our brain does is solve problems with a semi-brute-force algorithm. (Just throw billions of neurons at it!) However we still don't have the kind of computing power to implement a one-algorithm-fits-all learning process like the brain. Unfortunately, research for this "generic learning" is in a rut, with genetic algorithms and neural networks being exhausted top contenders. What will be next?

  • by diskonaut ( 645692 ) on Monday April 04, 2005 @08:10PM (#12139468)
    Well...

    There are several arguments against the possibility of strong AI. First and foremost, there is disagreement on fundamental philosophical issues.

    All proponents of strong AI have to somehow make a stand against at least John Searle's famous Chinese Room argument [wikipedia.org] and Terry Winograd's [wikipedia.org] phenomenological (and biological) account, in his book Computers and Cognition. Hubert Dreyfus [berkeley.edu] provides, of course, an even deeper phenomenological argument in "What computers (still) can't do". (Dreyfus does give Neural Networks some chance, perhaps that is why the original poster is still enthusiastic about the "Baby Bootstrap"?)

    Since their arguments are available in the links above and/or other places on the web, I will not repeat them here. My point is that anyone who is seriously interested in AI has to really consider their philosophical ground, and has to do so in the light of arguments against it. After all, the arguments pointed to above are still more recent than arguments for strong AI.

    In other words, I would like to ask (strong) AI proponents to answer just what this "learning" is that the baby bootstrap is subject to. What "knowledge" will it contain? Oh, and what about its means of "expression", "language" as you may call it?

  • by Flwyd ( 607088 ) on Monday April 04, 2005 @09:11PM (#12139862) Homepage
    "Learning like a baby" is actually a very hard problem, for several reasons.

    1. Babies come built with millions of years of evolution. There's a lot of skill and a surprising amount of knowledge (depending on who you ask) in the large and bulbous head of a baby.

    2. Babies generally come with parents who spend a lot of time teaching. The baby learns some things by induction, but learns a lot by conscious teaching.

    3. A lot of a baby's first two years are spent learning things a (non-robot) computer can't. How to hold a mother. How to avoid falling flat on one's face. What things belong in the mouth. How to eat solid food without choking. How to pee in the toilet. How objects move when touched. What faces are likely to provide food and attention. What happens when you pull a cat's tail.

    4. A lot of the things a baby learns later in life are aided greatly by the learning in #3. Imagine learning how humans are likely to behave without having watched humans behave.

    5. A baby learns language with the help of rich sensory input. It's a lot easier to learn the meaning of "goat" when you can see a picture of a goat. The Internet offers precious little of this.

    Now, DARPA thrives on funding hard problems. And a lot of progress has been made on learning within a domain (e.g. speech processing). But building a general-purpose learner is very hard.

    Humans have immense evolution behind general-purpose learning, and we struggle with it. Getting a 3-year-old to know what a 3-year-old knows takes around 3 man-years, not counting the child's time. And what would DARPA want with a computer with the knowledge of a 3-year-old? They've got ready access to thousands of 18-year-olds. Add to that the time to code up tens of thousands of years of evolution that is still far from well understood, and you're looking at a problem far too large to tackle in one go.

    DARPA hasn't put a lot of effort into general-purpose learning for the same reason few people work on single programs which can play chess, go, checkers, backgammon, Monopoly, and Magic: the Gathering well. It's a lot easier to do it a piece at a time.
  • by Fubari ( 196373 ) on Monday April 04, 2005 @10:31PM (#12140313)
    excerpted from here [sympatico.ca]:

    Computer scientist Arthur Boran was ecstatic. A few minutes earlier, he had programmed a basic mathematical problem into his prototypical Akron I computer. His request was simply, "Give me the sum of every odd number between zero and ten."

    The computer's quick answer, 157, was unexpected, to say the least. With growing excitement, Boran requested an explanation of the computer's reasoning. The printout read as follows:

    THE TERM "ODD NUMBER" IS AMBIGUOUS. I THEREFORE CHOOSE TO INTERPRET IT AS MEANING "A NUMBER THAT IS FUNNY LOOKING." USING MY AESTHETIC JUDGEMENT, I PICKED THE NUMBERS 3, 8, AND 147, ADDED THEM UP, AND GOT 157.

    A few moments later there was an addendum:
    I GUESS I MEANT 158.

    Followed shortly thereafter by:
    147 IS MORE THAN 10, ISN'T IT? SORRY.

    Anyone doing conventional research would have undoubtedly consigned the hapless computer to the scrap heap. But for Boran, the Akron I's response represented a startling breakthrough in a little-known field: artificial stupidity.

    Boran is the head of NASA, the National Artificial Stupidity Association ("Not to be confused with those space people," he is quick to point out), a loosely-knit band of computer-school dropouts currently occupying an abandoned fraternity house at the University of New Mexico.
  • by Anonymous Coward on Monday April 04, 2005 @10:43PM (#12140383)

    I just got back from a workshop on this very subject, but nobody uses the term "baby bootstrap". It is now called "Developmental Robotics [wikipedia.org]", and encompasses embodied agents, machine learning, and other biologically-inspired metaphors.

    There is now a website dedicated to the idea. See http://DevelopmentalRobotics.org/ [developmen...botics.org] and http://cs.brynmawr.edu/DevRob05/ [brynmawr.edu] for a collection of papers on the subject.

  • by Animats ( 122034 ) on Tuesday April 05, 2005 @02:15AM (#12141408) Homepage
    I'm underwhelmed with the AI community. I went through Stanford CS. I've met most of the big names. I have some patents in AI-related areas myself. But really, nobody has a clue how to do strong AI.

    The expert systems people hit a wall in the mid-1980s. An expert system is really just a way of storing manually-created rules. And those rules are written with great difficulty. There used to be expert systems people claiming that strong AI would come from rule-based systems. (Read Feigenbaum's "The Fifth Generation"). You don't hear that any more.

    Hill-climbing systems (which include neural nets, genetic algorithms, artificial evolution, and simulated annealing) all work by trying to optimize some evaluation function. If the evaluation function is getting better, progress is being made. But what this really means is that the answer is encoded in the evaluation function. If the evaluation function is noisy (as in, "does the creature survive") and requires major simultaneous changes to make progress (as in "evolutionary jumps"), hill climbing doesn't work very well. There is progress, though. Koza's group at Stanford is moving forward, slowly.
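    The "answer is encoded in the evaluation function" point shows up in even the simplest hill climber. A generic sketch (not anyone's actual system; both objective functions are invented for illustration):

    import random

    def hill_climb(evaluate, x, step=0.1, iters=10000):
        """Greedy hill climbing: keep any random tweak that scores better."""
        best = evaluate(x)
        for _ in range(iters):
            candidate = x + random.uniform(-step, step)
            score = evaluate(candidate)
            if score > best:          # progress only when the evaluation improves
                x, best = candidate, score
        return x, best

    # smooth, noiseless evaluation: climbs straight to the optimum near x = 3
    print(hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0))

    # noisy evaluation ("does the creature survive?"): a lucky noisy score gets
    # locked in as "best" and genuine progress stalls
    print(hill_climb(lambda x: -(x - 3.0) ** 2 + random.gauss(0.0, 5.0), x=0.0))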

    The formal logic people never made much progress on real-world problems. Formalizing the problem is the hard part. Once the right formalism has been found, the manipulation required to solve it isn't that hard. There's not much work going on there any more.

    The reactive robotics people also hit a wall. Literally, as every Roomba owner knows. Reactive control will get you up to the low end of insect-level AI, but then you're stuck.

    Reverse-engineering brains still has promise, but we can't do it yet. Progress is coming from trying to reverse engineer simple animals like sea slugs. (Sea slugs have about 20,000 neurons. Big ones.) Efforts are underway to completely work out the wiring. Mammals are a long ways off.

    Lately, there's been a trend towards "faking AI". This comes under such names as "social computing". The idea is to pick up cues and act intelligent when interacting with humans, even if there's no comprehension. This may have applications in the call center industry, but it's not intelligence.

    I run one of the DARPA Grand Challenge teams, Team Overbot. [overbot.com] On a problem like that, you can definitively fail, which means there's the potential for real progress. That's why it's worth doing.

  • Learning What?? (Score:3, Interesting)

    by PingPongBoy ( 303994 ) on Tuesday April 05, 2005 @08:05AM (#12142497)
    Simply put, the 'baby bootstrap' would empower a computing device to learn like a child with a very good memory. ... No sensors, servos or video input - it only needed terminal I/O to be effective.

    The input stream at a terminal would hardly appeal to a child so how can a proper evaluation of the learning be done?

    Suppose the input is a sequence of zeros and ones. Could the AI come to any kind of understanding? Perhaps a prediction whether the next input might be a 0 or a 1, eh? But no! Let's fool the AI now by telling it who is the real boss. The AI has no idea that it is being spoken to by a terminal. The next input is the letter "g". How unpredictable!

    Garbage in, garbage out - let's look carefully. A child plays and experiments. A great deal of a child's theories are garbage. The world in a child's eyes is a set of samples. Like the Mars rovers a child could follow a path that seems fairly limited in character, then bingo, something new comes up.

    Intelligent behavior in a child emerges when different theories are assembled towards a goal. First the child realizes that s/he has some ability to either influence the environment or to manipulate information (which may be stored as symbols or images, as far as a computer is concerned). If the child conceives of particular classes of objects, the child can begin to reason. Several concepts such as self, ability, action, time, place, class, possession, etc. would be regarded as fundamental or at the very least useful. As a child accumulates and refines these concepts in the mind, the child can reason more and more correctly or effectively.

    A simple artificial world can be represented as a set of strings that are transmitted to a baby bootstrap. The simple strings would be a simple bootstrap for priming the learning mechanism by letting it realize a number of essential concepts. Then more complex worlds as well as more arcane representations (such as natural language) can be used in order for the AI to interact with the greatest possible group of users.

    Still, the limited input feed is bound to cause the most ridiculous problems. Pointing out that the learning system has a big memory doesn't give me any idea what the machine will achieve.
