
The Baby Bootstrap? 435

An anonymous reader asks: "Slashdot recently covered a story that DARPA would significantly cut CS research. When I was completing graduate work in AI, the 'baby bootstrap' was considered the holy grail of military applications. Simply put, the 'baby bootstrap' would empower a computing device to learn like a child with a very good memory. DARPA poured a small fortune into the research. No sensors, servos or video input - it only needed terminal I/O to be effective. Today the internet could provide a developmental database far beyond any testbed that we imagined, yet there has been no significant progress in over 30 years. Mindpixel and Cycorp seem typical of poorly funded efforts headed in the wrong direction, and all we hear from DARPA is autonomous robots. NIST seems more interested in industrial applications. Even Google is remarkably devoid of anything about the 'baby bootstrap'. What went wrong? Has the military really given up on this concept, or has their research moved to other, more classified levels?"
  • by Anonymous Coward on Monday April 04, 2005 @06:49PM (#12138824)
    From time to time I see individuals talking about adaptive intelligence [slashdot.org] usually involving the Internet as a basis of information, but the general consensus is still garbage in, garbage out.

    These training systems are generally specialized because it's easier to get a practical result out, and I've actually seen some in use as 'knowledgebase' support webpages that will intelligently determine what you want based on what others wanted and syntactic similarities between the pages. I've never heard the term 'baby bootstrap', so maybe different terminology would yield better results from Google?
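
    As a rough illustration of the kind of 'knowledgebase' ranking described above, here is a minimal sketch; the articles, click counts, and popularity weighting are all invented for the example and not taken from any real product:

        # Toy sketch of a "knowledgebase" ranker: score each support article by
        # word overlap with the user's query (syntactic similarity) plus how
        # often other users ended up at that article (what others wanted).
        # All article text and click counts here are made up for illustration.
        import math
        from collections import Counter

        ARTICLES = {
            "reset-password": "how to reset a forgotten password for your account",
            "printer-offline": "printer shows offline and will not print documents",
            "vpn-setup": "setting up the vpn client to connect from home",
        }
        CLICKS = Counter({"reset-password": 120, "printer-offline": 45, "vpn-setup": 30})

        def vector(text):
            return Counter(text.lower().split())

        def cosine(a, b):
            common = set(a) & set(b)
            dot = sum(a[w] * b[w] for w in common)
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def rank(query, popularity_weight=0.2):
            q = vector(query)
            total_clicks = sum(CLICKS.values())
            scores = {}
            for name, text in ARTICLES.items():
                similarity = cosine(q, vector(text))
                popularity = CLICKS[name] / total_clicks
                scores[name] = (1 - popularity_weight) * similarity + popularity_weight * popularity
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        print(rank("I forgot my password"))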

  • Re:I for one (Score:3, Insightful)

    by Gentlewhisper ( 759800 ) on Monday April 04, 2005 @06:57PM (#12138905)
    I'm not sure if it is related, but I once read an article about some research DARPA is doing in the field of aeronautics, where they have whole squadrons of autonomous fighter jets controlled by only one human (who also happens to be part of the squadron).

    It is some pretty neat stuff, especially if you are having trouble enlisting enough humans to fight wars for you.
  • Re:baby bootstrap (Score:4, Insightful)

    by cynic10508 ( 785816 ) on Monday April 04, 2005 @07:06PM (#12138974) Journal
    Dreyfus devotes a whole book to asking why these things don't work. I believe Minsky overestimates the project. It may all boil down to the fact that purely syntactic (symbol manipulation) work isn't going to give you any semantically meaningful output.
  • Re:baby bootstrap (Score:5, Insightful)

    by man_ls ( 248470 ) on Monday April 04, 2005 @07:10PM (#12139009)
    I doubt it would be too difficult to code -- if we knew the mechanism by which it proceeded.

    It's hard to code a procedure to replicate the working of the mind... if you don't know how the mind does it in the first place.
  • Re:baby bootstrap (Score:3, Insightful)

    by rgmoore ( 133276 ) * <glandauer@charter.net> on Monday April 04, 2005 @07:37PM (#12139202) Homepage

    I think that a key issue is that not everything in our brains is handled the same way, so not all of it is equally easy to program. Conscious thought is essentially a software process running on the part of our brain that serves as a general-purpose computer. Our unconscious processes are essentially hardware processes running in parts of our brains that are specifically structured to do just that one thing. The fact that unconscious processes are run in hardware means that they're not subject to introspection. I suspect also that many of those processes are the kinds of things that are most efficiently done with custom hardware like DSPs rather than with general purpose CPUs.

  • Re:baby bootstrap (Score:3, Insightful)

    by thelen ( 208445 ) on Monday April 04, 2005 @07:41PM (#12139229) Homepage

    The system might generate syntactically correct outcomes, but have we really solved the problem if we, the observers, are still the ones to apply semantic content? Isn't the point of Searle's Chinese Room thought experiment to show that syntactic transformations are not sufficient to imbue the transformer with a semantic understanding of its activity?

  • Cycorp is not a poorly funded idea in the wrong direction.

    It's certainly not poorly funded. Whether it's adequately funded, or on the right track, is a different question, of course.

    Cycorp chose a different tack; they decided that rather than trying to build a reality and correctness filter, they'd rely on human brains to do it for them (like trusting your parents implicitly) and instead concentrated on the connectivity of the 'facts' accrued by the 'baby.'

    A decade ago, they still hoped that once they had manually laid the groundwork, the system would bootstrap itself, reading newspapers and so on. Bootstrapping was expected to start in the late '90s, as was commercial adoption (integration into Windows, for example). It seems that neither has happened, at least on the predicted scale. Cyc may not be a failure (it's hard to tell, because a lot of it is a trade secret), but it couldn't reach its ambitious goals.
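
    As a toy illustration of that 'hand-asserted facts plus connectivity' idea, here is a minimal sketch; it is not CycL or anything Cyc actually uses, and the facts are invented:

        # Toy sketch of "hand-asserted facts plus connectivity": humans assert
        # simple relations, and the system answers queries by chaining them.
        # This is not Cyc's real representation, just an illustration.
        FACTS = {
            ("isa", "penguin", "bird"),
            ("isa", "bird", "animal"),
            ("has-part", "bird", "wings"),
            ("cannot", "penguin", "fly"),
        }

        def isa_chain(thing, category):
            """True if 'thing' reaches 'category' through asserted isa links."""
            seen, frontier = set(), {thing}
            while frontier:
                current = frontier.pop()
                if current == category:
                    return True
                if current in seen:
                    continue
                seen.add(current)
                frontier |= {obj for (rel, subj, obj) in FACTS
                             if rel == "isa" and subj == current}
            return False

        print(isa_chain("penguin", "animal"))   # True: penguin -> bird -> animal
        print(isa_chain("penguin", "vehicle"))  # False: no asserted chain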
  • Re:Doublethink (Score:2, Insightful)

    by wolenczak ( 517857 ) <paco@cot e r a .org> on Monday April 04, 2005 @08:04PM (#12139421) Homepage
    Regular implementations of knowledge engines in AI use true/false semantics for automated learning; that is the kind of answer you could expect from such an application, just like Mindpixel, for example. Sad you missed the sarcasm.
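
    For what it's worth, a rough sketch of what that true/false (Mindpixel-style) scheme amounts to; the propositions and vote counts below are made up for illustration:

        # Rough sketch of true/false propositions validated by human votes
        # (Mindpixel-style). The statements and counts here are invented.
        PROPOSITIONS = {
            "water is wet": (412, 3),              # (yes votes, no votes)
            "the sun orbits the earth": (10, 390),
            "a penguin is a bird": (350, 12),
        }

        def truth_estimate(statement):
            yes, no = PROPOSITIONS.get(statement, (0, 0))
            total = yes + no
            if total == 0:
                return None          # the system has no opinion at all
            return yes / total       # probability-like score in [0, 1]

        print(truth_estimate("a penguin is a bird"))       # ~0.97
        print(truth_estimate("computers dream of sheep"))  # None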
  • Re:I for one (Score:5, Insightful)

    by fyngyrz ( 762201 ) on Monday April 04, 2005 @08:06PM (#12139433) Homepage Journal
    Because fighters are pushed (or designed) to the edge of the human performance envelope, but not pushed, or designed, to the potential of their own.

    A human will black out during some types of maneuvers unless the aircraft is prevented from making them (from simple tricks like spring return to center for the stick after a blackout to computers that measure g force and won't let the flight envelope go that far in the first place.)

    Pilots use "G-suits" to try and keep blood in their heads by controlling pressure on their legs (for instance) but you can only go so far with that type of thing. And, as it's low tech, the opposition can do it as well.

    An AI won't have a problem with a very high G turn. A human is in deep trouble. Airframes can be designed for considerably more than a human can take, if there is no human pilot. If there is, there is little point in such a design -- the aircraft will become pilotless if it enters such a flight regime.

    Now, put this up against the fact that most other countries can't afford to put an AI in the pilot's seat, and the result is continuous overwhelming air superiority without risk to humans on our side. That's the combination of factors that drives the urge to go in this particular direction.

  • Re:I for one (Score:3, Insightful)

    by infornogr ( 603568 ) on Monday April 04, 2005 @08:07PM (#12139449)
    That's far too computationally intensive. You know the Folding@Home project [stanford.edu]? That just handles protein folding. That's the very first step of turning DNA into cells. There's a gazillion and one steps involved in putting together a human being, and even the very first one, translating DNA into proteins, eludes us.
  • by Rorschach1 ( 174480 ) on Monday April 04, 2005 @09:00PM (#12139810) Homepage
    It _would_ learn about hacking. Come on. Such an entity would be born in a pure data environment. Getting through a basic firewall would probably seem like jumping over a small fence does to a 6-year-old.

    I disagree. I think that's like saying that since we're made up of tiny biological factories (our cells) we should be able to consciously manipulate the world around us on a chemical level. But that's not how it works - there are many, many layers of complexity between our conscious thoughts and those low-level functions.

    I doubt a purely virtual creature would have any more influence over its existence at such a low level than we do.

  • by Flwyd ( 607088 ) on Monday April 04, 2005 @09:11PM (#12139862) Homepage
    "Learning like a baby" is actually a very hard problem, for several reasons.

    1. Babies come built with millions of years of evolution. There's a lot of skill and a surprising amount of knowledge (depending on who you ask) in the large and bulbous head of a baby.

    2. Babies generally come with parents who spend a lot of time teaching. The baby learns some things by induction, but learns a lot by conscious teaching.

    3. A lot of a baby's first two years are spent learning things a (non-robot) computer can't. How to hold a mother. How to avoid falling flat on one's face. What things belong in the mouth. How to eat solid food without choking. How to pee in the toilet. How objects move when touched. What faces are likely to provide food and attention. What happens when you pull a cat's tail.

    4. A lot of the things a baby learns later in life are aided greatly by the learning in #3. Imagine learning how humans are likely to behave without having watched humans behave.

    5. A baby learns language with the help of rich sensory input. It's a lot easier to learn the meaning of "goat" when you can see a picture of a goat. The Internet offers precious little of this.

    Now, DARPA thrives on funding hard problems. And a lot of progress has been made on learning within a domain (e.g. speech processing). But building a general-purpose learner is very hard.

    Humans have immense evolution behind general-purpose learning, and we struggle with it. Getting a 3-year-old to know what a 3-year-old knows takes around 3 man-years, not counting the child's time. And what would DARPA want with a computer with the knowledge of a 3-year-old? They've got ready access to thousands of 18-year-olds. Add to that the time to code up tens of thousands of years of evolution that is still far from well understood, and you're looking at a problem far too large to tackle in one go.

    DARPA hasn't put a lot of effort into general-purpose learning for the same reason few people work on single programs which can play chess, go, checkers, backgammon, Monopoly, and Magic: the Gathering well. It's a lot easier to do it a piece at a time.
  • Re:baby bootstrap (Score:3, Insightful)

    by djfray ( 803421 ) on Monday April 04, 2005 @09:45PM (#12140043) Homepage
    I'd be very interested in seeing information confirming anything close to your generous 1% firing at a time, and how this is integrated with the rest of the system for signal processing, who fires when, etcetera. I think, however, that we need to take into account the fact that more neurons doesn't mean smarter at all. Take a look at whales, for instance, with brains much larger than our own, and thus more neurons. A whale can't go on Slashdot and say "OMGZ first post guys", much less something of human-level intelligence. (Apologies to creationists for the following...) It took billions of years of evolution, all the way back to the primordial ooze (or whatever), to get to the point of having a species with the genetic mappings to produce the neural networks that allow us to learn, remember, think, and process as we humans do. I think this would add a significant number of zeroes to your processor calculation, even when we incorporate a design based on our own incredible thinking.
  • Re:baby bootstrap (Score:4, Insightful)

    by Servants ( 587312 ) on Monday April 04, 2005 @09:48PM (#12140061)
    I doubt it would be too difficult to code -- if we knew the mechanism by which it proceeded.

    It's hard to code a procedure to replicate the working of the mind... if you don't know how the mind does it in the first place.


    On the other hand, it might be that the reason we don't understand how the mind does certain things is that they're actually extremely complicated, and don't reduce very well to a programmable step-by-step algorithm nor to a simple and general mathematical learning structure. It's hard to tell, although I think it's telling that after decades of work, neither psychologists nor computer scientists can understand or replicate much of what babies do.

    Sometimes the best way for a computer to learn something may not be the way a baby does it, anyway; cf. chess.
  • Re:I for one (Score:3, Insightful)

    by nate nice ( 672391 ) on Monday April 04, 2005 @10:17PM (#12140235) Journal
    "Now, put this up against the fact that most other countries can't afford to put an AI in the pilots seat, and the result is continuous overwhelming air superiority without risk to humans on our side."

    All is fair in love and war, but starting wars without human loss on our side seems like we have nothing to lose, as far as life is concerned. And that's kind of scary. I hope it is never used as a justification for fighting. One of the costs of war is the life you may lose, and if that's too much compared to what you may gain, then you cannot fight.
  • Re:I for one (Score:5, Insightful)

    by infornogr ( 603568 ) on Monday April 04, 2005 @10:21PM (#12140262)
    There is a need to go to such a low level, unless you want to start it off with more data than is available in a strand of DNA.

    DNA speaks in the language of proteins. You can't tell what sort of cell a piece of DNA is going to produce or how the cells it produces will be arranged without running the simulation all the way down to the protein level. We have no other cookbook for how to arrange these simulated cells once they exist except a long list that says "produce this protein, then this one, then one of these, then another one, then this...", and we've not any clue how those proteins get turned into a person. We can understand the process at the chemical level, and no higher. The finished product, of course, isn't like that at all. We understand humans on the levels of cells and organs, but DNA isn't so conveniently arranged.

    Simulating cells is not sufficient. If it were, we could pour a couple gallons of blood into a bathtub and say "Behold, it is human." The organization of the cells matters just as much as the cells themselves. Simulating a human being to the level of even cellular precision would require that we be able to *scan* a human being at the cellular level to see how he's put together. If we actually knew the weightings of all the neuronal connections in a person's brain, then connectionist AI approaches might be able to produce real intelligence. To quote Levels of Organization in General Intelligence [singinst.org], "The classical hype of early neural networks, that they used 'the same parallel architecture as the human brain', should, at most, have been a claim of using the same parallel architecture as an earthworm's brain." You can't expect high-level organization from low-level simulations unless you want to simulate all the way down to DNA, where the information behind the complexity is really stored.

    Or you build the complexity yourself, without relying on the hideously-designed mess that is Homo sapiens. But that's a different kettle of fish.
  • Re:baby bootstrap (Score:4, Insightful)

    by polv0 ( 596583 ) on Monday April 04, 2005 @11:10PM (#12140561)
    It is fairly easy to show (see Bishop 1995) that a simple two layer neural network can scale to reproduce arbitrarily complex but smooth functions to any required degree of accuracy, and that a three layer neural network can extend this capability to functions with discontinuities. While mathematically this is a tantalizing prospect, and only begins to cover the work that has been done to extend the capabilities of neural networks and other machine learning algorithms (such as support vector machines), there remains a fundamental problem. In order for these networks to effectively learn, they must be presented with a tremendous number of high quality and meaningful sequences of input and output.

    For example, in text recognition, hundreds of thousands of hand written characters are painstakingly hand labeled with their correct letters and used as a learning database on which the algorithm is trained. The algorithm will then accurately reproduce the correct categorization for a surprisingly high number of the training examples, and any new examples drawn from the same population. But given new examples written in a different script or style, the classifier will fail to generalize.

    How can we hope to create a training database that is comprehensive enough to cover a topic that, when learned, would demonstrate intelligence? And fundamentally, aren't we just creating a really good mimic?
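
    To make the point above concrete, here is a minimal sketch of such a two-layer network: one hidden tanh layer and a linear output, fit by plain full-batch gradient descent to noisy samples of a smooth target. It is not taken from Bishop, and the layer sizes and learning rate are arbitrary; the thing to notice is that it only works because it is handed thousands of correctly labeled input/output pairs, which is exactly the bottleneck described above:

        # Minimal two-layer network (one hidden tanh layer, linear output),
        # fit by full-batch gradient descent to noisy samples of sin(x).
        # A toy illustration of smooth-function approximation, nothing more.
        import numpy as np

        rng = np.random.default_rng(0)

        # The "tremendous number of high quality" labeled examples.
        x = rng.uniform(-np.pi, np.pi, size=(2000, 1))
        y = np.sin(x) + 0.05 * rng.standard_normal(x.shape)

        hidden = 20
        W1 = rng.standard_normal((1, hidden)) * 0.5
        b1 = np.zeros(hidden)
        W2 = rng.standard_normal((hidden, 1)) * 0.5
        b2 = np.zeros(1)
        lr = 0.1

        for step in range(5000):
            h = np.tanh(x @ W1 + b1)      # hidden layer activations
            pred = h @ W2 + b2            # linear output layer
            err = pred - y
            grad_pred = 2 * err / len(x)  # d(mean squared error)/d(pred)
            grad_W2 = h.T @ grad_pred
            grad_b2 = grad_pred.sum(axis=0)
            grad_h = grad_pred @ W2.T * (1 - h ** 2)
            grad_W1 = x.T @ grad_h
            grad_b1 = grad_h.sum(axis=0)
            for p, g in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
                p -= lr * g               # in-place update of each parameter

        print("final mean squared error:", float(np.mean(err ** 2)))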
  • Re:I for one (Score:4, Insightful)

    by fyngyrz ( 762201 ) on Tuesday April 05, 2005 @12:34AM (#12140983) Homepage Journal
    All's fair in love and war

    No. The only thing that is fair is when things are fair.

    Any time there is a serious imbalance, there is a risk that the side holding the best cards will use that power in a manner that no one else is able to justify.

    We see it at every level of human endeavor: children who bully non-conformists; husbands who beat their wives essentially because they can (and wives who bully, browbeat and otherwise abuse husbands because they're constitutionally unable to respond); churches who excommunicate or otherwise sanction members when those members don't toe the line (instead of counseling and advising and the reasonable things a social group with a particular outlook can do); cities that take property from landowners not to leverage a service to the public, but to enable a commercial enterprise; states that uniformly take children from fathers under the absurd presumption that mothers are superior human beings; countries that take resources from weaker countries or force them to adopt their way of life (for the former, Saddam's invasion of Kuwait serves as a good example; for the latter, our recent invasion of Iraq serves just about as well, IMHO.)

    In contrast, the underlying ethics of a particular person or institution are what prevents abuses of power; as soon as a person or institution becomes bereft of ethics, or if they never had a solid ethical foundation, misuse of that power is almost inevitable. History shows us again and again that power has the same effect as a drug on some personalities, and often those personalities are the ones who seek and obtain power.

    It doesn't do any good to hope, or wish, at least I don't think it does. If you don't raise your children carefully, if you allow your children to bully, if you stand for your church sanctioning those who aren't "normal", if you allow cities and states and governments to walk on you and walk on others... then you, and everyone else, reap what you sow.

    One of the costs of war is the life you may lose, and if that's too much compared to what you may gain, then you cannot fight.

    With regard to war -- politicians are typically willing for you to lose your life; the political will to go to war is entirely divorced from the fear of dying in war. They have the will; you have the fear. You need ethics and principles to control over-reaching governments. I always thought that the politicians who declare war should be in the first year's mandatory front-line participants. Might calm them down a bit. Unfortunately, it's not that way. There are even covenants in place where politicians are immune from attack. I'm not talking about ambassadors, which of course is sensible, I'm talking about heads of state. Disgusting, in my view.

    I launched this rant (sorry) because I feel that in the US, we've lost our way. 20 years ago, the idea of the US attacking another country without ourselves having been attacked was laughable. Today, it is the norm. I sympathize with your hope, but I must observe that it is not hope that will rein in the kind of people who run our government. If we sit around and let them continue to abuse us, and the people around us, all the hope in the world won't prevent a pariah status far more intense than the one we "enjoy" already.

    It's not about (more) overwhelming power. Don't focus on power now. We're way too far along for that (go look up what a J-SOW does, for instance, or consider how a stealth fighter will fare against some third-world's 1960's-era surplus radar installation.) It's about ethics. Look at the US government. Decide if you like what you see. At the very least, vote against those who you feel are doing wrong. We have the power as a group to say "if you do this, you will not stay in office" and truly, right now, I think that's all most of these politicians understand.

  • Re:baby bootstrap (Score:4, Insightful)

    by dublin ( 31215 ) on Tuesday April 05, 2005 @01:57AM (#12141319) Homepage
    So as far as the whole 1980's AI winter, it was inevitable. The computing power and storage requirements for any sufficiently advanced AI just weren't there. It's only very recently that it has become possible to achieve fairly complex AI.

    Funny, that's the same thing they said back in the 80's. And the 70's. And the 90's.

    Sorry, but I don't buy it. Neural nets are not a panacea - I'm a robotics guy by training, and they've been the supposed magic pixie dust technology that was going to give us human-like robot motion since the 1980s. Funny, but the hard problems that need real AI, like voice recognition, handwriting recognition, unsupervised learning, etc., are just as far off today as they were 20-30 years ago.

    Faster computers have definitely not been terribly beneficial. As an example, modern speech and voice recognition systems are significantly but not dramatically "better" than they were 20 years ago (perhaps a 10-20x improvement, max) in spite of the fact that computers are roughly a million times faster: ~6 MHz vs. ~4 GHz for high-end desktop PCs. (Not to mention available RAM that's larger than the disk storage in entire mainframe data centers back then...)

    Procedural AI has proven itself to be a miserable failure for nearly a half century now, and neural nets have shown that they are anything but self-organizing. Like so many other efforts to copy or explain life, it appears that having the raw materials is simply not enough - life is *different* - it's really, really hard to imitate even poorly, no matter how hard we apply our own intelligent design to the problem.

    I sincerely doubt that I will live to see "baby bootstrap" systems, and I'm not all that old. I suspect that only true hardware neural nets hold any hope of mimicking life to any minimally useful degree, but the problems here are very, very hard, and the reality is that we know next to nothing with any certainty about how even the simplest brains really function...
  • AI is not AL (Score:1, Insightful)

    by Anonymous Coward on Tuesday April 05, 2005 @02:33AM (#12141500)
    AI has become such a generic, misused term that it can only reliably be used to describe machine learning: a collection of useful algorithms, derived from memetic principles, which can be used to model real-world problems. Nothing more.

    Unfortunately what most people think of as AI is actually AL: the quest for artificial life, artificial reasoning, cognition, and sentience.

    The reason why we are fumbling in the dark when it comes to artificial life research is because we still have no idea how the human brain (or any brain for that matter) really functions at a low level. Sure, we know a little bit about neurons and their connectivity, but we cannot model the way the brain works down to the level of why a certain neuron fires and another doesn't.

    It's quite plausible that the processes which enable creative thought and sentience are quantum in nature, and therefore not something that would ever be possible to emulate using current technology.

    It's also plausible that in our ceaseless human ego we overlook how simple the processes really are, in an effort to mystify our sense of self, which is nothing more than an internally focused narrative. Looking at the processes that lead to one's own perception of reality is fraught with difficulty.

    The thing is, we just don't know which of these alternatives we're looking at.
  • Re:baby bootstrap (Score:3, Insightful)

    by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Tuesday April 05, 2005 @04:30AM (#12141884) Journal
    The problem, however, is that no matter how efficient your filters are, they will lack the motivation of a learning, growing human being to learn. They will not notice things as a human would; they won't notice things at all. They'll simply take input and use a pre-determined algorithm to produce output.

    Then you need to ask, what is motivation? Unless you believe that people have a soul which AI can never possess, then why can't a sufficiently intelligent AI achieve everything a human can? Put another way, if we were able to take DNA and perfectly simulate its growth just as a fetus would, so that we have a machine duplicate of a human brain, is there any reason to believe (again, given a perfect simulation) that our software AI would operate any differently than our own wetware?
  • Re:baby bootstrap (Score:3, Insightful)

    by HuguesT ( 84078 ) on Tuesday April 05, 2005 @05:41AM (#12142103)
    The human brain is hardwired for complex languages. We're not sure about cetaceans. They definitely communicate, but we don't know at what level of complexity.

    We know this because people who have had their speech centre knocked out by a stroke don't recover any form of speech. Other bits of the brain don't take over to compensate.

    Now language is pretty important to overall intelligence. Without it there is no I/O to process, and it's pretty hard to learn.
