
The Baby Bootstrap?

An anonymous reader asks: "Slashdot recently covered a story that DARPA would significantly cut CS research. When I was completing graduate work in AI, the 'baby bootstrap' was considered the holy grail of military applications. Simply put, the 'baby bootstrap' would empower a computing device to learn like a child with a very good memory. DARPA poured a small fortune into the research. No sensors, servos or video input - it only needed terminal I/O to be effective. Today the internet could provide a developmental database far beyond any testbed that we imagined, yet there has been no significant progress in over 30 years. MindPixels and Cycorp seem typical of poorly funded efforts headed in the wrong direction, and all we hear from DARPA is autonomous robots. NIST seems more interested in industrial applications. Even Google is remarkably void of anything about the 'baby bootstrap'. What went wrong? Has the military really given up on this concept, or has their research moved to other, more classified levels?"
  • by darth_MALL ( 657218 ) on Monday April 04, 2005 @07:13PM (#12139027)
    Isn't it called a Seed AI [google.com]?
  • by Anonymous Coward on Monday April 04, 2005 @10:43PM (#12140383)

    I just got back from a workshop on this very subject, but nobody uses the term "baby bootstrap". It is now called "Developmental Robotics [wikipedia.org]", and encompasses embodied agents, machine learning, and other biologically-inspired metaphors.

    There is now a website dedicated to the idea. See http://DevelopmentalRobotics.org/ [developmen...botics.org] and http://cs.brynmawr.edu/DevRob05/ [brynmawr.edu] for a collection of papers on the subject.

  • Re:baby bootstrap (Score:5, Informative)

    by bluephone ( 200451 ) <greyNO@SPAMburntelectrons.org> on Monday April 04, 2005 @11:37PM (#12140715) Homepage Journal
    "Sometimes the best way for a computer to learn something may not be the way a baby does it, anyway; c.f. chess."

    Except computers never learned chess; humans programmed complex move-analysis routines along with the rules, and often a database of strategies with statistical weighting. There's a limited capacity to "learn" against opponents, but that's usually just more preprogrammed analysis and pattern matching than actual spontaneous data linking. And as a poster higher up said, there was a time we thought that was all one needed. It's not. We already have rudimentary AIs in labs that can "learn" in the sense that they can create accurate spontaneous data links. The human brain (or the brain of any semi-complex organism, really) is a black box with such unimaginable gears inside that we're fumbling in the dark. It's hard to reverse engineer a mind because, unlike reverse engineering a BIOS or a widget, we don't really understand how a mind works, how it's put together, or even what it's really comprised of.
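
    To make that concrete, here is a minimal Python sketch of the kind of preprogrammed move analysis meant above: plain minimax search over a hand-written evaluation function. The function names (evaluate, legal_moves, apply_move) are placeholders for illustration, not any real engine's API.

      # Minimax look-ahead: the "intelligence" lives entirely in the
      # human-authored evaluate() passed in; nothing here is learned.
      def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
          moves = legal_moves(position)
          if depth == 0 or not moves:
              return evaluate(position)        # hand-tuned material/position score
          scores = [minimax(apply_move(position, m), depth - 1, not maximizing,
                            evaluate, legal_moves, apply_move)
                    for m in moves]
          return max(scores) if maximizing else min(scores)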

  • by maxjenius22 ( 560382 ) on Monday April 04, 2005 @11:55PM (#12140810)
    Cycorp is making progress, though.

    I recommend reading Witbrock, Michael, D. Baxter, J. Curtis, et al. An Interactive Dialogue System for Knowledge Acquisition in Cyc. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, Acapulco, Mexico, 2003 [cyc.com].

    Also, if you are a lucky college student, go see the author talk about Cyc teaching itself at USC [isi.edu] or Carnegie Mellon [cmu.edu].

    Oh, and for once, I actually am an expert on the topic, not that that matters on slashdot.
  • by Animats ( 122034 ) on Tuesday April 05, 2005 @02:15AM (#12141408) Homepage
    I'm underwhelmed with the AI community. I went through Stanford CS. I've met most of the big names. I have some patents in AI-related areas myself. But really, nobody has a clue how to do strong AI.

    The expert systems people hit a wall in the mid-1980s. An expert system is really just a way of storing manually-created rules. And those rules are written with great difficulty. There used to be expert systems people claiming that strong AI would come from rule-based systems. (Read Feigenbaum's "The Fifth Generation"). You don't hear that any more.
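
    For readers who never met one: an expert system really is just a pile of hand-written if-then rules plus an engine that fires them. Here's a toy forward-chaining sketch in Python; the rules and facts are invented for illustration only.

      # Toy forward-chaining engine: fire any rule whose conditions all hold.
      rules = [
          ({"fever", "cough"}, "flu_suspected"),
          ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
      ]

      def forward_chain(facts, rules):
          facts = set(facts)
          changed = True
          while changed:
              changed = False
              for conditions, conclusion in rules:
                  if conditions <= facts and conclusion not in facts:
                      facts.add(conclusion)
                      changed = True
          return facts

      print(forward_chain({"fever", "cough", "short_of_breath"}, rules))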

    Hill-climbing systems (which include neural nets, genetic algorithms, artificial evolution, and simulated annealing) all work by trying to optimize some evaluation function. If the evaluation function is getting better, progress is being made. But what this really means is that the answer is encoded in the evaluation function. If the evaluation function is noisy (as in, "does the creature survive") and requires major simultaneous changes to make progress (as in "evolutionary jumps"), hill climbing doesn't work very well. There is progress, though. Koza's group at Stanford is moving forward, slowly.
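
    The shared skeleton of all those methods is easy to show. Here's a toy Python sketch of hill climbing where, as noted, the "answer" is really encoded in the evaluation function; the target and step size are arbitrary choices for illustration.

      import random

      def evaluate(x):
          return -(x - 3.0) ** 2        # peak at x = 3: the function defines "success"

      def hill_climb(x=0.0, steps=1000, step_size=0.1):
          best = evaluate(x)
          for _ in range(steps):
              candidate = x + random.uniform(-step_size, step_size)
              score = evaluate(candidate)
              if score > best:          # only accept improvements
                  x, best = candidate, score
          return x

      print(hill_climb())               # wanders up to roughly 3.0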

    The formal logic people never made much progress on real-world problems. Formalizing the problem is the hard part. Once the right formalism has been found, the manipulation required to solve it isn't that hard. There's not much work going on there any more.

    The reactive robotics people also hit a wall. Literally, as every Roomba owner knows. Reactive control will get you up to the low end of insect-level AI, but then you're stuck.

    Reverse-engineering brains still has promise, but we can't do it yet. Progress is coming from trying to reverse engineer simple animals like sea slugs. (Sea slugs have about 20,000 neurons. Big ones.) Efforts are underway to completely work out the wiring. Mammals are a long ways off.

    Lately, there's been a trend towards "faking AI". This comes under such names as "social computing". The idea is to pick up cues and act intelligent when interacting with humans, even if there's no comprehension. This may have applications in the call center industry, but it's not intelligence.

    I run one of the DARPA Grand Challenge teams, Team Overbot. [overbot.com] On a problem like that, you can definitively fail, which means there's the potential for real progress. That's why it's worth doing.

  • Re:Classified (Score:2, Informative)

    by PhosterPharms ( 748413 ) on Tuesday April 05, 2005 @03:40AM (#12141751)
    Not to be an insensitive clod, but the BATF no longer exists. I work for a winery, so I am certain of this much. The TTB is the new regulatory agency which governs our side of things, and I believe the Department of Homeland Security deals with firearms now.

    Regards,

    -PhosterPharms
  • Re:baby bootstrap (Score:3, Informative)

    by coaxial ( 28297 ) on Tuesday April 05, 2005 @04:25AM (#12141877) Homepage
    >> I dabble in AI now and again so I haven't read up on everything that's out there, but in my limited travels what I haven't yet seen is a neural network implementation which can learn and grow itself. The recently posted /. article about Numenta seems to be heading in the right direction. Most neural networks are incredibly rudimentary, offering a few levels of propagation. In a real brain, there's a hell of a lot more going on.

    I don't know what you mean by "grow", since all implementations use a static number of neurons connected to each other via a set of preexisting links. In a very real sense there's no difference between a neuron that's connected to another neuron via a link of weight 0 and one that isn't connected at all. Fully connect the neurons, and voilà: you have a completely abstract network. Of course, now you have cycles, so the propagation algorithms get complicated real fast. Also, you can't just throw neurons into a network and expect it to work. Every neuron and every link between neurons adds another degree of freedom to the network, so stability can become harder to reach.
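
    A tiny sketch of that point in Python (assuming NumPy is available; the values are arbitrary): with a full weight matrix, a missing link and a link of weight 0 are literally the same thing, so the topology never has to "grow".

      import numpy as np

      n = 4
      weights = np.zeros((n, n))      # "fully connected": every link present, all at weight 0
      weights[0, 2] = 0.8             # a connection "appears" by becoming nonzero
      weights[2, 1] = -0.3

      activations = np.array([1.0, 0.0, 0.5, 0.0])
      next_activations = np.tanh(weights.T @ activations)   # one propagation step
      print(next_activations)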

    NNs are kind of neat, but they're far from the be-all and end-all. A single neuron can only divide the search space with a hyperplane. Determining how many neurons go in how many hidden layers is a bit of a dark art. And to dash the last bit of mystique about NNs: backpropagation is nothing more than hill-climbing.
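
    For anyone who hasn't seen it spelled out, "a single neuron divides the space with a hyperplane" just means a weighted sum plus a threshold: the decision boundary is w.x + b = 0. A toy Python sketch, with arbitrary example weights:

      import numpy as np

      w, b = np.array([1.0, -2.0]), 0.5        # arbitrary example weights and bias

      def neuron(x):
          # fires iff x is on the positive side of the hyperplane w.x + b = 0
          return 1 if np.dot(w, x) + b > 0 else 0

      print(neuron(np.array([3.0, 1.0])), neuron(np.array([0.0, 2.0])))   # 1 0
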
  • Re:baby bootstrap (Score:4, Informative)

    by misterpies ( 632880 ) on Tuesday April 05, 2005 @04:45AM (#12141921)

    >> By Bayesian spam filtering, I think you mean general classification problems, in which case, yes, neural networks can implement classification - it's a stretch to say that McClelland and Rumelhart's did, because the possible output included most non-repeating combinations of English phonemes and is thus nearly infinite, but the principle is there.

    IIRC, mathematically it's been shown that neural nets and Bayesian learning systems (such as spam filters) are entirely equivalent. Check out some of the work by David MacKay at the University of Cambridge.
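
    One face of that correspondence is easy to see in a toy Python sketch: a naive-Bayes spam score is a weighted sum of word evidence pushed through a sigmoid, which is exactly what a single logistic neuron computes. The word probabilities below are invented, and absent-word terms are ignored to keep it short.

      import math

      p_spam = {"viagra": 0.9, "meeting": 0.1}     # invented P(word | spam)
      p_ham  = {"viagra": 0.01, "meeting": 0.6}    # invented P(word | ham)

      def spam_probability(words, prior_log_odds=0.0):
          log_odds = prior_log_odds
          for w in p_spam:
              if w in words:
                  log_odds += math.log(p_spam[w] / p_ham[w])   # the "weights"
          return 1.0 / (1.0 + math.exp(-log_odds))             # the logistic "neuron"

      print(spam_probability({"viagra"}), spam_probability({"meeting"}))
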
  • by atomice ( 228931 ) on Tuesday April 05, 2005 @05:26AM (#12142072)
    No it isn't. Take a look at the AI in C&C Generals as a case in point - it's all scripted. Half-life 2? - all scripted. Doom III - scripted.
    Most game AI today is not NNs but scripts.
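
    For the record, "scripted" here means hand-authored triggers and states evaluated every tick, with nothing learned. A toy Python sketch; the states, conditions, and thresholds are made up, not taken from any actual game.

      def guard_ai(state, player_visible, health):
          if health < 20:
              return "flee"
          if state == "patrol" and player_visible:
              return "attack"
          if state == "attack" and not player_visible:
              return "search"
          return state

      print(guard_ai("patrol", player_visible=True, health=80))   # -> "attack"
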
  • Re:Two words (Score:3, Informative)

    by thelen ( 208445 ) on Tuesday April 05, 2005 @06:33AM (#12142240) Homepage

    I don't think this objection is fatal to Searle's basic view. He was interested in arguing that mental states could be derived from the physical processes of the brain, but not from simple computation using rules and states, which is what AI of the 60s and 70s was striving for.

    The account, just by virtue of its monistic materialism, must allow for the possibility of a machine being in principle capable of generating consciousness. The brain is a physical entity performing observable actions that can be described according to physical, chemical and biological laws, so its basic functioning must be replicable. The weak spot in the theory is that a lot has to happen during the "emergence" phase. But there's nothing to prevent, say, a sufficiently complex neural network from generating emergent properties, perhaps even consciousness.

    I don't see anything in this (admittedly thumbnail) view that would lead us either to dissect aliens or forbid us to attribute consciousness to the remote control Mao. What it does purport to prove is that any alien or communist Chairman we believe is intelligent cannot be just an overgrown Turing machine.

    In short, the Chinese Room experiment is meant to undermine the AI of the previous decades for being too focused on rules and syntax and computational states. I don't see it as a rebuttal of the notion of AI in general. It wouldn't be a very good naturalistic account if it did forbid AI a priori IMO.

  • Re:Neural Nets (Score:3, Informative)

    by TheSync ( 5291 ) on Tuesday April 05, 2005 @10:15AM (#12143412) Journal
    As someone who has programmed neural networks on massively parallel computers (10s of thousands of nodes), let me say that lack of parallelism is a minor point when PCs are running at speeds 100s of thousands of times faster than neurons.
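
    A rough back-of-the-envelope behind that speed comparison, with order-of-magnitude assumptions rather than measurements:

      neuron_update_rate_hz = 1e3        # assume a ~1 ms integrate-and-fire timescale
      pc_synapse_updates_per_sec = 3e8   # assume a few hundred million weight updates/s on a 2005-era PC
      print(pc_synapse_updates_per_sec / neuron_update_rate_hz)   # ~3e5 -- hundreds of thousands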

    What artificial neural networks lack is the millions of years of evolution. If you look at the brain, it is not a "random learning network"; almost every part is highly specialized and structured.

    Artificial neural networks have been a failure as an "end product," but on the other hand their study has taught a generation of neurophysiologists about parallel computation and signal processing techniques, so they can better understand how parts of the brain work. In that sense, the study has been a success...

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...