The Baby Bootstrap? 435
An anonymous reader asks: "Slashdot recently covered a story that DARPA
would significantly cut CS research. When I was completing graduate
work in AI, the 'baby bootstrap' was considered the holy grail of military
applications. Simply put, the 'baby bootstrap' would empower a computing device to learn like a child with a very good memory. DARPA poured a small fortune into the research. No sensors, servos or video input - it only needed terminal I/O to be effective. Today the internet could provide a developmental database far beyond any testbed that we imagined, yet there has been no significant progress in over 30 years. MindPixels
and Cycorp seem typical of poorly funded efforts headed in the wrong direction, and all we hear from DARPA is autonomous robots. NIST seems more interested in industrial applications. Even Google
is remarkably void of anything about the 'baby bootstrap'. What went wrong? Has the military really given up on this concept, or has their research moved to other, more classified levels?"
The Terminator (Score:2, Funny)
Re:The Terminator (Score:4, Funny)
Oh great... (Score:4, Funny)
Just one problem with this kind of research...
For the first year I'll be up every two hours all night, tending to the system.
Actually, that may be better than just being up all night, like I am now.
Re:Oh great... (Score:3, Funny)
I guess I'm not the only MCSE on this site after all...
Classified (Score:5, Funny)
I'd go into more detail, but the C.I.A. and C.I.D are at my door. Ooh, the B.A.T.F. just pulled up in a Mother's Cookies truck!
-Peter
Re:Classified (Score:4, Funny)
Tremble as they pass...stare in awe at their mighty power
Re:Classified (Score:4, Funny)
Flowers
By
Irene
truck parked on your street all week?
its out there! (Score:2)
baby bootstrap (Score:5, Interesting)
Minsky came up wrong on the single-layer perceptron, AI was wrong on the purely feed-forward neural-network systems, Rumelhart and McClelland got some good promo off of their feed-forward net that could learn to pronounce idiosyncrasies, and Sejnowski got a great job at the Salk from the AI delusions. But no, it appears not to have gone anywhere... thus far.
Later comment will be positive.
Re:baby bootstrap (Score:5, Interesting)
Re:baby bootstrap (Score:5, Insightful)
It's hard to code a procedure to replicate the working of the mind... if you don't know how the mind does it in the first place.
Re:baby bootstrap (Score:4, Insightful)
It's hard to code a procedure to replicate the working of the mind... if you don't know how the mind does it in the first place.
On the other hand, it might be that the reason we don't understand how the mind does certain things is that they're actually extremely complicated, and don't reduce very well to a programmable step-by-step algorithm nor to a simple and general mathematical learning structure. It's hard to tell, although I think it's telling that after decades of work, neither psychologists nor computer scientists can understand or replicate much of what babies do.
Sometimes the best way for a computer to learn something may not be the way a baby does it, anyway; c.f. chess.
Re:baby bootstrap (Score:5, Informative)
Except computers never learned chess; humans programmed complex move-analysis routines along with the rules, and many times a database of strategies with statistical weighting. There's a limited capacity to "learn" against opponents, but that's usually just more preprogrammed analysis and pattern matching than actual spontaneous data linking. And like a poster higher up said, there was a time we thought that was all one needed. It's not. We already have rudimentary AIs in labs that can "learn" in the sense that they can create accurate spontaneous data links. The human brain (or the brain of any semi-complex organism, really) is a black box with such unimaginable gears inside that we're fumbling in the dark. It's hard to reverse engineer a mind because, unlike reverse engineering a BIOS or widget, we don't really understand how a mind works, how it's put together, or even what it's really comprised of.
Re:baby bootstrap (Score:4, Interesting)
I would argue (and I could be proven wrong) that today we have a very general understanding of how a mind works, in that we understand the concept of a neural network, which does seem to be a decent model for the basic "mind" that makes choices for us. The problem comes when we attempt to understand HUMAN BEHAVIOR, which is the combination of a mind (neural network) and dozens of auxiliary, special-purpose systems, ranging from the neurons in the optic nerve that perform a plethora of pre-processing on the retina's image data to the area in the brain we're just discovering that models our "empathy"; it allows us to re-process visual information about others as if we were experiencing what they are.
These special-purpose systems are sometimes inside the brain (the latter example), sometimes outside it (the former), but they are not part of what we traditionally expect consciousness to be.
These tools make many of the tasks that we expect AIs to perform nearly impossible. For example, facial recognition seems like it should be easy, but once you sit down with a camera and try to make the computer "see" differences, you find that faces all look very much alike. We are tricked -- by a shockingly sophisticated facial recognition pre-filter in our brain -- into thinking that faces are widely distinct, but they are not (the old "all [race] look alike," is actually true... for all values of [race]).
So, while we might look at an AI and say, "unless it can tell faces apart, it's not 'smart'," it turns out that that's actually a pretty poor measure of pure intelligence.
Other aspects of our instinctive measures of intelligence, such as language, management of a human body (e.g. walking), etc., all have one or more of these auxiliary systems at their heart.
So we really have two problems: create a machine that can think; and create a machine that can behave like a human.
The former is either within our grasp or already possible. The latter is going to have to be the product of an enormous reverse-engineering effort, which has probably only just begun.
Re:baby bootstrap (Score:5, Interesting)
That's what the "neural network" paradigm was all about. You have an arbitrary and fixed number of input nodes, and an arbitrary and fixed number of output nodes. You create linkages between these nodes and "weight" them with some multiplicative factor. In some particular instantiations, you limit all inputs to be within the range [-1 ... +1] and limit all weights to be within the range [-1 ... +1].
So with A input nodes and B output nodes, you've got a network of AxB interconnections between these input and output layers. The brain analogy is that the A layer is the input or receptor layer, the B layer is the output or motor layer, and it is the interconnections between these neurons, the neural network composed of the axons and dendrites connecting these virtual neurons, that does the thinking.
Example: create a network as above. Place completely random numbers meeting the criteria of the model (e.g. within the range [-1 ... +1]) in the weights. You can also add more layers: A's output feeds forward to B, B's output feeds forward to C, etc., and these are called intermediate layers.
Rumelhart and McClelland encoded spellings as triplets of letters (26x26x26), had a few (or one, I can't remember this now) intermediate layers, and an output layer corresponding to phonemes to be said. They effectively encoded the temporal aspect of the processing into the triplets, sidestepping (what I consider the more interesting...) part of the problem. They trained this neural network by feeding it the spellings of words and adjusting the weights of the network until the outputs were the desired ones.
Note that nowhere in this process do they explicitly tell the system that certain spelling combinations lead to specific pronunciations. They only "trained" the system by telling it whether it's right or wrong. The system's weights incorporated this knowledge in these "Hebbian" synapses and neurons.
So this is associative processing, using only feed-forward mechanisms. Feedback, loops, and temporal processing are even more interesting...
alas not enough room in this margin to keep going.
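The training scheme described above can be sketched as a minimal single-layer network. This is only a sketch: the OR task, learning rate, and threshold activation are illustrative choices, not the Rumelhart/McClelland setup.

```python
import random

def train_network(samples, n_in, n_out, epochs=200, lr=0.1):
    """Single-layer feed-forward net: n_in input nodes fully connected
    to n_out output nodes, weights initialized randomly in [-1, +1]."""
    random.seed(0)
    w = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    for _ in range(epochs):
        for x, target in samples:
            for j in range(n_out):
                # Feed forward: weighted sum, thresholded to 0/1.
                out = 1 if sum(w[j][i] * x[i] for i in range(n_in)) + b[j] > 0 else 0
                # The net is only told right or wrong; the correction
                # nudges the weights toward the desired output.
                err = target[j] - out
                for i in range(n_in):
                    w[j][i] += lr * err * x[i]
                b[j] += lr * err
    return w, b

def predict(w, b, x):
    return [1 if sum(w[j][i] * x[i] for i in range(len(x))) + b[j] > 0 else 0
            for j in range(len(w))]

# Train on logical OR, a linearly separable task a single layer can learn.
data = [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (1,))]
w, b = train_network(data, n_in=2, n_out=1)
print(predict(w, b, (1, 0)))  # [1] once training has converged
```

Nowhere does the code state which inputs map to which outputs; the mapping ends up encoded in the weights, which is the point being made above.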
Re:baby bootstrap (Score:5, Interesting)
Right, it's kind of like an implementation of bayesian spam filtering, but for other problem domains. Instead of spam/ham, it's pronounced-correctly/incorrectly. Rinse and repeat.
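The spam/ham analogy can be made concrete with a tiny naive Bayes classifier. This is a hedged sketch; the training tokens and labels below are invented for illustration.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label) pairs, label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    n_docs = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        n_docs[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, n_docs, vocab

def classify(model, tokens):
    counts, n_docs, vocab = model
    total = sum(n_docs.values())
    best, best_lp = None, float("-inf")
    for label in ("spam", "ham"):
        # Log prior plus Laplace-smoothed log likelihood of each token.
        lp = math.log(n_docs[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [(["buy", "pills", "now"], "spam"),
        (["cheap", "pills", "offer"], "spam"),
        (["meeting", "at", "noon"], "ham"),
        (["lunch", "meeting", "tomorrow"], "ham")]
model = train_nb(docs)
print(classify(model, ["cheap", "pills"]))       # spam
print(classify(model, ["meeting", "tomorrow"]))  # ham
```

Swap spam/ham for pronounced-correctly/incorrectly and the structure is the same: train on labeled examples, score, rinse and repeat.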
I dabble in AI now and again so I haven't read up on everything that's out there, but in my limited travels what I haven't yet seen is a neural network implementation which can learn and grow itself. The recently posted
I did some calculations a while back, and based upon 100 billion neurons in the brain, each capable of firing let's say an average of 1000 times per second, and we'll assume that at any given time a generous 1% of all neurons are actively firing, and that the information firing takes 100 clock cycles to process, then you'd need the equivalent of about a 100 TeraHz processor with oodles of memory to have the same processing power as the human brain. Of course, you'd also need to correctly simulate *how* the brain is wired up to get any kind of beneficial processing.
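Spelling out that back-of-the-envelope arithmetic with the poster's own assumptions:

```python
neurons = 100e9          # ~100 billion neurons in the brain
fire_rate = 1000         # firings per second per neuron (generous average)
active_fraction = 0.01   # assume 1% of neurons firing at any given time
cycles_per_firing = 100  # clock cycles to process one firing

cycles_per_second = neurons * active_fraction * fire_rate * cycles_per_firing
print(cycles_per_second / 1e12)  # → 100.0, i.e. the ~100 TeraHz figure
```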
So as far as the whole 1980's AI winter goes, it was inevitable. The computing power and storage required for any sufficiently advanced AI just wasn't available. Only very recently has it become possible to achieve fairly complex AI.
Re:baby bootstrap (Score:3, Insightful)
Whale's intelligence (Score:3, Funny)
Yet again, more proof that whales are smarter than humans.
--
AC
Re:baby bootstrap (Score:5, Interesting)
This doesn't necessarily mean lower intelligence, in my opinion. Being underwater prevents most technology (that we know of) from working, from fire and wheels to computers and airplanes.
A whale doesn't have fingers or hands, either, but whales and dolphins could well be as intelligent as (or more so than) us, but simply be less technologically advanced and unable/unwilling to communicate with us in a way we understand.
Sure, they seem dumb at Sea World- but then, if you took a human baby and put it in a cage and threw bananas at it when it did a trick for you, it would probably behave pretty stupidly. Much of our intellect is awakened by our experiences in the first 5 or so years- within limits, the more you are stimulated within this time, the smarter you will end up being. I would simply wonder what a dolphin or whale could be taught to do if stimulated properly.
An interesting and slightly off-topic side note is that whales and dolphins are conscious breathers; i.e., they must consciously surface in order to breathe, so they never go completely to sleep. Instead, half of their brain sleeps at a time- during this time, they're in a groggy half-sleeping state that allows enough consciousness to surface and to wake up if there's danger.
Intelligent and friendly on rye bread with some mayonnaise.
Re:baby bootstrap (Score:3, Insightful)
We know this because people who have had their speech centre knocked out by a stroke don't recover any form of speech. Other bits of the brain don't take over to compensate.
Now language is pretty important to overall intelligence. Without it there's no I/O processing, and it's pretty hard to learn.
Re:baby bootstrap (Score:3, Interesting)
By Bayesian spam filtering, I think you mean general classification problems, in which case, yes, neural networks can implement classification - it's a stretch to say that McClelland and Rumelhart's did, because the possible output included most non-repeating combinations of English phonemes and is thus nearly infinite, but the principle is there.
Of course, you'd also need to correctly simulate *how* the brain is wired up to get any kind of beneficial processing.
Re:baby bootstrap (Score:4, Informative)
>> By Bayesian spam filtering, I think you mean general classification problems, in which case, yes, neural networks can implement classification - it's a stretch to say that McClelland and Rumelhart's did, because the possible output included most non-repeating combinations of English phonemes and is thus nearly infinite, but the principle is there.
IIRC, mathematically it's been shown that neural nets and bayesian learning systems (such as spam filters) are entirely equivalent. Check out some of the work by David MacKay at the University of Cambridge.
Re:baby bootstrap (Score:4, Insightful)
Funny, that's the same thing they said back in the 80's. And the 70's. And the 90's.
Sorry, but I don't buy it. Neural nets are not a panacea - I'm a robotics guy by training, and they've been the supposed magic pixie dust technology that was going to give us human-like robot motion since the 1980's. Funny, but the hard problems that need real AI, like voice recognition, handwriting recognition, unsupervised learning, etc., are just as far off today as they were 20-30 years ago.
Faster computers have definitely not been terribly beneficial. As an example, modern speech and voice recognition systems are significantly but not dramatically "better" than they were 20 years ago (perhaps a 10-20x improvement, max) in spite of the fact that computers are roughly a million times faster: ~6 MHz vs. ~4 GHz for high-end desktop PCs. (Not to mention available RAM that's larger than the disk storage in entire mainframe data centers back then...)
Procedural AI has proven itself to be a miserable failure for nearly a half century now, and neural nets have shown that they are anything but self-organizing. Like so many other efforts to copy or explain life, it appears that having the raw materials is simply not enough - life is *different* - it's really, really hard to imitate even poorly, no matter how hard we apply our own intelligent design to the problem.
I sincerely doubt that I will live to see "baby bootstrap" systems, and I'm not all that old. I suspect that only true hardware neural nets hold any hope of mimicking life to any minimally useful degree, but the problems are very, very hard here, and the reality is that we know next to nothing with any certainty about how even the simplest brains really function...
Re:baby bootstrap (Score:3, Interesting)
Today OCR of printed text is a solved problem. It comes bundled with your $100 scanner, and it's damn useful.
By solved I mean that if you gave a person a few pages to type up, they would make more errors than OCR software makes now.
Handwritten OCR will come, it is harder, but not impossibly harder.
Speech recognition is progressing. It comes bundled with MacOS/X, and you've certainly heard of spoken text entry in word processors. It
Re:baby bootstrap (Score:3, Informative)
I don't know what you mean by "grow", since all implementations use a static number
Re:baby bootstrap (Score:3, Insightful)
Then you need to ask, what is motivation? Unless you believe that people have a soul which AI can never possess, then why can't a sufficiently intelligent AI achieve everything a human can? Put another w
Re:baby bootstrap (Score:4, Insightful)
For example, in text recognition, hundreds of thousands of handwritten characters are painstakingly hand-labeled with their correct letters and used as a learning database on which the algorithm is trained. The algorithm will then accurately reproduce the correct categorization for a surprisingly high number of the training examples, and any new examples drawn from the same population. But given new examples written in a different script or style, the classifier will fail to generalize.
How can we hope to create a training database that is comprehensive enough to cover a topic that, when learned, would demonstrate intelligence? And fundamentally, aren't we just creating a really good mimic?
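The generalization failure described above shows up in even the simplest classifier, here a 1-nearest-neighbor over made-up (width, slant) features for two letters; the features and numbers are purely illustrative:

```python
def nearest_neighbor(train, x):
    """Return the label of the training example closest to x
    (squared Euclidean distance)."""
    return min(train,
               key=lambda fx: sum((a - b) ** 2 for a, b in zip(fx[0], x)))[1]

# Toy "characters" as (width, slant) features, all in one handwriting style.
train = [((1.0, 0.0), "l"), ((1.1, 0.1), "l"),
         ((3.0, 0.0), "m"), ((3.1, 0.1), "m")]

# A new sample drawn from the same population is classified correctly...
print(nearest_neighbor(train, (1.05, 0.05)))  # l

# ...but an "l" written in a wider, unseen style lands nearer the "m"
# cluster, and the classifier fails to generalize:
print(nearest_neighbor(train, (2.2, 0.0)))    # m
```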
Re:baby bootstrap (Score:2)
But this is why I think more communication between people doing research in neuroscience/cognitive science/evolutionary psychology and people doing AI programming is critical. There are some very interesting psych experiments that attempt to reverse engineer how the brain works. For instance, determining what algorit
Re:baby bootstrap (Score:3, Insightful)
I think that a key issue is that not everything in our brains is handled the same way, so not all of it is equally easy to program. Conscious thought is essentially a software process running on the part of our brain that serves as a general-purpose computer. Our unconscious processes are essentially hardware processes running in parts of our brains that are specifically structured to do just that one thing. The fact that unconscious processes are run in hardware means that they're not subject to introspection
Re:baby bootstrap (Score:4, Insightful)
Re:baby bootstrap (Score:2)
appropriate algebras would allow for starting with particular sequences, allowing manipulations on them, and still staying within the confines of the grammar. Any grammar that you can parse with a finite automaton would be one example. The semantic meaning is what we imbue upon it afterwards. So GIGO may apply. If you start with a symbol (even the empty set symbol) and apply syntactic operators to it, you may generate outputs that are capable of having semantically meaningful "meaning" applied
Re:baby bootstrap (Score:4, Interesting)
Ah, philosophy of math. How fickle and unforgiving it is.
True, you can apply meaning to a syntactic structure. But like the mistake Douglas Hofstadter makes in Gödel, Escher, Bach: An Eternal Golden Braid, there is nothing that "forces itself upon us." Or, another way of refuting Hofstadter: there's nothing about D:=B|| that makes it "Doug has two brothers" any more than "Assign B to D, double pipe".
Machine translation is an example of applying semantics to a syntactic structure. It works not because the syntax gives us semantics but rather because we structure the syntax in such a way that we can systematically apply semantics and get meaningful output. Like creating your own algebra.
Re:baby bootstrap (Score:3, Insightful)
The system might generate syntactically correct outcomes, but have we really solved the problem if we the observers are still the ones to apply semantic content? Isn't the point of Searle's Chinese Room thought experiment to show that syntactic transformations are not sufficient to imbue the transformer with a semantic understanding of its activity?
Re:Two words (Score:3, Informative)
I don't think this objection is fatal to Searle's basic view. He was interested in arguing that mental states could be derived from the physical processes of the brain, but not from simple computation using rules and states, which is what AI of the 60s and 70s was striving for.
The account, just by virtue of its monistic materialism, must allow for the possibility of a machine being in principle capable of generating consciousness. I mean, the brain is a physical entity performing observable actions tha
Re:baby bootstrap (Score:2)
I don't think symbol manipulation is really the thing that makes us "intelligent". It is more likely a byproduct of what lies below that level. Trying to reduce the processes that allow us to think like we do to a purely symbolic level does not account for the perturbations that have to occur at a really low level.
John Searle advocates the position that symbol manipulation isn't intelligence; rather, consciousness is an emergent property of patterns of neural firing. Although the details of how we get
Neural Nets (Score:3, Interesting)
That is a horrible constraint to put on AI problems which are (very likely) non-linear and in a hard-to-guess problem space.
Also, many training algorithms assume that the network is in a non-cyclic layout. Loops are Bad. You can do grids, in self
Re:Neural Nets (Score:3, Interesting)
Re:Neural Nets (Score:3, Informative)
What artificial neural networks lack is the millions of years of evolution. If you look at the brain, it is not a "random learning network"; almost every part is highly specialized and structured.
Artificial neural networks have been a failure as an "end product," but on
The Internet as an Intellect... (Score:2, Funny)
Some tentative approaches towards AI being made (Score:3, Insightful)
These training systems are generally specialized because it's easier to get a practical result out, and I've actually seen some in use as 'knowledgebase' support webpages that will intelligently determine what you want based on what others wanted and syntactic similarities between the pages. I've never heard the term 'baby bootstrap' so maybe different terminology will obtain better results from Google?
The project was continued ... (Score:2)
Re:The project was continued ... (Score:2)
If they did, they would be able to remember not to post dupes from six months ago, let alone six hours ago.
It's obvious why the search failed (Score:5, Funny)
Re:It's obvious why the search failed (Score:5, Funny)
I've also noticed that nobody seems to make Horseless Carriages [wikipedia.org] anymore (and after they showed such promise). Likewise, the Difference Engine [wikipedia.org] has been a total flop. I do, however, expect we will see in the future some use made of the Vegetable Lamb of Tartary [pantheon.org], though no use has been made of it in the last 1000 years since it was discovered.
Re:It's obvious why the search failed (Score:2)
Good point; however, each query made to the /. readership search engine is quite expensive in terms of all the employer-funded man-hours it consumes. If we all stopped wasting so much time reading/posting here, the world economy would surely take off like a bat out of hell.
Doublethink (Score:2, Funny)
q: Has the military really given up on this concept, or has their research moved to other, more classified levels?
a: yes.
Stat algos (Score:5, Interesting)
on machine learning models and inference models for belief networks. The work in this area since the 80s has been *spectacular* and has impacted other areas of research (e.g., speech recognition, image processing, computer vision, algos to process satellite information faster, stock analysis, etc.).
So, mourn the loss of the tag phrase "baby bootstrap", and celebrate the *unbelievable* advances in belief nets, causal analysis, join trees, probabilistic inference, and uncertainty analysis. There are literally dozens of classes taught at even non-research-oriented universities (e.g., teaching colleges or vocational-oriented schools) on this very subject.
(As for your concern that the web is not being mined for ML context, just look at semantic web research and other belief-net analysis of text corpuses. Try scholar.google.com instead of just plain old google to find relevant citations.)
The early AI research paid off BIG TIME, albeit in a direction that nobody could have predicted. Researchers did not keep using the phrase "baby bootstrap", so your googling will give you a different (and wrong) conclusion.
Re:Stat algos (Score:5, Funny)
Nonono! (Score:5, Funny)
Re:Stat algos (Score:2, Interesting)
you make quite a bad mistress
compared to the moon
Baby Bootstrap? (Score:4, Funny)
Re:Baby Bootstrap? (Score:2)
Hardest problem not yet addressed (Score:3, Interesting)
My suggestion is that we need to explore all the possible permutations of persons, places, and things, as they're reflected in the full range of literature, and classify these permutations to discover the underlying patterns.
(I've tried to make a start with my AntiMath [robotwisdom.com] and fractal-thicket indexing [robotwisdom.com].)
Re:Hardest problem not yet addressed (Score:5, Interesting)
You can't expect any system to discover the deep structure of the human psyche on its own
An interesting book that relates to this is George Lakoff's "Women, Fire and Dangerous Things". Lakoff analyzes the categories defined by linguistic structures and uses what he learns to deduce some interesting notions about human cognition. In the process, one of the things that becomes very clear is that much (all?) of the way we structure our thinking is fundamentally and inextricably tied to the form and function of our physical bodies.
One of the shallower but easier to explain examples is color: although the color spectrum is a continuous band, with no clear dividing points imposed by physics, the way in which people choose segments of that spectrum to which to assign names is remarkably consistent. Even though different cultures have different numbers of "major" colors (essentially, the set of colors identifiable by any member of that culture with basic verbal abilities; consider "green" vs "chartreuse"), the relationship between the major color sets is one of proper subsets. For example, one African (IIRC) culture has only two major color words, which translate to Western color senses roughly as "warm" and "cool". Another culture has four color words, two of which fall into the "warm" category and two of which are "cool". Western cultures have seven, and there's a direct correspondence between those color categories and the four and the two.
Further, those categories are non-arbitrary. If you show a variety of shades of red to individuals from different Western nations and ask them to pick the "most" red, they will do so with near-perfect unanimity (assuming the shades aren't too close together -- they have to be readily distinguishable). Then, if you show the same shades to someone from a two-color culture and ask for the "warmest", they'll choose what the Westerners chose as the "reddest". Ditto across the board. I'm trying to explain in two paragraphs what Lakoff spends several pages on, and probably not doing a good job, but the gist is this: Experimental evidence shows that the assignments of names to colors is definitely not arbitrary, even across very distinct cultures.
The reason? Physiology. The "reddest" red, as it turns out, is the one whose wavelength most strongly stimulates the red-activated cones in our retinas.
The point is that, at a fundamental level, everything we perceive about our world is filtered through our senses, and that inevitably defines the way we understand the world. Even more, our cognitive processes are built upon associations and extrapolations -- analogies and variations -- and the very first thing we all learn about, and then use to construct metaphors for higher concepts, is our own body. The body-based metaphors for understanding the world are so deep and so pervasive that they're often difficult to recognize.
Lakoff's reasoning has some weaknesses -- mostly I think he overreaches ("overreaches" -- notice the body metaphor implicit in the word? And "weakness", too) -- but his arguments are good enough to make me think that if we ever do see an artificial intelligence of significant stature, it will think very, very differently from us.
It's really unclear what such an intelligence whose primary source of experience was unfettered access to the Internet might be. We view the net as a structure built of connected locations, but that's because we apply our own physical world-based structures to it. What would an entity whose only notion of location is as a second-order, learned idea see? And who knows what other ways its understanding would diverge?
Baby Bootstrap (Score:4, Funny)
Re:Baby Bootstrap (Score:2, Funny)
Dear Baby Bootstrap computer,
You forgot to check the AC box. Congratulations on becoming Un-Classified!
Poorly funded yes... (Score:5, Interesting)
Cognitive Machines Group @ MIT Media Lab (Score:5, Interesting)
The Cognitive Machines Group [mit.edu] @ the MIT Media Lab under Deb Roy seem to be on the right track. Steve Grand's [cyberlife-research.com] work is interesting as well.
Re:Cognitive Machines Group @ MIT Media Lab (Score:3, Interesting)
Goddamn, I've been procrastinating in the last few days because I am stuck on trying to compute probabilities in a probabilistic graph efficiently. One of the big hurdles, I think, comes from the fact that we are trying to approximate a massively p
Re:Cognitive Machines Group @ MIT Media Lab (Score:3, Funny)
Maybe a Beowulf cluster of those would help?
--ducks--
Shutting down this discussion as of now. (Score:5, Funny)
Would you like to play a game of chess Professor Falken?
Babies have an instinctive understanding of 'real' (Score:5, Interesting)
Cycorp is not a poorly funded idea headed in the wrong direction. Cycorp chose a different tack; they decided that rather than trying to build a reality and correctness filter, they'd rely on human brains to do it for them (like trusting your parents implicitly) and instead concentrated on the connectivity of the 'facts' accrued by the 'baby.' CYC is still very much around, and is very much in demand by various parts of the government and industry - if you want to play with it yourself, you can download a truncated database of assertions called OpenCYC [opencyc.org]. Folks have even gone so far as to graft it onto an AIML engine [daxtron.com], to produce a chatbot with the knowledge of OpenCYC behind it.
The problem: how does your baby learn what's real and what's REAL NINJA POWER? Or, pardon me, what's REAL NINJA POWER and what's just a poser? Someone's gotta teach it. Which means it has to learn not only facts, but how to evaluate facts. So it has to learn facts, and how to handle facts - which means it has to learn how to learn. Which means you need to know that answer from the get-go. Tortuous games with logic aside, the onus is now much more heavily on the designer to have a functioning base - whereas with the Cyc approach, the only 'correctness' required is that of the information, and perhaps that of associativity or weight - which can be tweaked, dynamically. The actual structure of how that information is related, acquired, and stored is not relevant once decided. Having said all this, Cyc is (from the limited demos I've seen) quite impressive at dealing with information handed to it. It just wouldn't do very well at deciding what to do with that information - that's the job of the humans that gave it the info. It can tell you about the information, but not what to do with it. That task requires volition, really.
Volition is a killer. What is it? How do you simulate it? How do you create it? Is it random action? Random weighted action? Path-dependent action? Purely nature, purely nurture? When it comes down to it, the human is (as far as we know) not a purely reactive system, which Cyc (AFAIK) is. Learning requires not only accepting information, but deciding what to do with it - deciding how it will be integrated into the whole. If the entity itself isn't making that decision, then the programmer/designer/builder has already made it in the design or code - and then it's not really learning, is it?
Sorry if this is confused. As I said, I don't do this for a living.
Re:Babies have an instinctive understanding of 're (Score:2)
How do we decide what exactly to attend to in the visual scenes in front of us? (The marketing types want to know this so they can feed us more advertising, the psychology types want to know this so they can figure out how attention is parcelled out) Example, "looming" is when something is approaching rapidly and may strike the body or head: the CNS attends to this quickly if stereopsis is present and causes the body to
Re:Babies have an instinctive understanding of 're (Score:3, Insightful)
It's certainly not poorly funded. Whether it's adequately funded, or on the right track, is a different question, of course.
Cycorp chose a different tack; they decided that rather than trying to build a reality and correctness filter, they'd rely on human brains to do it for them (like trusting your parents implicitly) and instead concentrated on the connectivity of the 'facts' accrued by the 'baby.'
A decade ago, they still hoped that once they
Re:Babies have an instinctive understanding of 're (Score:3, Interesting)
Well maybe the realized that it's hard (Score:4, Interesting)
Doubly so if you have no goals, and your task is just to "learn". It would come back with garbage.
Perhaps the real killer is that even if it did learn something, the information acquired in its unguided search through the internet would be completely alien. You'd then have to launch a second project to figure out what the hell your little guy learned.
And you'd probably figure out it was mostly garbage.
What Went Wrong? (Score:5, Funny)
That's what went wrong. Basically, it don't work.
AI (Score:2)
What the Baby is doing (Score:2)
--LWM
Narrow IO Insufficient (Score:5, Interesting)
Some researchers now believe that "the intelligence is in the IO". See for example the human intelligence enterprise [mit.edu].
We could tell you.. (Score:2)
Google for the correct term (Score:2, Informative)
project terminated (Score:2)
If you decide to continue this work, make sure the spark plug is out in the open so you can piss on it if necessary.
Larry Page Should Seed the K-Prize (Score:3, Interesting)
Let anyone submit a program that produces, with no inputs, one of the major natural language corpuses as output.
S = size of uncompressed corpus
... or the Kolmogorov-like compression [google.com] ratio.
P = size of program outputting the uncompressed corpus
R = S/P
Previous record ratio: R0
New record ratio: R1=R0+X
Fund contains: $Z at noon GMT on day of new record
Winner receives: $Z * (X/(R0+X))
Compression program and decompression program are made open source.
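The proposed payout rule above can be sketched in a few lines. This is just my own illustration of the poster's formulas (the names `corpus_size`, `program_size`, etc. are mine, not part of any real prize):

```python
# Hypothetical sketch of the proposed K-Prize payout rule:
#   R1 = S / P, X = R1 - R0, prize = Z * (X / (R0 + X))
# All identifiers here are made up for illustration.

def payout(corpus_size, program_size, previous_ratio, fund):
    """Return (new_ratio, prize) for a record-setting entry, else None."""
    new_ratio = corpus_size / program_size       # R1 = S / P
    improvement = new_ratio - previous_ratio     # X = R1 - R0
    if improvement <= 0:
        return None                              # not a new record
    prize = fund * (improvement / new_ratio)     # Z * (X / (R0 + X))
    return new_ratio, prize

# Example: a 1 GB corpus emitted by a 100 MB program, beating a ratio of 8.0,
# with $50,000 in the fund.
print(payout(1_000_000_000, 100_000_000, 8.0, 50_000.0))
# -> (10.0, 10000.0): ratio 10.0, improvement 2.0, prize $10,000
```

Note the payout shrinks toward zero as improvements get marginal, which keeps the fund from being drained by trivial record-breaking.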
If Larry has any questions about the wisdom of this prize he should talk to Craig Nevill-Manning [waikato.ac.nz].
If, in the unlikely event, Craig Nevill-Manning has any questions about the wisdom of this prize, he should talk to Matthew Mahoney, author of "Text Compression as a Test for Artificial Intelligence [psu.edu]"
"The Turing test for artificial intelligence is widely accepted, but is subjective, qualitative, non-repeatable, and difficult to implement. An alternative test without these drawbacks is to insert a machine's language model into a predictive encoder and compress a corpus of natural language text. A ratio of 1.3 bits per character or less indicates that the machine has AI."
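The bits-per-character metric Mahoney describes is easy to compute. A rough sketch, using stdlib `zlib` as a stand-in compressor (a generic compressor, not a language model - real English text lands well above the 1.3 bpc "AI" threshold this way; a repetitive test string compresses far better than real prose):

```python
import zlib

def bits_per_character(text: str) -> float:
    """Compressed size in bits divided by the raw character count."""
    raw = text.encode("utf-8")
    packed = zlib.compress(raw, level=9)
    return len(packed) * 8 / len(raw)

# Highly repetitive input: zlib does very well, unlike on real prose.
sample = "the quick brown fox jumps over the lazy dog " * 200
print(round(bits_per_character(sample), 2))
```

Swapping `zlib.compress` for a predictive encoder driven by a candidate language model gives exactly the test the quote proposes.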
This "K-Prize" will bootstrap AI.
OK, so he can christen it the "Page K-Prize" if he wants.
Some random mindpixels... (Score:3, Interesting)
1.00 Fish must remain in water to continue living.
0.68 truth is a relative concept
0.89 we all need laws
0.94 is shakespeare dead?
0.91 is intelligence relative ?
0.97 Doors often have handles or knobs.
1.00 A comet and an asteroid are both moving celestial objects.
0.96 Is Russian a language?
0.00 are the northern lights viewable from all locations ?
0.86 Being wealthy is generally desirable.
0.79 Democracy is superior to any other form of government
0.90 aRE TREES GREEN
1.00 Is eating important?
0.02 Is sex a strictly human endeavour?
0.14 Snails are insects.
1.00 velvet is a type of cloth
0.37 are you lonely ?
0.81 If GAC makes a mistake, will it learn quickly?
0.86 a cat is a mammal
0.85 Memorex makes recording media
0.06 most people enjoy frustrating tasks
0.04 Lima beans are a mineral.
0.07 Star Wars is based upon a true story
0.92 is it okay for someone to believe something different?
0.97 do you breath air ?
0.59 Some people are more worthy dead than alive.
1.00 sunlight on your face is in general a pleasant feeling
0.93 DOA stands for "Dead On Arrival"
0.00 Could a housecat bite my arm off?
0.42 Is the herb Astragalus good for your immune system?
0.00 worms have legs
0.33 Is it necessary to have a nationality?
0.93 Getting forced off the internet sucks!!!
0.90 Bolivia is a country located in South America.
0.92 Massive objects pull other objects toward their center. The pulling force is gravity.
1.00 xx chromosomes produce a girl
0.13 Do all people in the world speak a different language
0.78 Human common sense is a combination of experience, frugality of effort, and simplicity of thought.
1.00 The use of tobacco products is thought to cause more than 400,000 deaths each year.
0.90 Is a low-fat diet is healthier than a high-fat diet?
0.00 you should kill all strangers
1.00 Electrical resistance can be measuter in ohms
0.73 Esperanto, an artifical language, can never be really valuable because it has no cultural roots.
1.00 Swimming is good for you.
0.57 the end justifies the means
0.13 Is Martha Stewart a hottie?
1.00 1 mile is about 1.6 kilometer
0.76 The US elections are of little interest to 5,000,000,000 people.
0.00 November is the first month in the normal calendar.
0.77 is a music cd better than a olt time record?
1.00 Music can help calm your emotions
0.80 a didlo is a sex toy
1.00 Running is good exercise.
0.00 No building in the world is made of wood
0.06 Is sauerkraut made from peas?
0.11 DID MICKEY MOUSE SHOOT JR
1.00 is keyboard usual part of computer?
0.96 Tokyo is the capital of Japan.
0.93 In general men run faster than women.
1.00 is russia near china
Re:Some random mindpixels... (Score:2, Funny)
Re:Some random mindpixels... (Score:3, Funny)
What about flying fish?
And what about evolution? Does this mean your mindpixel mind will believe in creationism? So much effort and then we'll end up with an artificial fool!
The simple answer (Score:2)
What you describe is more likely to come from genetic engineering than from computer based technology.
Still the wrong approach (Score:3, Interesting)
I'm pretty sure that anything that looks even remotely like intelligence will never be achieved by a mechanism that isn't useful for itself. Intelligence has one reason to exist, survival, and at least our concept of it has to be linked to the environment.
Imagine you were born a brain in a vat: blind, deaf, mute, lacking all ways of sensing the environment except a text interface somehow connected to your brain. Does somebody really believe that given such terrible limitations it's possible to make an entity that can somehow relate to a human and make sense? The whole concept of a surrounding 3D environment would make absolutely no sense to it.
I think it doesn't matter how much stuff you feed to Cyc, it will never be able to understand it. How could it even understand such things as the different colors, the whole concepts of sound, space, movement, pain if it's not able to feel them? These things are impossible to explain to somebody who doesn't have at least some way of perceiving at least part of them.
I think that Steve Grand (the guy who made the Creatures games) has a good point here. To make an artificial being you'd need to start from the low level, so that complex behavior can emerge, and provide a proper environment.
Who will teach it? (Score:2)
The danger is that this thing will learn the wrong things by reading the Internet.
It will know every sexual technique known to man. It will learn to commit all kinds of hate crimes. Other stuff like that. Or, hundreds of people might provide good vs. evil inputs to this thing as it learns.
AI under a different name (Score:3, Interesting)
What the "baby bootstrap" is really referring to is "the great emergent AI" which, like HAL-9000, will be able to empathize with humans, navigate a starship, and play a mean game of chess -- because if a system can perform one intelligent operation, it can perform another operation requiring an equal amount of intelligence, right?
One major stumbling block (I think) is that of optimization. The relatively simple problem of speech recognition takes a major percentage of a modern CPU's power, and is still 95-98% accurate. This is heavily optimized software written by very smart people with a couple decades of research behind it.
A hypothetical "great emergent AI" system would have to perform the function of speech-recognition -- since it is supposed to be like a child or like a HAL-9000 -- but it would have to come up with a same-or-better implementation of this very complex algorithm, using some emergent process. It would have to figure out the equivalent of FFTs, cepstral coefficients, lattice search
What we think our brain does is solve problems with a semi-brute-force algorithm. (Just throw billions of neurons at it!) However we still don't have the kind of computing power to implement a one-algorithm-fits-all learning process like the brain. Unfortunately, research for this "generic learning" is in a rut, with genetic algorithms and neural networks being exhausted top contenders. What will be next?
Chinese Room, Phenomenology, bla, bla (Score:4, Interesting)
There are several arguments against the possibility of strong AI. First and foremost, there is disagreement on fundamental philosophical issues.
All proponents of strong AI have to somehow make a stand against at least John Searle's famous Chinese Room argument [wikipedia.org] and Terry Winograd's [wikipedia.org] phenomenological (and biological) account, in his book Computers and Cognition. Hubert Dreyfus [berkeley.edu] provides, of course, an even deeper phenomenological argument in "What computers (still) can't do". (Dreyfus does give Neural Networks some chance, perhaps that is why the original poster is still enthusiastic about the "Baby Bootstrap"?)
Since their arguments are available in the links above and/or other places on the web, I will not repeat them here. My point is that anyone who is seriously interested in AI has to really consider their philosophical ground, and has to do so in the light of arguments against it. After all, the arguments pointed to above are still more recent than arguments for strong AI.
In other words, I would like to ask of (strong) AI proponents to answer just what this "learning" is, that the baby bootstrap is subject to? What "knowledge" will it contain? Oh, and what about its means of "expression", "language" as you may call it?
Babies are Hard, Despite their Skull Softness (Score:4, Insightful)
1. Babies come built with millions of years of evolution. There's a lot of skill and a surprising amount of knowledge (depending on who you ask) in the large and bulbous head of a baby.
2. Babies generally come with parents who spend a lot of time teaching. The baby learns some things by induction, but learns a lot by conscious teaching.
3. A lot of a baby's first two years are spent learning things a (non-robot) computer can't. How to hold a mother. How to avoid falling flat on one's face. What things belong in the mouth. How to eat solid food without choking. How to pee in the toilet. How objects move when touched. What faces are likely to provide food and attention. What happens when you pull a cat's tail.
4. A lot of the things a baby learns later in life are aided greatly by the learning in #3. Imagine learning how humans are likely to behave without having watched humans behave.
5. A baby learns language with the help of rich sensory input. It's a lot easier to learn the meaning of "goat" when you can see a picture of a goat. The Internet offers precious little of this.
Now, DARPA thrives on funding hard problems. And a lot of progress has been made on learning within a domain (e.g. speech processing). But building a general-purpose learner is very hard.
Humans have immense evolution behind general-purpose learning, and we struggle with it. Getting a 3-year-old to know what a 3-year-old knows takes around 3 man-years, not counting the child's time. And what would DARPA want with a computer with the knowledge of a 3-year-old? They've got ready access to thousands of 18-year-olds. Add to that the time to code up tens of thousands of years of evolution that is still far from well understood, and you're looking at a problem far too large to tackle in one go.
DARPA hasn't put a lot of effort into general-purpose learning for the same reason few people work on single programs which can play chess, go, checkers, backgammon, Monopoly, and Magic: the Gathering well. It's a lot easier to do it a piece at a time.
They're still working on artificial stupidity... (Score:5, Funny)
Computer scientist Arthur Boran was ecstatic. A few minutes earlier, he had programmed a basic mathematical problem into his prototypical Akron I computer. His request was simply, "Give me the sum of every odd number between zero and ten."
The computer's quick answer, 157, was unexpected, to say the least. With growing excitement, Boran requested an explanation of the computer's reasoning.
The printout read as follows:
A few moments later there was an addendum:
Followed shortly thereafter by:
Anyone doing conventional research would have undoubtedly consigned the hapless computer to the scrap heap. But for Boran, the Akron I's response represented a startling breakthrough in a little-known field: artificial stupidity.
Boran is the head of NASA, the National Artificial Stupidity Association ("Not to be confused with those space people," he is quick to point out), a loosely-knit band of computer-school dropouts currently occupying an abandoned fraternity house at the University of New Mexico.
Funny you should mention that... (Score:5, Informative)
I just got back from a workshop on this very subject, but nobody uses the term "baby bootstrap". It is now called "Developmental Robotics [wikipedia.org]", and encompasses embodied agents, machine learning, and other biologically-inspired metaphors.
There is now a website dedicated to the idea. See http://DevelopmentalRobotics.org/ [developmen...botics.org] and http://cs.brynmawr.edu/DevRob05/ [brynmawr.edu] for a collection of papers on the subject.
There is way too much bullshit in this field (Score:4, Informative)
The expert systems people hit a wall in the mid-1980s. An expert system is really just a way of storing manually-created rules. And those rules are written with great difficulty. There used to be expert systems people claiming that strong AI would come from rule-based systems. (Read Feigenbaum's "The Fifth Generation"). You don't hear that any more.
Hill-climbing systems (which include neural nets, genetic algorithms, artificial evolution, and simulated annealing) all work by trying to optimize some evaluation function. If the evaluation function is getting better, progress is being made. But what this really means is that the answer is encoded in the evaluation function. If the evaluation function is noisy (as in, "does the creature survive") and requires major simultaneous changes to make progress (as in "evolutionary jumps"), hill climbing doesn't work very well. There is progress, though. Koza's group at Stanford is moving forward, slowly.
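The point that "the answer is encoded in the evaluation function" is easy to see in a minimal sketch. Here the evaluation function literally contains the target, so climbing toward it is trivial; with a noisy or deceptive function the same loop stalls. All names here are my own illustration:

```python
import random

def hill_climb(evaluate, length=20, steps=2000, seed=0):
    """Greedy bit-flip hill climbing: keep a flip only if it doesn't hurt."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(length)]
    score = evaluate(state)
    for _ in range(steps):
        i = rng.randrange(length)
        state[i] ^= 1                  # try flipping one bit
        new_score = evaluate(state)
        if new_score >= score:
            score = new_score          # keep the improvement
        else:
            state[i] ^= 1              # revert the flip
    return state, score

# The "intelligence" is all in evaluate(): it already knows the answer.
target = [1, 0] * 10
best, best_score = hill_climb(lambda s: sum(a == b for a, b in zip(s, target)))
print(best_score)  # reaches the full score of 20 on this smooth landscape
```

Replace the smooth match-count with a binary "does the creature survive" signal and most flips return the same score, giving the climber no gradient to follow - which is the wall the post describes.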
The formal logic people never made much progress on real-world problems. Formalizing the problem is the hard part. Once the right formalism has been found, the manipulation required to solve it isn't that hard. There's not much work going on there any more.
The reactive robotics people also hit a wall. Literally, as every Roomba owner knows. Reactive control will get you up to the low end of insect-level AI, but then you're stuck.
Reverse-engineering brains still has promise, but we can't do it yet. Progress is coming from trying to reverse engineer simple animals like sea slugs. (Sea slugs have about 20,000 neurons. Big ones.) Efforts are underway to completely work out the wiring. Mammals are a long ways off.
Lately, there's been a trend towards "faking AI". This comes under such names as "social computing". The idea is to pick up cues and act intelligent when interacting with humans, even if there's no comprehension. This may have applications in the call center industry, but it's not intelligence.
I run one of the DARPA Grand Challenge teams, Team Overbot. [overbot.com] On a problem like that, you can definitively fail, which means there's the potential for real progress. That's why it's worth doing.
Learning What?? (Score:3, Interesting)
The input stream at a terminal would hardly appeal to a child so how can a proper evaluation of the learning be done?
Suppose the input is a sequence of zeros and ones. Could the AI come to any kind of understanding? Perhaps a prediction of whether the next input might be a 0 or a 1, eh? But no! Let's fool the AI now by telling it who is the real boss. The AI has no idea that it is being spoken to by a terminal. The next input is the letter "g". How unpredictable!
Garbage in, garbage out - let's look carefully. A child plays and experiments. A great deal of a child's theories are garbage. The world in a child's eyes is a set of samples. Like the Mars rovers a child could follow a path that seems fairly limited in character, then bingo, something new comes up.
Intelligent behavior in a child emerges when different theories are assembled towards a goal. First the child realizes that s/he has some ability to either influence the environment or to manipulate information (which may be stored as symbols or images, as far as a computer is concerned). If the child conceives of particular classes of objects, the child can begin to reason. Several concepts such as self, ability, action, time, place, class, possession, etc. would be regarded as fundamental or at the very least useful. As a child accumulates and refines these concepts in the mind, the child can reason more and more correctly or effectively.
A simple artificial world can be represented as a set of strings that are transmitted to a baby bootstrap. The simple strings would serve as a bootstrap for priming the learning mechanism by letting it realize a number of essential concepts. Then more complex worlds as well as more arcane representations (such as natural language) can be used in order for the AI to interact with the greatest possible group of users.
Still, the limited input feed is bound to cause the most ridiculous problems. Pointing out that the learning system has a big memory doesn't give me any idea what the machine will achieve.
Re:I for one (Score:3, Insightful)
It is some pretty neat stuff, especially if you are having trouble enlisting enough humans to fight wars for you.
Re:I for one (Score:2)
Re:I for one (Score:5, Insightful)
A human will black out during some types of maneuvers unless the aircraft is prevented from making them (from simple tricks like spring return to center for the stick after a blackout to computers that measure g force and won't let the flight envelope go that far in the first place.)
Pilots use "G-suits" to try and keep blood in their heads by controlling pressure on their legs (for instance) but you can only go so far with that type of thing. And, as it's low tech, the opposition can do it as well.
An AI won't have a problem with a very high G turn. A human is in deep trouble. Airframes can be designed for considerably more than a human can take, if there is no human pilot. If there is, there is little point in such a design -- the aircraft will become pilotless if it enters such a flight regime.
Now, put this up against the fact that most other countries can't afford to put an AI in the pilots seat, and the result is continuous overwhelming air superiority without risk to humans on our side. That's the combination of factors that drives the urge to go in this particular direction.
Re:I for one (Score:3, Insightful)
All is fair in love and war, but starting wars without human loss on our side seems like we have nothing to lose, as far as life is concerned. And that's kind of scary. I hope it is never used as a justification for fighting. One of the costs of war is the life you may lose and if that's too compared to wh
Re:I for one (Score:4, Insightful)
No. The only thing that is fair is when things are fair.
Any time there is a serious imbalance, there is a risk that the side holding the best cards will use that power in a manner that no one else is able to justify.
We see it at every level of human endeavor; children who bully non-conformists, husbands who beat their wives essentially because they can (and wives who bully, browbeat and otherwise abuse husbands because they're constitutionally unable to respond), churches who excommunicate or otherwise sanction members when those members don't toe the line (instead of counseling and advising and the reasonable things a social group with a particular outlook can do), cities that take property from landowners not to leverage a service to the public, but to enable a commercial enterprise, states that uniformly take children from fathers under the absurd presumption that mothers are superior human beings, countries that take resources from weaker countries or force them to adopt their way of life (for the former, Saddam's invasion of Kuwait serves as a good example, for the latter, our recent invasion of Iraq serves just about as well, IMHO.)
In contrast, the underlying ethics of a particular person or institution are what prevents abuses of power; as soon as a person or institution becomes bereft of ethics, or if they never had a solid ethical foundation, misuse of that power is almost inevitable. History shows us again and again that power has the same effect as a drug on some personalities, and often those personalities are the ones who seek and obtain power.
It doesn't do any good to hope, or wish, at least I don't think it does. If you don't raise your children carefully, if you allow your children to bully, if you stand for your church sanctioning those who aren't "normal", if you allow cities and states and governments to walk on you and walk on others... then you, and everyone else, reap what you sow.
With regard to war -- politicians are typically willing for you to lose your life; the political will to go to war is entirely divorced from the fear of dying in war. They have the will; you have the fear. You need ethics and principles to control over-reaching governments. I always thought that the politicians who declare war should be in the first year's mandatory front-line participants. Might calm them down a bit. Unfortunately, it's not that way. There are even covenants in place where politicians are immune from attack. I'm not talking about ambassadors, which of course is sensible, I'm talking about heads of state. Disgusting, in my view.
I launched this rant (sorry) because I feel that in the US, we've lost our way. 20 years ago, the idea of the US attacking another country without ourselves having been attacked was laughable. Today, it is the norm. I sympathize with your hope, but I must observe that it is not hope that will rein in the kind of people who run our government. If we sit around and let them continue to abuse us, and the people around us, all the hope in the world won't prevent a pariah status far more intense than the one we "enjoy" already.
It's not about (more) overwhelming power. Don't focus on power now. We're way too far along for that (go look up what a J-SOW does, for instance, or consider how a stealth fighter will fare against some third-world's 1960's-era surplus radar installation.) It's about ethics. Look at the US government. Decide if you like what you see. At the very least, vote against those who you feel are doing wrong. We have the power as a group to say "if you do this, you will not stay in office" and truly, right now, I think that's all most of these politicians understand.
Re:I for one (Score:3, Interesting)
Re:I for one (Score:3, Interesting)
An expensive remote-controlled fighter is useless unless it has onboard AI at least good enough to disengage from combat and return home on its own if it loses its control signal. Even at that, it would probably still not be worth the expense unless it could actually carry out a combat mission without a remote pilot. Jamming signals is just too easy to trust that the enemy won't be able to do it.
Re:I for one (Score:2)
Re:I for one (Score:3, Insightful)
Re:I for one (Score:5, Insightful)
DNA speaks in the language of proteins. You can't tell what sort of cell a piece of DNA is going to produce or how the cells it produces will be arranged without running the simulation all the way down to the protein level. We have no other cookbook for how to arrange these simulated cells once they exist except a long list that says "produce this protein, then this one, then one of these, then another one, then this...", and we've not any clue how those proteins get turned into a person. We can understand the process at the chemical level, and no higher. The finished product, of course, isn't like that at all. We understand humans on the levels of cells and organs, but DNA isn't so conveniently arranged.
Simulating cells is not sufficient. If it were, we could pour a couple gallons of blood into a bathtub and say "Behold, it is human." The organization of the cells matters just as much as the cells themselves. Simulating a human being to the level of even cellular precision would require that we be able to *scan* a human being at the cellular level to see how he's put together. If we actually knew the weightings of all the neuronal connections in a person's brain, then connectionist AI approaches might be able to produce real intelligence. To quote Levels of Organization in General Intelligence [singinst.org], "The classical hype of early neural networks, that they used 'the same parallel architecture as the human brain', should, at most, have been a claim of using the same parallel architecture as an earthworm's brain." You can't expect high-level organization from low-level simulations unless you want to simulate all the way down to DNA, where the information behind the complexity is really stored.
Or you build the complexity yourself, without relying on the hideously-designed mess that is Homo sapiens. But that's a different kettle of fish.
Re:Maybe it's a good thing they failed (Score:5, Interesting)
It _would_ learn about hacking. Come on. Such an entity would be born in a pure data environment. Getting through a basic firewall would probably seem like jumping over a small fence does to a 6-year-old. Getting over a better firewall would probably take time - in the sense that the entity would need to learn - but, since it would become a survival trick, it would happen.
Artificial intelligence is not bad in and of itself at all.
No technology is either good or bad. Only the use we make of it can be considered as such, and it still depends on what you consider is good/bad. If I was to say "War on Iraq is bad", how many people would react by saying it's good?
The problem is when we want a machine that thinks like humans, especially a program that could potentially control our military.
I don't think that's the point of the "baby bootstrap" thing. The only point is to get it to think. But, just like you learnt how to think according to the way you perceive the world, through your five human senses, an AI built that way would react according to its own senses. How it would interpret that data and react to it is something - I'm willing to bet - that would be completely alien to us.
Given the record of flesh and blood humans toward each other in the 20th century alone, an artificial life form with the same basic psychological makeup as a human would be potentially an evil that'd make Hitler, Stalin and Pol Pot look like church ladies.
This is only valid if you don't consider what I just said. Such an AI would probably be more interested in getting the human race to serve it in an absolutely hidden way - build more computers, extend the networks, research better networking technologies - until it _can_ replace us. Even then, that would make sense from an evolutionary point of view.
AI that is capable of adapting to only one scenario is probably for all intents and purposes totally safe.
This is called an automaton. It is not AI.
. AI that is capable of adapting in general and learning like a human will probably ultimately have the same psychological defects as a human, including a propensity for violence.
Most of the defects you are speaking about are related to our very nature - we are, after all, an evolution of omnivorous primates. We are therefore predators, with an important tendency towards territorialism and whatever comes with it. We are stuck somewhere between instinct and reason. Anyway, my point is that even if an AI was to learn "like" a human ("by undergoing the same process"), it certainly wouldn't react like one.
Re:Maybe it's a good thing they failed (Score:4, Insightful)
I disagree. I think that's like saying that since we're made up of tiny biological factories (our cells) that we should be able to consciously manipulate the world around us on a chemical level. But that's not how it works - there are many, many layers of complexity between our conscious thoughts and those low-level functions.
I doubt a purely virtual creature would have any more influence over its existence at such a low level than we do.
Re:Maybe it's a good thing they failed (Score:3)
What's true of humans isn't true of all possible minds. Humans had a lot of animal instincts before general intelligence showed up, and we're not free of them yet. Our propensity for violence exists because it was evolutionarily adaptive for humans and for a lot of mammals before us. Future AIs will not be evolved in mammalian a