What AI Elements Could Improve the Web?

DavidpFitz asks: "I'm entering my final year of my Artificial Intelligence/Computer Science degree at Birmingham Uni. (UK). The trouble is that I can't decide what to do for my final project. I'd like to do something of practical value delivered over the web (things like an intelligent Slashdot filter spring to mind :-), but I always come up with reasons against everything I think of. Can anybody think of ways they would like a web site to react more intelligently than it currently does? Clever shopping carts? More targeted news? Both of these are rubbish, I think - so more interesting and complex ideas are welcome! The main thing is that it has to have a strong AI element in it, not just appearing to be clever."

Interesting thought. So if we were to apply more AI to the web, what areas should we target? And I feel this is a valid question even if someone may be using these ideas for their school project. These are still just that: ideas. DavidpFitz will have to finish and implement his final project regardless of anything said in this forum, so why not take his line of reasoning, brainstorm a bit and have some fun with it?

  • Have an AI that keeps track of sports scores.

    Well, that's what's selling all these damn web-enabled cell phones I keep hearing advertised.
  • So, for example if you give it a photo of a person, it gets you all photos in which that person appears

    That's a hard AI problem. If you can solve it, I can assure you you'll be famous.

    I know of a program that scans images (from the web or other places) and picks out porn. Don't laugh, it's real. The program (I don't remember its name) selects pictures containing nude bodies. It works by recognizing skin tones: not absolute color values, I think, but rather certain color gradations.
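
    A minimal Python sketch of a skin-tone heuristic of this general kind; the RGB thresholds and the skin-fraction cutoff are invented for illustration and are almost certainly not what the actual program used:

        # Toy skin-tone heuristic: flag an image when enough of its pixels
        # fall in a rough "skin" region of RGB space. Thresholds are guesses.
        def looks_like_skin(r, g, b):
            # red dominant over green and blue, moderate overall brightness
            return r > 95 and g > 40 and b > 20 and r > g > b and (r - b) > 15

        def skin_fraction(pixels):
            """pixels: iterable of (r, g, b) tuples; fraction that look like skin."""
            pixels = list(pixels)
            if not pixels:
                return 0.0
            hits = sum(1 for (r, g, b) in pixels if looks_like_skin(r, g, b))
            return hits / len(pixels)

        def probably_nude(pixels, threshold=0.33):
            return skin_fraction(pixels) > threshold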

    Face recognition (which is what you are talking about) is being actively worked on now. One of the applications is being able to automatically identify people observed by the ubiquitous security video cameras. Would you like to live in an aquarium?


    Kaa
  • How about an agent system that negotiates with coders about what to develop for open source projects. Kind of an INTELLIGENT Bugzilla, that instead of being passive, is an ACTIVE system. "Active Feature Enhancement," or some such.

    An agent for each program, and for each developer. A classification system that learns about the kinds of bugs and updates there are, and learns about the abilities of the developers, and bridges the gap between them. That'd be awesome.

    It not only searches out developers to fix (or work on) code, it searches out programs that developers could make a big contribution to.

    Also, possibly, an agent for each bug reporter... And for each "reviewer" - people who verify bug fixes / enhancements.

    The system would learn, over time, for instance, that I like to complain about weird interfaces and constantly demand good help systems. It would then know to use me as a "reviewer" of fixes to those kinds of problems. "Dear VikingCoder, the AFE system has received a fix to a help system problem in emacs, would you be willing to review and grade the fix?"

    Also, it learns that I'm a very apt OpenGL coder, and learns that if it asks me to write some code, other people are likely to grade my code highly.
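
    A hypothetical sketch of the matching step in such an "Active Feature Enhancement" system: keep a per-person affinity score for each kind of bug, update it from review grades, and route new reports to the best-scoring person. All names and numbers are made up.

        from collections import defaultdict

        affinity = defaultdict(lambda: defaultdict(float))  # person -> tag -> score

        def record_feedback(person, tags, grade):
            """Update a person's affinity after their fix or review is graded (0..1)."""
            for tag in tags:
                # simple exponential moving average toward the latest grade
                affinity[person][tag] = 0.8 * affinity[person][tag] + 0.2 * grade

        def best_reviewer(tags, people):
            """Pick the person with the highest summed affinity for a bug's tags."""
            return max(people, key=lambda p: sum(affinity[p][t] for t in tags))

        record_feedback("VikingCoder", ["help-system", "ui"], 0.9)
        record_feedback("someone_else", ["opengl"], 0.8)
        print(best_reviewer(["help-system"], ["VikingCoder", "someone_else"]))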

    HUH? Not bad, eh? This would be a cool, cool, cool system.

    You'd probably be able to pawn it off on SourceForge, for instance...

    Good luck with your plans!

  • Two things (neither of which I made clear in my original post):

    1) I'd like to be able to say "Digital Camera, holding at least 36 1024x768 images, no LCD, weight less than .25 lb". Then the software has to find ALL models that match and price compare them for me.

    2) Reliably finding the price of a given item for even ONE site would be non-trivial (for some sites). Finding it for many sites would take AI (until we get smart and start using some standard XML for this kind of thing).
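
    The constraint-matching half of (1) is straightforward once the specs are in structured form; a minimal sketch with an invented catalogue follows. The hard part, reliably extracting those specs and prices from arbitrary sites as in (2), is exactly what is not attempted here.

        cameras = [
            {"model": "A", "capacity": 40, "lcd": False, "weight_lb": 0.22, "price": 299},
            {"model": "B", "capacity": 36, "lcd": True,  "weight_lb": 0.30, "price": 249},
        ]

        def matches(cam, min_capacity=36, lcd=None, max_weight=0.25):
            if cam["capacity"] < min_capacity:
                return False
            if lcd is not None and cam["lcd"] != lcd:
                return False
            return cam["weight_lb"] <= max_weight

        # all matching models, cheapest first
        hits = sorted((c for c in cameras if matches(c, lcd=False)), key=lambda c: c["price"])
        for c in hits:
            print(c["model"], c["price"])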
    --
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • It strikes me that perhaps the part of the web most in need of intelligence is search engines. It would be extremely useful (not to mention entertaining) to, for example, give web crawlers a more human-like 'personality': the capability to dislike a site because of an apparent lack of content, the number of hits on a hit counter, flashy graphics, etc.
    But of course, AI should be dynamic... perhaps in such a situation you could program the crawler to use click-through data from the parent search engine (with the users' permissions of course) to try and formulate an idea of what people like in a website, thus helping the page rankings.

    On the other side of things it would be an interesting AI challenge to try to work out what kind of thing people actually want to find when they search for certain terms, in terms of the content and function of the site.

    Of course, this may just be a silly idea, but I suppose it might help :)
  • Me: i'm thinking of going scuba diving in monterey this weekend

    AliceBot: What is this "thinking"?

    The defense rests your honor.

  • There have been many suggestions for pre-emptive content searches, but what about browser and content history? Something akin to an automagically accumulating everything.com [everything.com].
    For example, use semi-structured data parsing techniques (based on content, tags, etc...) to loosely organize the information on the web sites that you visit. Later on, you will be able to search, browse and correlate all this information that you have "seen" but not retained.

    To me, this would solve several very practical problems: Where did I see X the other day? Isn't X like some Y that I heard about?

    In essence, an automated bookmarking utility based on content rather than location.
  • Any other features?

    One thing I would like to see is something that lets you shop for the best price on more than one item at a time (with both items being from the same website) - if, for example, I wanted prices on a specific digital camera and the accessories to go with it, then I would be able to search for websites selling both, sorted in order of lowest combined price.

    Of course, that isn't needed as often as a general price search for one specific item, but it could save a lot of time when it is needed.
  • If the poster's at Brum Uni, he's probably working under Professor John Barnden, who's doing plenty of research into the use of AI in responding to metaphors. So yeah, the whole Autonomy/Bayesian/remembrance thing: being able to deal with the fuzziness of human language and concepts, to get beyond the rather geekish world of the algorithm and of mathematical certainty.

  • Better yet, The Diamond Age, or A Young Lady's Illustrated Primer by Stephenson.
  • by vlax ( 1809 ) on Tuesday May 16, 2000 @08:35AM (#1068817)
    The problems of the web that can have AI-type solutions are generally semiotic in nature. We could have intelligent agents, able to pay our bills and do our shopping for us, if we had a system of symbols that mapped clearly and unambiguously to meanings, or a non-symbol processing system able to decode the web the way a human does. Most likely, we'll get something midway between them.

    The GOFAI idea that human cognition is a matter of disembodied symbol processing is dead, and good riddance. However, our computers remain most useful as symbol processing engines, not as the more complex kinds of massively parallel connection engines that most people think brains are. We can get computers to emulate that kind of brain functionality, but only on a very limited scale compared to the human mind. Human-equal parallel machines are not just around the corner, so we have to augment the connectionist systems we have with symbol processing facilities.

    The kind of project that interests me is a system that uses XML formats to provide clear semantics on the web, and connectionist methods to make judgements about how to act in response to those symbols. A system, for instance, that can scan an XML resource for information about rock concerts or movie listings and, having learned in more connectionist ways the preferences of the user (in terms of cost, scheduling and personal taste), can inform them of events they might like to see, perhaps even going so far as to make tentative reservations when it's very confident.
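
    A rough sketch of that hybrid idea: the XML supplies the facts, and a (here trivially simple) preference function stands in for the learned, connectionist part. The schema and weights are invented for illustration.

        import xml.etree.ElementTree as ET

        listing = """<events>
          <event genre="rock" price="25" venue="Town Hall">The Examples</event>
          <event genre="jazz" price="40" venue="Blue Room">Trio Hypothetical</event>
        </events>"""

        # in a real system these weights would be learned from the user's history
        genre_weight = {"rock": 0.9, "jazz": 0.3}

        def score(event, budget=30):
            w = genre_weight.get(event.get("genre"), 0.5)
            over_budget = max(0, int(event.get("price")) - budget)
            return w - 0.02 * over_budget

        events = ET.fromstring(listing).findall("event")
        for e in sorted(events, key=score, reverse=True):
            print(e.text, round(score(e), 2))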

    The same kind of system could be used to solve library research problems. An XML document structures data semantically enough that a connectionist system can make quick, fairly superficial judgements about the contents and how they relate to the research needs of its users. It can then do more in-depth readings of the highest confidence documents, leading to better sources and new documents. In the end, it can provide the documents to users and assist them when there are gaps in their knowledge by pointing them to the document that fills the gap.

    The killer app for AI would be automatic translation. Since that's my field, I don't think it's somewhere you ought to go without a strong knowledge of linguistics, and of the past failures in the field. I have some ideas, but that's what my PhD is going to be about. :^P

  • Food for thought.

    Let's say a combination of AI, robotics, and asteroid mining makes open-source hardware projects realistic to the point where it's just as easy and cheap to create your own fusion-powered satellite as it is to, say, come up with a Linux distro.

    Then someone comes along and creates an open-source, web-controlled, nuclear-powered death bot, hosted only on a totally international, mirrored site set up with good encryption so users could be totally anonymous. In other words, a web site not subject to any government that could be used to kill someone.

    Something like that is still a few years down the road, thankfully.

  • What about COPPA? You can't keep any info about people younger than 13, which is definitely the target audience of most educational sites.

    Cait Sith
  • The Alicebot is a real piece of junk. All the responses I got were seemingly unrelated to what I typed in.
  • We've all heard and maybe even seen (hopping between the p0rn sites and other junk) that the Internet is a wealth of knowledge. How about implementing some sort of machine learning on a given domain, utilizing an NLP engine or something to extract data from web pages and learn new things about the domain.

    Another thing to think about is doing something similar and tying in forward or backward chaining (I don't know how viable it would be, though; it's been a while since I've looked at AI and it's all fuzzy logic at this point...).

    I know a couple years back, there was also a lot of work being done with putting web interfaces on intelligent agents, NLP engines, knowledge bases, etc. Might be worth looking into that also...

  • One of the earliest companies to try to use something resembling AI on the web was Firefly, and before they got subsumed by micros~1, their software (which became Passport) was fairly useful, in that it was good at making suggestions about music and, IIRC, books you might like based on others you've rated. The company I work for (we essentially make and run cobranded ecommerce sites) has also researched similar sorts of applications, with varying degrees of success. Taste matching isn't the sexiest application of AI, but it is one of the easiest and most commercially viable. Extensions to the sort of systems that CDNow uses might be:
    • meta-analysis of user surveys similar to that done on some psychiatric tests, i.e., assuming that the respondent isn't able or willing to fully express what they're thinking, and trying to draw patterns not just from what they answer but from how they answer.
    • learning fuzzy logic systems, which not only might provide more insightful predictions but also could prove useful in the age of the privacy backlash, when demographic information might not be readily available and the only information you have to go on might be site tracking information and referrer URLs.
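
    A minimal sketch of this kind of taste matching: find the most similar user by their ratings and suggest what they liked that you haven't rated. The data is invented, and real systems (Firefly's included) were considerably more sophisticated.

        from math import sqrt

        ratings = {
            "alice": {"ok_computer": 5, "kind_of_blue": 4, "nevermind": 2},
            "bob":   {"ok_computer": 5, "nevermind": 1, "blue_train": 5},
            "carol": {"nevermind": 5, "kind_of_blue": 1},
        }

        def similarity(a, b):
            common = set(a) & set(b)
            if not common:
                return 0.0
            num = sum(a[i] * b[i] for i in common)
            return num / (sqrt(sum(a[i] ** 2 for i in common)) * sqrt(sum(b[i] ** 2 for i in common)))

        def recommend(user):
            others = [(similarity(ratings[user], ratings[o]), o) for o in ratings if o != user]
            _, nearest = max(others)
            return [item for item in ratings[nearest] if item not in ratings[user]]

        print(recommend("alice"))  # -> ['blue_train']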
  • It might be difficult to do, but how about a tool that checks through a /. forum to see if what you're planning to post has already been commented on? Because if another one of my comments gets moderated as redundant, I may do something very bad.

    Technonerds: coming soon on z1nc.org.uk [z1nc.org.uk]

  • Let me assure you there is nothing "trivial" about a search engine - the amount of coding and research that goes into developing a new search engine on the order of Google or Fastsearch is anything but trivial. That's why every time you see a new academic study on a new method of searching the web, the folks involved end up leaving the university and forming a company (and all the research papers get harder to find after that too, funny eh?).
    The fact that most search engines are still not performing to the standard we might expect simply indicates the monumental task they face.
  • by Anonymous Coward
    Basic UI design should have no use for AI in the UI itself (ugh, that sounds ugly), and I don't know how useful it would be for the design process (it would have to be damn good). However, in customization processes, good AI could be very useful. Of course it would need components that would be very useful elsewhere as well, so they could be shared... What I mean is, flexible user interfaces have great advantages but are costly in time and other management resources. If you could have an AI that did the work of configuring the GUI for a non-technical user to his or her preferences, even when he or she is not very clear about what they are, that would go a long way toward intuitive interfaces. Basically it would do something analogous to the... (argh, ugly analogy)... interior designer (there, I said it!) in creating an environment that is "home" by guessing, without a lot of data, what someone would imagine "home" should be like. This is probably the worst analogy ever, but what the heck, it's an Anonymous Coward so no one will read it.
  • How about an AI that takes misspelled URLs and compares them to a list that you have previously visited and picks the "closest one", or allows misspellings as short-cuts, i.e. /. => slash.org
    Something like a plug in.

  • by kaphka ( 50736 ) <1nv7b001@sneakemail.com> on Tuesday May 16, 2000 @09:32AM (#1068827)
    "UI AI" is, IMHO, an ill-concieved idea that has had way too much work done on it in the past decade. The problem is very simple: if I spend a few minutes (or hours, or days,) learning a new interface, I want it to stay the same! I don't care if I never run "Backup", or if I visit Slashdot so much that it may as well be my home page... I don't want those settings changing unless I tell them to.

    MS Office is a notorious example of this. In the newer versions, if you don't use a menu item frequently, it vanishes, so users aren't "confused" by too many options. I used to work tech support, and believe me, having your menus change for no reason is far more confusing than having "too many options"... and it is frustrating to new users and experienced users alike.
  • Or perhaps make a coding assistant which looks through the branches of the code and creates error messages which are helpful and distinctive. So when the message from a particular branch of the logic is still not understood well enough to solve the problem, you can at least search for that error message in the Knowledge Base and find the more detailed explanation...and the AI could create the skeleton KB also -- with KB alterations by humans being fed back into the AI...
  • Rest assured, there's no animated paper-clips or any other gratuitous (and useless) nonsense in my desktop UI vision!

    Just think "adaptive behavior" :-)
  • it almost sounds like you're describing a genetic algorithm, which I think can work pretty well in this situation. let's see: you could have each member of the population moderate with its points and then have people meta-moderate to determine the fitness of the moderation. you would probably want to do significant training of the system before it went into public use.

    hopefully the algorithm would evolve to be complex enough to avoid being taken advantage of by first posters. it also increases the possibility that comments get moderated fairly. it's also backwards compatible with the current system and can be slowly phased in, by slowly increasing the percentage of algorithms to people.
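
    A toy sketch of that genetic-algorithm idea: each "moderator" is a weight vector over comment features, fitness is agreement with human meta-moderation on a training set, and evolution here is just selection plus mutation. Features, data and weights are all invented.

        import random

        # each training example: (feature vector, human moderation score)
        # features might be: length, link count, reply count, poster karma
        training = [([0.2, 1, 3, 0.9], 4), ([0.9, 0, 0, 0.1], -1), ([0.5, 2, 1, 0.5], 2)]

        def predict(weights, features):
            return sum(w * f for w, f in zip(weights, features))

        def fitness(weights):
            # negative squared error against the human moderation
            return -sum((predict(weights, f) - s) ** 2 for f, s in training)

        def evolve(pop_size=30, generations=50):
            pop = [[random.uniform(-2, 2) for _ in range(4)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]
                children = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                            for _ in range(pop_size - len(parents))]
                pop = parents + children
            return max(pop, key=fitness)

        print(evolve())  # best weight vector found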
  • Yeah, like something were it can tell if an MP3 is a Metallica song or not? :)
  • <P><I>Isn't this what Google does, and does very well? </I>

    Google? What's a Google? ;)

    g0o0G13 5UX! L1NX2g0 rUl3z! ;D

    George Lee

  • I guess this could be used to really make sites like Slashdot personalized. This would go further than simply comment moderation. The site would learn what stories, comments, etc. the user reads and thus is interested in. New stories and comments that match those that the user spent time on earlier would receive a bonus and would be presented to the user prior to those that don't match.

    This would go further than simply checking the section the news belongs to; it would also check the contents of the story. And regarding posts, it could check things like contents and who made the post.

  • That's completely true. I have also been part of a project that was started to implement AI into the OS at different levels. Unfortunately the project was cancelled after a year because most people thought it was too big to be collaborated on over the net. Anyway, if we could have AI for a system, not on the web, then yes! UIs do need an intelligent way to track users' interactions.
  • I've been fairly impressed with the Jump Start series of educational software for my son (now almost 6). He's had a couple of them. The latest is Jump Start Phonics, which features a limited version of IBM's ViaVoice speech recognition. My son plays it, 'cause the novelty of saying your answers appeals to him. Note to Mac users: I had to install this one on my PC rather than my son's Mac because the Mac version doesn't have the speech.

    It actually works pretty well and has done a good job of getting progressively harder as he improves. I can tell, too, that it does little things like keep drilling him on letter combinations/sounds he has trouble with. Also, when it's doing this and he gets several wrong in a row, it'll drop back to easier stuff (or ones that he knows) so he doesn't get too frustrated at getting like five in a row wrong.
    ---
  • If we had AI moderators, it would take about a week before someone found a way of exploiting them so they could always post stuff that would get moderated to +5. Wait, Signal 11 already did that and we don't even have AI moderation yet...

    The bus came by and I got on
    That's when it all began
    There was cowboy Neal
    At the wheel
    Of a bus to never-ever land
  • additionally this sort of thing would be useful for making playlists. just hit the 'fast driving music' or 'i'm feeling depressed' button on the mp3 car stereo and let the AI take over. ;)
  • I think that AI will have a limited use. Basically controlling content that people want to see, i.e. filters. Also I suppose that marketing depts. could further customize ads and offers for surfers.
  • I agree, an ever-changing interface would SUCK.

    But what if your desktop KNEW that you downloaded updates from site ABC and put them in dir /home/me? It could make it easier to put them there, instead of always having to scroll/click around to find the same damn directory over and over again.

    Granted, no huge leaps of AI here, just minor adaptive behavior. Notice I didn't say adaptive(changing) interface :-)

    A good UI (IM[!]HO) should learn how the user works, remember what they did last time, and make it easier to do it every time in the future.

    I agree, it should NOT keep making a user re-learn how to use it, instead, it should be almost transparent.

    Did I change your opinion? :-)

  • It's your kind of elitist "I got there first so anybody who came afterward is a loser" attitude that drives people away from even trying to use Linux and ensures that it will never be a mainstream operating system.
  • we should have artificially intelligent spam-bot blockers to get rid of all this dang spam. they can fight the spam-bots and challenge them to duels and stuff. it can be a new sport.
  • I agree.

    HTML parsing != natural language processing

    Unless you're implying that you should be able to enter a random URL, like http://www.joescomputers.com/, and have it discover the price for a 10GB Seagate hard drive, or some specific item, without anyone having told the software the format with which Joe's Computers displays its prices, or even which page they're on, then you're just talking about searching through HTML, tables, etc., which is most certainly not AI.
  • I'm not sure if this could actually be full-blown AI, but what about some sort of overarching adaptive browsing environment? Something that learns where you go and prefetches/digests information. It might even "learn" that since you like Slashdot, it should parse the Slashdot headlines and provide a ticker for you. How it would know that I have no idea. AI magic. Or perhaps it might learn what type of sites you do not like cookies from...perhaps it does a lookup on doubleclick, fuzzily figures out what doubleclick is about, and subsequently by default denies cookies from other doubleclick-like sites. Same for ads.

    Still that's admittedly AI-weak. How about the adaptive (dare I say even neural-net-ish), FreeNet project? Could you do some work for them in perhaps detecting "cancerous" nodes?
  • But what if your desktop KNEW that you downloaded updates from site ABC and put them in dir /home/me? It could make it easier to put them there, instead of always having to scroll/click around to find the same damn directory over and over again.
    I dunno, maybe I'm a curmudgeon, but I think even that is pushing it. Presumably that would mean that every time I see a "Save As..." dialog, the AI would pick a default directory to display. That would still mean that I'd have no idea what directory I will see when I'm saving any particular file.

    There's only one way I could see something like this working:

    1) If the AI sees that I'm doing something repetitively, it asks me if I want it to do it for me in the future. But...

    2) It must be done in a non-magical way. So, to continue your example, there would have to be a "rule file" somewhere that says "abc.com:/home/me slashdot.org:/home/me/important_stuff etc.," which could be edited using conventional tools. In other words, the AI shouldn't do anything that can't be done manually.
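
    A sketch of that non-magical rule file: a plain text file mapping a download's source host to a default save directory, editable by hand and merely consulted by the UI. The file name and format here are invented.

        import os

        def load_rules(path="~/.save_rules"):
            """Parse lines of the form 'host:directory'; '#' starts a comment."""
            rules = {}
            try:
                with open(os.path.expanduser(path)) as fh:
                    for line in fh:
                        line = line.strip()
                        if line and not line.startswith("#"):
                            host, directory = line.split(":", 1)
                            rules[host] = directory
            except FileNotFoundError:
                pass
            return rules

        def default_save_dir(host, rules, fallback="~/downloads"):
            return rules.get(host, fallback)

        # example rule file contents:
        #   abc.com:/home/me
        #   slashdot.org:/home/me/important_stuff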
  • Altavista has something like that. Use Altavista's image search [altavista.com] to search on some keyword, then click the "similar" link below one of the returned images. Not all images have a "similar" link. Try another search if you don't get any. "Mountain" gives me a lot of those links on the first page, but "Natalie Portman" doesn't. I guess she's one of a kind.

    It looks to me like they're mostly considering the colours used in the images rather than the shapes and you get very fuzzy matches. Still, it is something in the direction of what you were thinking.
    --

  • With this in mind, I suggest reading Neal Stephenson's [well.com] The Diamond Age [scifi.com], since the "primer" in the novel represents probably the ultimate teaching tool - one that takes Nell from first words to computer programming and martial arts without needing adult intervention!
  • Consider the ideas of collaborative reviewing and scoring, along with third-party annotations like what Third Voice [thirdvoice.com] does. Imagine a "moderated" WWW...


    ---
  • Several things.
    1. Intelligent selection of image types & representation, based on available bandwidth & browser capability.
    2. Predictive Caching. Caching based on last-used, or most-used, is only of limited value. You need to be able to predict and haul it in to the cache PRIOR to any requests, if caching is to have a real value for a diverse user group. (Suggestions: Profile web sites according to tracks followed and time of day.)
    3. Local Predictive Caching. Have an Agent-based caching system which downloads the pages you're most likely to visit, during periods of inactivity.
    4. Agent-based Traffic Shaper. If pipe A has S units of spare capacity, there's no point in anything downstream of it sending more than that. It'll just be dropped. This is similar to ECN, except pro-active rather than reactive.
    5. Agent-based Web Pages. Instead of sending down a page, and hoping the browser can interpret it, why not send down an agent which can actually see what resources exist, and ONLY retrieve stuff that's of interest or use.
    6. Agent-based Compression. Compression is the exchange of computer time for bandwidth. But which way round you want the exchange depends on the machine and the network at that moment in time. So, statically making the choice is stupid. Have an agent report back what the net's like, and what the machine's like, and have another agent at the server end encode sounds & images to optimise for the conditions.
    7. Extended Multi-Views. If you combine the last two with the Multi-views concept, you could map ANY type of data against region, capability and delay, making for a vastly more powerful system.
    8. Crawler Agents. Search Engines are all fine and good, but they're pretty pathetic for Real World searches. Too easy to fool, too many accidental spurious hits, and no easy way of finding anything useful. On the other hand, if you had an agent which took the results and actually crawled each of them, looking for dead links, redirects to irrelevant pages, spoofed indexes, etc., you could trim out a good percentage of the dross.
    9. Capability Agents. Similar to above, but instead of excluding sites, it would hunt for what additional software was needed to view the page correctly, find that software, and allow the user to hot-install it. (A vastly superior system to that used by Microsoft and Netscape, in that it wouldn't be browser or site specific.)
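
    Point 6 is easy to caricature in a few lines: pick a compression level from the current bandwidth and CPU headroom, and skip compression entirely when it isn't worth it. The thresholds below are illustrative only.

        import zlib

        def choose_level(bandwidth_kbps, cpu_idle_fraction):
            if bandwidth_kbps > 5000 or cpu_idle_fraction < 0.1:
                return 0   # fast pipe or busy CPU: don't bother compressing
            if bandwidth_kbps < 100:
                return 9   # slow modem link: spend CPU to save every byte
            return 6       # middle ground

        def send(payload, bandwidth_kbps, cpu_idle_fraction):
            level = choose_level(bandwidth_kbps, cpu_idle_fraction)
            data = zlib.compress(payload, level) if level else payload
            return level, len(data)

        print(send(b"hello " * 1000, bandwidth_kbps=56, cpu_idle_fraction=0.8))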
  • how about an intelligent caching server?

    this assumes the user is behind a squid cache.

    a client would scan the user's browsing history (from ~/.netscape and ~/.mozilla directory) and then query a server to see what else a user would probably download next. then download the predicted web pages (via the squid proxy).

    the server would need to keep track of browsing patterns in some fashion (that would be the ai part).

    alternatively you could use the squid cache itself to do the predictions, but then you're dealing with multiple people's browsing patterns: experimentation might come into play here to see how well that works.

    anyway it would be an interesting project in terms of speeding up web access by utilising "downtime" in net connections.
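
    The prediction step could start as simply as a first-order Markov model over page visits, built from the browsing history and used to tell the proxy what to prefetch. A rough sketch, with an invented history:

        from collections import Counter, defaultdict

        history = ["slashdot.org", "kernel.org", "slashdot.org", "freshmeat.net",
                   "slashdot.org", "kernel.org"]

        transitions = defaultdict(Counter)
        for prev, nxt in zip(history, history[1:]):
            transitions[prev][nxt] += 1

        def likely_next(page, k=2):
            """Up to k pages most often visited right after `page`."""
            return [p for p, _ in transitions[page].most_common(k)]

        print(likely_next("slashdot.org"))  # candidates to hand to the prefetcher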
  • Hello,

    ASG is a relatively uncharted field of Artificial Intelligence, with possibly unprecedented value with respect to Rapid Application Development and Software Verification procedures. It has industrial as well as theoretical value.

    Perhaps attending the AAAI conference in July/August in Austin Texas would also be of help in providing some insight to what field would be most suited to your desires.

    Cheers!
    Brian

  • by Cool Hand Luke ( 16056 ) on Tuesday May 16, 2000 @09:00AM (#1068900)

    I happen to work at a startup, Links2Go, Inc. [links2go.com], which approaches the "better search engine" problem from a different direction than most engines. Instead of farming huge numbers of web pages and doing greps on them for relevant text, our server sorts these pages by topic automatically and rates a page's relevance not by the number of keywords on the page, but by the number of times the page is referenced from other pages.

    Users can then search on our servers by topic *or* by the URL of a page. What the user gets back is either a list of the most relevant pages to a specific topic *or* to a specific URL.
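
    In its crudest form, reference-based ranking is just counting inbound links; a toy sketch with an invented link graph follows (what Links2Go actually does is certainly more involved than this).

        from collections import Counter

        links = {
            "a.com": ["c.com", "b.com"],
            "b.com": ["c.com"],
            "d.com": ["c.com", "b.com"],
        }

        inbound = Counter(target for targets in links.values() for target in targets)

        def rank(pages):
            return sorted(pages, key=lambda p: inbound[p], reverse=True)

        print(rank(["b.com", "c.com", "d.com"]))  # c.com first: most referenced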

    George Lee

  • I wouldn't call it canned. I would give some examples but START is kind of slashdotted right now. But START has given me information about distances between two completely arbitrary cities on earth, simple math, and even the answer to the universe. If you are asking for actual *thought* on the part of START, then I am afraid not. I don't think you can ask it for the highest prime number or for it to perform logical deductions, although it would be cool.

    Ahh, there's a good AI project (though not web related). Make an AI program that can find out how to figure out the highest prime number! Easier said than done right? Heh...
  • Most people here want better search engines. Forget that. That's a crowded arena.

    I'd like smart web prefetching and advertisement filtering. Basically, I'd like my browser to figure out which links I'm most likely to follow on a page and start prefetching those links. I'd also like it to block content which I'm not interested in (but still leave a tag so that I can 'correct' it if it's overzealous).

    Essentially, a combination Squid + Junkbuster, only proactive.

    --Joe
    --
  • by embo ( 133713 ) on Tuesday May 16, 2000 @09:11AM (#1068926)
    The web is growing every day with more and more content that is dynamically generated. What we need is something that will at least give search engines a grasp of what's buried under all of that.

    Sure, many sites with dynamic content provide an engine that will allow THEIR dynamic content to be searched, but that doesn't help if you're using a major search engine to find ALL the sites with relevant information, not just relevant information on ONE site. We need a way for the engines that search dynamic content to report back to the big search engines what they have in their databases.

    And then we can deal with all the security and privacy issues that will probably come with it.
  • by Alarmist ( 180744 ) on Tuesday May 16, 2000 @07:53AM (#1068928) Homepage
    Web searching.

    The entire point of the Internet is to relay information. Information must, by definition, be meaningful to its recipient.

    I'm sure you all remember the study done a year or so ago reporting that even the best search engines hit only 16% or so of the sites that are actually on the web. Clearly, there is a need for a good AI agent to look for information relating to a query and present that information to its client. Ideally, the client would be able to ask a question like, "Who was the fourth Pharaoh of the Nineteenth Dynasty?" and receive a weighted list of answers (e.g. 85% of sites consulted say it was Seti I).

    The data is there. What we need is the means to collect it and turn it into information.
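
    The "weighted list of answers" part is, at its simplest, a tally over candidate answers extracted from many pages; the extraction itself is the hard AI problem. A sketch, with invented extracted answers:

        from collections import Counter

        extracted = ["Seti I", "Seti I", "Merneptah", "Seti I", "Ramesses II"]

        counts = Counter(extracted)
        total = sum(counts.values())
        for answer, n in counts.most_common():
            print(f"{100 * n / total:.0f}% of sites consulted say {answer}")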

  • How about this, forget the web, and apply your AI expertise towards building a decent desktop UI!

    In the Windows world (forget about stability, I'm talking USABILITY here), we have windows that pop themselves to the foreground whenever the hell they want, and where you click START to SHUT DOWN or LOG OUT.

    And in the Linux world, we have a new window that gets created but doesn't get FOCUS, and we have the very UNoriginal Windows95 look and feel, but without scroll-wheel mouse support. And cut/copy/paste features that are rarely implemented properly.

    All in all, I have to say, a decent desktop UI with some AI (or even just 'I') features would be just dandy. So forget the web stuff for now, and give us a decent UI !
  • "Alice" is not even an attempt at intelligence. It simply analyses speech patterns without any regard to content or context or previous sequence of conversation and regurgitates replies that were hard-coded in beforehand and designed entirely and simply to "sound realistic" and not at all to mean anything.

    If you are looking for a web-based Artificial Intelligence which actually solves problems and attempts to in some way synthesize the information given to it based on context, I suggest you look at

    http://www.forum2000.org/ [forum2000.org]

    I assure you, you will be impressed.
  • by orpheus ( 14534 ) on Tuesday May 16, 2000 @09:11AM (#1068939)
    >>>The entire point of the Internet is to relay information. Information must, by definition, be meaningful to its recipient.

    Actually, I was going to suggest something that is -- well, not quite the opposite, but certainly very dissimilar to what most people would expect from this premise.

    I have a huge variety of interests, and despite searching the web about a dozen times a day for disparate minutiae, I really don't have too much difficulty finding what I want. Of course, most people find my search strategies incomprehensible.

    What I really want, which would require AI, rather than just clever search design, is this: I develop new interests constantly, but it's a hit or miss prospect. Sometimes I find torrents of them, while at other times, nothing new comes down the pike in weeks. I realize that there is a time dependency too -- often an article that didn't interest me last year fascinates me this year, or vice versa.

    If an AI could point me at stuff I'd like and don't know about (aside from the limited domain of music, books etc.) I'd be very happy. If it could flash a dozen or two words on the screen to indicate the *themes* it's extrapolating from my current interests, I'd be fascinated.

    Often a golf caddy can tell you things about your game that you never knew. And certainly the legend of the butler is of someone who can assist you in myriad ways because he observes things about you that you might not, yourself. [Chesterton]

    But spare me from AskJeeves or some gottverdammt prying market-profiling 'personalized portal'.

    _____________
  • I know there are static websites that do this already (pricewarehouse.com comes to mind), but how about an agent that searches for the best price on a given item? Other options include:

    -automatic sale ending date detection
    -automatic score-lowering for companies the user doesn't like (i.e. give me the lowest price that isn't at WalMart)
    -automatic score-lowering for companies with "bad practices" (i.e. give me the lowest price that isn't from a company with slave labor)

    -couple those last two with automatic parent company tree-walking (lowest price that isn't from company doing bad stuff AND isn't owned by a company doing bad stuff)
    -full generality: I want to price toilet paper AND houses

    Any other features?
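
    The blacklist-plus-parent-walk filter is easy to sketch once the ownership data exists (getting that data is the real problem). Company names, ownership and prices below are all invented.

        offers = [
            {"item": "toilet paper", "price": 1.99, "seller": "MegaMart"},
            {"item": "toilet paper", "price": 2.49, "seller": "Corner Store"},
        ]
        parent = {"MegaMart": "Amalgamated Holdings"}   # seller -> owning company
        blacklist = {"Amalgamated Holdings"}

        def tainted(seller):
            """True if the seller, or any company up its ownership chain, is blacklisted."""
            while seller is not None:
                if seller in blacklist:
                    return True
                seller = parent.get(seller)
            return False

        acceptable = sorted((o for o in offers if not tainted(o["seller"])),
                            key=lambda o: o["price"])
        print(acceptable[0] if acceptable else "no acceptable offers")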
    --
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • by nano-second ( 54714 ) on Tuesday May 16, 2000 @09:12AM (#1068944)
    What about an AI that takes a given pool of data (e.g. current news headlines, a collection of images, etc.) and creates an electronic image *inspired* by these. I don't mean a random mishmash of whatever it was handed, but rather something that identifies some underlying theme in the data and interprets it in an artistic fashion... ok, maybe it's weird or impossible, but I thought it would be cool... probably fairly challenging too.
    ---
  • I have always thought about what it would take to build a program like one of those old doctor programs, but one that would post to a newsgroup or IRC channel. I know this has been done, but not very convincingly. This program would remember who it talked to and about what. It could pick up on key words being talked about and go out on the net to "learn" about them before posting. If something good enough could be developed, it would be fun to watch a few of these have a conversation and see how far they could take it.
  • Whatever happened to Pete Townshend's "give me your bio and we'll make a song about you" AI routine?

  • How about a search engine that, when it comes across a new topic (let's say it crawls to a John Tesh appreciation page), sends a questionnaire to the webmaster that looks something like this:

    1. Is this site informative about John Tesh? Pick one: 1,2,3,4,5,6,7,8,9 (High number is yes, low is no)

    2. Is this about John Tesh's music?

    And so on; it could use an AI routine to come up with appropriate questions based on bios or definitions of the topic. The search engine would be question based: I would type in "Who is John Tesh" and I get the most obvious hits listed, but I also get a series of links asking me things like: John Tesh's Music, Why people don't like John Tesh, Photos of John Tesh, etc.
    All done by AI and webmaster feedback.

    Essentially you'll get an informed series of specialized topics and their hyperlinks for every search. Sure it would spam the hell out of people, but the better it works, the more webmasters will want to fill out the form to get a more accurate listing.

  • Excellent points, but I think this bit is a little misleading:
    The GOFAI idea that human cognition is a matter of disembodied symbol processing is dead, and good riddance.
    For one thing, even if that idea (which is basically the strong symbol system hypothesis) is dead, that doesn't have any impact on the feasibility of GOFAI. Just because humans aren't symbol systems (for the sake of argument), that doesn't mean that symbol systems can't be as "intelligent" as humans.

    As a matter of fact, I think that most of AI's practical successes have been entirely GOFAI. Take Cyc [cyc.com], for example. (Incidentally, "Cycorp" has got to be the coolest name for a company that I've ever heard, especially considering that their business is actually as creepy as their name.)

    Actually, now that I read your post more closely, I don't think we're disagreeing... With today's technology, most useful AI projects are best implemented using GOFAI, or at least a solid GOFAI foundation. It's just a question of politics, whether you consider GOFAI a kludge or a genuine model for AI.
  • The only things I can think of, beyond those mentioned in the article, that would be of any use on the Web involve searches.

    Finding, for instance, data that is more related to a user's OS when searching would be a nice feature. The problem is, for anything that demonstrated an appreciable amount of AI, you would have to go beyond simple searches. My recommendation would be to create an automatic moderation system for a weblog. I'd be curious how an AI would moderate posts involving Natalie Portman and hot grits. :P

    --

  • by cpt kangarooski ( 3773 ) on Tuesday May 16, 2000 @07:55AM (#1068963) Homepage
    Web-controlled nuclear-powered death bots.

    But you'd better invest in a good server b/c the site would see a lot of traffic. Well, until enough people had used the death bots anyway.
  • A precis program which will condense long websites or discussions.

    Quick Algorithm:

    1) Rank all words in a selection by frequency of occurrence.
    2) Throw out all pronouns, connectors, prepositions and other too-frequent words that are not nouns, verbs, adjectives, or adverbs.
    3) You now have the gist of the article, still organized by word frequency.
    4) Go back and find the sentences in the article that contain a large number of high-frequency terms. Print them.
    5) You will find that you have just effectively summarized the article.

    Actually you will find that you have merely listed a bunch of sentences with high-frequency terms. Use your AI skills to determine how to arrange these sentences so that the top ones *do indeed* summarize the article. (Directed graphs? Semantic nets? Internal references?)
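
    A direct, minimal implementation of those steps (with the ordering fix-up reduced to keeping the chosen sentences in their original order); the stopword list is tiny and purely illustrative:

        import re
        from collections import Counter

        STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "it", "is",
                     "that", "this", "for", "on", "with", "as", "by", "be", "are"}

        def summarize(text, n_sentences=3):
            sentences = re.split(r"(?<=[.!?])\s+", text.strip())
            words = re.findall(r"[a-z']+", text.lower())
            freq = Counter(w for w in words if w not in STOPWORDS)      # steps 1-2
            def score(sentence):                                        # step 4
                return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))
            top = sorted(sentences, key=score, reverse=True)[:n_sentences]
            return " ".join(s for s in sentences if s in top)           # step 5, original order

        sample = ("AI on the web is mostly search. Search engines index pages. "
                  "Better search needs better ranking. Cats are nice too.")
        print(summarize(sample, n_sentences=2))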

  • It could be a good idea to make a robot that upon reading a Slashdot article would search the web, using appropriate search engines, to post interesting links related to the story.

    You could use moderation as an evaluation of the quality of the strategies. Keep in mind that early posters have more chance to be moderated (up or down) than late ones.
    __
  • I always thought it would be interesting to train a neural net to be able to detect when someone is attacking your computer. But my knowledge of NNs is too limited to know if this would be possible.
  • by Anonymous Shepherd ( 17338 ) on Tuesday May 16, 2000 @07:57AM (#1068975) Homepage
    Several thoughts of mine...

    You'd have an AI program with a web based interface.

    Or

    You'd have an AI enhanced web interface.

    One of the former:
    A program that digests and characterizes an mp3. Say there is a store of music on a sister server that people can download and listen to, and then score in several ways. Think Cinematch at http://www.netflix.com, where people rank their preferences and get statistically collated with other people who have similar preferences. In this case, though, you correlate the tastes of a person with the music. So you ask the person who listens to rank it on 1 to 5:
    Slow . . . . Fast
    Heavy . . . . Light
    Sad . . . . Happy
    Tense . . . . Relaxed
    Simple. . . . Complex

    Loved . . . . Hated

    Where complex is taken to mean that the song is *both* sad and happy at places, tense and relaxed, etc. So the individual who ranks creates this 6 part characterization of the music, which is fed into some sort of NN and correlated with the music itself, somehow. The end goal would be to feed music into the system and be able to characterize the music correctly *and* decide with good certainty that a person would love a song or not.
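
    A tiny nearest-neighbour stand-in for that NN: treat each song's characterization axes as a point, and predict "loved" from the closest already-rated song. All ratings here are invented.

        rated = {
            # (fast, light, happy, relaxed, complex): loved?
            (4, 2, 3, 2, 4): True,
            (1, 5, 5, 5, 1): False,
            (5, 1, 2, 1, 5): True,
        }

        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        def predict_loved(song):
            nearest = min(rated, key=lambda known: distance(known, song))
            return rated[nearest]

        print(predict_loved((4, 1, 3, 2, 5)))  # True: closest to songs already loved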

    It's a selfish goal of mine because there is too much music out there, and I know what I like, but of course I don't know what I haven't heard. Having a device that filters out 70% of the music I like correctly, with the remaining 30% left for variety and error, would be very interesting.

    Just one idea!

    -AS
  • I don't think we disagree that much either, although I'm not convinced that Cyc has been terribly successful at anything. Structuring environmental knowledge in that way doesn't strike me as a very likely approach to understanding human intelligence, and alternative avenues are available. What Cyc may accomplish is casting some light on what kinds of knowledge humans need to function in the world, although I'm not especially optimistic about that either.

    I see two issues here. The first, I think, is whether a connectionist system can be viewed as a symbol processing system, and I would answer yes, with some caveats. Fodor and others support a very strong form of the symbolic systems hypothesis that I don't think is viable any more. There isn't a single internal logic engine, or any sort of unified "language of thought" in the sense that theorem provers implement predicate calculus. Connected networks do many tasks once thought to require symbolic systems of that type, but don't have any prefabricated symbolic machinery.

    However, it is possible to view the state of a neural network as a kind of logic engine, but one that isn't per se compatible with the logic engines of other problem solving networks. An analysis of neural networks for OCR is quite revealing in how individual hidden layer neurons can search for particular features. Some aspects of this can be summarised using more traditional kinds of symbol systems, but the networks generally prove to be more robust frameworks for application.

    The search for some unified symbol system able to account for different kinds of human behaviours is no longer a very viable research project. Such a project may have value in AI because we can't build massively connected networks that mimic the topology of the brain, and a symbol processing system may well be the best we can do for some kinds of problems at present.

    The second problem is what constitutes artificial intelligence. One of my old profs put it well: once you've solved a problem in AI, it doesn't seem that intelligent anymore.

    There is some research in AI intended to solve problems that humans solve without making any particular reference to how humans do it, nor does it try to shed light on the nature of human intelligence. I'm not very interested in that kind of research, preferring research that sheds light on human intelligence. I have no ideological qualms about doing those kinds of projects. If you can write a program that works, more power to you, but I have to question what we mean by intelligence when we say such a program is intelligent.

    In principle, I suppose a symbol system can be as intelligent as a human, but only if you define intelligence in a way that makes it difficult to say if it is equivalent to what humans do. My Ultra 10 is a hell of a lot better than I am at a wide variety of problems, and it certainly is a symbol processing system, rather than a connectionist one. One could define intelligence in a way that makes my Ultra 10 more intelligent than I am, but I don't think much would be accomplished in that way. We will no doubt expand the number of things computers can do well that right now only humans can do, but I'm not sure we will ever build a machine that we can concede is equal to humans, because we keep defining intelligence in ways that exclude anything that isn't human.

    I've been reading a lot of Piaget and Vygotsky recently, and I think there is a lot of merit to their theories of human intelligence arising through interaction with the environment rather than being something necessarily inherent to brain algorithms. Of course, I reserve the right to change my mind, but I've read a lot of the strong symbolic systems literature, and I don't see anything to make me change my mind so far. If this idea about intelligence is true, it makes it almost impossible to build a computer that we would judge as human equivalent, even if it could do the kind of massive parallel processing brains do, unless we put the computer inside a baby and raised it as a person in our constrained three-dimensional world.

    I do consider GOFAI a kludge, and perhaps not even the best kludge available anymore. The trend is towards biologically motivated models that consider human intelligence to be embedded in the functioning of bodies and their interaction with the environment. They have had some major successes and I think they will continue to for a while.
  • A portal site which interfaces with your web browser. It looks at which sites you visit most often and puts in excerpts and links to sites which would most appeal to you.
  • by vlax ( 1809 )
    That's why I think we'll end up with hybrid systems when we start to see real AI agents on the web.

    With XML we can provide some semantic clues. We can find websites that claim "Drew Barrymore" as a major topic, or celebrity interviews with Drew Barrymore listed as an interviewee. We can check the website content to find sections with some bearing on Drew, and we can even use fairly simple language models to make good guesses at the kind of content that website has. Then we can pass the data to some more connectionist kind of program (this is where the magic happens :^) that can try to figure out if these pages come anywhere near answering the question.

    I think that's a viable, useful approach to these kinds of problems. We can't provide full semantic markup with XML, but we can get part way there. Hopefully, it can be close enough that CPU intensive processes like neural networks can go the rest of the way.
  • by Kaa ( 21510 ) on Tuesday May 16, 2000 @07:58AM (#1069000) Homepage
    Well, it seems that you are not really thinking of web sites. You are thinking about what used to be called "intelligent agents", specifically those which are running remotely and with which you can communicate in HTML. Another word for this would be personalized service with a Web front-end. A search engine is a trivial example of such.

    So think about what cool thing a remotely running program can do for you. Find you stuff on the web? That's passe (do you want to code another shopping agent?). Filter news for you? Academia has done some interesting stuff here; not sure if it went anywhere.

    You might also want to keep security and privacy in mind when designing your agent.

    Kaa
  • The problem with your 'full-featured' pricing agent is that as a general rule, users will be asking it to evaluate criteria that they themselves cannot. Buying is a complex choice with objective and subjective criteria like service, location, etc.

    In other words, I'd prefer a list of "100 best prices in order" or "all prices within 5% of the best", rather than having someone make a list of 'bad companies' for me. I can remember who the companies *I* dislike are.

    I *don't* want my computer constantly bugging me with questions about the dishwasher I bought six months ago, and how I rate that seller's services, etc.; and I don't want it filtering my vision if it can't know that I kinda wish I'd bought it somewhere else. I know these things already; I can scan down a list and cross out the undesirables.

    On the other hand, since I doubt that users are really going to reconsider most of their default settings (Gee, hon, since you always overuse lawn pesticides and contaminate the river anyway, you might as well check the 'evil' companies for a better price on that Weed-Away), this could very likely result in them making suboptimal choices.

    ...Like never even seeing that gorgeous house from a "motivated seller" -- because it's a 'visiting executive' house owned by RJ Reynolds, United Fruit, or Amalgamated Toxins... oops, forgot to turn off the 'bad guy filter'.
    _____________
  • I have always liked the idea of forward searching web browsers, especially for slower connections. It is arguable that as connections get faster, this will be unnecessary; but as connections get faster, html coding just gets crappier, and content gets BIGGER. An intelligent forward searching mechanism would be a good project. Perhaps you could use the mozilla source, and then add this. The idea is to forward retrieve all of the links, sort of like wget, but dynamically. Supposing that you were to do this, the AI could determine which links are meaningful to you based on your browsing style. If you are skipping over porno banners, it obviously wouldn't follow links to porn sites in the search. If you spent a couple seconds looking at a page with many links, it could assume that this is a portal page, and you will be returning to it, and therefore choose to preload all links off of it (using heuristics determined by your prior surfing of course). It could even make suggestions as to which page to visit next, which would be useful if you were using a search engine.
  • Actually, one of the harder AI tasks on the web would be a working pron filtering device (OSS of course)

    Keywords and blacklists are too blunt, wouldn't it be a challenge to make a *useful* filter?

    I don't mind people looking at naked bodies, but I would very much like to be able to do a "sex AND NOT [porn]" search for example.

    (and if it drives the snake oil salesmen known as Cyberpatrol et al. out of business, I wouldn't mind)

  • I agree with your complaint about the unpredictable menus (it also makes shared lab computers a pain in the ass), but I can also see many situations in which they would be useful. This is best addressed by a maxim which should be engraved on the right hand of every right-handed programmer and UI designer:

    The only bad option is one you can't turn off

    - Michael Cohn
  • If you could more accurately predict user behavior in a browser, you could preload links and cache more intelligently. (of course, the former is internet-community hostile, the latter internet-community friendly)

    You could also do this kind of preloading on a larger scale by monitoring the server loads, and dynamically changing the content that is preloaded on web pages to anticipate user clicks.

  • I think a general area you might want to look at is auto-moderation. Currently a site like slashdot works (barely) because lots of volunteers are willing to work over the data and manually vote on it or rank its quality.

    Consider the way that Google can identify valuable (or at least popular) websites without any such clumsy user input. Is there a way you could identify a valuable slashdot posting by looking at user reading patterns? There are a lot of different kinds of data you have to work with: how many people read the thread, how much time people spend before moving on, numbers of responses, clickthroughs on posted links, and so on... perhaps all weighted by karma?
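
    Even before any learning, those signals could be combined into a single score; a trivial sketch follows, with entirely arbitrary weights that a real system would have to fit against existing human moderation.

        def auto_score(readers, seconds_spent, replies, link_clicks, avg_reader_karma):
            return (0.001 * readers
                    + 0.01 * seconds_spent
                    + 0.5 * replies
                    + 0.3 * link_clicks
                    + 0.2 * avg_reader_karma)

        print(round(auto_score(readers=800, seconds_spent=45, replies=6,
                               link_clicks=30, avg_reader_karma=12), 2))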

    You could also try and evaluate a posting based on certain heuristics, though I suspect that would rely a lot on obscurity.... e.g. if people knew that a posting with three URLs was always given credit for being informative, you'd see a lot more suck.com style linking.

    On the other hand, you might be able to do about as well as a lot of slashdot moderators.

  • He's been investigating the possibility of "information agents" to do more intelligent searching of the WWW (for instance, a search tool specifically for finding university staff homepages). They have figured out ways to use the structure of typical web sites to improve the search over keyword matching.

    Check out the Agents Group [mu.oz.au] for this and other projects.

  • I don't really know how much it pertains to actual AI or not, but a detector/tracer type utility for DOS attacks might be a good idea.
    I saw a paper [arstechnica.com] pertaining to that the other day, if I remember correctly.
  • First of all, what kind of intelligence do you really want a web site to have? The most 'intelligent' systems that people have tried to sell as internet innovations are push media. Who wants that? Probably no one who values their privacy.

    On the other hand, I can think of useful AI tools on the web, namely good search engines. Some of the clever ones try to match ideas rather than just simple text pattern matches. Perhaps you could work with that area, but it's nothing new.

    But this still seems really silly. If you can't think of any good new innovative uses of AI to use over the web, then why are you asking us? Why focus on that small area of potential for AI applications for your project? More importantly, what are you going to do with any new ideas that do show up here? Are you going to give credit where it is due, namely to whichever /. reader gives you the thesis idea? And is this really what your profs have in mind when they ask you to come up with a project to prove what YOU can do? Is this going to impress whoever has to judge the value of your project?

    You know what to do with the HELLO.

  • What about spidering-on-demand? You type in your query, come back in a day or two, and get highly detailed, highly relevant results? You might use a standard search engine to get the initial results, then use your AI engine to really drill down and see which ones are relevant and which ones aren't? So, for example, the "Alexandrian Wicca" search wouldn't come up with ancient libraries and cheap lawn furniture, but would instead dig up interesting dissertations on the different branches of modern Wicca.

    Of course, there might be something out there already that I just don't know about. Just a thought to provoke discussion.
  • hmm a link management ai...
    sounds like a good idea. goddess knows I need some help finding some things in my bookmarks...
    You still look for stuff in your bookmarks? I end up just going back to google. It's faster.
  • Start's reply

    ===> when was the sphinx built?

    Which movie in The Internet Movie Database do you mean:

    Sphinx, The (1933)
    Sphinx, The (1916)

    I think this AI needs to study a bit more history.
  • so turn it off... I don't like it much, but I think it is a good idea for computer novices, and it removes one of the troubling interface aspects of later versions of programs like Office: "feature creep" (or bloat) making a good interface WAY too complicated and intimidating.
    ---
  • The AI-Agent could watch the way various posts' scores change over several weeks (or hell, hours with Slashdot), digest the posts' content, and try to draw connections between changes in score and post contents (comparing contents to 1. its parents' text, 2. the over-all thrust of comments for that article [the conversational climate, as it were] and 3. the linked news article [if applicable]) in order to ultimately begin to make intelligent assertions re: what probably will get mod'ed down or up. Ultimately, you might be able to generate a prog to do the job of moderating, leaving just some few meatspace meta-moderators with the job of making sure the AIA doesn't go karma-crazy.

    Or, this could end up being a filter that an individual user could apply to his/her Slashdot viewing, so that moderation reflects his/her tastes, data-wise.
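
    To make this concrete, here is a minimal sketch - not anything Slashdot actually runs - of the learning step: a tiny hand-rolled naive Bayes over word counts, trained on posts labelled "up" or "down". Real moderation depends on far more context (parent posts, thread climate, timing), so the training examples below are placeholders.

    import math
    import re
    from collections import Counter, defaultdict

    def tokens(text):
        return re.findall(r"[a-z']+", text.lower())

    class ModPredictor:
        """Toy naive-Bayes guess at whether a post gets modded up or down."""
        def __init__(self):
            self.word_counts = defaultdict(Counter)   # label -> word -> count
            self.label_counts = Counter()

        def train(self, text, label):
            self.label_counts[label] += 1
            self.word_counts[label].update(tokens(text))

        def predict(self, text):
            vocab = len(set().union(*self.word_counts.values()))
            best, best_score = None, float("-inf")
            for label in self.label_counts:
                total = sum(self.word_counts[label].values())
                # log prior + Laplace-smoothed log likelihood of each word
                score = math.log(self.label_counts[label] / sum(self.label_counts.values()))
                for w in tokens(text):
                    score += math.log((self.word_counts[label][w] + 1) / (total + vocab))
                if score > best_score:
                    best, best_score = label, score
            return best

    # Hypothetical training data scraped from archived, already-moderated threads.
    p = ModPredictor()
    p.train("insightful detailed analysis of the kernel scheduler", "up")
    p.train("first post hot grits natalie portman", "down")
    print(p.predict("detailed analysis of scheduler latency"))   # likely "up"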

  • This is slightly OT, but I thought it was interesting. The current "state of the art" in web-based AI is the "AliceBot [alicebot.org]." It's a conversation bot that won an international competition for AI; it was dubbed the "most human computer." It's pretty interesting, but as you play around with it for a while, you'll get the impression that it really isn't all that advanced. While I realize that AI that generates conversation may be much more difficult than any AI you're planning on coding, I still think it's interesting to look at the current leader in the field. I'm sure trying to make a computer seem "human" and trying to use AI to tailor web experiences are two different things, but I still think there's a cross-over.
  • by damyan ( 44781 ) on Tuesday May 16, 2000 @08:05AM (#1069057) Homepage
    I've just today finished the last of my 3rd year project at Sussex Uni.

    I'm not sure how you'll be assessed, but the thing I found most important was that you are assessed on your report, not necessarily on how well what you do works.

    If you can try and target your research to something that will allow a good write up, then you're on to a winner. For example, someone did an email client that attempts to learn what you do with emails. The thing that made the report good was that he was able to test it on different people and collect data and evaluate it.
  • by BranMan ( 29917 ) on Tuesday May 16, 2000 @08:05AM (#1069063)
    There needs to be a real push into getting intelligence into educational software (for kids). Most of what I've seen is dreck - while some of it is very slick and good-looking, it lacks real educational content. You either do not learn anything, or you learn it once and then repeat it endlessly.

    Here's a challenge for your AI - adaptive educational software. Most software today requires the child to 'log in' so it can keep track of their saved games. Go further. Keep track of what the child does, how successful they are, and tailor the next experience accordingly.

    Give rewards for progress. Reduce the rewards for continued success at the same level (gradually). Prod them into more difficult problems / puzzles / challenges. Eventually remove the lower, introductory levels altogether. Give different rewards.

    Do all this while keeping it fun, and keeping them coming back for more. Pop quizzes to keep them sharp - reward those accordingly. More advanced information (kind of like sidebars) can appear as options when they are ready for it. Almost a tutor / friend relationship.

    Teach the young how to learn - what could be more challenging for an AI project? (A rough sketch of the adaptive loop follows.)
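
    A heavily simplified illustration of that adaptive loop, tracking streaks, levels and rewards per child and skill; the thresholds and reward decay below are invented for the sketch rather than taken from any real product.

    from collections import defaultdict

    class AdaptiveTutor:
        """Toy difficulty/reward adjustment based on recent success per skill."""
        def __init__(self):
            self.level = defaultdict(lambda: 1)      # (child, skill) -> current level
            self.streak = defaultdict(int)           # (child, skill) -> consecutive correct answers
            self.reward = defaultdict(lambda: 10)    # (child, skill) -> points per correct answer

        def record(self, child, skill, correct):
            key = (child, skill)
            if correct:
                self.streak[key] += 1
                self.reward[key] = max(2, self.reward[key] - 1)   # diminishing reward at the same level
                if self.streak[key] >= 3:                         # mastered: raise level, reset reward
                    self.level[key] += 1
                    self.streak[key] = 0
                    self.reward[key] = 10
            else:
                self.streak[key] = 0
                self.level[key] = max(1, self.level[key] - 1)     # step back gently on failure

        def next_challenge(self, child, skill):
            key = (child, skill)
            return {"skill": skill, "level": self.level[key], "reward": self.reward[key]}

    tutor = AdaptiveTutor()
    for answer in [True, True, True, False, True]:
        tutor.record("alice", "fractions", answer)
    print(tutor.next_challenge("alice", "fractions"))
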
  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday May 16, 2000 @08:08AM (#1069074) Homepage
    I realize that neither of these is particularly original, but then neither have they been perfected by anybody; there is still lots of room for improvement, and plenty of scope for adding as much natural-language AI as you like:

    A precis program which will condense long websites or discussions. Jon Katz's articles on Slashdot might make a useful set of tests; also try to precis the comments posted to them :-) Seriously though, it would be useful for busy people who haven't got time to read things from top to bottom. It could have some way of determining which links are relevant, following them, and adding their information to the precis generated (with attributions if necessary). Or you could use it to cut out types of information you don't want - to filter out marketingspeak, if you're a techie, or to filter out techspeak, if you're not. (A rough frequency-based sketch appears after this comment.)

    The other suggestion is a remembrance agent which looks at the website you're reading and suggests links (from your browser history, from search engines or from some big collaborative database) which might be relevant. This might finally be a use for those sidebars that recent browsers seem to have sprouted. Again, this has been done before, but it's not something which has been done perfectly. You might also be able to use it as a fact-checker for postings you make to Usenet - although that would be rather difficult to implement, I imagine.
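
    The precis idea above can be sketched very crudely as frequency-based sentence extraction: score each sentence by how often its content words occur in the whole text, then keep the top few in their original order. The stopword list and scoring are placeholders; a serious precis program would need real natural-language machinery.

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "for"}

    def content_words(text):
        return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

    def precis(text, keep=3):
        """Return the `keep` highest-scoring sentences, in their original order."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(content_words(text))
        def score(sentence):
            toks = content_words(sentence)
            return sum(freq[w] for w in toks) / (len(toks) or 1)
        top = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)[:keep]
        return " ".join(sentences[i] for i in sorted(top))

    print(precis("Long article text goes here. Some sentences matter more than others. "
                 "Others are pure filler and repeat the article text."))
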
  • What's happened to you Sam? Records is a dead-end department. Information Retrieval is where it is at. And just look at this suit you are wearing, you'll never get anywhere in a suit like that. I'm perfectly happy where I'm at Jack. Don't you have any dreams Sam? Dreams? No, I don't have any dreams. I don't want anything Jack. SaaaaaAAAaaammmmmmm.
  • by extrasolar ( 28341 ) on Tuesday May 16, 2000 @08:10AM (#1069078) Homepage Journal
    Oh. You mean START [mit.edu]? Ask it a serious question and it will often give you the right answer. It can handle a few non-serious questions as well ;).
  • I'd like to have an intelligent agent that would post Interesting=1, Insightful=2, Funny=1 messages to my /. account.

    --

  • Mix

    - natural language parsing
    - web crawler / discussion group logger
    - intelligence

    Get

    - persons (id by nick/links/style)
    - topics a person discusses
    - depth and linkedness of topics

    Provide

    - lists of specialists on a wide range of topics

    Probably nothing too new, but the reqs/specs should be adaptable into something useful and implementable (rough sketch below).
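
    A minimal sketch of the "Provide" step, assuming the crawler and the natural language parsing have already reduced each post to an author plus a set of topic keywords (that reduction is the genuinely hard part and is skipped here).

    from collections import defaultdict, Counter

    # Hypothetical crawled posts: (author, set of topic keywords)
    posts = [
        ("alice", {"opengl", "graphics"}),
        ("alice", {"opengl", "shaders"}),
        ("bob", {"kernel", "scheduler"}),
    ]

    topic_authors = defaultdict(Counter)   # topic -> author -> how often they discuss it
    for author, topics in posts:
        for topic in topics:
            topic_authors[topic][author] += 1

    def specialists(topic, n=5):
        """Authors who discuss `topic` most often - a crude proxy for expertise."""
        return [author for author, _ in topic_authors[topic].most_common(n)]

    print(specialists("opengl"))   # ['alice']
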
  • Jeez, lighten up. Can't he be interested in what other people might think of?
  • Not "doctor program": its name is E liza [cmu.edu]. (AI Attic [utexas.edu] version also)

    If you're going to have such a program for people to chat with, that is called a Cha tterbot [dmoz.org]. It's been done [chatter-bots.com].

    There are an assortment of Vi rtual Robots [about.com] for different web tasks. Personally, I think the searching/indexing problem is still lacking a solution -- although librarians have been working on it for decades.

  • One of the things that comes to my mind when pondering AI and the web is smarter search engines: use the power of classification systems (say, self-organising maps like WEBSOM [websom.hut.fi]) in order to get something like usable semantic nets. For example, I'd love to see a search engine that, when searching for "Serpent" or "Blowfish", would ask me "Are you looking for an animal or an encryption algorithm?" Also, this would make it possible for a search system to produce hits that don't use the literal search term(s), but only synonyms.
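
    A toy version of the "Serpent: animal or cipher?" prompt above: cluster the snippets of the top hits for an ambiguous query by word overlap, then show each cluster's most common terms as the choices. The snippets, threshold and greedy clustering below are made up for illustration; WEBSOM-style self-organising maps would do this far more cleverly.

    import re
    from collections import Counter

    def words(text):
        return set(re.findall(r"[a-z]+", text.lower()))

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def cluster_snippets(snippets, threshold=0.2):
        """Greedy single-pass clustering of result snippets by word overlap."""
        clusters = []   # each cluster is a list of word sets
        for s in map(words, snippets):
            for c in clusters:
                if jaccard(s, c[0]) >= threshold:
                    c.append(s)
                    break
            else:
                clusters.append([s])
        return clusters

    snippets = [
        "blowfish is a symmetric block cipher designed by bruce schneier",
        "the blowfish cipher uses a variable length key",
        "the blowfish is a poisonous fish also known as fugu",
    ]
    for c in cluster_snippets(snippets):
        common = Counter(w for s in c for w in s)
        print("Did you mean:", [w for w, _ in common.most_common(3)])
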
  • Some friends and I were talking about a different kind of search engine. One that can be given an image or sound clip as input and which searches the web for similar matches. So, for example if you give it a photo of a person, it gets you all photos in which that person appears. Similarly for a sound clip. I think this would be an interesting AI project.

    Sam

    ___________
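
    General "find every photo this person appears in" matching is, as noted elsewhere in the thread, a hard research problem. A crude near-duplicate-image version of the idea above can be sketched with a perceptual average hash, though; this assumes the Pillow imaging library, and the distance threshold is arbitrary.

    from PIL import Image   # Pillow

    def average_hash(path, size=8):
        """64-bit perceptual hash: shrink, grayscale, threshold at the mean brightness."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return [1 if p > mean else 0 for p in pixels]

    def distance(h1, h2):
        """Hamming distance; small values suggest visually similar images."""
        return sum(a != b for a, b in zip(h1, h2))

    # Hypothetical usage: flag candidate matches for a human (or a smarter model) to confirm.
    # if distance(average_hash("query.jpg"), average_hash("candidate.jpg")) < 10:
    #     print("probably the same or a very similar image")
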
  • Hmm... This sounds familiar. ;-) If you're interested in seeing a kind-of implementation of this, check out Orson Scott Card's description of the Fantasy Game in Ender's Game. It's kinda fragmentary and self-contradictory in places, but the book was written over a decade ago.


    -RickHunter
  • by StandingBear ( 99367 ) on Tuesday May 16, 2000 @08:25AM (#1069105)
    Really?

    How about a desktop that learns as you use it, and can predict where you're about to store the file you're working on?

    Or one that watches the way you organize your stuff, and the features you use to do it, and doesn't stand in your way when you try but actually HELPS you do it?!

    Sure, fixing the OBVIOUS stupidities (all 400 zillion of them) that Windows and GNOME/KDE have would be a nice starting point, but why stop dreaming there?

    Wouldn't it be nice if your desktop & apps could actually work together and know at least -something- about where you're keeping your files for a given project? Or what projects and what files are related? Or what features you tend to use and how you use them?

    C'mon segmond, use your imagination for crissakes, bringing some AI to the desktop would certainly be useful.

  • This is a really shitty idea. We definitely don't need a computer moderating for us. If we need more moderation, then assign more people as moderators. A computer won't do better than it's programmed, and it's programmed by one person - which means our filtering is the result of a single set of values rather than a distributed set.

    Plus, we don't need to help those who want to write filtering software for other purposes.

    I'd rather he worked on a slashdot posting agent! Something that combed the web for potentially interesting Slashdot stories for us to read and comment on.
  • call me a karma whore if you will, but consider this first . . .

    artificial == not real
    intelligence == smart
    therefore, artificial intelligence == not real smart

    I remember AI being the big buzzword about seven or eight years ago, but it was set aside after failing to deliver computers that think and whatnot. Intelligent agents have taken the place of AI since then. It would be really cool to see universities supporting an intelligent agent network that would allow users to submit an agent to perform some specialized task.

    The idea is for the agent to travel from machine to machine using unique data sets (usually massive data sets that would be unwieldy to move around) to perform calculations or gather statistics. Or the agent could replicate itself as it travels and perform a massively parallel calculation.

    Yes, yes, yes, but what about viruses and malevolent users etc. etc. etc. The network admins and managers would have to restrict usage to those who could be trusted. The code would also have a trust level associated with it as well (along the lines of Java and the JVM).
    ---

  • I probably know too little about AI to be suggesting anything here, but disclaimers aside:

    I've always found it disappointing that systems which try to profile customer preferences aren't smart enough to understand that people can like the same thing for completely different reasons. A smarter system should be able to model the motivations and intentions of consumers to better match them to products and services. It would need to be able to store partial information, which may not make sense initially, but which could provide meaning after sufficient accumulation.

    I think consumers would be very willing to answer questions like, "Why did you buy this product?" or, "Click the attributes you like/dislike about this product." People who browse the internet are often actually looking for in-depth product information. In fact, the ideal way to collect this information would be to interact intelligently with the user when they are using a search engine, trying to find a specific piece of information. It would be great if AI software could help them find what they are looking for and be able to suggest truly similar alternatives. (A toy sketch follows this comment.)

    This may seem nefarious, but I don't think advertising would be an intrusion if it were driven by true interests of individuals rather than the sales goals of marketing execs.

    "What I cannot create, I do not understand."

  • Oh, yeah. I asked it "What is Alexandrian Wicca?" and got a page that told me it did not know the word "Alexandrian"; after telling it to accept the word, it told me it did not know the word "Wicca" and quit. A further question resulted in a response of "I don't know the answer to your question" (which was "Who was Gerald Gardner?"). I chose these questions because they're kinda obscure - but I did not expect a completely null response. Next...
