Simple-to-setup Expert System?
Mark Hood writes: "I've been asked to provide a simple trouble-shooting guide for new engineers to follow when presented with an unfamiliar fault or bug report. Mainly this can be done with static web pages and a series of 'yes/no' questions... (Is it plugged in? Is it switched on? etc) but a nice facility would be to allow engineers to update it with what they did to fix faults / gather information. The question is: does anything simple like this already exist? Web searches turn up loads of 'Expert System Shells' or 'Programming Systems', but I was hoping for something that wouldn't require months of dedicated time (this is not my job, I'm just helping out :). Anyone done this at their workplace? Any hints? Or should I knuckle down and write a few CGI scripts for adding data to a web page?"
Simple? (Score:4, Informative)
--------
* [After blah happens, does this also happen?]
* [After blah happens, does this not happen?]
Insert a description of what may be going on if neither of the linked pages applies.
---------
and have the bracketed links point to other wiki pages that ask further questions in the same format.
It's simple, braindead, requires no custom coding, and is laughably easy for any engineer to update. It's not the most elegant solution, but in a pinch, it'll work. In my current and previous workplaces, the IT departments (at least the UNIX folks) have used wikis for documenting just about everything.
Here ya go: (Score:3, Funny)
FAQs and Searchable Mailing Lists (Score:1)
I do not know of any "expert systems" that would qualify, but I think it might be fairly effective to go with mailing lists, archived with, for example, hypermail [hypermail.org], searchable with, say, SWISH++ [mac.com], and to have FAQs that can be updated by the users using, for example, Faq-O-Matic [sourceforge.net].
Re:FAQs and Searchable Mailing Lists (Score:3)
If there's one thing I hate, it's Faq-o-Matic. I have never been able to get decent information out of such a mega-hyperlinked, irritatingly-coloured monstrosity as Faq-o-Matic. That includes OpenLDAP's [openldap.org] FAQ-o-Matic, Amanda's [amanda.org] FAQ-o-Matic, Lynx's [isc.org] FAQ-o-Matic and FAQ-o-Matic's [sourceforge.net] own FAQ-o-Matic. Clicking a hundred links to get to a single paragraph that almost, but not quite, entirely fails to answer the question is more annoying than not having an entry at all. And why does every FAQ-o-Matic seem to be hell-bent on experimenting in shades of puke for the colour scheme? Lynx's FOM doesn't follow this trend, but damn near every single FOM on the planet is butt-ugly in addition to being terrible to navigate.
Provide FAQs in plain, easy to read HTML or text. Screw FAQ-o-Matic.
Old game "animal" (Score:4, Interesting)
Here is a log of some output:
I think I'll try a guess now...
Is your animal a Elephant? no
I give up! You win! What was your animal? Dog
I need a yes-or-no question so I can later
tell the difference between a Elephant and a Dog.
Does it have a trunk?
what would be the answer be for a Dog? no
what would be the answer be for a Elephant? yes
I now know 3 animals!
Want to play again? yes
Think of an animal.
Press [Enter] or [Return] to continue...
Does it have a long neck? no
Does it have a trunk? no
I think I'll try a guess now...
Is your animal a Dog? no
I give up! You win! What was your animal? Monkey
I need a yes-or-no question so I can later
tell the difference between a Dog and a Monkey.
Is it a biped?
what would be the answer be for a Monkey? yes
what would be the answer be for a Dog? no
I now know 4 animals!
Want to play again?
$ apt-cache show animals [debian.org]
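For anyone who wants to adapt the idea, the "animal" game is just a binary tree of yes/no questions that grows a new node every time it guesses wrong — exactly the "engineers update it with what they fixed" behaviour the original question asked for. A minimal sketch in Python (my own names; the Debian package's internals will differ):

```python
class Node:
    """Either a yes/no question (internal node) or an answer (leaf)."""
    def __init__(self, text, yes=None, no=None):
        self.text = text
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None


class AnimalGame:
    def __init__(self):
        # Start out knowing a single animal.
        self.root = Node("Elephant")

    def guess(self, answer_fn):
        """Walk the tree, calling answer_fn(question) -> bool for each
        question, and return the leaf (the final guess) reached."""
        node = self.root
        while not node.is_leaf():
            node = node.yes if answer_fn(node.text) else node.no
        return node

    def learn(self, leaf, actual, question, answer_for_actual):
        """After a wrong guess, split the leaf into a new question that
        distinguishes the guessed answer from the actual one."""
        guessed = leaf.text
        leaf.text = question
        if answer_for_actual:
            leaf.yes, leaf.no = Node(actual), Node(guessed)
        else:
            leaf.yes, leaf.no = Node(guessed), Node(actual)


# Re-playing the transcript above: guess Elephant, lose to Dog, learn.
game = AnimalGame()
leaf = game.guess(lambda q: False)           # no questions yet -> "Elephant"
game.learn(leaf, "Dog", "Does it have a trunk?", answer_for_actual=False)
```

For a troubleshooting guide, the leaves become fixes instead of animals and the learning step is the engineer adding the distinguishing question after resolving a new fault.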
Re:Old game "animal" (Score:1)
Re:Old game "animal" (Score:1, Funny)
I think I'll try a guess now...
Is your animal a Elephant?
> no
I give up! You win! What was your animal?
> Oxygenation of feedstock upstream of reactor
I need a yes-or-no question so I can later tell the difference between a Elephant and a Oxygenation of feedstock upstream of reactor.
> Is reactor temperature above spec?
What would be the answer be for a Oxygenation of feedstock upstream of reactor?
> yes
What would be the answer be for a Elephant?
> no
I now know 3 animals!
Want to play again?
CLIPS (Score:3, Informative)
http://www.ghg.net/clips/CLIPS.html [ghg.net]
Re:CLIPS (Score:2, Interesting)
CLIPS looks like a wonderful system, but as I stated in the original submission, I don't want to program an expert system, I just want to use it.
Something like the 'animals' program suggested below is probably ideal... if a little simplistic.
Re:CLIPS (Score:3, Informative)
If your problem is simple enough, maybe an expert system is overkill and you can program it with some if statements in a CGI script.
Specifying the rules in an expert system is not that hard at all and certainly won't take you several months. Making the expert system interact with a bunch of CGI scripts shouldn't be that hard either.
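The "if statements in a CGI script" approach really can be that small if you keep the fault tree as data rather than control flow, so engineers edit a dictionary instead of code. A hypothetical sketch (the questions and page names are made up, not from any real system):

```python
# Each node is either a question with 'yes'/'no' branches naming other
# node ids, or a leaf with advice. Engineers only ever edit this dict.
TREE = {
    "start":     {"ask": "Is it plugged in?", "yes": "power", "no": "plug_in"},
    "power":     {"ask": "Is it switched on?", "yes": "escalate", "no": "switch_on"},
    "plug_in":   {"advice": "Plug it in and retry."},
    "switch_on": {"advice": "Switch it on and retry."},
    "escalate":  {"advice": "Gather logs and escalate to second line."},
}

def diagnose(answers, tree=TREE):
    """Walk the tree using answers, a dict mapping question -> bool,
    and return the advice string at the leaf reached."""
    node = tree["start"]
    while "ask" in node:
        branch = "yes" if answers[node["ask"]] else "no"
        node = tree[node[branch]]
    return node["advice"]
```

A CGI wrapper would just render the current node's question as a yes/no form and pass the node id through; the walk itself is those few lines.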
A system you might want to look at is JESS (Java Expert System Shell) which is based on CLIPS:
http://herzberg.ca.sandia.gov/jess/
You could use it in a servlet or as an applet.
Re:CLIPS (Score:3, Interesting)
Now, there might be a program in the CLIPS language out there that already does this, but OTTOMH, I don't know of one.
Not too helpful but... (Score:3, Informative)
What you are trying to do here is supply incomplete information about a problem in order to classify it, so you can suggest a fix. There is an algorithm (the ID3 algorithm, see e.g. http://www.cise.ufl.edu/~ddd/cap6635/Fall-97/Shor ) that builds a decision tree of classifying questions from example data.
In order for ID3 to work you need to supply it with examples where the data was classified correctly. This translates to faults where the problem was identified and resolved; you can learn from previous examples simply by treating every resolved problem as part of your training set (and rebuild your decision tree).
You will also need to decide what data you are going to use for classification. This is simply done: when you fail to resolve a problem, ask the engineer what question should have been asked to identify the problem; something which takes a finite number of responses. Treat every question as having N 'known' responses (things that engineers have answered) and 1 'other' answer (this allows you to classify problems when data is incomplete, or known but not used in existing classified problems).
This approach would probably work; I've thought about using it before when I ran a helpdesk. But it is fairly restrictive (questions have to be completely independent, for example, so that ID3 can reorder the decision tree), and the questions engineers supply might not help it learn much (e.g. if the problem turned out to be the flange widget being loose, then the engineer might suggest 'is the flange widget loose?' as a question, instead of the better question: 'Q: is there a loose widget? A: Yes, the flange widget.')
When I thought about writing this, I thought it would also be useful to allow the engineer to add text suggesting how to determine the answer to the question ('examine all 6 washers under the grommit hinge') and an explanation of why the question is being asked, to teach the engineers ('loose or worn washers can rattle about; the grommit hinge washers have been prone to this in the past').
Anyway, I hope this helped a little in showing how you could create a trainable system. As other posters have commented, CLIPS and the like require the rules to be written in the first place. ID3 is actually simple enough that you could code it yourself without much trouble, and downloadable implementations exist on the net. There are incremental variants (search for ID5) which avoid rebuilding the whole decision tree when a new solved problem is added.
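ID3 really is short enough to write yourself. A bare-bones sketch in Python, using made-up fault data for illustration (real implementations add information-gain tie-breaking, the 'other' branch for unseen answers, and pruning, all omitted here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_attribute(examples, attributes):
    """Pick the attribute whose split yields the largest information gain."""
    base = entropy([label for _, label in examples])
    def gain(attr):
        g = base
        for v in set(feats[attr] for feats, _ in examples):
            subset = [label for feats, label in examples if feats[attr] == v]
            g -= (len(subset) / len(examples)) * entropy(subset)
        return g
    return max(attributes, key=gain)

def id3(examples, attributes):
    """examples: list of (features_dict, label) pairs. Returns a nested
    dict tree, or a bare label string at the leaves."""
    labels = [label for _, label in examples]
    if len(set(labels)) == 1:          # pure node: nothing left to ask
        return labels[0]
    if not attributes:                 # out of questions: majority vote
        return Counter(labels).most_common(1)[0][0]
    attr = best_attribute(examples, attributes)
    remaining = [a for a in attributes if a != attr]
    tree = {"attr": attr, "branches": {}}
    for v in set(feats[attr] for feats, _ in examples):
        subset = [(f, l) for f, l in examples if f[attr] == v]
        tree["branches"][v] = id3(subset, remaining)
    return tree

def classify(tree, feats):
    while isinstance(tree, dict):
        tree = tree["branches"][feats[tree["attr"]]]
    return tree

# Toy training set: each resolved fault becomes one example.
EXAMPLES = [
    ({"power_led": "off", "fan": "off"}, "check power cord"),
    ({"power_led": "on",  "fan": "off"}, "replace fan"),
    ({"power_led": "on",  "fan": "on"},  "escalate"),
]
```

Retraining after every resolved fault is just appending to EXAMPLES and calling id3 again; the incremental ID5 variants mentioned above avoid that full rebuild.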
For more sophisticated systems that actually try to do this right, take a look at the expert systems faq: http://www-2.cs.cmu.edu/Groups/AI/html/faqs/ai/ex
Redhat? (Score:3, Funny)
:)
"Expert" system (Score:3, Informative)
Maybe I'm missing something, but... (Score:2, Informative)
I know of support systems that utilize expert system shells implemented with either iLog or some other Java expert system shell. They typically require about 3-5 months with 3 knowledge engineers to build the knowledge base. Writing efficient rules is an art, which many people do not realize. I've known and heard of people writing rules as if it were procedural programming, ending up with rules a dozen pages long or more.
My advice is don't go down that route unless it is an authorized project with the funds allocated for it. Working with expert system shells takes a lot of skill and experience to do effectively. How the rules are written can greatly impact the performance of the engine and application. CLIPS and JESS both implement the RETE algo, which is a forward chaining algo. For support purposes RETE is fine, since you're dealing with simple questions and answers. You may want to check out iLog, since they have done that type of thing in the past.
Case Based Reasoning (Score:4, Informative)
In general, I think you are going to find that even if you pick a general rule based package such as CLIPS or iLog Rules or go with some that has CBR capabilities such as A*E the project is going to be a lot more difficult than you think.
Diagnostic expert systems are not simple, and having a good tool is only the beginning. If you want a usable, effective system, plan to have 2-3 experts work at least a year to develop it.
Disclaimer: I develop expert systems for a living, and have extensive experience developing diagnostic systems. Most clients I've worked with have no idea how difficult a problem this is. You really have to restrict your problem domain or the task is impossible.
Re:Case Based Reasoning (Score:1)
To build a useful system like the one described in the original post, you're really talking about a couple separate domains:
Re:Case Based Reasoning (Score:2, Interesting)
If the one asking the question would set up a wiki-web and slowly add Q&As, then what would be non-trivial about this? The cataloguing involved?
Bart
Wiki. (Score:2, Interesting)
WebLS (Score:2, Informative)
LotusNotes (Score:1)
Expert Systems, CBR, etc. (Score:1)
I will also agree that as small as the problem space is, an expert system may not be the best fit.
As for CBR, it is a different beast in terms of how you build your solution. You essentially use a set of known cases, or solutions instead of "rules of thumb". And that might be a better fit for this task.
For the record, ArtEnterprise does have CBR capabilities. It's now sold by MindBox [mindbox.com].
Also, if you want to experiment with a free expert system, you might want to try Jess [sandia.gov]. Other folks have already mentioned it, but not provided a link. It's basically CLIPS rebuilt in Java, but it lets you use real Java objects instead of having to use the CLIPS COOL model.
Expert Systems (Score:1)
One is plain old code. Very hard to maintain.
Two is a forward chaining production system. These systems move forward from facts to a set of conclusions. Examples of this include CLIPS and iLOG. CLIPS is public domain source from NASA. Jess is a Java version of CLIPS. Commercial versions include ART, Haley, and a host of others. If you have many rules, you will want a RETE-based system, since that scales O(log n) with the number of rules; there are more advanced algorithms than RETE as well.
Three is a backward chaining production system. These systems move backwards from a goal statement to a compatible set of facts. The most common example is Prolog. Amzi has an inexpensive Prolog. Prolog is also particularly good at writing parsers.
Four are a set of case based reasoning system (CBR). These take cases and learn inferences over them. Particularly useful for help and diagnostic systems. Inference Corp, Brightware and others have such systems, usually in hybrid engines that also do rules and objects.
Five are a set of formal statistical reasoning systems, for example based on Bayes Theorem, such as influence diagrams. These can also be generalizations of Markov models, commonly used in speech recognition.
Sixth are "informal" statistical reasoning systems, such as artificial neural networks (ANN) and fuzzy logics. Although they have precise mathematical models, the learning algorithms are usually not precisely statistically matched to the training sets. You can find several ANN and fuzzy logic algorithms in various recipe books or in the FAQs of the comp.* newsgroups.
I have seriously butchered expert systems in this oversimplified note, but this gives you a sense of the space. I echo other posters in recommending that you obtain a general purpose expert system tool, learn it, and try multiple approaches. Most production systems really show that 1) knowledge of the expert domain is most important, and 2) using what works is more important than picking a single "pure" approach.
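To make the forward chaining idea from option two concrete: at its simplest it is a fixed-point loop that fires any rule whose conditions are all satisfied, until no new facts appear. A hypothetical toy in Python with made-up diagnostic facts (a real RETE engine exists precisely to avoid this naive rescanning of every rule on every pass):

```python
def forward_chain(facts, rules):
    """facts: set of fact strings. rules: list of (conditions, conclusion)
    pairs. Repeatedly fire satisfied rules until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # assert the new fact
                changed = True
    return facts

# Toy rule base: two chained diagnostic rules.
RULES = [
    (["power led off", "fan silent"], "no power reaching unit"),
    (["no power reaching unit"], "check fuse and power cord"),
]
```

Note the second rule fires only because the first one's conclusion becomes a fact, which is the "facts forward to conclusions" behaviour described above.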
My thoughts (Score:1)
I run a helpdesk for a small-to-medium-sized ISP, and recently we wanted to start a web-based online knowledge base. Even though I am not overly ecstatic with the result, it seems to serve its purpose. I will take you through the steps that we went through:
1) Installed a CGI-based Wiki; sorry, I can't remember the name of this one. It went soooo slow. True, the server was only a P200, but the front page would sometimes take over 5 seconds to load.
2) Wrote a tree-structured knowledge base from scratch using PHP. This, in my mind, was the best approach, despite the time spent writing the knowledge base code. Unfortunately the person developing it had other ideas, and it ended up getting scrapped. The one thing I loved about it was its tree structure: it was very easy to see what steps you had been down and what possible paths there were to take.
3) Finally we installed PHPWiki and were fairly impressed. It ran quickly, and later on, when it got moved onto a dual-processor system, I noticed no speed increase, so I guess it was already running as fast as it could on the P200. It is easy to maintain, though the one thing I miss is a nice tree-like structure; instead, pages are just linked to each other.
Hope this is of some use.