Best Way to Build a Searchable Document Index?
Blinocac writes "I am organizing the IT documentation for the agency I work for, and we would like to make a searchable document index that would render results based on meta tags placed in the documents, which include everything from Word files and HTML to Excel, Access, and PDFs." What methods or tools have others seen that work? Anything to avoid?
Lucene (Score:5, Informative)
Google (Score:3, Informative)
If you value your privacy, though, invest in a Google Mini.
Swish-E (Score:3, Informative)
Re:Lucene (Score:5, Informative)
Re:Lucene (Score:4, Informative)
Re:Google (Score:5, Informative)
We even pointed it at the web CVS server and Bugzilla, and it was great at searching those too.
To see all the bugs still open against v 2.2.1 or something like that, Bugzilla's own search was better, but for searching for "bugs about X" the Google Mini was great.
It only cost something like $3k, IIRC.
Not exactly what you asked about, but you should definitely see if this wouldn't work for you instead.
Easiest solution (Score:4, Informative)
You're not even really required to use added tags... (as most people will put in poor tags).
But if you like, you can add tags even with SharePoint.
Re:Lucene (Score:5, Informative)
I've got a prototype of the system described in the OP that we did while quoting a fairly large project. It's really easy to have an 'after upload' action that'll push the document through strings (or some other third-party app that can do the same for a given document type) and throw the output into a field that gets indexed as well. That pretty much handles everything you may need.
Obviously I'd also allow someone to specify keywords when uploading a document, but if this engine's going to just be thrown against an existing cache of documents, strings-only's the way to go.
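For what it's worth, here's a minimal sketch of that 'after upload' extraction step in Java, assuming the Unix strings utility is on the PATH (the class and method names are just placeholders):

    import java.io.*;

    public class StringsExtractor {
        // Run the Unix 'strings' utility over an uploaded file and return
        // whatever printable text it finds, ready to be dropped into an
        // indexed field.
        public static String extractText(File uploaded) throws IOException {
            Process p = new ProcessBuilder("strings", uploaded.getAbsolutePath())
                    .redirectErrorStream(true)
                    .start();
            BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            StringBuilder text = new StringBuilder();
            String line;
            while ((line = r.readLine()) != null) {
                text.append(line).append('\n');
            }
            return text.toString(); // reading to EOF also lets the process exit
        }
    }

Swap strings for antiword, pdftotext, or whatever extractor suits the document type.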
Also see Xapian (Score:5, Informative)
http://en.wikipedia.org/wiki/Full_text_search [wikipedia.org]
If you're not afraid to do a little reading and potentially code a custom front end, you may want to look at two of the big open source engines: Lucene and Xapian.
Lucene is quite popular now, and is an Apache Java project. It's a good choice if you're a Java shop.
Xapian seems to be based on somewhat more solid and modern information retrieval theory, and is incredibly scalable and fast. It's written in C++, with SWIG-based front ends for many languages. It might not have as polished a front end or as fancy a website as Lucene, but I believe it's a better choice if you have really, really huge data sets or want to venture outside the Java universe.
There are also many other wholly self-contained indexers, most of which are geared toward web indexing (they have spiders, query forms, etc., all bundled together): ht://Dig, mnogosearch, and so forth. They are good, especially if you want more of a drop-in solution rather than a raw indexing engine, and if you're indexing web sites (and not complex entities like databases, etc.).
Re:Google Desktop or Appliance (Score:3, Informative)
I set up a Google Mini for indexing an internal wiki, our bug tracking system, and some other systems, and it is very straightforward.
I know the original question mentioned meta-data, but you have to ask yourself if the meta-data is going to be maintained well enough that the search index will be valid. Going the Google Appliance route is so much simpler. It takes a bit of tweaking to set up the search restrictions, but once up and running, it works flawlessly. Most importantly, it doesn't require everyone to make sure that all their document meta-data is perfect.
Google appliance pricing is really quite cheap when you compare it to the time cost of setting up a meta-data driven system.
Meta-data is one of those things that seems like a really good idea, but like all plans, doesn't tend to survive contact with the enemy, which in this case is the user.
Paul
Re:Lucene (Score:3, Informative)
DocSearcher - http://docsearcher.henschelsoft.de/ [henschelsoft.de] - already does it. A friend with the US Coast Guard wrote it 4+ years ago, I deployed it within the Department of Justice for a few projects, and it's pretty widely used among some of the local tech circles. It even plugs into Tomcat if you want a web-based UI.
Re:Easy (Score:3, Informative)
I've deployed it across a handful of servers, and it does a good job of crawling, but it doesn't handle JavaScript well. If your web frontend relies on JavaScript, you can write a shell script that runs find . -print, prepends your server's URL to each path, writes the resulting list of URLs to a file, and points htdig at that file. It will dig into each file it finds and create a searchable database of everything it finds.
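If shell isn't your thing, here's a rough Java equivalent of that find-and-prepend step; the base URL and docroot are placeholders you'd swap for your own, and the output file is whatever you point htdig's start_url at:

    import java.io.*;

    public class UrlList {
        // Placeholders: your server's base URL and the docroot to walk.
        static final String BASE = "http://intranet.example.com/";
        static final File ROOT = new File("/var/www/htdocs");

        public static void main(String[] args) throws IOException {
            PrintWriter out = new PrintWriter(new FileWriter("urls.txt"));
            walk(ROOT, out);
            out.close();
        }

        // Recursively list files, turning each filesystem path into a URL.
        static void walk(File dir, PrintWriter out) {
            File[] entries = dir.listFiles();
            if (entries == null) return;
            for (File f : entries) {
                if (f.isDirectory()) {
                    walk(f, out);
                } else {
                    String rel = f.getPath().substring(ROOT.getPath().length() + 1);
                    out.println(BASE + rel.replace(File.separatorChar, '/'));
                }
            }
        }
    }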
- Avron
I do this in several programming languages (Score:3, Informative)
A good tool for getting plain text out of various versions of Word documents is the "antiword" command line utility.
The Apache POI project (Java) can read and write several Microsoft Office formats.
For indexing: I like Lucene (Java), Ferret (Ruby+C), and Montezuma (Common Lisp).
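Going back to POI for a second: a minimal sketch of pulling the plain text out of a .doc with POI's HWPF-based WordExtractor (file path from the command line, error handling omitted):

    import java.io.FileInputStream;
    import org.apache.poi.hwpf.extractor.WordExtractor;

    public class WordText {
        public static void main(String[] args) throws Exception {
            // Extract the document body as plain text, ready for indexing.
            WordExtractor extractor = new WordExtractor(new FileInputStream(args[0]));
            System.out.println(extractor.getText());
        }
    }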
I have mostly been using Ruby the last few years for text processing. Here is a short article I wrote about using the Java Lucene library from JRuby:
http://markwatson.com/blog/2007/06/using-lucene-with-jruby.html [markwatson.com]
Here is another short snippet for reading OpenOffice.org documents in Ruby:
http://markwatson.com/blog/2007/05/why-odf-is-better-than-microsofts.html [markwatson.com]
---
You might just want to use the entire Nutch stack:
http://lucene.apache.org/nutch/ [apache.org]
It's a Lucene-based stack that collects documents, spiders the web, has plugins for many document types, etc. Good stuff!
Re:Easiest solution (Score:4, Informative)
I do a lot of LAMP development, and I'm not the strongest fan of Microsoft for a lot of things, but if you have an MS desktop and MS Office environment, SharePoint Services really is quite decent for INTRANET applications, especially for collaboration. You can set up workflows for check-out/check-in, and it integrates really nicely with some of the more recent MS Office releases. If you connect it to a real MS SQL Server on the back end (as opposed to the Express edition that it defaults to), you can have full-text indexing even with the free SharePoint Services version. The only need for the full-blown Portal/MOSS version is if you think you are going to have a large number of SharePoint sites and want to simplify cross-connecting and management (at least as far as I can recall).
I'm not saying SharePoint is the way to go, but I'd at least read up on it and consider it IF you have a lot of MS Office stuff that you plan on indexing/sharing.
I'd strongly advise avoiding it if you plan to do Internet-based stuff, though... at least until you have a good enough understanding of the security issues involved that you feel you really know what you're doing.
Just my $0.02 worth.
If money's no object... (Score:2, Informative)
Re:Easy off the shelf (Score:2, Informative)
Re:Install Wumpus Search (Score:3, Informative)
Re:Also see Xapian (Score:2, Informative)
I agree that Lucene is a great choice, and not just for Java shops: it has ports for pretty much all major languages. The Java implementation is the 'mothership', but you can use Lucene from PHP, Python, .NET, C++, or whatever.
Secondly, I'd like to point out Lemur [lemurproject.org]. It's an indexing engine similar to Lucene, but geared much more toward the language-modeling approach to information retrieval. Pretty much every IR system uses either a vector-space approach or a language-model approach. Lucene does vector space very well, but it's difficult to get it to do language-model-based retrieval (although extensions are available); Lemur can do both. Lemur also has Indri, a search engine written on top of Lemur, which can parse HTML, PDF, and XML. And like Lucene, Lemur has ports of the API to multiple languages.
A final point I would like to make is that IR is a very actively researched field. If you're going to do your own coding (specifically the retrieval model), I suggest you buy a book and get reading. Most of the basic problems (and there are many) have been figured out, and it'll save you a lot of trouble if you just read up on how to update an index or find spelling suggestions instead of figuring it out for yourself. It's possible to index your documents with Lucene and run searches on them in half an afternoon, but it takes some basic knowledge to get it right and make the app useful. (Look at Wikipedia's search for an example of what you get when you don't follow through and stop after it seems to work OK.)
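To give a sense of that half-afternoon baseline, here's roughly what indexing and searching look like with the Lucene 2.x-era API that was current around this thread (paths and field names are made up):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;

    public class MiniIndex {
        public static void main(String[] args) throws Exception {
            StandardAnalyzer analyzer = new StandardAnalyzer();

            // Build the index (the 'true' flag creates a fresh index).
            IndexWriter writer = new IndexWriter("/tmp/docindex", analyzer, true);
            Document doc = new Document();
            doc.add(new Field("title", "Patent filing checklist",
                    Field.Store.YES, Field.Index.TOKENIZED));
            doc.add(new Field("contents", "How to file a patent...",
                    Field.Store.NO, Field.Index.TOKENIZED));
            writer.addDocument(doc);
            writer.optimize();
            writer.close();

            // Search it and print the stored titles of matching docs.
            IndexSearcher searcher = new IndexSearcher("/tmp/docindex");
            Hits hits = searcher.search(
                    new QueryParser("contents", analyzer).parse("patent"));
            for (int i = 0; i < hits.length(); i++) {
                System.out.println(hits.doc(i).get("title"));
            }
            searcher.close();
        }
    }

Getting from there to something actually useful (incremental updates, spelling suggestions, decent ranking) is where the reading comes in.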
Re:Google (Score:4, Informative)
Re:Lucene (Score:3, Informative)
If you needed to run, say, 100 indexing engines in parallel and merge the indexes, you'd have to research that. Somebody's probably done it.
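For what it's worth, merging separately-built indexes is more or less built into Lucene via IndexWriter.addIndexes; a rough sketch against the same 2.x-era API, where the first argument is the merged index and the rest are the shard indexes:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class MergeIndexes {
        public static void main(String[] args) throws Exception {
            IndexWriter writer = new IndexWriter(args[0], new StandardAnalyzer(), true);
            Directory[] shards = new Directory[args.length - 1];
            for (int i = 1; i < args.length; i++) {
                shards[i - 1] = FSDirectory.getDirectory(args[i]);
            }
            writer.addIndexes(shards); // merge every shard into the new index
            writer.optimize();
            writer.close();
        }
    }

The hard part at 100-engine scale is coordination and disk bandwidth, not the merge call itself.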
Re:Check out Alfresco! (Score:2, Informative)
I notice a lot of the comments in the thread are coming from developers or sysadmins who want to solve everything with libraries or command-line tools. But it really sounds to me like you need a reasonable document management system (and of course, being a Slashdot reader, you want it for free).
Again, I'm not affiliated with Alfresco, but I did quite a bit of research into open source DMSes that would run in a Java environment for a couple of recent projects. I found Alfresco to be well architected, easily extensible if I needed it to be, and, importantly, simple to deploy and get running. It will integrate with your LDAP for access, and while it's marketed as an Enterprise CMS, it's quite capable of doing DMS.
It uses Lucene under the hood, and while it has a web UI, isn't focused on indexing web sites. You can record meta-data against docs, and it's also capable of extracting some metadata from common MS Office formats. I've no doubt this could be extended if there were other doc properties you wanted access to (although I've never tried myself).
Most importantly, the project and community are quite healthy, with very active forums. You can get paid support (the Enterprise License) if you so desire, but I expect you'd probably start with the GPL version just to get yourself up and running.
I wouldn't recommend the SMB interface for the time being as there's currently an outstanding bug with it that causes it to die after a while (the rest of the app continues to run happily), however the FTP interface is great for an initial import of docs. Also take a look at the rules capability for classifying/sorting docs as they're imported.
It does the basics like check-in/check-out and workflow, and it can be backed by your DB of choice, as it uses Hibernate for ORM. Searching can be done against keywords or meta-data (classifications, dates, authors, etc.) and in my experience is more powerful and useful than SharePoint's keyword-based searching. If you're really keen, you can use the Java or Web Service APIs for integrating into other solutions.
Again, I'm not affiliated, but clearly I'm a fan-boy
Re:Easiest solution (Score:3, Informative)
Beyond that, there are a lot of caveats, and as soon as you start modifying the layout (even if it's just the HTML) in SharePoint Designer, Microsoft Support will not help you (as if they could in the first place). Simple things like whitespace in between table structures can make your list workflows screw up (yes, there is an actual open case with Microsoft Support for that very issue). A lot of things will not work at all or require a nasty hack or workaround (like attachment upload on modified forms), and these issues have been known to Microsoft for the last 9 months.
Re:Lucene (Score:3, Informative)
Re:Lucene (Score:5, Informative)
Yes, they have. In my previous job we had to search 2 terabytes of plain text data (HTML) really fast. The company chose Autonomy, and many developers spent many months trying to make it work, consuming insane amounts of hardware resources for mediocre results. One lone (and brilliant) dev whipped up a Lucene proof of concept in a weekend, and it was faster (full index in a day), required fewer resources (a single HP DL585 with 16GB RAM and 4x dual-core AMD, as opposed to 10 of the same), had a smaller index (about a fifth of Autonomy's), returned results faster, produced a more accurate result set, and was significantly more flexible in making it do what we actually needed it to do.
Lucene wins hands down.
Search software (Score:3, Informative)
Terrier - LINK [gla.ac.uk]
Indri/Lemur - LINK [lemurproject.org] / LINK [lemurproject.org]
MG - LINK [mu.oz.au]
Re:Meta tags are worthless, generally (Score:3, Informative)
Unless you can pin responsibility for a document to a named person, you can't trust anything in the document. Not metadata, not content, not presentation.
The meta tags in most of the documents I deal with are inserted by the applications; only the content is human-drafted. Those meta tags contain information like creation date, modification date, application name, character encoding, etc. They are generally trustworthy.
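For the Office formats specifically, those application-inserted properties live in the SummaryInformation stream, which POI's HPSF component can read; a rough sketch (file path from the command line, error handling omitted):

    import java.io.FileInputStream;
    import org.apache.poi.hpsf.PropertySet;
    import org.apache.poi.hpsf.PropertySetFactory;
    import org.apache.poi.hpsf.SummaryInformation;
    import org.apache.poi.poifs.filesystem.DocumentInputStream;
    import org.apache.poi.poifs.filesystem.POIFSFileSystem;

    public class DocProps {
        public static void main(String[] args) throws Exception {
            POIFSFileSystem fs = new POIFSFileSystem(new FileInputStream(args[0]));
            DocumentInputStream dis =
                    fs.createDocumentInputStream(SummaryInformation.DEFAULT_STREAM_NAME);
            PropertySet ps = PropertySetFactory.create(dis);
            if (ps instanceof SummaryInformation) {
                SummaryInformation si = (SummaryInformation) ps;
                // The application-written fields mentioned above.
                System.out.println("Author:      " + si.getAuthor());
                System.out.println("Created:     " + si.getCreateDateTime());
                System.out.println("Application: " + si.getApplicationName());
            }
        }
    }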
I'm also in the process of building a documentation system; it will be a set of documents in various formats, with an HTML interface, a Tomcat server, and Lucene to make it fully searchable.
In a previous job, I did a similar thing with Apache and ht://Dig on an old Dell I recycled. Document files could be uploaded by anybody with an FTP account on the server, and index files were automatically regenerated by a cron task at 04h00 each day.
I could have made a trigger to regenerate the index after each FTP upload session, but using cron was easier and sufficiently frequent to be useful.
This time around, the whole system of Tomcat web server and Lucene search engine is bundled on a CD-ROM with the docs, to run on any of the firm's laptops. Because I control the documents, I can build the index files and burn them to the CD-ROM before distribution.
Beef
Look at how you will access the docs first (Score:3, Informative)
Full-text indexing allows users to search the entire contents of documents, but the results are imprecise and voluminous and not terribly useful in most cases (think web search engines here). Yes, you can find all documents that contain the word "patent", but you get a lot of old references to patent leather shoes in addition to what you were probably after. So, with full-text search you get it all, but force the user to subsearch for what they really want.
Using meta-tags lets you pre-classify documents, and having the users do it themselves means you don't have to have a dedicated person to assign the tags. The disadvantage is that everybody makes up their own tags, or, if you have a standard set, you have to rely on people being diligent about applying them. And tag popularity can easily change over time. For example, if you want to find docs that refer to "removable media", this might have garnered a "floppy" tag 15 years ago and "CD" or "DVD" today. You are therefore almost guaranteed to miss some documents using this method.
Database indexing means that you list all your docs in a database, perhaps by title, author, date, or other fields that your users would find useful for searching. The advantage is that every document is indexed the same way, searching is really fast, and the results are usually relevant if your schema is meaningful. The disadvantages are that indexing the docs takes work on input, and users need to know how to search to get the best results.
Finally, you could organize the docs by simple name and folder. This works fine for the desktop and users usually can identify the category that points them to the folder they want. The disadvantage is that this only works well for limited document sets. Once you start getting hundreds of categories and thousands and thousands of documents, things become too hard to find.
So: understand your users' search requirements and the size of your expected database. Only then can you make an informed decision about how to create and index the repository.