
Building a Search Engine Using Open Technology?

cybrthng asks: "Mozdex.com is my attempt at building a search engine capable of indexing the entire web. Our goal is to provide a completely transparent system utilizing open technologies such as Nutch, Lucene and other systems to provide a search facility that is more scientific and 'protocol'-driven vs the current proprietary and almost 'faith based' search engine results and methods of getting listed. What do you look for out of a search engine? What would you look for out of this project? Should large commercial entities be the only way we find information and resources on the net? BTW, our beta index currently has about 50 million pages and we hope it shows what can be done using Open Source systems available today. We are seeking input on starting a developer & input community as well as getting concepts and ideas out and about, so we value your ideas and what you hope to see out of this project."
  • by shaitand ( 626655 ) * on Wednesday May 12, 2004 @10:47PM (#9135443) Journal
    Support our index, sponsor mozAds keyword advertising: as low as 1 cent/click

    Is it different only because it runs on open source software? Hell, Google does that successfully already.
    • by k4_pacific ( 736911 ) <k4_pacific@yaho o . com> on Wednesday May 12, 2004 @11:04PM (#9135557) Homepage Journal
      Yes, Google already runs on OSS even though the search software itself is proprietary. If you wanted to truly put the search engine in the hands of the people, consider this idea. You could use P2P technology to distribute the search index across millions of systems worldwide. If someone wants to use the search engine, they must download the client software and donate, say, 100 MB to the project. Of course, you would have to have the system set up so that it has massive redundancy to handle cases where individual nodes are offline. Also, the logistics of distributing the search across so many systems would need to be worked out. Furthermore, there is the possibility that users may attempt to tweak the client handling their node to increase the score for various pages or decrease the score for others. These issues would have to be worked out, but it could be feasible. Frankly, I'm too lazy to implement it, but you are welcome to credit me for the idea when it's all done.
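
      A minimal sketch of the partitioning half of this idea, assuming each term's posting list is assigned to volunteer nodes by consistent hashing, with a few replicas to cover nodes that drop offline (node names and counts are invented):

        # Hypothetical sketch: spread index terms over volunteer P2P nodes
        # with consistent hashing; replicate for offline-node redundancy.
        import hashlib
        from bisect import bisect_right

        class TermRing:
            """Consistent-hash ring mapping index terms to node IDs."""
            def __init__(self, nodes, replicas=3, vnodes=64):
                self.replicas = replicas   # copies of each term's postings
                self.ring = sorted(
                    (self._hash(f"{node}:{i}"), node)
                    for node in nodes for i in range(vnodes)
                )

            @staticmethod
            def _hash(key):
                return int(hashlib.sha1(key.encode()).hexdigest(), 16)

            def nodes_for(self, term):
                """Distinct nodes that should hold this term's postings."""
                start = bisect_right(self.ring, (self._hash(term),))
                owners, i = [], start
                while len(owners) < self.replicas:
                    node = self.ring[i % len(self.ring)][1]
                    if node not in owners:
                        owners.append(node)
                    i += 1
                return owners

        ring = TermRing([f"peer{n}" for n in range(10)])
        print(ring.nodes_for("jaguar"))   # e.g. ['peer7', 'peer2', 'peer9']

      Queries then fan out only to the owners of each query term, which is also where the score-tampering worry bites: you would want replicas to cross-check each other's answers.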

      • by Tune ( 17738 )
        Although the website mentions "open source" a lot, it only supplies a link to a sourceforge page [sourceforge.net], which does not seem to supply anything downloadable.

        Although Mozdex appears to be of good will, notice that the GPL does not force them to distribute changes to GPLed code as long as they're the only ones using the code. The GPL would only be effective if they were to distribute changed binaries, but they do not distribute anything other than HTML web content. This could become a major headache with the GPL.
      • by Anonymous Coward

        Step 1: Create P2P search engine technology.

        Step 2:

        Also, the logistics of distributing the search across so many systems would need to be worked out.

        Step 3:

        Furthermore, there is the possibility that users may attempt to tweak the client handling their node to increase the score for various pages or decrease the score for others. These issues would have to be worked out, but it could be feasible. Frankly, I'm too lazy to implement it, but you are welcome to credit me for the idea when it's all done.

        Step 4: Profit!

  • As a webmaster (Score:2, Insightful)

    by Anonymous Coward
    The thing I look for is a polite bot. Does it follow robots.txt fully? Does it hammer the server? Does it respect page modification headers?
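
    Concretely, those three courtesies amount to something like this sketch (the bot name and delay are placeholders; robots.txt and If-Modified-Since handling use Python's standard library):

      # Sketch of a polite fetch: obey robots.txt, throttle per host,
      # and send If-Modified-Since so unchanged pages cost the server nothing.
      import time
      import urllib.error
      import urllib.request
      import urllib.robotparser
      from email.utils import formatdate
      from urllib.parse import urlsplit

      AGENT = "examplebot/0.1"   # hypothetical user-agent string
      CRAWL_DELAY = 5            # seconds between hits to one host

      def polite_fetch(url, last_seen=None):
          """Fetch url if allowed; last_seen is the unix time of the prior crawl."""
          parts = urlsplit(url)
          rp = urllib.robotparser.RobotFileParser()
          rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
          rp.read()
          if not rp.can_fetch(AGENT, url):
              return None                    # robots.txt says keep out
          req = urllib.request.Request(url, headers={"User-Agent": AGENT})
          if last_seen is not None:          # only refetch changed pages
              req.add_header("If-Modified-Since", formatdate(last_seen, usegmt=True))
          try:
              with urllib.request.urlopen(req) as resp:
                  body = resp.read()
          except urllib.error.HTTPError as err:
              if err.code == 304:            # Not Modified: skip re-indexing
                  return None
              raise
          time.sleep(CRAWL_DELAY)            # don't hammer the server
          return body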
  • by Chester K ( 145560 ) on Wednesday May 12, 2004 @11:00PM (#9135530) Homepage
    An open source search engine is a great idea! I'll know exactly how to exploit the ranking algorithms to position my pages as #1!
  • How about you work out some way to do this old saw:

    Searching for "Jaguar" the fighter bomber as opposed to "Jaguar" the comic book character.

    But then, I'd want you to get into natural language processing to determine what the real "topic" was that I meant. Of course I'm assuming a free-form field. I'd like to just be able to put in "Jaguar the bomber" or "aeronautical: jaguar" or "plane jaguar" or even "plain jaguar" and have it do a Googlesque "Did you mean 'plane Jaguar'".

    Hmm, and a fun API so you could
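
    The "Did you mean" piece by itself is cheap: here is a toy version using edit distance against the engine's term list (the vocabulary below is a stand-in; a real engine would use its index's dictionary). The topic-disambiguation half is the genuinely hard part.

      from difflib import get_close_matches

      # Stand-in vocabulary; a real engine would use its own term dictionary.
      VOCAB = ["plane", "plain", "jaguar", "bomber", "aeronautical", "comic"]

      def did_you_mean(query):
          """Suggest a corrected query if any word is close to a known term."""
          out, changed = [], False
          for word in query.lower().split():
              if word in VOCAB:
                  out.append(word)
                  continue
              close = get_close_matches(word, VOCAB, n=1, cutoff=0.8)
              out.append(close[0] if close else word)
              changed = changed or bool(close)
          return " ".join(out) if changed else None

      print(did_you_mean("plane jagaur"))   # -> "plane jaguar"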
    • Oh, and the search engine needs to have some understanding of the pages it's looking at so it can distinguish between pages that are about jaguar planes (or the comic book character) as opposed to pages that just mention them but might actually be about a related topic.
    • by jc42 ( 318812 ) on Wednesday May 12, 2004 @11:20PM (#9135632) Homepage Journal
      Yes, and a related topic is indexing files that are in some specialized format.

      I run a search site that only indexes a few hundred other sites and around 170,000 files (today). What the files contain doesn't matter here. What's significant is that the data, while being (usually) plain ascii text, is not in any human language. If you saw it and didn't know the subject area, you wouldn't be able to make sense of it. It's very useful to a few thousand users, and of no interest whatsoever to anyone else.

      One thing that could be feasible with an open-source search project is to discuss ways in which specialized search engines like mine can be incorporated. The data that I index can be related to several other kinds of online data that are in turn indexed by others. But my code doesn't make the connection, and neither do the search engines for the related types of data.

      This strikes me as a significant problem that the big guys can't much work on (yet). And, like "orphan" drugs, they probably won't ever find it worthwhile to work on most kinds of data that only exist in a few thousand files.

      But if we could define a way to interface search engines so that they can recognize each other and refer queries to each other, then these specialized data formats could be usefully searched and indexed.

      Sounds worthwhile to me. I wonder if I could find someone to pay me a salary while I worked on it?
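
      The interface could start very small: each engine advertises a predicate for the queries it understands, and a front end forwards accordingly. A hypothetical sketch (every name and endpoint here is invented):

        # Hypothetical referral layer: each engine says which queries it
        # understands; the front end fans the query out to the claimants.
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Peer:
            name: str
            accepts: Callable[[str], bool]  # does this engine grok the query?
            search: Callable[[str], list]   # the engine's own handler

        def federated_search(query: str, peers: List[Peer]) -> dict:
            """Forward the query to every peer that claims to handle it."""
            return {p.name: p.search(query) for p in peers if p.accepts(query)}

        # Toy peers: a general web index plus a niche-format specialist.
        web = Peer("webindex", lambda q: True,
                   lambda q: [f"http://example.org/hit?q={q}"])
        niche = Peer("specialist", lambda q: q.startswith("data:"),
                     lambda q: [f"record matching {q[5:]!r}"])

        print(federated_search("data:foo", [web, niche]))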

      • >...is not in any human language... So, it's like legal documents and stuff?
        • Heh, no. But that could be used as a similar example. A search engine that is good at legal searches should be a lot pickier about how the language is used. I'd imagine that lawyers would really like something that is a lot more targeted and precise than google.

          In general, I'd think that to solve this problem, you wouldn't want to look too closely at specific examples, other than to become convinced that each specialty really is going to need its own parser and syntax analyzer.

          A good traditional examp
    • You could do that by (a) putting in more keywords; or (b) letting the search engine suggest topics/extra search keywords for a given search; some search engines try to do this already. As to how, latent semantic indexing looks good (it's a matrix technique used to find relationships between bits of data, such as the ones you discuss).
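
      For the curious, a toy LSI pass looks like this: factor a term-document count matrix with a truncated SVD, then compare documents in the reduced "topic" space (the five-term, three-page corpus below is invented):

        # Toy latent semantic indexing with numpy: truncated SVD of a
        # term-document count matrix, then cosine similarity in topic space.
        import numpy as np

        #        d0: aviation page, d1: comics page, d2: mixed page
        counts = np.array([[2, 1, 1],    # "jaguar" (in all three)
                           [3, 0, 1],    # "plane"
                           [2, 0, 0],    # "bomber"
                           [0, 3, 1],    # "comic"
                           [0, 2, 0]])   # "hero"

        U, S, Vt = np.linalg.svd(counts, full_matrices=False)
        k = 2                                 # keep the two strongest topics
        docs = (np.diag(S[:k]) @ Vt[:k]).T    # documents in topic space

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        # The mixed page should land nearer the aviation page than the comics one.
        print(cosine(docs[2], docs[0]), cosine(docs[2], docs[1]))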
    • Our search engine does something like this. Go to http://search.aol.com and search for "eagles", for example.

  • It's something to really consider, because I can see how an open algorithm would be beneficial, but it's very easy to see how it can be spammed into uselessness.

    I think of the Dow and other financial indices and believe that the proprietary model may be the only successful way to provide useful, reliable information.

    Then I look at encryption, and I see how the algorithms, being public, can be vetted without compromising the security of the communication through a proprietary, secret key.

    I suspect t
    • Well, it could easily assess a penalty against any website ranked above the one the user clicked on in any given search, thus ensuring that "xxX Hot Teens Slashdot Xxx" is punted down to the bottom of the Slashdot searching pile. Of course, you would need an uncrackable way of summarizing the pages...
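
      Stripped to its core, that penalty is bookkeeping as simple as the sketch below (the step size and storage are placeholders, and it inherits the gaming problem already raised: fake clicks move scores too):

        # Sketch of the click-feedback penalty: results shown above the
        # one the user chose lose a little score; the chosen one gains.
        from collections import defaultdict

        adjust = defaultdict(float)   # url -> learned rank adjustment
        STEP = 0.01                   # placeholder learning rate

        def record_click(ranked_urls, clicked_url):
            for url in ranked_urls:
                if url == clicked_url:
                    adjust[url] += STEP
                    break
                adjust[url] -= STEP   # ranked higher, yet skipped

        record_click(["spam.example", "good.example"], "good.example")
        print(dict(adjust))   # {'spam.example': -0.01, 'good.example': 0.01}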

  • results of course (Score:4, Insightful)

    by rueger ( 210566 ) on Wednesday May 12, 2004 @11:38PM (#9135743) Homepage
    What would you look for out of this project?

    The only thing that matters is results. Is the answer that I need in the first three or four results? If you can do that, you win. If you can't, don't bother.

    I'm skeptical about how realistic it is to develop an open source search engine. Wikipedia [wikipedia.org], although cool, has large gaps in content, and only a few months ago was begging [slashdot.org] for donations to survive. I'm betting that a Google-sized operation would be even more resource intensive.
    • I'm skeptical about how realistic it is to develop an open source search engine. Wikipedia, although cool, has large gaps in content, and only a few months ago was begging for donations to survive.

      Well, Wikipedia did get almost $30K in donations that time, and from what I gather it is still getting lots of donations; it could easily get a lot more whenever it wanted, because lots of people LOVE that project. So that part is successful.

      As for the large gaps in content, they are being worked on every day. That's
  • of such a search engine would be something akin to the philosophy of openness that is common to GNU etc, and free as in beer of course. It's OK to have open rankings, if the point of this is to index the more non-commercial side of stuff. And don't bother to cover the same base as google. What's the point?
  • by xmas2003 ( 739875 ) on Thursday May 13, 2004 @12:31AM (#9136000) Homepage
    First, I've futzed around with MozDex for a little while, so congrats on having Slashdot "find" you and getting the word out.

    What I have found REALLY interesting about MozDex is the "explain" button, which I assume provides some insight into why MozDex decided to rank that web URL where it did ... but the information as currently presented isn't understandable and/or explained.

    For instance, I was interested in where a Google Compute [powder2glass.com] web page came up and was actually quite surprised that a MozDex Search shows it as #1. [mozdex.com] So I click on the explain button and I get a page with a buncha numbers ... but nowhere on this page (or anywhere on the MozDex site) can I find an explanation for what the heck they mean. (A rough rendition of the usual scoring factors appears after this comment.)

    Since your claim-to-fame is open source/search, I think adding information on the internal algorithms would help you out. Keep up the good work - interesting stuff! ;-)

    alek

    P.S. Minor typo in the Corporate Info link from your FAQ [mozdex.com]
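
    For anyone puzzling over the same "explain" page: the numbers almost certainly come from Lucene's classic similarity, which Nutch builds on. A rough Python rendition of the per-term factors (the exact formula in any given MozDex build may differ, and the inputs below are made up):

      # Rough rendition of Lucene's classic per-term score, the kind of
      # factors an "explain" page itemizes.
      import math

      def idf(num_docs, doc_freq):
          # rare terms count for more
          return 1.0 + math.log(num_docs / (doc_freq + 1))

      def term_score(freq, num_docs, doc_freq, field_len, boost=1.0):
          tf = math.sqrt(freq)               # repeats help, sublinearly
          norm = 1.0 / math.sqrt(field_len)  # shorter fields score higher
          return tf * idf(num_docs, doc_freq) ** 2 * boost * norm

      # a term occurring 3x in a 100-word field, in 50k of 50M documents:
      print(term_score(3, 50_000_000, 50_000, 100))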

  • - less importance given to commercial sites and blogs, more importance given to general information
    - results not easily manipulated (e.g. http://your-search-keywords.com/your-keywords/keywords.html and their ilk)
    - fast discovery of new or updated sites
    - features such as caching, view as X, spam reporting
  • by christophe.vg ( 742168 ) on Thursday May 13, 2004 @02:26AM (#9136481) Homepage

    While browsing the Mozdex site, I learned they are using Nutch, an open source search engine. So I started browsing the Nutch site. On their site I found out that they are sponsored by Overture Research ... The name seemed familiar. Clicking on the link I arrived at http://labs.yahoo.com [yahoo.com].

    Apparently Yahoo is rather interested in this project. Browsing the Yahoo Labs site I found this page [yahoo.com] (which is also the third hit when googling for nutch): "Welcome to the Yahoo! Research Labs implementation of the Nutch open source search engine (www.nutch.org). This search engine is intended as a demonstration platform for a number of search related technologies that we are working on and is specifically not intended to provide a full and comprehensive search experience for the average user. If you do a search here, please do not be surprised or offended if your favorite site is not in the result set for your query. With this in mind, please feel free to test drive the technology. Happy Nutch-ing."

    A very quick test shows that mozdex's 50-million-page index is indeed still far too small to really find something. The ranking system will also need some tweaking, but this is also clearly stated on the nutch site: "Nutch has not yet been tuned for quality. There are ten or twenty knobs that we can twiddle to adjust the ranking formula. We are developing software to do this tuning automatically, but the current code just contains guesses. With a little tuning we should be able to get results that are competitive with those of major search engines." (A toy version of such automatic tuning appears after this comment.)

    Although it is currently not possible to do any real comparison due to the big difference in the number of indexed pages, it sure is nice to see both the Nutch project and the Mozdex project. I hope that both of these projects will receive enough funding (and hardware) to continue, and maybe we'll see another /. post when they hit the 5 billion page count and we will be able to do a massive comparison ... and all change from googling to nutching or mozdexing!

    One to watch
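
    The "knob twiddling" Nutch describes can be automated crudely: hold a set of judged queries, random-search the ranking weights, and keep whichever weights rank the known-good pages highest. A sketch with stand-in scoring and judgments:

      # Toy automatic tuner: random-search field weights against judged
      # queries, keeping whichever weights rank the good document highest.
      import random

      KNOBS = ["title", "anchor", "url"]   # field weights to tune

      def score(doc, weights):
          # doc holds per-field match counts; deliberately simple formula
          return sum(weights[k] * doc[k] for k in KNOBS)

      def mrr(weights, judgments):
          """Mean reciprocal rank of the known-good doc per judged query."""
          total = 0.0
          for good, candidates in judgments:
              ranked = sorted(candidates, key=lambda d: score(d, weights),
                              reverse=True)
              total += 1.0 / (1 + ranked.index(good))
          return total / len(judgments)

      # One judged query: the good page matches mostly in anchor text.
      good = {"title": 0, "anchor": 3, "url": 1}
      spam = {"title": 5, "anchor": 0, "url": 0}
      judgments = [(good, [good, spam])]

      random.seed(0)
      best, best_score = None, -1.0
      for _ in range(200):                  # crude random search
          w = {k: random.random() for k in KNOBS}
          s = mrr(w, judgments)
          if s > best_score:
              best, best_score = w, s
      print(best_score, best)               # expect anchor-heavy weights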

  • by iangoldby ( 552781 ) on Thursday May 13, 2004 @04:23AM (#9136879) Homepage
    I'd love to be able to filter out all sites that are trying to sell something.

    Searching on Google for things like reviews of mp3 players has become a nightmare these days. Any useful sites are drowned out in a noise of pricerunner/dealtime/kelkoo/shopping.yahoo/etc and other sites that are simply affiliate sites for Amazon etc.
  • by Anonymous Coward
    ...and a *HUGE* hard drive.

    Download the internet with httrack and search it with grep.
  • by Alomex ( 148003 ) on Thursday May 13, 2004 @08:11AM (#9137857) Homepage
    An OSS search engine that actually indexes the entire web and is used by many people is at least a couple of orders of magnitude harder than the Mozilla project.

    Writing the search code itself is not too hard (you still need a PhD in data structures and algorithms, but those can be found); the real hard part is the amount of bandwidth and CPU power that is required.
  • A different name (Score:4, Interesting)

    by doc modulo ( 568776 ) on Thursday May 13, 2004 @08:13AM (#9137873)

    You need a name that is as easy to pronounce as google. One that sounds as friendly would be good as well.

    You're "competing" on a number of different areas with google, including the name ofcourse.

    The first thing that came to my mind when I read the name was: "Typical for geeks who are good at the technical side of things, but are bad at marketing and the human interface/psychology side".
  • Right now, there might be 50 million pages indexed, but it looks like I've got to go through 1 million of those to get to what I searched for.

    My two tests were 1: "Hattrick" which is an online soccer management game, and is great. Google it and up it comes with some handy links to some sites about it. Using this engine, I got a bunch of crap. It may have been pages that linked to hattrick, but I didn't check.

    2: "Buyer Agent Boulder Colorado" - Exclusive buyer agents are the preferred way of buyi
  • Easy! (Score:2, Funny)

    by JamesP ( 688957 )
    cat database | grep query

    Completely Open Source!

  • What do you look for out of a search engine?
    One whose algorithms work. (In this instance, your first result is the same for "henry l stimson" and for "uss henry l stimson".)


    Back to the drawing board.
