The Internet

Any Interest in a Regexp-Based Web Search Engine? 51

K-Man asks: "From time to time, I've seen people comment that they would be interested in searching the web with regular expressions, but I've seen very little research in this area. Over many months (as part of a project I call 'grepple'), I've gradually assembled some background on the idea (also some work-in-progress not noted in the link), and the idea seems to be approaching the realm of technical possibility. However, my expertise is not in marketing, so I have no idea whether anybody would use this capability. So I ask, if you could search the web for any regular pattern, including html, partial words or wildcards, long phrases, or anything you might grep out of an html file, would you do it? What types of searches would you do?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Sunday April 27, 2003 @12:10PM (#5819456)
    ([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)
    • by MarkusQ ( 450076 ) on Sunday April 27, 2003 @12:41PM (#5819626) Journal

      You have a point, but I have no mod points at the moment, save the ones I coin myself. Any new ability will invite new abuses (or, at least, new forms of old ones).

      -- MarkusQ

      P.S. For the regexp challenged, the parent poster was showing how easy it would be to use a regular expression search engine to harvest e-mail addresses which the Bad Guy could then send spam to.

    • Although you could also search for your own email and find any web page that contains it.

      There are a lot of html tag fragments (eg img tags with just part of the src url, href's to a given domain or subsection of a website, etc.) that might be handy to find, at least for technical people.

      I share most people's skepticism about regexp's ever becoming mainstream, but it might be a good foundation for value-added services, like finding web pages by color, font, number of images, etc.
      • Most of the population is too stupid to type words correctly in their native tongue, so yes, regular expressions are way too complicated for the average idiot to handle. However, it'd be a nice feature of the Google API or something.
    • I'm not sure I understand the point--this search would probably return about 80% of all web pages, because they contain an e-mail-address-like string. I think spammers will still just crawl pages and use that form of regex to collect addresses, not search for pages that contain those addresses.

      The original post is sounding more and more like FUD to me.

  • Interest? Sure. (Score:4, Insightful)

    by Violet Null ( 452694 ) on Sunday April 27, 2003 @12:14PM (#5819482)
    I'd be interested. Probably not interested enough to pay for the service, but still.

    But it seems that you'd have a huge performance problem you'd have to work around. Search engines work by indexing the words as-is. Since you can't do that with a regexp search, I can't see any way that you could have a regexp search engine that didn't have to scan every page for every new search.
    • Full text indexing (Score:4, Interesting)

      by K-Man ( 4117 ) on Sunday April 27, 2003 @04:15PM (#5820559)
      The idea is that any character sequence in the source can be found in time only proportional to the pattern length, not the data size.

      The penalty is a bit of space for indexing, but methods for compressed indexing have been found which use only about 40% of the source text size to hold both the index and the source text.

      IMHO, much of the performance problem has already been solved, so the question is really whether people would use a tool if it were developed.
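The claim above, that lookup time depends on the pattern length rather than the data size, is the classic suffix-array property. A toy sketch of the idea (naive construction and a made-up corpus, purely for illustration; a compressed index like the FM index achieves the same bound in far less space):

```python
def build_suffix_array(text):
    # Naive O(n^2 log n) construction; real engines use linear-time algorithms.
    return sorted(range(len(text)), key=lambda i: text[i:])

def count_matches(text, sa, pat):
    # Two binary searches bracket the block of suffixes starting with `pat`,
    # so cost grows with len(pat) * log(len(text)), not with a full scan.
    def bound(strict):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            head = text[sa[mid]:sa[mid] + len(pat)]
            if head < pat or (strict and head == pat):
                lo = mid + 1
            else:
                hi = mid
        return lo
    return bound(True) - bound(False)

text = "abracadabra"
sa = build_suffix_array(text)
print(count_matches(text, sa, "abra"))  # 2
```

Every occurrence of any substring is findable this way, which is what makes arbitrary (non-word-boundary) patterns feasible at all.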
  • by RobotWisdom ( 25776 ) on Sunday April 27, 2003 @12:26PM (#5819543) Homepage
    I'd definitely use it a lot, for searches that Google couldn't handle. Some examples:

    - the obvious one is 'stem*' to get all words that begin with a certain string, but sometimes I might want the opposite '*ending' as well

    - if I'm unsure of the spelling, 'start?end' could come in handy

    - most search-engines are useless for specifying punctuation or capitalization

    - I'd like to be able to search for ranges of dates using '18??' or the equivalent

    - phrases with gaps or alternate forms ("All your [x] belong to [y]")

    My recommendation would be to start with strong-content sites (Project Gutenberg, Wired, etc) and see how computationally expensive it becomes, one step at a time.
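The wildcard examples above translate almost one-for-one into standard regex syntax. A rough sketch of each (the sample text is made up):

```python
import re

text = "In 1869 the stemware broke, an unhappy ending to the party."

print(re.findall(r"\bstem\w*", text))    # 'stem*'   -> ['stemware']
print(re.findall(r"\w*ending\b", text))  # '*ending' -> ['ending']
print(re.findall(r"\b18\d\d\b", text))   # '18??'    -> ['1869']

# "All your [x] belong to [y]": a phrase with gaps
m = re.search(r"All your (\w+) belong to (\w+)", "All your base belong to us")
print(m.groups())                        # ('base', 'us')
```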
    • For phrases with gaps, try Google's * operator, such as:

      http://www.google.com/search?q=%22all+your+*+are+belong+to+*%22
    • - the obvious one is 'stem*' to get all words that begin with a certain string, but sometimes I might want the opposite '*ending' as well


      Very useful to look for a file or a set of files.

      regex:/href=".*cowboyneal\.jpg"/i

        I don't know if it's been fixed, but Mastering Regular Expressions (first edition) said you should avoid /i like the plague. It works by uppercasing your regex, then making a copy of whatever you're searching, and uppercasing that. For a short string, it's ok, but for large files (and if you search the web, that's a LOT of large files) it is ridiculously slower than case-sensitive.

        Manually desensitizing ([Hh][Rr][Ee][Ff]) doesn't have a performance penalty to speak of, so that would have to be done behind the scenes.
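Whether or not modern engines still pay that penalty, the manual trick is trivial to automate. A small sketch (the function name is mine):

```python
import re

def desensitize(literal):
    # Rewrite each letter as a two-character class so the pattern is
    # case-insensitive without relying on the engine's /i flag.
    parts = []
    for ch in literal:
        if ch.isalpha():
            parts.append("[%s%s]" % (ch.upper(), ch.lower()))
        else:
            parts.append(re.escape(ch))
    return "".join(parts)

print(desensitize("href"))  # [Hh][Rr][Ee][Ff]
print(bool(re.search(desensitize("href"), '<A HREF="x">')))  # True
```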


    • I'd be interested, mostly to exclude search hits that were not related to the topic of interest by anything other than an accident of vocabulary.

      For example, if I wanted to search for the use of "Star Wars" in relation to the "Space Defense Initiative" and am not interested in the movie "Star Wars", I would very much like to have a search of "Star Wars !movie". I don't think Google can do this very well, although I haven't tried much either. Another example would be multiple operators, eg +(Apple AND/OR
    • Most of your examples, and the majority of things I would want, would be met by splitting the problem into two parts: 1) do a normal search for the non-regex parts; 2) apply the regex to rank the results (maybe pick the top n sites to limit the subset for the regex search).

      So for example, "Fred* Bloggs" would search for all pages with "Bloggs" and then regex for the Fred part.

      To do a date search, e.g. find events on your birthday - 29/02/???? - find all pages with "29/02" then find 29/02/????.

      not per
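The two-phase split described above is easy to sketch; the toy corpus and function name here are invented for illustration:

```python
import re

# Toy corpus standing in for an index: URL -> page text.
pages = {
    "a.html": "Fred Bloggs wrote this page",
    "b.html": "Freddie Bloggs maintains the archive",
    "c.html": "Joe Bloggs has no first name worth matching",
}

def two_phase(keyword, regex, pages):
    # Phase 1: cheap keyword filter, as a real inverted index would do.
    candidates = {u: t for u, t in pages.items() if keyword in t}
    # Phase 2: run the full regex only over the (much smaller) candidate set.
    return sorted(u for u, t in candidates.items() if re.search(regex, t))

print(two_phase("Bloggs", r"\bFred\w*\s+Bloggs", pages))  # ['a.html', 'b.html']
```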

  • by QX-Mat ( 460729 ) on Sunday April 27, 2003 @12:53PM (#5819679)
    a real time regex engine would perform regexes on condensed byte code of a page rather than the actual page. this is bound to be lossy.

    the only way i can see it happening is an associated list of popular searches is entered into the db store, and regularly updated. sadly you're going up in factors, depending on how many expressions you have, so it'd be a huge db pull.

    maybe... it's a cute idea. I'm sure something client side would be easier, with the advent of broadband in most homes.

    Matt
  • Probably not... (Score:5, Insightful)

    by Jerf ( 17166 ) on Sunday April 27, 2003 @01:02PM (#5819713) Journal
    While there are some cute tricks you can do with a regexp-based engine on the user's side, cute tricks do not a viable technology make. Along with the obvious computational issues, and the difficulty (though perhaps not impossibility) of creating a caching scheme, I think there's the problem that for most use cases where someone might really want to use your search engine, there are more promising ways to approach the problem than regexps.

    The two that come to mind are word-stemming approaches and things trying to take advantage of processing that's closer to (though of course not necessarily reaching) natural language processing. Both of those improvements are really useful, and are at least possible to implement, though not easy.

    Word stemming approaches eliminate the whole class of "I want every form of kill: kill(|ed|er|ing)" queries; plus you don't want a human to have to enumerate that.

    Phrase alternation is already handled by existing syntax: "All your (base OR chili) is belong to (us OR them)." You don't need regexps for that.

    Most of the rest of the examples of where a regexp might be useful are almost certainly toys, that sound like a cool hack but won't actually be useful.

    Note that a counterexample requires not yet another probably-silly hack, but a plausible use case where you have an example of something you were really searching for, that a regexp engine might have been able to solve, and that there was no good way of finding currently. In my experience the only searches that I can't do are the ones where there isn't a search term I can use that will uniquely identify what I'm looking for out of a sea of pages related to that term, but not what I'm looking for. One example I recall was looking for how poisonous a philodendron is to a cat; if the info is out there, it's swamped by pages saying simply that it is poisonous, with no indication of how much.

    That's an example where a hypothetical search engine with better NLP might have helped me, where I could have asked it for only a page that included "how much" information about the poison level, and not its mere existence.

    On the one hand, I'd take this with a grain of salt as I'm just a random Internet yahoo, and you'll always find someone who says "X won't work." You can't let that be a stopper. On the other hand, you might want to mull this over and be sure you are not being overoptimistic about the usefulness of this before committing many resources to it. In particular, I recommend scrutinizing your own usage of real search engines over the next few weeks, and ideally the usage of others, and make sure that you're sure your approach can beat Google in at least some useful domain. Overoptimistic assessment of one's own program is a very real danger of being a programmer and it has scuttled more than one project.
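The "every form of kill" query above does work as a plain regex alternation, which is also the argument for a stemmer: someone has to enumerate the suffixes by hand. A sketch (suffix list and sample text are mine):

```python
import re

text = "The killer killed again; killing is all killers know."

# Manual enumeration of suffixes -- exactly the work a stemmer would spare us.
print(re.findall(r"\bkill(?:s|ed|er|ers|ing)?\b", text))
# ['killer', 'killed', 'killing', 'killers']
```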
  • Anything that lets you search should support Regexps
    Anything that displays data should allow you to search
    Basically, absolutely everything should support regexp search, even if it doesn't make any fucking sense to do so.

    Problem: the ways regular expressions work aren't anywhere near standard from program to program. Even a minor syntax change like "In this one, you need to put a slash before parens in order to make the parens special" vs "in this one, you need to put a slash before parens or else it is trea
  • by Alomex ( 148003 ) on Sunday April 27, 2003 @01:14PM (#5819769) Homepage
    (1) users tend not to type as many regex as you would think

    (2) it is too easy to create a query that matches half the words in the index, bogging your search down to a crawl

    (3) in all likelihood what you want is a stemmer and something that allows typos, not a full fledged regular expression matcher

    (4) the main problem with search engines is that they return too many results, not too few. Regex search capabilities further increase the size of the result set.

    (5) let me repeat point (3). Regular expressions are not a natural operation when searching natural language.

    • by K-Man ( 4117 ) on Sunday April 27, 2003 @04:31PM (#5820619)
      Yes, I agree that pathological regexp's are easy to create, but limits on match length and counts are easy to impose.

      At the technical level, one indexing method I'm currently looking at (the FM index [unipmn.it]) has a couple of advantages. First, it is incremental, extending a match one character at a time, and allows backtracking etc. to probe different legs of a regexp. It's also very quick at counting hits at each step, making realtime pruning of query results very easy.

  • ok, firstly, you posted a link to "the other site."...I thought that wasn't allowed...=)

    any way ....

    I think it would be very cool and very useful, if it could be done without scaling problems. I am not an expert on REs, but I've always been told that they are slower than indexed lookups, and don't scale to massive quantities of info.
    and if it can't be done without scaling problems...it could be done for a subset of the net, like: find all matching entries of Regex1 within all URLs matching Regex2.

    I find t
    • You can regex within a page with a bookmarklet; it usually works in any JavaScript-enabled browser.

      One of them is here [squarefree.com], spawn your favorite search engine and look for bookmarklets, there are plenty.

      Bookmarklets and smart bookmarks (not available in IE) can make magic and turn your browser into a very powerful process ;)
    • It should be fine for SHEF or any file format. Once one stops expecting bytes to form words, many file types become indexable.

      URLs are a good example of difficult-to-parse search targets. At one time I was looking at parsing URLs into components and searching those, but even then it was too hard to search with just a fragment.
  • It won't work (Score:3, Informative)

    by 0x0d0a ( 568518 ) on Sunday April 27, 2003 @02:29PM (#5820107) Journal
    You can't scale it. Indexing systems that could be used as a foundation for regexes (CDAWG structures or similar) don't scale to the level of the Web.

    If you want to do searching of a small intranet, you might be able to get away with it. You might be able to do globbing, but currently using regexes won't work.

    The main regex-related features I suspect people might want are:

    * Phrases. Google and almost all other search engines can already do this, with quotes.

    * NEAR. foo NEAR bar in the document requests documents where foo occurs "near" bar. This is of somewhat more dubious utility, but there are some searches that it's convenient for.

    * Boolean NOT. Google and almost all other search engines can already do this.
    • NEAR. foo NEAR bar in the document requests documents where foo occurs "near" bar. This is of somewhat more dubious utility, but there are some searches that it's convenient for.

      Google already does this to an extent, using NEARness of your search terms as one of the terms in the ranking equation.
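For what it's worth, NEAR itself is expressible as a regex with a bounded word gap, which is one reason a regex engine subsumes it. A sketch; the window size k and the either-order behavior are my assumptions, since every engine defines "near" differently:

```python
import re

def near(a, b, k=5):
    # Match `a` and `b` with at most k intervening words, in either order.
    gap = r"(?:\W+\w+){0,%d}\W+" % k
    return re.compile(r"\b%s\b%s\b%s\b" % (a, gap, b) + "|" +
                      r"\b%s\b%s\b%s\b" % (b, gap, a))

pat = near("foo", "bar", 3)
print(bool(pat.search("foo one two bar")))       # True
print(bool(pat.search("bar then shortly foo")))  # True
print(bool(pat.search("foo a b c d e f bar")))   # False
```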

  • Well, I can't honestly say I'd pay for such a service, but even being able to do simple regex stuff like "There.*gun and gunshot.*who shot who" would be nice. I find that most of my regexp searches, even in grep, are just looking for parts of a sentence or code block using .* .

    However, the above comment on how most people wouldn't be using regex in your engine is a valid one. You'd prolly want to pass off non-regex searches to a more suited engine (ie google), while handling the real searches yourself.

    • I agree that the query mix would probably be 99% keywords, and 1% regexp, but sometimes that 1% makes all the difference in usefulness. I think the keyword stuff would work fine also (each keyword qualifies as a regexp, just a really boring one); it would just be an added bonus to be able to do regexp's or any character sequence.

      The spam thing is valid, although it's already hazardous to post an email on the web. Try typing your phone number into Google - that's another surprise that most people aren't a
  • Possibly... (Score:3, Informative)

    by WindBourne ( 631190 ) on Sunday April 27, 2003 @05:37PM (#5820931) Journal
    an interesting use of this would be on top of the results from, say, Google. Google already seems to give the best results. Simply using an RE engine on top of that would enable a user to get better results.
    • Ranking is a separate issue from selection, or gathering raw hits for a query.

      Google doesn't have a mind-bendingly better selection system (it's a lot like any other search engine), but their ranking is, of course, their main advantage.

      The issue for a search engine like google would be to cut over from a keyword-based inverted index to something a bit more flexible, while maintaining continuity with the current system.

      I think it's possible. We have the technology...(cut to six million dollar man intro).
  • by K-Man ( 4117 ) on Sunday April 27, 2003 @06:36PM (#5821239)
    Since I did that original writeup I've added considerably to what I know about indexing for this sort of thing, and in fact since I submitted this story I've done some work which looks quite encouraging. Rather than post a bunch of replies I'll round up what I can here:

    The most promising method for supporting this idea is a full-text index, one which allows any byte sequence in the source to be looked up quickly. That way, a regexp like /ab(le|ility)/ can be matched by finding matches for "ab", then "abl", "able", "abi", etc. An index which allows progressive refinement of the pattern, from "a" to "ab" to "ability", is a big help.

    It's also important to know when no further matches exist; for instance, if "ab" has no hits, then "abi" doesn't need to be checked.

    The big leap which makes this seem possible is Ferragina and Manzini's FM index [unipmn.it]. This method takes the size of a full-text index from somewhere around 10 times the source, to around 40%, including the text as well as the indexing. Their algorithm is described in relation to fixed-length patterns, but it's a trivial extension to handle regexp-generated sets of patterns as well.

    In the past few weeks I've been working on an implementation of a similar algorithm with possible performance improvements.

    So, the short answer is "yes, it's possible". There are a few hitches here and there, but in comparison to what I knew a year ago, it's much more workable than I would have guessed.
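Assuming the FM index behaves as described, the "extend the pattern one character at a time" search looks roughly like this: a toy Burrows-Wheeler sketch with naive table construction, nothing like a tuned implementation, but it shows both the incremental extension and the cheap hit counting mentioned above.

```python
def bwt_index(text):
    text += "\0"                                   # unique smallest terminator
    rot = sorted(text[i:] + text[:i] for i in range(len(text)))
    bwt = "".join(r[-1] for r in rot)
    first = "".join(sorted(text))
    C = {c: first.index(c) for c in set(text)}     # chars smaller than c
    occ = {c: [0] for c in set(text)}              # occ[c][i]: count of c in bwt[:i]
    for ch in bwt:
        for c in occ:
            occ[c].append(occ[c][-1] + (c == ch))
    return C, occ, len(text)

def count(C, occ, n, pattern):
    lo, hi = 0, n
    for ch in reversed(pattern):                   # backward extension, one char at a time
        if ch not in C:
            return 0
        lo = C[ch] + occ[ch][lo]
        hi = C[ch] + occ[ch][hi]
        if lo >= hi:
            return 0                               # prune: no extension can match now
    return hi - lo                                 # hit count at this step, for free

C, occ, n = bwt_index("abracadabra")
print(count(C, occ, n, "abra"))  # 2
print(count(C, occ, n, "dab"))   # 1
```

The pruning line is what makes probing the different legs of a regexp tractable: a dead branch is abandoned as soon as its count hits zero.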

    • Still, there are some theoretical limitations, e.g.:

      This [mit.edu] gives a worst-case linear lower bound on the size of an index structure for substring search, which is obviously necessary for "full" regexp power. Of course, I doubt anyone really wants full regexps; the challenge you face is constructing a powerful enough subset that is easy to implement.

      Personally, like other posters have mentioned, I am only really interested in stem searches such as stem*.
  • Why not take the Google API and write a regex engine to search the results of a string search....

    Or a simple Perl script that searches the results given back by the web site?
    • There are a few problems with that...

      Firstly the google API will only return 10 results at a time, IIRC, meaning that it wouldn't be possible to meaningfully rank the sites unless you entered a loop to get all the search results from Google - and there could be lots.

      Secondly, it means that you need to be able to search Google first before you can pass a regex filter over it - and what string would you use to search Google with?

      Even if you could get Google to return likely pages for the regex, you'd still
  • That is what I would like to have!

    Daitch-Mokotoff is able to handle many languages compared to the almost English exclusive Soundex, so I would rather use this algorithm.

    And I don't think it would hinder performance that much since you can cache results just as you can with normal queries.
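For context, plain American Soundex fits in a few lines; Daitch-Mokotoff has the same shape but a much larger, multi-valued code table. A sketch of the simple American variant (my own condensation of the standard rules):

```python
def soundex(name):
    codes = {}
    for group, digit in ("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"), \
                        ("l", "4"), ("mn", "5"), ("r", "6"):
        for ch in group:
            codes[ch] = digit
    name = name.lower()
    out, prev = name[0].upper(), codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":
            continue                  # h/w are skipped and don't break a run
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        prev = code                   # vowels reset prev, so repeats recode
    return (out + "000")[:4]          # pad/truncate to letter + 3 digits

print(soundex("Robert"), soundex("Rupert"), soundex("Jackson"))
# R163 R163 J250
```

Caching the four-character codes would work exactly like caching keyword postings, which supports the performance point above.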
  • --I'm fairly decent on using google now, eliminating keywords, limiting it to domains, using proper keywords, etc, but tell ya WHAT would work better. I'd like the ability to ask a normal question, where every word had meaning, the sentence structure had meaning, all of the above, to the search criteria. Just like you talk, exactly like that. Including prepositions; that one non-available feature makes a difference in searching, if they could be included it would be great. Now sometimes I can get lucky, if I
  • by PDHoss ( 141657 )
    ...already supports this (you most often see it in a free search engine called Webinator). It's the search db behind Dogpile, some (all?) of Ebay, parts of ZDNet, and a whole bunch of other stuff. Not cheap by any stretch but solid.

    Check it out: http://www.thunderstone.com/ [thunderstone.com]
  • Yes! (Score:2, Interesting)

    by Jahf ( 21968 )
    Very much interested in this. In fact, I've written letters to Google and Yahoo requesting this but never got much beyond a polite thanks for the suggestion.

    Actually, I'd be pretty satisfied if Google supported the advanced boolean search that Altavista has. Back when Altavista had one of the best databases (if not the best), I used it regularly. Take a look at:

    Altavista Special Search Terms [altavista.com]

    I find that a combination of wildcards, AND, OR, NEAR, NOT, grouping via parentheses and being able to search specifically for a
  • google api (Score:1, Interesting)

    by Anonymous Coward
    I created a google app in perl that did a search for a fixed string, then did a regexp search on the resulting websites. Slower and more limiting than if google did it, but I've got a T1 and a 4-way Xeon :)

    The results are fairly good. It's on sourceforge if anyone wants to use it. They seem to be down right now, or I'd give the url.

  • s/xxx//g Next day on /. "Bug in new regexp-search engine wipes out all pr0n"
