Software Technology

Bayesian Filtering Outside of Email?

clonebarkins asks: "Is anybody out there using Bayesian filtering for stuff other than to get rid of spam? For example, how useful would Bayesian filtering be to identify news stories/blog entries in the RSS feeds I monitor? Is there any software out there using Bayesian filtering to do this sort of thing already? Are other types of filters better for these purposes?" What other areas can you think of where Bayesian filtering may prove useful?
This discussion has been archived. No new comments can be posted.

Bayesian Filtering Outside of Email?

Comments Filter:
  • by NanoGator ( 522640 ) on Tuesday March 30, 2004 @12:34AM (#8711010) Homepage Journal
    .. imagine, filtering out MS fud stories and dupes!
  • Slashdot Dupes.

    And, as a more insightful suggestion, troll posts marked as redundant in slashdot stories. There have been a few "attacks" on slashdot which could have been prevented by simply blocking 'repeat' posts.
    • Re:Nyuk Nyuk Nyuk (Score:4, Interesting)

      by NanoGator ( 522640 ) on Tuesday March 30, 2004 @12:37AM (#8711040) Homepage Journal
      " There have been a few "attacks" on slashdot which could have been prevented by simply blocking 'repeat' posts. "

      Filtering out GNAA posts would be nice. Not that I've run into it lately, but there was a story a couple of months back that had nearly 1,000 GNAA posts. Impressive organization on behalf of the trolls, but it did take a while to suss out. (I wonder how many mods burned up mod points that night...)
  • by costas ( 38724 ) on Tuesday March 30, 2004 @12:38AM (#8711046) Homepage
    Bayesian filtering needs pre-determined "bins" of data to assign a new piece of information to -- that's a limited approach that will break down for news articles or generic Web pages. A combination of context- and collaborative-filtering is a much better approach [memigo.com] IMSHO (that's my newsbot, BTW).
    • by stoborrobots ( 577882 ) on Tuesday March 30, 2004 @12:48AM (#8711101)
      Most "Filtering" techniques fall into the same trap you've outlined - namely that they require pre-determined bins to sort data into. This is the nature of the beast.

      There are "clustering" techniques which attempt to identify similar bunches of data, without respect to any pre-determined bins, but the are not as useful for programmatically dealing with information. This is simply because you don't know what the clusters will contain, so you cannot make assumptions about what you will want to do with each cluster.

      Classification systems are used when you WANT to fit things into one of a number of bins you have already decided what to do with (e.g. SPAM - delete, From Mistress - show now, From Boss - file for later, From Debt collector - return "Deceased", etc.). Bayesian filtering is simply one form of classification (a minimal sketch appears at the end of this comment).

      For more information and ideas, check out KD Nuggets [kdnuggets.com]

      Nice work on the newsbot, BTW.
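
      To make that concrete, here is a minimal multinomial Naive Bayes sketch in plain Python (no external libraries); the bin names and training snippets are invented for illustration, so treat it as a sketch of the idea rather than anything production-grade:

        import math
        from collections import defaultdict

        class NaiveBayes:
            """Tiny multinomial Naive Bayes over whitespace-delimited words."""

            def __init__(self):
                self.word_counts = defaultdict(lambda: defaultdict(int))  # bin -> word -> count
                self.bin_totals = defaultdict(int)  # bin -> total training words
                self.doc_counts = defaultdict(int)  # bin -> number of training docs

            def train(self, bin_name, text):
                for word in text.lower().split():
                    self.word_counts[bin_name][word] += 1
                    self.bin_totals[bin_name] += 1
                self.doc_counts[bin_name] += 1

            def classify(self, text):
                vocab = {w for counts in self.word_counts.values() for w in counts}
                total_docs = sum(self.doc_counts.values())
                best_bin, best_score = None, float("-inf")
                for b in self.word_counts:
                    # log prior plus add-one-smoothed log likelihoods
                    score = math.log(self.doc_counts[b] / total_docs)
                    for word in text.lower().split():
                        score += math.log((self.word_counts[b].get(word, 0) + 1) /
                                          (self.bin_totals[b] + len(vocab)))
                    if score > best_score:
                        best_bin, best_score = b, score
                return best_bin

        nb = NaiveBayes()
        nb.train("boss", "quarterly report deadline meeting budget")
        nb.train("mistress", "dinner tonight miss you")
        nb.train("spam", "viagra cheap free offer click now")
        print(nb.classify("the budget meeting moved to friday"))  # -> 'boss'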

      • I would love for an E-mail program to automagically sort my work e-mail into the project folder it belongs in!
        • Well, theoretically the same Bayesian filter that knows to put spam in the "spam" folder can be similarly taught to put arbitrary content in an arbitrary folder. The trick is training it. The email client would have to somehow "record" every time you moved or copied something into a folder (or numerous folders), and then, when a message fit that criteria, it would have to replicate that action, move/copy, to the specified folder or folders. I don't think it's all that hard, but I don't think it's been done in major email clients. (A rough sketch of the idea follows at the end of this sub-thread.)
          • The email client would have to somehow "record" every time you moved or copied something into a folder (or numerous folders), and then, when a message fit that criteria, it would have to replicate that action, move/copy, to the specified folder or folders. I don't think it's all that hard, but I don't think it's been done in major email clients.

            Provided you find a Bayesian filter which can use arbitrary destinations, Sylpheed Claws [sylpheed.org] can easily take care of the automatic filtering using its folder processing.

          • Well, theoretically the same Bayesian filter that knows to put spam in the "spam" folder can be similarly taught to put arbitrary content in an arbitrary folder. The trick is training it.

            This is really not that hard. Check out POPfile, an open-source Perl program that's intended for spam filtering but can be used and adapted for much more. It's as good as or better than Mozilla's Bayesian engine - I would still be using it except that the Mozilla approach does offer some integration benefits. For othe
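
            To make the "record every move" idea quoted above concrete, here is a rough sketch. The FolderLearner class and its on_move hook are hypothetical glue, not POPfile's or any real client's API, and the score is a crude word-frequency ratio rather than a full Bayesian computation:

              from collections import defaultdict

              class FolderLearner:
                  """Accumulates word counts per folder as the user files messages."""
                  def __init__(self):
                      self.counts = defaultdict(lambda: defaultdict(int))

                  def on_move(self, folder, message_text):
                      # Called by the (hypothetical) mail client whenever the user drags
                      # a message into a folder: that action is the training signal.
                      for word in message_text.lower().split():
                          self.counts[folder][word] += 1

                  def suggest_folder(self, message_text):
                      words = message_text.lower().split()
                      def score(folder):
                          total = sum(self.counts[folder].values()) or 1
                          return sum(self.counts[folder].get(w, 0) / total for w in words)
                      return max(self.counts, key=score) if self.counts else None

              learner = FolderLearner()
              learner.on_move("project-x", "status update for the project x milestone")
              learner.on_move("invoices", "attached invoice for march hosting")
              print(learner.suggest_folder("new project x milestone slipped"))  # -> 'project-x'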
    • That's great stuff, and looks like it would work for what I want. Is the backend code open source? Curtis.
    • You could try sorting into "interesting" and "uninteresting" based on previously labeled webpages. Those two categories would be entirely user specific and any dataset would become invalid over time as user interest shifts, but still, these are two "good" bins.
      For newsfeeds you could set a subject (for example: "Presidential elections") and sort into "About presidential elections" and "Not about presidential elections". You just make an initial suggestion (a few articles maybe) and judge the first few artic
  • Here's an enhancement request [mozilla.org] I filed for Firefox. This is something I think would be a nice use of Bayesian filtering.
    • Sorry, links to Bugzilla from Slashdot are disabled.

      I get the feeling they've been slashdotted before. Once bitten, twice shy...
      • To view the bug report :

        1. Enter http://bugzilla.mozilla.org/ directly in your browser's navigation bar.

        2. Enter bug # 235076 and click show.

        3. View suggestion.

        4. ???

        5. profit !
        • 1. Go to http://www.opera.com/, click Free Download, and download the version for your platform.
          2. Go back here, hit F12, and uncheck Enable Referrer Logging.
          3. Click the link, and view the suggestion.
          4. ???
          5. Well, if you want to get rid of the Opera ad banner, it's not profit, but hey...
  • by Jayfar ( 630313 ) on Tuesday March 30, 2004 @12:51AM (#8711125)
    See their technology overview [autonomy.com]. I believe they have a number of (ugh!) patents on Bayesian text analysis. The company was founded by Dr. Michael Lynch to productize research he did at Cambridge University.
  • by GrumpySimon ( 707671 ) * <`zn.ten.nomis' `ta' `liame'> on Tuesday March 30, 2004 @12:56AM (#8711153) Homepage
    Bayesian approaches have really taken off in studies of molecular evolution (Phylogenetics).

    For those of you who don't know, phylogenetics is a set of techniques for working out a 'family tree' of taxa (taxa = basically units of analysis, normally species or genetic sequences). The main reason for doing this is that it gives an objective way of testing evolutionary hypotheses. For example - If I predict a certain protein has evolved through stages A, B then C, but my tree shows a pattern of A - C - B, I can reject that hypothesis.

    Phylogenetics is extremely powerful and has allowed us to investigate many many cool things (like the origin of modern humans in Africa, and the migrations out of it). The problem is that there is a *huge* number of trees to search to find the optimal set of trees. The formula (IIRC) is (2N-3)!!, where N is the number of taxa. So, 10 taxa (species or whatever) give about 34 million trees, and when you get up to a real dataset it gets much worse: there are 10^132 ways of connecting my 77-taxon dataset. (A quick sanity-check calculation is sketched at the end of this comment.)

    Bayesian approaches can really, really speed up this process. We used to have to do a large number (100-1000) of heuristic analyses and then bootstrap (a resampling procedure) these to get a confidence interval on, say, a divergence date or a model fit. These Bayesian techniques allow us to do, say, 10 long runs whilst simultaneously estimating parameters.

    Sooo much faster (i.e. for that 77-taxon dataset mentioned before, instead of ~250 hours x 1,000, I can do the same in about ~100 hours x 10).

    There are some problems - it possibly over-estimates support for taxa groupings (i.e. underestimates uncertainty in the data) compared to the bootstrap method. This isn't terribly surprising given the hill-climbing approach these algorithms use, but no-one's really sure whether this is a good or bad thing (since no-one's really sure how to interpret the alternative bootstrap support).

    Fantastic software: MrBayes: Bayesian Inference of Phylogeny [ebc.uu.se]
    and BAMBE: Bayesian Analysis in Molecular Biology and Evolution [duq.edu]
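
    (Added for the curious: the tree count above is easy to sanity-check; the snippet below is plain arithmetic, nothing to do with MrBayes or BAMBE.)

      def rooted_tree_count(n_taxa):
          """Number of rooted, bifurcating trees on n_taxa leaves: (2N-3)!!"""
          count = 1
          for k in range(3, 2 * n_taxa - 2, 2):  # 3 * 5 * 7 * ... * (2N-3)
              count *= k
          return count

      print(rooted_tree_count(10))            # 34459425, i.e. the ~34 million above
      print(len(str(rooted_tree_count(77))))  # 133 digits, roughly the 10^132 figure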

  • and especially how it applied to RSS feeds, but that's not all. You could apply it to search results, friendster-type profiles, etc. Maybe that's what Google has planned with their personalized search engines...
  • For those who still bravely (foolishly) venture onto usenet, it would be nice to replace kill files with something Bayesian. There may be such a reader already but I haven't seen it (nevermind something cross-platform, which is a must for me).
    • Re:NNTP/Usenet (Score:3, Informative)

      by jpkunst ( 612360 )

      For those who still bravely (foolishly) venture onto usenet, it would be nice to replace kill files with something Bayesian. There may be such a reader already but I haven't seen it (nevermind something cross-platform, which is a must for me).

      There is one newsreader I know of which uses Bayesian filtering for articles in its latest version, but it's Mac only: MT-NewsWatcher [smfr.org].

      JP

  • MT-Newswatcher (Score:3, Informative)

    by megabulk3000 ( 305530 ) on Tuesday March 30, 2004 @01:22AM (#8711272) Homepage
    Well, the latest version of MT-Newswatcher [smfr.org] for Mac OS X utilizes Bayesian filtering to filter Spam out of newsgroup postings. Maybe not the most unusual application of things Bayesian, but a welcome one nonetheless.
  • pr0n! (Score:1, Funny)

    by Anonymous Coward
    It works great to sort pr0n! And it's much more useful than getting rid of spam too.
  • It could help for slashdot. Unfortunately, the site is only given a small portion of a machine, so the added complexity would probably cost the parent company too much.
  • Why...yes. (Score:5, Informative)

    by ByronEllis ( 22531 ) on Tuesday March 30, 2004 @01:49AM (#8711422) Journal
    First off, the spam "filters" are actually classification algorithms, not filters -- the name "filter" is used incorrectly almost exclusively by spam classification software -- and worse yet, they're really only referring to a specific classifier (the "Naive Bayes" algorithm) rather than to classifiers in general. "Bayesian" filters are things like Kalman filters, particle filters and hidden Markov models, which are used in any number of fields but are not really germane to the tasks you're asking about, I think. Searching Google for "Bayesian classification" will probably yield more fruitful results.

    It sounds like you want to extend the naive Bayes classifier to more than two categories and, in the best case, learn new categories from the data. Both can be done and have been done with varying degrees of success. You might try here [psu.edu] for some pointers to more information about how it is done (the algorithm itself has been around since the '60s -- people only think it's something new). Unfortunately, for things like RSS and email you're going to run into two problems: you really want to do your classification on-line, and your data are quite sparse (with a usually uninformative prior), so it's going to be hard to do the actual classification well. But, who knows, it's still an active topic of research.
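
    To make the multi-category, on-line case concrete, here is a toy sketch; the category names are invented and add-one smoothing stands in, weakly, for a genuinely informative prior:

      import math
      from collections import defaultdict

      counts = defaultdict(lambda: defaultdict(int))  # category -> word -> count
      totals = defaultdict(int)                       # category -> total word count

      def learn(category, text):
          """On-line update: one document at a time, new categories allowed."""
          for w in text.lower().split():
              counts[category][w] += 1
              totals[category] += 1

      def posterior(text):
          """Add-one-smoothed log score per category (uniform prior over categories)."""
          vocab = len({w for c in counts for w in counts[c]}) or 1
          words = text.lower().split()
          return {c: sum(math.log((counts[c].get(w, 0) + 1) / (totals[c] + vocab))
                         for w in words)
                  for c in counts}

      learn("linux", "kernel patch released for the scheduler")
      learn("politics", "senate vote delayed on the budget bill")
      learn("linux", "new filesystem merged into the kernel tree")
      scores = posterior("scheduler patch breaks the kernel build")
      print(max(scores, key=scores.get))  # -> 'linux'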
  • Try visiting http://www.mackmo.com/nick/blog/java/?permalink=classifier4jnntprss.txt [mackmo.com]

    "I now have Classifier4J and nntp//rss working together to do Bayesian classification of RSS feeds. There are a few things still to work out (perfomance and usability to name two), but I'm pretty pleased with it, since it was something I whipped up in a couple of hours. AFAIK it is the first Bayesian/RSS thing that has got far enough to have a screenshot..."
  • My friend [slashdot.org] has done this with his growlmurrdurr aggregator [stompstompstomp.com]. It uses SpamBayes along with a set of "this sucks", "this is yay" buttons on displayed feeds to highlight them appropriately.

    Also, I'm not certain, but I strongly suspect that Google is using some sort of Bayesian filtering as at least part of their criteria for Google News [google.com].
    • Hey, that's me!

      Yeah, I tried it. It tends to suck, actually. RSS feeds don't have quite enough information to usefully classify every article that comes up. Especially when a lot of your RSS feeds contain nothing but the title of an article.

      But you can see it kinda in action on my own aggregator [stompstompstomp.com]. The software works well, but the Bayesian classification is not too useful. I guess part of the problem is also that I actually want to read the majority of my RSS feeds.
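
      For anyone who wants to poke at the feed side of this anyway, here's a bare-bones sketch. feedparser is a real library, but the feed URL, the seed word lists and the crude set-intersection score are invented stand-ins for a trained classifier like SpamBayes; with five-word titles there just isn't much to go on:

        import feedparser  # real library; everything below it is made-up illustration

        YAY = {"kernel", "bayesian", "filtering", "compiler"}   # hypothetical "yay" words
        SUCKS = {"celebrity", "rumor", "horoscope"}             # hypothetical "sucks" words

        def entry_text(entry):
            # Many feeds carry only a title: a handful of words to classify on,
            # which is exactly the thinness problem described above.
            return (entry.get("title", "") + " " + entry.get("summary", "")).lower()

        feed = feedparser.parse("http://example.org/headlines.rss")  # placeholder URL
        for entry in feed.entries:
            words = set(entry_text(entry).split())
            score = len(words & YAY) - len(words & SUCKS)
            tag = "yay" if score > 0 else "sucks" if score < 0 else "unknown"
            print(f"[{tag}] {entry.get('title', '')} ({len(words)} words to go on)")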

  • by OnyxRaven ( 9906 ) on Tuesday March 30, 2004 @02:04AM (#8711500) Homepage
    I'm working on a senior project that could use the Bayes method to identify webpages that are 'good' or 'bad' for a proxy- or bridge-based connection filtering or bandwidth limiting application.

    Now, obviously for webpages it's a bit easier to say 'good' or 'bad', but this app (www.bandwidtharbitrator.com) already has some regular expressions for apps like Kazaa and BitTorrent, in the hopes of limiting their bandwidth. I wonder if a Bayesian system could be adapted to this domain? I considered it, but the person in charge of that part of the project is using a diff-like method (which I find silly).

    Are there easy-to-plug-into APIs and libraries we could use to do all the 'hard work'? Is SpamBayes up to the task?
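
    Whether or not SpamBayes drops in cleanly for this, the "hard work" can be prototyped without committing to a library. Below is a Graham-style combined-probability sketch; the per-token probability table is invented and would really come from a corpus of pages already labelled good/bad:

      import math
      import re

      # Hypothetical per-token "bad page" probabilities, as might be learned
      # from labelled pages.
      TOKEN_PROB = {"casino": 0.95, "warez": 0.97, "homework": 0.10,
                    "tutorial": 0.08, "free": 0.70}

      def page_badness(page_text, default=0.4, top_n=15):
          """Combine the most 'decisive' token probabilities, Plan-for-Spam style."""
          tokens = set(re.findall(r"[a-z]+", page_text.lower()))
          probs = sorted((TOKEN_PROB.get(t, default) for t in tokens),
                         key=lambda p: abs(p - 0.5), reverse=True)[:top_n]
          bad = math.prod(probs)
          good = math.prod(1 - p for p in probs)
          return bad / (bad + good)

      print(page_badness("<html>Free casino bonus, warez inside</html>"))       # ~1.0
      print(page_badness("<html>A short tutorial on physics homework</html>"))  # ~0.0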
  • oh yeah (Score:4, Funny)

    by revmoo ( 652952 ) <slashdot&meep,ws> on Tuesday March 30, 2004 @02:16AM (#8711546) Homepage Journal
    What other areas can you think of where Bayesian filtering may prove useful?

    Family discussions?
  • the paperclip (Score:4, Informative)

    by drDugan ( 219551 ) on Tuesday March 30, 2004 @03:04AM (#8711705) Homepage
    The technology developed at MS Research to get the paperclip (the animated Office help hate-attractor) to work is based on a Bayes net.

    http://www.wired.com/news/print/0,1294,43065,00.html [wired.com]
  • I have a friend at university who is using it to analyse news stories and make predictions about stock increases/decreases (Masters degree project). It seems to be working well enough that if you had followed exactly what was guessed so far you would have made money; however, I still wouldn't trust it with real money (the gains are quite small, and obviously the risk is still high). Combined with human knowledge, though, this really does look like a potentially very interesting bit of software.
  • by 216pi ( 461752 )
    I know, it's in mail, but as far as I know, Opera [opera.com]'s mail client (in the current 7.5 beta at least) uses Bayesian filtering to sort non-spam messages into your views. Opera learns where to sort mail when you drag and drop messages from one view to another, so you don't have to set up rules (you can, if you want, but you don't have to).
  • Control algorithms (Score:5, Interesting)

    by lindelof ( 606257 ) on Tuesday March 30, 2004 @05:35AM (#8712237)
    I work at the Building Physics Laboratory [lesowww.epfl.ch] in Lausanne, Switzerland, and I investigate the possible use of Bayes' theorem in the building control field. The idea is to classify situations as good or bad based on feedback from the occupants and have the system learn from its mistakes.

    Consider, for instance, the total amount of sunlight hitting your computer screen. Most people would like an automatic system to control their window blinds to keep that amount to an acceptable level, but the system cannot know a priori what that level will be for a given user. So we let the system set the blinds to a setting deemed acceptable for the average user and use the user's manual interventions to build up a list of bad settings, corresponding to the setting immediately before the intervention, and good settings, corresponding to the setting immediately after the intervention.

    The system will then attempt to minimize the probability of the user rejecting its settings by applying Bayes' theorem.

    I've done only preliminary exploration of this idea so far but the results are encouraging, and we plan to do a full-scale experiment this summer.
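
    As a rough sketch of the good/bad bookkeeping described above (not the real system): the settings, hook names and smoothed estimate below are illustrative only, a frequency count standing in for the full Bayes'-theorem treatment:

      from collections import defaultdict

      SETTINGS = [0.0, 0.25, 0.5, 0.75, 1.0]  # blind position, fraction closed (illustrative)

      rejected = defaultdict(int)  # setting -> times the user overrode it
      accepted = defaultdict(int)  # setting -> times it was left alone

      def record_intervention(before, after):
          # The setting in force just before a manual override counts as "bad",
          # the setting the user chose counts as "good" (as described above).
          rejected[before] += 1
          accepted[after] += 1

      def record_no_complaint(setting):
          accepted[setting] += 1

      def p_reject(setting):
          # Smoothed estimate of P(user rejects | setting)
          r, a = rejected[setting], accepted[setting]
          return (r + 1) / (r + a + 2)

      def best_setting():
          return min(SETTINGS, key=p_reject)

      record_intervention(before=0.0, after=0.5)  # glare: user closed the blinds halfway
      record_no_complaint(0.5)
      print(best_setting())  # -> 0.5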

  • I have a short answer. Yes.

    My students and I are building a filter for the web. We're really not ready to talk about it yet, but it is working well and we hope to get something "out there" soon (next year?).

  • We take care of the technical needs of many schools throughout the area and every one of them wants web content filtering.

    We typically set up Squid and squidGuard for them and grab blacklists from a regional database the schools put together.

    The first thing you can't help but notice is that it sucks. Even with the various schools' additions it doesn't block much of what it should and blocks quite a bit it shouldn't. All of the same problems come into play with these hardcoded blacklists that come into pla
  • Is anybody out there using Bayesian filtering for stuff other than to get rid of spam?
    Look out for most content management systems - most of them happen to make use of some form or other of Bayesian algorithm to "cleanse" the content and/or extract attributes. After all, your "filter" is nothing but a set of rules built on test/clean data, with which you compare your actual data.

    For example, how useful would Bayesian filtering be to identify news stories/blog entries in the RSS feeds I monitor?
    D
  • I am trying to set up Popfile [sourceforge.net] to sort mailing list messages into multiple buckets: very interesting, mildly interesting, worthless and so forth. I belong to several high-volume mailing lists and I've been wishing for an easier way to find what I care about without having to skim several hundred messages to find it. I am hoping the classifier will eventually pick up on which people and topics I like best.
  • This would be a great application for system logs. You think your e-mail is full of spam and worthless junk; try going through megabytes of multiple system logs a day. I know there are logwatch tools, but AFAIK they're regex based. A Bayesian approach would be great, as it would learn what I care about and what I don't. Heck, I might be able to convince work that I need to write one now. Time to Google and see if such a thing exists.
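
    A first cut might look like the sketch below; the training lines and the care/ignore labels are invented, and the scoring is a plain add-one-smoothed likelihood ratio rather than any existing logwatch replacement:

      import math
      from collections import defaultdict

      counts = {"care": defaultdict(int), "ignore": defaultdict(int)}

      def tokenize(line):
          # Keep program names, error words, device names; drop bare numbers.
          return [t for t in line.lower().replace(":", " ").split() if not t.isdigit()]

      def learn(label, line):
          for t in tokenize(line):
              counts[label][t] += 1

      def interesting(line):
          """Log-likelihood ratio of 'care' vs 'ignore' with add-one smoothing."""
          vocab = len(set(counts["care"]) | set(counts["ignore"])) or 1
          care_total = sum(counts["care"].values())
          ignore_total = sum(counts["ignore"].values())
          score = 0.0
          for t in tokenize(line):
              p_care = (counts["care"].get(t, 0) + 1) / (care_total + vocab)
              p_ignore = (counts["ignore"].get(t, 0) + 1) / (ignore_total + vocab)
              score += math.log(p_care / p_ignore)
          return score > 0

      learn("ignore", "CRON pam_unix session opened for user root")
      learn("care", "kernel: EXT3-fs error device sda1 journal abort")
      print(interesting("kernel: I/O error on device sda1"))  # -> True
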
  • When the original "plan for spam" article came out, I got excited about it and incorporated it into a suggestion tracking system I was working on. The end result was nice. In the system, the user would look at email and associate it with existing suggestions or bug reports. The system learned what words were associated with which suggestions or bugs, and would show the user a list of suggestions which might be relevant for the email he was viewing. It worked surprisingly well.
  • Kind of ... (Score:3, Interesting)

    by pen ( 7191 ) on Wednesday March 31, 2004 @01:23PM (#8726514)
    I run a submission-based web site [phrise.com] that, at times, gets a lot of duplicate (or very similar) submissions. I have a basic Bayesian script that breaks each new submission into words and flags it if it's too close to something else.
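
    A much cruder (and not really Bayesian) way to catch near-duplicates is plain word overlap; the sketch below uses a made-up Jaccard threshold rather than the per-word scoring described above:

      def words(text):
          return set(text.lower().split())

      def too_similar(new_text, existing_texts, threshold=0.6):
          """Flag a submission whose word overlap (Jaccard) with any existing one
          exceeds the threshold."""
          new_words = words(new_text)
          for old in existing_texts:
              old_words = words(old)
              overlap = len(new_words & old_words) / len(new_words | old_words)
              if overlap >= threshold:
                  return True
          return False

      existing = ["the quick brown fox jumps over the lazy dog"]
      print(too_similar("quick brown fox jumps over a lazy dog", existing))       # True
      print(too_similar("completely unrelated submission about cats", existing))  # False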

  • Findory.com [findory.com] (run by a Slashdot user) filters news based on user preferences. It stores preferences automatically using cookies and requires no registration.
