Ask Slashdot: Speeding Up Personal Anti-Spam Filters?

New submitter hmilz writes "I've been using procmail for years to filter my incoming mail, and over time a long list of spam patterns has accumulated. The good thing about the patterns is that there are practically no false positives and practically no false negatives, i.e. I see each new spam exactly once, and lose no legit mail. This works by keeping an external spam-patterns file, containing one pattern per line, and running an 'egrep -F' against it. As simple as this is, with a long pattern list it becomes rather slow and CPU-hungry. An average mail currently needs about 15 seconds to be grepped. In other words, this has become quite clumsy over time, and I would like to replace it with a more CPU- (hence energy-) efficient method. I was thinking about a small indexed database or something. What would you recommend and use if you were me? Is SQLite something to look at?"
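
For reference, a minimal Perl sketch of the kind of check described above (the file name spam-patterns.txt and the procmail wiring are assumptions): it mimics 'egrep -F -f' by treating each pattern line as a fixed string, and exits 0 on a match, grep-style, so procmail can act on the exit status.

    #!/usr/bin/perl
    # Sketch: fixed-string scan, roughly what "egrep -F -f spam-patterns.txt"
    # does per message. Exits 0 if any pattern matches (spam), 1 otherwise.
    use strict;
    use warnings;

    open my $fh, '<', 'spam-patterns.txt' or die "patterns: $!";
    chomp(my @patterns = <$fh>);

    my $mail = do { local $/; <STDIN> };    # slurp the whole message

    for my $p (@patterns) {
        next unless length $p;              # skip blank pattern lines
        exit 0 if index($mail, $p) >= 0;    # plain substring test, no regex
    }
    exit 1;

The per-message cost here is the same as egrep's; the wins discussed in the comments below come from loading and compiling the patterns once per process instead of once per message.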
  • spamassassin (Score:5, Insightful)

    by mdaitc ( 619734 ) on Friday August 30, 2013 @08:56PM (#44721401)
    have you tried spamassassin?
    • by Scutter ( 18425 )

      Latest News: 2011-06-16: SpamAssassin 3.3.2 has been released, a minor new release primarily to support perl-5.12 and later. Visit the downloads page to pick it up, and for more info.

      Last update was more than two years ago. I know you can refresh your rule sets periodically, but is the software even still maintained?

      • Re:spamassassin (Score:5, Informative)

        by dbIII ( 701233 ) on Friday August 30, 2013 @10:20PM (#44721757)
        There is still stuff going on in the dev version with an svn commit listed on August 30 2013.
        http://spamassassin.markmail.org/search/?q=#query:%20list%3Aorg.apache.spamassassin.commits+page:1+state:facets
      • Re:spamassassin (Score:5, Insightful)

        by wvmarle ( 1070040 ) on Friday August 30, 2013 @10:39PM (#44721829)

        Maybe the software is pretty much finished? In that case there's not much more to do - no new features to add, and sooner or later you'll run out of bugs to fix.

        • by Scutter ( 18425 )

          I don't think there's any such thing as "pretty much finished", especially with a piece of software involved in the arms race that is spam vs. filtering. There's only so much you can do with rules before you need to revisit your engine. Also, it's not just the software that's been stagnant for two years. The website itself hasn't been updated in as long. Not a single news item since 2011. The other respondent mentioned that dev is still active, but dev is not production. Dev is dev. Ever since Spamas

          • Re:spamassassin (Score:5, Informative)

            by bill_mcgonigle ( 4333 ) * on Friday August 30, 2013 @11:48PM (#44722053) Homepage Journal

            The rule sets are updated pretty frequently - that's where the front lines of the battle are. As others have said, the engine is pretty mature.

            The question, I guess, is what do you want spamassassin to do that can't be expressed with the current rules language?

          • by jgrahn ( 181062 )

            I don't think there's any such thing as "pretty much finished",

            There is; software designed according to "do one thing and do it well" ... for example the Unix cat(1) command is probably pretty stable by now. Same with fgrep(1).

            especially with a piece of software involved in the arms race that is spam vs. filtering.

            ... but yeah, well, I don't know Spamassassin but I suspect it has broader and more loosely-defined goals.

        • No, today's technology user has been brainwashed by mobile applications that update frequently. I have seen complaints about perfectly good software: "Has not been updated in a year whats wrong this software sux 1 star". Developers also use software updates as a sort of beta test: push it out, and if it crashes a lot of systems then update it again. Iterate as necessary. I've seen three releases in a day and five in a week using this "plan". The users don't help by considering mature (i.e. un-updated, ess
    • Re:spamassassin (Score:5, Informative)

      by wvmarle ( 1070040 ) on Friday August 30, 2013 @10:37PM (#44721823)

      Add greylisting to the mix. For me it stops approx. 90% of junk at the gate. That alone saves >90% of your server's spam workload (90% less work for the spam checker, plus a bit extra because the mail server never has to process the message at all).

      Of course I don't know about legitimate mail, but if someone is trying to send legitimate mail through a spam-type minimised mail server that doesn't retry, that's their problem...

      • One client in 4 years of greylisting has had that problem, for something like 40,000 unique senders per month. I like those numbers.
      • Just switch over to Email Certification.

        Long story short, everyone who wants to send Certified mail has to be 'certified' by their ISP. (UN-certified mail would still be possible, if you wish.) Getting certified is nothing more than providing enough information to positively identify you, and costs a nominal fee. In return, you create a public/private key pair, and give the public one to the certifier. The private key goes into your email server, which adds some headers to each outgoing email. One of these is

    • On the mail servers I maintain, I employ SpamAssassin only as a last resort because it is resource-intensive. Submitter hmilz's approach is not only resource-intensive, but also labor-intensive, so I would never recommend it.

      I've used Exim for my MTA since 2001 and my main defense against spam has always been to filter it out before SpamAssassin comes into play based on analysis of header information and checking against DNS black lists. Actually, the first thing I do is look for obvious fakes from a limite

    • have you tried spamassassin?

      Indeed. I just looked at logs on a server that acts as an incoming mail filter for a small company. The range of times for spamassassin (spamd) to filter the incoming emails was about 1 to 7 seconds, with most being in the range of 2-4 seconds. This is without bypassing spamd for large emails (spam can be relied upon to be small).

    • The canonical spam solution checklist. [craphound.com]

      I'm going with: "Specifically, your plan fails to account for: (x) Users of email will not put up with it."

      • Thank you for posting that checklist, that's a vital document for any spam planning.

        SpamAssassin, executed through procmail on the mail client's email, is indeed resource intensive and does not scale well for an organization. Other people have mentioned other upstream filtering techniques, such as greylisting and DNS blacklists, but those are limited because of the large numbers of zombied Windows clients around the world, which have their resources rented as botnets to send spam from legitimate environmen

  • by russotto ( 537200 ) on Friday August 30, 2013 @09:00PM (#44721419) Journal
    Write something that uses a regular expression library (RE2 would be ideal, if your expressions are actually regular), and keeps the compiled patterns resident. Most of your time is likely spent parsing the patterns.
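
    A minimal sketch of the "keep the compiled patterns resident" idea, in Perl rather than RE2 (the file name is an assumption): quote each fixed string, join them into one alternation, and compile it exactly once for the lifetime of the process.

        use strict;
        use warnings;

        open my $fh, '<', 'spam-patterns.txt' or die $!;
        chomp(my @patterns = <$fh>);

        # quotemeta keeps fixed strings fixed; qr// compiles once, up front
        my $alt = join '|', map { quotemeta } @patterns;
        my $re  = qr/$alt/;

        sub is_spam {
            my ($mail) = @_;
            return $mail =~ $re;    # reuses the compiled pattern every call
        }

    As of Perl 5.10, an alternation of literal strings is compiled into a trie, so a single pass over the message effectively checks all patterns at once.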
    • by PetiePooo ( 606423 ) on Friday August 30, 2013 @09:30PM (#44721541)

      ...Most of your time is likely spent parsing the patterns.

      I second that. And as your rules have built up, there are likely some that have never been used beyond when they were first put in. I'd instrument your next solution to identify outliers and cull them over time so your parser doesn't have to work so hard.

    • by grcumb ( 781340 )

      Write something that uses a regular expression library (RE2 would be ideal, if your expressions are actually regular), and keeps the compiled patterns resident. Most of your time is likely spent parsing the patterns.

      I'm probably going to get shat on by kids who don't know any better, but....

      Use Perl. If a complex set of regular expressions is taking 15 seconds per email, then there's clearly something wrong with the implementation. I suspect you're doing too much backtracking [regular-expressions.info]. I've been guilty of the same in the past. In one case, simply anchoring my regular expressions to the start and end of the string reduced running time literally by two orders of magnitude. Just glom the whole message into a string and go nuts.

      And
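
      As an illustration of the anchoring point (pattern and message are made up; engine optimizations aside, a failing unanchored pattern is retried at successive offsets):

          my $subject = "Subject: cheap watches, limited offer";

          # Unanchored: on failure, the engine can retry the attempt at
          # every offset in the string before giving up.
          my $hit1 = $subject =~ /cheap\s+\w+\s+inside/;

          # Anchored with \A: one attempt at the start, then done.
          my $hit2 = $subject =~ /\ASubject: cheap\s+\w+\s+inside/;

      The same idea applies at the end of the string with \z, and the win compounds when the pattern contains quantifiers that can backtrack.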

    • Write something that uses a regular expression library (RE2 would be ideal, if your expressions are actually regular), and keeps the compiled patterns resident. Most of your time is likely spent parsing the patterns.

      Yes, but a more resource-friendly set of tools might begin with the OP's procmail to move the mail onto a local machine quickly. Filters inside of procmail are hobbled. Do this as one message per file (http://unix.stackexchange.com/questions/62563/savings-emails-as-individual-files-using-procmail). Procmail locks and gates are OS dependent but still slow.

      Next test each message with one or more simple "grep expressions" that then pass it or gate it to more complex expressions. On a multi core machine with a

    • Here are several things you can do to make this faster.
      1) First, don't keep invoking egrep: each invocation has to parse the command line and reload the egrep binary itself. Instead, do this from within an already-loaded program. Perl is a very good choice for this.
      2) Perl can pre-compile the regular expressions, so you can leave the Perl program running as a process and simply feed it new data to analyse.
      3) Given you are searching for words, you probably want to split the incoming stream on white
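
      A sketch of point 3, assuming many of the patterns are single words (the file name is made up): hold them in a hash built once, and test each token of the message with a constant-time lookup instead of a regex pass per pattern.

          use strict;
          use warnings;

          open my $fh, '<', 'spam-words.txt' or die $!;
          chomp(my @words = <$fh>);
          my %spam_word = map { lc($_) => 1 } @words;    # built once

          sub looks_spammy {
              my ($mail) = @_;
              for my $token (split /\s+/, lc $mail) {
                  return 1 if $spam_word{$token};    # O(1) hash lookup
              }
              return 0;
          }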

  • Database? (Score:3, Insightful)

    by K. S. Kyosuke ( 729550 ) on Friday August 30, 2013 @09:01PM (#44721421)
    What would the database achieve? I'm not sure what is the exact nature of the patterns (an example would really help here), but perhaps writing a compiler from the patterns into some decision procedure in something reasonably efficient yet featuring quick start, such as SBCL or Gambit, could help.
  • bogofilter (Score:5, Informative)

    by jon787 ( 512497 ) on Friday August 30, 2013 @09:08PM (#44721441) Homepage Journal

    http://bogofilter.sourceforge.net/ [sourceforge.net]

    I haven't timed it to see how well it's been doing in the 6 years I've had it, though.

    • http://bogofilter.sourceforge.net/

      Seconded. Procmail + bogofilter + spam.mbox = no problem.
      I keep - and periodically review - a "spam" mbox for the rare false positive.

      I haven't timed it to see how well it's been doing in the 6 years I've had it, though.

      It's written in C, so it's very likely much faster and leaner than Spamassassin.

  • by Anonymous Coward

    Sorry, couldn't resist the pun.

    Your problem (besides not using existing Bayesian tools...) is that every single egrep is a fork. As others have pointed out, you should rewrite your script in something like Python and use the native regex libraries. Even if you have to read and 'compile' the regex list every time, you're saving a *massive* amount of OS-level overhead.
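
    In that spirit, a sketch of a long-lived filter daemon (Perl here, for consistency with the other sketches; the socket path and file name are assumptions). The patterns are compiled once for the life of the process, and each message costs only one connection plus one match:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use IO::Socket::UNIX;
        use Socket qw(SOCK_STREAM);

        open my $fh, '<', 'spam-patterns.txt' or die $!;
        chomp(my @patterns = <$fh>);
        my $alt = join '|', map { quotemeta } @patterns;
        my $re  = qr/$alt/;                 # compiled once, reused forever

        my $path = '/tmp/spamcheck.sock';   # assumed location
        unlink $path;
        my $server = IO::Socket::UNIX->new(
            Local  => $path,
            Type   => SOCK_STREAM,
            Listen => 5,
        ) or die $!;

        while (my $client = $server->accept) {
            my $mail = do { local $/; <$client> };    # one message per connection
            print $client ($mail =~ $re ? "SPAM\n" : "OK\n");
            close $client;
        }

    This is essentially what spamd does for SpamAssassin, minus everything except the pattern match.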

  • It seems you could easily distribute the load on multiple machines, each doing a subset of the regex.
  • ragel (Score:2, Interesting)

    by Anonymous Coward

    Try compiling your patterns using Ragel: http://www.complang.org/ragel/

    Union them all together and you'll see orders of magnitude improvement in performance (e.g. 10x - 100x) over other regular expression engines, although GNU grep is using Aho–Corasick with the -F switch, so you're likely to see less of an improvement.

    Many people use re2c, but it has nowhere near the performance or capabilities of Ragel. Ragel has a steep learning curve, but it's well worth the effort to master. It's well maintained,

  • by Jmc23 ( 2353706 )
    http://www.gigamonkeys.com/book/practical-a-spam-filter.html [gigamonkeys.com] has the nuts and bolts. CL-PPCRE does perl regex matching faster than perl.
  • I've heard, but never timed it myself, that perl is faster for regexp-type stuff than even the specialized tools, just from the massive amount of optimization it has accrued over the years; here [perlmonks.org] is a completely unbiased source. Use a perl or python script, and consider using Storable (perl) or pickle (python) to serialize the data structure, I guess, but just having the whole list in memory will help.

    According to this [uidaho.edu], perl regexps are (unsurprisingly) a superset of egrep's.

    I don't see how introducing SQL c
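
    For what it's worth, a sketch of the Storable idea (file names are assumptions). For a plain list of strings the load-time win is modest; it matters more once you cache something expensive to rebuild:

        use strict;
        use warnings;
        use Storable qw(store retrieve);

        my ($src, $cache) = ('spam-patterns.txt', 'spam-patterns.stor');
        my $patterns;
        if (-e $cache && -M $cache < -M $src) {
            $patterns = retrieve($cache);    # cache is newer: fast binary load
        } else {
            open my $fh, '<', $src or die $!;
            chomp(my @p = <$fh>);
            $patterns = \@p;
            store($patterns, $cache);        # rebuild the cache
        }

    Note that older Storable versions cannot serialize compiled qr// objects, so what gets cached here is the raw string list.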

    • by Jmc23 ( 2353706 )
      regexps in cl-ppcre are faster than perl.
      • doing anything but repeated egreps is probably fast enough. he should do whatever is easiest, which probably isn't lisp.

        • by Jmc23 ( 2353706 )
          Don't bring your prejudices into this!

          It doesn't get much easier than someone not only handing you the code but also holding your hand and walking through every single function. Unless you want to use a magic black box and where's the fun in that?

  • by careysb ( 566113 ) on Friday August 30, 2013 @09:36PM (#44721575)
    Many years ago I worked with a Unix development tool called LEX that could handle matching multiple patterns simultaneously. Perhaps there is an updated tool that would do the same thing. Java has a 3rd party library called ANTLR that might do the trick. It would involve re-compiling every time a new pattern is added, but it should be extremely fast.
  • by swillden ( 191260 ) <shawn-ds@willden.org> on Friday August 30, 2013 @09:36PM (#44721577) Journal

    Sqlite, or anything that uses an index, will be screaming fast.

    Your statement of your current solution makes me wonder, though... are you using "egrep -F -f pattern_file e_mail_message"? Or are you running egrep many times, once per line of the pattern file, or once per line of the message? I would think that given a pattern file egrep would be smart enough to do something better than repeatedly scanning the input, but based on the time it's taking, it sounds like that's happening.

    • I doubt that he is using "grep -F -f ...", because fgrep can search for a hundred thousand patterns in a megabyte of data in under a second even on a modest machine (and most of the time is building up the regex state machine). I suspect he is using "egrep -f", and lots of patterns with wildcards. Worse, he will be running it once on each email, which means rebuilding the regex state machine each time.

    • by jon3k ( 691256 )
      Can someone explain what the big O notation would be for this? I'm still trying to wrap my brain around big O notation.

      I'd think it would be O(n^2) but that can't be right because it's two different sets of data (not N raised to itself). So is there even a: O(N^X)?

      I'm assuming that for each mail message (outer loop) each RegEx is processed, which might be an incorrect assumption.
      • O(NM).

        If the pattern file has N lines and the e-mail has M lines, and if we count comparing one line of the pattern file against one line of the e-mail as one operation, then for each of the N lines of the file, we have to do M comparisons.

        There are better algorithms than this obvious one, though, and it would surprise me if egrep didn't use one of them when given the whole list of patterns at once.
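
        To make that concrete with made-up numbers: N = 1,000 patterns against an M = 100-line message is 100,000 line comparisons under the naive scheme, repeated for every message. An Aho-Corasick automaton, which GNU grep builds for -F when handed all the patterns at once, scans the message in a single pass, so the per-message cost drops to roughly the length of the message, plus a one-time cost proportional to the total length of the patterns to build the automaton.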

  • ... I just gave up on email. Even w/o spam it's more hassle than I like.

  • Just route everything from Facebook, LinkedIn, my dad, Apple and "i*" to the spam folder, and most of it is covered.
  • Problem spotted. (Score:5, Insightful)

    by girlintraining ( 1395911 ) on Friday August 30, 2013 @10:15PM (#44721735)

    The problem is that you're using egrep in the first place. Here's the thing -- the overwhelming majority of your cycles are being spent loading, initializing, executing, and then unloading that process. It's not that using regular expressions is processor-intensive... it's that repeatedly launching the same executable is.

    Use something that can load once, read in the patterns, check all the e-mails that are queued, sort them, then exit. Your execution time will go from 15 seconds to 150 milliseconds.

    • Re:Problem spotted. (Score:4, Interesting)

      by complete loony ( 663508 ) <Jeremy.Lakeman@g ... .com minus punct> on Friday August 30, 2013 @10:31PM (#44721793)
      If you have sufficient programming experience, I'd recommend basing this solution on redgrep [google.com]. It's an LLVM-based expression compiler that should be able to combine multiple expressions into a single machine-code state machine, assuming it doesn't run out of memory in the process. With a bit of effort you could output all of your compiled expressions into a single executable, so you'll only need to wait for the compilation time when you add more filters.
    • by arth1 ( 260657 )

      You mean like doing an egrep -F instead of multiple egreps? I sure hope he already does.

  • by Arrogant-Bastard ( 141720 ) on Friday August 30, 2013 @10:33PM (#44721801)
    If spam has made it far enough that it's actually reached your personal instance of procmail, then there's been a problem earlier in the chain. Procmail rulesets should be a last resort, and they should only be asked to deal with minor issues that aren't dealt with via earlier rulesets.

    The first line of defense is your perimeter routers. They should implement BCP 38, they should block bogons, and they should bidirectionally deny all traffic to/from the Spamhaus DROP list. In addition, they should block inbound port 25 traffic from everywhere on the planet that you don't need email from. In other words: the fact that someone in country X wants to email you is unimportant unless you actually wish to receive mail from them. Yes, this is a reversal of default-permit, for a simple reason: default-permit for SMTP stopped being reasonable around 2000. Use http://www.ipdeny.com/ [ipdeny.com] to pick up the ranges per-country and only permit what you need. (Obviously a major research university can't do this. But Joe's Furniture, which does not have customers in Peru or Pakistan or Greece, can.)

    Then use blacklists, the best defense against spam we've ever developed. (Source: 30+ years of email experience) Spamhaus's Zen blacklist is a good one with a low FP rate and a tolerable FN rate. Augment these with local blacklists based on domains and network allocations. Augment those with as much blocking of generic hostnames and dynamic IP space as possible: real mail servers have real hostnames and are on static addresses.

    Then enforce RFC requirements: sending host must have rDNS, that PTR must resolve, what it resolves to should be the sending host's IP. Sending host must HELO as FQDN or bracketed dotted-quad; if FQDN, must resolve. Sending host must not send traffic pre-greeting. And so on. Enforcing these DOES mean occasionally you block mail sent by non-spamming entities: but since they are incompetent non-spamming entities, why would you want mail from them?

    Add greylisting. It'll handle a lot of annoying hosts that haven't learned to retry yet.

    Rate-limit based on normative values for your site. For example: if analysis of a year's worth of mail logs shows that during that time you never received more than 10 messages a day from ANY host, then rate-limit at 30 or 40. You'll never hit in normal practice; but if you get hammered by a fast-sending host, you'll blunt the attack. Note that these don't have to be perfect to work: provided you send deferrals (SMTP response codes 4xx) instead of refusals (5xx) the worst that happens is that you will mistakenly impose a delay.

    There's more -- it's possible to get quite crafty about this. But note that NONE of these measures pay any attention to content. There's a reason for that: spammers can defeat content-based measures at will. They won't have it so easy with these.

    Deployed in production in various setups ranging from a dozen to eight million users, these steps yield a FP rate of about 10^-6 to 10^-7 and a FN rate around 10^-5 to 10^-6. Tuning helps, of course: initial rates can be higher, but log analysis (which all sensible postmasters do) readily brings them down. If you have the luxury of running your own mail server just for yourself, then you can REALLY tune this setup: you should be able to get the FN rate down to 10^-7 after a few months.
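
    For anyone wanting to try a subset of this, a hedged sketch of how some of these checks map onto Postfix's main.cf (the parent doesn't name an MTA; Postfix is my assumption, and the parameter names are from its documentation):

        # rDNS, HELO, and DNSBL checks -- a fragment, not a complete config
        smtpd_helo_required = yes
        smtpd_helo_restrictions =
            reject_non_fqdn_helo_hostname,
            reject_invalid_helo_hostname
        smtpd_client_restrictions =
            reject_unknown_reverse_client_hostname,
            reject_rbl_client zen.spamhaus.org

    Greylisting and rate-limiting need add-ons or extra settings (postgrey, and Postfix's anvil limits, respectively) and aren't shown here.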
    • That's a very informative post, but the first part is making a big assumption that someone has that level of control over the network. Much of what you say is exactly the type of filtering that is applied by spamassassin and other various tools at the end of the chain. Many of us don't have the option to work higher up the chain.

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Friday August 30, 2013 @11:09PM (#44721937) Homepage

    i'd be interested to see what happens if you run those regexes through this:
            http://bisqwit.iki.fi/source/regexopt.html [bisqwit.iki.fi]

    btw can we please get a copy of the patterns you're using? i think they might prove useful for other people. also i'd like to test them myself against regexopt.

    oh - to the other person who suggested spamassassin? i tried that, i set it up to run at MTA-time. it often took THIRTY SECONDS to process a message. in fact it was so bad that i was forced to set a limit of 100k on incoming messages, as a lot of virus-ridden word documents (etc) were typically over 100k. that cut down the amount of CPU cycles but it was still far far too much memory and far too CPU intensive.

    the one thing that did work well is greylisting, however the problem with greylisting i find is that if you happen not to be at the computer or have direct access to the server and people on the phone say "i'm sending you a message now, have you got it?" you *know* it's going to be at least an hour before it'll arrive. so, unless you can whitelist them in advance (which you can't always do) greylisting does actually interfere with legitimate business.

    anyway: in the end i gave up and went to gmail, but with gmail fucking up how they're doing things i have to revisit this and set up a mail server again. thus we come full circle...

    • oh - to the other person who suggested spamassassin? i tried that, i set it up to run at MTA-time. it often took THIRTY SECONDS to process a message. in fact it was so bad that i was forced to set a limit of 100k on incoming messages, as a lot of virus-ridden word documents (etc) were typically over 100k.

      I'm sorry to say it but you must be doing something wrong. I have a very default installation of spamassassin and sendmail also running at MTA time and on my really crappy old spare parts server it never takes more than a second or two to process a mail item. This also does not appear to vary depending on email size, a 10MB email seems to take just as long as a plain text one. I don't think out of the box spam-assassin checks attachments or any type of external content.

      • by lkcl ( 517947 )

        thanks thegarbz - i didn't mention that i added in pyzor and razor, and i think clamav as well. also as my domain's been up for a while it does receive a considerable amount of spam. the load just got to be too much. i'll investigate alternatives and also bear in mind that spamassassin worked well for you.

  • by Forever Wondering ( 2506940 ) on Saturday August 31, 2013 @01:13AM (#44722301)

    A long time ago I benchmarked perl's regex engine against about 5 others. At the time, it was 10x faster than the nearest competitor for the same regex/data.

    Also, you can use perl's "study". Or, split the regexes across threads.

    Also, with perl you can do some hierarchical savings. For example:

        if (/Ffoo/) { ... }
        if (/Fbar/) { ... }
        if (/Fbaz/) { ... }

    Could be redone as:

        if (/F/) {
            if (/Ffoo/) { ... }
            if (/Fbar/) { ... }
            if (/Fbaz/) { ... }
        }

    The above is a trivial example, but you get the idea: the cheap /F/ check prunes the three more specific tests whenever it can't possibly match.

    Also, how much time is spent compiling (vs. executing) the regexes in egrep? I imagine a lot and you have to do this for each incoming message.

    Note that spamassassin (and hence perl) can be set up as a daemon where the regexes are compiled once. The messages are passed through a socket to the daemon. This means that the only CPU time spent is on executing the regexes--a considerable savings.

    Additionally, perl regexes have [considerably] more functionality/utility than egrep ones. You might be able to recode/consolidate yours and get the same [or better] bang for less buck.

  • I don't even use spam blockers. Instead I've purchased a domain, which is quite affordable nowadays. I have a catch-all redirect, so I receive any mail addressed to *@mydomain.com.

    Then, I give a unique username to each organisation. e.g. slashdot@mydomain.com. If I receive spam at this address, I inform them, then kill the username. I can also just create slashdot2@mydomain.com if I want to keep dealing with their company.

    Now, I receive only a few spam emails each year, so I need to do zero automated filtering. I a

    • by jon3k ( 691256 )
      You can also do this via gmail. Gmail will accept email addressed to +@gmail.com and deliver it to you. Try it out.

      So anytime you sign up for something, just use: postglock+slashdot@gmail.com. Then if you get spam, just look at the "To:" address, you can even write a filter based on the + sign in the "To:" field, if you wanted.
      • by jon3k ( 691256 )
        sorry, replying to myself. First line should have been: any email delivered to name+any_string@gmail.com
        • I did hear about this, but I hadn't thought about writing a filter after receiving spam. That's a cool idea.

          The only part that makes me slightly wary is that since so many use gmail, you'd think that spammers would automatically remove the +slashdot part pretty soon.

          • by jon3k ( 691256 )
            Entirely possible - but here's something cool. I have Google for Your Domain setup for a personal domain. I just tested it, and I was able to send an email to: jon+test@[mydomain].com. Now there's no way for a spammer to know if Google is handling my mail (easily) so they'd have to assume that the + was a legitimate character. I mean, in theory, they could lookup the MX records and if they point to google, strip the +[characters up to]@ off, but I seriously doubt many, if any at all, would do this.
  • A project I worked on many years ago re-wrote a monitoring system in Java.
    It was Perl, running a rather large list of regexes over syslog files.

    The process of converting it to Java resulted in a 100x speed up - despite Perl possibly having a faster regex implementation. The regular expressions are compiled once on start-up. Regular expressions can be very fast - they're just slow to parse and compile.

  • There is a comparison of blacklists: http://dnsbl.inps.de/analyse.cgi?type=monthly&lang=en [dnsbl.inps.de]
  • You get so much mail so furiously that you can't suffer a 15 second delay? I presume you're talking about a personal mail server... if you're hosting mail for a 1000 people then yeah that's a problem.
    • by jon3k ( 691256 )
      If it's running for 15 seconds maybe it's just putting an annoyingly high load on the server. Also consider that for every legitimate mail, you could be getting a lot of spam. I know I would be annoyed if my CPU load shot up randomly every 5 or 10 minutes when a piece of spam came in.
    • I get around 500 emails per day to my mail server of which maybe one or two are legitimate. A 15 second delay means a maximum theoretical capacity of 5760 emails per day before emails arrive at the server faster than the spam filter can process them. Even lower overall numbers will cause substantial bottlenecks at busy times of the day.

  • Just forward your mail through gmail. That way all the spam disappears and the NSA can get their data without trouble.
  • We see people complaining about this problem a lot, and yet for some reason they are afraid to actually put energy into a real solution. Repeat after me: filters can never end spam. That's right, never. All your filters do (and the same can be said for every filter, everywhere) is encourage the spammers to make their spam more obfuscated to improve their odds of passing future filters. It is a huge waste of time and resources, and it's an arms race that the spammers will win.

    If you want to actually end spam,
