Infrastructure for One Million Email Accounts?
cfsmp3 asks: "I have been asked to define the infrastructure for the email system of a huge company which, fed up with Exchange, wants to replace its entire system with something non-Microsoft. I have done this before, but not at anything like this scale. Suppose you are given a chance to build from scratch an email system that has to support around one million accounts. Some corporate, some personal, some free. POP, IMAP, webmail, etc. are requirements. The system must scale perfectly, and 99.9% uptime is expected... where would you start?"
Obviously (Score:5, Funny)
Re:Obviously (Score:5, Funny)
Upon which the global "wankfest" will commence, leading to suggestions ranging from Novell to qmail-based solutions, upon which the OP will look to someone else for advice, upon which the OP will end up paying an IBM consultant [huhcorp.com] to set up his company's email.
Re:Obviously (Score:5, Funny)
At which point the highly paid consultant will post a question to Ask Slashdot...
Re:Obviously (Score:4, Insightful)
Re:Obviously (Score:5, Interesting)
Just so you know. Most of us out in South East Asia refer to NMCI (Navy-Marine Corps Intranet) as the Not Mission Capable Intranet.
Re:Obviously (Score:5, Interesting)
The Navy may want to take a page out of Walmart's book, if they're having that much trouble.
Re:Obviously (Score:5, Interesting)
Walmart invited countless consulting firms and data backup experts. They deployed Exchange strictly because M$ was willing to "support" them. To say they were vulnerable to a major IT disaster would be an understatement. The Navy wants nothing to do with Walmart's IT.
Re:Obviously (Score:4, Insightful)
No, they cannot. Microsoft does not want you backing up mailboxes. You back up mailstores, which hold several (hundred, or however many will fit on a single disk partition) mailboxes. This works great for disaster recovery: you restore the failed disk.
It is worthless for a single user who just deleted some important message. You end up building a new Exchange server, restoring the entire mailstore, then going into that box and grabbing the one message. Veritas (I presume Legato as well) has an option to go in and grab each message from the mailbox one at a time. However, this is slow, about 1/5th the speed of a normal backup.
I work for a company that competes with Veritas and Legato (though we try for much smaller accounts; big enterprises need things we don't provide). We do Exchange backup, and are pretty sure that Veritas is doing it exactly like us. I strongly doubt anyone can scale mailbox-level backup to millions of users.
Re:Obviously (Score:5, Funny)
Thanks, another reason to never shop there.
NMCI Blows (Score:4, Informative)
When it works at all it's slow. Sometimes you can hit the Send button and just sit there and wait a while.
When we have to work on a Navy project we have to bring our own equipment and hubs. Even their developer machines come loaded with 10-year-old software, and you can't get your email and be logged in as a developer at the same time. To check mail you have to log out, log back in under a different account, then log back in as a developer. The NMCI machines are boat anchors.
NMCI is the worst defeat the US Navy has ever suffered.
Re:Obviously (Score:5, Funny)
Re:Obviously (Score:5, Funny)
He said "up".... beat yourself *up*
Re:Qmail!! (Score:5, Insightful)
Insert "imagine a beowolf of those" joke here, except it isn't a joke.
I think you might be underestimating the requirements for this large a project that "must scale perfectly". The "99.9% uptime is expected" requirement alone requires multiple internet connections, a large cluster of front-end servers, and redundant database servers, preferably located in different states. (i.e., "What do you mean our only server is in New Orleans?")
I don't think the average Dell dual Xeon box is up to the task for this large a project...
Re:Qmail!! (Score:5, Informative)
365 days * 24 hrs/day = 8760 hours per year
0.1% downtime = 0.001 as a fraction
8760 * 0.001 = 8.76 hrs
You're off by two orders of magnitude.
8.76 hrs / 12 months = 0.73 hrs/month = 43.8 minutes/month
One 45-minute scheduled downtime (assuming it's scheduled) per month isn't terrible. It's not great, but costs really start to go up as you add nines beyond those three.
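The arithmetic above generalizes to any number of nines; a quick sketch:

```python
# Downtime budget per month for a given availability target.
HOURS_PER_YEAR = 365 * 24  # 8760, as above

def downtime_minutes_per_month(availability):
    """Allowed downtime in minutes per month at the given availability fraction."""
    downtime_hours_per_year = HOURS_PER_YEAR * (1 - availability)
    return downtime_hours_per_year * 60 / 12

for label, availability in [("two nines", 0.99), ("three nines", 0.999), ("four nines", 0.9999)]:
    print(f"{label}: {downtime_minutes_per_month(availability):.1f} min/month")
```

Three nines works out to the 43.8 minutes/month above; every extra nine divides the budget by ten, which is why cost climbs so fast past three.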
Re:Qmail!! (Score:5, Funny)
Re:Obviously (Score:5, Funny)
Ah, a proof by contradiction, eh?
Re:Obviously (Score:4, Interesting)
Re:Obviously (Score:5, Insightful)
I work with Exchange, and I think the chances are better that they just had shitty architecture to begin with. Exchange is a great platform and scales well, so if the original people wouldn't do it right, well then f*ck em.
Still convinced you should migrate? Well, something with multiple datacenters, large scale, a compressed SAN backend, and a lot of clustering will do it. Shit, you could do the entire thing with MySQL if you REALLY wanted to. Moving the existing data over will be a huge pain no matter what you migrate to, though.
My suggestion? Don't just jump off Exchange; do a proper requirements analysis and you might find it is a lot cheaper to just redesign the existing architecture.
Re:Obviously (Score:5, Informative)
Your point about putting more effort up-front into design is well taken, but that advice applies to any platform...
With that said, and without turning this thread into an Exchange bitchfest...
Why in the hell can't you restore a mailbox from backup using only the tools you already have if the user is no longer present in Active Directory? You can't even export the mailbox with EXMERGE... Your choices are 1) 3rd party recovery tool (like Quest Recovery for Exchange) or 2) Build an ENTIRE OTHER SERVER and do a normal, full restore of the entire mail store so you can extract one measly mailbox.
Obviously, the "Recovery Storage Group" feature is a VAST improvement over the old Exchange 5.5 way of bringing back just one mailbox (that being to set up another server), but this is a MAJOR duh situation on Microsoft's part. They seem to think that since their "best practice" is to never ever erase any user account ever ever ever, it's okay to leave this gaping flaw in their enterprise groupware product. Sorry, but I think that sucks. We paid out the ass for "Enterprise" edition (to avoid the arbitrary 16GB limit on the mail store) and goddammit, I should be able to bring back a mailbox without its corresponding AD account without wasting a whole day setting up another server... I've only had to do it once (today), but the whole time I was thinking how much easier a mailbox restore on my OS X Server at home would be... Just restore the frickin' files and move on with your life.
Re:Obviously (Score:4, Informative)
Re:Obviously (Score:5, Interesting)
I am so tired of people shoving everything into relational databases. What queries are you going to run against your database, anyway? SELECT * FROM messages WHERE read=0? Try "ls new" in your maildir. The reason things never scale right is because people design things to be "new" and "cool" like putting their e-mail into a relational database. No. Just use the filesystem. It, and its supporting tools, have been around for 30 years! It Just Works! It doesn't use any userspace memory! There are no permissions issues, because the kernel controls the permissions. It's the optimal solution.
The filesystem is really really efficient (for e-mail) and really really reliable.
Please, don't use a database!
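The parent's "ls new" point can be made concrete. In the Maildir layout, unseen messages live in the new/ subdirectory, so the "unread query" is just a directory listing. A minimal sketch, assuming a standard Maildir on disk:

```python
import os

def unread_count(maildir_path):
    """Count unseen messages: in Maildir, delivery drops one file per message
    into new/, and the mail reader moves them to cur/ once seen.
    No database, no daemon, no userspace cache to manage."""
    return len(os.listdir(os.path.join(maildir_path, "new")))
```

The kernel's directory cache makes repeated calls cheap, which is the whole argument: the filesystem already is the index.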
Re:Obviously (Score:5, Insightful)
The reason Exchange uses a database can be summed up in three words: Single Instance Store.
Say you send one 1MB Word document to 100 of your colleagues. In a relational database-based, Single Instance Store-driven mail server, that document takes up exactly 1MB on the server. If somebody in the organization forwards the Word doc to the remaining 900 people in your organization, how much space does it take on the server? 1MB.
Send a 1MB document to 1000 users on a flat, mbox-style mail server, and how much space is taken up on the server? 1000MB.
I see your point about some things, sure. Being able to jump in and restore a mailbox from tape by just dumping a folder somewhere is nice, but it just doesn't scale in terms of storage the way a db-driven mail system does.
Don't flame me as an MS advocate. There are times when an SIS-based email system is good, and there are times when a flat email system is good. I've run Exchange environments for 500+ people, and I've run Linux-based mail systems for 1000+ people. I'm just saying that your particular argument is one-sided and flawed.
Re:Obviously (Score:5, Insightful)
Re:Obviously (Score:5, Informative)
Re:Obviously (Score:4, Insightful)
Say you send one 1MB Word document to 100 of your colleagues. In a relational database-based, Single Instance Store-driven mail server, that document takes up exactly 1MB on the server. If somebody in the organization forwards the Word doc to the remaining 900 people in your organization, how much space does it take on the server? 1MB. Send a 1MB document to 1000 users on a flat, mbox-style mail server, and how much space is taken up on the server? 1000MB.
Speaking of which, is there any filesystem around that "automagically" detects redundancy and avoids storing the same data twice (i.e. two files with the same content end up being stored only once)? (I don't mean hardlinks. Suppose I download some file for the second time without knowing the first instance exists). I suspect this would add a lot of overhead to the filesystem driver, but it'd certainly be a cool feature.
Re:Obviously (Score:5, Interesting)
You are wrong in every way. (Score:5, Insightful)
Second, people with ridiculously frequent mail check times are not any more of a problem. Modern operating systems use file system caches. You do not have to touch the disk subsystem in any way; frequently accessed data will be in RAM.
And finally, a database has a lot of extra overhead, and there are a lot of deletes going on. Sure, such a select statement would work, but reading the files in one directory is an order of magnitude faster. And the deletes will really hammer your database. FFS+softupdates makes file deletion extremely fast. A relational database is not the answer for everything; stop trying to pretend it is. Use the right tool for the job, and for storing files, a filesystem is the right tool. It's not relational data, and it doesn't need to be queried in arbitrary, complex ways, so it doesn't belong in a relational database.
Re:Obviously (Score:5, Funny)
Where to start (seriously) (Score:5, Informative)
Once you have that done, you can start looking at solutions. You will have two parts to your solution:
1) The DMZ email relays (possibly including other antispam/antivirus functions) You really need high availability here.
2) Your email storage and retrieval systems. These may be a little more tolerant to downtime on an individual basis. But if you need to have redundancy here, there are ways to do it.
I think Hotmail did fine with BSD and Qmail.* I am sure Postfix is equally capable.
* Although Qmail itself has never had a security vulnerability discovered, you should be careful. TCPRules (on which qmail relies) has a vulnerability that can lead to root access for local users. This is not a problem on systems with no local users, however. I am not aware of any patch for the TCPRules vulnerability.
Re:Obviously (Score:5, Funny)
Easy. (Score:5, Funny)
New Google Appliance (Score:3, Interesting)
It really is the best email.
I'd start by (Score:4, Funny)
Re:I'd start by (Score:5, Funny)
Um... (Score:4, Informative)
Re:Um... (Score:3, Funny)
For starters... (Score:3, Interesting)
POP? (Score:5, Funny)
Re:POP? (Score:5, Funny)
Re:POP? (Score:4, Funny)
Exactly!
Remember, redundancy is good!
Re:POP? (Score:5, Funny)
Re:POP? (Score:5, Funny)
Re:POP? (Score:5, Interesting)
Re:POP? (Score:4, Insightful)
Having never been near a computer, I have no idea. If I had to guess, I'd suppose that with a million users, 100,000 of them will have to be constantly reminded to delete their mail off the servers. 25,000 of them won't EVER delete their mail no matter what you do, and 5,000 will bitch and whine when you cap their fucking mailboxes. One of them will be the CEO, and he'll berate you in front of his smarmy suspender-wearing jerkoff golf buddies because you're a dumb hick that can't fit a terabyte of mp3s and porn (most of it redundant for chrissakes) into only 500 gigs of disk. You will also get to deal with countless issues involving different email clients. You would give almost anything to have a massive natural disaster wipe everything out so you didn't have to go to work tomorrow, but there's the wife and kids, so y'know, there it is.
Backups (Score:5, Informative)
With POP3, the client downloads mail and deletes it off the server. Without a significantly butchered POP3 server there's no way to hold copies of that mail for a period of time (say, to ensure it goes on to your archival tapes, or to make sure you can recover files the user deleted accidentally). It's one less thing to worry about if their workstation / laptop dies, too - just give 'em another one. If more mail clients supported LDAP address books and WebDAV calendars this would be even nicer; as it is I still have to keep their mail folders in their network home dir so I can back up their address book.
You can back up POP3 boxes if you're on a corporate network, by forcing the client to keep its spools on the user's homedir. That tends to be slow and inefficient, though, and it doesn't let you do things like transparently split out attachments and store only one copy of an identical attachment for everybody.
It's also easy to lose mail with POP3 if your client does something silly. Most clients seem pretty decent now, but I remember old Eudora versions used to DELE mail off the server then crash, corrupting their mailboxes. Woohoo.
IMAP gives admins much more control over user mail. You can back up their mail folders, including their outbox and filed mail. You can enforce mail lifetime limits if your information retention policy requires it. You can store single copies of duplicate messages and attachments. You can give users access to shared mailboxes, and to each other's mailboxes where necessary. You can manage their mail folders remotely ("I can't delete $message, help!"). You can set up filters that deliver mail into sub-mailboxes automatically. Good clients automatically sync the IMAP mailbox so it can be used when the client is offline, like POP3. You can have your anti-spam software learn from their mail client's Junk folder. It's just much saner for business environments, in much the same way that network home directories and thin clients are much saner than a bunch of desktops with local storage are.
IMAP also permits you to give the user a single view of their mailboxes from their desktop and when they're on the road, or accessing their mail from home. Don't even talk about "leave mail on server" for POP3 - users WILL misconfigure it and suck all their mail down onto one of their machines, then come to you looking for help cleaning up the resulting awful mess.
Now, for an ISP, things are the opposite. You want to get the users' mail through your system and get rid of it. Most ISPs only offer POP3 and have small mailbox caps, so the user can't set their client to never delete mail off the server. They don't want to be responsible for user mail, they want it off their hands ASAP. An ISP can just tell a user who deleted a message then wants it back "well, that was silly then wasn't it?". An ISP doesn't want to back up 5 years worth of mail for 500,000 users.
My point is that for corporate environments IMAP is so superior that it's almost nuts to offer anything else, but for an ISP POP3 is a much more viable option. So what's so bad about POP3 depends entirely on what your needs are.
Re:POP? (Score:4, Insightful)
Wal-mart has an estimated 1.6 million employees. (source [wikipedia.org])
General Motors, by contrast, has approximately 360,000 employees.
The post says "around one million accounts" which is very different from one million employees. I have over ten email accounts that I actively use for receiving mail and four to six for sending.
An ISP could easily have millions of accounts. But since he said "huge" company, since they were using Exchange, and because he's asking Slashdot, my guess is that he's not at an ISP. Instead, I'd guess he's at a medium-sized company that might offer email accounts to its customers, or at a large company that contains many subsidiaries (but wants one email domain for all of them).
~ 320K accounts (Score:5, Informative)
Re:~ 320K accounts (Score:5, Funny)
Yeah, but we all know what happens when one of these Domino servers falls over
Worst. Email. Client. EVER! (Score:4, Funny)
Don't get me wrong. Notes isn't just a crappy E-mail client. It's also a crappy database access client that provides user-definable forms which can be used to populate rows in the database. When you start getting a LOT of rows, the performance really goes to shit unless you replicate the database down to your local hard drive.
Rather than the Notes based solution, I would suggest an old 386 running BSD and Sendmail. That'd save you a lot of pain in the long run, versus dealing with Notes.
Oh, dear God, you RECOMMEND Notes? (Score:3, Informative)
Sorry, but Lotus Notes sucks; it's an abomination in almost every way. It's bloated, slow, buggy and has what is arguably the worst user interface ever (The User Interface Hall Of Shame said they could have based their entire site on this one app!) Sure, it does group meeting notes and can let you check other people's calendars, but it falls flat as an email system. If it can't do the basics, who cares about the "advanced" features?
Re:~ 320K accounts (Score:3, Informative)
Argue for your favorite all you want, but friends don't specify Lotus Notes to friends.
It's obvious (Score:4, Informative)
However, I'd personally ask Google [google.com]. They've done it, and even their search engine turns up useful information. I found an interesting link there detailing the deployment of a large hundred-thousand-user mail system, from the architecture to the software, on Linux Journal [linuxjournal.com].
Who to talk to (Score:3, Informative)
Mirapoint is probably _the_ vendor to speak to, though.
openwave's email server does this but it's $$$ (Score:3, Informative)
I'm sure you could hack together something to do this much like what google did. Might take some time but it's totally doable.
Vendors (Score:5, Interesting)
Just do one thing, please: make sure that the client is honest-to-goodness serious about this. I absolutely hate getting pie-in-the-sky RFPs from people who are just kicking the tires. It's a good way to burn bridges by not looking professional.
Re:Vendors (Score:4, Funny)
Those newfangled "real numbers" are nothing but bullet-point creeping featuritis. Integers, on the other hand, have been around since at least Kernighan & Ritchie. They do one thing and do it well. Keep true to the Unix philosophy! Real numbers in information technology? Just say NO.
Split up the tasks (Score:4, Informative)
Your receivers will be a bank of servers running sendmail. They will do appropriate spam processing to reduce the amount of mail actually received. They feed the data into the storage servers.
The storage system has the data partitioned out so that all the data for one user goes to one server while all the data for another goes to a different one. The storage system also has to provide POP and IMAP access. You may want a special setup where the IMAP or POP service knows which server to go to. Investigate having one giant virtual filesystem so that the system isn't too complicated.
Your webmail access will use IMAP to access the actual mail. It can be a completely different system.
The sending system will be a chokepoint for all outgoing mail. You are going to scan it as it goes out to look for virus-sent emails or unauthorized messages. For instance, you may want marketing email to be processed differently than inter-office email and such.
All of these systems will be running sendmail. I know sendmail has a bad rap for being insecure, but the insecurities have been found and since fixed. It is by far the most manageable system when it comes to large-scale deployments with heavy customization.
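The per-user partitioning described above is typically done by hashing the mailbox name, so that delivery, POP, and IMAP all agree on where a given user's mail lives. A minimal sketch (server names are hypothetical):

```python
import hashlib

STORAGE_SERVERS = ["store01", "store02", "store03", "store04"]  # hypothetical pool

def storage_server_for(user: str) -> str:
    """Pin each mailbox to one storage server by hashing the username.
    Every component (delivery, POP, IMAP) computes the same answer,
    so no shared lookup service is needed for routing."""
    digest = hashlib.md5(user.encode("utf-8")).digest()
    return STORAGE_SERVERS[digest[0] % len(STORAGE_SERVERS)]
```

The catch: a simple modulus reshuffles almost every user when you add a server, which is why larger deployments use a lookup table or consistent hashing instead.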
Re:Split up the tasks (Score:4, Insightful)
Re:Split up the tasks (Score:3, Informative)
Re:Split up the tasks (Score:4, Informative)
In short, we have mail servers accepting the mail and dropping it on a shared NFS server which stores all the mail. The incoming servers run spam and virus filtering and are responsible solely for delivering the mail to the customer's mail directory, which lives on the NFS server.
On the client side, we run IMAP and POP3 servers which access the stored mail on the NFS server to deliver it to the clients.
The exact software used for both of these functions is somewhat irrelevant. Once you split things up this way, you can also split the selection process, i.e., "which is the best server for accepting SMTP mail and dumping it in customers' mail directories?" That can be answered completely independently of "what is the best NFS (or SAN) server to use to store the mail", or "what IMAP server should we be using", or "what webmail front end should we be using", and so on.
It also makes changing your mind down the road on any piece easier since you can actually run and test any one of these components in the live system as a final test before moving a replacement into the system.
FWIW, I would *love* to consult on something this scale.
no, it will not be sendmail (Score:4, Interesting)
You're high. Building a massive production email system on Sendmail 9 is slow-motion suicide. If the security holes don't get you, the terrible configuration methods and complete lack of scalability will, never mind the fact that Sendmail Inc is trying desperately to replace the product.
"Most managable with [...] heavy customization?" I'd laugh if I wasn't crying. And I'm crying because I used to work for a company that deployed a massively customized sendmail infrastructure -- and I was one of the poor bastards who had to maintain it. Trust me, you don't want to do this. Ever.
Yes, milter is cool. No, it's not cool enough to justify burning CPU cycles on sendmail in 2005.
Even Sendmail Inc tacitly admits that Sendmail's design is garbage: take a look at the design document [sendmail.org] for Sendmail X, and note carefully how much it resembles Postfix and Qmail. There are very good reasons for this.
If theyre using exchange (Score:3, Insightful)
If you're looking for a groupware replacement, then you've got a big job ahead of you. Scalix is a mess, Bynari is a hack, etc. Even once you get things running, the stuff end users buy, like PDAs and apps that hook into Outlook, is going to cause more problems.
If it's just POP/IMAP, you really can't go wrong. A good webmail option is kind of a catch. SquirrelMail is nice, but compared to OWA it's really out of its league.
If your post told us what they were fed up with and how they used their system you'd get some real advice. Expect the usual postfix vs qmail vs sendmail vs whoever mini-flamewars.
Unless It's A Very Old Exchange System... (Score:3, Insightful)
I'm sure someone, somewhere within the enterprise is using features of Exchange that they won't get anywhere else. Not to sound like a Microsoft fan-boy sock puppet, but there's some features that Exchange has that people in a business environment just love.
However, since you asked. I'd run Exim or Qmail and Cyrus IMAP.
Still Have to Engineer it (Score:3, Interesting)
For pop3 & imap4rev1, look at:
http://www.dbmail.org/index.php?page=overview [dbmail.org]
Still need an MTA. I think qmail is the fastest and best, but I'd use Exim, as it's easier.
Database - not sure if MySQL and PostgreSQL will scale with dbmail.
I'd say use FreeBSD, because of the ports collection (don't Linux-flame me). However, something like Solaris 10 x86 (or Solaris + Sun hardware) might provide a bit better scaling, plus HA hardware, SAN support, support in general, etc. It is, though, a bit tougher for the OSS software installs (in my experience).
Re:Still Have to Engineer it (Score:3, Insightful)
I currently run qmail in a small production environment, handling about 20k messages a day. It's small, but enough to point out the cracks.
qmail does many things well, but it is also a product of DJB-bizarroworld. The worst of the offenses, in my book, is that due to his security model, the SMTP receiver will accept messages to any recipient, not just valid users.
Re:Still Have to Engineer it (Score:5, Insightful)
Gmail accounts... (Score:3, Funny)
CommunigatePro from Stalker.com (Score:5, Informative)
2) It'll scale as big as you can dream - over 5 million accounts with clustering
3) MAPI support
Hire Matt Simerson, the creator of MailToaster (Score:4, Informative)
You can learn about him, and his mail projects at http://www.tnpi.biz/internet/mail/toaster.shtml [tnpi.biz]
-Chris Knight
Scalable e-mail systems? (Score:3, Informative)
My slides relevant to this discussion can be found at http://www.shub-internet.org/brad/papers/dihses/ [shub-internet.org] and http://www.shub-internet.org/brad/papers/sistpni/ [shub-internet.org].
And yes, Nick Christenson [jetcafe.org] has been a long-time friend and co-author of mine.
Feel free to contact me directly if you want some referrals.
Google services (Score:3, Funny)
Plan. Test. Spec. Deploy. (Score:5, Informative)
(2) Test. For each server, hammer it. Test its load under as close to real-world circumstances as you can. Then create unreal, punishing loads and see how it handles them. Plan in advance for how your server farm handles something like virus-generated mass emails causing 1000% spikes in load.
(3) Using your testing results, spec out the actual hardware. RAID, cheap hardware, redundancy, etc. If you have control over the network choice, plan a location with multiple fiber trunks coming into the building and provider redundancy. Remember backhoes in concert? Don't get hit by that. Plan for server failures, drive failures, network failures, power failures, and security compromises.
(4) Deploy! If you did the rest right, this is the easy part. You'll have redundant network connections, HSRP, redundant switches, a proxy farm, an imap/pop farm the proxies connect to, an smtp farm for outgoing emails, and a web server farm for serving up webmail (depending on how you choose to architect the disk space, the web farm and the pop/imap farm may be one and the same; depends on how you set things up.)
Here's a starter link to a setup which is smaller but, in principle, fairly similar:
http://www.itd.umich.edu/umce/features/2004/cyrus
Finally, if you don't want to screw it up, ask someone who has done it before. Paying someone $300/hr for a 10-30 hour review of your plan is dirt cheap compared to horking the setup. Someone who has worked in huge email environments (à la Hotmail) could show you gotchas before they bite you. (If you need help figuring out who to ask, I could even point you to some of the appropriate people.)
YIKES! Tossing out the groupware?! (Score:5, Informative)
Now to the mega-infrastructure that I set up, for under $50K, for an undisclosed company (which also didn't want groupware).
1. Transport sender (sendmail). That's right! Good ol' plain sendmail scales. It does require some pretty savvy tweaking, so get a Sendmail.com consultant onboard just for this. Use Sleepycat DB for speed in all sendmail setups. For one million accounts, I had about 23,000 transactions per minute during the day. You'll require 10 servers for cushion (against some idiot sending an ISO attachment).
2. Payload receiver (sendmail). A second group of machines to handle the reception of SMTP payloads.
3. IMAP4S/POP3S - Hey what's with the "S"? Nothing like sending your user's password in the clear. Unless you enforce VLAN in your corporate environment and limit all IMAP4/POP3 to VLAN, the "S" is a mandatory security feature, inside and outside. Guess what "S" stands for?
4. Webmail - SquirrelMail - Yet another dedicated server (to which I had to add two more load-balanced servers to handle the growing pains). Use HTTPS for login only.
5. AntiVirus (ClamAV) - It was the best back then; now it's just running in the middle of the pack. sendmail has milter, which allows extensibility such as MIMEDefang, wilter, rureal (reverse-DNS check), SpamAssassin, and SPF.
6. Support - Half the effort is put into those webpages that 'hand-hold' these newbies into reconfiguring their machines. Worth the effort if you have over 20 expert PC users who can do their own boxens. Otherwise, do it yourself at each PC. These pages should cover Thunderbird and Evolution, as well as Outlook and Outlook Express.
7. Learn to spin 11 plates, one on each pole. Keep them spinning... If they start to drop and break, bring in some more Unix dudes.
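For scale, the 23,000 transactions/minute figure spread over the 10 relay servers mentioned above is a modest per-box rate; roughing out the arithmetic:

```python
peak_txn_per_min = 23_000   # daytime peak quoted above, for ~1M accounts
relay_servers = 10          # the recommended cushion

# Per-relay message rate at peak, if load balances evenly.
per_server_per_sec = peak_txn_per_min / relay_servers / 60
print(f"~{per_server_per_sec:.1f} messages/sec per relay at peak")
```

Roughly 38 messages/sec per relay, so the cushion is really for spikes and oversized attachments, not the steady state.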
Simplicity is key. (Score:5, Informative)
OpenLDAP
You need a central configuration repository to store the email accounts, their passwords, etc. OpenLDAP is perfect for this, and you can replicate it out for scalability. Be prepared to learn about LDAP schemas.
Exim
Use Exim because it has a simple process model (a single binary that does all the work, like sendmail) but has a human readable configuration file and has to be the most flexible MTA out there. You will have customers with weird requirements sometimes, and Exim will be able to meet those. Plus, it has Exiscan-ACL built-in these days, which allows you to do virus scanning and spam scanning at the DATA stage, before the mail is actually accepted by the MTA. It means you can make the sending MTA deal with the bounces if the mail is a virus or is obvious spam.
Courier-IMAP for POP3 and IMAP access.
Yeah, it's written by a sociopath, but nothing else works as well in the field. It works out of the box with sensible LDAP schemas and is fast, reliable and secure. Handles SSL, all the different authentication methods, what have you. Maildir compatible.
Maildir message store.
Store the mail in maildirs. Don't put them in
NFS mount the maildirs from a fast NFS device like a Netapp. Netapps are recommended because you can plug them in, and they just work, plus they are easy to scale by adding more trays.
Linux NFS servers set up with heartbeat and shared disk also make a nice HA NFS and would be cost-effective, but you'll have to buy an array anyway (probably fibre channel), so it might be better to just get something that's completely integrated, like the Netapp.
Spamassassin.
Can be configured to scan mail at DATA time in the SMTP conversation. A LOT of configuration work is needed here to make it play nice on a massively scaled platform, but it can be done. Mostly it needs to have things like auto-whitelisting and Bayesian filtering turned off, as the extra DB file work is a bit excessive.
Actually, I'm sure there is a way to make it work with a less resource intensive repository, but using the standard SA rules seems to work well for my environment. *shrug*
ClamAV.
Free antivirus, it works, and integrates well with Exiscan-ACL. Set it up to scan via the daemon, and configure it to update every couple of hours from cron, and bob's your uncle.
Scaling out
Make every box the same. Make every box an MTA, a POP3/IMAP server, etc. Use something like Kickstart to automate builds so that you can build a machine in 10 minutes, and all you have to do is configure the IP address and plug it in. If you want to be REALLY sexy, you could make the machines boot off the network, and mount / from a shared NFS area, and make
Load balancing
Hardware load balancers are pretty much a necessity. Don't touch Cisco stuff; it's not very good. Go with Foundry Networks ServerIrons. The XLs can handle 1 billion requests/day if you configure them in Direct Server Return mode (also known as DSR/Foundry switchback). Use it. It makes all the return traffic go directly out to the net, meaning your ServerIrons have to switch less traffic and track fewer sessions. For a million users, though, I would recommend a pair of ServerIron 450GTs or bigger, maybe one per VIP/service.
Now, if this is all looking pretty daunting, you could always hire me to build it for you
Re:Simplicity is key. (Score:5, Insightful)
-Cyrus IMAP, while a monster to build and configure, can handle a pretty heavy load, and the latest versions can handle a lot of load-balancing internally.
-Exim's nice. I'm a Postfix man, myself. Sendmail is king, though. I'm not going to claim to like it, but it's up to the task, and there's something to be said for using a standard tool.
-While things like MD4 are okay for hashing, they're kind of CPU-intensive. Consider something like "second and third letter of username" that takes less CPU time. The right answer here depends a lot on the relative speed of CPU versus disk. If you can get dedicated hardware to do this (rare, but it exists), use whatever hashing the hardware supports.
-Consider some sort of cache (maybe even separate machines) between incoming SMTP and SpamAssassin/ClamAV. When the 2am spam run hits, your incoming SMTP machines can become overloaded. The downside: deciding what to do with mail that's not rejected the moment it's received.
-Set up a "mail machine" configuration with whatever OS and tools you use, and make it possible to create a disk image quickly. You're going to need a lot of hardware, which means that you'll have enough random failures to make building machines by hand impractical. This also means "have at least one extra built machine/disk array/etc. powered-on and waiting at all times" for those 4am hardware failures.
-You may find that things like NFS just aren't fast enough. Be ready to look at SAN or shared "direct-looking" storage. The tough part: this is hard to discover during testing. It may be overkill, but don't lock it out as a possibility.
-I/O is king. CPU speed won't matter as much as bus speed, disk speed, and memory speed. This is why a lot of companies use banks of big proprietary unix machines for their mail, even if they use commodity PCs elsewhere.
-I don't trust hardware load balancers. Sometimes they're necessary (and they do make life better when they work), but they're a big single point of failure. Consider other ways to split the load, or at least ways to work around the load balancer if it should fail. The Cyrus aggregator can handle some of this.
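To make the hashing trade-off above concrete, here's a rough sketch; the store count and usernames are made up, and MD5 stands in for whatever cheap digest you prefer:

```python
import hashlib

NUM_STORES = 16  # hypothetical number of mail-store partitions

def bucket_by_letters(username: str) -> str:
    # The cheap scheme from the comment: 2nd and 3rd letters of the
    # username.  Nearly free CPU-wise, but skewed, since lots of
    # usernames share common prefixes.
    return username[1:3]

def bucket_by_hash(username: str) -> int:
    # A real hash spreads load evenly but costs a digest per lookup.
    digest = hashlib.md5(username.encode()).hexdigest()
    return int(digest, 16) % NUM_STORES

print(bucket_by_letters("jsmith"))  # -> sm
print(bucket_by_hash("jsmith"))     # some store in 0..15, stable per name
```

The hash variant is deterministic, so any frontend can locate a mailbox without a directory lookup; the letter variant is what you fall back to when CPU is the bottleneck.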
Re:Simplicity is key. (Score:4, Insightful)
80,000 is trivial. I was running a 12-node system with 87,000 users 12 years ago on hardware slower than a PlayStation.
The complexity of going from 100,000 to 1,000,000 isn't just 10 times harder; you get into the territory where a four-sigma system that works with few problems at 100k dies horribly with 1000k users. There's a line past which, instead of one broken machine being unusual, at least one machine is always broken, and it will often be broken in a way that is hard to diagnose.
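A back-of-envelope way to see the "one machine is always broken" effect, assuming each box is independently up with some probability (the availability figure and node counts here are illustrative, not measured):

```python
# p: probability a single box is healthy at any given moment.
# 0.9999 is a hypothetical "four sigma"-ish per-machine availability.
p = 0.9999

for n in (12, 120, 1200):
    at_least_one_down = 1 - p ** n
    print(f"{n:5d} machines: {at_least_one_down:.1%} chance one is broken")
```

At a dozen machines a broken box is a rare event; at the fleet sizes a million-user system needs, the same per-box availability means something is essentially always broken somewhere.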
While we answer this question... (Score:5, Funny)
It starts with a slashdot geek working in the email department spitting up his coffee, followed by a few rumors which make it up to a guy in accounting and customer service, followed by frantic management emails, including some inappropriate language, from Steve and Bill. Then a few good geeks start tracing who this cfsmp3 guy is and try to tie him to a company, while the sales reps begin cold-calling any customers running around 1 million accounts.
And Microsoft will botch it because they have no experience in kowtowing and bootlicking, which are important skills for any company that wants to humbly keep its customers.
Easy (Score:5, Insightful)
Intelligent Architecture (Score:5, Informative)
Sounds like a fantastic design opportunity here. The 5% of the project that is Enterprise architecture is what I enjoy the most as well. I'm assuming money probably isn't an object in terms of how much gear and bandwidth you may have to feed to this.
I'm happy to let my fingers type away below, I'd love to keep in touch and see how you end up shaping this system. my email is allowmx at hotm...
Before I ask: are there actually a million accounts, or is that just a ceiling you have to show proof of concept for?
I've only implemented up to about 250,000 accounts of any kind; as I'm sure you're aware, the per-transaction resource costing is essentially the same.
For me, I would look at this for sure from at least these two angles:
1) Knowing your transactional costs: how much of your hard resources (bandwidth, CPU, and disk space) will each type of transaction in your system take? I use this approach to get not an exact number but an idea of magnitude, and to see where the cost lands, so the proper attention gets applied there.
2) Failsafe intelligence & capacity in the infrastructure, as well as at the application layer. You have to know that your hardware, software, OS, business logic, and applications are all monitorable, internally and externally, both for availability and for actual "can I use it": transactional logs, etc., so information is available when the inevitable problems come up.
Also, build as many of these layers as possible to be self-healing, and fungible to the point that your service delivery is homogeneous in as many ways as possible. If your network finds that something doesn't work or route, with mail you can find another way to route it. A transaction manager of some kind, direct or not, could be useful here depending on what the client wants.
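The transactional-costing idea in point 1 can be roughed out like this; every number below is an assumption, there to be replaced with your own measurements:

```python
# Back-of-envelope transactional costing for a 1M-account system.
# All inputs are illustrative assumptions.
accounts = 1_000_000
msgs_per_account_day = 15      # sent + received, per account
avg_msg_kb = 75                # average message size
peak_factor = 3                # busy-hour rate vs flat daily average

msgs_day = accounts * msgs_per_account_day
peak_msgs_sec = msgs_day / 86_400 * peak_factor
daily_traffic_gb = msgs_day * avg_msg_kb / 1024 / 1024

print(f"{msgs_day:,} msgs/day, ~{peak_msgs_sec:.0f} msgs/sec at peak, "
      f"~{daily_traffic_gb:.0f} GB/day through the MTAs")
```

Even this crude model tells you the order of magnitude your MTA pool, scanners, and storage have to absorb, which is the point of the exercise.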
99.9% uptime equates to about 526 minutes, or 8.76 hours, you _could_ be down each year. That's about 44 minutes a month. (The 87.6-hours-a-year, roughly-a-day-a-month figure corresponds to 99% uptime, not 99.9%.)
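The downtime arithmetic is worth double-checking before you commit to an SLA; a quick sketch:

```python
# Downtime budget implied by an availability target.
MIN_PER_YEAR = 365 * 24 * 60          # 525,600 minutes in a year

for target in (0.999, 0.99):
    budget_min = (1 - target) * MIN_PER_YEAR
    print(f"{target:.1%} uptime: {budget_min:.0f} min/year "
          f"({budget_min / 60:.2f} h/year, {budget_min / 12:.0f} min/month)")
```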
Based on that, having flexible, redundant tools set up in a high-availability arrangement at their respective operating capacities is key. I'm not sure whether your current Exchange problems stem from too little equipment, bandwidth, or other stability issues, so I'll just assume it's all of them.
I apologize if anyone else has already mentioned some of this, but here's some of what I've found to help me where email has become as crucial to a business as their cell phone.
On the hardware level:
- STORAGE: Everything goes on a SAN, if not more than one. Don't waste your time with anything less.
- SERVERS: All servers have redundant hot-swappable parts at the very least: power and hard drives. I'd even suggest making the servers iSCSI-bootable so they can boot off the backbone. Beyond this, I like to buy my servers in piles of identical ones. Have 1-2 spare servers of each kind sitting there, ready to take the hot-swap drives from a failed server. That way if a server dies, you can address the power supplies, or move the HDs from that machine into another identical server and get it up and running while you diagnose the hardware problem independently. My approach to any kind of problem is FIX, DETECT and REPAIR: get it up and running, find out what was wrong, make sure it's fixed for good. Too many of us stop at the first two.
The idea I have in mind is a smaller-scale version of Google's beige-box army. Linux/BSD get so many more transactions out of each piece of hardware, so that works very much in your favor. Obviously pick something enterprise-grade to satisfy the client, such as the Compaq/HP ProLiants (I feel these servers have the best overall support, manageability and information tools, and their open Linux drivers interface wonderfully with open-source operating systems).
Networking/Communication level:
- Entire mail processing architecture communi
You need a staff of 10 to 20 to run this... (Score:3, Informative)
One million email accounts is quite a lot. You're getting into the big-league ISP category with something like this. It's not a one-person operation to put something like this together; you're going to need a substantial number of well-trained people to do this. There are only a couple of players in the field at this level. Sun's JES Messaging system owns a sizeable chunk of the market, followed by Openwave and a small gaggle of fly-by-nights with unproven track records.
Some of the larger email systems however are homegrown using open source parts. Yahoo and Google immediately come to mind, and they do work quite well. But you probably don't have the resources that they do to engineer & test something like this. Yahoo is rumored to have more than 200 people working on email alone.
Sun has a deployment like this canned, sitting on a shelf in Santa Clara. Tell them what you need, write a check, and they'll show up with the kit. 99.999% uptime if you write a big enough check. Make them throw in the Waveset stuff.
I/O (Score:5, Informative)
One of the first things the school did was figure out how exactly their current system was failing them. Their old AIX boxes were being stressed just by the volume of mail coming through the system, they had little power left over to do any sort of filtering. This led to users getting drowned in unwanted e-mail which only exacerbated the existing load issues. This is one of the first things you need to do, figure out why your current system isn't working properly. You'll be better equipped to fix the problems when they've actually been identified.
HEC Montréal also went for heavy redundancy and specialization. Instead of a handful of servers sharing all of the tasks equally, each node in the cluster has its own job, with every class of job having a backup server. Every job is going to take a beating with so many users, even if only a fraction of them are using the system at any given time.
I'd say the most important part of what you're doing will be modeling your current use. Are you getting a ton of traffic from viruses and worms spreading over your internal network? Do you get huge amounts of spam traffic to users? In such cases filtering at your SMTP servers will relieve the rest of the system from extraneous traffic. While you might need really beefy external SMTP servers you won't need nearly as much storage space on a SAN or NAS.
Army Knowledge Online does it for 1.72 million use (Score:5, Informative)
Needs to be said: (Score:4, Funny)
Re : Mail Transfer Agents
Qmail : a small office of neatly dressed clerks, delivering short clipped remarks to queries, and handling mail with a rude impersonality, except in the case of failure, where they let their hair down, have an after-hours beer, and let you know about it, pointing to the pertinent header sections.
MMDF: A jumped up mailroom boy with a chip on his shoulder. Loves the bureaucracy and takes great pride in stamping "illegal address" in red ink on any mail it passes. Unpacks all the mail and repacks it in his own special envelopes before delivery to end users.
PP: MMDF gone mad with standards fever. Think "Brazil".
No, PP is... well, see, when it receives a letter, it chops it into small pieces, then translates bits of it using an English-Hungarian phrasebook and puts all the bits into various pigeon-holes. When it gets round to delivering the message, it collects all the bits, translates them back using a Hungarian-English phrasebook, tapes them together, and loses the letter. Some time later, you get a bounce message:
----- The following addresses had permanent fatal errors -----
----- Transcript of session follows -----
>>> RCPT To:
550 My hovercraft is full of eels
PP is John Cleese.
Sendmail: Shiva as a postman. Many arms delivering mail, dancing, taking drugs, destroying as it sees fit. Often makes creative changes to the mail for kicks, but ultimately can be persuaded to do anything with the right incantation...and that includes giving you other people's mail.
VMail: No experience yet, but I'd guess something like a wizened old man sitting on the porch outside the post office. Looks at everyone who passes by with deep suspicion, but turns out to be friendly and helpful once he realises you're not there to rob the place.
Micro$oft IMC: The Scarlet Pimpernel of postmen. Hard to find, impossible to order about, but every once in a while it saves a piece of mail from disaster. Sometimes even with its head(ers) intact.
cc:Mail SMTPLINK: A 5-year-old child left in charge of a large sorting office. Can't reach over the counter properly, can't handle more than one letter at once, and has to go looking for a grownup whenever it wants to deliver mail to other towns. Often opens parcels to look for shiny things inside, then just delivers the wrapping paper onwards.
cc:mail UUCPLINK: an insane madman sitting in a box. Mail is thrown into the box, where unknown things happen to it.. sometimes mail actually leaves the box.. usually to be delivered to the administrator of a totally unrelated post office, containing a complaint that the madman could not find the recipient in his dark box and would you please contact the person with the key to the box. Of course, the only way to reach that person is by mail, and even if the box is opened the madman cannot be persuaded to actually send the mail on to the unknown addressees via the person with the key anyway...
Gus, Pete Bentley, Malcolm Ray, Perry Rovers
Quick setup (Score:5, Insightful)
my recommendations:
With backup support you should be able to set up such a system in 6 to 12 months (the latter being more realistic for big companies).
Most probably users will complain about the missing calendar.
Most troublesome will be the migration phase (I hope you noticed I didn't mention it above). It depends so much on your current scenario that it is very difficult to give general advice.
> where would you start?
Contacting me ;-). Perhaps get a budget first. As I said, I'm in sales....
Regards, Martin
Stop right now (Score:4, Insightful)
So, what you do right now is you go find someone who does know how to do it. And by that I mean someone who can demonstrate they know how. Which does not equate to having a low slashdot id; it equates to having done real projects of this scale.
So, how do you start? You ring IBM and get them to come in and talk to you. You ring Red Hat. You ring Accenture.
If you want impartial advice from someone who isn't a vendor (which is a good idea), then you go find some companies that have a million-seat open source e-mail deployment in place and see if you can get their messaging admin to talk to you.
Well (Score:4, Insightful)
Notes/Domino (Score:4, Interesting)
This is exactly what Notes was designed to do: scale. People have been building systems on this scale with Notes for nearly twenty years. You can not only scale it by moving parts of your email system onto mainframe-class iron, you can also distribute it and build all kinds of flexibility and redundancy into the system to meet virtually any messaging requirement (e.g. choosing an alternate MTA for high-priority traffic when there are Internet disruptions). Naturally there's some complexity involved, but if you can get by with sendmail you probably shouldn't be using Notes.
What's more important is the management of accounts and identity, which is distributed, delegatable, and backed by robust cryptographic certificate management. You can let a subsidiary manage its own accounts, the subsidiary can subdelegate that to a division, and the division can subdelegate it to the IT staff on site; at each level, policies can be set, enforced, and changed for lower levels.
For the lazy... (Score:5, Informative)
---
ok i work for a large uk isp in the messaging (email) operations dept. we currently have 2.5-3 million active accounts (and a load of suspended ones), and handle anywhere up to 12-16 million mails per day
our setup is like this (this is simplistic though):
front line - anti abuse mta's - these do dnsbl type lookups (spamcop, spamhaus and sorbs). we have 9 incoming
next we have mta's. they farm mail off to brightmail servers, which do similar to spamassassin. we have 6 incoming mtas, and 8 brightmail servers (not enough - high load)
after that they farm off to vscans (6)
after that any mail that gets through is delivered to mail stores (8 + 2 hot spares)
what you want to be doing is similar to the above - chaining the mail from one level to the next. the first level should be the rbls - these are less processor-intensive, and can remove a fair whack of your mail in one swoop. spamassassin is going to be more cpu-intensive, since it has to open each mail and read the first x many bytes
i'd have separate machine(s) holding your master directory, and if you can get directory caches then do that too (to take the load off the master directory) - ours run oracle
i don't know what your budget is, but split up the different tasks as much as possible. that way if you need to add more to any pool (rbl lookups, spamassassin etc) you just add another machine..
one last thing - we also have a separate box just for postmaster mail (with exim + spamassassin funnily enough) - it tends to get busy
Last edited by Slidey on 09-08-2005 at 11:19 PM
--
(end of quote)
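The quoted post's cheapest-filters-first ordering can be quantified with a toy model; the per-stage costs and rejection rates below are invented for illustration:

```python
# Toy model of the filter chain: per-stage CPU cost (ms/message) and the
# fraction of remaining mail each stage rejects.  All numbers are made up.
stages = [
    ("rbl lookup",   1.0, 0.50),   # DNS blacklist: nearly free, big win
    ("content scan", 50.0, 0.30),  # brightmail/spamassassin-style scan
    ("virus scan",   30.0, 0.05),
]

def avg_cost_ms(order):
    """Expected per-message cost when stages run in the given order,
    with each stage only seeing mail that earlier stages let through."""
    surviving, total = 1.0, 0.0
    for _name, cost, reject in order:
        total += surviving * cost
        surviving *= 1.0 - reject
    return total

print(avg_cost_ms(stages))        # cheap-first: 36.5 ms/message
print(avg_cost_ms(stages[::-1]))  # expensive-first: ~78.2 ms/message
```

Same filters, same mail, roughly half the CPU, purely from putting the cheap, high-rejection stage in front.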
Re:CommuniGate (Score:3, Informative)
Re:go to gmail (Score:5, Insightful)
Gmail does not have guaranteed uptime.
You do not pin your company's communications system on something you cannot get an SLA for.
need I go on?
Re:Let the vendors do the work. (Score:5, Insightful)
DO NOT implement a half-assed solution. Unless you really know what you're doing (and if you did, you wouldn't be asking this question), don't assume that a million Linux servers strewn about a million offices and data centers is the best solution, even if it is the easiest to set up and administer. Maybe it is; come up with a proposal with hard numbers and see how they compare to the vendors'. A million dollars spent on a Sun E10000 and an Oracle Grid subscription (scales perfectly, right?), or a million IBM engineers flown into your site when an emergency happens, may be worth paying for.
Re:Kerio (Score:4, Informative)
disclaimer: I work for one of those companies.
Re:Here's my plan and it's the best one you'll get (Score:5, Insightful)
This is the best advice he'll get? Sheesh.
Think this through -- a lot of e-mail programs check every 20 minutes. Even assuming I never hit the same server twice, I could need 400 minutes, over six hours, to get all my mail. Since it's random, it could take days.
And that's just for starters with this lame scheme. If I want to check mail, say, from the field on a dial-up once a day... hopefully you can see how badly this would suck.
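Worth noting: the 400-minute figure is actually the best case. With genuinely random server selection, the expected wait follows the classic coupon-collector bound (the 20-server count and 20-minute poll interval are assumed from the scheme being criticized):

```python
# Expected number of random polls needed to hit every one of n servers
# at least once: n * H(n), the coupon-collector bound.
n = 20             # assumed server count in the criticized scheme
poll_minutes = 20  # assumed client polling interval

harmonic = sum(1 / k for k in range(1, n + 1))
expected_polls = n * harmonic

print(f"~{expected_polls:.0f} polls, ~{expected_polls * poll_minutes / 60:.0f} "
      f"hours on average to have seen every server once")
```

So the expected time is roughly a full day, not six hours, which makes the scheme look even worse than the comment suggests.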
What the guy should do is buy an e-mail system that can handle 1,000,000 users, not screw around trying to chewing-gum together his own solution.
Re:Here's my plan and it's the best one you'll get (Score:3)
1. users should be in a db.
2. imap servers should be their own cluster
3. pop servers should be their own cluster
4. smtp servers should be their own cluster
5. spam filtering should be its own cluster
6. round robin DNS should be ditched in favor of hardware load balancing.
kashani
Re:NO GMAIL (Score:5, Interesting)
My God no! Friends don't let friends use qmail. Want reasons why?
1) It's a bitch to install. Won't even compile on modern Linux distributions. You have to patch it to compile it and the patch isn't even hosted on qmail's site.
2) It's a bitch to configure. Rather than parsing a single configuration file, qmail relies heavily on the presence of individual files in a directory.
3) Not not not not scalable! That's a myth. Doesn't properly batch jobs together. Hell! qmail was originally designed to be run from inetd!
4) Heavy reliance on other DJB tools, like daemontools.
5) Breaks well-known and understood UNIX standards.
6) Security through lack-of-functionality.
7) Not really secure despite the claims.
8) No longer maintained.
9) No features. Adding them requires patching, and patching, and more patching.
Serious sysadmins don't use qmail and for damn good reason. I don't give a damn if Yahoo did manage to string it together and make it work well. In short, qmail isn't particularly suited for deployment in any capacity.
More specific? (Score:3, Interesting)
5) Breaks well-known and understood UNIX standards.
Which standards are these? Are you talking about the errno [tesco.net] fiasco?
6) Security through lack-of-functionality.
What sort of functionality is provided by, say, postfix, that qmail simply won't do?
7) Not really secure despite the claims.
How's that? Do you have $500 [cr.yp.to]? If not, what's the security vulnerability that the author refuses to acknowledge?
Which of these problems that you enumerate are not addressed b
Re:NO GMAIL (Score:5, Interesting)
chkuser-2.0.8b-release.tar.gz
doublebounce-trim.patch
netqmail-1.05-tls-20050329.patch
outgoingip.patch
qmail-smtpd-auth-0.31.tar.gz
qmail-smtpd-auth-close3.patch
qmail-smtpd_gmfcheck.patch
qmail-spf-rc5.patch
Most of these patches require hand-editing the sources and Makefiles to successfully merge them all into the stock qmail or netqmail base. Lots of manual reading through *.rej files to make it all work.
In order to simplify new installations I've created my own personal CVS repository for my Qmail sources. I commit changes to the tree whenever a new patch comes out with functionality I need. Hence on a new install I simply check out my custom tree and compile.
The initial work was a royal pain in the ass, however, once it is all up and running the stability and performance has been excellent.
Re:NO Domino (Score:3, Insightful)
There are dozens of perfectly good mail servers out there. The more features they have the more likely you are to have problems. It's a pretty simple equation.
And if all else fails, you can write your own. I've written one, it's n
Re:I worked at a company that did this... (Score:3, Informative)
We have account data stored in an LDAP store, mirrored to a second (read-only) store for redundancy/scaling when busy. LDAP scales wonderfully for read-heavy tasks such as this one.
As has been mentioned separately, separating recipient (edge), storage, and outbound mail servers is really important. Our edge servers perform RBL checks, greylist
Outbound queues (Score:3, Insightful)
them) so backed-up outbound queues don't interfere with normal outbound processing.
The FallbackMX hosts can use a file system optimized for directories with lots of files in them (and can of course themselves be tuned as the parent poster suggested.)