Learning High-Availability Server-Side Development?
fmoidu writes "I am a developer for a mid-size company, and I work primarily on internal applications. The users of our apps are business professionals who are forced to use them, so they are more tolerant of access times being a second or two slower than they could be. Our apps' total potential user base is about 60,000 people, although we normally experience only 60-90 concurrent users during peak usage. The type of work being done is generally straightforward reads or updates that typically hit two or three DB tables per transaction. So this isn't a complicated site, and the usage is pretty low. The types of problems we address are typically related to maintainability and dealing with fickle users. From what I have read in industry papers and from conversations with friends, the apps I have worked on just don't face scaling issues. Our maximum load during typical usage is far below the maximum potential load of the system, so we never spend time considering what would happen under extreme load. What papers or projects are available for an engineer who wants to learn to work in a high-availability environment but isn't in one?"
2 words (Score:2, Informative)
map reduce
http://labs.google.com/papers/mapreduce.html [google.com]
Well... (Score:4, Informative)
Zawodny is pretty good...
Here goes... (Score:3, Informative)
1. Check all your SQL and run it through whatever profiler you have. Move things into views or functions if possible.
2. CHECK YOUR INDEXES!! If you have SQL statements running slowly, the likely cause is not having the proper indexes for those statements. Either add an index or change your SQL.
3. Consider using caching. For whatever platform you're on there's bound to be decent caching.
That's just the beginning... but those are the likely causes of most of your problems. We could go on for a month about optimizing, but in the end, if you just stuck with what you have and checked your design for bottlenecks, you could get by just fine.
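The index advice in point 2 is easy to demonstrate. Here is a minimal sketch using Python's sqlite3 module (the table and index names are invented for illustration); EXPLAIN QUERY PLAN shows the same query going from a full table scan to an index lookup once the index exists:

```python
import sqlite3

# Hypothetical orders table; all names here are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
print(plan(query))   # a full table scan ("SCAN orders"; exact wording varies by version)

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(plan(query))   # now a search using idx_orders_customer
```

The same exercise works with EXPLAIN in MySQL or EXPLAIN ANALYZE in PostgreSQL; the point is to make the optimizer show you its plan rather than guessing.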
Watch videos/presentations. (Score:3, Informative)
check these out... (Score:4, Informative)
http://highscalability.com/ [highscalability.com]
http://www.allthingsdistributed.com/ [allthingsdistributed.com]
Re:Well... (Score:4, Informative)
Define your thread's purposes (Score:5, Informative)
The biggest problem I have seen is that people don't know how to properly define their threads' purposes and requirements, don't know how to decouple tasks that have built-in latency, and don't know how to avoid thread blocking (and locking).
For example, often in a high-performance network app, you will have some kind of multiplexor (or more than one) for your connections, so you don't have a thread per connection. But people often make the mistake of doing too much in the multiplexor's thread. The multiplexor should ideally only exist to be able to pull data off the socket, chop it up into packets that make sense, and hand it off to some kind of thread pool to do actual processing. Anything more and your multiplexor can't get back to retrieving the next bit of data fast enough.
Similarly, when moving data from a multiplexor to a thread pool, you should be a) moving in bulk (lock the queue once, not once per message), and b) using the Least Loaded pattern - where each thread in the pool has its OWN queue, and you move the entire batch of messages to the thread that is least loaded; next time the multiplexor has another batch, it will move it to a different thread, because that one is now least loaded. Assuming your processing takes longer than splitting the data into packets (IT SHOULD!), all your threads will still be busy, but there will be no lock contention between them, only occasional contention ONCE when they pick up a new batch of messages to process.
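The batch handoff and least-loaded selection described above might be sketched roughly like this in Python (class and variable names are invented, and integers stand in for the parsed packets a real multiplexor would produce):

```python
import queue
import threading

# Sketch of the Least Loaded pattern: each worker owns its own queue, and the
# producer hands a whole batch to the worker with the fewest pending items -
# one enqueue (one lock acquisition) per batch, not one per message.
class WorkerPool:
    def __init__(self, n_workers, handler):
        self.queues = [queue.Queue() for _ in range(n_workers)]
        self.threads = []
        for q in self.queues:
            t = threading.Thread(target=self._run, args=(q, handler), daemon=True)
            t.start()
            self.threads.append(t)

    def _run(self, q, handler):
        while True:
            batch = q.get()              # one dequeue per batch
            if batch is None:            # shutdown sentinel
                return
            for item in batch:
                handler(item)

    def submit_batch(self, batch):
        # qsize() is only approximate under concurrency, which is fine here:
        # we just want a reasonable guess at the least-loaded worker.
        target = min(self.queues, key=lambda q: q.qsize())
        target.put(batch)                # whole batch moved under one lock

    def shutdown(self):
        for q in self.queues:
            q.put(None)
        for t in self.threads:
            t.join()

results = []
results_lock = threading.Lock()
def handle(item):
    with results_lock:
        results.append(item * 2)

pool = WorkerPool(4, handle)
for start in range(0, 100, 10):
    pool.submit_batch(list(range(start, start + 10)))
pool.shutdown()
print(len(results))  # 100
```

Real implementations would swap entire queue contents rather than Python lists, but the shape is the same: contention happens per batch, not per message.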
Finally, decouple your I/O-bound processes. Make your I/O-bound things (e.g. reporting via socket back to some kind of stats/reporting system) happen in their own thread if they are allowed to block. And make sure your worker threads aren't waiting to hand the I/O-bound thread data - here, a similar pattern to the above works well in reverse: each thread PUSHING to the I/O-bound thread has its own queue, the I/O-bound thread has its own queue, and when its queue is empty, it just swaps in the contents of all the worker queues (or just the next one, round-robin), so the workers can put data onto their queues again at their leisure, without lock contention with each other.
Never underestimate the value of your memory - if you are doing something like reporting to a stats/reporting server via socket, you should implement some kind of Store and Forward system. This is both for integrity (if your app crashes, you still have the data to send) and so you don't blow your memory. The same is true if you are doing SQL inserts to an off-system database server - spool the data out to local disk (local solid-state is even better!) and then have a thread that does nothing else continually read from disk and do the inserts. And make sure your SAF uses *CYCLING FILES* that cycle on max size AND time - you don't want to keep appending to a file that can never be erased - and preferably, make that file a memory-mapped file. Similarly, when sending data to your end-users, make sure you can overflow the data to disk so you don't have 3 MB of data sitting in memory for a single client who happens to be too slow to take it fast enough.
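A toy sketch of the store-and-forward spool with cycling files described above (the size cap and file names are invented, and a real implementation would also cycle on time and probably memory-map the current file):

```python
import os
import tempfile

# Store and Forward: append records to the current spool file, cycle to a new
# file once it passes a size cap, and let a separate drain step forward and
# then erase completed files. No file grows forever.
class Spool:
    MAX_BYTES = 200  # tiny cap so this example cycles; real caps are far larger

    def __init__(self, directory):
        self.directory = directory
        self.seq = 0
        self.current = self._new_file()

    def _new_file(self):
        path = os.path.join(self.directory, "spool.%06d" % self.seq)
        self.seq += 1
        return open(path, "a")

    def append(self, record):
        self.current.write(record + "\n")
        self.current.flush()  # data survives a crash of this process
        if self.current.tell() >= self.MAX_BYTES:
            self.current.close()
            self.current = self._new_file()  # cycle on max size

    def drain(self, send):
        # Forward every completed (non-current) file in order, then erase it.
        for name in sorted(os.listdir(self.directory)):
            path = os.path.join(self.directory, name)
            if path == self.current.name:
                continue
            with open(path) as f:
                for line in f:
                    send(line.rstrip("\n"))
            os.remove(path)

with tempfile.TemporaryDirectory() as d:
    spool = Spool(d)
    for i in range(50):
        spool.append("stat,%d" % i)
    sent = []
    spool.drain(sent.append)
    # Completed files were forwarded; the tail still sits in the current file.
    print(len(sent))
```

In production the drain loop would live in its own dedicated thread, doing the socket sends or SQL inserts and touching nothing else, exactly as the comment suggests.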
And last thing, make sure you have architected things in a way that you can simply start up a new instance on another machine, and both machines can work IN TANDEM, allowing you to just throw hardware at the problem once you reach your hardware's limit. I've personally scaled up an app from about 20 machines to over 650 by ensuring the collector could handle multiple collections - and even making sure I could run multiple collectors side-by-side for when the data is too much for one collector to crunch.
I don't know of any papers on this, but this is my experience writing extremely high performance network apps.
Re:Similar Question - Interviews (Score:1, Informative)
It's very difficult to get that kind of knowledge/experience without having actually done it before. The way I got it was by being hired onto a project that was a rewrite of a very large scale system, right as the project was starting. This was a great way to make it up the learning curve without too much pain, because I got to hear from the team directly about what decisions they thought were wrong in the previous design, and got to participate in the design discussions for the next generation. The team was very experienced at this, and the choices they made (and the mistakes/bumps along the way) taught me a lot about how to build such an application.
Re:The unfortunate thing about databases (Score:5, Informative)
Most databases have async APIs; PostgreSQL and MySQL have them in the C client libraries. Most web development languages, though, do not expose this feature in the language API, and for good reason. Async calls can, in rare cases, be useful for maximizing the throughput of the server. Unfortunately, they're more difficult to program, and much more difficult to test.
High-scale web applications have thousands of simultaneous clients, so the server will never run out of stuff to do. Async calls have zero gain in terms of server throughput (requests/s). They may reduce a single request's execution time, but the gain does not compensate for the added complexity.
You are talking about two things (Score:3, Informative)
You can address reliability and throughput by investing a LOT of money in hardware and using things like round-robin load balancing, clusters, mirrored DBMSes, RAID 5 and so on. Then losing a power supply or a disk drive means only degraded performance.
Latency is hard to address. You have to profile and collect good data. You may have to write test tools to measure parts of the system in isolation. You need to account for every millisecond before you can start shaving them off.
Of course, you could take a quick look for obvious stuff like poorly designed SQL databases, lack of indexes on joined tables, and cgi-bin scripts that require a process to be started each time they are called.
Re:2 words (Score:3, Informative)
But who knows... you could be right. I'm just playing devil's advocate.
Re:Here goes... (Score:1, Informative)
4. Get your servers clustered; this will help with server load (not really necessary at this time for what you need, but it will position you for the future) and give you redundancy if a server dies. If this is not possible, look at "warm" backups. But then again, ask the business side what their expectation is when a problem happens, and plan for that.
5. For performance tuning, look at the execution plans of your SQL.
6. Use transactions whenever possible (BEGIN TRANSACTION / COMMIT / ROLLBACK).
7. If you see deadlocks on tables, try using table hints (NOLOCK) on SELECT statements.
8. Get an experienced DBA to peer-review your setup and code if necessary.
And an addition to 1: you can use stored procs as well as views and functions. Moving the SQL code into views/functions brings performance gains because the server has the code already compiled, with a saved execution plan.
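Point 6 is worth a concrete sketch: a transaction makes multi-table updates atomic, so a failure rolls everything back together. Below is a minimal illustration using Python's sqlite3 module (table names and the overdraft rule are invented); sqlite3's connection context manager issues the BEGIN/COMMIT/ROLLBACK for you:

```python
import sqlite3

# Two tables that must stay consistent: an update to one without the other
# would corrupt the books. All names here are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL);
    CREATE TABLE audit_log (account_id INTEGER, delta INTEGER);
    INSERT INTO accounts VALUES (1, 100), (2, 100);
""")

def transfer(src, dst, amount):
    try:
        with conn:  # BEGIN on entry; COMMIT on success, ROLLBACK on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
            conn.execute("INSERT INTO audit_log VALUES (?, ?)", (src, -amount))
            # A made-up business rule standing in for any mid-transaction failure:
            (balance,) = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                                      (src,)).fetchone()
            if balance < 0:
                raise ValueError("overdraft")
    except ValueError:
        pass  # all three statements above were rolled back as a unit

transfer(1, 2, 30)    # commits: balances become 70 and 130, one audit row
transfer(1, 2, 500)   # overdraft: rolled back, nothing changes
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())  # [(70,), (130,)]
```

The same shape applies in any RDBMS: without the transaction, the failed second transfer would have left a debited account and a stray audit row behind.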
Re:Slightly off topic (Score:4, Informative)
Basically, processes are primitives, there's no shared memory, communication is through message passing, fault tolerance is ridiculously simple to put together, it's soft realtime, and since it was originally designed for network stuff, not only is network stuff trivially simple to write, but the syntax (once you get used to it) is basically a godsend. Throw pattern matching a la Prolog on top of that, dust with massive soft-realtime scalability which makes a joke of well-thought-of major applications (that YAWS vs Apache [www.sics.se] image comes to mind,) a soft-realtime clustered database and processes with 300 bytes of overhead and no CPU overhead when inactive (literally none,) and you have a language with such a tremendously different set of tools that any attempt to explain it without the listener actually trying the language is doomed to fall flat on its face.
In Erlang, you can run millions of processes concurrently without problems. (Linux is proud of tens of thousands, and rightfully so.) Having extra processes that are essentially free has a radical impact on design; things like work loops are no longer necessary, since you just spin off a new process. In many ways it's akin to the unix daemon concept, except at the efficiency level you'd expect from a single compiled application. Every client gets a process. Every application feature gets a process. Every subsystem gets a process. Suddenly, applications become trees of processes pitching data back and forth in messages. Suddenly, if one goes down, its owner just restarts it, and everything is kosher.
It's not the greatest thing since sliced bread; there are a lot of things that Erlang isn't good for. However, what you're asking for is Erlang's original problem domain. This is what Erlang is for. I know, it's a pretty big time investment to pick up a new language. Trust me: you will make all that time back by writing far shorter, far more obvious code. You can pick up the basics in 20 hours. It's a good gamble.
Developing servers becomes *really* different when you can start thinking of them as swarms.
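The mailbox-plus-supervisor shape described above can be loosely imitated in Python - very loosely, since Erlang processes are vastly cheaper than OS threads and this mimics only the design, not the efficiency. All names here are invented:

```python
import queue
import threading

# Rough stand-in for an Erlang process: a worker with its own mailbox, plus a
# supervisor check that restarts it if it dies ("let it crash").
class Worker:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self._start()

    def _start(self):
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # orderly shutdown
                return
            try:
                self.handler(msg)
            except Exception:
                return               # the "process" crashes; its supervisor restarts it

    def send(self, msg):
        self.mailbox.put(msg)

    def supervise(self):
        # The owner notices the dead worker and just restarts it.
        if not self.thread.is_alive():
            self._start()
            return True
        return False

seen = []
def handler(msg):
    if msg == "boom":
        raise RuntimeError("crash")
    seen.append(msg)

w = Worker(handler)
w.send("a")
w.send("boom")              # kills the worker
w.thread.join(timeout=5)
restarted = w.supervise()   # supervisor restarts it; the mailbox survives
w.send("b")
w.send(None)
w.thread.join(timeout=5)
print(seen, restarted)  # ['a', 'b'] True
```

In Erlang all of this (links, monitors, supervisors, restart strategies) is built into the runtime and OTP libraries rather than hand-rolled, which is the commenter's point.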
Re:2 words (Score:5, Informative)
The Macy's Door Problem is a great example of a Scalability 101 problem that map_reduce has no way to address. In the early 30s, when most department stores were making big, flashy front entrances to their stores with big glass walls and paths for 12 groups of people at a time, doormen, signage, the whole lot, Macy's elected to take a different approach. They set up a small door with a sign above it. The idea was simple: if there was just the one door, it would be a hassle to get in and out of the store; thus, it would always look like there was a crowd struggling to get in - as if the store was just so popular that they couldn't keep up with customer foot traffic. The idea worked famously well.
In server design, we use that as a metaphor for near-redline usage. There's a problem that's common in naïve server design, where the server will perform just fine right up to 99%. Then, there'll be a tiny usage spike, and it'll hit 101% very briefly. However, the act of queueing and dequeueing withheld users is more expensive than processing a user, meaning that even though the usage drops back to 99%, by the time those 2% overqueue have been processed, a new 3% overqueue has formed, and performance progressively drops through the floor on a load the application ought to be able to handle. I should point out that Apache has this problem, and that until six years ago, so did the Linux TCP stack. It's a much more common scalability flaw than most people expect.
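The collapse described above is easy to reproduce in a toy simulation (all numbers are invented: capacity of 100 users/tick, a brief spike to 101 arrivals/tick, and a small bookkeeping cost per queued user per tick). Once the backlog passes a tipping point, load returns below capacity but the backlog keeps growing anyway:

```python
# Each tick: users arrive, the server processes as many as its remaining
# capacity allows, and managing the backlog steals a little capacity per
# queued user - the "queueing is more expensive than processing" effect.
def simulate(ticks, spike_until, overhead=0.05, capacity=100.0):
    backlog = 0.0
    history = []
    for t in range(ticks):
        arrivals = 101.0 if t < spike_until else 99.0
        effective = max(0.0, capacity - overhead * backlog)
        processed = min(backlog + arrivals, effective)
        backlog = backlog + arrivals - processed
        history.append(backlog)
    return history

no_spike = simulate(100, spike_until=0)
with_spike = simulate(100, spike_until=15)
print(no_spike[-1])    # 0.0: a steady 99/tick is comfortably under capacity
print(with_spike[-1])  # large and still growing, though load is back at 99/tick
```

With these numbers the fixed point sits at a backlog of 20; the 15-tick spike pushes the backlog just past it, and from then on the queue grows geometrically even under a load the server "ought" to handle. That is the Macy's Door effect in miniature.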
Now, that's just one issue in scalability; there are dozens of others. However, map_reduce has literally nothing to say to that problem. Do I need to rattle off others too, or is that good enough? I mean, we have the exponential growth of client interconnections (Metcalfe's Law, which is easily solved with a hub process); we have making sure that processing workload grows linearly (that is, O(1) per client as opposed to O(lg n) or worse), which means no std::map, no std::set, no std::list, only pre-sized std::vector and very careful use of hash tables; we have packet fragmentation throttling; we have making sure that you process all clients in order, to prevent response-time clustering (like when you load an Apache site and it sits there for five seconds, so you hit reload and it comes up instantly); all sorts of stuff. Most scalability issues are hard to explain, but maybe that brief list will give you the idea that scalability is a whole lot bigger an issue than some silly little Google library.
Talk to someone who's tried to write an IRC server. Those things hit lots of scalability problems very early on. That community knows the basics very, very well.
Re:Has anyone actually answered the question? (Score:3, Informative)
I think this guy/girl is looking for something along the lines of this comment [slashdot.org] but in "accepted" book format. It doesn't look like the search returns a "handful"....
Re:Lots of Options (Score:3, Informative)
A good JMS provider is nice to have for HA though. Nothing like durable message storage to help you sleep well.
Re:Slightly off topic (Score:4, Informative)
Really only 2 things to think about at the base (Score:3, Informative)
The first is handled by just about any messaging/queue system: J2EE has had one for ages, and Microsoft has MSMQ (better late than never...).
Then horizontal scaling. Why horizontal? Because just taking a random new box and plugging it into the network is easier and faster (especially in an emergency) than taking servers down to upgrade them (vertical scaling). It also adds redundancy, so the more servers you add to your farm, the less likely your system will go down. There are documents on it all over (Microsoft Patterns & Practices has some on their web sites, non-MS documentation is hard to miss if you google for it, and many third parties will be more than happy to spam you with their solutions), but it really just comes down to: "Use an RDBMS that handles clustering and table partitioning, use distributed caching solutions, push as much as you can to the client side (only stuff that doesn't need to be trusted!), and make sure that nothing ever depends on resources that can only be accessed from a single machine (think local flat files, in-process session management, etc.)."
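The "just plug in another box" idea depends on the servers being stateless, at which point the balancer is almost trivial. A minimal sketch (server names and the round-robin policy are illustrative; real setups use a hardware or software load balancer in front):

```python
# Round-robin over a growable pool of stateless app servers. Because no
# request depends on machine-local state, adding capacity is just an append.
class RoundRobinPool:
    def __init__(self, servers):
        self.servers = list(servers)
        self.i = 0

    def add(self, server):
        self.servers.append(server)  # horizontal scaling: plug in a new box

    def pick(self):
        server = self.servers[self.i % len(self.servers)]
        self.i += 1
        return server

pool = RoundRobinPool(["app1", "app2"])
first_six = [pool.pick() for _ in range(6)]
pool.add("app3")   # emergency capacity: no downtime, no reconfiguration
next_six = [pool.pick() for _ in range(6)]
print(first_six)   # ['app1', 'app2', 'app1', 'app2', 'app1', 'app2']
print(next_six)    # ['app1', 'app2', 'app3', 'app1', 'app2', 'app3']
```

The hard part is everything the comment lists around this: session state, caching, and files must all live somewhere every box can reach, or the round-robin breaks the moment a user's second request lands on a different machine.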
With that, no matter what goes down, things keep on purring, and if someone ever bitches that the system is slow, you just buy a $1000 server, stick a standard pre-made image on the disk, plug it in, and have fun.
Oh, and fast network switches are a must.
Re:Impressive (Score:4, Informative)
The problem is, it's hard to explain why. The overhead of using things like that is tremendous; Erlang's message system is used for quite literally all communication between processes, and a system like Windows Events or MSMQ would reduce Erlang applications to a crawl. Erlang uses an ordered, staged mailbox model, much like Smalltalk's. If you haven't used Smalltalk, then frankly I'm not aware of another parallel.
It's important to understand just how fundamental message passing is in Erlang. Send and receive are fundamental operators, and this is a language that doesn't have for loops, because it considers them too high-level and unspecific (you can build them yourself; I know, that must sound crazy, but once you get it, it makes perfect sense). You're about to see a completely different approach. I'm not saying it's the best, or the most flexible, but I really like it, and it genuinely is very different. What Erlang does can relatively straightforwardly be imitated with blocking and callbacks in C, but that involves system threads, and then you start getting locking and imperative behavior back, which is one of the things it's so awesome to get rid of (imagine - no more locks, mutexes, spinlocks and so forth. Completely unnecessary, in workload, in debugging, and in CPU time spent. It's a huge change.)
Really, it's a whole different approach; you've just got to learn it to get it. I wrote some code to help explain it to you, though of course Slashdot's lameness filters wouldn't pass it, so I put it behind this link [tri-bit.com]. Sorry it's not inline.
Hopefully that will help. Sorry about the lack of whitespace; Slashdot's amazingly lame lameness filter is triggering on clean, readable code.