High Availability Solutions for Databases?

An anonymous reader asks: "What would be the best high-availability solution for databases? I can't afford Oracle RAC or any architecture that requires an expensive SAN. What about open source solutions? MySQL Cluster seems to be more of a master/slave design, and you can lose data when the master dies. What about the Sequoia project, which looks promising for PostgreSQL and other databases? Has anyone tried it? What HA solution do you use for your database?"
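For concreteness, here is a minimal sketch, in Python with the pymysql driver, of the check behind the submitter's worry: measuring how far a MySQL replica lags its master, which bounds how much committed data a failover could lose. The host, user, and password are placeholders, and nothing here is specific to any one HA product.

```python
# Sketch: report a MySQL replica's lag behind its master.
# Assumes the pymysql driver and a monitoring user with the
# REPLICATION CLIENT privilege; credentials are placeholders.
import pymysql

def replica_lag_seconds(host, user, password):
    """Return how many seconds the replica is behind its master."""
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SHOW SLAVE STATUS")
            status = cur.fetchone()
            if status is None:
                raise RuntimeError("host is not configured as a replica")
            lag = status["Seconds_Behind_Master"]
            if lag is None:
                raise RuntimeError("replication thread is not running")
            return int(lag)
    finally:
        conn.close()

if __name__ == "__main__":
    print(replica_lag_seconds("replica.example.com", "monitor", "secret"))
```

Any nonzero lag at the moment the master dies is data the application believed was committed but the replica never saw, which is exactly the loss scenario the question raises.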
  • by Anonymous Coward on Monday November 14, 2005 @11:03PM (#14031773)
    Don't buy the RAC hype. I've seen too many badly performing RAC clusters that Oracle couldn't fix to save their life (and no, they weren't all bad vendor configurations either).
  • by anon mouse-cow-aard ( 443646 ) on Tuesday November 15, 2005 @12:02AM (#14032073) Journal
    It's odd that all these people are answering without hearing a thing about your application. How big is the db? How often is it written? How often is it read?

    For example, we run a site with data from a thousand-odd different data sources, with each source getting updated every hour or so. We do it by parsing the data into static pages: when we receive a datum, we rebuild the pages that depend on it (see the sketch after this comment).

    We have another site that runs off an Oracle db. The static-page site runs about 90x faster and is basically in memory (disk access is nil). Now take into account that we can (and do) replicate the static-page solution with zero added load, and we get to a solution that is literally 900x faster.

    Now folks are thinking 'oh, the horror!' Well... tough! There is no substitute for thinking about your data and how it flows. A DB is not a given; it is a (potentially wrong) answer that you should only reach after doing some analysis.
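The rebuild-on-write idea in the comment above is simple enough to sketch. The dependency map, renderer, and file layout below are illustrative stand-ins, assuming the pages affected by each data source can be enumerated; this is not the poster's actual system.

```python
# Sketch: regenerate only the static pages that depend on an updated
# data source. The map and renderer are hypothetical placeholders.
from pathlib import Path

# Hypothetical map: data source -> static pages that must be rebuilt.
DEPENDS_ON = {
    "station-42": ["index.html", "region-west.html", "station-42.html"],
}

OUT_DIR = Path("static")

def render(page, datum):
    # Stand-in renderer; a real site would use a template engine.
    return f"<html><body><h1>{page}</h1><pre>{datum}</pre></body></html>"

def handle_datum(source, datum):
    """Rebuild every static page that depends on this data source."""
    OUT_DIR.mkdir(exist_ok=True)
    for page in DEPENDS_ON.get(source, []):
        (OUT_DIR / page).write_text(render(page, datum))

handle_datum("station-42", "temp=3C wind=12kt")
```

Serving the result is then just a web server reading files, which is why reads cost almost nothing and the whole page tree can be replicated freely.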
  • by mrselfdestrukt ( 149193 ) <nollie_A7_firstcounsel_com> on Tuesday November 15, 2005 @02:26AM (#14032684) Homepage Journal
    High availability is NEVER as highly available as on paper...
    *sob*
  • by anon mouse-cow-aard ( 443646 ) on Tuesday November 15, 2005 @08:35AM (#14033728) Journal
    In my experience, you're right. But you have to take the long view. You don't just do an HA project, put it in, and walk away. To get more 9's, you start with something that makes sense, then look at every failure that happens and fix its cause.

    Case in point:
    We started off with HA and figured out how to go to a cloned configuration: two servers, two RAIDs, no SPoF, right? Then we had some LAN issues that caused traffic storms, a bug in the RAID controller logic got triggered, and both RAIDs crashed simultaneously. We fixed it by using a different brand of RAID for one of the units. Those servers have not crashed since... (a minimal sketch of the kind of failover watchdog such a pair relies on follows this comment).

    If you do the accounting, the biggest cause of downtime at HA sites is the component sitting 18 inches in front of the keyboard. That's not because people are less skilled than before; it's because we have eliminated the hardware issues, and what you don't automate is exactly the stuff that's too complicated to automate, so only human error in making complicated changes remains. So for every outage there is usually an analyst looking sheepish, but it is usually not his/her fault: the process had some failing in it, and you have to fix the process. It's a lot like what I hear airliner crash investigations are like. Find out what happened, and fix the process so that it doesn't happen again.

    You hone it over years, and every failure or even glitch is precious. Study it.
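As promised above, here is a minimal sketch of the failover watchdog a cloned two-server pair like that depends on: probe the primary, and after repeated failures run a promotion hook on the standby. The host, port, thresholds, and promote command are hypothetical placeholders, and a real deployment must also fence the failed primary to avoid split-brain.

```python
# Sketch: promote a standby after the primary fails several probes.
# Host, port, thresholds, and the promotion hook are placeholders.
import socket
import subprocess
import time

PRIMARY = ("db-primary.example.com", 5432)
FAILURES_BEFORE_FAILOVER = 3
PROBE_INTERVAL = 5  # seconds between probes

def primary_alive(addr, timeout=2.0):
    """Return True if a TCP connection to the primary succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def watch():
    failures = 0
    while True:
        if primary_alive(PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                # Hypothetical promotion hook; real setups must also
                # fence the old primary before promoting the standby.
                subprocess.run(["/usr/local/bin/promote-standby"], check=True)
                return
        time.sleep(PROBE_INTERVAL)

if __name__ == "__main__":
    watch()
```

The code is trivial; deciding when three failed probes really mean "dead" rather than a transient traffic storm is the part that, as the comment says, you hone over years of studying failures.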
