
High Availability Solutions for Databases? 83

An anonymous reader asks: "What would be the best high availability solution for databases? I don't have enough money to afford Oracle RAC or any architecture that requires an expensive SAN. What about open source solutions? MySQL cluster seems to be more master/slave and you can lose data when the master dies. What about this Sequoia project that seems good for PostgreSQL and other databases? Has anyone tried it? What HA solution do you use for your database?"
  • by Anonymous Coward
    Don't buy the RAC hype. I've seen too many misperforming RAC clusters that Oracle couldn't fix to save their life (and no, they weren't all bad vendor configurations either).
  • by cravey ( 414235 ) * on Monday November 14, 2005 @11:19PM (#14031861)
    While MySQL supports master/slave replication, MySQL Cluster specifically avoids that entire model. It's an entirely synchronous database storage engine. If you want master/slave, use postgres. If you want high availability and can handle the lack of a small number of features, MySQL Cluster is the way to go. The only real downside to the architecture required for Cluster is that all of the data is stored in RAM-based tables. Transactions are logged to disk at a configurable time interval. If you're going to try for HA, you might want to RTFM on the available options before you settle on one.
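
    To make the engine choice concrete: picking Cluster is a per-table storage-engine switch. A minimal sketch in Python, assuming mysql-connector-python and a cluster SQL node on localhost (host, credentials, and the table are placeholders, not anything from the thread):

      import mysql.connector   # assumes mysql-connector-python is installed

      conn = mysql.connector.connect(host="localhost", user="app",
                                     password="secret", database="test")
      cur = conn.cursor()
      # ENGINE=NDBCLUSTER stores the table in the cluster's RAM-resident data nodes
      cur.execute("CREATE TABLE accounts ("
                  "  id INT PRIMARY KEY,"
                  "  balance INT NOT NULL"
                  ") ENGINE=NDBCLUSTER")
      conn.close()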
    • by afabbro ( 33948 ) on Monday November 14, 2005 @11:31PM (#14031925) Homepage
      The only real downside to the architecture required for Cluster is that all of the data is stored in RAM-based tables

      ...for an "only real downside" that's a pretty big one. I mean really, what sort of database is this - 256MB? 500MB? 1GB? Fine for small websites, not fine for large apps. I don't mean to be a "big shop" snob, but this is a ridiculous limitation.

      Unfortunately, open source hasn't caught up to the big guys yet in the area of replication.

      • Name a DB from a 'big guy' that doesn't require a shared resource. Oracle? Nope. RAC requires a shared storage solution. Lose your single storage device and you lose access to your db. MS-SQL? Don't make me laugh. SAP? No. Postgres? No. Firebird? No. Who's left? MySQL is the ONLY 'shared nothing' (tm) solution available.

        Yes, the RAM only tables suck for large DBs. On the other hand, they're REALLY fast and they can be easily scaled up on commodity hardware rather than requiring faster and bigger hardware.
        • RAC is one way to cluster in Oracle. There are ways to do replication though. The easy ones are one way, but multimaster local/distant site replication is totally doable.

          I have yet to find anything Oracle can't do. It is a bit of an 8000 pound gorilla though...
          • Multimaster replication really sucks for HA. It only pushes transactions every N seconds, so you lose data if your primary master fails. You can theoretically set it up to push continuously but it leaks all kinds of resources.

            I really don't think they had HA in mind when they designed multimaster. Actually, I don't know what the hell they were thinking...
            • "so you lose data if your primary master fails"

              No, you won't.

              Two phase commit, remember?
              • Oracle multi-master does not use two phase commit.
                • So what? The point remains that _under proper configuration_ losing the master node doesn't mean losing data (or even worse, risking unconsolidated data)
                  • Technically you are correct, you will not lose data, but it won't be available until the master comes back.
                    I should clarify that I don't mean the real master. The scenario goes something like this:
                    Client commits a change to Node A. Node A queues the change to be sent to node B.
                    Node A explodes.
                    That change never gets to node B until you repair and restart node A.

                    This is why multi-master is really not a good solution for HA.
                    • "Client commits a change to Node A. Node A queues the change to be sent to node B.
                      Node A explodes."

                      Then the client either won't receive an OK status for its transaction (since it won't be declared as committed till Node A receives confirmation from node B and confirms again it has received that OK - we are talking now about 2PC, remember?) and it will retry, or it will receive that OK status from node B if the system is configured to do so in case of losing a node. It will then receive either a timeout (if still t
                    • But Oracle MultiMaster does not use a two phase commit! There is no guarantee that node B has received the data after the client commits. There is a guarantee that the data has been queued and will eventually be propagated to node B.

                      The problem comes when a client commits to node A, then switches to using node B (because it has detected node A exploding). It expects the data that it committed earlier to be there now, but it isn't.
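
                      A toy version of that queued-propagation failure, as a pure-Python sketch (all names invented; real multi-master replication is far more involved than this):

                        import queue

                        class Node:
                            def __init__(self):
                                self.data = {}
                                self.outbox = queue.Queue()   # changes waiting to be pushed

                            def commit(self, k, v):
                                self.data[k] = v              # local commit succeeds...
                                self.outbox.put((k, v))       # ...change is merely queued for the peer
                                return "OK"                   # client already sees success

                            def push_to(self, peer):          # runs every N seconds
                                while not self.outbox.empty():
                                    k, v = self.outbox.get()
                                    peer.data[k] = v

                        a, b = Node(), Node()
                        a.commit("row1", "hello")             # client got "OK"
                        # node A explodes before push_to(b) ever runs
                        print(b.data.get("row1"))             # None: the "committed" change never reached B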
        • Oracle? ... MS-SQL? ... SAP? No. Postgres? No. Firebird? No. Who's left?

          Sybase?

        • You forgot Oracle Advanced Replication (or maybe you just didn't know it existed). It's ideal for running across a (relatively) slow WAN and allows for 100% database uptime, even if an entire site is lost.

          It was not trivial to set up back then (probably much better now), but that's what DBAs are for. If you're a highly compensated admin, complexity is not necessarily your enemy.
        • by delta407 ( 518868 ) <slashdot@l[ ]jhax.com ['erf' in gap]> on Tuesday November 15, 2005 @10:49AM (#14034487) Homepage
          Name a DB from a 'big guy' that doesn't require a shared resource. Oracle? Nope. RAC requires a shared storage solution. Lose your single storage device and you lose access to your db.
          ...
          The only time I will EVER have to take my cluster down is to add more storage space or to replace my switch.
          Sorry, what? You're bashing Oracle for having a system with a single point of failure, then add two of your own?

          "Shared storage" doesn't mean "not highly available". If you're serious about building a database that cannot go down, you get multiple servers, each with two Ethernet interfaces and two Fibre Channel cards. Get two Fibre Channel switches, and two Ethernet switches. Get two Fibre Channel disk arrays. Hook together as follows.

          Both disk arrays have one or more uplinks to both FC switches. Each server has one HBA connected to each switch. Then, use Linux multipathing to provide automatic failover in case either switch dies or either HBA dies, and use Oracle ASM or Linux MD to mirror the data so you're good even if you unplug the shared storage.

          Set up the servers to use 802.1q VLAN tagging. (Remember, it's a good idea to keep your inter-node communication separate from client-to-server communication.) Create two Ethernet bridges, using the Linux bridging driver, binding eth0.2/eth1.2 to one and eth0.3/eth1.3 to the other. Take the two switches, set up 802.1q on the ports going to your database servers, and connect them together (preferably using high-speed uplinks, but channel bonding works fine). Enable spanning tree on the switches and on the bridges. Connect both switches to the rest of your network.

          Now, if a switch starts on fire, spanning tree on the servers will fail over to the other one automatically. If a network cable gets cut or a network card goes out on one node, that one node will fail over all traffic to the remaining interface, and the inter-switch trunk will make everything keep working. Suddenly, you've got a robust network.

          So, we've got network and storage covered. What about the database software? Neither MySQL nor Postgres can use this sort of configuration, but Oracle RAC can. Add a dash of TAF, and suddenly, any component -- network switch, database server, SAN switch, disk array -- can fail in the middle of a SELECT and the application won't notice. That is highly available.

          Yes, this solution costs more. Our data -- more accurately, the cost of it not being accessible -- justifies the expense. But don't tell me that shared storage is a weakness.
          • I'm not a network engineer so I may be missing some subtleties of your design, but Oracle RAC does have a single point of failure. Even if you mirror the data between 2 disk arrays, you need a quorum disk to avoid the split brain problem. In our case, we wanted to cluster Oracle between 2 data centers with duplicate systems in each. The problem is if networking were completely lost between the 2 DC's (small chance but not non-existent) but each Oracle instance were running then they each would think the other
            • Yep, Oracle RAC requires a quorum disk (CRS calls it a "voting disk"). However, if you use OCFS2 to store the data files (including the voting disk) the solution above cannot produce split-brain. Putting the data files in OCFS2 puts the responsibility of guarding the shared storage in the filesystem, which solves the problem as follows.

              Each node in an OCFS2 cluster updates a special file every two seconds and maintains a TCP connection to its neighbors. If this process doesn't work (i.e. writes are failing,
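
              The disk-heartbeat idea is simple enough to sketch in pure Python (file names and timings here are invented; OCFS2's real heartbeat writes to fixed per-node slots on the shared disk, not to text files):

                import time

                HEARTBEAT = "/shared/.heartbeat"   # hypothetical file on the shared storage
                STALE_AFTER = 6.0                  # ~3 missed two-second beats

                def beat(node_id):
                    # each node rewrites its own beat file every two seconds
                    with open(f"{HEARTBEAT}.{node_id}", "w") as f:
                        f.write(str(time.time()))

                def peer_alive(node_id):
                    # peers treat a stale (or missing) beat as a dead node and fence it
                    try:
                        with open(f"{HEARTBEAT}.{node_id}") as f:
                            return time.time() - float(f.read()) < STALE_AFTER
                    except (FileNotFoundError, ValueError):
                        return False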
              • For the OCFS2 special file not to be a single point of failure, I assume it's mirrored between both storage arrays. If you lose all connectivity between data centers, assuming each DC has a storage array and a node then don't you still have both halves thinking they are it and still have split brain? That's 2 simultaneous outages and if your company is anything like those I've worked with 1 outage may suffice.

                In my current company, our network group tends to keep things to themselves and we'd never have kno
          • Remarkably, that's not how the Oracle reps explained it to me. I went around with them for quite some time trying to figure out a way to get that to happen properly and they kept telling me that there was not a way to protect against a SAN device failure.

            As for availability on MySQL: the switch limitation is an OS issue. I chose to run an OS that can't handle multiple interfaces per subnet, though that issue has been fixed in the latest version. As for adding storage space, I believe that fix is scheduled
            • Remarkably, that's not how the Oracle reps explained it to me. I went around with them for quite some time trying to figure out a way to get that to happen properly and they kept telling me that there was not a way to protect against a SAN device failure.

              When was this? I'm fairly certain that much of what I described won't work on anything older than 10g, and it's really, really painful to attempt this using ASM. This only becomes practical with OCFS2, which (as I recall) is currently supported only by SL

        • MySQL is the ONLY 'shared nothing' (tm) solution available

          Not quite. Check out the shared nothing architecture of DB2 Universal DB [ibm.com]. You can get DB2 for AIX, Windows, Linux, and other platforms. UDB offers a High Availability Disaster Recovery [ibm.com] solution.
        • there's also no other way to do it without either spending ridiculous amounts of money or being dependent on a single piece of hardware.

          That's not true, along with some other things you've mentioned from various marketing materials.

          For starters, there's no such thing as "shared nothing" clustering. You have to have a shared resource to have a cluster.
          Solutions that claim to be "shared nothing" actually share a network. Once you've gone that far, there's no reason not to do replicated network block devices,
        • Mnesia is a database written in Erlang, a functional programming language. Erlang has support for concurrency built into the language and it does concurrency really well.

          Mnesia was built to run non-stop forever because it's supposed to automatically run on a clustered server. This gives it fault tolerance. The best kind, because no matter what PC you buy, some day it's going to break. You can reconfigure it while it's running and although you're better off using Erlang to interface to the database, SQL is available
      • by jorela ( 84188 ) on Tuesday November 15, 2005 @08:21AM (#14033681)
        Actually we have implemented disk storage.
        Hopefully it will make it into version 5.1.

        MySQL Cluster 4.1/5.0 supports:
        * transactions
        * transparent data partitioning and transparent data distribution
        * recovery using logging
        * (multi) node failure and automatic non-blocking hot sync for crashed/stopped nodes
        * hot backup

        MySQL Cluster 5.1 will support:
        * user defined partitioning
        * cluster to cluster async. replication (like "ordinary" mysql replication)

        The disk implementation supports:
        * putting a column in memory or on disk
            (currently indexed columns have to be in memory)
        * all HA features mentioned above.
    • by Joff_NZ ( 309034 ) on Tuesday November 15, 2005 @02:13AM (#14032646) Homepage Journal
      The other thing to note with MySQL Cluster is that, even in 'stable' releases of MySQL, it is horribly unstable and prone to massive data loss.

      We deployed it ourselves, and it worked ok for a while, but things went very very wrong when we tried updating one of the configuration parameters, causing us to inexplicably lose quite a bit of data.

      Avoid it. At least for another generation or two, or three.
      • MySQL was prone to massive data loss for years... what's the big deal?
        We deployed it ourselves, and it worked OK for a while, but things went very very wrong when we tried updating one of the configuration parameters, causing us to inexplicably lose quite a bit of data.

        They changed the database parms on part of a large clustered HA system and it is the system's fault you lost data? I would be very interested in hearing what parm(s) were changed. If you mean they changed a parm on part of the cluster, then have them "move away from the keyboard, before someone gets hurt."
        • The parameter in question was just increasing the maximum number of indexes allowed in the database. Not something which you would expect to destroy everything.

          Believe me, it was MySQL's fault.. NDB is seriously seriously flakey
      • Try running the latest CVS nightly and see if it fixes your problem. While it may sound like a bad idea, it's what the mailing list people told me to do repeatedly when I was suffering from these issues. This was for non-cluster mysql and years ago. We stopped using mysql due to many issues like this so my experience is somewhat outdated and limited, but it seemed like MySQL coders never released any production quality code.
  • by digerata ( 516939 ) on Monday November 14, 2005 @11:25PM (#14031893) Homepage
    You can use MySQL's in-memory cluster replication, which is pretty cool. You can have quite a few nodes, each serving requests against the same database. However, your database size is limited to the amount of RAM a single node can support. That really limits long term scalability of the database. What, we just hit 8 GB? What do we do? Sorry, boss, I need another ten grand for an unplanned DB upgrade. Also, if you are used to the atomic transactions InnoDB provides, forget that. The cluster storage system NDB does not support all of the features that InnoDB does.

    We chose to go with a Master / Slave option which basically gave us failover within 3 seconds. Any more fine-grained monitoring and the CPU load on the slave gets pretty high. Not ideal, but probably the best option that uses MySQL when you don't want to be tied so much to one platform.
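
    For reference, the watchdog behind that kind of failover can be tiny. A sketch in Python (the host, timings, and the promotion step are placeholders for whatever your setup actually uses):

      import socket, time

      MASTER = ("db-master.example.com", 3306)   # hypothetical master
      CHECK_EVERY, MISSES_ALLOWED = 1.0, 3       # ~3 seconds to declare death

      def master_up():
          try:
              with socket.create_connection(MASTER, timeout=1.0):
                  return True
          except OSError:
              return False

      def promote_slave():
          print("promoting slave")   # placeholder: stop replication, move the VIP

      misses = 0
      while misses < MISSES_ALLOWED:
          misses = 0 if master_up() else misses + 1
          time.sleep(CHECK_EVERY)    # polling much faster than this burns slave CPU
      promote_slave()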

    • by cravey ( 414235 ) * on Tuesday November 15, 2005 @12:07AM (#14032095)
      Your database size is NOT limited to the amount of RAM a single node can support unless you're only running 2 nodes. It's possible that your table sizes may be limited, but I don't believe that that is the case. No, the cluster storage system does not support all the InnoDB features, but InnoDB doesn't support all of the MyISAM features. Does that make MyISAM better?
      • Uh, atomic transactions are hardly a useless bell-and-whistle item. If you have so many people accessing your DB that you need a cluster, then I can't imagine them not needing transactions, unless they are read-only, and you infrequently do updates.

        Transactions are what make DBs such great tools - the system is always consistent. No corruption issues if some process dies halfway through updating a pile of records.

        Any system that expects to compete on this scale really needs to support transactions, unless
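
        That atomicity guarantee is easy to demonstrate with a self-contained example, here using Python's built-in sqlite3 as the transactional engine:

          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
          conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
          conn.commit()

          try:
              with conn:  # one transaction: commit on success, rollback on any error
                  conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
                  raise RuntimeError("process dies halfway through the transfer")
          except RuntimeError:
              pass

          # The half-finished transfer was rolled back; the data stayed consistent.
          print(list(conn.execute("SELECT * FROM accounts")))   # [(1, 100), (2, 0)]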
  • Just use .NET's XML serialisation on a DataSet to use an offline file on a high availability server which has a RAID-5 array. Ok... so that's not really high availability, simple, or really sensible, but it would be a bit of fun.

    How about you have a few Access .mdb files (created using Jet, it's free) updated asynchronously using a few COM objects in Transaction Server? Well, it wouldn't work but the idea is pretty cool. Well actually it isn't really but you know.

    Or just write down all of your data, photocop
    • A typical transaction:
      Phone: Ring Ring
      Son: BEGIN TRANSACTION!
      Mum: Oh, hi dear. How have you been? Did you know that ..
      Son: *cough* Work, remember?
      Mum: Yes, dear.
      Son: WRITE_DOWN into notepad ADDRESSES. fname='John'. sname='Smith'
      Mum: Wait, I have to look up the next identity number thingie. Oooh, that's a big number, I can't even recite it.
      Son: Hush! address='12 Rover Roa..
      Mum: Oh, dear. My pencil broke.
      Son: Ok, Roll back!
      Mum: That's fine for you to say. I don't have any erasers.
    • (why are you called "yanks"?)

      It's debated [wikipedia.org].
  • If you did go the Oracle route, you might take a look at Data Guard. It may be a cheaper option than RAC. It lets your site run in an active/standby mode. I'm not a DBA, but I think you can configure Data Guard with different levels of reliability. Guaranteed synchronization is one of those levels. I'm thinking that it would let you automatically fail over to the standby site if the active site goes down, but you'd have to look at the docs for sure. The nice thing about this route is that you could ha
    • Correct, you can set it to maximum reliability mode, which ensures that the redo logs are written to all sites as the transactions are performed and before they are committed. This can be a big performance killer as you are adding network round trips.
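
      A back-of-the-envelope model of that cost (numbers invented): a synchronous commit cannot acknowledge faster than the slowest standby's round trip.

        local_write_ms = 2
        standby_rtts_ms = [15, 40]    # e.g. one nearby standby, one across a WAN

        async_commit = local_write_ms                         # ack first, ship redo later
        sync_commit = local_write_ms + max(standby_rtts_ms)   # wait for every standby

        print(f"async: {async_commit} ms, sync: {sync_commit} ms")   # async: 2 ms, sync: 42 ms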
    • Data guard is only available in Oracle Enterprise Edition which makes it as expensive as RAC ($40,000/cpu).

      That said you can run a standby database and do manual transfer of archivelog files without using data guard. In this case you only need Standard Edition ($15,000/cpu) or even Standard One Edition ($5,000/cpu) if you only have a one or two cpu machine.

      Standard One Edition on a single CPU is pretty cost effective.
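
      The manual transfer itself is a classic cron job. A hedged sketch in Python (all paths are hypothetical; a real script must also track which logs the standby has already applied):

        import glob, os, shutil

        ARCHIVE_DIR = "/u01/arch"           # primary's archived redo logs
        STANDBY_DIR = "/net/standby/arch"   # e.g. an NFS mount on the standby

        for log in sorted(glob.glob(os.path.join(ARCHIVE_DIR, "*.arc"))):
            dest = os.path.join(STANDBY_DIR, os.path.basename(log))
            if not os.path.exists(dest):
                shutil.copy2(log, dest)     # standby applies these at its leisure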
  • It's called a filing cabinet. It's got full text search and an easy to use index. Although it has 99.9999% availability (it's blocked by crap stacked in front of it the other 0.0001%), it's a bit difficult to make backups without access to the office copier, two toner cartridges and 20 boxes of paper.
    • (it's blocked by crap stacked in front of it the other 0.0001%

      Only 0.0001% of the time? What office environment have you been working in? My co-workers tell me that there's a desk and a filing cabinet under all this paperwork -- I still think it's just a rumor.

  • by anon mouse-cow-aard ( 443646 ) on Tuesday November 15, 2005 @12:02AM (#14032073) Journal
    It's odd that all these people are answering without hearing a thing about your application. How big is the db? How often is it written? How often is it read?

    For example, we run a site with data from a thousand-odd different data sources, with each source getting updated every hour or so. We do it by parsing the data into static pages. When we receive a datum, we rebuild the pages that depend on it.

    We have another site that runs off an Oracle db. The static page site runs about 90x faster, and is basically in memory (disk access is nil). Once you take into account that we can (and do) replicate the static page solution with zero load, we get to a solution that is literally 900x faster.

    Now folks are thinking 'oh, the horror!' well... tough! There is no substitute for thinking about your data, and how it flows. A DB is not a given, but a (potentially wrong) answer to a question after you have done some analysis.
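
    The rebuild-on-write approach in miniature (pure Python, invented names): map each data source to the pages that depend on it and regenerate only those on update.

      import pathlib

      DEPENDS_ON = {   # which static pages depend on which data source
          "station-42": ["out/station-42.html", "out/region-west.html"],
      }

      def render(page, data):
          return f"<html><body>{page}: {data}</body></html>"

      def datum_received(source, data):
          for page in DEPENDS_ON.get(source, []):
              path = pathlib.Path(page)
              path.parent.mkdir(parents=True, exist_ok=True)
              path.write_text(render(page, data))   # readers only ever touch static files

      datum_received("station-42", "temp=3C wind=12kt")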
  • Nice "question" (Score:5, Interesting)

    by photon317 ( 208409 ) on Tuesday November 15, 2005 @12:06AM (#14032087)

    Hello, anonymous Sequoia promoter seeking free advertising. (BTW, you might try picking a product name that normal people can spell without thinking about it.)

    Your solution is not database clustering, and should not be advertised as such. It's more along the lines of a database connection proxy which supports multiple simultaneous backends and operates on them in parallel, with some added features to make HA-like solutions relatively easy.

    The downside of this style of approach, as opposed to an architecture along the lines of Oracle "RAC", is that it doesn't scale up as you add backend nodes (at least not for writes; for read-only scaling there are simpler solutions for all of the vendors, even the free ones). It must also have limits on how many transactions it can backlog and replay to a temporarily-unreachable/down server before that server has to be re-synced from scratch in order to catch back up. (I have to wonder if there's really any real-world scenario under real transaction load in which the practical net effect wouldn't be a complete resync of a backend server anytime something goes wrong with it; in that case one could throw out any attempt to backlog transactions for a single failed server and just keep things simple - you fall out of sync, you resync.)
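
    The core of that proxy style, stripped to a pure-Python sketch with toy in-memory "backends" (everything here is invented for illustration): writes go to every live backend, reads to any one, and a backend that misses writes must be resynced before rejoining.

      class Backend:
          def __init__(self):
              self.rows, self.alive = {}, True
          def execute(self, key, value):
              self.rows[key] = value

      class Proxy:
          def __init__(self, backends):
              self.backends = backends
          def write(self, key, value):
              for b in self.backends:
                  if b.alive:
                      b.execute(key, value)   # replicate the statement to all live nodes
                  # else: real middleware queues this for replay, up to some limit
          def read(self, key):
              for b in self.backends:
                  if b.alive:
                      return b.rows.get(key)  # any live backend can serve reads

      b1, b2 = Backend(), Backend()
      proxy = Proxy([b1, b2])
      proxy.write("k", "v1")
      b2.alive = False
      proxy.write("k", "v2")    # b2 misses this write: it now needs a resync
      b2.alive = True
      print(b1.rows, b2.rows)   # {'k': 'v2'} {'k': 'v1'}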

    The open source world really needs a RAC-like solution for PostgreSQL and MySQL (I'm a fan of friendly open-source competition, so while my personal preference is PostgreSQL, I hope both projects stay current and popular for many years to come). Unfortunately there is unlikely to be a generic way to do this; it will probably have to be re-invented for each database project.

    I took a brief look around PostgreSQL's guts a while back, and it actually seemed like the architecture they use isn't far off from something RAC-capable to begin with; nobody's quite buttoned together the few pieces here and there to make it happen. Basically, on SMP, multiple co-operating backends already serve parallel requests and synchronize on a shared memory cache. There are patches out there for the linux kernel to support network-synchronized distributed shared memory. Put two and two together, and what do you get? Something not far off from a first-pass hack at a RAC-like network-distributed database caching system.

    Most of the other details are easy to solve (start/shutdown, join/leave cluster, tracking of processes across the cluster, etc), or belong in another problem domain (implementing shared storage filesystems (hey, we have GFS, Oracle OCFS, etc available...)). One of the biggest issues would be multiple nodes all having pg "Writer" processes. The first step would probably be to put the writer on one node and fail over the writer functionality when that node dies, to be quickly replaced by a scheme whereby multiple writers can work by synchronizing through a distributed lock manager (there are already dlm modules available for linux). Then there's the issue of making the current distributed shared-memory patches do the right thing performance-wise for this kind of usage, and so on. It's not easy, but it's not outside the realm of possibility.
    • Re:Nice "question" (Score:2, Interesting)

      by cecchet ( 931266 )
      I am working for the Sequoia project. I don't think that this post was made by anyone in our group. I don't necessarily share your view of the advantages of a shared disk architecture over shared nothing.

      As a SAN has to be shared between all nodes in your cluster, this already limits the availability and scalability of your database to your SAN's capabilities (you are shifting the problem to the disk). With a shared nothing architecture you are also replicating the disks and thus distributing the IO workload

      • Note that when you want to synchronize nodes that are not collocated, then a SAN does not work anymore.

        But you can replicate IO across (high-end) SANs up to several hundred km. Totally OS-independent.

        We're talking serious money though...

        This looks like a religious war between shared-disk and shared-nothing solutions, but a SAN and its admin cost is usually not compatible with someone seeking an open source solution. If you can afford the SAN, why not just use Oracle RAC?

        SAN isn't the only way to do sha
        • There does, though, have to be some h/w support for shared-disk. Cluster Interconnect, anyone????

          What, like SCSI? FC? ;) It would re-complexify the situation, but you can just hook up a bunch of SCSI controllers to a chain. Typically this is used for a Hot/Cold situation, but some solutions are Hot/Hot.

          • There does, though, have to be some h/w support for shared-disk. Cluster Interconnect, anyone????

            What, like SCSI? FC? ;) It would re-complexify the situation, but you can just hook up a bunch of SCSI controllers to a chain. Typically this is used for a Hot/Cold situation, but some solutions are Hot/Hot.


            Huh? A chain? That's the least-redundant topology.
  • There are several options on 'master/slave' that can be done. The easiest involves shared storage (2 boxes tied to 1 disk controller): box A goes down, box B notices, imports the disks, mounts them, starts the DB, and you're back. You only lose any transaction that was 'in progress' at failure time.

    Any time you add HA to something, you're adding complexity, and usually a fair bit of it. That's a trade off you need to consider (as is the extra price for software/hardware and support for the solution).
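
    For the shared-storage flavor, the takeover on box B is a short script. Very roughly, in Python (the commands are illustrative only; the real sequence depends on your volume manager and init system):

      import subprocess

      def take_over():
          steps = [
              ["vgchange", "-ay", "dbvg"],                    # activate the shared volume group
              ["mount", "/dev/dbvg/data", "/var/lib/mysql"],  # mount the shared disks
              ["/etc/init.d/mysql", "start"],                 # start the database
          ]
          for cmd in steps:
              subprocess.run(cmd, check=True)   # abort the takeover if any step fails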
  • by Scott ( 1049 ) <stl@ossuary.net> on Tuesday November 15, 2005 @12:08AM (#14032097) Homepage
    The submitter of this question seems to have confused the two, Cluster and the older replication. Cluster does not in any way rely on a master/slave setup. Think of Cluster as RAID for databases, where you can lose a node (or more, depending on your configuration) before you lose your db. The current drawbacks of cluster are that it is in-memory and doesn't support certain features, such as fulltext indexing. Replication isn't going to cause you to lose data either if your application is designed to handle a situation where the master server (which you kick your writes to) hits the bricks. Have the app go into a read only mode from your slave.

    Neither option is really "beautiful", though Cluster has a lot of promise for the future, especially in 5.1.
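
    The read-only fallback mentioned above can live in a thin wrapper around your connections. A sketch (everything here is a stand-in for real connection objects):

      class Conn:                          # stand-in for a real DB connection
          def __init__(self):
              self.alive = True
          def query(self, sql):
              return f"result of {sql!r}"

      class FailsoftDB:
          def __init__(self, master, slave):
              self.master, self.slave = master, slave
          def read(self, sql):
              src = self.master if self.master.alive else self.slave
              return src.query(sql)        # reads keep working from the slave
          def write(self, sql):
              if not self.master.alive:
                  raise RuntimeError("read-only mode: master is down")
              return self.master.query(sql)

      db = FailsoftDB(Conn(), Conn())
      db.master.alive = False              # the master hits the bricks
      print(db.read("SELECT 1"))           # still answered, from the slave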
    • Thanks for clearing that up, Scott. You're right (and the submitter is wrong, or at least misinformed) - Cluster replication is *not* of the master/slave variety.

      Master/slave replication is tentatively scheduled for MySQL 5.1 Cluster. IOW, you'll be able to replicate with a Cluster as a master or slave, or between two separate Clusters. (It's already working, it's just not yet been merged into the main 5.1 tree.) Also, a disk-based Cluster implementation - while not 100% guaranteed at this time - is also a
  • DRBD (Score:4, Informative)

    by Bios_Hakr ( 68586 ) <xptical@gmEEEail.com minus threevowels> on Tuesday November 15, 2005 @01:30AM (#14032462)
    Have you looked into DRBD? It works kinda like RAID1 over a network. It uses 2 computers to store the database. Another computer acts as a heartbeat server. You'll need 3 NICs in the database servers; one for the connection to the network, one (gig-e preferably) for the connection between servers, and one for the connection to the heartbeat server.

    http://www.drbd.org/ [drbd.org]

    If you are smart, you'll play around with this on a test network or VMWare first. Get it all tweaked out and actually test it by killing a server while in mid-transaction to see if it works for you.

      DRBD is one of the most interesting sw projects out there right now. There are two things on my wishlist:

      1. Multicast replication to more than one backup
      2. Write support from more than one system.

      Coupled with GFS, it would be absolutely astounding. Multiple failures happen, and a single backup is not always enough.
  • High availability is NEVER as highly available as on paper...
    *sob*
      In my experience, you're right. But you have to take the long view. You don't just say... let's do an HA project, put it in, and walk away. To get more 9's, you start with something that makes sense, and then look at every failure that happens, and fix the cause.

      case in point:
        We started off with HA, figured out how to go to a cloned configuration: two servers, two RAIDs, no SPoF, right? We had some LAN issues which caused traffic storms, there was a bug in the controller logic, so both RAIDs crashed simultaneously
        • If you run HA in any production site, the company had better be willing to hire and pay $$$. Any engineer knows that HA means you have a lot to lose. Otherwise, why else would you run it?

          • High Availability is actually a cost-cutter for people who have to provide a service. We get paged at 4am, told that a node has gone down; we thank them, go back to sleep, and fix it during the day. Hardware support can be Next Business Day, instead of 24x7, with no impact on the service given to clients. otoh, adding redundancy is not, really not, just something you slap on top of a single node application, unless the entire solution has been architected with that in mind.

          What often happens is that

  • by Anonymous Coward
    If you want to stay within open source Slony + PgPool [google.com] is a viable option. Slony is a very capable master-slave replication system and PgPool is an easy way of handling failover and load-balancing (of reads).
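
    From the application's side the PgPool route is transparent: you connect to pgpool instead of postgres and let it split the traffic. A sketch assuming psycopg2 is installed and pgpool listens on its conventional port 9999 (host, database, and table names are made up):

      import psycopg2

      # pgpool load-balances SELECTs across the Slony slaves and sends writes to the master
      conn = psycopg2.connect(host="pgpool.example.com", port=9999,
                              dbname="app", user="app", password="secret")
      cur = conn.cursor()
      cur.execute("SELECT count(*) FROM widgets")   # hypothetical table
      print(cur.fetchone())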

    For commercial postgresql options, Afilias (who wrote Slony and uses it to power the .NET domain registry [google.com]) will happily sell you commercial support for less than similar microsoft-or-oracle stuff. I think a company called Command Prompt also has a commercial solution for postgresql, but I haven't tried it

  • You want the "best" HA solution but not something too expensive. How about you give us something more to go on, like how much are you willing to spend, how much downtime you can tolerate in the event of failure, are there space/power constraints, etc. Then people can give you a real recomendation instead of the standard MySQL sucks/is great.

  • Hey, first, full disclosure: I work at Avokia. But we do have an availability solution that is cheaper than RAC (it doesn't require a SAN) and combines the value props of both RAC and DataGuard. We virtualize the data layer, enabling many identical databases to be in production serving your users. And you can put these databases into geographically distant datacenters. So you get a live-live-live... set-up without the need for the manual conflict resolution that others require. Check it out at: www.avokia.com Wo
  • Radiant Data PeerFS (Score:2, Informative)

    by darkone ( 7979 )
    http://www.radiantdata.com/ [radiantdata.com]
    Radiant Data has a product called PeerFS which is a replicated filesystem (rw/rw, active/active) which allows you to also hold MySQL databases on it. You run 2 separate MySQL servers pointing to the same data folder, and have it use POSIX locks for writes. The data is physically held on each server, and synced across the network.
    I am testing it at work ( http://www.concord.org/ [concord.org] ) now for our websites. VERY easy to set up, but it supports MyISAM tables, and NOT InnoDB (or the
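
    The POSIX locking it leans on looks like this at the application level (a Unix-only sketch using Python's standard fcntl module; the file path is made up):

      import fcntl

      # Two servers sharing one data directory stay safe only if every writer
      # takes a lock that the filesystem enforces across the whole cluster.
      with open("/peerfs/data/.lock", "w") as f:
          fcntl.lockf(f, fcntl.LOCK_EX)    # blocks until no other node holds it
          # ... perform the write ...
          fcntl.lockf(f, fcntl.LOCK_UN)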
  • You don't need a SAN to run Oracle RAC - it will work using a shared firewire drive apparently.

    RAC is an absolute ******* to install, however; the heavily bugged Oracle Installer is the cause of most of these issues.

    Incidentally for people experiencing blue screens when building a cluster with Windows Server 2003 Enterprise on Dell 2850 servers with AX100 SAN with Oracle's ASM with QLA200 fibre channel cards - flash the QLA card firmware.

    And before anyone asks, I was going to use Linux, but it was act
  • m/Cluster [continuent.com] is a fine solution for MySQL clustering, and it's almost reasonably priced. Be prepared not to use it on RedHat EL4 for a bit while they work out problems with RH's kernel tweaks, but it works really well for our large LAMP and e-mail system.
    • High availability is another one of those marketing buzz words that really doesn't have a good, nailed down definition.

      You can achieve this in three basic ways. Each has its own pros and cons. I recommend that you weigh them out and come to a decision you think you can live with.

      Clustering - You have a group of servers (physical hardware), each running the same software and working to stay synched up with each other. Now clustering comes in two flavors: active/active and active/passive. The active/activ
  • Extended Systems has a pretty good client-server high performance database server called Advantage. I guess they just released a new 8.0 version. It's not open source, but it's affordable, gives you a whole lot of replication and backup features, and supports various clients. Here is their URL: www.advantagedatabase.com
  • If you are dealing with a small db or relatively light transactions, you could set up real-time replication or some other type of rapid change transfer system, depending upon DB vendor. But if changes to the database are rapid, you will need some sort of shared storage. There are storage solutions for under $10,000 that you can purchase and connect multiple servers to. If your data isn't worth that amount to spend, you don't need a cluster.
  • HA could also mean hardware redundancy. Most homebuilt computers now have motherboards with built-in RAID-5 support. It's an easy way to provide not just database HA, but system stability as well.

    Just string a couple of SATA-II drives together and activate RAID support on a motherboard like the LanParty from DFI.

    RAID-5 is more robust and has a higher survival rate in case of hard-drive failures.
