
What Would You Want to See in Database Benchmarks?

David Lang asks: "With the release of MySQL 5.0, PostgreSQL 8.1, and the flap over Oracle purchasing InnoDB, the age-old question of performance is coming up again. I've got some boxes that were purchased for a data warehouse project that isn't going to be installed for a month or two, and I could probably squeeze in some time to do some benchmarks on the machines. However, the question is: what should be done that's reasonably fair to both MySQL and PostgreSQL? We all know that careful selection of the benchmark can seriously skew the results, and I want to avoid that (in fact, I would consider it close to ideal if the results came out with each database winning some tests). I would also not like to spend time generating the benchmarks only to have the losing side accuse me of being unfair. So, for both MySQL and PostgreSQL advocates, what would you like to see in a series of benchmarks?"
"The hardware I have available is as follows:
  • 2x dual Opteron 8G ram, 2x144G 15Krpm SCSI
  • 2x dual Opteron 8G ram, 2x72G 15Krpm SCSI
  • 1x dual Opteron 16G ram, 2x36G 15Krpm SCSI 16x400G 7200rpm SATA
I would prefer to use Debian Sarge as the base install of the systems (with custom-built kernels), but would compile the databases from source rather than using binary packages.

For my own interests, I would like to at least cover the following bases: 32-bit vs. 64-bit vs. 64-bit kernel with 32-bit user space; data-warehouse-type tests (data >> memory); and web performance tests (active data << RAM).

What specific benchmarks should be run, and what other things should be tested? Where should I go for assistance on tuning each database, evaluating the benchmark results, and re-tuning them?"
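
As a rough illustration (not part of the original submission) of how such a test matrix might be driven, here is a minimal Python sketch; the DSN, the "orders" table, and the two queries are placeholders, and a MySQL driver would be swapped in for the MySQL runs:

import csv
import time

import psycopg2  # illustrative choice; use MySQLdb/pymysql for the MySQL side

# Placeholder query mix: real runs would use warehouse-style scans for the
# "data >> memory" case and small indexed lookups for the "active data << RAM" case.
QUERIES = {
    "point_select": "SELECT * FROM orders WHERE id = 42",
    "range_scan":   "SELECT count(*) FROM orders WHERE total > 100",
}

def time_query(conn, sql, repeats=5):
    """Average wall-clock time of one query over several repeats."""
    cur = conn.cursor()
    start = time.perf_counter()
    for _ in range(repeats):
        cur.execute(sql)
        cur.fetchall()
    return (time.perf_counter() - start) / repeats

def main():
    conn = psycopg2.connect("dbname=bench user=bench")  # placeholder DSN
    with open("results.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["query", "avg_seconds"])
        for name, sql in QUERIES.items():
            writer.writerow([name, time_query(conn, sql)])
    conn.close()

if __name__ == "__main__":
    main()
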
This discussion has been archived. No new comments can be posted.

  • by iamsure ( 66666 ) on Friday November 25, 2005 @11:27PM (#14116531) Homepage
    Send an open email to the dev teams on both projects, and ask for their opinions on what should be tested. It might take 3-4 rounds of back and forth to settle on a set of reasonable benchmarks and settings, but at least that way both sides are involved from the beginning.
  • ...is some honest benchmarks of Oracle against Postgres. That would be far more interesting. Does anyone happen to have a copy of Oracle that isn't covered by obnoxious "no benchmarking" licence clauses?
    • by dtfinch ( 661405 ) * on Saturday November 26, 2005 @12:05AM (#14116697) Journal
      What you do is:
      The person publishing the benchmark does not use Oracle.
      The Oracle user running the benchmark remains anonymous.

      But there are many ways that Oracle (or any other database software) can be made to perform badly in a benchmark that would be no fault of the software. If someone wants to benchmark against Oracle, Oracle wants to make sure they do it correctly, or else not at all. If they didn't have that clause, Microsoft would have dozens of studies and benchmarks saying that Oracle is slower than SQL Server under certain setups, just like those bullshit VeriTest benchmarks they have against crippled setups of Red Hat, Apache, and Samba.
  • I would be interested in performance for a single user, but one accessing a very large table and/or running very complicated queries.

    Most benchmarks only deal with multi-user performance. Most problems I had to solve dealt with managing large datasets with complicated relationships, but only one user that has to access them.

    Although I highly suspect that MySQL is not suited for that sort of usage anyway. But I am curious how PostgreSQL would compare there with Firebird.
  • Don't bother (Score:4, Insightful)

    by Anonymous Coward on Friday November 25, 2005 @11:57PM (#14116659)

    in fact I would consider it close to ideal if the results came out that each database won in some tests

    With an attitude like that, there's no point running benchmarks. The idea is that you run the benchmarks to get an idea of how the databases perform. But it seems you are already rejecting one possible result (that one database performs worse than others in all respects) because you don't consider it "fair".

    Well life isn't fair. I'm sure people worked hard on all databases, but that doesn't mean they all have value. Sometimes people try hard and fail. And you want to ignore the numbers that tell you this because you think it's fairer that way? Give me a break, you don't want to run a real benchmark, you want to run something that will tell you what you have already decided upon is the best.

    • The post above is spot on. In addition to his (her?) reasons, we have too many database benchmarks already. The problem isn't finding them, it's reading through the horrible writing. You should seriously consider changing your goals.

      Rather than benchmarking the two different databases against each other, measure how changing each database's settings affects performance, e.g. plot the performance of some staple queries with different memory pool sizes.
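
      A minimal sketch of that kind of sweep, assuming PostgreSQL and its per-session work_mem setting (server-wide pools such as shared_buffers would need a config change and restart); the DSN and the staple query are placeholders:

      import time
      import psycopg2

      SETTINGS = ["4MB", "16MB", "64MB", "256MB"]
      QUERY = "SELECT customer_id, sum(total) FROM orders GROUP BY customer_id"  # placeholder query

      conn = psycopg2.connect("dbname=bench user=bench")  # placeholder DSN
      cur = conn.cursor()
      for size in SETTINGS:
          # work_mem controls per-sort/hash memory and can be set per session
          cur.execute("SET work_mem = %s", (size,))
          start = time.perf_counter()
          cur.execute(QUERY)
          cur.fetchall()
          print(f"work_mem={size}: {time.perf_counter() - start:.2f}s")
      conn.close()
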
    • Give me a break, you don't want to run a real benchmark, you want to run something that will tell you what you have already decided upon is the best.

      And your whining helps because...?

      He didn't claim to be a true scientist. He didn't claim that this result is what MySQL developers or PostgreSQL developers will die over.
      It's the Thanksgiving weekend, and one of the readers wants to do something fun with his spare machines.
      In fact, he's trying to run a test scenario that I happened to be interested in...

    • With an attitude like that, there's no point running benchmarks. The idea is that you run the benchmarks to get an idea of how the databases perform. But it seems you are already rejecting one possible result (that one database performs worse than others in all respects) because you don't consider it "fair".

      Part of the issue with benchmarking is that different types of queries can result in wildly different performance ratios between RDBMSs.

      For example:

      Try joining a really large table in PostgreSQL against
  • This is simple (Score:4, Insightful)

    by MerlynEmrys67 ( 583469 ) on Friday November 25, 2005 @11:57PM (#14116664)
    Take your current application that you need the database for
    Compile your application to use each database
    Now go and compare which database runs fastest on your application

    Anything else just doesn't matter - your application is going to be different than every benchmark, so what you need is to run your application on the database and see what happens.

    What I have usually found is that while you can highly tune the database and get great database benchmarks, most of those gains are wiped out by completely brain-dead applications that do very stupid things, ruining any kind of performance the database will give you.
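
    A bare-bones sketch of that advice, assuming the application's hottest queries have been collected into a list and kept to SQL both engines accept; the connection parameters, the "orders" table, and the queries are placeholders, not anything from the posted setup:

    import time
    import psycopg2
    import pymysql  # placeholder MySQL driver; MySQLdb works the same way

    # Substitute queries captured from your own application.
    APP_QUERIES = [
        "SELECT id, total FROM orders WHERE customer_id = 7",
        "SELECT customer_id, count(*) FROM orders GROUP BY customer_id",
    ]

    def run_workload(conn):
        """Run the whole captured query list once and return elapsed seconds."""
        cur = conn.cursor()
        start = time.perf_counter()
        for sql in APP_QUERIES:
            cur.execute(sql)
            cur.fetchall()
        return time.perf_counter() - start

    pg = psycopg2.connect("dbname=app user=bench")                        # placeholder
    my = pymysql.connect(host="localhost", user="bench", database="app")  # placeholder
    print("postgresql:", run_workload(pg))
    print("mysql:     ", run_workload(my))
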

    • This is simple only if there's no porting effort required at all. Given the bugginess of MySQL (compared to SQL92) and differences between RDBMSs in general, this is somewhat unlikely.

      Even if the application is fully DB-agnostic, who has their schema lying around in two formats?
      • Hmmmm, so it sounds to me like there is something else more important than speed - what database am I currently using.
        So in other words - speed might not even be important at all, imagine that. So why benchmark then?
  • easy, TPC (Score:2, Informative)

    by ZuggZugg ( 817322 )
    TPC-C and especially TPC-H for DW benchmarking. Buy the benchmark kit and run it... done. If you're really serious, have it independently audited and submit the results to TPC.org. You could probably wrangle up some sponsors to help foot the bill.

    Good luck.
  • by eyeball ( 17206 ) on Saturday November 26, 2005 @02:28AM (#14117402) Journal
    I would love to see operations on very large databases, say 100 million or 1 billion records (or even more). Operations like bulk loading, inserting, querying, deleting; against indexed and un-indexed tables; reindexing a whole table (*).

    (*) Reindexing caused me a ton of grief. I inherited a huge MySQL DB once that required an emergency reindex. Unfortunately MySQL locked the table while it did a full table copy, which took hours.
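
    A rough sketch of what timing those large-table operations could look like against PostgreSQL (the row count, "big" table, and DSN are placeholders; the equivalent MySQL run would use LOAD DATA INFILE and ALTER TABLE):

    import io
    import time
    import psycopg2

    ROWS = 1_000_000  # scale toward the 100M-1B range as disk and patience allow

    conn = psycopg2.connect("dbname=bench user=bench")  # placeholder DSN
    cur = conn.cursor()

    def timed(label, fn):
        """Run one step, commit it, and print its wall-clock time."""
        start = time.perf_counter()
        fn()
        conn.commit()
        print(f"{label}: {time.perf_counter() - start:.1f}s")

    cur.execute("DROP TABLE IF EXISTS big")
    cur.execute("CREATE TABLE big (id integer, payload text)")

    def bulk_load():
        data = io.StringIO("".join(f"{i}\trow {i}\n" for i in range(ROWS)))
        cur.copy_from(data, "big", columns=("id", "payload"))

    timed("bulk load", bulk_load)
    timed("create index", lambda: cur.execute("CREATE INDEX big_id ON big (id)"))
    timed("reindex", lambda: cur.execute("REINDEX TABLE big"))
    timed("delete half", lambda: cur.execute("DELETE FROM big WHERE id % 2 = 0"))
    conn.close()
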
    • Locking out table access during a reindex is sometimes a shame, but an "emergency reindex" sounds like you shouldn't have been using the table anyway until it was repaired.

      Unless queries were just very slow because the indexes were poor.

      Sam
      • Oh yes... MySQL makes a copy of the table for every ALTER TABLE command, I think, even if you just want to drop an index. This sucks royally. Sometimes it's quicker to add and remove indexes when you want to run a few queries on a large table (at least it is with Postgres). Locking the table whilst dropping/creating indexes is a huge pain in the ass - but it won't show up in benchmark results. This also means that if you don't have enough disk space to hold a copy of the table, you can't easily alter it :(
        • In fact, while I'm in rant mode, the only reason we're using MySQL at all within last.fm is because it has pretty kick-ass replication. PostgreSQL is miserably lacking on the replication front :'( Although I'm hoping that with the recent addition of two-phase commit, someone will mod pgpool into a synchronous multi-master replication system.
  • How I would do it:

    1) take a snapshot of the database
    2) turn on the query log for the server and run it for a day
    3) install the snapshot on your test servers
    4) play back the query log and see which one goes the fastest.

    There's no reason to complicate it by trying to stress certain functions. I'm looking for real application performance, and I have no control over the queries that the programmers are throwing at the box (unless I see it bogging down the server and I go and hit them over the head).

    --Ajay
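
    A minimal sketch of the replay step, assuming the captured log has already been reduced to one SQL statement per line (the raw MySQL general log needs its timestamp/thread-id columns stripped first) and is replayed against whichever engine is under test; the file name and DSN are placeholders, and PostgreSQL is shown here:

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=snapshot user=bench")  # restored snapshot, placeholder DSN
    cur = conn.cursor()

    start = time.perf_counter()
    count = 0
    with open("queries.sql") as log:           # one statement per line
        for line in log:
            sql = line.strip()
            if not sql:
                continue
            cur.execute(sql)
            if cur.description is not None:    # only statements that return rows
                cur.fetchall()
            count += 1
    conn.commit()
    print(f"replayed {count} statements in {time.perf_counter() - start:.1f}s")
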
  • At work I had a problem recently where there was significant performance loss because of a flood of very many very small and simple queries. I had to replace a bunch of them with one very large, very complicated query to make things run smoothly. Figure out just how many simple select queries you can flood the server with before it starts to choke. Also make an absolutely enormous query, really really big, and see how many seconds it takes to get the result on each of the setups.
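
    A rough sketch of that flood test: ramp up concurrent clients firing a trivial SELECT and watch where throughput stops scaling. The DSN and probe query are placeholders; psycopg2 releases the GIL during queries, so client-side threads are adequate here, and the server's max_connections may need raising for the higher client counts:

    import time
    from concurrent.futures import ThreadPoolExecutor

    import psycopg2

    QUERY = "SELECT 1"  # or a one-row primary-key lookup from your own schema
    DURATION = 10       # seconds per concurrency level

    def client(stop_at):
        """One client connection hammering the probe query until the deadline."""
        conn = psycopg2.connect("dbname=bench user=bench")  # placeholder DSN
        cur = conn.cursor()
        n = 0
        while time.perf_counter() < stop_at:
            cur.execute(QUERY)
            cur.fetchall()
            n += 1
        conn.close()
        return n

    for clients in (1, 4, 16, 64, 128):
        stop_at = time.perf_counter() + DURATION
        with ThreadPoolExecutor(max_workers=clients) as pool:
            total = sum(pool.map(client, [stop_at] * clients))
        print(f"{clients:4d} clients: {total / DURATION:,.0f} queries/sec")
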
  • 1x dual Opteron

    It'll spend too much time task switching. Better to have two CPUs that are each less than half as fast than one really fast CPU.

    16G ram,

    Could you put the 16G into one of the 2x boxes?

    2x36G 15Krpm SCSI 16x400G 7200rpm SATA
    data warehouse type tests (data >> memory);

    DW does not mean that data .GT. RAM. It doesn't even mean that the data is a whole lot bigger than RAM.
  • We'd love to see some benchmarks run on this equipment. It's a great chance for us to evaluate and boost PostgreSQL performance in general. Can you contact us directly? You can find a subscription link here: http://archives.postgresql.org/pgsql-performance/ [postgresql.org] as well as the thread regarding your Ask Slashdot question here: http://archives.postgresql.org/pgsql-performance/2005-11/msg00514.php [postgresql.org]
  • Who cares about performance? If you want to do something useful, then test reliability... Pull the power on the server. Have a failure on a drive, see if you can rescue any data. Corrupt a sector by overwriting it with 0s and see what the engine does.

    For performance you can (almost) always go to a bigger box; if your whole engine goes down and can't recover because of a few bad sectors or insufficient logging or such, you're screwed.

    Peter.
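
    A sketch of the corrupt-a-sector test described above, to be run only against a disposable test instance: with the server stopped, zero one 512-byte block in a table's data file, then restart and query the table to see whether the engine reports the damage or silently misses it. The path and offset below are purely illustrative:

    import os

    DATA_FILE = "/var/lib/test-db/base/16384/16402"  # illustrative path to a table's data file
    OFFSET = 512 * 1000                               # some block in the middle of the file

    with open(DATA_FILE, "r+b") as f:
        f.seek(OFFSET)
        f.write(b"\x00" * 512)   # overwrite one "sector" with zeros
        f.flush()
        os.fsync(f.fileno())

    print(f"zeroed 512 bytes at offset {OFFSET}; restart the server and query the table")
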
  • There are all different kinds of workloads for database servers. Is the workload mostly transactional, moving around thousands of tiny buckets of data, all interrelated by constraints, foreign keys, and triggers, and all being done by thousands of users at once? Or is the workload one where you're trundling through hundreds of gigabytes of data to mine for certain critical points hidden in them, and only a few users at a time will be hitting the system?

    Are we talking about workgroup size database apps, or
  • I'm not sure benchmarks are really the best way to measure between those two products. The common wisdom pretty much goes like this:

    MySQL is much faster
    Postgres does a lot more

    If you can tolerate feature-poor, you go with MySQL for ease + speed
    If you can tolerate slow, you go with Postgres
    If you need feature-rich and fast, you go with Oracle
    If you don't have dedicated DBAs, then you must not care about reliability, so go with SQL Server

    Anyway:

    1) MySQL has corruption problems. Good measures (load testing) would be wo
