PostgreSQL vs. SAP?

Johann asks: "As my friend and I embark on building a large web site using open source development tools, I planned on using PostgreSQL. I was reminded that another 'enterprise' database is now released under the GPL - SAP DB. Since there have been countless Pg vs. MySQL comparisons on Slashdot, I wanted to ask: how does SAP DB compare technically to Pg?"
  • Is anyone else really happy with this move? I think this is a great step forward for the open-source community. You've got an enterprise-strength application, one for which a company charged money not only for the application itself (no source included) but also for maintenance contracts, installation, and so on.

    This company has realized that the benefits of releasing the source out to everyone outweigh the benefits of keeping it inside.

    Congratulations, SAP! I wish the best for you!
    • Um, what "move"? SAP DB has been GPLed for years. It isn't the only once-commercial-now-free (or even Free?) database either; check out Interbase.
    • I have been running SAP DB on Solaris for almost a year now, in a production environment with PHP. I would have to agree with the other posters about the learning curve; it was fairly steep. The ends definitely justify the means, however. Now that they have released the 64-bit version for Solaris, the performance is even better.

      The database it holds is approximately 1.1 GB. It has numerous tables with composite primary keys spanning two or more columns, plus a plethora of stored procedures. Performance has been very acceptable. It is not as fast as MySQL, but once the system expanded beyond the first two tables, MySQL had a real problem managing it.

      The decision to move to SAP DB was based on a simple question: was the database shaping the code, or the code shaping the database? With MySQL, the limitations of the database (the lack of subqueries being the most annoying) were having a profound impact on the complexity of the code. The move to SAP DB allowed a much simpler, more efficient code base. The database is not quite as fast, but the performance hit there is severely outweighed by the performance gain from removing several hundred lines of PHP code. Additionally, since the database structure could be simplified, report generation also became much easier.

      Realistically, you have to compare the future needs of your data and your user base. Everyone here is worried about transaction support, etc., but the real issue is whether the time invested circumventing the pitfalls of your particular database (MySQL, PostgreSQL, SAP DB, Oracle) is going to outweigh the advantages it brings to the table. I suggest installing several databases and loading a sample set of your data, then trying to retrieve the data in the ways the resulting application would. The database that makes this process easiest is your best choice.
  • interface support (Score:5, Informative)

    by galore ( 6403 ) <ian at labfire.com> on Thursday July 18, 2002 @11:50AM (#3908973)
    one big thing to consider is the interface to the db you'll be using. i chose postgres for a large server-side java web-application, and while i have zero complaints about postgres the database, the jdbc postgres interface is a complete mess. if you're doing anything beyond the most basic operations, you'll find a _lot_ of the jdbc-2 spec completely or partially unimplemented (and these shortcomings are, of course, undocumented). i've put far too many hours into poring over jdbc driver code in the last year with postgres. i give the developers credit for making steady progress in the last year, but it just isn't there yet.

    like you, i've been pondering the switch to SAP-DB - from looking through the source, their jdbc implementation seems to be very complete. the only problem i've run into with SAP is the lack of readable documentation... the manual seems to have lots of information, but it isn't exactly developer-friendly.

    if this is an "enterprise" level application, i see little choice but to dive into SAP and figure it all out - otherwise i think you'll run into bigger problems down the road.
    • Re:interface support (Score:1, Interesting)

      by Anonymous Coward
      Funny. I thought the documentation was pretty readable myself, most of the syntax stuff looked similar to BNF.
    • Can you elaborate on the deficiencies you've found in the JDBC implementation?
      if this is an "enterprise" level application, i see little choice but to dive into SAP and figure it all out - otherwise i think you'll run into bigger problems down the road.
      I think that's an unfair assessment. The only specific criticism you've made of PostgreSQL is the quality of the JDBC driver -- I'm not sure why that has a negative impact on "enterprise level" applications in general (i.e. non JDBC stuff)...
      • I guess I had similar problems:

        First of all, the binary downloads of PostgreSQL don't seem to include the enterprise JDBC drivers (implementations of the interfaces in javax.sql.*), which I wanted to use to implement connection pooling in an application using JDK 1.4 (a rough sketch of the pooling idea appears at the end of this comment).

        Downloading the latest PostgreSQL from CVS (a few months ago) did appear to include the source for this; however, it wouldn't build with JDK 1.4 because many new methods have been added to the interfaces since the implementation was last updated (and even many of the methods that were there just throw an org.postgresql.Driver.notImplemented exception).

        By modifying the source so that the missing methods are also implemented as stubs that throw notImplemented exceptions, I have successfully built drivers that appear to work correctly with my connection pool manager (and, having since gotten a proper internet connection, I've gathered from the mailing lists and newsgroups that other people have done the same thing).

        The problem with this is that you no longer have a program that just works out of the box (and building from source also entails creating or suitably modifying some init.d scripts).

        OTOH it was an educational experience, and made me seriously consider learning more about PostgreSQL and maybe trying to join the development effort (though I haven't actually done so yet, too many other things have come up)

        I have also used the C++ interface, which seems to work OK, but it is much less powerful than the JDBC implementation could be if it implemented some of the cool features, such as writable result sets and prepared statements, that the Java interfaces declare (but which PostgreSQL doesn't yet implement)
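
        To make the pooling idea above concrete, here is a minimal, hand-rolled sketch in plain JDBC. It is only an illustration under assumed names -- the class and methods are invented for this example, not part of the PostgreSQL driver or of the javax.sql ConnectionPoolDataSource machinery, and a real pool needs sizing limits, connection validation, and more careful thread-safety than a synchronized free list.

        ```java
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.util.LinkedList;

        // Hypothetical, minimal pool: hand out idle connections from a free list,
        // open new ones on demand, and take them back when the caller is done.
        public class SimpleConnectionPool {
            private final String url, user, password;
            private final LinkedList<Connection> free = new LinkedList<Connection>();

            public SimpleConnectionPool(String url, String user, String password) {
                this.url = url;
                this.user = user;
                this.password = password;
            }

            public synchronized Connection get() throws SQLException {
                if (!free.isEmpty()) {
                    return free.removeFirst();   // reuse an idle connection
                }
                return DriverManager.getConnection(url, user, password);
            }

            public synchronized void release(Connection conn) {
                free.addLast(conn);              // return it to the free list
            }
        }
        ```

        A caller would grab a connection with get(), use it, and hand it back with release() in a finally block; the javax.sql interfaces discussed above formalize the same idea behind a standard API.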

        • by nconway ( 86640 )
          I have also used the C++ interface, which seems to work OK, but it is much less powerful than the JDBC implementation could be if it implemented some of the cool features, such as writable result sets and prepared statements, that the Java interfaces declare (but which PostgreSQL doesn't yet implement)
          For the C++ interface, you might want to try libpqxx -- it's much nicer (IMHO) than the default libpq++. It's been integrated into the CVS tree and will be in the next release of PostgreSQL.

          Updateable result sets for JDBC have been implemented in CVS, and will be in 7.3. I implemented prepared statements and that should be in CVS soon, but I'm not sure what the status of JDBC support for that is.
          Downloading the latest PostgreSQL from CVS (a few months ago) did appear to include the source for this; however, it wouldn't build with JDK 1.4 because many new methods have been added to the interfaces since the implementation was last updated (and even many of the methods that were there just throw an org.postgresql.Driver.notImplemented exception).
          Hmmm... this will definitely need to be fixed before the 7.3 release (if the 1.4 methods aren't implemented, they should at least be declared and throw exceptions so that JDBC will compile out of the box with JDK 1.4).
      • psql-jdbc doesn't support prepared statements. well, it doesn't actually prepare them.

        you can't return multiple rowsets from a stored procedure.

        • Re:interface support (Score:2, Interesting)

          by nconway ( 86640 )
          psql-jdbc doesn't support prepared statements. well, it doesn't actually prepare them.
          I implemented preparable statements for PostgreSQL a couple of weeks ago -- the patch is on pgsql-patches. It should be in CVS in a couple of days, and will be in the 7.3 release.
          you can't return multiple rowsets from a stored procedure.
          This has also been implemented and will be in 7.3 (at least, table functions for functions defined in C -- not sure if there's support for PL/PgSQL table functions yet).
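
          For reference, the JDBC usage pattern being discussed looks like the sketch below; the API is the same either way, and what the 7.3 work described above is meant to change is whether the plan is actually prepared once on the server. The connection URL, credentials, and the customers table are placeholders invented for the example.

          ```java
          import java.sql.Connection;
          import java.sql.DriverManager;
          import java.sql.PreparedStatement;
          import java.sql.ResultSet;

          public class PreparedExample {
              public static void main(String[] args) throws Exception {
                  // Placeholder connection details; substitute your own database and user.
                  Connection conn = DriverManager.getConnection(
                          "jdbc:postgresql://localhost/test", "user", "password");

                  // The statement text is fixed; only the parameter changes per execution.
                  PreparedStatement ps = conn.prepareStatement(
                          "SELECT name FROM customers WHERE id = ?");
                  for (int id = 1; id <= 100; id++) {
                      ps.setInt(1, id);
                      ResultSet rs = ps.executeQuery();
                      while (rs.next()) {
                          System.out.println(rs.getString("name"));
                      }
                      rs.close();
                  }
                  ps.close();
                  conn.close();
              }
          }
          ```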
      • Re:interface support (Score:2, Interesting)

        by galore ( 6403 )
        Can you elaborate on the deficiencies you've found in the JDBC implementation?
        sure. for starters, take a look here: http://lab.applinet.nl/postgresql-jdbc/ -- those are the results of the jdbc compliance tests, ringing in at around 43%. peruse that page and get ready to live with all those shortcomings when writing your application. the last few things that have bitten me personally are the slew of methods not implemented (or that just return empty strings...) in DatabaseMetaData and ResultSetMetaData, the lack of any sort of performance gain from using PreparedStatements, and the fact that the org.postgresql.Datasource barely works (no distributed transaction support, and no connection pooling). i believe only recently have cvs checkins been made to allow updatable recordsets. though i can see from your other posts you're aware of these issues, it's still a bummer. (a short snippet after this comment shows one way to probe some of these metadata gaps.)
        I think that's an unfair assessment.
        you're correct in that i'm only evaluating JDBC support. however, that's a pretty big chunk of your enterprise web-app marketshare. i have no experience with the other interfaces, and so of course can't comment on their degree of quality.
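
        One practical way to see how much of the spec a given driver fills in is to interrogate DatabaseMetaData directly and watch for the empty strings and "not implemented" exceptions mentioned above. This is only a rough sketch: the connection URL and credentials are placeholders, and the checks shown are just a small sample of what DatabaseMetaData can report.

        ```java
        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class DriverCapabilityCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder URL and credentials for whichever driver is under test.
                Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/test", "user", "password");
                DatabaseMetaData md = conn.getMetaData();

                System.out.println("Product: " + md.getDatabaseProductName()
                        + " " + md.getDatabaseProductVersion());
                System.out.println("Driver:  " + md.getDriverName()
                        + " " + md.getDriverVersion());
                System.out.println("Batch updates:     " + md.supportsBatchUpdates());
                System.out.println("Updatable results: " + md.supportsResultSetConcurrency(
                        ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE));
                System.out.println("Stored procedures: " + md.supportsStoredProcedures());

                conn.close();
            }
        }
        ```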
        • for starters, take a look here: http://lab.applinet.nl/postgresql-jdbc/ those are the results of the jdbc compliance tests, ringing in at around 43%
          Note that that page refers to the JDBC driver included with PostgreSQL 7.1 -- I'd be interested to see the results of running those tests against the JDBC driver included with PostgreSQL 7.2 or CVS HEAD.
          though i can see from your other posts you're aware of these issues, it's still a bummer.
          Yes, our JDBC implementation has a number of deficiencies, as you've pointed out -- I think your criticisms are justified. I'm a C programmer, so using Java more than necessary is a bit of a grating experience :-) Nevertheless, I've just started working on getting the latest source to compile with JDK 1.4. Beyond that, I'll see what else I can do...
    • it's true that postgresql's back-end architecture limits a driver's compliance with the jdbc specification.

      it's also true that the jdbc driver that ships with postgresql is poorly executed.

      this driver [sourceforge.net] has more potential to be the jdbc driver of choice for postgresql. i'm already using it in a number of applications despite its "alpha" status (the lead developer is keeping it alpha until it's fully compliant with the jdbc spec, which of course can't happen until the backend changes).
  • by zulux ( 112259 ) on Thursday July 18, 2002 @12:35PM (#3909284) Homepage Journal
    I love PostgreSQL - it's worked wonderfully for us, but....

    In all seriousness, if you use SAP DB, you could probably raise your rates just due to the SAP name alone. The name 'SAP' is also good for covering your ass when things get difficult.

    SAP and Oracle have expensive reputations, and you can charge accordingly. We do it all the time for some of our nastier customers, when anything with the word 'Cisco' on it needs care.

    It depends on the relationship you have with your customer whether you go that route, but it's worth considering.

  • by bjpirt ( 251795 ) on Thursday July 18, 2002 @02:27PM (#3910415)
    something strange going on here ;-)

    http://www.sapdb.org/history.htm
    • The black bars are weird. If you use the Wayback Machine you can see what they obscure: http://web.archive.org/web/20011130142504/http://www.sapdb.org/history.htm [archive.org]

      The removed text is

      • Subsidiary of Software AG
      • Software AG quits R/3 DBMS market
      • ADABAS D
      • ADABAS D
      • ADABAS D

      I think a marketing drone just doesn't want to show any sign of weakness in the product's history.

      • SAP licensed Adabas D from Software AG some time back, with some powerful rights to the source - making a code fork. Both companies then developed the database further in different directions. Once SAP decided to open source the renamed SAP DB (still based on the same Adabas core), I imagine Software AG was a little pissed off and doesn't want the info to flow around too much... they're still trying to sell Adabas D for a whole lot of money while SAP DB is GPLed. And BTW, from all that I know, SAP DB has evolved much faster than Adabas D.
    • Wow. From the look of it, SAP was developed as a government military research program! At least those are the same black marks that black out all the important parts of government documents released under the Freedom of Information act.
      Maybe the blacked-out parts are the names of an advanced alien race that the government stole the technology from using Jeff Goldblum and a laptop.
  • If you are not hell-bent on a "free as in speech" database, I heard Sybase is offering its database for Linux for free (as in "beer"). It's not the latest version (that one has to be paid for), but it's already a very mature product, and it's free for developers AND commercial use as well. Or you can have a look at Firebird (Interbase); it's pretty mature.
  • by mikehoskins ( 177074 ) on Thursday July 18, 2002 @02:57PM (#3910720)
    I'm sorry that most of the responses haven't answered your question. I'll try to partially answer it, but I bet you already know all of this. I really like PostgreSQL and use it. If I wanted to put together a really big web site, I'd try SAPDB, or just go with either DB/2 or Oracle.

    My info about SAPDB is research-based, as I did a lot of that. Please feel free to correct me, in any way, if I'm wrong. MySQL has steadily improved, so a couple of the items below may be out of date.

    I see that SAPDB is really an Oracle killer. It is a true enterprise-ready DB. If you want all the features of a big DB, such as replication, partitioning, etc., use SAPDB. You can get true commercial support, etc. Read their PDFs and be impressed.

    However, you might want to know why I chose PostgreSQL over MySQL and SAPDB:

    1.) PostgreSQL is really ACID compliant, as is SAPDB. MySQL, on the other hand, hasn't yet proved itself in this area. Give MySQL a few more months, though, and we'll see, with version 4.1.

    2.) Hosting companies support MySQL really well, and PostgreSQL only partially well. I have yet to see SAPDB as a hosting offering. Oracle is rare, too, but expensive.

    3.) All three are free, in both senses of the word, while DB/2, Sybase, Oracle, etc., are not. (No preference in this area.)

    4.) There is an enormous userbase for MySQL, and a sizable one for PostgreSQL, so I can get help from peers 24 by 7. SAPDB, unfortunately, does not have much in this area.

    5.) PostgreSQL 7.1.x+ is supposed to scale better in many ways compared to MySQL 3.xx+. Many benchmarks seem to bear out the fact that PGSQL annihilates MySQL 3.xx, after about 5-10 or so web users. PGSQL seems to beat Oracle up through and beyond 100 simultaneous web users. I cannot find any benchmark on SAP, anywhere.

    6.) Installing PostgreSQL and MySQL is easy, easy, easy. It's not so easy with SAPDB. I'm no neophyte, but when you consider remote hosting issues, I want a system I can quickly rebuild if hardware dies.

    7.) PostgreSQL has enough enterprise-ready features for my web site. MySQL did not (and probably still does not). SAPDB is almost overkill. Views, triggers, foreign keys, constraints of various kinds, stored procedures, versioning, hot backups, etc., are all available in PostgreSQL and SAPDB. One responder indicated that programming time is more important than benchmarks -- amen, preach it, brother. Your time spent programming and tweaking is far more expensive than hardware and most software.

    8.) After working with PostgreSQL and MySQL, I found that care and feeding (DBA work, in particular) was very simple, straightforward, and quick. SAPDB smacks of Oracle in terms of tweaking, complexity, etc. (I was an Oracle DBA for two years; a RedBrick DBA; worked for Informix; and have administered MySQL, PostgreSQL, and even lowly Access.) I haven't actually tried SAPDB, because it looks like a major investment of time.

    9.) Books, web sites, and other literature are readily available for MySQL and PostgreSQL. SAPDB's included documentation is most excellent, however, followed by MySQL's included docs. PostgreSQL suffers in this area -- buy the books.

    10.) As I scale up, I probably will have to consider something other than PostgreSQL, like SAPDB, DB/2, or Oracle. I refuse to look at Sybase or, especially, its ancestor, SQL Server. On the other hand, PostgreSQL is semi-tunable, and the development team plans to add replication, etc., in the coming months. I'll have to wait and see. If SAP ERP can be hosted on SAPDB, then, well, it'll scale, no question. I'd contact the author of BinaryCloud for more objective info here. http://www.binarycloud.com/

    11.) Warning, total subjectivity: something about PostgreSQL seems "clean," compared to MySQL. I can't say what it is, but there is a big lack of business-likeness to MySQL, other than what I listed above. I'm sure the same is true about SAPDB, and more so, since it's got a real business and an ERP behind it.

    12.) PostgreSQL, like MySQL, has a user-friendly SQL command line, with readline support, etc. I don't know about SAPDB, but I expect it to be less so, like Oracle's SQL*Plus. Can somebody help me out, here?

    In conclusion, my first choice, for now, is PostgreSQL; my second would be SAPDB; my third is actually MySQL, followed by the commercial products. Again, I've never used SAPDB, but I hope to in the near future; it seems "too big" for my needs right now, and information outside of sapdb.org is scarce. I hope the community really gathers around it soon, so we can have a more objective look at the product. We need support groups and books available at our local book stores.

    SAPDB looks absolutely excellent, while PostgreSQL looks good. MySQL has the potential to be a business/enterprise-ready product in a couple of years. I like the fact that SAPDB uses ODBC/UDBC for its native calls, like SQL Server, Access, and DB/2. MySQL, PostgreSQL, Oracle, and the like require drivers to translate ODBC, UDBC, and JDBC calls back and forth, slowing things down and adding complexity.

    For PostgreSQL, I gotta say, the thing that really impressed me was versioning, which makes transaction support and hot backups easy while keeping performance very, very high.

    Try out both databases. Use PGSQL today and SAP tomorrow, as you grow into it.
    • Ok, I have a few spelling, grammar, and punctuation mistakes. I was in a hurry, so subtract that from your flame bait, please.
    • I also failed to point you at Firebird/Interbase -- I wasn't thinking. Firebird might be a PostgreSQL killer. I have actually seen a web host, or two, with Interbase/Firebird support.

      Almost all of the PostgreSQL/SAPDB advantages are available to Firebird/Interbase.

      My list:
      PostgreSQL (for now)
      SAPDB or Firebird (future growth)
      MySQL 4.1+ (maybe, maybe not)

      I have done less research about Firebird than SAPDB. All 4 give a great deal of choice. Firebird is based off of Interbase and has a user community probably larger than SAPDB's. SAPDB still kills Firebird/Interbase and PostgreSQL in terms of enterprise features.
    • by Anonymous Coward
      I refuse to look at Sybase or especially it ancestor, SQL Server.

      I think you meant it the other way around. MSFT bought a copy of Sybase and it became SQL Server.

    • Odd (Score:2, Interesting)

      by Betcour ( 50623 )
      I agree with about everything but point 11:
      Warning, total subjectivity: something about PostgreSQL seems "clean," compared to MySQL. I can't say what it is, but there is a big lack of business-likeness to MySQL, other than what I listed above

      I'm following both releases pretty closely, and frankly I have the exact opposite feeling. While MySQL adds features one by one, PostgreSQL seems to absorb lots of stuff at each release. While this is great feature-wise, it can lead to pretty nasty bugs and instabilities.

      Just look at the latest release, for example (from the PostgreSQL history):
      "it fixes a critical bug in v7.2: sequence counters will go backwards after a crash"

      That bug is really critical (can you imagine the state of your data when counters go backward?), but it took two months to be discovered and fixed. On the other hand, the MySQL history lists only very minor bug fixes, most of which you are highly unlikely to hit and which have little effect. Actually, the MySQL developers make the effort of documenting every petty bug fix, which is a very professional thing.
      • While MySQL adds features one by one, PostgreSQL seems to absorb lots of stuff at each release. While this is great feature-wise, it can lead to pretty nasty bugs and instabilities.
        I have to disagree with you there. The long release cycle of PostgreSQL has the goal of integrating features and giving them a long time to stabilize and be tested in the tree before they are released. The beta periods last for a long time, and a release is only made when all the developers are satisfied that no more bugs can be found. So while PostgreSQL only makes releases on an occasional basis, every effort is made to ensure that those releases are as stable as possible.
        That bug is really critical (can you imagine the state of your data when counters go backward?), but it took two months to be discovered and fixed.
        Yes, that was a bad bug. On the other hand, it had been present for a long time -- IIRC, the bug was present in 7.1.0, 7.1.1, 7.1.2, and 7.2.0, at which point someone reported it and it was fixed. So it's fairly safe to say that it only occurred under very, very rare circumstances, and likely affected very few users.
        Actually, the MySQL developers make the effort of documenting every petty bug fix, which is a very professional thing.
        Every PostgreSQL release includes the full CVS changelogs from between the previous release and the current one -- so every single change made to the source (including typo fixes, etc.) is listed. A more concise set of release notes is also made available, which highlights the major user-visible changes and bug fixes.
        • Re:Odd (Score:2, Insightful)

          by Sxooter ( 29722 )
          Just think of it. PostgreSQL crashes so seldom (and even then, I've never seen a crash caused by anything but bad hardware) that a problem that could only show up during a crash took well over a year to surface.

          But let me ask the parent of the parent I'm responding to: how does MySQL recover from crashes? Does it have a write-ahead log? Does it use some other method to guarantee data integrity in case of a database crash? I'm not being facetious; I don't use MySQL, so I don't know.
          • You have to repair the tables before restarting. MySQL claims to be atomic in its data operations, and it appears that repairs are mostly just re-indexing to make sure the indexes agree with the data. I've never lost data due to a crash (well, nothing that hadn't been written within a few seconds of the crash.)

            The time to repair is causing us a few issues. The 60GB of data we have takes approximately 1.5 hours to repair after a crash (single-process repair). We're currently experimenting with how many processes the hardware can support for optimum repair time. 32 processes is looking good, but that's a quad UltraSparc 2 w/ Fibre Channel.

            I've had to deal with quite a few crashes recently. It looks like our hardware is a lot bigger than MySQL can test on in house. The Sparc boxes (mentioned above) have had about 10 crashes in the past year. The Quad Xeon box has had 0 crashes in the 2.5 years that I've maintained it. YMMV.
            • I've never lost data due to a crash (well, nothing that hadn't been written within a few seconds of the crash.)
              So in other words, you've lost data due to a crash :-)

              So does MySQL not flush writes to disk before a transaction commits? Your reply seems to suggest that MySQL waits for the kernel to flush disk buffers to disk -- if that's the case, it definitely sounds like there is a potential for data loss.
              I've had to deal with quite a few crashes recently.
              Have you considered using PostgreSQL?
                So does MySQL not flush writes to disk before a transaction commits? Your reply seems to suggest that MySQL waits for the kernel to flush disk buffers to disk -- if that's the case, it definitely sounds like there is a potential for data loss.

                Transactions? You must be thinking of another database. MySQL only does transactions if you use the not-supported-out-of-the-box InnoDB table type. And that defeats the whole point of MySQL, because InnoDB is about the same speed as PostgreSQL.

                The "Data" I've lost was usually still in the middle of being written. Some of the writes takes more than a few seconds to complete, so I guess the few seconds overlaps with "not done yet". That's what I get for posting drunk ;-)

                MySQL does flush the data out to disk, but it's less aggressive about flushing indexes out to disk. In general, though, most of my repairs after a crash are just to be safe. MySQL has a "clean" bit in the tables, so we check tables that are labelled unclean. Most of the time the unclean tables are fine, and the bit is cleared. It still takes a while to verify, though. Sorta like checking your ext3 filesystem after a crash. MySQL isn't journaled, but our filesystem is. That might help a bit.

                Have you considered using PostgreSQL?

                We're too far gone. See my other comments [slashdot.org]

                • Transactions? You must be thinking of another database.
                  Sorry -- I meant "atomic operation" (which in most databases is a transaction, I suppose it's a single query in MySQL).
                  The "Data" I've lost was usually still in the middle of being written.
                  Ah, ok. If the "atomic operation" hadn't completed yet, there is no way the database can (or should) store the results of the operation -- everything should be discarded.
                  We're too far gone. See my other comments
                  Ah, I see. Good luck! :-)
            • So, has anyone done anything like run a script that issues a "select nextval('sequence');" and then kill -9 mysqld, to see if the counter stays incremented? If it stays at the old value but the new value gets committed, then that's the same bug PostgreSQL had but no longer has. (A rough JDBC version of this test is sketched below.)

              One of the interesting things to try with PostgreSQL is to run a couple dozen transactions and kill -9 the postmaster, or, even more severe, hit the big red switch, and then reboot. With a journaling file system and PostgreSQL's write-ahead logs, the whole system is back up in literally minutes. And the transactions that were pending are each either committed in full or not at all.

              I've run it on smaller sparc boxen running linux (Sparc 20, ultra 1, 2 etc...) and it seems to be quite stable and fast. Solaris, not so fast. :-)
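
              Here is a rough JDBC version of the nextval test described above. It is only a sketch: the sequence name crash_test_seq is made up (create it first with CREATE SEQUENCE crash_test_seq), the connection details are placeholders, and the kill -9 and restart happen outside the program.

              ```java
              import java.sql.Connection;
              import java.sql.DriverManager;
              import java.sql.ResultSet;
              import java.sql.Statement;

              public class SequenceCrashTest {
                  public static void main(String[] args) throws Exception {
                      // Placeholder connection details and a hypothetical sequence name.
                      Connection conn = DriverManager.getConnection(
                              "jdbc:postgresql://localhost/test", "user", "password");
                      Statement st = conn.createStatement();

                      // Keep pulling values; kill -9 the server part-way through, restart it,
                      // re-run this program, and check that the first value printed after the
                      // restart is higher than the last one printed before the crash.
                      for (int i = 0; i < 1000; i++) {
                          ResultSet rs = st.executeQuery("SELECT nextval('crash_test_seq')");
                          rs.next();
                          System.out.println(rs.getLong(1));
                          rs.close();
                      }

                      st.close();
                      conn.close();
                  }
              }
              ```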
          • how does MySQL recover from crashes

            If you use InnoDB, it has a double-write system that takes care of graceful recovery in case of crashes. MyISAM tables don't have any specific protection, but they seem to never ever get corrupted, save for the indexes, which might need to be repaired after a crash. Either way, MySQL is pretty solid; it has never lost data on me in years and seldom crashes (I have database uptimes of between one and three months, which is not too bad IMHO).
    • RE: Docs. There are two [postgresql.org] books [postgresql.info] about PostgreSQL online. Buying them is certainly encouraged, but not necessary!
    • I looked into using PostgreSQL for some of my personal projects. I've been using MySQL, but I want the triggers and stored procedures that the more sophisticated databases offer.

      Most of what you mentioned above compares well with my findings. I take issue with only one. You mention that PostgreSQL is clean compared to MySQL. I suppose that could mean many things. After studying the database syntax, I didn't feel PostgreSQL was nearly as "clean" as MySQL.

      It has been a while since I've read the PostgreSQL manual, so I don't have the best examples. But I found many more inconsistencies and "why the heck did they do it that way" features in PostgreSQL. One token example is escaping data. Depending on the situation, you may need to escape a character with one, two, or even four backslashes. That makes putting binary data into a PostgreSQL database fairly difficult. With MySQL, you only have to backslash the apostrophe and the backslash -- very simple and clean. With most other databases, there is a prepared statement feature that directly transfers data as a separate blob. With PostgreSQL, good luck getting your binary data in.

      But when all is said and done, other than syntax complaints, PostgreSQL is an amazing database system.
      • You mention that PostgreSQL is clean compared to MySQL. I suppose that could mean many things. After studying the database syntax, I didn't feel PostgreSQL was nearly as "clean" as MySQL.
        It's certainly much more compliant with the SQL92/99 standards.
        One token example is escaping data. Depending on the situation, you may need to escape a character with one, two, or even four backslashes.
        The syntax [postgresql.org] for string literals indicates that you only need a single backslash to escape an input value. If the input value you're trying to create has an escape sequence of its own (e.g. a regular expression), you'll need to do both SQL escaping and regular expression escaping. I'm not sure what the problem with this behavior is, or how to implement it any better...
        With most other databases, there is a prepared statement feature that directly transfers data as a separate blob. With PostgreSQL, good luck getting your binary data in.
        If you need to load binary data, the easiest method would probably be to use PQescapeBytea(), or the equivalent function provided by your language interface.
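
        As a concrete alternative to counting backslashes by hand, the JDBC route is to pass the bytes as a parameter and let the driver do the escaping -- with the caveat that how well this works depends on the driver's bytea support, which is part of what this thread is about. The connection details, file name, and images table below are placeholders invented for the example.

        ```java
        import java.io.File;
        import java.io.FileInputStream;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class InsertBinary {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/test", "user", "password");

                // Read the file into a byte array (good enough for a small local file in a sketch).
                File f = new File("logo.png");
                byte[] data = new byte[(int) f.length()];
                FileInputStream in = new FileInputStream(f);
                in.read(data);
                in.close();

                // The driver escapes the bytes; no manual backslash counting required.
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO images (name, content) VALUES (?, ?)");
                ps.setString(1, "logo.png");
                ps.setBytes(2, data);
                ps.executeUpdate();

                ps.close();
                conn.close();
            }
        }
        ```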
        • My comment was referring to this. [postgresql.org] I just was too lazy to look it up last night.

          I agree about the conformance -- PostgreSQL is fairly good in that respect. Just as a matter of taste, I prefer MySQL's way of doing certain things over the ANSI standard. YMMV.

          (Don't get me wrong. I'm trying to make the switch from MySQL to PostgreSQL because I want the power PostgreSQL has. I'm just finding it hard, because I keep running into these roadblocks where I think "MySQL does this so much simpler!")
    • I've been using SAPDB in production for over a year now, running commercial web site.

      Here are my corrections to a couple big misunderstandings:

      1. SAPDB uses a fixed cache buffer per database. For example, you can give 128MB of RAM to a database for DATACACHE. The problem is that there seems to be no way to SHARE the data cache between multiple database instances, which makes it poorly suited for web hosting applications... there is a lot of RAM overhead for each instance of SAPDB. Yes, you could create one instance and use permissions to split your hosting customers, but hope that your customers never try to create the same table name :) Products like Access and MySQL are a lot better in this regard.

      2. SAPDB has no replication! The "replication" is nothing more than an import/export manager - and it still seems pretty immature. Bugs are being found in it all the time when people actually use it.

      3. SAPDB error messages and documentation are not very easy for newcomers. This is for people who have worked with MS-DOS 3.0 and Linux 2.0 :)

      4. At least for Windows users, the ODBC drivers seem like they were built on an old codebase and updated. They don't have an OLEDB driver, no interest in tuned dotNET drivers, etc. Another example of things to watch for: UNICODE is supported in the database, but not by the ODBC drivers! They are supposed to be working on it, but it was a shock to me to find that UNICODE wasn't working.

      I think people are wrong to compare Oracle to SAPDB. It has some compatibility, but it doesn't even seem to compare in maturity. I'm not saying that SAPDB is unreliable, but some basic problems have existed that you would assume would have shown up if more people were actually using it!

      Example of problem: If you ran a single program that did a SELECT that returned no results, a memory leak took place that would hit you after 2000 or so such SELECTs.

      I think it was previously used mostly for R/3 and just doesn't have a lot of usage outside of R/3 - so a lot of bugs are yet undiscovered. As people switch from other apps to SAPDB, all new SQL is being thrown at it, and the bugs are getting found and fixed.

      Having a dedicated (SAP) development staff is great for this type of maturity... as the bugs are found, they get fixed. Yet it seems obvious to me that we are turning the Titanic here (really old codebase).

      I'm sticking with it, but I think sometimes people assume it is going to be really "high-end" in terms of features and maturity. Don't assume...
    • Out of curiosity, why do you refuse to look at Sybase? I have worked with it as well as Oracle and (gag) Ingres, and while real sequences are certainly nicer than identity columns, Sybase was generally pretty nice. DBA work definitely seems to be less hassle with Sybase than with Oracle.
  • by PizzaFace ( 593587 ) on Thursday July 18, 2002 @04:43PM (#3911586)
    PostgreSQL and SAP DB are both good products. A few differences I've noticed might influence your choice:
    • Platform independence: SAP DB supports Windows better [skippingdot.net] than PostgreSQL does. They both have good support for unices of various flavors.
    • Concurrency: Both databases support ACID transactions. PostgreSQL uses multi-version concurrency control, so reads and writes won't interfere with each other, while SAP DB uses row-level locking. In this respect, PostgreSQL v. SAP DB is similar to Oracle v. DB2 (a small JDBC sketch after this comment illustrates the difference).
    • Curriculum vitae: PostgreSQL originated at Berkeley, and has very cool features like MVCC and functional indexes [postgresql.org]. SAP DB descends [accpro.com.sg] from Adabas D, which was marketed as the PC-sized little brother of Adabas for mainframes, and SAP's focus is on providing robust support for business applications such as SAP's customer-relations and supply-chain products.
    • Developers: PostgreSQL is developed by a global community of volunteers. SAP DB is developed by a team of employees (FAQ [sapdb.org] says 100) in a major software company. Development of both products is active.
    You can do good work with either product.
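
    To make the concurrency point above concrete, here is a small JDBC sketch of how MVCC behaves: a reader holding a snapshot neither blocks nor is blocked by a concurrent writer, whereas a purely lock-based scheme may make one wait for the other, depending on its isolation settings. The connection details and the accounts table are placeholders invented for the example.

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MvccDemo {
        static int readBalance(Connection c) throws Exception {
            Statement st = c.createStatement();
            ResultSet rs = st.executeQuery("SELECT balance FROM accounts WHERE id = 1");
            rs.next();
            int balance = rs.getInt(1);
            rs.close();
            st.close();
            return balance;
        }

        public static void main(String[] args) throws Exception {
            String url = "jdbc:postgresql://localhost/test";   // placeholder
            Connection reader = DriverManager.getConnection(url, "user", "password");
            Connection writer = DriverManager.getConnection(url, "user", "password");

            // The reader runs one long snapshot transaction.
            reader.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            reader.setAutoCommit(false);
            System.out.println("before update: " + readBalance(reader));

            // The writer commits an update; it does not wait for the reader.
            Statement st = writer.createStatement();
            st.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 1");
            st.close();

            // Still the old value: the reader sees its snapshot, not the new row.
            System.out.println("after update:  " + readBalance(reader));
            reader.commit();

            // A fresh transaction sees the committed change.
            System.out.println("new snapshot:  " + readBalance(reader));
            reader.commit();

            reader.close();
            writer.close();
        }
    }
    ```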
  • I did some performance testing of our application with some databases (on linux, via unixODBC); a mixture of selects, inserts and deletes.

    here is my (absolutely unscientific) ranking:

    1. SAPDB
    2. Firebird
    3. PostgreSQL
    remarks:
    • All three databases performed well
    • (RANT: It is a nuisance that there is no standard for stored procedure calls!)
    • SAPDB was almost twice as fast as PostgreSQL on inserts (deletes were only a little faster)
    • Firebird: somewhat slower than SAPDB. I didn't know then that there are also some tuning possibilities.
    • MySQL: I didn't even test it because it isn't ACID (I know, there are recent additions, but ...)
    we have been using sapdb now for half a year and it rocks!

    Of course it is heavier than the other databases, and it is a little hard to get started (the beginners' documentation could be better, although there are web sites providing valuable hints).

    Would I recommend it for medium or big sized installations?
    Yes, without hesitation.

  • Here are the differences I know of (not that many, I don't use SAPDB, but I have looked it over a bit.)

    Postgresql uses MVCC (multi-version concurrency control). That allows multiple transactions to run, with each viewing the database as it was at the instant the transaction began. Other than Oracle and Postgresql, I don't think any other database uses MVCC.

    Row level locking is a win for a data warehouse, with few writes and many reads, but in a heavily updated transactional environment it hits a brick wall pretty quickly.

    The other big difference which not one person has mentioned yet is the license.

    Postgresql is BSD style (do what you will) while SAP was released under the GPL.

    For some companies this may be an important difference, like if you're building "black box" apps that you sell (think network appliance type stuff)
    • Other than Oracle and Postgresql, I don't think any other database uses MVCC.
      Interbase [borland.com], Firebird [sourceforge.net] and Solid [solidtech.com] also use versioning for concurrency control.
      Postgresql is BSD style (do what you will) while SAP was released under the GPL.
      The SAP DB license [sapdb.org] isn't really onerous. The database kernel is under the GPL, so if you distribute the server on a CD-ROM, you need to put the server's source on the CD. If you allow a download of the server, you need a link to the server source. The programming interfaces and client utilities are under the Lesser GPL, and can be distributed as binaries and linked to closed-source software.

      This licensing doesn't restrict my rights to my application software at all. As for letting customers know the identity of the server software, that's no problem because SAP DB is easier to sell than PizzaFace DB.
    • Row level locking is a win for a data warehouse, with few writes and many reads, but in a heavily updated transactional environment it hits a brick wall pretty quickly.
      Why would row level locking be a win (compared to MVCC) in that environment? A read doesn't generate another version of the tuple, so you don't suffer any MVCC overhead.
  • I gave PostgreSQL a try. I consider myself a very experienced database developer and DBA: I've used Oracle (not the i versions), MS SQL Server versions 6.5, 7, and 2000, Informix, Interbase, and Phoenix.

    Interbase is great as an embedded database, and it has all the necessary features, but according to all the literature I can find, including the Phoenix online docs, it doesn't support more than 100 megs or so of RAM. For an enterprise-level database, that's a joke. The Phoenix ODBC drivers are a bit dicey: the one provided by Phoenix doesn't handle text fields at all. We tested a VB application, and attempting to view a text field (that's a blob type, not a varchar) using that driver caused an immediate crash, every time.

    I thought PostgreSQL was the answer, but I found a couple of problems. Through the ODBC driver there's no way to prepare an SQL statement. That doesn't sound like a big deal, but it means I can't build libraries with all the intelligence I like. The bigger deal is reliability. I had a Delphi application that I was using to move data from local tables to my server. On some tables (always the same ones, and never others), after about 1,000 records I would start getting data errors. When I looked at the PostgreSQL logs, something in the process was changing the characters. I had a log on the client of what was sent, and a log inside PostgreSQL of what the database got, and they didn't match. I ran this at least a dozen times, and it was repeatable.

    So I've been exploring SAP. One of my requirements is that I be able to access the database from either Linux or Windows using ODBC. I can't get ODBC working against the current server download on SUSE 8.0 or Red Hat 7.2, even from the same machine, but I'm told that SUSE 7.3 works. From reading the logs, it appears to be a glibc version incompatibility. Next week I hope to have a SUSE 7.3 machine built to test that. If it works, I plan to burn some incense to the database gods in thanks and appreciation of SAP, because I think it is going to answer all my prayers.
