Databases

Ask Slashdot: Which NoSQL Database For New Project? 272

Posted by Soulskill
from the mo-sql-mo-problems dept.
DorianGre writes: "I'm working on a new independent project. It involves iPhones and Android phones talking to PHP (Symfony) or Ruby/Rails. Each incoming call will be a data element POST, and I would like to simply write that into the database for later use. I'll need to be able to pull by date or by a number of key fields, as well as do trend reporting over time on the totals of a few fields. I would like to start with a NoSQL solution for scaling, and ideally it would be dead simple if possible. I've been looking at MongoDB, Couchbase, Cassandra/Hadoop and others. What do you recommend? What problems have you run into with the ones you've tried?"
This discussion has been archived. No new comments can be posted.
  • by tubs (143128) on Wednesday April 09, 2014 @05:17AM (#46702801)

    Do you need a database to do what you're trying to do? Why not just write the information to a text file (CSV or tab-separated?), and use other programs to query the data?

    • by Anonymous Coward on Wednesday April 09, 2014 @05:20AM (#46702811)

      Excel Spreadsheet, maybe?

    • by mwvdlee (775178) on Wednesday April 09, 2014 @05:49AM (#46702927) Homepage

      Basically the question is: what's the expected volume of records, and how many fields per record?

      A solution for 100 records a week with 4 fields each would be different from 1000 records per second with 30 fields each.
      1000 records/sec with 4 fields would be yet another solution.

      • Re: (Score:3, Informative)

        by DorianGre (61847)

        We are looking at 99% incoming data, 10-12 fields, 1000-2000 per session per week, X as many users as we can get.

        • So, 10-20 thousand data points, per customer, per week?

          Or, at 100 customers, 50-100 million data points per year?

          Get a real database. And some real horsepower.

        • by aoteoroa (596031)

          We are looking at 99% incoming data, 10-12 fields, 1000-2000 per session per week, X as many users as we can get.

          Our company's accounting system uses Mongo on the backend. With about 30 users, and a database that is 7 GB Mongo performs well and sounds like it would fit your application.

          Having said that I agree with other posters who have suggested that if you want to plan for future growth you would be wise to consider a real database from the start. We are planning a migration to PostgreSQL this year.

    • by Richard_at_work (517087) <richardprice@@@gmail...com> on Wednesday April 09, 2014 @06:00AM (#46702959)

      There's probably an element of multithreaded access that needs to be taken into consideration here: writing to a single text file may get you into trouble if the receiving webserver is multithreaded, since the threads will either have to queue for write locks or each write to a different file.

      Database engines don't have this issue, so while it may be overkill, there may be reasons to have one regardless.

    • by FyRE666 (263011) on Wednesday April 09, 2014 @06:07AM (#46702979) Homepage

      Please don't do this (use a flat file) to store data for a web app that's likely to be accessed by more than one device at a time. Unless you implement your own file locking mechanism, you'll eventually end up with corrupt entries. Even if you do implement your own locking scheme, it's probably not going to be as efficient as using a DB. It's a 5 minute job to set up a new MySQL DB and associated query to push data in, then you can filter and report on it much more easily. It's something DBs are very good at!
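For illustration, a minimal sketch of the locking the parent comment describes, assuming a POSIX system and a Python backend (the file path and field names are hypothetical):

```python
import fcntl
import json

def append_record(path, record):
    """Append one JSON record per line, holding an exclusive lock
    so concurrent writers cannot interleave partial lines."""
    line = json.dumps(record) + "\n"
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until no other writer holds the lock
        try:
            f.write(line)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

append_record("/tmp/events.jsonl", {"device": "ios", "battery": 87})
```

Under real concurrency a database handles all of this for you, which is the point being made above.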

      Unless you have a specific need to scale horizontally, it's generally better to stick with a SQL DB for web apps. I've used MySQL, PostgreSQL and Oracle for this. MySQL is by far the easiest to work with, hence its popularity. I don't actually know of any advantage to using PostgreSQL; it doesn't perform any better, and is (or at least used to be) much less user friendly.

      • by Raumkraut (518382)

        For storing and querying arbitrarily-structured data, which is what the submitter seems to be wanting, a traditional relational SQL database is not necessarily the best way to do it.

        And if anything, MongoDB is easier to start using than any relational database, IME. No need to create databases, schemas, or tables (collections) beforehand - you just install MongoDB, start writing data, and it gets stored.

        • by Richard_at_work (517087) <richardprice@@@gmail...com> on Wednesday April 09, 2014 @06:43AM (#46703099)

          I think many people get stuck in thinking "one single database, that's it, my initial decision condemns me forever", when in fact there's no shame in having many databases.

          Stick the raw data into one database, choose the database that suits that.

          Transform the data from the raw database into something you can use day to day, that's well structured etc.; choose the database for that.

          Transform the data from the day-to-day schemas into something that's more suitable for archiving and long-term reporting; again, choose the database for that.

          You don't have to have one single database type, every particular one has its strengths, so use them!

        • Create a table, get a POST, Insert contents of POST into table...I don't really see how this isn't the best way to do it.
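That approach can be sketched in a few lines; SQLite is used here so the example runs anywhere, and the column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    received_at TEXT NOT NULL,
    device TEXT,
    reading REAL
)""")

# Pretend this dict came straight out of the POST body.
post = {"received_at": "2014-04-09T05:17:00", "device": "android", "reading": 42.5}
conn.execute(
    "INSERT INTO events (received_at, device, reading) VALUES (?, ?, ?)",
    (post["received_at"], post["device"], post["reading"]),
)

# Pull by date, as the submitter asked.
rows = conn.execute(
    "SELECT device, reading FROM events WHERE received_at >= ?",
    ("2014-04-09",),
).fetchall()
```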
        • Re: (Score:2, Informative)

          by Anonymous Coward

          >For storing and querying arbitrarily-structured data, which is what the submitter seems to be wanting
          I dunno. I read TFS and it looks more like he wants rows of tabular data. Were this a STX site, I'd vote to close as too broad since he hasn't actually said anything useful about what he's storing.

          So the default answer to "Which NoSQL database should I use?" is always "Don't use NoSQL."

        • Which is why he might as well use a flat file. If he has structure, then an RDBMS is what he should use. If he's not going to bother to organize the information, then a flat file would be perfect, because all you are after is junk anyway.
      • by Lennie (16154)

        There are a whole lot of ways in which PostgreSQL used to be less user friendly, but they take their time and keep improving it in a consistent way. It has many, many features.

        Personally I really like PostgreSQL. It scales really well.

        And if there is anything missing that some people want, I think you'll find it will be added within the next 3 releases. 9.4 is now in development:
        - upsert/merge in 9.4
        - basis of logical replication in 9.4 (has been available in out-of-tree tools for many years), upcoming v

      • by jythie (914043)
        No need to develop your own locking system, just use whatever logging functionality the server has.
      • I've used MySQL, PostgreSQL and Oracle for this. MySQL is by far the easiest to work with, hence its popularity.

        What about Firebird? Actual transactions - even transactional lazy schema updates - single-file databases, reasonable tools, almost invisible maintenance, everything virtually idiot-proof. Even LibreOffice wants to switch to embedded Firebird for its native database engine. I can't imagine MySQL being anything other than a PITA compared to Firebird.

    • Now scale that. Or just lock it properly.

      If you want simple, scalable and low sysadmin overhead, and all you need are key -> value lookups, then Amazon's S3 can be an excellent choice. You don't need to manage it, you don't need to work out how to add servers, and it's well proven at extremely large scales.

      However, like a lot of other posters, I'm very sceptical that NoSQL is the place to start. SQL databases can do a LOT for you, are very robust and can scale very considerably. As your requirements grow you

    • by Art3x (973401)

      "Think of SQLite not as a replacement for Oracle but as a replacement for fopen()" --- About [sqlite.org]

    • by tubs (143128)

      When I read the post the first thought that came to me was "log files" - you mention date & time, a "number" of fields and "few" fields for reporting. It still sounds like a log file from everything that is said. Indeed, just change from POST to GET and you can use the web server logs :-)

      But, why not build into the design that you may change the "backend" database without having to worry about what is at the backend?

  • Use PostgreSQL (Score:5, Informative)

    by Anonymous Coward on Wednesday April 09, 2014 @05:17AM (#46702803)

    If you need to store less than a few hundred million rows just use PostgreSQL.
    It supports JSON and transactions.
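The JSON-plus-transactions combination looks roughly like this; PostgreSQL's own json/jsonb operators are the real thing, but the sketch below uses SQLite's built-in JSON functions so it is runnable anywhere (schema and field names are hypothetical):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

payload = {"device": "iphone", "signal": -71}
with conn:  # one transaction per insert; rolled back automatically on error
    conn.execute("INSERT INTO events (payload) VALUES (?)", (json.dumps(payload),))

# Query a field inside the stored document, much as jsonb's ->> operator
# would in PostgreSQL.
row = conn.execute(
    "SELECT json_extract(payload, '$.device') FROM events"
).fetchone()
```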

    • Re:Use PostgreSQL (Score:5, Insightful)

      by Lennie (16154) on Wednesday April 09, 2014 @05:37AM (#46702873) Homepage

      Yes, that is what I wanted to point out too.

      Also, PostgreSQL 9.4 has jsonb, which in certain tests less than a year ago was faster than MongoDB.

      • by Lennie (16154)

        Also if you want a key/value store, there is also http://symas.com/mdb/ [symas.com] from a company of some of the OpenLDAP developers.

        Which really seems to have the fastest read performance of them all.

    • Unfortunately, the submitter did give details later.

      Each customer is about 500k data points per year.

      Thousands of customers is a few hundred million rows, per year.

      • by rtaylor (70602)

        Right. So that's 5 years from requiring a NoSQL DB, and hardware/software advancements in that period will likely give another 3 years of easy growth with just a basic Pg installation.

        If it was 10m text/blob records per day, that would be a different animal; but it's probably 1/10th of that.

        • by rycamor (194164)

          A few hundred million rows is no trouble to PostgreSQL, if configured right. And if you go beyond that there are some great ways to deal with the problem:

          1. Partitioning [postgresql.org]: Make a large table composed of smaller subset tables. This is a great way to deal with what is primarily historical data, since you can partition by month, quarter, or whatever time period makes sense for your application. Then, when it comes time to archive or delete old data, all you have to do is migrate that month's table to the archiv
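A sketch of the routing idea behind time-based partitioning, done by hand with SQLite for illustration; in PostgreSQL the server's own partitioning machinery does this for you (table names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def partition_for(received_at):
    """Route a row to a per-month table, creating it on first use."""
    name = "events_" + received_at[:7].replace("-", "_")   # e.g. events_2014_04
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {name} (received_at TEXT, reading REAL)"
    )
    return name

def insert(received_at, reading):
    table = partition_for(received_at)
    conn.execute(f"INSERT INTO {table} VALUES (?, ?)", (received_at, reading))

insert("2014-04-09T05:17:00", 1.5)
insert("2014-05-01T00:00:00", 2.5)

# Archiving or deleting a month is now just dropping one table,
# instead of a slow DELETE over a giant shared table.
conn.execute("DROP TABLE events_2014_04")
```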

  • by Anonymous Coward on Wednesday April 09, 2014 @05:23AM (#46702825)

    You might want to consider a SQL database.

  • by prefec2 (875483) on Wednesday April 09, 2014 @05:25AM (#46702829)

    Based on your information, no one can give you solid advice. It highly depends on the load you expect and on the data model you will use. For a simple Twitter-like feed, you can use a log file or any NoSQL technology. If you only have a few transactions and not billions of entries, you could use PostgreSQL or even MySQL. However, PostgreSQL scales better. If you want to make complex interpretations of graph-like data, you may consider Neo4j as a graph DB.

    • by OzPeter (195038) on Wednesday April 09, 2014 @07:00AM (#46703147)

      Based on your information no one can give you solid advice.

      IMHO the question is deliberately designed to be vague. iPhones and Android devices, PHP and Ruby On Rails .. that is such a shotgun blast of specifications that are totally unrelated to the DB use on the back end that the entire question smells of click bait to me.

      • by khchung (462899) on Wednesday April 09, 2014 @08:07AM (#46703435) Journal

        Based on your information no one can give you solid advice.

        IMHO the question is deliberately designed to be vague. iPhones and Android devices, PHP and Ruby On Rails .. that is such a shotgun blast of specifications that are totally unrelated to the DB use on the back end that the entire question smells of click bait to me.

        Either that, or the OP simply has no idea how databases work at all.

        If the OP had any idea how a database (any database, not just relational) works, he would be talking about data and transaction volumes, access patterns, transactional requirements, data integrity constraints, retention and housekeeping requirements, etc.

        Instead, as you said, he talked about device platforms, communication protocols, and the language and runtime environment, which are all irrelevant to choosing a database. (OK, the last may be a bit relevant depending on which database is used.)

        • by jythie (914043)
          And here I am out of mod points.

          At first reading something seemed off about the question, and I think you summed it up nicely.

          To me it comes across a bit as the OP asking 'I need some vaguely authoritative sounding reasons for a sexy solution, look at my keywords and tell me what is "in" with that community'
  • NoSQL? (Score:5, Insightful)

    by aaaaaaargh! (1150173) on Wednesday April 09, 2014 @05:29AM (#46702839)

    I would like to start with a NoSQL solution for scaling

    And there it is, the proverbial premature optimization ...

    • by mwvdlee (775178)

      Being able to scale from 1 billion records a day to 10 billion a day does not a premature optimization make.

      The simple fact is that there's not enough information to give any reasonable advice.

    • Re:NoSQL? (Score:5, Insightful)

      by Sarten-X (1102295) on Wednesday April 09, 2014 @08:00AM (#46703397) Homepage

      As an expert (relative to most of Slashdot) in NoSQL databases, with a significant amount of experience in Hadoop and HBase systems, I agree wholeheartedly.

      NoSQL solutions can be ridiculously fast and scale beautifully over billions of rows. Under a billion rows, though, and they're just different from normal databases in various arguably-broken ways. By the time you need a NoSQL database, you'll be successful enough to have a well-organized team to manage the transition to a different backend. For a new project, use a RDBMS, and enjoy the ample documentation and resources available.

      • by tigersha (151319)

        Thank you. Someone who talks sense around here.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        As an expert (relative to most of Slashdot) in NoSQL databases, with a significant amount of experience in Hadoop and HBase systems, I agree wholeheartedly.

        NoSQL solutions can be ridiculously fast and scale beautifully over billions of rows. Under a billion rows, though, and they're just different from normal databases in various arguably-broken ways. By the time you need a NoSQL database, you'll be successful enough to have a well-organized team to manage the transition to a different backend. For a new project, use a RDBMS, and enjoy the ample documentation and resources available.

        Agreed. I used a NoSQL database on a project I'm working on at the moment, and stick by that decision even though I don't even have millions of row, but my situation is somewhat different to the OP's: my data model is very difficult to map to SQL (I have hundreds of different entity types, each of which has different field storage requirements, and need to be able to associate between entities of different types according to a variety of rules, meaning that some entity types may have hundreds of different

  • These guys are committed, meaning Mongo has a future. 2.6, which came out the other day, has some nice new features and many bug fixes.
  • light (Score:4, Insightful)

    by invictusvoyd (3546069) on Wednesday April 09, 2014 @05:33AM (#46702853)
    SQLite is a relational database management system contained in a C programming library. In contrast to other database management systems, SQLite is not a separate process that is accessed from the client application, but an integral part of it.
  • by tonywestonuk (261622) on Wednesday April 09, 2014 @05:34AM (#46702855)

    "I'll need to be able to pull by date or by a number of key fields"

    So, in other words, you have already decided on key fields. If you use a database, it has things called indexes, which can search billions of rows for a key field in a fraction of a second.
    If you don't use something with indexes, then you can't do this.

    Where has this idea that databases can't scale come from? The world runs on databases, for heaven's sake. Do you think when you take money out of an ATM, it's going to MongoDB? And yet there are millions of ATMs, and you can take money out of your VISA account in almost all of them anywhere in the world. That is called scale.
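The index lookup described above can be observed directly; a sketch with SQLite (table and index names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (received_at TEXT, device TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(f"2014-04-{d:02d}", "android") for d in range(1, 31)],
)
conn.execute("CREATE INDEX idx_events_date ON events (received_at)")

# The planner now resolves date lookups through the B-tree index
# instead of scanning every row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE received_at = '2014-04-09'"
).fetchall()
```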

    • by cyber-vandal (148830) on Wednesday April 09, 2014 @05:35AM (#46702867) Homepage

      Where has this idea that Databases can't scale come from?

      Salesmen

    • b.bb...but mongodb is webscale!

    • by Raumkraut (518382) on Wednesday April 09, 2014 @06:27AM (#46703029)

      MongoDB has indexes.
      MongoDB also lets you store and query arbitrary data, in addition to any "key fields", without having to pre-define all the possible fields. Which it seems is what the submitter asked for.

      Where has this idea that "NoSQL" means "not a database" come from?

    • by janoc (699997) on Wednesday April 09, 2014 @07:15AM (#46703191)

      Databases don't scale for people who don't understand SQL, don't understand data normalization or indexing, and want to use them as flat files. Unfortunately, a way too common anti-pattern :(

      The second group are too-cool-to-learn kids using the latest development tool fad on the market to build yet another Facebook/Twitter/Instagram/whatever clone ...

      • We have a winner here. When I saw the number of buzzwords in the article, I already thought the worst too.
      • by wvmarle (1070040)

        I've mis-used databases just as you describe. And continue to do so. That's fine, I'm an amateur, and I never needed to handle databases larger than a couple thousand rows. I could probably get away with tens or hundreds of thousands of rows before running into problems.

        Now if I were to develop something that needed a billion rows - that's a different story, and I do know my current approach won't work and I'd have to learn a lot about databases to pull it off. And submitter is obviously trying to do that (

    • by rthille (8526)

      Where has this idea that Databases can't scale come from?

      the CAP theorem [wikipedia.org]

      Consistency, Availability, Partition tolerance. Choose any two.

    • by Kjella (173770)

      Where has this idea that databases can't scale come from? The world runs on databases, for heaven's sake. Do you think when you take money out of an ATM, it's going to MongoDB? And yet there are millions of ATMs, and you can take money out of your VISA account in almost all of them anywhere in the world. That is called scale.

      Of course you can, with lots of money in hardware and software and top-notch database administrators, architects and query designers, but it's a lot of hard work and expensive. The sales pitch for NoSQL is that it's built for horizontal scale-out by design: just throw more servers at it - mainstream servers, not the extremely expensive high-end ones - and it'll scale almost indefinitely without having to rework everything. There's a lot of people in the "when we go viral we must be ready for it" category, wi

  • MariaDB (Score:2, Insightful)

    by Anonymous Coward

    I would consider using the latest release of MariaDB.

    You can use it as a standard MySQL server, but they also have Cassandra NoSQL as an engine for it now (since the release of 10)... So you would be easily able to play with things on different database types and see what suits your situation better.

  • ... since it is web scale. ;-)

    https://www.youtube.com/watch?v=b2F-DItXtZs [youtube.com]

  • Short Intro (Score:5, Informative)

    by emblemparade (774653) on Wednesday April 09, 2014 @05:51AM (#46702933)

    It's a mistake to think that "NoSQL" is a silver bullet for scalability. You can scale just fine using MySQL (FlockDB) or PostgreSQL if you know what you're doing. On the other hand, if you don't know what you're doing, NoSQL may create problems where you didn't have them.

    An important advantage of NoSQL (which has its costs) is that it's schema-free. This can allow for more rapid iteration in your development cycle. It pays off to plan document structures carefully, but if you need to make changes at some point (or just want to experiment), you can handle it at the code level. You can also support older "schemas" if you plan accordingly: for example, adding a version tag or something similar that can tell your code how to handle it. So, even ignoring the dubious potential of better scalability, NoSQL can still be beneficial for your project.
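The version-tag idea above might look like this in plain Python (the field names and the v1-to-v2 change are invented for illustration):

```python
def upgrade(doc):
    """Bring an older-schema document up to the current version.
    In this made-up example, version 1 stored a single 'name' field
    and version 2 splits it into first and last name."""
    version = doc.get("version", 1)
    if version == 1:
        first, _, last = doc.pop("name").partition(" ")
        doc["first_name"], doc["last_name"] = first, last
        doc["version"] = 2
    return doc

old = {"name": "Ada Lovelace"}   # written by v1 of the code
assert upgrade(old) == {"version": 2, "first_name": "Ada", "last_name": "Lovelace"}
```

Calling `upgrade` on every document read means old and new "schemas" can coexist in the same collection indefinitely.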

    More so than SQL databases, NoSQL databases are designed for different kinds of applications, and have different strengths:

    MongoDB is a really good backend engine that gives programmers a lot of control over performance and its costs: if you need faster writes, you can allow for eventual consistency, or if you need faster reads, you can allow for data not being the absolute freshest. For many massive multiuser applications, not having immediately up-to-date data is a reasonable compromise. It also offers an excellent set of atomic operations, which in my experience compensate well for the lack of transactions. Furthermore, MongoDB is by far the most feature-rich of these, supporting aggregate queries and map-reduce, which again can make up for the lack of joins. It also offers good sharding tools, so if you do need to scale, you can. Again, I'll emphasize that you need a good understanding of how MongoDB works in order to scale properly. For example, map-reduce locks the database, so you don't want to rely on it too much. The bottom line is that MongoDB can offer similar features to SQL databases (though they work very differently), so it's good for first-timers.

    Couchbase is very good at dispersed synchronization. For example, if parts of your database live in your clients (mobile applications come to mind), it does a terrific job at resynching itself and handling divergences. This is also "scalable," but in a quite different meaning of the term than in MongoDB.

    I would also take a look at OrientDB: it's not quite as feature-rich as MongoDB (and has no atomic operations), but it can work in schema mode, and generally offers a great set of tools that can make it easy to migrate from SQL. Its query language, for example, looks a lot like SQL.

    The above are all "document-oriented" databases, where your data is not opaque: the database actually understands how your data is structured, and can allow for deep indexing and updating of your documents. Cassandra and Redis (and Tokyo Cabinet, and BerkeleyDB) are key-value stores: much simpler databases offering fewer querying features, since your data is simply a blob as far as the engine is concerned. I would be less inclined to recommend them unless your use case is very specific. Where appropriate, of course, simpler is better. With these kinds of databases, there are actually very few ways in which you can create an obstacle to scalability, simply because they don't do very much from a programming perspective.

    There are also in-between databases that are sometimes called "column-oriented": Google and Amazon's hosted big data services are both of this type. Your data is structured, but the structure is flat. Generally, I would prefer full-blown "document-oriented" databases, such as MongoDB and OrientDB. However, if you're using a hosted service, you might not have a choice.

    It's also entirely possible to mix different kinds of databases. For example, use MongoDB for your complex data and use REDIS for a simple data store. I've even seen sophisticated deployments that very smartly archive data from one DB to another, and migrate it back again when necessary.

    • by St.Creed (853824)

      Any relational database can also do "schemaless" models, by using the EAV (anti-)pattern. Mainly this conveys a lack of understanding of your data and a lack of planning and design in your datamodel, but hey, it happens. The fun thing is that you still get all those nice database features like parallel processing, concurrency, SQL, ACID transactions if you want them, security and maintenance tooling, etc.

      And if you use a modern database like SQL 2014 or Oracle's latest, you will get column-based compression
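For reference, the EAV (entity-attribute-value) pattern mentioned above looks roughly like this (sketched with SQLite; names hypothetical), and the awkward self-join at the end shows why it gets called an anti-pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE eav (
    entity    INTEGER,   -- which record
    attribute TEXT,      -- which "column"
    value     TEXT       -- everything degrades to text
)""")
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "device", "iphone"),
    (1, "signal", "-71"),
    (2, "device", "android"),
])

# Reassembling one logical "row" requires a self-join (or pivot)
# per attribute you want back out.
row = conn.execute("""
    SELECT d.value, s.value
    FROM eav d JOIN eav s ON d.entity = s.entity
    WHERE d.attribute = 'device' AND s.attribute = 'signal' AND d.entity = 1
""").fetchone()
```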

      • Re: (Score:3, Insightful)

        by Brian Nelson (3610471)
        And any text file can be transactional if you write your code right. We can keep going down this road about how you don't /need/ X technology, but nobody wins. It's really OK to see the good in different technologies.
        • by St.Creed (853824)

          I agree that that road isn't productive (otherwise we'd still write machine code, since we can do everything in machine code), but the hint of "it's going to be on the internet so I can't use an RDBMS" in the original question is silly, and that's what I'm reacting to.

          Given 3 trillion users your options are pretty much limited to horizontal scaling, no SQL etc. but most people never get that far with their applications and in that case, storing the data in a noSQL database and then getting actionable information out

  • Just Use SQL (Score:5, Insightful)

    by Anonymous Coward on Wednesday April 09, 2014 @05:52AM (#46702939)

    I just felt I had to comment on this. So many developers start with the phrase "I need NoSQL so I can scale", and almost all of them are wrong. The chances are your project will never, ever scale to the kind of size where the NoSQL design decision will win. It's far more likely that the NoSQL design choice will cause far more problems (performance etc.) than the theoretical scaling issues ever would.

    Take for example two systems I've been involved with for managing WiFi access on large-scale networks (100,000+ concurrent users, 1000s of APs), one using MongoDB, the other based on PostgreSQL. The MongoDB-based solution has very real performance problems: its reporting takes a very long time to run, consuming very large amounts of system RAM (24 GB in some cases), and that performance only degrades as the system grows; there are also many other performance issues. These are not just Mongo issues, but simply that NoSQL is not well suited to the task. The system has been rewritten using an SQL backend and now works much better; importantly, it's scaling better. Growth in the system no longer degrades performance, and the points where we need hardware upgrades or extra servers are now much more predictable, so we can predict cost growth in relation to user growth.

    NoSQL does not guarantee scaling; in many cases it scales worse than an SQL-based solution. Work out what the scaling problems will be for your proposed application, work out when they will become a problem, and ask whether you will ever reach that scale. Being on a bandwagon can be fun, but you would be in a better place if you really think through any potential scaling issues. NoSQL might be the right choice, but in many places I've seen it in use it was the wrong choice, chosen based on one developer's faith that NoSQL scales better rather than thinking through the scaling issues.

  • Postgres might carry you further than you imagine with hstore and json extensions. I'd also try Riak if you really want NoSQL.
  • Take a look at HyperDex if you are looking for a NoSQL DB: http://www.hyperdex.org/ [hyperdex.org]

  • Big mistake (Score:5, Insightful)

    by msobkow (48369) on Wednesday April 09, 2014 @06:58AM (#46703139) Homepage Journal

    Telecommunications data is eminently suitable for schema-based table storage in any relational database, which, with a little work, will let you index by the keys you intend to query by.

    NoSQL solutions are better for unstructured data that doesn't come in predictable formats or value sets.

    You need to take a step back and look at the problem before you decide on a solution. Don't be one of those idiots who tries to use a hammer to drive a screw.

  • by BlackPignouf (1017012) on Wednesday April 09, 2014 @07:24AM (#46703233)

    "I'm working on a new independent project. It will soon become the new Facebook, and I'll be billionaire next quarter. The only problem is that I don't know which luxury yacht to buy with all this money. I've been looking at Lady Moura, Christina O, Pelorus, Venus and others. What do you recommend? What problems have you run into with the ones you've tried?"

    • by coofercat (719737) on Wednesday April 09, 2014 @08:51AM (#46703659) Homepage Journal

      Pff! All that soon-to-have money and yet no imagination, huh? Buy an old diesel Navy submarine and have it refitted. Maybe cut some windows into the hull - that'll mean you can only go down to maybe 50 metres instead of 350, but that's still plenty, and if you get lost you can just look out of the windows to see where you are without having to worry about using sonar.

      I'd imagine surfacing your submarine in Monaco's marina will turn far more heads than your ridiculous yacht moored a mile offshore ;-) (besides, a submarine is phallically shaped, so works better in metaphorical dick measuring competitions)

      Oh, and be sure to use Postgres or MySQL for your on-board systems - it'll scale plenty well for a long time before you need to go all 'web scale' with a NoSQL DB.

  • by ledow (319597) on Wednesday April 09, 2014 @07:57AM (#46703377) Homepage

    Premature Optimisation.

  • Is this for your stock inventory project? If you want to do anything that involves keeping track of any goods or money or anything of value, then NoSQL is not necessarily the way to go. NoSQL is designed to keep track of value-less things like Twitter messages and Facebook postings, where it doesn't matter if you lose a few thousand transactions here or there. People keeping track of things with actual monetary value usually use SQL for the transactions, from what I've seen.
  • by scorp1us (235526) on Wednesday April 09, 2014 @09:20AM (#46703855) Journal

    First, everyone who is pointing out your premature optimization is probably right. You can get a lot of scalability out of existing databases, particularly if you optimize your data schema with indexes. Even if you store all 9,999,999,999 possible phone numbers, the log base 2 of that is about 34, so a balanced binary tree would be 34 levels deep - and a B-tree, with its much larger fan-out per node, is far shallower than that. That's big, real big, but such lookups are fast: worst case you are reading a few dozen blocks from disk.
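Checking that arithmetic (the 34 levels is for a balanced binary tree; the second figure assumes a hypothetical B-tree node holding ~500 keys):

```python
import math

keys = 9_999_999_999

# Depth of a balanced binary search tree over that many keys.
binary_depth = math.ceil(math.log2(keys))

# A B-tree packs hundreds of keys per node, so its depth is the
# log in a much larger base - here an assumed fan-out of 500.
btree_depth = math.ceil(math.log(keys, 500))

print(binary_depth, btree_depth)  # 34 vs 4 levels
```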

    Next, don't choose databases by name. Choose them by their features, because you use features, not names. That said, HBase is probably what you want. It's a blend of distributed Hadoop storage and tables. It doesn't sound like you need atomicity, which is one thing you give up when leaving SQL behind.

  • ...so that you simply write an adapter for pushing/pulling data.

    Then you don't have to worry so much about making what appears to be an extremely premature optimization.

    In other words, have your backend web services (presuming you're using them and not manually POSTing from a socket yourself to your own socket server) instantiate an instance of iMyDBAdapter and use it.

    Later, when you find out that you actually do need MongoDB, PostgreSQL, sharded MariaDB, whatever, you can simply write another adapter class that simply has to satisfy the iMyDBAdapter interface.

    The reason this works so well is that it will force you to separate your business logic from your underlying DB implementation (which requires a lot of discipline to do otherwise, especially when you just want to get something 'done'.)

    Also, as another poster pointed out, you're much more likely to suffer from other issues relating to scaling (and issues better solved elsewhere) than a modern database.

    My advice, stick rigidly to the interface/adapter mechanism and implement an adapter for whichever DB you're most comfortable with right now.
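    The adapter idea above can be sketched roughly like this (a minimal illustration, not the poster's actual design; `MyDBAdapter`, `SqliteAdapter`, and the method names are invented for the example, with SQLite standing in for "whichever DB you're most comfortable with"):

```python
from abc import ABC, abstractmethod
import sqlite3

class MyDBAdapter(ABC):
    """Business logic talks only to this interface, never to a driver."""

    @abstractmethod
    def save_event(self, fields: dict) -> None: ...

    @abstractmethod
    def events_since(self, date: str) -> list: ...

class SqliteAdapter(MyDBAdapter):
    """One concrete backend; a MongoAdapter could satisfy the same interface later."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, payload TEXT)")

    def save_event(self, fields: dict) -> None:
        self.db.execute("INSERT INTO events VALUES (?, ?)",
                        (fields["ts"], fields["payload"]))

    def events_since(self, date: str) -> list:
        cur = self.db.execute(
            "SELECT ts, payload FROM events WHERE ts >= ?", (date,))
        return cur.fetchall()

# The rest of the app only ever sees the interface type.
store: MyDBAdapter = SqliteAdapter()
store.save_event({"ts": "2014-04-09", "payload": "POST body"})
print(store.events_since("2014-01-01"))  # [('2014-04-09', 'POST body')]
```

    Swapping backends then means writing one new class, not touching the business logic.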

  • by luis_a_espinal (1810296) on Wednesday April 09, 2014 @09:35AM (#46703991) Homepage

    I would like to start with a NoSQL solution for scaling,

    This is a solution looking for a problem. Or more precisely, you are looking for an excuse to use a piece of technology or paradigm. Don't get me wrong, your system's requirements might indeed be best served by a NoSQL solution, but what exactly has your analysis shown regarding this?

    Scaling is not just a technical feature (NoSQL, SQL, Jedi mind-meld tricks). Scaling is a function of your architecture. You can NoSQL the shit out of your solution, but if your software and system architecture is not scalable, then having NoSQL will mean chicken poop as solutions go.

    and ideally it would be dead simple if possible.

    If you want simple, put a simple RDBMS schema (a properly normalized one) in place, and have your code use a simple, technology-agnostic persistence layer that maps your domain-level artifacts to database artifacts. If you ever have to replace the back-end, you can do so with minimal changes to the API that domain-level artifacts use to persist themselves with the persistence layer.

    Design your domain solution around domain-specific artifacts. Persistence technology is typically a low-level design/implementation detail, an important one obviously (and a critical one for some classes of systems).

    But for what you are describing, the choice shouldn't even be coming into the picture without first having an architectural notion of your solution.
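    For the "simple RDBMS schema" route, something like the following already covers the submitter's pull-by-date and trend-reporting needs (a sketch only, with SQLite as a stand-in and invented table/column names):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    id       INTEGER PRIMARY KEY,
    device   TEXT NOT NULL,
    recorded TEXT NOT NULL,   -- ISO-8601 date, indexed for pull-by-date
    value    REAL NOT NULL)""")
db.execute("CREATE INDEX idx_readings_date ON readings(recorded)")

rows = [("phone-1", "2014-04-01", 3.0),
        ("phone-2", "2014-04-01", 5.0),
        ("phone-1", "2014-04-08", 4.0)]
db.executemany(
    "INSERT INTO readings (device, recorded, value) VALUES (?, ?, ?)", rows)

# Trend report: total per day -- plain SQL, no NoSQL required.
totals = db.execute(
    "SELECT recorded, SUM(value) FROM readings "
    "GROUP BY recorded ORDER BY recorded").fetchall()
print(totals)  # [('2014-04-01', 8.0), ('2014-04-08', 4.0)]
```

    At 1000-2000 records per user per week, a schema like this with one date index will not break a sweat for a long time.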

  • It means you don't have any big-data requirements, so you're better off sticking with MySQL or something easier to manage at a small scale.
    If growth is high or you have a lot of data to analyze, you can look into importing data into Hadoop using Sqoop and querying it with Hive and HBase. But you most likely won't need that for at least a couple of years.

  • Create a separate folder for each type of 'key' copying 'POST' data to files in these folders using filename as key for ... umm... lightning fast retrieval.

    U should then totally think about creating other directories full of symbolic links rather than files enabling you to have many keys for reference or even generate materialized views without duplicating data.

    Since you would be using a query language that is not SQL it is guaranteed to scale to infinity and beyond... (inodes sold separately)

  • and get to know it later :-). Fast here: your prototype creation, not primarily the database I/O. The general comments are right: there is no one-size-fits-all solution and the database might change. It looks very much like you also haven't decided on the server platform: Ruby, PHP... you could look at node.js or vert.x too - server-side JavaScript is at least neat for prototyping (I'm not making a statement that it is *only* neat for prototyping - that's a completely different discussion). We did a number of supe
  • by GameMaster (148118) on Wednesday April 09, 2014 @11:42AM (#46705181)

    Use MongoDB, it's web-scale. They produce kick-ass benchmarks by piping all your data to /dev/null.

  • by samwhite_y (557562) * <`moc.oohay' `ta' `spwerci'> on Wednesday April 09, 2014 @03:47PM (#46707603)
    I have used Oracle, MySQL, and Mongo in production. I have also evaluated Cassandra for potential production use.

    I can imagine situations where I could recommend any of the above. For example, if you are a large financial company with billions of rows, I would go with Oracle. If you have smarts but not money, and don't need somebody to sue if something goes wrong, then maybe Postgres would do. If I were building a simple web-based app with simple form submits, I would go with MySQL. If I had complex, unpredictable data blobs and unpredictable needs to run certain types of queries against the data, I might recommend Mongo. If I had large amounts of data on which I wanted to do analytics, I would use Cassandra.

    Cassandra wins when you have a lot of data and not a lot of complex real-time queries against it. It is especially good at scaling up on cheap storage (think hundreds of terabytes). It also has unreal "write" throughput (important for certain types of analytics that write out complex intermediate results), though that is not relevant for the case described.

    The general problem with NoSQL solutions is that they increase the amount of storage needed to hold the equivalent information. You are essentially storing the schema design redundantly with each "record". This matters more than some might suspect, because when you can fit an entire collection into memory, read performance is much higher. You usually need 1/5th to 1/10th as much RAM to do the job with a traditional relational database (especially since MySQL and its brethren handle moving data in and out of memory better than Mongo). This isn't so much the case for Cassandra because of its distributed storage, but Cassandra really isn't usable for real-time transactions.

    My recommendation: use a traditional database -- if you're in a Microsoft shop, use SQL Server; otherwise I like Postgres or MySQL. If, however, you have complex data storage needs that a NoSQL solution is perfect for, then go with that. If you are into back-end analytics, copy the data as it comes in and put it into Cassandra (or one of its brethren) as well.
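    The per-record overhead mentioned above is easy to see: a document store repeats the field names in every record, while a relational layout states the schema once. A rough illustration (byte counting only, not a benchmark of any particular database):

```python
import csv
import io
import json

records = [{"device": "phone-1", "recorded": "2014-04-01", "value": 3.0}
           for _ in range(1000)]

# Document-style storage: every record carries its own field names.
doc_bytes = sum(len(json.dumps(r)) for r in records)

# Relational-style storage: the schema lives once, rows carry only values.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["device", "recorded", "value"])  # schema stored once
for r in records:
    writer.writerow([r["device"], r["recorded"], r["value"]])
row_bytes = len(buf.getvalue())

print(doc_bytes, row_bytes)
```

    For this toy record shape the document form is more than twice the size, which is roughly the effect the poster describes when a collection stops fitting in RAM.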
