Dumping Lots of Data to Disk in Realtime?
AmiChris asks: "At work I need something that can dump sequential entries for several hundred thousand instruments in realtime. It also needs to be able to retrieve data for a single instrument relatively quickly. A standard relational database won't cut it. It has to keep up with 2000+ updates per second, mostly on a subset of a few hundred instruments active at a given time. I've got some ideas of how I would build such a beast, based on flat files and a system of caching entries in memory. I would like to know if: someone has already built something like this; and if not, would someone want to use it if I build it? I'm not sure what other applications there might be. I could see recording massive amounts of network traffic or scientific data with such a library. I'm guessing someone out there has done something like this before. I'm currently working with C++ on Windows. "
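The flat-files-plus-caching idea from the question can be sketched roughly as follows. This is a hypothetical illustration, not a tested design: all names (`InstrumentLog`, `Entry`, the `inst_N.log` file naming) are invented, and a real system would need to handle I/O errors, concurrency, and recovery.

```cpp
// Hypothetical sketch of the flat-file-plus-cache approach: entries are
// appended to a per-instrument in-memory buffer and flushed to that
// instrument's append-only file once the buffer fills.
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

struct Entry {
    int64_t timestamp;   // e.g. microseconds since epoch
    double  value;
};

class InstrumentLog {
public:
    explicit InstrumentLog(size_t flush_threshold = 1024)
        : flush_threshold_(flush_threshold) {}

    // Append one entry; flush this instrument's buffer when it fills.
    void append(int instrument_id, const Entry& e) {
        auto& buf = cache_[instrument_id];
        buf.push_back(e);
        if (buf.size() >= flush_threshold_) flush(instrument_id);
    }

    // Recent entries for one instrument are served straight from RAM;
    // older ones would be read back from that instrument's file.
    const std::vector<Entry>& cached(int instrument_id) {
        return cache_[instrument_id];
    }

    void flush(int instrument_id) {
        auto& buf = cache_[instrument_id];
        std::string path = "inst_" + std::to_string(instrument_id) + ".log";
        if (FILE* f = std::fopen(path.c_str(), "ab")) {
            std::fwrite(buf.data(), sizeof(Entry), buf.size(), f);
            std::fclose(f);
        }
        buf.clear();
    }

private:
    size_t flush_threshold_;
    std::unordered_map<int, std::vector<Entry>> cache_;
};
```

Because only a few hundred instruments are active at a time, most of the 2000+ updates per second land in a small set of hot buffers, and disk writes become large sequential appends rather than random I/O.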
2-stage approach (Score:5, Informative)
Cost... Are you going to go for local storage or NAS? Need SCSI and RAID or a less expensive hardware setup? Do you think gigabit ethernet will be sufficient for the transfer from the data dump hardware to the processing/indexing/search machines?
Sounds like you might want to run a test case using commodity hardware first.
Wonderware InSQL (Score:5, Informative)
Check out wonderware InSQL. We update roughly 50k points every 30 seconds without loading the server much at all. Pretty nice product, also has some custom extensions to SQL built in for querying the data (eg cyclic, resolution, delta storage, etc etc).
http://www.wonderware.com/
Of course, you'll need your data to come from an OPC/Suitelink/other supported protocol, but should work nicely for you.
- Joshua
Don't roll your own (Score:4, Informative)
To process that many points in realtime, the data will usually have to be in RAM for performance reasons.
A commercial RDBMS can cut it (Score:4, Informative)
Yes, this sort of thing has been built before (Score:3, Informative)
If your data streams are continuous, and can be represented as audio data, then you are pretty much dealing with a solved problem, and your other problem of selecting from a large number of possible 'instruments' is solved by an audio patchbay.
If this isn't feasible, then a number of solutions might be appropriate (spreading the load over a number of machines/huge ram caches/buffering/looking at the problem and thinking of a less intensive sampling strategy/etc.) but without more information on the sort of data you are collecting, and exactly how quickly you need to access it, it's very hard to be specific.
Ramdisk database (Score:5, Informative)
Either make a big ramdisk and put your database out there (see my Journal from a few months back: ramdisk throughput is pretty damn fast from the local machine, given certain constraints, and random access writing is hella fast), or use a database that runs entirely in memory (think Derby, aka Cloudscape, which comes with WebSphere Application Developer).
When you've got your data, save it out to the hard drive.
Granted, it helps to have a box with a ton of memory in it, but they are out there now, almost affordable. If you are collecting more than 4G of data in one session, well, YMMV - but 4G is a LOT of data; perhaps reconsider your approach.
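The "collect in RAM, save at the end" approach above could look something like this minimal sketch. The `RamSession`/`Tick` names and the preallocation size are invented for illustration; the point is that the hot path does no I/O at all, and persistence is one big sequential write afterwards.

```cpp
// Sketch of keeping an entire capture session in RAM and dumping it to
// disk in one sequential write at the end. Names are illustrative.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Tick {
    int32_t instrument;
    int64_t timestamp;
    double  value;
};

class RamSession {
public:
    // Reserve up front so recording never reallocates mid-session.
    explicit RamSession(size_t expected) { ticks_.reserve(expected); }

    void record(const Tick& t) { ticks_.push_back(t); }  // no I/O in the hot path

    size_t size() const { return ticks_.size(); }

    // One big sequential write once collection is over.
    bool save(const char* path) const {
        FILE* f = std::fopen(path, "wb");
        if (!f) return false;
        size_t n = std::fwrite(ticks_.data(), sizeof(Tick), ticks_.size(), f);
        std::fclose(f);
        return n == ticks_.size();
    }

private:
    std::vector<Tick> ticks_;
};
```

At 2000 updates per second and 20 bytes per tick, a session accumulates well under 200 MB per hour, so even a modest box can hold many hours in RAM before the 4G concern above kicks in.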
Yup... (Score:3, Informative)
The design of a data acquisition system will of course differ, depending on how much data it records per sensor, how many sensors there are, how often the data is recorded, and whether the data is to be available for online or offline processing.
In most of the "hard" cases, you will use a pipelined architecture, where data is received on one or more realtime boxes, and buffered for an appropriate (short) period. A second stage occurs when data is collected from these buffers, and buffered/reordered/processed to make writing the desired format to a file or DBMS easier. The last stage is, of course, to write it. You might use zero or more computers at each stage, with a fast dedicated network in between. You might even decide to split up some of the stages even further. Depending on how much you care about your data, you may also add redundancy. And make sure it's fault-tolerant: it's generally better to lose some data, as long as it's tagged as missing, than to lose it all. To check this in real time you can also add data monitoring anywhere it makes sense for your system.
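The buffering-with-tagged-gaps idea above can be sketched as a small bounded stage buffer. This is a single-threaded toy (names like `StageBuffer` are invented, and a real pipeline stage would use a lock-free or mutex-guarded ring buffer); it shows only the fault-tolerance point: on overflow the oldest sample is dropped, but the hole is tagged so downstream stages know data is missing rather than silently losing it.

```cpp
// Toy bounded buffer for one pipeline stage: overflow drops the oldest
// sample but tags the gap it leaves, so later stages see "missing" data.
#include <cstdint>
#include <optional>
#include <vector>

struct Sample {
    int64_t seq;         // sequence number from the acquisition stage
    double  value;
    bool    gap_before;  // true if samples were lost just before this one
};

class StageBuffer {
public:
    explicit StageBuffer(size_t capacity) : cap_(capacity) {}

    void push(int64_t seq, double value) {
        if (buf_.size() == cap_) {               // overflow: drop the oldest
            buf_.erase(buf_.begin());
            if (!buf_.empty())
                buf_.front().gap_before = true;  // tag the hole left behind
        }
        buf_.push_back({seq, value, false});
    }

    std::optional<Sample> pop() {  // the next stage drains from the front
        if (buf_.empty()) return std::nullopt;
        Sample s = buf_.front();
        buf_.erase(buf_.begin());
        return s;
    }

private:
    size_t cap_;
    std::vector<Sample> buf_;
};
```

Chaining a few of these (acquisition, reordering/processing, writing) with a drain loop between stages gives the staged architecture described above in miniature.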
In the simpler cases, you simply remove things that aren't needed: use a soundcard instead of dedicated realtime boxes, drop the redundancy, monitoring, dedicated network, etc.
Some commercial off-the-shelf systems will surely do this. But the more advanced systems you still build yourself, either from scratch, or by reusing code you find in other similar projects (I'm sure there is some scientific code available from people interested in medical science, biology, astrophysics, geophysics, meteorology, etc.).
Most of the "heavy" systems will not run on Windows, or even Intel, due to limitations of that platform for fast I/O. This has obviously changed a lot recently, so it's no longer the stupid choice it was, but don't expect too many projects of this kind to have noticed, as they probably have existed much longer.
Specialized Hardware (Score:2, Informative)
I'm not so sure what their story is regarding reading or querying. My guess is you lose a lot of bandwidth, but not all. Anyway, it might be worth checking out.
http://www.conduant.com/products/overview.html
Another thing is that modern computers can have a lot of innate capacity themselves. My hunch is that you could do a lot with a couple of modern disks on separate SATA channels and several GB of RAM. Maybe this is only a software problem...
Re:Wonderware InSQL (Score:3, Informative)
SCADA is very versatile and powerful. Are you feeding data in mostly from local or remote RTU's?
You do understand that SCADA is a general term which describes a type of system, right? A SCADA system could be designed (and has been) in any number of different ways.
Anyway, we work with a much larger SCADA system vendor, which actually has the SCADA market share for our industry. Wonderware would never come close to providing the functionality we'd need in our industry and we do not want to be tied to a Microsoft platform.
Wonderware was a candidate for a smaller sub-system, but we've decided to go with another system that's working out very well--is more open for development purposes and is generally better designed. I wasn't on the smaller project, but I was on the big system project and continue to maintain and develop for it.
SCADA is a fun area to work in for geeks--loads of administration, development, and design opportunities in various technologies including, but not limited to, LANs, WANs, telecommunications, backend/frontend development, database maintenance, etc.
Re:Wonderware InSQL (Score:3, Informative)
InSQL works as an OLE process for SQL Server. You can use pretty much any tool (ODBC/ADO/Excel/DAO/whatever) to query the database. Yes, I realize I mixed libraries/methods/applications in that tool list, but I'm just trying to get across the basic idea.
Yes, per point licensing, I believe we licensed for 60k points, not sure on the cost. This is pretty typical in the SCADA world I believe.
Sample query I'd use to get all data for a specific RTU:
select * from live where tagname like 'StationName%'
There are two tables you typically work with, live and history. Live holds the latest values; history is for historical queries.
As for query times, very respectable. I believe we have about 50k points right now, updated/stored every 30 seconds (actually, it's delta storage, so some discrete points that don't change every 30 seconds would be stored only on change...). So how many rows is that?
1440 minutes per day * 2 samples per minute * 50000 points * 180 days (approx history we have online) = 25,920,000,000 rows.
We have ASP pages that people query the data from; we limit 30-second-resolution data to only 2 days at a time (to help avoid loading down the machines), but a query for any point will typically return in a few seconds.
We are pretty satisfied with the product. It may not fit your needs, but it's been good for us.
Kdb+ (Score:3, Informative)
From how you described your needs, this would probably fit the bill.
NetCDF or HDF5 (Score:3, Informative)
I'm more familiar with NetCDF (because I use it) so let me tell you some of the things it can do. (HDF5 can also do these things, I'm sure).
With NetCDF, you can store files larger than 2 gigabytes on a 32-bit machine (it has Large File Support). I've saved 12-gigabyte files with no problems. It supports both sequential and direct access, meaning you can read and write either starting from the beginning of the file or at any point in the middle of the file.
The format is array-based. You define dimensions of arrays and variables consisting of zero, one, or more dimensions. You can also define attributes that are used as metadata, information describing the data inside your variables.
You can read or write slices of your data, including strides and hyperslabs. This allows you to read/write only the data you're interested in and makes disk access much faster.
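The start/count/stride addressing behind those strided reads can be illustrated in plain C++ (this is a sketch of the indexing idea only, not the NetCDF API itself; the function name `hyperslab_2d` is invented):

```cpp
// Collect the elements of a strided hyperslab from a row-major 2-D array,
// mirroring the start/count/stride style of addressing NetCDF uses.
#include <cstddef>
#include <vector>

std::vector<double> hyperslab_2d(const std::vector<double>& data,
                                 size_t ncols,
                                 size_t start_r, size_t count_r, size_t stride_r,
                                 size_t start_c, size_t count_c, size_t stride_c) {
    std::vector<double> out;
    out.reserve(count_r * count_c);
    for (size_t i = 0; i < count_r; ++i)
        for (size_t j = 0; j < count_c; ++j) {
            size_t r = start_r + i * stride_r;  // strided row index
            size_t c = start_c + j * stride_c;  // strided column index
            out.push_back(data[r * ncols + c]); // row-major flat offset
        }
    return out;
}
```

Because the library translates such a selection into offsets and reads only those regions of the file, you touch just the data you're interested in instead of scanning the whole array.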
It's also easy to use, with good APIs. They have APIs for C, Fortran 95, C++, MATLAB, Python, Perl, Java, and Ruby.
Take a look at it. It might be what you're looking for.
-Howard Salis
An RDBMS won't cut it? (Score:1, Informative)
*Many* RDBMS systems can do this without breaking a sweat.
Do some googling on Interbase for example - one of the success stories for IB is a system that does 150,000 inserts per second - sustained. It's a data capture system that may well be similar to yours.
Oracle can definitely do it - but you'll probably need a good Oracle DBA to tune it up properly.
Informix can definitely do it as well - I don't know about the latest version, never used it, but whatever was current circa 1999 (v5?) could handle your needs too.
HP-IB and ISAM (Score:3, Informative)
On the coding end, there are numerous (hell, hundreds of) commercial and F/OSS ISAM libraries, and books on them, for you to use for the actual storage and retrieval. It may even be included in your existing libraries, given how old the technique is now. I was doing this back in the '80s for the US Navy using a 24-bit, very slow mini-computer, so any normal box should be able to handle it today!
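The core ISAM idea is simple enough to sketch: records are stored sequentially, and a separate index maps keys to record positions for direct lookup. This toy (`IsamStore` is an invented name) keeps everything in memory; real ISAM libraries add on-disk index files, paging, and overflow areas.

```cpp
// Toy ISAM-style store: sequential fixed-length records plus an in-memory
// index mapping key -> record number for direct access.
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

struct Record {
    int32_t key;
    double  payload;
};

class IsamStore {
public:
    // Sequential part: records are appended in arrival order.
    void append(const Record& r) {
        index_[r.key] = records_.size();  // indexed part: key -> position
        records_.push_back(r);
    }

    // Indexed access: one map lookup, then a direct fetch by position.
    std::optional<Record> find(int32_t key) const {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;
        return records_[it->second];
    }

private:
    std::vector<Record> records_;
    std::map<int32_t, size_t> index_;
};
```

That split - cheap sequential writes for the realtime dump, an index for fast retrieval of a single instrument - is exactly the shape of the problem in the question.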
We use these techniques in electronic instrument monitoring, logistical systems, systems engineering, you get the idea. You may want to mosey over to the HP developer web site to see if there is a drop in solution, as I imagine there is (sorry, haven't looked).
I hope this helps.