Looking for Portable MPI I/O Implementation?
rikt writes "I am trying to implement MPI I/O for our CFD product. I am facing a problem with the portability of the generated data files. The MPI-2 interface describes a way to achieve this by using either 'external32' or user-defined data representations. The problem is that ROMIO, the most widely available MPI I/O implementation, has not implemented support for any data representation other than 'native'. Do you know of any MPI I/O implementation that supports this and is available on various platforms? I know IBM and Sun support this, but I am looking for a solution on Linux and Windows (both 32 & 64 bit) as well."
Re:Huh? (Score:3, Insightful)
I hate the drivel question asked on
Re:Huh? (Score:3, Funny)
Re:Huh? (Score:1)
Re:Huh? (Score:1)
Definitions (Score:5, Informative)
MPI: Message Passing Interface, a standard for parallel processing environment message passing.
MPI-2: Extended version of MPI.
MPI-IO: Parallel input/output extensions for MPI, included in MPI-2
ROMIO: An implementation of these extensions.
CFD: Computational Fluid Dynamics (a good candidate for parallel processing, thus the interest in the above).
Of course, the fact I had to look them up means I have no idea about implementations, but at least others won't have to wonder what all that was about.
Re:Definitions (Score:2)
gobblety gook
BINGO! (Score:1)
Thanks for the definitions, now the summary makes a little more sense.
Maybe I am missing something (Score:3, Insightful)
Think of it this way.
typedef struct MessageStruct {
    int type;
    int size;
    double someData; /* repeat for each part of the message */
} Message;

/* sending side */
Message msg;
msg.type = MESSAGE_TYPE_DATA; /* some tag you define */
msg.size = sizeof(Message);
msg.someData = someData;
SendMessage(&msg, msg.size);

/* receiving side */
char buf[MAX_MESSAGE_SIZE];
ReceiveMessage(buf, sizeof(buf));
if (((Message *) buf)->size == sizeof(Message)) {
    Message *m = (Message *) buf;
    /* dispatch on m->type */
}
I think something along these lines should work. Just make a struct for each type of message your app has, then check the size and type fields to determine which kind of message you have received. Alternatively, define a small header struct with only a type and a size field, copy the first 8 bytes of the incoming message into it, and use that to dispatch. I'm sure I am missing some implementation details, but something like this should handle your problem.
Re:Maybe I am missing something (Score:4, Informative)
There's an almost absurd number of datatype declaration and conversion functions in MPI. If you properly set up MPI_Datatype types to hold your data, then the MPI library will be able to handle it all internally. Then, when sending and receiving messages, it will automatically do conversions as needed (between big-endian and little-endian machines, and so on).
So the problem isn't one of sending/receiving data between machines of differing architecture. The problem is writing this data to a file, and then reading it in again at a later date, possibly on a different machine. This is a harder problem.
The MPI I/O extensions (part of MPI-2) tried to address this somewhat. There is a file format "external32" in the spec that was supposed to be universal, with a standard encoding for all datatypes, and so on. However, it evidently was never fully implemented, as I haven't been able to find it anywhere.
Re:Maybe I am missing something (Score:3, Informative)
When I did MPI projects for school I essentially did this when I wanted to send something in a struct. However, as one poster already pointed out, MPI takes care of the conversions between big and little endian. If you have a homogeneous network, you'll probably be okay just sending a struct as a buffer. That said, if you want something a little more robust, MPI does have rather extensive user-defined datatype creation capabilities.
I learned a little about these capabilities when I wanted to know how t
If last resort try human-readable text (Score:4, Informative)
Unfortunately, I know of no MPI I/O implementations other than ROMIO that can simply be plugged into an existing MPI stack. You might want to ask around at the OpenMPI project [open-mpi.org], a from-the-ground-up MPI implementation that is currently in development. I'd be curious to learn what level of MPI I/O support they claim!
Assuming you are stuck with an MPI stack that only supports the "native" representation, the problem you face becomes one of data representation in general. As you know, there are bajillions of different ways of storing floating-point numbers, and if you write them to disk raw, the files will only be valid for exactly that CPU.
As a last resort, a brute-force solution is to write the numbers as human-readable text, and then parse them back in accordingly. It's a waste of file space, but there's no ambiguity in the datatype representation, and it is very tolerant of floating point differences between machines.
-1.2345234523452345
2.345634563456365e+13
-3.2121212121e-24
And so on.
This shouldn't be much of a hotspot in your code, since ideally it would only be done at start, stop, and checkpoint time. Also, if you need parallelism, and don't care about wasted file space or future precision improvements, you could use a fixed-length string for each number (with padding), thus allowing you to read your numbers random-access instead of sequentially.
Hope this helps!
Josh
fans (Score:1)
This one's easy. (Score:5, Informative)
Now, we move on to the portable I/O. The vast majority of scientific software (which is, in turn, the bulk of MPI-based software) uses the Hierarchical Data Format. There are two versions worthy of mention - HDF5 [freshmeat.net] and Parallel HDF [uiuc.edu]. Both support MPI in their I/O operations. Compile HDF5 with MPI support, and you have something that will support platform-independent atomic and compound data types.
Of all the options, HDF5 (from the NCSA) is the most widely used. I would say that the majority of scientific and distributed software out there that uses platform-independent typing uses HDF. So does the grid computing system Globus. The other platform-independent complex data typing libraries, CDF (from NASA) and NetCDF (from UniData), are rarely used. Indeed, the next generation of NetCDF - version 4 - will be built on top of HDF5. There's a link to the development site and the source code on Freshmeat.
Less-widely used, but still very significant, is the Transparent Parallel I/O Environment [freshmeat.net]. I am not 100% sure if this supports MPI, it's been a while since I've used it and I never put in the dependencies on Freshmeat for it.
Depending on what is being done, PETSc [freshmeat.net] may also be worth checking out. This supports MPI-based differential equations.
Globus [freshmeat.net] can use MPI for communication and then handle the I/O directly. This means you only have to write your interface for one API, not one API per type of operation. Main problem is that Globus has a fairly large footprint, so you might not want to do that unless the project is large enough to warrant that kind of sophistication.
Re:This one's easy. (Score:3, Informative)
When I started a software project about two years ago, I looked at both NetCDF and HDF5 for data formats. I chose NetCDF and have had zero problems (and it's been very easy to get the software working nicely). I think using HDF would have added another 6 mon
Re:This one's easy. (Score:1)
Re:This one's easy. (Score:1)
Re:This one's easy. (Score:1)
I second String representation (Score:3, Informative)
The main benefit for us was that our message passing code became generic and we got the side effect of passing large values between machines without respect for endianess or word size.
hope that helps,
dave
My suggestion.. (Score:4, Funny)
Re:My suggestion.. (Score:1)
Re:My suggestion.. (Score:2)
In that case, maybe he could fix his problem with a Sonic Screwdriver?
If you are looking for a commercial solution (Score:3)
I work there and I worked on our MPI-IO implementation. I'm sure we'd like to find a way to help you out if you aren't against paying for the software.
Re:If you are looking for a commercial solution (Score:1)
Open MPI (Score:2, Informative)
do you really need external32? (Score:1)
Even better might be Parallel-NetCDF [anl.gov]. It has all the benefits of a high-level library (portable, self-describing data representation), but it has a much simpler interface than HDF5. Unlike serial NetCDF, you'll probably see much better performance as all processes can carry out I/O c
XDR (Score:2)
Free libxdr code is available everyplace, although often quite ancient (some written in 1982 or so). Just run your data structs through xdr calls, write it ou
Thank you all (Score:1)