

Parallel Programming - What Systems Do You Prefer?
atti_2410 asks: "As multi-core CPUs are finding their way into more and more computer systems, from servers to corporate desktops to home systems, parallel programming becomes an issue for application programmers outside the High Performance Computing community. Many Parallel Programming Systems have been developed in the past, yet little is known about which are in practical use or even known to a wider audience, and which were just developed, released and forgotten. Nor do we know which problems bother the actual users of parallel programming systems the most. There is not even data on the platforms that parallel programs are developed for. To shed some light on the subject, I have set up a short survey on the topic, and I would also very much like to hear your opinion here on Slashdot!" What parallel computing systems and software have you used that really made an impression on you, both good and bad?
My experiences on 6000-series hardware (Score:1)
I've found that parallel computing has provided an excellent means of avoiding obsolescence by allowing the creation of massive computers that have the potential to crush comparatively tiny, "modern" systems. While the prototype AppleCrate [aol.com] is just a small, tentative step in this direction, a future system comprising NES subprocessors in addition to the "Oregon Trail"-codenamed CPUs could spontaneously develop mech-transformative properties, allowing the weapon-aided destruction [gamersgraveyard.com] of systems not puny enough to
AppleCrate!!!??? (Score:2)
This guy [aol.com] actually seems to have built an 8-processor parallel computer using Apple IIe mainboards [aol.com]! With a custom networking system using the game port [aol.com]! Then, over the top of that, he used the machine with other custom software to make an 8-voice sound synthesizer system [aol.com], using the native hardware (where each "voice" has 5 virtual bits of sample playback capability using PWM square-wave
short answer (Score:5, Informative)
uniform access shared memory (think the bigass (tm) cray machines) -- here you'd typically use mpi (if your programs are supposed to be portable) or the local threading library + vectorized / parallelized math libraries. since it's all in a single memory space, it's "as simple" as just doing a good job multithreading the program.
non-uniform access shared memory (think the large modern sgi machines) -- here things get a bit more subtle, because you're going to start caring about memory access and intranode communications. you can still get a reasonable measure of performance by just using threads, though, provided your problem is "embarrassingly parallel" enough.
distributed memory (beowulf clusters and their ilk, although a bunch of regular linux or windows boxes will do) -- this is where things get excessively complicated very quickly. you have your choice of several toolkits (mpi being standard in the scientific world and superseding the previous pvm standard). here you are going to care a lot about communication patterns (in fact, probably more so than computation). i believe one of the java technologies (javaspaces perhaps? jini maybe?) abstracts this away and gives you the view of the network as a sea of computational power. regardless, you're going to have to pay very careful attention to how data moves, because that will typically be your bottleneck. synchronization becomes whole orders of magnitude more expensive on this kind of parallel machine, which is another thing you'll have to figure into your algorithm design.
once your architecture is fixed, you can start to talk about which toolkit to use. a well-tuned mpi will work "equally well" in each of these environments and has the added bonus of being portable across architectures. mpich is a well-respected implementation, although i found lam to be much easier to use, personally. good luck, i think you're about to open a can of worms only to discover that you've really just opened a can of ill-tempered and rather hungry wyrms.
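for what it's worth, the basic mpi pattern is just explicit sends and receives between ranks. here's a minimal sketch in c (assuming an mpi implementation like mpich or lam is installed; compile with mpicc, launch with something like mpirun -np 2 -- the variable names are just placeholders):

/* minimal mpi point-to-point example: rank 0 sends one int to rank 1 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am i? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many ranks total? */

    if (rank == 0) {
        value = 42;
        /* send one int to rank 1 with message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* matching receive from rank 0, same tag */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 got %d from rank 0 (of %d ranks)\n", value, size);
    }

    MPI_Finalize();
    return 0;
}

every rank runs the same program; the rank number decides who sends and who receives, which is why communication patterns end up mattering so much.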
Re:short answer (Score:2)
most of my parallel programming has been on commodity pc hardware (intel). as a result, i've used a combination of pthreads, compiler auto-vectorization (god bless intel's compiler), and mpi. for the more real-time stuff i do now, i use nist's nml as the message layer rather than mpi (i have no idea how they'd compare in terms of performance). almost all my code is in c++ (the occasional piece being in c).
honestly, if you've got the option of using multiple
Re:short answer (Score:2, Interesting)
Re:short answer (Score:1)
The only recent Cray machine that I am aware of that had uniform access to memory was the MTA2, which used an address scrambling scheme to spread references throughout the memory system so there would be no hot spots. It also meant that no memory was "local" either.
The current vector and MPP lines are either distributed memory (shared address space, non-uniform latency) or message-passing (no shared address space).
Re:short answer (Score:1)
u++ (Score:5, Informative)
http://plg.uwaterloo.ca/~usystem/uC++.html [uwaterloo.ca]
MPI, Co-Array Fortran, & UPC (Score:5, Informative)
So, in MPI, to send data from processor 0 to processor 1, processor 0 would call a function like
Call MPI_Send(dataout, datacountout, datatype, destination, tag, communicator, ierror)
(Fortran style)
which must match an MPI_Recv in processor 1's executing program.
In Co-Array Fortran, OTOH, it would look like
data[1] = data[0]
The fun part about Co-Array Fortran is that 'data' can be defined as a regular multi-dimensional array so that data(1:10,1:20)[1] = data(41:50,61:80)[0] is perfectly ok _and_ the 'processor dimension', denoted by the []'s in Co-Array, can also be accessed using Fortran notation so that data[1:100] = data[0] is perfectly ok too. Or even data[2:100:2] = data[0] for only the even-numbered processors.
In truth, a Co-Array Fortran compiler will probably turn the language-level additions into MPI function calls (because that's the standard), but I find CAF to be more elegant than MPI.
UPC is similar to Co-Array Fortran, but for C. I've never used it before, though.
Google Co-Array Fortran or UPC for more information.
OpenMP (Score:2)
http://msdn.microsoft.com/msdnmag/issues/05/10/Op
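For anyone who hasn't seen it, OpenMP is mostly compiler pragmas layered on otherwise ordinary code. A minimal sketch in C (this assumes an OpenMP-aware compiler; the array size and loop body are just placeholders):

/* minimal OpenMP example: the pragma splits the loop iterations across cores */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N];

int main(void) {
    double sum = 0.0;

    /* reduction(+:sum) gives each thread a private partial sum,
       combined at the end, so the threads don't race on 'sum' */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        sum += a[i];
    }

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}

The nice part is that with the pragma removed (or OpenMP disabled), it's still a valid serial program.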
Multi core - "Parallel Computing" (Score:5, Informative)
Multi-core chips in a typical commodity machine (shared memory, same address space, etc) just means you have multiple threads of execution, but everything else is pretty much the same at the application coding level.
If you're working on an app and want to take advantage of multi-core (or SMP), you just need to have a well threaded app, using the native threading libs (ie pthreads) - nothing fancy. Clusters and big non-shared-memory type supercomputers are a different story altogether from something like a dual-core Athlon.
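To make that concrete, here's a bare-bones pthreads sketch in C that splits a sum across a fixed number of worker threads (NTHREADS, partial[] and the rest are made-up names for illustration, not any particular library's API):

/* split a sum over NTHREADS worker threads; each thread handles one chunk */
#include <stdio.h>
#include <pthread.h>

#define NTHREADS 4
#define N 1000000

static double data[N];
static double partial[NTHREADS];

static void *worker(void *arg) {
    long id = (long)arg;
    long chunk = N / NTHREADS;
    long start = id * chunk;
    long end = (id == NTHREADS - 1) ? N : start + chunk;
    double s = 0.0;

    for (long i = start; i < end; i++)
        s += data[i];
    partial[id] = s;          /* each thread writes only its own slot */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    double total = 0.0;

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);

    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("total = %f\n", total);
    return 0;
}

On a dual-core or SMP box the same code just runs its threads on different cores; nothing about the source changes.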
Re:Multi core - "Parallel Computing" (Score:2)
Re:Multi core - "Parallel Computing" (Score:2, Informative)
Yes, but the grandparent post still holds, in that there's hardly a difference between a well-threaded app on a single processor and one on a shared-memory/NUMA SMP machine. That kind of parallelism stops scaling at 4, maybe 8 cores.
From there on, memory/communication bandwidth becomes the bottleneck, and adding more cores no longer improves speed. That's where the big decisions need to be made at the application level.
I loved our T3E (Score:2)
Re:I loved our T3E (Score:2)
cray products, sgi killed the t3e.
the merger agreement with tera specifically constrained cray from making a follow-on machine.
not that cray doesn't have problems....
Google's MapReduce (Score:2)
"MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.
Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines."
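To make the model concrete, here's a toy, purely sequential word-count sketch in C: map() emits (word, 1) pairs and reduce() sums the values for each key. The emit/shuffle plumbing is invented for illustration; a real MapReduce runs the map and reduce tasks in parallel across a cluster and handles the grouping, partitioning and fault tolerance for you.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { char key[32]; int value; } kv_t;

static kv_t pairs[256];   /* toy-sized buffer of intermediate pairs */
static int npairs = 0;

/* collect one intermediate key/value pair */
static void emit(const char *key, int value) {
    if (npairs >= 256) return;
    strncpy(pairs[npairs].key, key, sizeof pairs[npairs].key - 1);
    pairs[npairs].key[sizeof pairs[npairs].key - 1] = '\0';
    pairs[npairs].value = value;
    npairs++;
}

/* user-supplied map: one input record in, intermediate pairs out */
static void map(const char *record) {
    char buf[128];
    strncpy(buf, record, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (char *w = strtok(buf, " "); w != NULL; w = strtok(NULL, " "))
        emit(w, 1);
}

/* user-supplied reduce: all values for one key in, merged result out */
static int reduce(const char *key, const int *values, int n) {
    (void)key;
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += values[i];
    return sum;
}

static int by_key(const void *a, const void *b) {
    return strcmp(((const kv_t *)a)->key, ((const kv_t *)b)->key);
}

int main(void) {
    const char *input[] = { "the cat sat", "the cat ran" };

    for (int i = 0; i < 2; i++)                     /* map phase */
        map(input[i]);
    qsort(pairs, npairs, sizeof pairs[0], by_key);  /* "shuffle": group by key */

    for (int i = 0; i < npairs; ) {                 /* reduce phase */
        int vals[256], n = 0, j = i;
        while (j < npairs && strcmp(pairs[j].key, pairs[i].key) == 0)
            vals[n++] = pairs[j++].value;
        printf("%s %d\n", pairs[i].key, reduce(pairs[i].key, vals, n));
        i = j;
    }
    return 0;
}

The appeal of the model is exactly that the user only writes map() and reduce(); everything between them is the framework's problem.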
Prolog (Score:2)
Shell with pipes (Score:1)
process1 | process2 | process3
with all three of them running on different processors means that your program can get up to a 3x speedup for free! No MPI/PVM/pthreads/etc required!
(Note: the program chain will complete in time roughly proportional to the time of the slowest link. This trick only works when each program doesn't need to read in all the data before it finishes processing