Ask Slashdot: What Is the Most Painless Intro To GPU Programming? 198

dryriver writes "I am an intermediate-level programmer who works mostly in C#/.NET. I have a couple of image/video processing algorithms that are highly parallelizable — running them on a GPU instead of a CPU should result in a considerable speedup (anywhere from 10x to perhaps 30x or 40x, depending on the quality of the implementation). Now here is my question: What, currently, is the most painless way to start playing with GPU programming? Do I have to learn CUDA/OpenCL — which seems a daunting task to me — or is there a simpler way? Perhaps a Visual Programming Language or 'VPL' that lets you connect boxes/nodes and access the GPU very simply? I should mention that I am on Windows, and that the GPU computing prototypes I want to build should be able to run on Windows. Surely there must be a 'relatively painless' way out there, with which one can begin to learn how to harness the GPU?"
This discussion has been archived. No new comments can be posted.
Comments Filter:
  • Re:Learn OpenCL (Score:5, Interesting)

    by Tr3vin ( 1220548 ) on Friday July 19, 2013 @04:41PM (#44331987)

    Learn OpenCL and do the job properly.

    This. OpenCL is C-based, so it shouldn't be that hard to pick up. The efficient algorithms will be basically the same no matter what language or bindings you use.

  • OpenACC (Score:5, Interesting)

    by SoftwareArtist ( 1472499 ) on Friday July 19, 2013 @04:53PM (#44332161)

    OpenACC [openacc-standard.org] is what you're looking for. It uses a directive based programming model similar to OpenMP, so you write ordinary looking code, then annotate it in ways that tell the compiler how to transform it into GPU code.

    You won't get performance as good as well-written CUDA or OpenCL code, but it's much easier to learn. And once you get comfortable with it, you may find it easier to make the step from there into lower-level programming.

  • by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Friday July 19, 2013 @05:43PM (#44332745) Journal

    I went with OpenSceneGraph.

    Long ago, I tried xlib only, because at that time Motif was the only higher layer available, and it was proprietary. It was horrible. xlib has been superseded by XCB, but I wouldn't use that, not with all the other options out there today. XCB is a very low-level graphics library, for drawing lines and letters in 2D. 3D graphics can be done with that, but your code would have to have all the math to transform 3D representations in your data into 2D window coordinates for XCB. LessTif is a free replacement for Motif, but by the time it was complete enough to be usable, the world was already moving on. With Wayland likely pushing X aside in the near future, XCB and xlib may not perform so well. They will continue to be supported for a while through a compatibility layer, but I think they're on the way out. Motif is not much good these days either. For one, Motif rests on top of xlib, and if xlib goes, so does Motif. Today, we have many better libraries for interfacing with GUIs.

    When OpenGL became available, I tried it. OpenGL is great for drawing simple 3D graphics, but it lacks intelligence. The easy part is that you just pass x,y,z coordinates to the library routines, and OpenGL does the rest. The bad part is that if you want to draw a fairly complicated scene, containing many objects that may be partly or completely hidden behind other objects, OpenGL has no intelligence to deal with that. It just dumbly draws everything your code tells it to draw. To speed that up, your code has to have the smarts to figure out what not to draw, so it can skip calling on OpenGL for invisible objects.

    That's where a library like OpenSceneGraph comes in. Your code feeds all the info to OSG. OSG figures out visibility, then calls OpenGL accordingly.

    You may need still other libraries for window management, something like FLTK. Yes, FLTK and OSG can work together.

    You will also most likely be working in C/C++. OpenGL has many language bindings, but OSG is C++ and doesn't have as many. FLTK is also C++, and has even fewer bindings. The trouble with picking a language like Python for this work is that it can be difficult to find bindings for all the libraries. Even when bindings to a particular language exist, they tend to be incomplete, and don't always perfectly work around differences in data representation. Pick libraries first, then see what language bindings they all have in common, then code in one of those common languages. It's possible C/C++ will turn out to be the only language common to all the libraries.

  • by Anonymous Coward on Saturday July 20, 2013 @12:26AM (#44335037)

    Writing GPU programs is hard. Not only do you have to learn a new set of APIs, you also have to understand the underlying architecture to extract decent performance. It demands a different approach to problem solving that takes months if not years to develop.

    Fortunately you don't need to read the entire CUDA programming guide to program on the GPU. There are several excellent libraries out there which hide the complexities of the GPU architecture. Since you are doing image processing, I would recommend ArrayFire (http://www.accelereyes.com/products/arrayfire). It is a free library which provides several image processing functions which have been optimized for the GPU. You should also look into Thrust and NPP (included with the CUDA toolkit), although these libraries are more verbose and require greater understanding of C++ and the GPU to program.
