Programming

Ask Slashdot: What Is the Most Painless Intro To GPU Programming? 198

dryriver writes "I am an intermediate-level programmer who works mostly in C#/.NET. I have a couple of image/video processing algorithms that are highly parallelizable — running them on a GPU instead of a CPU should result in a considerable speedup (anywhere from 10x to perhaps 30x or 40x, depending on the quality of the implementation). Now here is my question: What, currently, is the most painless way to start playing with GPU programming? Do I have to learn CUDA/OpenCL — which seems a daunting task to me — or is there a simpler way? Perhaps a Visual Programming Language or 'VPL' that lets you connect boxes/nodes and access the GPU very simply? I should mention that I am on Windows, and that the GPU computing prototypes I want to build should be able to run on Windows. Surely there must be a 'relatively painless' way out there, with which one can begin to learn how to harness the GPU?"
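To make concrete what "highly parallelizable" means for image processing, here is a minimal sketch in C (a hypothetical per-pixel operation, not the poster's actual algorithm): each output pixel depends only on the corresponding input pixel, so every loop iteration is independent and could in principle run as its own GPU thread.

```c
#include <stdint.h>
#include <stddef.h>

/* Clamped brightness adjustment: each output pixel depends only on the
 * corresponding input pixel, so every iteration of this loop is
 * independent -- exactly the shape of work a GPU runs with one thread
 * per pixel instead of a sequential loop. */
static void brighten(const uint8_t *in, uint8_t *out, size_t n, int delta)
{
    for (size_t i = 0; i < n; i++) {   /* on a GPU: i = thread index */
        int v = in[i] + delta;
        out[i] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }
}
```

On a GPU the loop itself disappears: the loop body becomes the kernel, and the hardware supplies a different `i` to each thread.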
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Learn OpenCL (Score:5, Insightful)

    by Tough Love ( 215404 ) on Friday July 19, 2013 @04:37PM (#44331939)

Since the whole point of GPU programming is efficiency, don't even think about VBing it. Or Pythoning it. Or whatever layer of shiny crap might seem superficially appealing to you.

    Learn OpenCL and do the job properly.

  • by godrik ( 1287354 ) on Friday July 19, 2013 @04:57PM (#44332217)

As with all attempts at making things faster, you should first ask what kind of performance you are already getting out of your CPU implementation. Given that you seem to believe it is actually possible to get performance out of a VB-like language, I assume that your base implementation heavily sucks.

Putting stuff on a GPU has only one goal: making things faster. But GPU code is mostly difficult to write and non-portable. A good CPU implementation might be just what you need. It also might be easier for you to write.

If you really need a GPU, then you need to start learning how GPUs work, because a simple copy-paste is unlikely to give you any significant performance. A good place to start: https://developer.nvidia.com/cuda-education-training [nvidia.com]

I never properly learned OpenCL, but it is essentially similar, except that you have access to fewer low-level details of the NVIDIA architecture. Of course, CUDA is pretty much NVIDIA-only.
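For a sense of what the CUDA material linked above teaches, here is a sketch of the standard per-element kernel pattern (an illustrative example — `brighten_kernel` and the launch configuration are hypothetical names, and this is untested, not a definitive implementation):

```cuda
// One thread per pixel: the sequential loop of a CPU version is replaced
// by the global thread index. Launched with roughly
//   brighten_kernel<<<(n + 255) / 256, 256>>>(in, out, n, delta);
__global__ void brighten_kernel(const unsigned char *in, unsigned char *out,
                                int n, int delta)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {                                    // guard the final partial block
        int v = in[i] + delta;
        out[i] = (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }
}
```

The `if (i < n)` guard is the standard idiom because the grid size is rounded up to a whole number of blocks, so some threads in the last block fall past the end of the data.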

  • Re:Learn OpenCL (Score:5, Insightful)

    by HaZardman27 ( 1521119 ) on Friday July 19, 2013 @05:13PM (#44332393)
That's because the closest hardware-world analogy to a software engineer using a more abstract language is the packaging of common circuitry. Or do you think that when hardware engineers design chips, they actually model out every single transistor?
  • Re:CUDA (Score:2, Insightful)

    by Anonymous Coward on Friday July 19, 2013 @05:21PM (#44332501)

Never, under any circumstances, use CUDA. We don't need any more proprietary garbage floating around. Use OpenCL only.

