Comments
MySchizoBuddy - Wednesday, December 14, 2011
OpenCL always had the vendor-neutral moniker to back itself. Now that CUDA is being open-sourced, it too will be vendor neutral. The choice between CUDA and OpenCL just got a lot harder.

MySchizoBuddy - Wednesday, December 14, 2011
By vendor neutral I mean it works on a lot of different platforms from different companies. OpenCL: Intel, AMD, ARM, across CPUs, GPUs, DSPs, FPGAs, etc. CUDA: Nvidia and PGI, with GPU and CPU. This will change from now on.
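That vendor neutrality is visible right in the OpenCL host API: a single program can enumerate every vendor's platform installed on the machine. A minimal sketch (error handling omitted):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint count = 0;
        /* One entry per installed vendor driver (Intel, AMD, Nvidia, ...). */
        clGetPlatformIDs(8, platforms, &count);
        for (cl_uint i = 0; i < count; ++i) {
            char name[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);
            printf("Platform %u: %s\n", i, name); /* e.g. "NVIDIA CUDA" */
        }
        return 0;
    }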
A5 - Wednesday, December 14, 2011
You can run OpenCL code on Nvidia hardware as well (though it's a little slower in most cases).

In my experience CUDA is the easier language to write, but if I were doing something commercially it would be really hard not to do it in OpenCL. If I were doing an in-house project where I got to choose the hardware, I'd go CUDA.
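To illustrate the ergonomics gap: a complete CUDA kernel and its launch fit in a few lines, while the equivalent OpenCL host code needs explicit platform/device/context/queue/program setup first. A minimal sketch:

    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    // Host-side launch (grid/block sizes chosen arbitrarily here):
    //   vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    // The same launch in OpenCL goes through clGetPlatformIDs, clGetDeviceIDs,
    // clCreateContext, clCreateCommandQueue, clCreateProgramWithSource,
    // clBuildProgram, clCreateKernel, clSetKernelArg, clEnqueueNDRangeKernel.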
ltcommanderdata - Wednesday, December 14, 2011
As the article notes, nVidia is not giving anyone else full control of CUDA. AMD would not be able to modify LLVM to allow it to run CUDA programs on their own GPUs.

Most (all?) OpenCL drivers from all manufacturers already use LLVM as their compiler anyway. It's just another language that is fed into the front end. Apple has been using LLVM since Leopard to compile OpenGL code to run on x86 CPUs or pass on to Intel/AMD/nVidia GPUs. So LLVM use in GPUs is not new.
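That "fed into the front end" happens at run time: the application hands OpenCL C source to the driver, which compiles it (with LLVM, in most implementations) for whatever device is present. A rough sketch, assuming context, device, and err are already set up:

    const char *src =
        "__kernel void scale(__global float *x, float a) {"
        "  x[get_global_id(0)] *= a;"
        "}";
    /* The driver's embedded compiler (commonly LLVM-based) runs here. */
    cl_program prog = clCreateProgramWithSource(context, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", &err);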
Noriaki - Wednesday, December 14, 2011
"LLVM in the strictest sense isn’t a true compiler (it doesn’t generate binary code on its own)"Firstly, the strictest sense of compiler doesn't necessarily need anything to do with binary code. A compiler, in the strictest sense, just translates one language to another. Usually a higher level language to assembly, but not always.
But more importantly, LLVM is generally used as *part* of a compiler. Some compiler front end compiles your code to LLVM IR. Clang, for example, is built expressly for generating LLVM IR, but there are others. Then you can run a bunch of optimizations on the IR, and then if you want you can translate that IR into machine code, or a variety of other things (there are LLVM IR interpreters, for example). A rough sketch of that pipeline follows.
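Concretely, with the stock LLVM tools the stages look something like this (file names invented for illustration):

    /* square.c -- trivial input for walking the pipeline */
    int square(int x) { return x * x; }

    /* Front end  (C -> LLVM IR):   clang -S -emit-llvm square.c -o square.ll
       Optimizer  (IR -> IR):       opt -O2 -S square.ll -o square.opt.ll
       Back end   (IR -> assembly): llc square.opt.ll -o square.s
       Interpreter (runs IR that has a main entry point): lli program.ll */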
I'm not trying to make a big fuss or show off how much I know about LLVM. I bring this all up because I'm a little unclear on what they are doing with LLVM and which bits are open. They are forking LLVM; I assume that means everything from the LLVM IR optimizer back, including binary generation from the IR to something the GPU can understand, remains under wraps.
So are they dropping PTX and using a closed version of LLVM IR instead? You addressed the front-end flexibility in LLVM, but how different is the nVidia version of LLVM from the regular one? If I took Clang and ran a C program through it to generate some LLVM IR, is there any hope of feeding that into CUDA LLVM? Or does it have to be special nVidia LLVM? In that case we aren't really that far ahead of the PTX days. They are just taking advantage of some of the work that's gone on at the LLVM IR optimizer level, with no real "openness" benefit to speak of.
Which is fine; as you pointed out, the BSD license allows for this. I'm just not sure. Maybe you're not either, but I thought I'd ask.
SlyNine - Wednesday, December 14, 2011
I'm no programmer, but I gotta wonder if OpenCL or CUDA can be leveraged to do GPU calculations in a virtual environment. Could we have full GPU acceleration on multiple VMs with this?

What would be great is if I could do something like OnLive but on a local scale.
skiboysteve - Wednesday, December 14, 2011
www.NI.com/labview if anyone uses this... A year and a half ago, LabVIEW 2010 refactored its compiler to use LLVM. There is a good write-up on it here: http://zone.ni.com/devzone/cda/tut/p/id/11472
It gave an average 15% code speedup but actually increased the compile time. However, thanks to LLVM, code speed and compile time are getting better every release. Also, porting to new targets like ARM is way easier.
obsidience - Thursday, December 15, 2011
Sigh, wanna educate us on what the LLVM acronym stands for?

vlado08 - Thursday, December 15, 2011
LLVM (Low Level Virtual Machine), a compiler framework.

wildon - Sunday, December 18, 2011
Thanks, never knew what that meant.
smurali - Thursday, February 23, 2012
I feel that NVidia is trying to add CUDA support to Clang and generate LLVM IR for the CUDA code. Once that IR is generated, they add support to LLVM to lower the new IR for the CUDA source and output machine code for the specific target architecture (GPUs). It may seem complicated, but it is definitely easier than using GCC.

The ultimate aim might not be just improved compilation time for applications, but also to lessen the developers' work to extend support for newer targets.
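If that reading is right, the flow would look something like the sketch below. The staging is my guess at the hypothesized pipeline, not NVidia's documented design; the one real command shown is nvcc's PTX output.

    // toy.cu -- trivial kernel to trace through the hypothesized pipeline
    __global__ void scale(float *x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    // Hypothesized flow: CUDA C -> Clang front end -> LLVM IR
    //                    -> NVidia's (closed) IR optimizer/back end -> PTX
    //                    -> driver JIT -> GPU machine code
    // The PTX stage is already visible with today's toolchain:
    //   nvcc -ptx toy.cu -o toy.ptx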