Latest platform “makes parallel programming easier than ever”.
Is this Nvidia catching up with AMD?
Tags: Nvidia CUDA, AMD Mantle, Nvidia CUDA 6
Really?!
But OpenCL is cross-platform, whereas CUDA is Nvidia-only? So realistically CUDA should be better than OpenCL, because there's less compatibility/optimisation work to be done. IMO anyway... I'm not really up to date with these things.
So how is this separate from AMD's Unified Memory Architecture? I understand they're different companies, but the principle is exactly the same, is it not?
Existing CPU libraries are "not so easy" to turn into "GPU-intensive" libraries; examples include codecs and V-Ray. And don't start a follow-up discussion: I have enough hands-on experience to say that V-Ray runs slower on a CUDA-enabled Nvidia card than on a multicore processor. There has always been an option to turn CUDA on in V-Ray, but it never accelerated the workload. The same goes for codecs: in encoding and decoding the GPU sucks, and so does CUDA. I need these two things to improve, and I don't care whether it's Nvidia or AMD. Until then, both of their gibberish sucks.
CUDA does speed up V-Ray, but it depends on the other elements of your system. GPUs can be very quick at 3D rendering.
An i7 will obviously be faster than using CUDA on any mid- to low-end GPU.
However, a high-end GPU versus a dual core or i3/i5 will often see speed gains.
The trouble is you'll often hit the GPU's RAM limit with high-res textures or high poly counts.
One of my old PCs has an E2180, but sticking a GTX 560 in it brought massive gains in rendering speed with CUDA.
CUDA still can't directly access system memory, although that's something Nvidia are working towards in their next generation of GPUs (presumably using their own proprietary interfaces/API, since they're not part of the HSA Foundation). The update in CUDA 6 just means that developers can write code using a unified memory access API, and the compilers/CUDA runtime underneath it all will handle the copying of data between system memory and GPU memory automatically (previously the developer had to write code to copy data between the two types of memory themselves). Basically it's just a language feature to make it a little easier to write code using CUDA: it won't make CUDA any more performant.
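To make the difference concrete, here's a minimal sketch of the two styles: the classic explicit-copy API versus the `cudaMallocManaged` unified memory API introduced in CUDA 6. The kernel and buffer size are made up for illustration; error checking is omitted for brevity.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void addOne(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main(void) {
    const int n = 1024;

    // Pre-CUDA 6 style: separate host and device buffers, explicit copies.
    int *host = (int *)calloc(n, sizeof(int));
    int *dev;
    cudaMalloc(&dev, n * sizeof(int));
    cudaMemcpy(dev, host, n * sizeof(int), cudaMemcpyHostToDevice);
    addOne<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    free(host);

    // CUDA 6 unified memory: one pointer visible to both CPU and GPU;
    // the runtime handles the host/device data movement itself.
    int *data;
    cudaMallocManaged(&data, n * sizeof(int));
    addOne<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();  // finish GPU work before the CPU reads data
    printf("%d\n", data[0]);
    cudaFree(data);
    return 0;
}
```

As the post above says, the second version only removes the explicit `cudaMemcpy` calls from the source code; on current hardware the transfers between system and GPU memory still happen underneath.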
Meanwhile, AMD are about to release the first fully HSA-enabled parts, with genuinely unified memory access, and their work on the process is shared openly through the HSA Foundation. That means both the CPU and GPU in a system will have fully unified access to the entire memory space, which *will* improve performance on GPU-accelerated tasks (as the data won't need copying between GPU and system memory). As I said, Nvidia are meant to be folding hardware unified memory addressing into their next generation of GPU hardware, although I don't think any details have been released on how they're going to manage that yet...