Intel spills architecture details on upcoming Larrabee, promising to end the VPU stranglehold enjoyed by NVIDIA and ATI. We take a first look.
"should NVIDIA and ATI be worried?"

No, but TimeLogic should be.
If this approach takes off, then NVIDIA ought to be worried, as they have no direct response. AMD already have a similar base in the pipeline, but NV currently have neither the license nor the partner to reply.
By the looks of it, it's going to be a case of two heavyweights having a tug-of-war to determine which approach wins out.
The real winners here (looking longer term) will be the budget builders.
Have to say, it's nice to see some innovation again. We've got an interesting few years coming up.
Technically speaking, both AMD and nVidia take a 'software' approach too: the drivers compile OpenGL and DX calls into a binary language their GPUs understand. Whether the x86 approach pays off for Intel remains to be seen; either way, this looks to be the best long-term solution for a 'fused' CPU and GPU unit.
The scalability graph doesn't necessarily show linear performance increases - in fact it doesn't necessarily show anything at all. Given that the Y-axis is 'Scaled Performance' rather than just 'Performance', Intel could have calculated the performance index based on an assumption of diminishing returns-to-scale at any ratio they saw fit; without some real numbers or an explanation of how the index was calculated the graph is pretty much meaningless.
Or am I just being cynical?
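To make that point concrete, here's a hypothetical illustration (my own assumptions, not Intel's data): take Amdahl's law with a small serial fraction and report only a normalised "scaled performance" number. The curve keeps rising and can look healthy, while per-core efficiency quietly falls away, which is exactly why an index without raw numbers or a stated methodology tells you very little.

// Hypothetical illustration only: how a "Scaled Performance" index can
// flatter sub-linear scaling. Assumes Amdahl's law with a 5% serial
// fraction; the numbers are made up for the sake of argument.
#include <cstdio>

int main()
{
    const double serial = 0.05;                          // assumed serial fraction
    for (int cores = 1; cores <= 64; cores *= 2)
    {
        double speedup    = 1.0 / (serial + (1.0 - serial) / cores); // "scaled performance"
        double efficiency = speedup / cores;                         // how far from linear it really is
        std::printf("%2d cores: scaled perf %5.2f, efficiency %3.0f%%\n",
                    cores, speedup, efficiency * 100.0);
    }
    return 0;
}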
This seems a weird statement: first you claim the drawback is the cost of coding natively, then you point out that DirectX and OpenGL will be supported. Ipso facto, DX and OGL games should run without issue, since those APIs are supported via the drivers.

"Intel reckons that this makes Larrabee fully programmable and far more suited to future workloads, but such an approach comes at the inevitable cost of having developers natively code for Larrabee using a C/C++ API. The 'problem' is somewhat mitigated by the fact that coding shouldn't be too different than writing for x86, which, after all, is what Larrabee is based upon.

Being software-based has other intrinsic advantages too, such as driver-updating for newer APIs when they become available. Kind of like adding microcode for your CPU.

DirectX and OpenGL will be supported, of course, and the traditional rendering pipeline can be run through software, but it's not how Larrabee talks best."
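For what it's worth, here's a rough sketch of what "coding shouldn't be too different than writing for x86" might look like in practice. This is plain multithreaded C++ (std::thread), purely my own illustration and not Larrabee's actual native API: data-parallel work spread across however many x86 cores the chip exposes, with no shading language in sight.

// Generic sketch only, not Larrabee's real API: a scale-and-add kernel
// split across ordinary x86 threads, one slice per hardware thread.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

static void saxpy_slice(float a, const float* x, float* y, size_t begin, size_t end)
{
    for (size_t i = begin; i < end; ++i)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const size_t n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c)
    {
        size_t begin = n * c / cores, end = n * (c + 1) / cores;
        workers.emplace_back(saxpy_slice, 3.0f, x.data(), y.data(), begin, end);
    }
    for (auto& t : workers)
        t.join();

    std::printf("y[0] = %.1f (expected 5.0) using %u threads\n", y[0], cores);
    return 0;
}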
In fact, isn't the unique programmability of the core architecture its greatest strength? Along the lines of CUDA, Larrabee has a head start: existing libraries can be used with little to no effort, coding stays within the familiar x86 ISA, and it opens up a whole new realm of possibilities. Add to that the fact that, being a full x86 core, Larrabee contains all the memory management (paged and non-paged), coherency, branch predictors, etc., so it would be possible (and perhaps likely) for it to run an OS and effectively meld with the platform to produce impressive results across any application. Intel's paper specifically states that Larrabee could run an OS.
It is an exciting product, but also a huge gamble... either the chip will be amazing (even if it is barely competitive at first), or the biggest flop since Itanium... nonetheless, it is interesting to see this turn into a three-way race.