However, the only major feature found to be lacking so far is Rapid Packed Math.
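For anyone unfamiliar with it, Rapid Packed Math packs two FP16 operations into each 32-bit ALU lane, doubling FP16 throughput. A rough software emulation of the semantics (assuming a compiler with the _Float16 extension, e.g. recent GCC/Clang; the struct and function names here are made up for illustration):

Code:
// Rough emulation of Vega's packed FP16 semantics. On the hardware,
// both halves are done by a single ALU op (e.g. v_pk_add_f16); here
// we just show what "two FP16 values per 32-bit lane" means.
#include <cstdio>

struct half2 {
    _Float16 lo, hi;   // two 16-bit floats sharing one 32-bit register
};

half2 pk_add(half2 a, half2 b) {
    // One packed instruction on Vega; two scalar adds when emulated.
    return { (_Float16)(a.lo + b.lo), (_Float16)(a.hi + b.hi) };
}

int main() {
    half2 a{ (_Float16)1.5f, (_Float16)2.0f };
    half2 b{ (_Float16)0.5f, (_Float16)3.0f };
    half2 c = pk_add(a, b);
    std::printf("%g %g\n", (double)c.lo, (double)c.hi);   // 2 5
}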
Another pointless ploy by Intel to make their CPUs more expensive for anyone with a dedicated graphics card. Not to mention the stress on the fan!
Both the PS4 PRO and XBox One X GPUs have a mixture of Polaris and Vega features too.
Which makes sense. I'm sure AMD provided whatever Intel asked for, and there will be features like fast double precision that make no sense for Intel. From the timing, I think we can assume they cut and pasted the graphics cores from either a games console or Ryzen. There are also features like video transcode which I presume are handled by Intel's UHD 630 core and so can be omitted from the Vega companion die, unless the Vega die is general-purpose enough to turn up elsewhere.
You seem to be missing the point. These chips are for laptops and very small form factor machines where discrete graphics aren't an option. You're not going to be able to buy one of these to drop into an s1151 motherboard - they're BGA only (i.e. soldered to the motherboard).
The point is that the tightly integrated components give you a simpler, smaller design footprint, and also allow Intel to play a few power-management tricks to balance the distribution of power between the CPU and GPU - something you can't do with a separate CPU and dGPU (a toy sketch of the idea below).
TDPs vary between 65W and 100W, so well within the capabilities of modern cooling technology, I assure you.
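To illustrate the power-balancing point - this is purely a toy sketch of the idea, not Intel's actual Dynamic Tuning algorithm - a single package budget can be shifted towards whichever side is busier:

Code:
// Toy illustration only: a shared package power budget divided
// between CPU and GPU in proportion to their relative demand.
#include <algorithm>
#include <cstdio>

struct Budget { double cpu_w, gpu_w; };

Budget balance(double package_w, double cpu_util, double gpu_util) {
    double total = cpu_util + gpu_util;
    if (total <= 0.0) return { package_w / 2, package_w / 2 };
    double cpu_share = cpu_util / total;
    // Clamp so neither side starves below a floor it needs to stay responsive.
    cpu_share = std::clamp(cpu_share, 0.2, 0.8);
    return { package_w * cpu_share, package_w * (1.0 - cpu_share) };
}

int main() {
    Budget b = balance(65.0, 0.3, 0.9);   // GPU-heavy game workload
    std::printf("CPU %.1f W, GPU %.1f W\n", b.cpu_w, b.gpu_w);
}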
But Vega shaders were evolved from Polaris shaders, so every time you cut out a Vega feature you end up with something more like Polaris.
This does make me wonder how configurable AMD make their shader design. I presume silicon layout is all synthesised from a description language like VHDL, in which case they might be able to use the latest shader design and just build one with a bunch of stuff configured out to Intel's spec. I expect only Intel can afford hand-laid-out silicon these days, and I don't even know how much of that they do, as it is time-consuming as well as expensive.
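As a loose software analogy (real hardware would be parameterised in VHDL/Verilog generics rather than C++, so treat this as illustration only): a feature configured out at synthesis time simply never exists in the shipped silicon, unlike a run-time switch.

Code:
// Loose analogy only - build-time configuration removes the feature
// entirely, the way synthesis-time parameters would in real silicon.
#include <cstdio>

#ifndef ENABLE_FAST_FP64
#define ENABLE_FAST_FP64 0   // imagine Intel's spec omitting fast FP64
#endif

double shader_mul(double a, double b) {
#if ENABLE_FAST_FP64
    return a * b;                          // full-rate double path present
#else
    return (double)((float)a * (float)b);  // only FP32 hardware was built
#endif
}

int main() {
    std::printf("%f\n", shader_mul(1.0 / 3.0, 3.0));
}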
But what do you define as Vega or Polaris, or even Fiji/Tonga?? Polaris is based on Fiji/Tonga.
The biggest differences seem to be enhanced FP16 compute with Vega, and higher clockspeeds. However, I would also argue that things like FP16 need good software support, and Intel is managing the drivers for all of this, although the control panel looks like the AMD one but in blue!!
Edit!!
It's a bit like Zen - Zen+, Zen 2, etc. are all Zen-based cores, but with improvements and tweaks here and there in the newer ones.
But the GTX 1060 Max-Q is still trading blows with the Vega M GH in benchmarks, and so far there is no clear winner.
Wasn't this suspected 6 months ago?
It looks like Vega has popped up somewhere else too!! I knew AMD liked its modular designs, but that is taking it a bit too far!!
Exactly, I don't see such a clear distinction.
Will Intel have wanted FP16 support though? This silicon will have been specified a long time ago, and FP16 is generally considered a machine-learning thing. Mobile phones can do FP16 to reduce power and silicon size, but desktops moved to more bits for better image quality a very long time ago. Intel are making their own machine-learning chip; they probably don't want this CPU intruding on that space, and if games started using FP16 in the meantime, that could have blindsided them.
In case someone knows the answer here: one of the statements is that the reported features for this AMD part are lacking compared with "Vega", but isn't the UHD 630 graphics still used for low loads, as with discrete laptop graphics? In which case, does the driver have to report only the common features of the two parts to allow switching? (A speculative sketch of what that might look like is below.)
I'm a CPU guy; I don't know graphics to that detail.
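On the switching question, here is a speculative sketch of what "report only the common features" could look like - no claim that Intel's driver actually works this way; the capability names are invented for illustration:

Code:
// Speculative illustration only. If workloads can migrate between the
// UHD 630 and the Vega M part, a driver might advertise only the
// capabilities that both GPUs support.
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <set>
#include <string>

std::set<std::string> common_caps(const std::set<std::string>& igpu,
                                  const std::set<std::string>& dgpu) {
    std::set<std::string> out;
    std::set_intersection(igpu.begin(), igpu.end(),
                          dgpu.begin(), dgpu.end(),
                          std::inserter(out, out.begin()));
    return out;
}

int main() {
    std::set<std::string> uhd630 = { "fp32", "transcode" };
    std::set<std::string> vega_m = { "fp32", "fp16_packed" };
    for (const auto& cap : common_caps(uhd630, vega_m))
        std::printf("%s\n", cap.c_str());   // prints only "fp32"
}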
I probably missed the original story, but does anyone have a link/article about why Intel have gone with an AMD solution rather than Nvidia? I am guessing that Nvidia said "no" or wanted too much money for the project, as it seems a bit odd for Intel to go straight to one of their biggest competitors and the underdog in the GPU market rather than the market leader, where there isn't (as much of) a conflict of interest. I'm just curious; I guess price is the factor, as has been the case with the console GPUs for a long time.
After Intel has wrecked Nvidia's chipset business, gone beyond simply banning Nvidia from making x86-compatible CPUs into stopping them from even software-emulating x86, and cut the PCIe lanes off Atom chips so that Nvidia couldn't sell add-on graphics into that market...
These are companies that hate each other with a passion; there is no way Nvidia would help Intel overcome their graphics shortfall. I imagine the cost would have been "give us an x86 license". The last time there were talks of Intel using Nvidia tech, the price was "the companies merge, and Jensen Huang becomes CEO, because graphics is more important than your CPUs".