Was thinking about this overnight. The bit that worries me is the fact that Intel are doing the drivers. Intel have been very poor at GPU drivers in the past - I'd have rather had AMD producing them. It also means that you won't get CrossFire (or whatever it's called now - mGPU???), which is a shame.
I hope they don't do the drivers, though there is precedent with the Atom chips that had third party graphics cores and Intel drivers.
Interesting comment from Charlie. The compared board is an old-school design with GDDR5 RAM, but the low-end Vega should be with us soon, and that would trim the size of the board down a lot, making the quoted board reduction not so impressive. If, as it seems, EMIB is only between the GPU and HBM2 dies, leaving basically a tuned PCIe connection between CPU and GPU, then this custom part really could be just a standard Vega die.
I still find it difficult to believe that Intel would embed logic silicon fabbed elsewhere onto one of their parts though, so perhaps the semi-custom aspect is just the use of Intel as a foundry. That would still fit with the "not licensed AMD graphics" stance Intel took.
https://semiaccurate.com/2017/11/06/...rk-amd-rescue/
Edit: The only real reason for Intel to be involved with the drivers is down to the system power consumption and balancing. That integration with CPU power management would be hard to get right and impossible to test with separate CPU and GPU driver releases.
That would be odd though, because I thought the EMIB controlled the power balance across the entire ... erm ... SoC? SoC feels wrong, as it's not a single chip. It's not actually on an interposer though, is it? System on Substrate?
Whatever, the way EMIB is described suggests it links all the chips on the substrate. Whether Charlie genuinely has an impressive inside scoop is debatable (and of course I can't read most of the body of his articles since they're behind a subscription wall now) - perhaps the connection between the GPU and CPU is mostly PCIe with a bit of EMIB for management purposes? *shrug*
They're embedding HBM2 on it, and AFAIK Intel don't fab HBM2 either. The word that keeps leaping out at me for EMIB is heterogeneous. I can't help feeling that this part is less for the laptop market and more of a proof-of-concept for EMIB itself in the consumer market. If they can do it with an Intel CPU and an AMD GPU, what else could you stack onto EMIB? Just contact Intel and we'll work with you to fab the small-area combinations of your dreams....
That's certainly my reading of it - Intel will handle the management drivers, but the GPU will pretty much just be an AMD dGPU that happens to be attached to a common substrate rather than its own package. It's a long way off the integration of a proper APU, but it's tighter than a conventional CPU and dGPU.
I don't know, given AMD loves to just push default power consumption up to hit arbitrary performance figures at the cost of other metrics, perhaps having someone else do power management will be nice. (jk, I remember the mobile Polaris theory.) It's not like AMD has the same manpower pool to work on it anyway; they must be much happier giving Intel the ability to make AMD tech and then just providing the minimum software support. Interesting deal anyway, and it will be an interesting product.
The EMIB video describes the EMIB embedded into the substrate as linking the GPU and RAM, and the CPU as being on the same package as the GPU with no mention of an EMIB there. When we know that really fast GPUs can survive on a fairly slow PCIe link, it doesn't really make sense to embed a second EMIB into the package for the CPU-GPU link.
Oh, and from the FPGA use of EMIB in the past it seems that coupled devices are expected to be really close together. The GPU and RAM are, the GPU and CPU aren't.
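To put rough numbers on why the GPU-RAM link wants EMIB while the CPU-GPU link can stay on PCIe, here's a back-of-envelope bandwidth comparison (Python). These are the public interface specs, not anything confirmed about this particular part:

Code:
# Rough bandwidth comparison: why the GPU<->HBM2 link wants a wide,
# short EMIB connection while the CPU<->GPU link can stay on PCIe.
# Figures are the public interface specs, not details of this part.

def pcie3_gb_s(lanes):
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
    return lanes * 8 * (128 / 130) / 8

def hbm2_gb_s(stacks, pin_rate_gbps=2.0, bus_width=1024):
    # HBM2: 1024-bit bus per stack
    return stacks * bus_width * pin_rate_gbps / 8

print(f"PCIe 3.0 x8  : {pcie3_gb_s(8):6.1f} GB/s")   # ~7.9 GB/s
print(f"PCIe 3.0 x16 : {pcie3_gb_s(16):6.1f} GB/s")  # ~15.8 GB/s
print(f"HBM2, 1 stack: {hbm2_gb_s(1):6.1f} GB/s")    # 256.0 GB/s at 2 Gbps/pin

A couple of orders of magnitude between the two, which lines up with the RAM needing the short, wide connection and the CPU not.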
As for the RAM, the only reason Intel don't make it is if they feel the profit margin isn't there. A GPU is logic - not quite an ideal match for Intel's performance-optimised CPU process, but close enough I think.
That would be the usual entry-level VR spec; you might find Intel's entry-level spec is a tad more forgiving.
But as I said, my old R9 380 is now driving a Rift, and despite the warnings I read I haven't needed a puke bucket once - I just play on lowish settings. That's much better than not playing at all, which on my budget would be the alternative. Well, the plan was to steal the wife's RX 480 if necessary, but for just a 50% speed increase I'm not sure it is worth getting the screwdriver out.
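For what it's worth, the paper numbers roughly back that up. A quick bit of arithmetic from the public shader counts and boost clocks (the usual 2 FLOPs per shader per clock; real games land well below the paper gap):

Code:
# Paper FP32 throughput for the two cards, from public specs.
# Assumes the usual 2 FLOPs (one FMA) per shader per clock.

def tflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1e6

r9_380 = tflops(1792, 970)   # ~3.48 TFLOPS
rx_480 = tflops(2304, 1266)  # ~5.83 TFLOPS
print(f"RX 480 is ~{(rx_480 / r9_380 - 1):.0%} faster on paper")  # ~68%

So ~68% on paper, which squares with seeing around 50% in actual games.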
Originally Posted by PCPer
Apparently it's going to be based on the Polaris architecture.
hmmm, colour me suspicious. If the GPU rumours (1536 shaders @ 1GHz+) turn out accurate, I'm not convinced Polaris shaders could do those clocks at a low enough power envelope for a 16mm laptop. Perhaps a Polaris-like structure but with Vega-like shaders? That would certainly be "custom-to-Intel" if so.
Actually, power is an interesting question. Intel's existing HQ parts rate at 45W, but they've got quad-core parts down to 15W now. Raven Ridge's 640 shaders @ up to 1.3GHz share their 15W TDP with a quad core CPU. So within a 45W TDP is it unreasonable to think these chips could pull off a reasonably-clocked Intel quad core CPU + 1536 AMD shaders @ 1GHz? Probably not...
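A quick sanity check on the raw throughput side of that comparison, treating the rumoured figures as given (assuming the usual 2 FLOPs per shader per clock - nothing here is confirmed for this part):

Code:
# Back-of-envelope FP32 throughput for the configs being compared.
# Shader counts and clocks are the rumoured/spec figures, not
# confirmed; 2 FLOPs per shader per clock (FMA).

def gflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2

print(f"Rumoured custom GPU (1536 @ 1.0GHz): {gflops(1536, 1.0):.0f} GFLOPS")  # 3072
print(f"Raven Ridge GPU (640 @ 1.3GHz):     {gflops(640, 1.3):.0f} GFLOPS")   # 1664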
I thought AMD had Polaris running up to 1.2GHz; IIRC some of the 400/500 series could hit 1-1.2GHz.
You're right to say the power thing is interesting, as we don't know how Intel plan to deal with reducing the power. Looking at the chip, it seems the GPU with its HBM is set off to one side, or at least a fair distance from the CPU, so it's not unreasonable that Intel may retain their IGP and power down the AMD GPU with its HBM until needed.
Hmmm, I may be a victim of process improvement...
The 400-series Polaris cards had a voltage curve that meant anything over ~900MHz required significant voltage increases. While the desktop 500-series cards all clocked higher than the 400-series parts, they also all seemed to draw more power.
However, I've done some digging, and when they put the RX 580 in the ROG Strix GL702ZC, ASUS capped the peak GPU clock at 1077MHz - that's higher than I'd expect. That was for an advertised 65W TDP, and while TDP isn't everything (and I don't have much data on the clock speeds it actually achieved during use*), it does imply that a smaller Polaris die which traded GDDR5 for a closely coupled HBM stack could probably quite happily reach 1GHz in a reasonable TDP...
*EDIT: ha, found a review that confirmed it sits at 1077 throughout benchmarking. Impressive....
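The voltage curve point is easy to put into rough figures: dynamic power scales roughly with V²f, so a clock bump that needs a voltage bump costs disproportionately. The voltages below are made-up points on a plausible curve, purely for illustration:

Code:
# Dynamic power scales roughly as P ~ C * V^2 * f; the capacitance
# term cancels when comparing two operating points on the same die.
# The voltages here are hypothetical, for illustration only.

def relative_power(v1, f1_mhz, v2, f2_mhz):
    return (v2 / v1) ** 2 * (f2_mhz / f1_mhz)

ratio = relative_power(0.95, 900, 1.075, 1077)
print(f"~{ratio:.2f}x the dynamic power for {1077 / 900:.2f}x the clock")
# ~1.53x power for ~1.20x clock

Which is why a modest voltage increase to chase clocks blows the power budget out so quickly.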
Polaris-like, but with half a Vega memory controller bolted on?
BTW, looking at the AdoredTV video on the announcement, he mentioned this:
https://www.pcper.com/category/tags/hades-canyon-vr
Looks like there will be 65W and 100W desktop parts too.
After having spent some time thinking about this, I'm struggling to understand why Intel decided to do this. If the above info about Hades Canyon is correct, releasing such a product seems daft when you can get what seems like similar performance in a 25W TDP from AMD.
Obviously I'm guessing that a 4-core/8-thread Kaby Lake mobile CPU paired with a custom Polaris-like GPU in a 65-100W TDP envelope is going to be similar in performance to a 4-core/8-thread Ryzen Mobile APU paired with a Vega GPU in a 25W TDP; basically, I can't work out why anyone would buy a Hades Canyon device over a Ryzen Mobile one.
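Putting the rumoured figures side by side as raw GPU throughput per watt makes the efficiency gap clearer. Crude arithmetic only - both TDPs are shared with a CPU, and paper GFLOPS say nothing about real-world performance:

Code:
# Crude GFLOPS-per-watt comparison using the rumoured shader counts,
# clocks and TDPs. Both TDPs cover a CPU as well, so treat this as
# illustrative arithmetic rather than a verdict.

def gflops_per_watt(shaders, clock_ghz, tdp_w):
    return shaders * clock_ghz * 2 / tdp_w

print(f"Hades Canyon (1536 @ 1.0GHz, 65W): {gflops_per_watt(1536, 1.0, 65):.0f} GFLOPS/W")  # ~47
print(f"Ryzen Mobile (640 @ 1.3GHz, 25W):  {gflops_per_watt(640, 1.3, 25):.0f} GFLOPS/W")   # ~67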