Quote:
Supply chain sources say this is due to the tricky packaging technology and yields.
God damn, looks like Nvidia might start pulling an Intel and slowing output to stretch out the release window because of the lack of competition.
I don't blame 'em, it's inevitable: why keep releasing gold if your closest competitor is barely bringing bronze? It's just that I'd gotten used to yearly improvements of 30%. Hopefully they don't go full Intel and give us 10% improvements and a slap in the face.
Increases of 30% per year are not sustainable anyhow, as they rely on process improvements, architectural improvements, or both.
16nm looks like it will stay for a while longer, and how much low-hanging fruit does Nvidia have left? They already ditched most of the compute features from their gaming line, removed the hardware scheduling, and increased the max frequency a lot. There can't be much left aside from 8-bit packed maths.
I know when some details of big Volta (GV100) were released a lot of people went into hype overdrive, but while GV100 vs GP100 looks impressive, what seems to have been forgotten is that the die size went up by a third (from 610mm² to 815mm²). It is possible that for consumer Volta Nvidia might be willing to sacrifice some margin and release with larger dies, but then again they do love their margins.
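A quick sanity check on that "up by a third" claim, using the die-size figures quoted above:

```python
# Die-size figures as stated in the post above.
gp100_mm2 = 610  # Pascal GP100
gv100_mm2 = 815  # Volta GV100

# Relative increase of GV100 over GP100.
increase = gv100_mm2 / gp100_mm2 - 1
print(f"GV100 is {increase:.0%} larger than GP100")  # ~34%, i.e. roughly a third
```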
First it was the MASSIVE power consumption, now it's the shortage.
Apparently AMD has to source HBM2 from Samsung, as SK Hynix have failed miserably and won't be able to supply it until the end of the year. Considering Vega was meant to be out late last year, it seems the major reason for the delays is SK Hynix screwing up their HBM2 delivery schedule entirely, with AMD then having to try and use Samsung HBM2 instead sometime this year.
Here's an idea, AMD: as you said, your GCN architecture can work with both HBM2 and GDDR5, so why not cobble a GDDR5 card together and get that out? With all this lack of supply, I'm sure it'll make a few $ whatever it is.
Edit: Having a look at Scan and OCuk, they seem to have several standalone V64 in stock. Obviously at over-inflated prices.
Because converting a chip designed to work with HBM2 to work with GDDR5 is not trivial - HBM/HBM2 integrate most of the memory controller logic into the actual RAM chips themselves. The Vega design was completed last summer, so unless the large Vega chip had a GDDR5X controller onboard as well, it would probably mean a big redesign of the chip. We also don't know what promises SK Hynix made to AMD. Only sometime this year did AMD say they would also be using Samsung as a HBM2 supplier. On top of this, only Nvidia seems to be using GDDR5X, and only for high-end products, so we don't know how tight supplies of that are either, or whether Nvidia has most of it allocated to their own products.
The use of HBM2 is probably less of an issue for compute cards, as Nvidia uses it too and these sell for decent money, but for cheaper consumer cards it's probably more of an issue, with SK Hynix failing so massively.
To show how big a failure this is by SK Hynix: AMD invented the HBM standard with SK Hynix, and prototype GPUs on interposers were shown at least 5 years ago. Since it is a relatively open standard, Samsung created its own HBM2 BEFORE SK Hynix could.
That is the most interesting passage from the article to me. I did think that - from the timing of the GV100 release - consumer Volta should have been ready in Q4, and it looks like nVidia was indeed holding it back.
Quote:
Thus, it says, Nvidia Volta-based GPUs for PC enthusiasts and gamers have been pushed back from Q4 2017 to Q1 2018.
Once Volta is released in 2018 we could be seeing an AMD that is even less relevant in the consumer GPU market than their CPU division was nine months ago. What a turnaround from the days of Tahiti and Hawaii.
"Apparently" that's AMD's fault for choosing this tech TWO TIMES which screwed production (Didn't learn the first time with HBM1? caused all the same crap), margins, and profits. They should have either chose highest speed GDDR5, or went higher bus with it, or both etc. IMHO there is not enough GDDR5x to offer on both NV's entire line (except for the lowest card in the stack, all going GDDR5x with 20x0 series coming up, even NV had to wait for ramp here) and AMD cards, so AMD would be left with some combo of GDDR5. But they could have had cards out in massive quantity and decent prices (GDDR5 is cheap today, easily integrated vs. HBM2) and be taking far greater part in this craze right now due to this. Management keeps taking roads that screw production, or simply pricing new generations totally wrong which leads to the same story (no margin/profit).
Instead of listening to marketing, who chant blue crystals (ask Intel engineers - they do nothing for perf), AMD should have listened to engineering, who might have told them: "umm, bandwidth isn't an issue currently (95% of the market isn't using 4K, and it's still not an issue with the faster 1080 Ti GPU, which is still too slow...LOL), so let's go with what works and can be produced cheaply in massive quantity, instead of hard-to-produce stuff that benefits nobody now and kills any chance of profit via shortages and the high cost of implementing the tech."
The cards have HBM2, which is supposed to add efficiency, but the GPU is EXTREMELY power hungry. AMD should have SIMPLY designed the lower-tier non-compute (gaming/video rendering) cards with GDDR5X and left the Frontier cards with HBM2. Why? Because GDDR5 is VERY cheap and as easily available as drinking water. NVIDIA has ruled for 3 consecutive years. I hope AMD will not mess up the CPU division next season.
Another point I forgot before is how NV screwed AMD by not jumping on HBM/HBM2 enough to help push it. Again, AMD should have seen this coming (both times...LOL), as it's not in NV's interest to help AMD push something when there is a cheaper alternative that works fine. You gain nothing, if you're NV, by helping AMD make HBM/HBM2 a standard/commodity. If NV goes all-in on GDDR6, HBM2 will take even longer to become cost-effective for mainstream stuff, if it ever does (relegated to pro/server parts only, where the cost is OK?), and AMD should RUN from the tech ASAP, as it will continue to sink any consumer card from AMD that attempts to use it.
I'm sure NV is gauging whether it's worth releasing an HBM2 card at a loss if doing so hurts AMD. There is a point where it is worth NV dumping an HBM2 card and absorbing that loss just to keep AMD from reducing costs and improving availability of anything HBM2-related. If I eat a 10mil design (an HBM2 high-end card, a 2080 Ti or something coming up) to stop you from making 100mil or more in NET INCOME, that is a great deal for NV. Maybe AMD should have started this whole HBM deal with Samsung (30B income) instead of going with SK Hynix (3B). This is like choosing GF as the fab for your new product release instead of the far better TSMC (even though they suck at times, their record is better than GF's). AMD needs to make smarter decisions. Another example is Apple buying up 50mil in memory (even though they may only need 30mil for their next launch) just so somebody else can't get any of that memory and can't launch ANYTHING to compete with Apple's new release. It is worth it for Apple to eat whatever they have to (up to a point) if you can't make a dime because they did it. Business is dirty and only the paranoid survive ;)
GDDR5 isn't really feasible for GPUs in the Vega performance bracket. Even with HBM2, Vega reportedly shows good gains from memory overclocking, and the GTX 1070 also benefits hugely from memory overclocking.
To feed a GPU in Vega's performance range with GDDR5 they would have to use a 512-bit bus, and when the RX 480's 8GB of GDDR5 over a 256-bit bus already draws 40-50W, that would cause difficulties.
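A rough back-of-the-envelope sketch of why a 512-bit bus comes up: peak bandwidth is bus width divided by 8 times the per-pin data rate. The clock and bus-width figures below are assumptions based on the publicly listed specs for Vega 64 and the RX 480, not numbers from the thread itself.

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# Vega 64: two HBM2 stacks, 2048-bit bus at ~1.89 Gbps per pin -> ~484 GB/s
hbm2 = bandwidth_gbs(2048, 1.89)

# Hypothetical GDDR5 replacement: 512-bit bus at 8 Gbps per pin -> 512 GB/s
gddr5_512 = bandwidth_gbs(512, 8.0)

# RX 480 for comparison: 256-bit GDDR5 at 8 Gbps -> 256 GB/s
gddr5_256 = bandwidth_gbs(256, 8.0)

print(f"HBM2  (2048-bit @ 1.89 Gbps): {hbm2:.0f} GB/s")
print(f"GDDR5 ( 512-bit @ 8.00 Gbps): {gddr5_512:.0f} GB/s")
print(f"GDDR5 ( 256-bit @ 8.00 Gbps): {gddr5_256:.0f} GB/s")
```

In other words, matching Vega's HBM2 bandwidth with 8 Gbps GDDR5 needs roughly double the RX 480's bus width, which is where the extra board complexity and memory-subsystem power come from.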
The delays were a problem, but fundamentally the real issue is that the Polaris and Vega architectures were simply not fast enough in games. AMD has had to overclock and overvolt these chips massively just to compete, to the extent that the originally power-efficient Polaris ended up breaking PCIe specs just so it would remain relevant vs the GTX 1060.
^^This^
Although, as a sort of thought experiment: were the Polaris and Vega architectures not fast enough, or was AMD caught off guard by better-than-expected performance from Nvidia? In other words, where did people, including AMD, think Nvidia's performance would be two or more years ago?
The power consumption is normal if you want the power of a 1080, and the cards will only draw their rated TDP when under full load, while Vega will be around £200 cheaper than a 1080.
Anyone else delaying a launch would be decried as an engineering failure, not a business move.