They won't lose money. They'll warehouse them with Dell or a similar OEM, rebadge them GTS-x2205 and peddle them off to gullible desktop system buyers in two years' time.
I read the article as just the GPU chips being returned, not whole cards. In fact, with 300k chips I suspect it was more of a contract cancellation, and the chips never turned up at the factory. After all, someone ordering 300k chips is making their own cards, and I don't see how used chips can be returned once soldered to a board marked ASUS or Gigabyte or whoever it was. But chips still unused in their sealed trays, possibly never having left Nvidia's warehouses or factories thanks to just-in-time manufacturing, are fine.
So that's a lot of chips to have sitting in a warehouse: call it £30M in inventory, if the chips cost £100 each to make, sat around depreciating. But there are plenty of options for shifting them, and the sort of money involved isn't going to worry Nvidia. The usual route in this situation is for an electronics company to resell unwanted chips on the grey spot market; I expect Nvidia won't want that and would rather retain control over who makes how many boards.
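As a back-of-envelope check on that figure (the 300k chip count is from the post above; the £100 unit cost and the depreciation rate are just the poster's guesses, not real Nvidia numbers):

```python
# Back-of-envelope inventory valuation. Unit cost and depreciation
# rate are guesses from the thread, not real Nvidia figures.
chips = 300_000
unit_cost_gbp = 100

inventory_value = chips * unit_cost_gbp
print(f"Inventory at cost: £{inventory_value:,}")  # £30,000,000

# Hypothetical: if the parts lose 30% of their value per year
# sitting in a warehouse, two years of depreciation looks like:
annual_depreciation = 0.30
value_after_2y = inventory_value * (1 - annual_depreciation) ** 2
print(f"Value after 2 years: £{value_after_2y:,.0f}")  # £14,700,000
```

Even with aggressive depreciation the sums are small by Nvidia's standards, which is the point being made.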
Production cost could be guessed with a little research, but that's another story; as I said, I was replying to a specific question of yours. A GTX 1080 Ti at £150 isn't going to happen, as the RAM alone probably costs around £100, and they might even run into trouble with antitrust agencies for "predatory pricing". If there were really a problem we would already be seeing AIBs reducing prices gradually, especially if new cards are incoming.
But to answer your question: where did the news come from? Was it from Charlie Demerjian? Which GPUs did they supposedly return? Why did the news report that Nvidia overestimated demand, when it would clearly have been the AIBs who supposedly returned the orders they had placed after overestimating it? Without at least knowing which GPU we are talking about, we can't really estimate anything.
Are you serious or joking? The Radeon embedded in the Kaby Lake-G processor has the performance of a GTX 1050, yet its die is bigger than the GP106 that powers the GTX 1060, and its power consumption is double that of Pascal: comparisons showed that with the same battery size, runtime is halved.
https://www.pcper.com/reviews/Mobile...fe-and-Pricing
https://techreport.com/blog/32904/ho...r-battery-life
Missing the point massively. That SKU is a test vehicle for EMIB:
https://ieeexplore.ieee.org/document...6/?reload=true
https://www.anandtech.com/show/11748...m-pt-345pm-utc
https://www.extremetech.com/computin...-nodes-package
Now consider the issues they have with 10nm, and how things will start to look in the next few years. The ability to use something like EMIB, so you can shrink only the critical parts to smaller nodes and leave less important things on higher-yielding older nodes, will be very useful.
That could put AMD and Nvidia in a bad way if their fab partners hit their own issues a few years down the line.
Edit!!
If you are wondering why they didn't use an Nvidia part, well, they kind of fell out with each other!
Last edited by CAT-THE-FIFTH; 28-06-2018 at 04:36 PM. Reason: Maybe I am being unfair! ;)
EMIB seems a less than optimal solution. I may be wrong, so don't go shouting at me, but from what I understand AMD's Infinity Fabric takes a chiplet approach similar to EMIB but is a lot more intelligent (as in controlling and monitoring what's sitting on top of it). I'll have to check, but I think it's fabricated on a large node along with the memory and I/O controllers and all the other gubbins that isn't part of the CCXs.
Just reading through the WikiChip entry on IF: they talk about "communication planes" and say "AMD can efficiently scale up many of the basic computing blocks"... to me that reads as if IF is a separate plane of silicon.
That's irrelevant. Even if demand drops somewhat, that doesn't mean prices will fall significantly. The current PC market is a great example of this. Look at RAM prices: if you want a PC you have to have RAM, and even when prices doubled, which probably reduced the amount of RAM people buy, the profit stayed the same or better, so there's no great incentive for producers to cut prices.
Same goes for GPUs. Gamers need a GPU. They might get a slower one if the prices are high, but they will still get one.
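The RAM argument above can be put in numbers. This is a purely hypothetical illustration (made-up prices and volumes, not real DRAM market data): if a price doubling halves unit sales, profit can still rise, because unit margin more than doubles.

```python
# Hypothetical illustration of why a demand drop need not force
# prices down. All numbers are invented, not real market data.
unit_cost = 30                         # cost to produce one RAM kit
old_price, old_units = 60, 1_000_000   # before the price rise
new_price, new_units = 120, 500_000    # price doubles, sales halve

old_profit = (old_price - unit_cost) * old_units
new_profit = (new_price - unit_cost) * new_units
print(old_profit, new_profit)  # 30000000 45000000
```

With these (invented) figures the producer makes more profit on half the volume, so there is no pressure to cut prices, which is exactly the incentive problem described above.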
All the people who post their new builds and GPU purchases on Reddit, for example.
That's a rather silly claim. If you want a PC and your choice is either to buy it at the current market price or not buy it at all, buying it is the logical thing to do. Sure, you can wait for the market to drop, which is fine for people like you who live in an alternate reality where prices drop continually, but people in the normal world don't necessarily want to wait years for a price drop, and would eventually either compromise on spec or save enough money to afford the higher-priced hardware.
There is that; Intel's EMIB is the cheaper option when it comes to connecting different silicon IP, but AMD's solution is more versatile IMO, as (AFAIK) it allows units of logic, cells, or integrated circuits to connect to each other regardless of who they're from or how they communicate. The way Intel's going means they can't just take any old off-the-shelf IC and connect it up to their existing IP, as the parts need to know how to talk to each other; that's not something you need to worry about with IF.
Take Kaby Lake-G for example: Intel had to (AFAIK) order customised Polaris parts for it, as the standard Polaris parts didn't give them sufficient control over power and, from what I can tell, didn't come with an HBM memory controller. The Vega part in Kaby Lake-G appears to be a mishmash of Vega and Polaris, because Polaris (AFAIK) can't talk to HBM memory and EMIB can't translate from one to the other; IF, on the other hand, can.
Last edited by Corky34; 29-06-2018 at 10:38 AM.
You actually missed that we weren't talking about packaging at all... he claimed that Kaby Lake-G and the Ryzen APU "offer so much more performance for so much less power consumption and less cost given the smaller sizes compared to the 1030/1050", which is simply a pile of BS; that's why I asked if he was joking.
If you think Intel invented something new with EMIB you are dead wrong. Problems in effectively scaling some designs, like analog and RF ICs, were encountered a long time ago, so foundries have already developed various solutions for heterogeneous integration. The main benefits of Intel's approach are z-height (which comes more from removing a bump layer than from the absence of an interposer) and the savings from using a smaller piece of silicon for the interconnect compared to an interposer (though on the other hand it requires more complex packaging).
"Infinity Fabric is just a marketing term"? Yes, the SDF is a superset of HyperTransport, but that's about where the similarities end. Saying it has nothing to do with the physical interconnect is like saying the PCIe protocol has nothing to do with the physical PCIe interconnect; it's not only silly, it's wrong.
If it has nothing to do with the physical interconnect solution then what's the protocol being used to connect everything in this image?
(Source)
AMD is behind Nvidia in power consumption, which is compounded by the GF process they use, so I don't disagree. I haven't seen any reliable estimates for the Vega M die size, though, and apparently it's not fully enabled (1792 shaders according to NBC).
At least in its NUC form it seems slightly faster than a GTX 1050 Ti. However, saying it is bigger than a GP106?? The GP106 is 200mm² and Polaris 10 is 232mm². That would place Vega M at close to Polaris 10's size, with fewer shaders (Polaris 10 has 2304), but double the ROP count.
The Ryzen APU is a smaller chip, since it's a single 209.78mm² SoC, as opposed to an Intel 4C/8T CPU at around 125mm² plus a southbridge and a separate GPU, and it runs off bog-standard DDR4.
Regarding the APU, Laptop Mag tested two HP x360 models, which are very similar (same battery and same case, so as close to apples-to-apples as you can get, which is not easy with laptops), and battery life was a bit better on the Intel system:
https://www.laptopmag.com/articles/a...l-8th-gen-core
It could be that the Ryzen APU systems are configured for a higher TDP, but there were a ton of driver issues for the desktop models (see some of the discussions we had here about the desktop versions), and AMD took yonks to actually update the drivers, to the extent that one YT channel ran Vega 64 drivers on the IGP and performance went up(!), so it makes me wonder whether that is also not helping.
Even the TR article you linked to alluded to that. BT and Hexus had issues too.
Who said I am not aware of other solutions (link describing some alternatives)? It's not like people haven't been talking about it here in the past! But EMIB does look cost-effective compared to what AMD/Nvidia have tried so far, and none of them have integrated a decent-sized CPU and GPU (made on different nodes) like Intel has done in a production PC.
Last edited by CAT-THE-FIFTH; 29-06-2018 at 10:36 PM.
https://www.techpowerup.com/245606/d...ext-gen-launch
DigiTimes, citing "sources from the upstream supply chain", is reporting an expected decrease in graphics card pricing for July. This move comes as a way for suppliers to reduce the inventory previously piled in expectation of continued demand from cryptocurrency miners and gamers in general. It's the economic system at work, with its strengths and weaknesses: now that demand has waned, somewhat speculative price increases of yore are being axed by suppliers to spur demand. This also acts as a countermeasure to an eventual flow of graphics cards from ceasing-to-be miners to the second-hand market, which would further place a negative stress on retailers' products.
Alongside this expected 20% retail price drop for graphics cards, revenue estimates for major semiconductor manufacturer TSMC and its partners are being revised towards lower than previously projected values, as demand for graphics and ASIC chips falls further. DigiTimes' sources say that the worldwide graphics card market now has an inventory of several million units that is proving hard to move (perhaps because the products are already ancient by the usual hardware tech timeframes), and that Nvidia has around a million GPUs still pending logistical distribution. Almost as an afterthought, DigiTimes also adds that Nvidia has decided to postpone the launch of its next-gen products (both 12 nm and then, forcibly, 7 nm) until supply returns to safe levels.
That would be the encapsulated SDF that CAKE outputs, transferred over the wiring of the organic package. The point here is that you could run that CAKE output over an EMIB connection as well, though EMIB probably couldn't cope with fully connecting four dies. But then AMD doesn't use anything fancy for that; there just aren't enough wires to need it. Once you start looking at connecting up something like a 512-bit HBM bus you need EMIB or an interposer, but not for a 32-bit inter-die comms channel.
Well, technically speaking you could run it (the SDF protocol) over any electrically conductive medium, but that doesn't change the fact that AMD uses it on the physical interconnect communication plane, and that it's as far removed from HT as PCIe is from PCI.
Also, I think trying to connect up something like an EPYC or TR would be tricky with EMIB. I've not done an exact count, but I think you'd need something like 15+ EMIBs; yes, EMIB is cheaper, but it doesn't seem to scale well, and I'd suggest the high-end stuff is exactly where you want to connect lots of things together.
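On the "how many links" question, the raw count for a fully connected mesh is easy to sketch. The 4-die layout is the standard EPYC/Threadripper package; the 8-die case and the per-link assumptions are hypothetical, and this counts logical links only, not the extra bridges each link might need:

```python
# A full mesh of n dies needs n*(n-1)/2 point-to-point links.
def mesh_links(n_dies: int) -> int:
    return n_dies * (n_dies - 1) // 2

print(mesh_links(4))   # 6: one 4-die EPYC/TR package
print(mesh_links(8))   # 28: hypothetical two-socket full mesh

# Wire-count contrast from the earlier post: a 32-bit inter-die
# channel vs a 512-bit HBM bus (data lines only, widths as stated).
print(512 // 32)       # 16x more data wires for the HBM bus
```

The quadratic growth of the mesh count is the scaling problem being pointed at: each die you add needs a link to every existing die, whereas a single wide HBM bus is a fixed (but very wide) point-to-point connection.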
Last edited by Corky34; 30-06-2018 at 11:42 AM.