The Pascal GP100 GPU is likely to feature HBM2 on the chip package.
Doing a large die on a new process is not a clever move - consider that so far we have no >200mm^2 14nm chips (and no 16nm TSMC chips at all!).
Of course 16FF+ is over twice as dense as 28nm, so they will achieve their needs without needing a massively large die. I expect Pascal to be between 300mm^2 and 400mm^2 (or to be multi-die on the interposer). Another factor is that HBM controllers and PHYs are far smaller than high-speed GDDR5 controllers.
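The 300-400mm^2 estimate above is just area arithmetic. A minimal sketch, assuming (as the post does) that 16FF+ offers roughly 2x the logic density of 28nm - the density factor and the 28nm starting area are illustrative assumptions, not measured figures:

```python
# Rough die-shrink arithmetic: porting a 28nm design to a denser node.
# density_gain = 2.0 is the post's "over twice as dense" assumption.

def shrunk_area(area_28nm_mm2: float, density_gain: float = 2.0) -> float:
    """Estimate the area of a 28nm design ported to a node with the given density gain."""
    return area_28nm_mm2 / density_gain

# A ~600 mm^2 28nm flagship ported at ~2x density lands around 300 mm^2,
# the low end of the 300-400 mm^2 range suggested above.
print(shrunk_area(600.0))  # 300.0
```

Of course a real port never scales perfectly - analog blocks and PHYs shrink far less than logic - which is part of why HBM's smaller controllers help.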
Never stopped NV before; I don't see why it would stop them now. NV have traditionally taken the larger-core-first approach, right up until GF100 turned out to be a poorly yielding, power-hungry mess, and they started releasing the x04 cores first to test the process and architecture. But if they think they can successfully launch big first, I think they'll go for it.
GP100 will almost certainly be over 500 sq.mm. That's the size NV aim for with their flagship die. There will be smaller versions of Pascal targeting different markets, but the big old HPC compute-targeted GP100 will be that big. NV have always done larger dies than AMD: between the 2900 (420 sq.mm) and the R9 290 (438 sq.mm), AMD didn't produce a die over 400 sq.mm, whereas every generation from Nvidia since the 8800GTX (484 sq.mm) has had a flagship part with a die size > 500 sq.mm.
Whether NV will decide to go large-core-first in Pascal will depend on their confidence in both the process and their design. They've executed pretty well on the last couple of generations though, so I'd imagine they're feeling pretty bullish....
So, this article says Pascal has "taped out" (http://www.theregister.co.uk/1999/07/14/what_the_hell/) and then, like many others around the web, goes on to describe getting first silicon. The two are not the same, and they are both fairly major milestones. Please don't confuse the two: the original Beyond3D "leak" doesn't mention first silicon, or even mask production, just tape-out.
Then the article goes on to say that Nvidia might be first to market with HBM2. Well, they might, or they might not - there's no information there either. If they have indeed just taped out, they still have to await mask production and first silicon before they can see how well their design works and start the debug/respin cycle, so I would say it's rather early to call even if we had full visibility of what both AMD and Nvidia are doing, which I expect no-one on earth has. But we do know that AMD will have production knowledge of interposer-based products very soon, so how come Nvidia is getting talked up? We know AMD have HBM2 stuff in the pipeline, so why are people talking as if they'll be forever stuck on HBM1 while Nvidia performs some leapfrog over them?
I dunno, I just find how people interpret this lack of news baffling.
Traditionally the big cores serve a limited enough market that they don't have to produce much more silicon than is needed for a batch of review samples. If they knock out the next GTX 960-segment card first, they'd better be in a position to make millions of them, and those chips have to sell at a healthy profit - so if anything, bringing low/mid-end parts out on a new process is more dangerous.
I'd love to know how pin-compatible HBM and HBM2 are. We know AMD have experience making hybrid MCs, and we know that HBM2 will also be 1024bit-per-stack path, so I do wonder if we'll get a Fiji revision not too far down the line which bumps the memory to HBM2, or indeed if that's even possible....?
I'll be another one to go on record saying something smells wrong with those rumours.
>500mm2 on 14nm (i.e. early process, multiple patterning, etc) in that time frame, 32GB HBM memory etc. Nah. Not impossible, but given cost is actually a factor in the real world, I'll believe it when I see it.
Also, 14nm? TSMC's process is known as 16nm.
WRT large die first, they may have done that in the past but haven't for quite some time: the first 55nm GPU was a die shrink of G92 (the 9800GTX+), then some of the smaller GT200 parts came on 40nm, then GK104 was first at 28nm.
If by the time HBM2 comes out we are using 16nm chips, then something with "just" 4000 shaders might look distinctly mid range.
OTOH, if AMD can "just" drop HBM2 onto a Fiji interposer, and then later drop a Fiji2 made at 16nm onto the same interposer and drop the lot into existing cards, how cool would that be?
Fudzilla thinks Fiji is HBM2 capable, but pinch of salt applied here: http://www.fudzilla.com/news/graphic...rt-hbm2-memory
They may not have done it on a new node for a while, but they've certainly gone large-die-first for each generation: G80, G92, GT200, GF100, GF110 - all the first part of their generation, all the largest part of their generation. I'd forgotten their low-end venture into 40nm before GF100, which makes their troubles with that die particularly inexcusable (imnsho!). In fact, looking back it's unusual for them to put a new architecture on a new node: I think GK104 is the first time they've done that for a long time. But then again, progression through nodes is running a lot slower now than it has for a number of years, so it's not entirely surprising that new nodes are starting to line up with new arches more regularly....
That would be pretty damn cool
There's a huge difference between uArch family and node though; releasing the flagship part first is nothing unusual for either company, but the flagship usually isn't a near-reticle-size die on an early node, especially one with potentially high initial cost per transistor. What really matters is that Pascal would be Nvidia's first part on the node, combined with a new uArch on such a huge die. Even Intel don't risk huge dies on a brand new node. Kepler and Maxwell both started with smaller dies and moved up; Maxwell in particular started with smaller dies on a very mature 28nm, then moved up.
Nvidia simply have no choice but to move to the next-generation node; they may just have to suffer low yields at first.
I imagine these will be exclusively Tesla products until yields improve, before Quadro cards and finally Geforce parts see a more widespread release.
There will be a smaller GP104 part to pick up the slack on the gaming side.
Moving to a new node is a given, that's not in dispute. But because of the way semiconductor manufacturing works, yields go down (faster than linearly) as die size increases; combine that with the typically lower yields early in the ramp of a new node, and large dies can easily be very unprofitable.
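The faster-than-linear point is easy to see with the classic Poisson defect-yield model, yield = exp(-D * A), where D is defect density and A is die area. A minimal sketch - the defect density below is an assumed figure chosen to illustrate an immature node, not real fab data:

```python
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson yield model: fraction of dies with zero fatal defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.002  # assumed early-node defect density, defects per mm^2
for area in (170, 300, 550):
    print(area, round(poisson_yield(area, D), 3))
# 170 0.712
# 300 0.549
# 550 0.333
```

So under these assumptions, more than tripling the die area (170 to 550 mm^2) more than halves the yield - which is why a ~550mm^2 die on a fresh node is such a gamble compared to a ~170mm^2 one.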
No-one I know of has any large dies on 20nm yet, and Intel are only just beginning to roll out ~170mm2 processors on their 14nm process (which has been shipping to consumers for ~9 months already) - but somehow Nvidia are going to do a ~550mm2 die on a brand new 16nm process (of which I'm not aware of a single shipping product yet) in a few months?
It's OK. They're going to photoshop out the woodscrews this time...