Quote:
Going super-aggressive to gain market share, we think.
Ooh, this looks promising. >5TF is pretty much equal to the 390 (assuming memory bandwidth isn't a bottleneck, which it shouldn't be), so this looks like 390 performance for 380X price (200USD in GBP + VAT = ~£170). The power usage is a little concerning, the 1070 does considerably more on 150W
Aren't we meant to get an AM4 update today as well? Odd that the embargoes would run at different times
So with an aftermarket cooler and taxes we are looking at an almost $260-280 card with 4GB VRAM? Even if it is 8GB, the problem is still that AMD failed to deliver again. I'll wait till all the benchmark results pour in.
Then why have they called it RX rather than R7 or perhaps R8, in keeping with their established performance categories? RX (10) implies it's a whole step above the R9 products :/ We don't need yet more confusion from AMD.Quote:
Originally Posted by hexus
However, very nice price. Just what a lot of people are looking for.
Q: Hex, do any of your writers/editors have any positions in nVidia stock or sponsored in any way by nVidia?
(I think it would be worthwhile being a little transparent so your readership is aware)
Benchmarks I've seen elsewhere suggest the performance is around Fury X/GTX 980 level, but more importantly for AMD, when run in Crossfire these things outperform the GTX1080 for two thirds of the cost. Assuming it's HBM memory I don't think it will be a bottleneck; the Furys handled 4K rather well despite only having 4GB.
Nvidia made a step forward; for AMD I can't say anything... yet.
A few weeks ago on the Chiphell forums they said the card was close to an R9 390X. Also, remember a single PCI-E six-pin connector only means the card consumes more than 75W, with a maximum of 150W (75W from the slot plus 75W from the connector).
If it's actually new architecture rather than just a rehash of an old card then it's well worth a number change. Nice to have something actually new, it's been a while.
Fingers crossed it's got enough bang for buck to be a success!
Looks promising. No idea where the 150W TDP is from though, the slides say "150W Power", which can mean a lot of things. Maybe it's better not to assume the worst case from that. 150W TDP with only a 6 pin power connector would be a pretty bad design after all. If this GPU turns out to be an overclocking monster, 200 bucks would be well worth it, but 150W TDP with only a 6 pin would mean close to no overclocking potential.
Here's a link from a very reputable site: http://www.theverge.com/circuitbreaker/2016/5/31/11826478/amd-radeon-rx-480-199-vr-ready-graphics-card
2 in CF outperforming a GTX1080... Polaris 10 is an amazing price to performance card! Can't wait to see the benchmarks when the NDA embargo lifts at the end of the month.
I'm very happy for the 480 bit to be new! It's the market category designator which AMD are breaking their previously sensible rules for - R7 was meant to indicate market segment, 200/300 etc. are (supposedly) the different generations and 50 70 etc. are the performance levels within that generation.
I would link to the AMD FAQ that mentioned this, but their FAQ points to a page not found. Smooth AMD :/
Well if AMD can deliver on R9 390X level performance at £165 to £210, that would mean Nvidia would have to have a GTX1060/GTX1060TI with similar performance for a similar price, and that should nicely drop the price of the GTX1070 to hopefully under £300.
36 CUs, probably 64 shaders per CU to make a total of 2304. IIRC shaders tend to do 2 flops per cycle, so 5TFlops would come from clock speeds around 1.1GHz. All sounds utterly plausible. That level of compute suggests 4k isn't a serious target for this card - I'd guess it's targeting 60fps at 1080 ultra/1440 high. 4GB of VRAM on the entry level cards is therefore more than sufficient, I'd suggest (particularly if they've improved the compression algorithms again). An overclocked 8GB card might well make a good fist of 4k with medium to high settings...
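For anyone wanting to check the back-of-envelope maths above, here's a quick sketch (Python). The 64 shaders per CU and 2 FLOPs per clock figures are the assumptions from the post, not confirmed specs:

```python
# Peak throughput estimate: shaders x FLOPs-per-clock x clock speed.
# 64 shaders/CU and 2 FLOPs/clock are assumptions carried over from the post.
def peak_tflops(cus, shaders_per_cu=64, flops_per_clock=2, clock_ghz=1.1):
    shaders = cus * shaders_per_cu          # 36 * 64 = 2304
    return shaders * flops_per_clock * clock_ghz / 1000.0

# 36 CUs at ~1.1GHz lands right on the quoted >5 TFLOPs figure
print(round(peak_tflops(36), 2))  # 5.07
```

Run it the other way (5 TFLOPs / 4608 FLOPs-per-clock) and you get the ~1.1GHz clock estimate in the post.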
Yeah, FLOPs don't tell us anything relevant. What we will have to wait for are independent benchmarks.
One of the leaked slides suggested just under the TFLOPs rating of an R9 390X.
The clockspeed has been pretty much confirmed as 1.266GHz by a few prior leaks and somebody taking a picture at the launch event.
The thing is, a 36CU design does seem a rather odd number of shaders - so we don't even know if AMD has a 40CU design but has disabled CUs for yields. They did the same for Tonga. It might mean they have something waiting for the GTX1060/GTX1060TI when it is released.
Sure they do, they tell us how floppy something is.... ;)
Maybe i should get my coat. :)
Being more serious wasn't there talk (rumors) that AMD's first showing on the new node size would be targeting the lower & middle range versus Nvidia targeting higher & middle?
If so, and it seems like that's how both sides are going, then we won't really know how both sides showings compare until next year, that's unless AMD's lower/mid range can best Nvidia's higher/mid.
I think going by the GTX1070 and GTX1080 threads, AMD is pricing it too cheaply. According to some in the other thread the GTX1070 is a veritable bargain at £320 to £400 and the best jump ever.
So instead of say £200 for R9 390X level performance, AMD should have a minimum price of £230 to £240, as that should match a GTX1070 in price/performance and make it better value than a Fury X!! It also will be £20 cheaper than an R9 390 and cheaper than an R9 390X.
AMD are a stupid company - instead of charging less they should be charging more so that their shareholders should be happier. We as PC gamers and enthusiasts should pay more to make this happen.
Value for money and expectations are for mugs.
Sarcasm.
It should be interesting to see how an upper end card from what AMD consider the volume market compares to what Nvidia considered the bottom end of the high end, if that makes sense, basically how the cross over cards from each side compare may tell us a little more about the price/performance when each side get around to releasing cards targeting the other parts of those markets.
Not to sound rude but I thought that was insinuated by naming it the 480 and not the 480x?
I'll wait for the benchmarks but I'm kind of not feeling the love like everyone here is. I'm sure it'll be a good part for the price, but I was hoping it would just be a good part and then the price would make it outstanding.
New arch, new node process which should enable much higher clocks, and some sites are reporting that it'll be about 390 performance? It's another incremental upgrade, or it seems that way.
It looks promising, however, I was expecting lower power, somewhere near 120W.
But we will see.
From the graphs, this seems like half the speed of the GTX 1080 for almost a third of the price.
Now, nVidia will answer this with the 1060. But at what price? $249?
I'm a champion of the smaller guy so want AMD to succeed. But you know the general opinion amongst PC gamers who don't follow the tech so closely... "yeah but nvidia are awesome / better drivers / better optimised games / I trust nvidia and have never used AMD / value-added features with nvidia (CUDA / ShadowPlay etc)..." blah blah blah... there's a collective general consensus that you're taking a risk by choosing AMD, as such they need to build confidence in the brand and have better pricing.
AMD's marketing sucks, you've got to give it to nvidia they know what performance and price they need to hit to just make people happy enough...and rally the fan boys to champion their products (and profit margins). They are a well oiled PR machine.
It is partly because the PR is not proactive in exposing Nvidia problems, and the R9 290X, R9 285 and Fury X launches were bungled. The R9 290 series launch had Nvidia sending out reference cards for free to highlight the throttling in quieter fan modes, for example, and AMD should have spent more on a better cooler. The R9 300 series rebrands were handled not too badly though, and it just shows you that if they had started that way, with the R9 290 series having better cooling, it would not have been perceived as bad.
However, AMD has hired this lady from Nvidia:
http://www.overclock3d.net/articles/...vidia_to_amd/1
Quote:
After 13 years working for Nvidia Leslie Pirritano has moved over to AMD, giving the Marketing department over at AMD a huge potential boost moving forward.
From 2006 to 2009 Leslie acted as Nvidia's Head of Developer Relations, with her job title changing to "Your Best Friend" from 2009 to 2016. Over her time at Nvidia she has worked with many of the world's top video game developers and created co-marketing programs which have benefited Nvidia in a huge way, creating many GeForce game bundles and establishing Nvidia's worldwide digital redemption process for game codes.
If you remember buying a Nvidia GPU over the past few years and receiving a free game with your purchase it is likely that Leslie Pirritano is responsible for that. Alongside that she also plays a huge role when it comes to creating relationships with game developers, helping them to get the help that they require getting their games to run well on PC hardware.
Now that Leslie has moved to AMD her existing relationships with developers and her expert negotiating skills will allow AMD to forge stronger relationships with developers and hopefully allow them to strengthen their position in the GPU market.
Yesterday AMD released a new AMDiy video, where Leslie upgraded from three water-cooled EVGA GPUs (Nvidia) to three AMD R9 Fury X GPUs.
https://i.imgur.com/HFfWscy.jpg
PCB looks very short indeed - the cooler overhang seems similar to what Nvidia have done in the past.
An RX 480 + Zen could be a very tempting prospect. Imagine if this pricing is mimicked on the AMD cpu front?
Like many I really want AMD to succeed and I do think with the nano they are moving along right lines.
Nvidia's 1070 makes sense at 1440p (but not quite 4K) and is overkill for 1080p. I would prefer to have a single graphics card rather than Xfire/SLI. But my experience of two-card systems (admittedly a couple of years ago now) is that a single-card system is more stable and quieter. Couple that with the fact I like to water cool my GPU (and overclock the heck out of it) and a dual-card system quickly gets too pricey.
nice idea, but I don't see any driver for this on the CPU side - we already have way more CPU power available than most people really need. The driver for pushing GPU prices down - according to AMD, at least - is to lower the entry point for an acceptable VR experience. The generally accepted base entry cost for VR at the minute (GTX970/R9 390) is the same as the top-end mainstream CPU from Intel (i7 6700). I'm pretty sure you can get good VR experiences from an i5 @ < £200. The CPU side isn't really such a barrier to VR adoption as the GPU side. If AMD can genuinely get VR-capable cards coming in at well under £200, that's quite a market shift.
EDIT:
If the RX 480 uses a single 6pin PCIe cable, what money on an RX 480 X2 down the line... ;)
AMD changed the command processor with Polaris, so it will be interesting to see if Polaris under DX11 is less CPU limited than the previous GCN based cards. Nvidia did very good work with their DX11 drivers.
XFire and SLI have issues with the newer, more console-optimised engines, so I am not sure whether a dual GPU card would make as much sense.
Better if AMD refines Polaris 10 a bit more with higher clockspeeds like they did with the HD4890.
it's a strategy that worked for them in the past - mid-range chip first then unleash a dual-gpu card. And we know the top board power of the RX 480 but not the average or TDP - if it's actually a 100W card then a dual card could run with a single 8pin connector and potentially be competitive with the 1080. Food for thought ;)
As to the naming - who knows? Perhaps RX is the top segment of a new alphabetic system (like nvidia's G/GT/GTX from a few years back). RX for the 480 and 490, perhaps RS for the 460 and 470, RE for the 450 and below (representing eXtreme, mainStream and Entry)?
$199 = ~£137
Even with the US/UK 'conversion' - that's spectacular for 980-beating performance. Period.
It comes in at around the £180 mark after tax and a little rounding up. Fingers crossed for the benchmarks being as good as we hope.
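For anyone checking the currency sums in the posts above, a quick sketch (Python). The exchange rate here is an assumption, roughly what $199 = ~£137 implies:

```python
# US MSRP -> rough UK price: convert at the exchange rate, then add 20% VAT.
# The 1.45 USD/GBP rate is an assumption implied by $199 = ~£137.
def uk_price_gbp(usd_msrp, usd_per_gbp=1.45, vat=0.20):
    return usd_msrp / usd_per_gbp * (1 + vat)

print(round(uk_price_gbp(199)))  # 165 - retailer rounding/markup then pushes it toward £180
```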
Doom, The Division, ROTTR (DX12), Hitman (DX12), Quantum Break, Forza (and basically ANY UWP game), Batman: AK, Fallout 4, Just Cause 3, Rainbow 6 Siege, Wolfenstein The Old Blood/New Order, Assassin's Creed Syndicate....
....all recent big releases where SLI/CFX is either completely broken or not worth the price/performance. Unless something changes soon, multi-GPU systems are going the way of the dodo :(
But it won't work as well now - the engines have limitations as they are console focussed.
If you look on the more enthusiast forums, people are starting to get annoyed with dual card solutions as performance seems more and more variable. You have more and more people going the way of single GPU solutions. Look at this AT article:
http://www.anandtech.com/show/9874/a...elayed-to-2016
AMD and Nvidia are pushing VR dual card solutions since they can use a single card for each eye.
If AMD uses a dual GPU RX480 solution against the GTX1080 or a factory overclocked GTX1070, AMD would have to make sure they have profiles out at every launch, and prior history says that won't happen, so I think it won't be a good strategy.
Nvidia will just say we might have lower FPS but we have better frametimes.
I have to say CFX on The Division is actually working well for me now. After playing around with settings (and not using the AMD profile for it but AFR friendly instead) I'm seeing about an 80% gain over having CFX turned off. But I do feel that single GPU solutions are always going to have the edge cause I wanted this level of performance from day 1 not months later.
Having said that I'm still really interested to see how DX12 changes the multi GPU landscape cause there will be a lot less reliance on drivers to get the best out of multi GPU setups so it could just be a matter of staying patient and waiting for DX12 titles to arrive.
I was thinking this morning that I should re-bench on 3DMark to see how the old R9 295X2 compared to the 1080, and was pleasantly surprised to see it just about matching the OC numbers Hexus put up for the 1080 in the Fire Strike Ultra test.
Yep, agreed. I think AMD might be better served seeing if they can tune Polaris 10 in an HD4890 kind of way. The 36CU design with 2304 shaders makes me wonder if the RX480 is a die-salvaged part, to maximise volume (and maybe performance/watt).
Supposedly there are big changes in DX12, like explicit multi-adapter, which will help with multiple GPUs, but it means it will be more in the hands of devs to optimise for this.
It means only some of the bigger devs will probably have the resources, and I can just see AMD or Nvidia sponsoring games and making sure they can screw up card support either way for their competitors.
What sort of performance is the RX 480?
Around gtx 970/980 level.
You're a prophet ;)
http://cdn.videocardz.com/1/2016/06/...-3-900x532.jpg
Source: http://videocardz.com/60780/amd-announces-radeon-rx-480
I've not done an in depth look at DX12 benchmarks in AotS but if that image you posted is close(ish) to correct wouldn't the RX 480 be closer to GTX 960 territory?
If two RX 480s get around 62FPS, is it safe to assume that a single card would be around the 30FPS mark? If so, isn't that closer to GTX 960/970 territory than 970/980?
Yes, funny how I was never aware of that, it seems, but AMD would be smoking weed if they ever thought a dual RX480 card would be that viable.
It would only be viable as a vr card - for normal gaming it would not be as viable.
Plenty of engines don't support XFire or SLI - you might want to read that AT article in detail. Probably because consoles only use a single GPU.
Lots of complaints on forums of XFire and sli support being crap and these are from people who ponied up the cash. Heck even one of my workmates runs dual water-cooled r9 290x cards and he feels the same.
It would be a disaster of a card - as soon as AMD cannot provide day-one XFire profiles, the card will be an R9 390X at most.
The moment it hits an engine with no support, same problem.
Even the Pro Duo was more targetted toward vr game devs with its partial pro support.
Even though DX12 will be far better suited towards multi GPU it will take time for support to be added and by that time Vega or its successor will be probably available.
It would be better IMHO for AMD to concentrate on single cards, making support as perfect as possible, and try to honestly work with a few more AAA devs. Nvidia having its name splashed on so many games is not helping AMD. The TR franchise went from good PR with AMD to making them look silly. NV PR just mocked them.
I think the point could be to show how wide the gulf in price between AMD and Nvidia is. We have a £165 card offering 65-70% of the performance of a £650 card?
Unless we know the XFire scaling in the benchmark it could be 100% scaling or 50% - the joys of multicard setups.
What CAT said. If dual-gpu scaling in modern engines is poor (note the graph says the RX480 is only hitting 51% GPU utilisation) a single RX480 could easily be getting 40fps+. It looks like AotS is well programmed for using multiple GPUs efficiently though - low utilisation across multiple GPUs returning overall higher framerates is pretty impressive.
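The point about scaling can be made concrete with a quick sketch (Python). The efficiency figures are illustrative assumptions, not measured CrossFire numbers:

```python
# Infer single-GPU FPS from a dual-GPU result under an assumed scaling factor.
# dual_fps = single_fps * (1 + scaling), so invert:
def single_gpu_fps(dual_fps, scaling):
    return dual_fps / (1 + scaling)

# 62 FPS from two cards, under different assumed scaling efficiencies
for label, s in [("perfect", 1.0), ("good", 0.8), ("poor", 0.5)]:
    print(label, round(single_gpu_fps(62.0, s), 1))
# perfect 31.0, good 34.4, poor 41.3
```

So the same 62 FPS dual-card result is consistent with anything from ~31 to ~41 FPS for a single card, which is why the single-card benchmarks matter so much.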
At least we only have to put up with a few more weeks of speculation and rumors, do we know if the RX480 is going to be the top end of Polaris 10?
Well the CPU used was a Core i7 4770 non-K in an H87 motherboard supposedly.
Should be powerful enough on the CPU front then. x8/x8 lanes should also be enough considering the move engines. You'd have thought that if they were being held back by something else they'd want to get around it because with the 1080 that heavily loaded it wouldn't increase much, while the AMD setup should stand to gain even more fps. Very mysterious - more so that they would highlight low GPU utilisation as a point on the slide - if anyone at Hexus could ask what the point of that utilisation figure is I'd be very grateful, perhaps we're missing something fundamental.
I'm sure if you ask Asus nicely they may oblige...
Link of awesome
This article on Fudzilla does some speculation on the GPU utilisation thing.
So at this point the differences can be explained by that. AMD is using AOTS since it favours them anyway and async compute is deactivated on Nvidia cards.Quote:
As many have noted, Ashes of the Singularity is a DirectX 12 strategy game that uses procedural generation for rendering textures, along with dynamic game character unit composition depending on the map situation. These rendering features will prevent any two running instances of the game from ever being identical.
I think it is more interesting that the Doom dev said Vulkan and DX12 were somewhat inspired by Mantle, and said you didn't need a $700 card to run the game well - and this was after Nvidia showed off the GTX1080 running Doom under Vulkan at the GTX1080 launch. Hopefully they won't get Nvidia annoyed with them, LOL.
Anyway, I watched the video again - the utilisation figure is probably because he is trying to say the GTX1080 is almost at 100% usage while the XFire solution is barely being taxed and actually has more performance in the tank.
Not that I care much for XFire or SLI solutions anyway.
That was my first thought, but odd that they report 62, and that they'd miss a chance to beat it comprehensively. Vsync is off, but that doesn't prevent a frame limiter.
According to the Fud link it was an i7 5930K used, so even less chance of a bottleneck, though I think they are taking that from the Stardock site. It seems the heavy batch portion of the benchmark/setting (whatever that is) used up to 92% GPU on the AMD setup. I'm just looking through the purported bench runs now at Stardock and I'm not quite finding the ones AMD seem to be reporting.
Then again this reddit discussion started by AMD_Robert seems to be saying the GTX1080 is botching the rendering "The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly"
I'll be so glad when tech sites can get their hands on a RX480 to settle things. :)
But realistically 58 and 62 FPS are both within 5% of 60FPS, so either way I wouldn't read into PR bumf that much.
Also, AMDMatt on OcUK has been looking through the database to find the exact runs - when he does I will post them here.
Wait, what, the AMD cards do more work running AOTS??Quote:
The elephant in the room:
Ashes uses procedural generation based on a randomized seed at launch. The benchmark does look slightly different every time it is run. But that, many have noted, does not fully explain the quality difference people noticed.
At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly. Snow is somewhat flat and boring in color compared to shiny rocks, which gives the illusion that less is being rendered, but this is an incorrect interpretation of how the terrain shaders are functioning in this title.
The content being rendered by the RX 480--the one with greater snow coverage in the side-by-side (the left in these images)--is the correct execution of the terrain shaders.
So, even with fudgy image quality on the GTX 1080 that could improve their performance a few percent, dual RX 480 still came out ahead.
As a parting note, I will mention we ran this test 10x prior to going on-stage to confirm the performance delta was accurate. Moving up to 1440p at the same settings maintains the same performance delta within +/-1%.
OK, somebody on OcUK might have found the entry from the same system but at 2560x1440 using a single card:
http://www.ashesofthesingularity.com...d-22666a7149f5
Actually settings might not be the same as the OcUK thread - I checked them.
Why are the GPUs at 75% load?
Is this measuring the pair of cards pulling the same power as the GTX1080?
Interesting comments from one of the AMD technical marketing guys on Reddit regarding why they only really showed the mGPU comparison:
https://www.reddit.com/r/Amd/comment...y_controversy/
Quote:
Originally Posted by Reddit member
Quote:
Because that is what we sample GPUs to reviewers for. Independent third-party analysis is an important estate in the hardware industry, and we don't want to take away from their opportunity to perform their duty by scooping them.
Quote:
I don't know how to explain it another way. Posting sGPU numbers hurts their reviews and their traffic. mGPU sort of doesn't. That's it.
Under lighter loading the overheads from the DX12 mGPU magic and lack of game optimisation for this new fangled mGPU mode are significant. This should improve when the tech matures. Under heavier load you see utilisation reach much closer to normal.
Yeah, on the outside this sounds nice of them. But of course they could just reduce the NDA if they wanted to let the tech sites have the glory of the single GPU benchmark reveal.
Power 150w.
when 2x it is 300w....hmm..
I'm not sure why they would be worried. This chip looks very promising and it's not as if anything AMD can do has an effect on Nvidia performance.
Great !
Looks like UK retailer pricing is coming in at around £230 for the 8GB model, so almost exactly the $ price converted to £ with VAT/import adjustments.
That's a tough sell against last gen products if performance is at the lower end of expectations - nvidia could be doing AMD a huge favour if they managed to cut inventories.
It is what I feared:
http://forums.hexus.net/hexus-news/3...ml#post3657842
The cheapest GTX1070 is £365, which means a £130 premium for a GTX1070, so AMD AIBs have no incentive to price it too low, which is annoying.
TPU mentioned this tidbit of info in their GTX1070 review:
So that would be around R9 390X to Fury level performance.Quote:
AMD's upcoming Polaris cards will be nowhere near the GTX 1070 in terms of performance. Rather, expect RX 480 to perform about 20-30% slower. But AMD's $199 pricing for the 480 could stir things up, so if you don't need a new card immediately, maybe wait a few weeks and see how things pan out, which would also allow you to see how the custom GTX 1070 designs by board partners turn out.
If you look at the model which was leaked, it was a Sapphire RX480 Nitro with shiny LED lights - the R9 390 Nitro is around £255 to £260.
Even adding a £20 premium still sadly makes it the fastest sub £300 card. GTX1070 pricing is really screwing things up now.
So around 10% lower price and around 10% to 20% better overall performance, so it's an improvement in price/performance over a GTX1070 (which at £365 is probably worse value than an R9 390), but this generation is starting to look rather meh overall.
OTOH, the Nitro models are some of the more expensive AMD AIB partner cards out there, so hopefully the models from Powercolor etc. are nearer to £200.
This is the Nitro model which was leaked:
http://videocardz.com/60992/sapphire...nitro-pictured
http://cdn.videocardz.com/1/2016/06/...80-NITRO-2.jpgQuote:
SAPPHIRE Radeon RX 480 NITRO 8GB
Thanks to our friends at HardwareBattle we get to see the first Radeon RX 480 with a custom-designed cooler. The new Sapphire NITRO card features a brand new silver cooler shroud in a dual-slot, dual-fan design. The side of the card features an LED-illuminated Sapphire logo. According to the source, the colours will change depending on the fan speed, GPU temperature or a custom profile configured by the user.
Most Radeon RX 480 cards that will be released on June 29th will be equipped with 8GB GDDR5 memory. 4GB models are also expected, but availability of such cards will depend on AIBs.
According to our information, some European stores have already received their stock, and they are ready to sell Polaris 10-based graphics cards. In China a company called Dataland (Chinese PowerColor brand) was already selling places on reservation lists.
Meanwhile some reviewers already have their cards for tests, so we expect to see a lot of leaks in the coming days.
http://cdn.videocardz.com/1/2016/06/...-1-900x367.jpg
http://cdn.videocardz.com/1/2016/06/...21326_7469.jpgQuote:
SAPPHIRE RX 480 Reference design
Additionally, the render of Sapphire’s own RX 480 reference design was also revealed. It appears that the front shroud will be painted in white, while the back will receive a fancy backplate. The Radeon side logo will be replaced by Sapphire’s.
So Sapphire is launching two RX480 cards - the reference one and the Nitro ones. It does seem the reference card is copying some of the Nvidia ones and having an illuminated logo.
I was looking at the Sapphire 8GB 480 non-Nitro, which is £233 at one retailer.
Yeah - it looks like those bling cases from a decade ago. The reference model actually looks OK!! OTOH, if that is the reference model for £235, I feel this is the third 14nm/16nm launch which will disappoint me.
AMD has such a big chance here - only they can screw it up. Heck, if I were them I would even do some rebates to AIB partners to have at least some of the 8GB models closer to £200, as a fair number of people on other forums have not been happy with new gen pricing so far.
Maybe I'm missing something, but why does it seem like non-reference Nvidia cards are even more expensive than the overpriced founders edition cards?
https://www.scan.co.uk/shop/computer...force-gtx-1080
True. :P
The pricing just seems ridiculous compared to the USD RRPs.
Not sure what to make of this thread:
https://www.reddit.com/r/Amd/comment..._and/?sort=old
If you look down the page, a chap who is an editor of an Italian tech site says the full Polaris 10 chip has 3072 shaders??
That would mean the RX480 is only 75% (36/48) of the full die, which is rather low. The GTX1070 is similar, but AMD don't usually cut that much, do they? Maybe GF have, or were anticipated to have, bad yields?
Still, whatever else I could accuse Nvidia of, they usually execute quite well, but the launch of GP104 seemed very rushed. Maybe they got some leak of the full-spec Polaris 10 chip? Because 3072 shaders at the same frequency as the RX480 would be very close to the GTX1080, which would be very impressive for a chip less than 74% of GP104's size (232/314), even if GF's 14nm is denser than TSMC's 16nm.
But I agree with kalniel that 2560 is far more likely unless AMD was really worried about initial yields.
The 1070 is also cut by 25%, which is unusual even for Nvidia, but then it's a larger die so you'd expect yields to be lower. 3072 does seem a bit far-fetched, but we haven't had confirmation of die size besides pixel-counting presentation slides AFAIK.
It depends - GF/Samsung 14nm is meant to be twice as dense as TSMC 28nm, so technically a direct shrink of Hawaii would be a bit smaller than the 232mm2 figure touted. You need to consider Polaris won't use a 512-bit memory controller and probably lacks the DP compute abilities of Hawaii too. Remember 3072 shaders is not massively higher than the 2816 shaders in Hawaii. AMD has also increased shaders massively on the same node in the past - the RV670 in the HD3870 had 320 shaders and the RV770 in the HD4870 had 800 shaders, yet on the same 55nm node the die size only went up from 192mm2 to 260mm2. Then you had Tahiti with 2048 shaders and a 384-bit memory controller at around 365mm2, while Hawaii had 2816 shaders and a 512-bit memory controller and was only 438mm2.
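A naive sanity check on the density argument (Python). The 2x density factor for GF/Samsung 14nm over TSMC 28nm is the assumption from the post; real designs never shrink this cleanly since memory PHYs and other uncore scale worse than logic:

```python
# Straight-shrink estimate: die area divided by the assumed density gain.
hawaii_mm2 = 438.0    # Hawaii (R9 290X/390X) die size at 28nm
density_gain = 2.0    # assumed 14nm-vs-28nm density factor from the post

shrunk_mm2 = hawaii_mm2 / density_gain
print(shrunk_mm2)  # 219.0 - comfortably under the touted 232mm2 for Polaris 10
```

Which is why a 3072-shader Polaris 10 in 232mm2 is at least arithmetically plausible, even if unlikely in practice.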
I think it is quite possible,but whether AMD went that way in reality is another question.
Some leaks from Chiphell. Might be fake.
https://www.youtube.com/watch?v=O_qg2PITCts
http://s25.postimg.org/xrouk7i0v/Untitled1.jpg
Quote:
Official drivers installed successfully with a legitimate driver signature
Radeon RAGE 67DF:C7
4GB GDDR5 memory
@ 1080MHz core clock
This test sample card has 8+6pin power
Supposedly he has the 4GB version of the card.Quote:
Temperatures: the expected retail version at 1266MHz is said to run around 70 degrees; this is a cool card, sitting between 40 and 50 degrees
There will be a new BIOS update; for now this sample runs at 1080MHz
http://i.imgur.com/1WgWKim.jpg
http://i.imgur.com/8OI8uYX.jpg
http://i.imgur.com/GOrt5eB.jpg
DG Lee (a Korean modder who is reasonably well known and leaked the picture of the 29th June NDA) says their example runs at 1.2GHz+, but this is probably the 8GB version.
Different clock speeds for the 4GB vs 8GB card? But without a change in nomenclature? I was already annoyed with AMDs changing name scheme again, but that would be very confusing. I could imagine a 480 vs 480X having different clock speeds. Guess we'll have to wait for retail packaging to leak to find out.
On the other hand.. the RAGE moniker has history! I remember the Rage 128 Pro (but went with a TNT2 Ultra instead).
and for those too lazy to type it, the bit.ly shortened link points to http://www.pcgamer.com/what-to-expec...ng-show-at-e3/
Take this with 10 tonnes of salt with an additional sprinkling on top:
http://www.tweaktown.com/news/52553/...980/index.html
http://i.imgur.com/vlvcbxM.jpg
30% OC vs <10% of the 1070? Hmmm.
Stock speed result looks feasible. But 30% OC result from a 10% increase in clocks from 1266 to 1400? I don't buy it.
edit: Ah, they're going from 1080 stock speed to 1400. OK, that's more reasonable. It would imply the 8GB 1266 model should give a 3744 result, ie between Fury and Fury X. Quite impressive. Put me down for an order with some cash for a FreeSync monitor to go with it :)
Assuming it's true it looks like AMD could be going back to their (at the time) hugely successful small die strategy with excellent value cards. If not rushing out the halo cards with early, poor yields means they can bring better value to the mid-range market then it could be a big win for them IMO.
Maybe someone at RTG has taken a look at what's worked and not worked for AMD/ATI over the years, and given how expensive it is now to design several new dies, it probably pays to concentrate on the areas they're more capable of capitalising rather than butting heads with halo products early on.
As I've said before, I'm not all that bothered about 200fps at 4k, ultra settings on a game which still looks like it could have been released 10 years ago so this sort of thing really appeals to me. That, and recently I tend to actually play the games I've bought about a year or so later so it gets easier to run them anyway. :P Even incremental driver improvements help a lot with that.