My problem even then is that there seem to be no real architectural gains (clock-for-clock) over earlier GCN, and that would likely push power through the roof.
I can't remember where I read/heard it, but apparently some of the architectural changes require software modifications before they show a benefit over e.g. Fury - either in the game engine itself, in the driver abstraction layer, or in the driver on a game-by-game basis (e.g. for older games written without Vega in mind). Hopefully per-game driver work isn't the only solution, for obvious reasons.
While I'm not about to jump on the 'drivers are going to ~double performance' bandwagon, like I said earlier it really seems like something isn't yet working correctly in gaming workloads as performance just doesn't make sense. Even ignoring all of the architectural improvements and assuming this is just a scaled-up Polaris, or higher-clocked Fury, performance just isn't where it should be.
In gaming workloads I think it's reasonable to expect Vega to be behind GP102 on a perf/mm2 basis given the stripped-back nature of the latter, and the same thing for perf/watt. But in absolute performance terms, I was expecting something like double Polaris 10 at a minimum which would mean ballpark 1080Ti. Just with higher power draw.
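Quick back-of-envelope in Python on paper FP32 throughput (the clocks are the usual quoted boost figures, so treat them as assumptions rather than measured numbers) - double Polaris 10 does land in 1080 Ti territory on paper:

```python
# Rough FP32 back-of-envelope: shaders x clock x 2 FLOPs per ALU per clock.
# Clocks here are the commonly quoted boost figures, so treat them as assumptions.
def tflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1e6  # TFLOPS

polaris_10 = tflops(2304, 1266)   # RX 480, ~5.8 TFLOPS
vega_fe    = tflops(4096, 1600)   # Vega FE boost, ~13.1 TFLOPS
gtx_1080ti = tflops(3584, 1582)   # ~11.3 TFLOPS

print(f"2x Polaris 10: {2 * polaris_10:.1f} TFLOPS")
print(f"Vega FE:       {vega_fe:.1f} TFLOPS")
print(f"GTX 1080 Ti:   {gtx_1080ti:.1f} TFLOPS")
```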
Well, it could be that performance/clock has had to go down slightly so they can clock the GPU higher?? IIRC, Pascal is meant to be slightly worse than Maxwell at similar clockspeeds and shader counts.
Good point, I'm just assuming (perhaps incorrectly) that base IPC would be at least equal to Polaris. In the presentation slides, AMD claim higher clocks and IPC, but this is a bit of a nebulous claim and could just refer to rapid packed math.
At a high level, it seems the width of an NCU is the same as a CU so theoretical peak IPC should be about the same, but perhaps they've slackened timings or something to hit higher clocks? TBH I've not read that much in depth about either Pascal or Vega in this regard.
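The way I understand the per-CU sums (take this as my assumption rather than anything confirmed): the FP32 width per clock is unchanged, and rapid packed math only doubles the FP16 rate.

```python
# Peak per-CU maths: 64 ALUs, each doing a fused multiply-add (2 FLOPs) per clock.
# The FP32 width is assumed to be the same for a Fiji/Polaris CU and a Vega NCU;
# rapid packed math doubles the FP16 rate by packing two FP16 ops into each FP32 lane.
ALUS_PER_CU = 64
FLOPS_PER_FMA = 2

def per_cu_gflops(clock_mhz, fp16_packed=False):
    rate = ALUS_PER_CU * FLOPS_PER_FMA * clock_mhz / 1000  # GFLOPS per CU
    return rate * 2 if fp16_packed else rate

print("Fiji CU, FP32 @ 1050 MHz:     ", per_cu_gflops(1050))
print("Vega NCU, FP32 @ 1600 MHz:    ", per_cu_gflops(1600))
print("Vega NCU, FP16 packed @ 1600: ", per_cu_gflops(1600, fp16_packed=True))
```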
AMD has now confirmed the actual die size is between 484mm² and 529mm²:
https://twitter.com/GFXChipTweeter/s...70308694822913
That is quite close to GP102, and it's a dual-use GPU, so performance between GTX 1080 and GTX 1080 Ti level would be quite good if better coolers and more gaming-optimised drivers can add extra performance.
Edit!!
Gamers Nexus has their review up:
http://www.gamersnexus.net/hwreviews...o-soon-to-call
There seem to be driver issues even outside gaming, and some of the non-gaming scores might also hint at the drivers needing to improve.
Second Edit!!
Apparently the memory might be downclocking too.
So is there anything that the card is actually good at?
When I heard about the release I just assumed it was going to get used for machine learning as there is probably quite a market right now, and ignored it.
Mind you, just being in stock seems like a unique selling point atm
Meh, I took the money I could have spent on Vega or a 1080 and blew it on a 3D printer. The kit just turned up and I'm still assembling it; so far it feels like a far better toy than a graphics card that does the same as my old one but a bit faster.
In the SPECviewperf12 testing it's at least 60% faster than Fiji*, and generally a lot more. Given the clock speed advantage was less than 40%, it's clearly better designed for compute and professional workloads. In those tests it also looks a lot more consistent than the Titan Xp, which leaps about all over the place compared to the Nvidia pro cards.
As CAT says, it's not an out-and-out pro card because it doesn't have certified drivers, but it does perform fairly consistently like a pro card, including frequently beating a Quadro P5000 that costs twice as much. There are also - again as CAT has already said - a lot of questions about the drivers, including the fact that it appears not to be using tile-based rasterising: although I've seen debate as to whether that's because it's not available in the driver, or because it's choosing not to as it thinks TBR would be slower/less efficient.
I've already thrown my tuppence in on this, but there is one other thing to consider - the 1080 Ti is currently £700, give or take. If Vega FE had launched with 1080 Ti gaming performance, how long do we think the 1080 Ti would've stayed at £700?
* that 60% minimum is only in one test out of 9 - in two others it's 75% faster, and in the rest it's 100%+ faster. That's a big architectural leap, in pro workloads, at least.
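To put rough numbers on the clock-vs-performance point (the ~1440 MHz sustained Vega FE clock here is my assumption from review figures, so a sketch rather than anything definitive):

```python
# If performance scaled only with clock, a ~37% clock uplift over Fiji would cap the
# gain at ~1.37x; seeing 1.6x-2x in SPECviewperf implies genuine per-clock gains too.
# The ~1440 MHz sustained Vega FE clock is an assumption based on review figures.
fiji_clock = 1050
vega_fe_clock = 1440

clock_scaling = vega_fe_clock / fiji_clock
for observed in (1.60, 1.75, 2.00):
    print(f"{observed:.2f}x observed -> {observed / clock_scaling:.2f}x per clock")
```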
What's this thing about the certified drivers? I've heard it a few times but I'm not sure what it refers to - perhaps certified by some of the software companies for use with their software?
I'll be honest, I'm not 100% sure myself, but I'd guess that's what it is. Or maybe some companies have their own certification schemes (like WHQL for Windows in general). Or perhaps it is just WHQL itself. I'm sure CAT will appear and fill in the gaps
For professional users, and particularly those in big corporations, I can see the benefits of having properly certified drivers. OTOH I can see many self-employed creatives deciding that it's not entirely worth their time and effort worrying about driver certification, particularly if they typically have the kind of workloads where the Vega FE comfortably beats the twice-as-expensive Quadro P5000.
AFAIK a FirePro Vega card with certified drivers will land eventually. I've heard it mooted in quite a few places that the Frontier Edition is literally a shareholder launch - the best they could get out before the end of June so they could tell shareholders they'd hit their release target. I'm also intrigued that it's a 16GB card - that means 2x 8GB stacks, whereas consumer Vega was rumoured to be an 8GB card. Makes me wonder if the 4GB HBM2 stacks arrived later than the 8GB, and they had to release a prosumer card first as the 16GB-equipped cards would be too expensive for the out-and-out gaming market...
EDIT: if the Quadro page is anything to go by the certification is on a per-application basis: http://www.nvidia.co.uk/object/quadr...artnerSelected
EDIT 2: AMD have a search form for their certified drivers: http://support.amd.com/en-us/downloa...tion/certified
ok back to zen ...
for the ryzen cpu, higher memory speed is better? but only with tight timings? cl14-15? just wondering why they're pushing a brand of mem on a diff forum @3400-3600, cl16-18
or am I missing the point?
Single-rank memory with Samsung B-die chips is more likely to be able to run at higher speeds with decent latencies.
I think I might have posted a few pages back some tests with speed and latencies.
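Rough sketch of why people still chase tight timings at high speed (simple first-word latency sums, nothing Ryzen-specific assumed):

```python
# First-word latency in nanoseconds: CAS latency (in memory clocks) divided by the
# memory clock, which for DDR4 is half the transfer rate. Lower is better.
def cas_latency_ns(mt_per_s, cl):
    return cl / (mt_per_s / 2) * 1000

for mt, cl in [(3200, 14), (3400, 16), (3600, 16), (3600, 18)]:
    print(f"DDR4-{mt} CL{cl}: {cas_latency_ns(mt, cl):.2f} ns")
```

On Ryzen the Infinity Fabric clock is tied to the memory clock, so raw speed buys you more than just bandwidth, but as the numbers show a 3200 CL14 kit isn't actually giving up any latency to a 3400-3600 CL16-18 one.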
Over at AnandTech, Johan de Gelas and Ian Cutress have started digging into EPYC and Skylake-SP:
http://www.anandtech.com/show/11544/...-of-the-decade
Big read, but as they mention a few times they have more to do, so it's very much a work in progress.
I was just about to post that, particularly the FP section where it does surprisingly well against Intel. It also does so while drawing considerably less power than the Intel equivalent!
Computerbase.de tested the Vega FE and found the card seems to have quite low real-world memory bandwidth in their test suite.
I do wonder whether this might be part of the reason the gaming benchmarks on the Vega FE look underwhelming.
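For reference, the paper bandwidth works out like this (the 945 MHz HBM2 clock is the spec figure; the 800 MHz case is purely hypothetical, just to show what a downclock would cost):

```python
# Theoretical HBM2 bandwidth: bus width (bits) x 2 transfers per clock x memory clock,
# divided by 8 for bytes. Vega FE has two stacks, so a 2048-bit bus.
# The 800 MHz case is hypothetical, to illustrate the cost of a downclock.
def hbm2_bandwidth_gbs(bus_bits, clock_mhz):
    return bus_bits * 2 * clock_mhz * 1e6 / 8 / 1e9

print("At spec (945 MHz):", hbm2_bandwidth_gbs(2048, 945), "GB/s")   # ~484 GB/s
print("At 800 MHz:       ", hbm2_bandwidth_gbs(2048, 800), "GB/s")   # ~410 GB/s
```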