Read more.Quote:
The second-rung Maxwell GPU is not what it seems, apparently.
Glad I've stayed with my AMD 7950 till now, and hoping AMD come up with their next set of cards soon.
It's a touch shady of them not to give the correct details to the press, but where did the press get the original specs from? If it was all just assumed, then the press are just as much to blame.
Anyway, I bought a GTX 970 in the summer and the memory hasn't affected the card's performance as far as I can tell; all my games run smoothly at 2560x1440 (mostly maxed-out settings). I knew I wasn't going to go 4K for a few years, so the card is doing a grand job despite Nvidia's dishonesty.
The point everyone seems to be forgetting is that the card's price, performance and power consumption are still great; nothing there has changed.
I wish people would stop using Nai's benchmark for testing; most people aren't running it on a headless system as it's meant to be run, and even then it's nothing more than a CUDA-based benchmark, so it bypasses the built-in DirectX and/or OpenGL memory management.
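For what it's worth, all a probe like that really does is grab VRAM in fixed-size chunks through CUDA and time a read kernel on each one, so on a 970 the last chunks land in the slow segment and report much lower bandwidth. A rough sketch of the idea (not Nai's actual code; the 128 MiB chunk size and launch parameters are just my own guesses):

Code:
// Rough sketch of a CUDA VRAM bandwidth probe (not Nai's actual code).
// Allocates VRAM in 128 MiB chunks, then times a read-heavy kernel on each
// chunk and prints the effective bandwidth. On a GTX 970 the last chunks
// should fall into the slow 512MB segment and show much lower numbers.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void readChunk(const float4* src, float4* sink, size_t n)
{
    // Grid-stride loop over the chunk; the dummy accumulator stops the
    // compiler from optimising the loads away.
    float4 acc = make_float4(0.f, 0.f, 0.f, 0.f);
    for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += (size_t)gridDim.x * blockDim.x)
    {
        float4 v = src[i];
        acc.x += v.x; acc.y += v.y; acc.z += v.z; acc.w += v.w;
    }
    if (acc.x == -1.0f) *sink = acc;   // never true; keeps the loads live
}

int main()
{
    const size_t chunkBytes = 128ull << 20;          // 128 MiB per chunk
    const size_t n = chunkBytes / sizeof(float4);

    float4* sink = nullptr;
    cudaMalloc(&sink, sizeof(float4));

    // Grab as much VRAM as the driver will hand out, one chunk at a time.
    std::vector<float4*> chunks;
    for (;;) {
        float4* p = nullptr;
        if (cudaMalloc(&p, chunkBytes) != cudaSuccess) break;
        chunks.push_back(p);
    }

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    for (size_t c = 0; c < chunks.size(); ++c) {
        cudaEventRecord(start);
        readChunk<<<256, 256>>>(chunks[c], sink, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        double gbps = (double)chunkBytes / 1e9 / (ms / 1e3);
        printf("chunk %2zu (%zu MiB in): %7.1f GB/s\n",
               c, (c * chunkBytes) >> 20, gbps);
    }

    for (float4* p : chunks) cudaFree(p);
    cudaFree(sink);
    return 0;
}

That's also why the headless point matters: with a display attached, the driver has already reserved a slice of VRAM for the desktop, so the probe can't allocate the whole 4GB and the per-chunk numbers get skewed by whatever else is resident.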
While GTX 970 performance might still be very good for the card's going price, the fact is that games already use up to 4GB of VRAM at 1440p (apparently some games can exceed 3.5GB at 1080p as well). That situation isn't likely to change, considering how many console ports will be coming from platforms with twice as much VRAM available as the typical current mid/high-end GPU.
All in all, I'm glad I got a GTX 980. It cost more than I wished, but at least it's supposed to be a better-built card and it avoids this whole memory segmentation issue. That doesn't diminish the GTX 970's price/performance ratio at all; it just means people now have the information to understand the card's potential weak points and whether it will suit their usage.
Different day, same poop from Nvidia once again... not even slightly surprised.
The (incorrect) specs were from Nvidia themselves: http://anandtech.com/show/8935/gefor...ory-allocation
SoM is showing what can happen: you dump a load of frequently accessed textures in that 512MB space and, wham, you get stuttering.
Would this issue affect the mobile versions of Maxwell too? The 970M seems to come in 3GB and 6GB variants.
The question I can't get past is: why did Nvidia do this?
What's wrong with just using the same speed memory, and why did it take Nvidia so long to figure out what the problem was? Did Dave from warehousing just decide to dump a load of slow 512MB ICs on the GTX 970s?
The problem isn't memory clock speed; it's down to functional units on the processor itself being disabled, limiting throughput.
Often with processors, functional units are arranged into blocks which can only be disabled with limited granularity, so disabling shader cores might take other parts with it. I'm not certain that's fundamental to this issue, but it's not just a case of using different memory ICs.
Caches are controlled by the processor itself and are transparent to software. Memory, OTOH, is software-allocated, and if some software is allocating the bad memory areas it can cause problems, as is happening.
As others have said, I find it extremely hard to believe no-one noticed the error in the specs published to most (all?) news sites reviewing the card. None of the engineers, nor anyone else who knew the base specs, stumbled upon a review and thought to mention it to their boss?
To a lesser extent, it's also hard to swallow that no-one foresaw this at any stage of the processor design; the block fusing, as I said, tends to be fairly fixed (and is part of the design), hence it would have been known that fusing off cores to make the 970 would take part of the memory subsystem with it. Even if it's possible to mitigate it to some extent in software/firmware, why did it make it out without that in place?
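On the software mitigation point: about all a driver or game can do is keep the hot working set out of that last 512MB, because ordinary CUDA/D3D/GL allocations don't let the application choose which physical segment it lands in; that's the driver's call. Purely as an illustration of the budgeting involved (the 512 MiB margin is just the assumed slow-segment size, not anything official):

Code:
// Purely illustrative: query VRAM with the CUDA runtime and derive a
// self-imposed budget that stays out of the last 512 MiB (the assumed slow
// segment on a 4GB GTX 970). Real drivers/games handle this internally,
// not like this.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);

    const size_t slowSegment = 512ull << 20;   // assumed slow-segment size
    size_t budget = (totalBytes > slowSegment) ? totalBytes - slowSegment
                                               : totalBytes;

    printf("total %zu MiB, free %zu MiB, budget to stay fast: %zu MiB\n",
           totalBytes >> 20, freeBytes >> 20, budget >> 20);
    return 0;
}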