Top-end Quadro GP100 with 16GB HBM2 packs 3,584 CUDA cores for up to 20TFLOPS.
Wonder how much effort they have put into the "GeForce mode". I know these things are overkill for gaming, but who wouldn't want them if they significantly outperform the gaming-class cards?
In what way are these overkill for gaming? As far as I understand they just focus more on compute, especially FP64. I'd genuinely expect these to be worse in today's games, especially if they required (obviously meagre to nonexistent) optimization to perform at their best. That said, it would be nice to know what P100 can do...
I doubt they would be any better - it seems the GP100 and GP102 have the same number of shaders, but the former has a lot of transistors dedicated to FP64 performance, so the GP102 can probably boost higher out of the box anyway.
It also means we might be seeing a much larger 16nm part at some point too, which is much faster than the GP102 in gaming.
Okay, overkill wasn't really the word I was looking for there. These aren't designed or "marketed" for gaming, and to spend the premium on one only to use it for gaming would be daft as all that nice compute stuff wouldn't be used.
Anyway, I'd still be curious to see what they can do
I'm more interested in the bit about scaling the memory... "Customers can combine two GP100 GPUs with NVLink technology and scale to 32GB of HBM2". I wonder if they have finally made it so they can share the memory (I know they were working on it) or if it's just marketing. If they have done it, it may be better in the future to buy two lower-end cards in SLI for the extra memory for 4K etc.
Well, they are overkill in terms of core design, cost and features, just not gaming performance. Apologies if this sounded rude.
Having already used the Titan name, and not even delivered 1080Ti (yet?), what exactly are you expecting to see in that form? And what sort of customer would they be aiming at? Don't hold out on us like this!
If I'm well informed (I doubt it), a few productivity and compute software packages are capable of combining GPU memory into a single pool. I would expect this is just marketing, especially bearing in mind that Nvidia deliberately cut SLI out of the 1050 and 1060 models.
I think it's the most powerful GPU to date.
NVLink is a coherent memory fabric - in the much-vaunted deep learning boxes (DGX-1, was it?) the P100s sit on a daughter board and have coherent links to each other's memory spaces, so the whole lot can be addressed as one. However I strongly suspect, as CAT hints at, that applications would need to be specifically written to take advantage of that capability, and as such it won't apply to general workloads or gaming.
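For what it's worth, here's roughly what that explicit opt-in looks like through CUDA's peer-to-peer API - just a sketch assuming a two-GPU box, with the device IDs made up for illustration:

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    // Ask whether device 0 can directly address device 1's memory
    // (over NVLink where it exists, PCIe otherwise).
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (canAccess) {
        cudaSetDevice(0);
        // Map device 1's memory into device 0's address space, so
        // kernels running on device 0 can dereference device 1 pointers.
        cudaDeviceEnablePeerAccess(1, 0);
    } else {
        printf("No peer access between devices 0 and 1\n");
    }
    return 0;
}

The point being: the pooling only happens because the application asked for it, so a game that hasn't been written this way sees two separate 16GB cards, not one 32GB pool.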
If Nvidia can't manage a simple NUMA memory layout over their memory fabric, then I would have to wonder what the point in it is.
Still, by the time games need more than 16GB of RAM, this card will be obsolete.
People using these for things like Neural Nets will probably want to control the boundaries themselves, but tools to make moving values from one card to another easy will still be useful.
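The basic tool for that already exists in CUDA as an explicit device-to-device copy - a minimal sketch, with the buffer size and device IDs picked arbitrarily:

#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1 << 20;  // 1 MiB of, say, network weights - arbitrary
    void *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);       // buffer on GPU 0
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);       // buffer on GPU 1

    // Explicit copy from GPU 0 to GPU 1: the programmer decides where
    // the boundary sits, rather than relying on a shared address space.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}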
Actually, the upcoming Radeon Instinct MI25 has 5TFLOPS more compute capability at 16-bit precision: http://hexus.net/tech/news/graphics/...hine-learning/
At 25TFLOPS @ 16-bit (scalable with mixed precision), the MI25 is 25% more powerful (on paper). A lot of my 3D, 2D and video processing friends vastly prefer AMD compute solutions, except in software where Nvidia have strong-armed themselves into it and CUDA is the only truly supported option.
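On where those headline 16-bit rates come from: it's packed half-precision math, where one instruction does two FP16 operations at once. A sketch in CUDA purely for illustration (AMD's equivalent route would be HIP/OpenCL; the kernel name and layout here are mine):

#include <cuda_fp16.h>

// Packed FP16: each __half2 holds two 16-bit values, and __hfma2 does a
// fused multiply-add on both at once - which is how the "double rate"
// 16-bit TFLOPS figures are reached on hardware that supports it (sm_53+).
__global__ void saxpy_half2(int n, __half2 a, const __half2 *x, __half2 *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = __hfma2(a, x[i], y[i]);  // two multiply-adds per thread per instruction
    }
}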