Offers up to 5.07 TFLOPS of peak single-precision compute performance.
The 290X is tuned to run as fast as it can; I guess this one will be tuned to use 235 watts. Subtle but important difference.
Don't believe everything you read...
Pic from: http://www.techpowerup.com/reviews/M...Gaming/22.html
Clocked lower so it can run undervolted. If this is a server part, you want to cram loads of them into a single chassis so performance per watt is more important than absolute grunt.
If you average 230W, just how much peak power are you consuming? The article implies 225W is the peak consumption, which is easier to power budget for and to wire up, as you can get 75W from the motherboard and another 150W from a single PCIe cable.
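Quick back-of-the-envelope in Python to show where that 225W number falls out. The 75W slot / 75W 6-pin / 150W 8-pin limits are from the PCIe spec; the card's actual connector fit-out is my assumption:

# Back-of-the-envelope PCIe power budget (connector limits from the PCIe spec;
# which connectors this FirePro actually has is my assumption).
SLOT_W = 75          # PCIe x16 slot can supply up to 75 W
SIX_PIN_W = 75       # 6-pin auxiliary connector: 75 W
EIGHT_PIN_W = 150    # 8-pin auxiliary connector: 150 W

def budget(slot=True, six_pins=0, eight_pins=0):
    """Total board power available from the slot plus auxiliary connectors."""
    return (SLOT_W if slot else 0) + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(budget(eight_pins=1))              # 225 -> slot + one 8-pin covers a 225 W peak exactly
print(budget(six_pins=1, eight_pins=1))  # 300 -> the usual 290X-style fit-out for higher peaks

So a card specified at 225W peak is trivial to cable up, whereas anything above that needs an extra connector and a bigger power budget per slot.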
I may not have been clear, sorry; I wasn't comparing it to the Firepro (although the average power of the 290X is close to the spec), just that the 290X simply isn't this power-sucking monster it's often made out to be.
it's a banana
edit:
Bit-tech agree it's on par with a Titan for power draw
http://www.bit-tech.net/hardware/gra...-x-oc-review/8
Both the Titan and the 290X are going to be tuned for pretty much outright performance. The 290X is at least 10% faster than the FirePro card where it is allowed to be.
Plus they will be able to die harvest by now - save the lowest-leaking chips for the FirePro and sell the higher-leaking ones as gamer cards.
TPU have reviewed quite a few 290X samples from different manufacturers, with different boards, etc., and they're all in the same ballpark in terms of power consumption.
TPU measure the power draw of the card itself, which avoids the errors caused by swapping components between tests; those swaps can have a huge impact on the results, and some review sites do them without making it obvious.
But yeah, as above, it's common for server parts to have lower clocks than their desktop counterparts; this applies to CPUs too. Depending on the workload, it may be favourable to gain some cores with the power saved by turning down clocks, for example, since performance gained by increasing clock speed isn't linear but adding cores may actually be close to linear (again, depending on workload).
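To put rough numbers on the clocks-vs-cores point, here's a toy Python model with made-up figures. It assumes dynamic power scales roughly with clock cubed (since voltage has to rise with frequency) and that the workload scales near-linearly with core count; real silicon is messier, but the shape of the trade-off holds:

# Toy model, illustrative only: trade clock speed for cores at a fixed power budget.
# Core count, clock and budget are made-up numbers, not any real part's spec.
BASE_CORES = 44      # hypothetical core count of the full-speed part
BASE_CLOCK = 1.0     # GHz
BUDGET_W   = 250.0   # fixed power budget in watts

def throughput(cores, clock_ghz):
    """Relative throughput for a near-perfectly parallel workload."""
    return cores * clock_ghz

def cores_within_budget(clock_ghz):
    """How many cores fit inside BUDGET_W if power scales with clock cubed."""
    return int(BASE_CORES * (BASE_CLOCK / clock_ghz) ** 3)

print(throughput(BASE_CORES, BASE_CLOCK))         # 44.0 units of work at full clock
print(cores_within_budget(0.8))                   # 85 cores fit at 0.8 GHz for the same power
print(throughput(cores_within_budget(0.8), 0.8))  # 68.0 units, ~55% more work per watt

That's why the server part happily gives up clock speed: within the same power envelope you get more total work done, as long as the workload parallelises well.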
I'd put my money on the TechPowerUp review. Bit-tech are measuring the whole-system power draw, which is about as much use as a chocolate teapot if we are trying to work out how much the actual GPU uses. There's no way for us to tell how much of that ~400W power draw is the CPU or the GPU without Bit-tech telling us, which they don't.
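To illustrate with made-up numbers: even if you guess at what the rest of the rig draws under load, the uncertainty swamps the thing you're trying to measure.

# Made-up numbers, just to show why whole-system readings don't isolate the GPU.
# Assume the rest of the rig (CPU, board, drives, PSU losses) draws somewhere
# between 120 W and 220 W while the game runs; we don't know exactly where.
system_under_load_w = 400                    # the kind of wall reading Bit-tech report
rest_low_w, rest_high_w = 120, 220

gpu_high_w = system_under_load_w - rest_low_w   # 280 W if the rest of the rig is frugal
gpu_low_w  = system_under_load_w - rest_high_w  # 180 W if it isn't

print(f"GPU draw could be anywhere from {gpu_low_w} W to {gpu_high_w} W")
# A 100 W wide window; card-level measurement (TPU's approach) closes that gap.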