NVIDIA's 40nm GPU isn't expected until at least later this year, but we're already hearing rumoured specifications.
That looks like nothing more than expected... anyone want to put it up against ATI's future card specs?
Originally Posted by Ephesians
Surely if it's running GDDR5 the effective memory clock should be in the region of 4400MHz, not 2200?
One word... expensive! If that came out it would cost a crap load - look at that memory bus! 512-bit and with GDDR5, that must cost an arm to make and sell for an arm and a leg lol.
Won't really worry about specs until it's even announced lol.
Because the HD4870 has 3600MHz GDDR5 memory and the HD4890 has 3900MHz GDDR5 memory: http://www.hexus.net/content/item.php?item=18359&page=3
Not sure what card you have, but if you are looking at frequencies in an overclocking tool such as RivaTuner or AMD Overdrive you are seeing the base clock, which is then multiplied by 2 to get the effective clock. So a maximum of 2200MHz shown in an overclocking tool is in fact a 4400MHz effective frequency. GPU-Z also shows you the base clock of the card, not the effective clock with the multiplier applied.
I already accounted for 'effective' frequency. Understandably, people are getting confused over real frequency and 'effective' frequency, and often double up the 'effective' frequency because they didn't realise that they already had 'effective'.

Code:
aidan@aidan-i7 ~ $ aticonfig --odgc

Default Adapter - ATI Radeon HD 4800 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    500           900
             Current Peak :    750           900
  Configurable Peak Range : [500-790]     [900-1100]
                 GPU load :    0%
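For anyone wanting to sanity-check those numbers, here's a minimal sketch of the base-vs-effective arithmetic in Python. The multipliers are my assumption (x2 is the usual DDR/GDDR3 convention, x4 is how GDDR5 parts like the HD 4870's 900MHz / "3600MHz effective" memory get quoted), so treat it as an illustration rather than anything a vendor tool actually reports.

Code:
# Convert a "base" clock reading from an overclocking tool into the
# quoted "effective" figure. The multiplier is an assumption: x2 for
# the usual DDR/GDDR3 convention, x4 for how GDDR5 marketing numbers
# (e.g. HD 4870: 900 MHz base, "3600 MHz" effective) are derived.
def effective_mhz(base_mhz, transfers_per_clock):
    return base_mhz * transfers_per_clock

print(effective_mhz(900, 4))   # HD 4870-style GDDR5 quoting -> 3600
print(effective_mhz(1100, 2))  # x2 convention on an 1100 MHz reading -> 2200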
I'm happy as long as it brings down the price of the current GPU range. These things are beasts for computational simulations.
I thought GDDR5 was *actually* QDR and just misnamed? Or does the 4890 have an actual memory clock of 1950MHz?!?
EDIT: I did my homework (i.e. looked at Wikipedia) and it says "GDDR5 is the successor to GDDR4 and unlike its predecessors has two parallel DQ links which provide doubled I/O throughput when compared to GDDR4." So does that mean it effectively doubles the frequency of transfers, or doubles the bitpath throughput of transfers? I'm very confused now...
If it's mostly just an extension of the current design, the GT300 will of course be powerful, but because the shaders aren't grouped into clusters/packs like the ATI design, you pay a LOT more transistors and die size for every extra SP - the logic/transistor overhead per shader is FAR higher on the Nvidia design than the ATI design at the moment. A bump from 800 to 1200 SPs on the 5870 (if that's to be believed) won't increase the core size anywhere close to 50% despite a 50% bump in shaders. The Nvidia core, assuming over a 100% shader increase (240 to 512), wouldn't be 100% bigger, but the core size increase would be much more linear. Meaning either the GT300 has switched to a shader setup similar to AMD's - a group of shaders that can do "up to X instructions per clock" rather than 1 shader = 1 instruction per clock - or the GT300 will be HUGE. Assuming an all-but-identical design to their current models, with simply more shaders and the core logic to go with them, the gap in yields/die size between AMD and Nvidia will keep growing, meaning this new part would be even more expensive than now, even further from AMD in bang for buck, and Nvidia would be losing money.
In all likelihood Nvidia HAVE to move to a more efficient, AMD-style smaller-core design, because frankly it will cost them far too much not to be competitive this round. If they do, their peak gigaflops numbers get a lot less impressive - like AMD's are massive now, but in reality you can't leverage all that power all the time in games - while the brute-force Nvidia design is far simpler to extract the power from.
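A toy back-of-envelope model of that scaling argument, sketched in Python. Every number here is invented purely for illustration (none are real die measurements); it just shows why a design with more cost per shader grows its die much faster for the same kind of shader bump.

Code:
# Toy model: die area = fixed logic + per-shader cost * shader count.
# All figures below are invented for illustration only.
def die_area(fixed_mm2, mm2_per_shader, shaders):
    return fixed_mm2 + mm2_per_shader * shaders

# Hypothetical "AMD-style" part: many small shaders, low cost per shader.
print(die_area(150, 0.15, 800), "->", die_area(150, 0.15, 1200))  # 270 -> 330, ~22% bigger for +50% shaders

# Hypothetical "Nvidia-style" part: fewer, fatter shaders, high cost per shader.
print(die_area(100, 2.0, 240), "->", die_area(100, 2.0, 512))     # 580 -> 1124, ~94% bigger for +113% shaders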
It's a common source of confusion. It just means they've essentially slapped extra channels onto the memory controller ICs, thereby effectively doubling the bit rate. But the clock frequency is still 900-1150/1800-2300 actual/effective.
Personally I think all this DDR/QDR business is pretty misleading, since the clock frequency never actually goes above SDR frequencies. They should stick to bit rate metrics if they want to communicate how much data the bus can push out, IMHO.
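Sticking with bit-rate metrics for a moment, the bandwidth arithmetic is straightforward. A minimal sketch in Python: the 256-bit / 3600 MT/s line uses the HD 4870's published figures, while the 512-bit / 4400 MT/s line is just the rumoured GT300 pairing from earlier in the thread, so treat it as speculative.

Code:
# Bandwidth depends only on bus width x transfers per second per pin,
# however the "effective clock" happens to be labelled.
def bandwidth_gb_s(bus_width_bits, mega_transfers_per_s):
    return bus_width_bits / 8 * mega_transfers_per_s * 1e6 / 1e9

print(bandwidth_gb_s(256, 3600))  # HD 4870: 256-bit @ 3600 MT/s -> 115.2 GB/s
print(bandwidth_gb_s(512, 4400))  # rumoured GT300: 512-bit @ 4400 MT/s -> 281.6 GB/s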
So GDDR5 with a 128-bit path can actually shift 256 bits per transfer, and does this twice per clock cycle, right? As opposed to shifting 128 bits four times per clock cycle, which would be QDR?
Now I know why I didn't become an electronics engineer...
As you might have guessed, I'm determined to understand this before the end of the working day
Last edited by scaryjim; 19-05-2009 at 04:37 PM.
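For what it's worth, the arithmetic behind that question comes out the same either way you slice it - a quick sketch (and not a claim about how GDDR5 actually sequences its transfers):

Code:
# Either way of counting moves the same number of bits per memory clock.
def bits_per_clock(bits_per_transfer, transfers_per_clock):
    return bits_per_transfer * transfers_per_clock

print(bits_per_clock(256, 2))  # "256 bits, twice per clock"      -> 512
print(bits_per_clock(128, 4))  # "128 bits, four times per clock" -> 512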
I'll believe it when I see it. Also, given that NV is rebranding like mad as they can't get the shrink right... I doubt you will see this card before Xmas. More likely Q1 2010.
My next cards are ATI for sure. Got a 4770 that is seriously asskicking in the other half's machine. I want a new card.