Fair point - and me too!
My next card could be a 480 - unless ATI come out with a single GPU card to challenge it.
Don't really care who makes it - certainly don't give a stuff about physx rubbish etc.
I may get a 5870 card. Should suit me nicely, replacing my GTX 260.
If he'd used 'I don't care' at the start, that would have been OK. But since he said 'Who cares' with a negative connotation rather than as a question, yeah, I took offence to that, because I care. He didn't have to present his attitude so negatively.
If I had said 'Who cares about Hexus, I just want to see some results', that would clearly show that I don't appreciate any of the time the people here have put into reviews or care about them. That would get a few people's backs up. But I don't say that, because I understand the effort that goes into it.
Maybe I'm on a short fuse today. Maybe some people are short on brain, too.
So, just to clarify - a quad core which has a single memory controller and a single memory interface is the same as two GPUs with their own memory controllers and interfaces? That's exactly what you're saying? Surely there's an obvious architectural difference? Or am I missing something? And that's forgiving you for talking about GHz when the issue is actually bandwidth. I'm pretty sure if they had two GPUs sharing the same memory controller and memory pool they'd choke a lot more (much like a quad core might, given the same access requirements) - perhaps then it'd make some sense!
Nope.
I'm saying that it's wrong to label a 3GHz quad core as actually being 12GHz. Similarly, it's wrong to label a 5970 as having a 512-bit memory bus or 256GB/s of bandwidth. In the former you don't quadruple the GHz just because it's a quad core, and in the latter you don't double the memory bus or bandwidth (or the GPU speed in GHz, if you want) just because it's two chips.
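To put rough numbers on that, here's a back-of-envelope sketch, assuming the commonly quoted 5970 figures (a 256-bit bus per GPU and an effective 4 Gbps GDDR5 data rate, which aren't stated in this thread):

```python
# Rough bandwidth arithmetic for a dual-GPU card (assumed 5970-style figures).
# Each GPU has its own 256-bit bus running GDDR5 at an effective 4 Gbps per pin.

bus_width_bits = 256          # per GPU (assumed)
effective_rate_gbps = 4.0     # GDDR5 effective data rate per pin (assumed)

per_gpu_bandwidth = bus_width_bits / 8 * effective_rate_gbps   # GB/s available to one GPU
marketed_total = 2 * per_gpu_bandwidth                         # the doubled "256 GB/s" style figure

print(f"Per-GPU bandwidth: {per_gpu_bandwidth:.0f} GB/s")      # ~128 GB/s
print(f"Doubled marketing figure: {marketed_total:.0f} GB/s")  # ~256 GB/s
```

The doubled number is just the two pools added together on paper; neither chip ever sees more than its own ~128 GB/s, which is the point being made.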
On that I agree - what I don't agree with is comparing quad core GHz to bandwidth on dual-GPU cards, because it's a very different kettle of fish (for the reasons pointed out). I think GHz actually muddies the waters somewhat when the issue is bandwidth (and hence talking about memory controllers et al) - does a dual-GPU card have the same bandwidth as a single card? Of course not, but neither does it have (in this case) the same effective bandwidth as a single-GPU card, or a dual-GPU card sharing the same controller and memory pool. If it did, it'd suck. Of course the scaling isn't linear, which is why I haven't ever (ever) gone and done SLI or Crossfire. And hence why a single badass GPU floats my boat more than dual. And so, is this 480 as good as a dual card or, er.. what? Since I have a 280, which effectively is the 'fat bastard' card of the last gen, a 480 is pretty much the same monster till the 485, no doubt..
It's a simile - the technical reasons are not the same, nor did I ever claim them to be.
I thought it was relatively well understood that you don't necessarily multiply an attribute just because you have more chips - I gave the example of another attribute that isn't multiplied when you have more chips to help the reader understand.
What absurd power requirements. Glad to see them using GDDR5 finally, something that ATi developed.
I must say I took the 3GHz quad comparison just as it was meant: having more than one of something doesn't multiply the final figure. Nothing more.
As for just wanting gaming benchmarks, same here. The primary purpose of a GPU is to render graphics (mostly used in games). There are many of us who don't care what CUDA is up to.
I'm not making my judgment until Nvidia have officially released the specs.
Now, I want my PC to be quiet, so if the Nvidia cards run hot it may put me off them, but I also wouldn't mind 3D, which ATi don't do yet (although they have included something in their latest driver release).
The cards also need to be able to bitstream HD audio tracks like ATi's do, not like Nvidia have done in the past, using the S/PDIF header.