Now wouldn't that be IF you buy pre-overclocked cards?
Agreed. Given that the current XFX ones are stock, there's no reason to suggest they're unreliable. Sounds like a disgruntled ex-XFX owner trying to get a word in. On the same basis, I try and veer people away from Asus stuff, since the only Asus card I had was dodgy. That in itself wouldn't be sufficient, but coupled with the fact that every other Asus product (of many) I've had has also been faulty, I have what I feel is enough evidence to suggest people consider alternatives. If Gonzo has only had one dodgy XFX card, that really isn't enough to condemn them, especially given how successful and seemingly reliable their ATI offerings are (remembering that ATI cards tend not to crash or break when they get hot).
Pob's new mod, Soviet Pob Propaganda style Laptop.
"Are you suggesting that I can't punch an entire dimension into submission?" - Flying squirrel - The Red Panda Adventures
I have to say my limit is £200 for a card; at that price it should handle anything. My 4870 did, and now my 5850 does.
Depends if you want to play games or blow up your e-peen.
"handle anything" is subjective. If that means playing more popular titles at high settings with very minor lag, or playing Crysis on High but not very high at resolutions like 1680x1050 without filters and having a reasonably smooth experience but not perfect, then yes, cards like the HD5850 are fine.
if "handle anything" is interpreted literally as I do, it needs to play any game out there at maximum detail (stupid levels of AA excluding, I see little merit in exceeding 8xAA, even the gap between 4x and 8x is very minor most of the time), and I mean every game, and I mean 2560x1600, as that's the resolution I use. That, is exceptionally difficult. Four HD5870s would just about achieve it most of the time, but there'd still be a good few games where it wouldn't be quite good enough, and AA would have to be cut or removed altogether to keep the lag down. People who say 5970s are overkill are either using very small monitors (in which case yes, you wouldn't buy a £500 graphics card!) or don't fully appreciate how demanding so many games out there are. Take the recently released Just Cause 2 for example. It's quite a heavily nvidia-biased title (but seemingly only when AA is applied?) even at only 1920x1080 resolution, max the detail and add 8xAA and the HD5870 manages a minimum frame rate of just 27. What'd that be at 2560x1600? 16? Even if you had four HD5870s, to get that perfectly smooth you'd need 3.75x scaling out of the quad array to get the game perfectly fluid. That's no mean feat in itself, and JC2 isn't anywhere near the worst example.
I consider around that figure to be smooth for strategy games, and maybe arcade-style games that aren't too involving. While there are plenty of demanding RTS titles, the latter tend not to be too taxing on video hardware. With FPS games, though, it's annoying when the fps dips below 60 at all in almost any of them, due to how the frame timing works out.
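The frame-timing point is worth spelling out: on a 60 Hz display with vsync, a frame either makes the ~16.7 ms refresh window or waits for the next one, so even a small dip below 60 fps shows up as a much bigger jump in frame time. A minimal sketch of that quantisation:

[CODE]
import math

# Why dipping below 60 fps is so noticeable: with vsync on a 60 Hz
# display, frame times are quantised up to the next refresh interval.

INTERVAL_MS = 1000.0 / 60  # ~16.7 ms budget per frame

def displayed_time(render_ms):
    """Render time rounded up to the next vsync interval."""
    return math.ceil(render_ms / INTERVAL_MS) * INTERVAL_MS

for render_ms in (15.0, 17.0, 25.0):
    shown = displayed_time(render_ms)
    print(f"{render_ms:.0f} ms render -> {shown:.1f} ms on screen "
          f"({1000.0 / shown:.0f} fps effective)")
# A frame barely over budget (17 ms) is held for a whole extra refresh,
# so it displays at an effective 30 fps, not 59.
[/CODE]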
Did anyone else notice how the picture linking to this article on the main page looks a little bit like an egg being fried?
Jesus, I hope no one uses a dual-screen setup with one of these cards - it seems they idle at 85 degrees in two-screen setups.
NV posted: "We are currently keeping memory clock high to avoid some screen flicker when changing power states, so for now we are running higher idle power in dual-screen setups. Not sure when/if this will be changed. Also note we're trading off temps for acoustic quality at idle. We could ratchet down the temp, but need to turn up the fan to do so. Our fan control is set to not start increasing fan until we're up near the 80's, so the higher temp is actually by design to keep the acoustics lower." - NVIDIA PR
Jesus this just gets worse.
You could always use RivaTuner or something to set the fan speed yourself.
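Under the hood, a custom fan curve is just a mapping from temperature to fan duty cycle. Here's a minimal sketch of that logic; the curve points are invented for illustration, and apply_fan_speed() is a stub, since the actual call depends on whichever tool (RivaTuner, a vendor utility, etc.) you use.

[CODE]
# Minimal sketch of a custom fan curve: linear interpolation between
# user-chosen (temperature, fan %) points. The points below are invented
# for illustration, and apply_fan_speed() is a stub -- the real call is
# whatever your tool (RivaTuner etc.) exposes.

CURVE = [(40, 30), (60, 45), (75, 70), (85, 100)]  # (deg C, fan %)

def fan_percent(temp_c):
    """Interpolate the fan duty cycle for a given GPU temperature."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

def apply_fan_speed(percent):
    # Stub: replace with the tool-specific call that sets the duty cycle.
    print(f"setting fan to {percent:.0f}%")

apply_fan_speed(fan_percent(70))  # ~62% with this curve
[/CODE]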
Kalniel: "Nice review Tarinder - would it be possible to get a picture of the case when the components are installed (with the side off obviously)?"
CAT-THE-FIFTH: "The Antec 300 is a case which has an understated and clean appearance which many people like. Not everyone is into e-peen looking computers which look like a cross between the imagination of a hyperactive 10 year old and a Frog."
TKPeters: "Off to AVForum better Deal - £20+Vat for Free Shipping @ Scan"
for all intents it seems to be the same card minus some gays name on it and a shielded cover ? with OEM added to it - GoNz0.
tbh, that was marginally better than I was expecting. I wasn't expecting much, though.
Looks like it's actually a decent architecture; there's some genuine innovation there. It's pretty consistently ahead of an HD5870, so fastest single GPU goes to NVidia. They've also kept the power draw down to 250W and managed to clock it at a reasonable speed - there were rumours of it being much closer to 300W and shipping with 1200MHz shader clocks. Plus, Hexus found some overclocking headroom, which gave it a nice performance boost. In all of those respects, it's a better GPU than I thought it would be.
Of course, it's very hot, and hence noisy. Given the heat output, there's no way you could run one overclocked for any length of time without better cooling than stock - almost certainly water, or something even more esoteric. It's too expensive and there's no stock, but that's more a consequence of the yield problems in TSMC's 40nm process, and you can say the same about ATI's current cards, even now, 6 months after launch.
But for me, the most damning thing about this launch is that we know it's a cut-down version of the card it was meant to be. It's right there in the architecture diagrams, clear for all to see: nvidia can't manufacture the card they designed.
I think the best we can hope for - now the card is out, unveiled, tested, reviewed, discussed, lambasted etc. - is that nvidia go back to the drawing board, work out how to make the design work, and come back in 9 months' time with a working Fermi, manufacturing problems resolved, that actually performs how it's meant to and gives us a competitive graphics card market. Because if they don't, my next upgrade is going to be even further away than I thought...
Depending on where the newer games go in a design sense, Fermi may end up blasting the ATI cards thanks to its superior tessellation. But anyway, like Scary said, it's a shame they couldn't produce the original card - I'm hoping once the yields improve they will actually release the real GF100.
Kalniel: "Nice review Tarinder - would it be possible to get a picture of the case when the components are installed (with the side off obviously)?"
CAT-THE-FIFTH: "The Antec 300 is a case which has an understated and clean appearance which many people like. Not everyone is into e-peen looking computers which look like a cross between the imagination of a hyperactive 10 year old and a Frog."
TKPeters: "Off to AVForum better Deal - £20+Vat for Free Shipping @ Scan"
for all intents it seems to be the same card minus some gays name on it and a shielded cover ? with OEM added to it - GoNz0.
Guru3D have done a very nice review - you will need at least half an hour to read it all!
EVGA Hydro Copper for me, when I can afford one.
Actually, tessellation performance was one of my biggest disappointments with Fermi in this review. We'd all heard the spin about how brilliant Fermi was at tessellation and how it blew ATI out of the water, and it turns out that's only true at extreme tessellation settings - you get about 95% of the benefit from the lowest tessellation settings. In terms of relative difference, moving from no tessellation to moderate tessellation is a huge improvement in image quality; thereafter the difference is minimal, and ATI easily keeps pace with nvidia until you move to the highest settings, which actually make very little difference to the overall image quality. So one of Fermi's biggest selling points - its fantastic tessellation performance - turns out to be utterly superfluous!
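The cost side of that trade-off is easy to illustrate: with uniform tessellation, the triangle count per patch grows with roughly the square of the tessellation factor, so the top settings multiply the geometry workload enormously while, as noted above, adding very little visible detail. A rough sketch:

[CODE]
# Rough illustration of tessellation's diminishing returns: a uniform
# tessellation factor N splits a triangle patch into roughly N*N
# triangles, so geometry load grows quadratically while the visible
# improvement flattens out long before the top settings.

for factor in (1, 2, 4, 8, 16, 32, 64):
    print(f"factor {factor:2d}: ~{factor ** 2:4d} triangles per patch")
# Going from factor 8 to 64 multiplies the workload 64x -- which is
# where Fermi pulls ahead, and also where the image barely changes.
[/CODE]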
It is funny that Guru3D seem so impressed by the GTX470 and GTX480, whereas Anandtech, HardOCP and Bit-tech don't seem that impressed.
From what I have heard, the next generation of ATI graphics cards will launch at the end of the year, and AFAIK it will be a totally new architecture.
The HD5850, when overclocked to HD5870 clock speeds, seems to offer very similar performance.
Even if the GTX470 does show an advantage by the end of next year, you would probably be able to sell an HD5850 1GB for around £40 to £60. Considering that the HD5850 1GB is £80 to £100 cheaper than a GTX470, you would be able to get a better second- or third-generation DX11 card for around £100 to £140 by that time, IMHO.
If the GTX470 were the same price as the cheaper HD5850 1GB cards, it would make more sense. However, the lower power draw of the HD5850 1GB makes it less PSU-dependent, and of course the HD5850 1GB cards overclock better too. OTOH, the GTX470 potentially has better long-term performance (not guaranteed, though) and additional features which may be of use (or not), like CUDA.
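For what it's worth, the upgrade-path sums work out like this (a quick sketch using the post's own estimates; real prices will obviously vary):

[CODE]
# Quick version of the upgrade-path arithmetic above, using the post's
# own estimates (actual prices will vary): money toward a later DX11
# card = up-front saving over a GTX470 + HD5850 resale value.

saving_vs_gtx470 = (80, 100)  # GBP: HD5850 1GB price advantage today
resale_hd5850 = (40, 60)      # GBP: estimated resale by end of next year

low = saving_vs_gtx470[0] + resale_hd5850[0]
high = saving_vs_gtx470[1] + resale_hd5850[1]
print(f"Budget for a second/third-gen DX11 card: GBP {low} to GBP {high}")
# ~GBP 120-160 on these numbers, the same ballpark as quoted above.
[/CODE]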