I'd buy a CPU with an integrated GPU if it used intelligent power features: use the on-die GPU for 2D and only power up the add-in card when heavy GPU processing is needed.
Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network: Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
Laptop: Dell Precision 5510 || Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive
I might buy one depending on implementation. The last rumour I heard from AMD was that their modular architecture will essentially use the GPU element of the CPU for most floating-point calculation, and that they were actually cutting back the number of dedicated FPUs on the CPU. A highly modular CPU with multifunctional elements that can handle a variety of workloads as appropriate sounds like a good idea to me, but I'll be waiting some time before upgrading. I've yet to find a day-to-day task my Q6600 can't cope with at stock speeds, and I've got a summer project planned to get more out of that rig without changing any components, so I reckon it'll be a good couple of years before I think about a system upgrade anyway...
Not *quite* my understanding.
If FP-heavy workloads are being offloaded onto the GPU, then the CPU doesn't need to be so FPU-intensive in future. The GPU and CPU sound nicely integrated in terms of silicon layout and memory controller access, but it still sounds like they are very much a CPU and a GPU in terms of how they are programmed.
However:
1/ If that is the case, offloading onto my plugged-in card with its own dedicated high-bandwidth RAM will be better.
2/ If intelligent power saving is already happening in the CPU, then it is bound to happen in the GPU as well. Shut down all but, say, 80 pipes on your 1600-pipe card - no need for anything on the CPU. Something like the sketch below.
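Purely hypothetical C sketch of that heuristic - none of these names or functions exist in any real driver, it's just the shape of the idea:

[code]
#include <stdbool.h>
#include <stdio.h>

#define TOTAL_PIPES 1600
#define IDLE_PIPES    80   /* enough for 2D/desktop composition */

/* hypothetical hardware hook -- stubbed out with a printf here */
static void gpu_set_active_pipes(int count)
{
    printf("power-gating: %d of %d pipes active\n", count, TOTAL_PIPES);
}

/* gate everything down for light work, ungate the lot for heavy loads */
static void on_workload_change(bool heavy_load)
{
    gpu_set_active_pipes(heavy_load ? TOTAL_PIPES : IDLE_PIPES);
}

int main(void)
{
    on_workload_change(false);  /* desktop: 80 pipes  */
    on_workload_change(true);   /* game on: all 1600  */
    return 0;
}
[/code]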
Hmm, I can't find the article where I read that now. From the articles I have been able to find, Bulldozer's design appears to be based on a hybrid module that uses two integer cores but a shared FPU - so not quite a dual-core module, but more than a single core. They also talk about the modular design, and the possibility of incorporating a GPU module into the processor in much the same way as adding extra cores, so I suppose there's no reason massively parallel processing couldn't be done on it as part of an enhanced CPU.
The frustrating thing is I swear I've seen a quote from someone at AMD talking about offloading heavy FP loads to an integrated GPU-type unit, but I really can't find it now. Perhaps I dreamt it... *shrug*
Isn't that basically the same argument as DX11 at the moment though? IIRC, both Dirt 2 and AVP take a massive framerate hit when you turn on tessellation and advanced shadows.
Incidentally, is Eyefinity open technology or proprietary technology?
At the risk of sticking my neck out, I think Nvidia gets a bit of a raw deal on these forums at times. Don't get me wrong, I'd like on-GPU physics to be vendor neutral as well, so that it wasn't a factor when choosing a card, but then I'd like my GTX 275 to support 3 monitors as well...
And before anyone accuses me of being an Nvidia Fanboi, I don't really care who developed the card I use, I care about what it can do. I happen to think those games that use hardware PhysX look better for it, and since I play a couple of those games, that's what I went for. For my last card I was looking for a decent bus-powered card that could game at low resolution, so I got a 4670. For my next card, I'll look at the features on offer in my price range (and the performance, obviously) and choose based on that.
***ducks before the flames start***
Eyefinity is an AMD technology... you will need an AMD/ATI 5xxx card to use it.
But AMD will not disable the feature if they spot an Nvidia card in your system.
Probably an Anandtech article - they had a couple of good ones on there.
Spot on with what I have read on Bulldozer; however, I gather that the shared FPU is a bit of a monster in its own right compared to Phenom's.
I got the impression they could flexibly bolt GPU capability onto the design, but that it was still a GPU, and the FP offload spiel was just CUDA-style marketing speak.
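Roughly how I picture the layout, as a conceptual C sketch - every type and field here is invented by me, not taken from any AMD document:

[code]
#include <stdio.h>

/* One Bulldozer-style module as described above: two integer cores
 * sharing a single fat FPU. The modular layout leaves room to bolt
 * on other units (e.g. a GPU block) next to the CPU modules. */
struct int_core   { unsigned alu_pipes;     };  /* per-core integer kit */
struct shared_fpu { unsigned fp_width_bits; };  /* one wide FP unit     */

struct bd_module {
    struct int_core   core[2];  /* two independent integer cores */
    struct shared_fpu fpu;      /* shared between the pair       */
};

struct bd_chip {
    struct bd_module module[4]; /* e.g. 4 modules marketed as "8 cores" */
    /* a GPU block could slot in here as just another module type */
};

int main(void)
{
    struct bd_chip chip = {0};
    size_t modules = sizeof chip.module / sizeof chip.module[0];
    printf("modules: %zu, integer cores: %zu\n", modules, modules * 2);
    return 0;
}
[/code]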
The SLI/Crossfire split is rather irritating too (still unconvinced by that; remember 3dfx?), as are the vendor-specific things like CUDA/PhysX/Eyefinity. These really are things that should be universal amongst cards, I think.
A slide on brightsideofnews shows, for 2011, the "Llano" APU with 4 cores and an integrated GPU as "mainstream", and the "Zambezi" Bulldozer-based 4- or 8-core CPU coupled with discrete graphics for the "enthusiast" segment.
http://www.brightsideofnews.com/news...-2011-set.aspx
Anandtech says Llano has Phenom-derived cores, possibly tweaked, but with no shared L3 cache (so Athlon II style).
http://www.anandtech.com/cpuchipsets...oc.aspx?i=3736
Sounds like Zambezi is the one for me - should give me enough CPU grunt to handle PhysX on the CPU too
Just 16 cores then, sir?
I do wonder how Bulldozer will do for single-core performance, since (I assume) the CPU will only use one of the int blocks to run single-threaded software - at least they've finally moved to 4-issue for those blocks, though. I suppose, if it's got a big fat FPU designed to handle two cores' worth of work, proper optimisation could see single-core performance fly...?
Yep - if you have a fat FP unit then an instruction can go through in fewer cycles. This is kind of what happened when Intel took the wrong direction with the P4 - they were fast, but slower-clocked AMD (and Intel mobile) parts beat them by doing more per clock (or requiring fewer cycles per instruction).
Whether your usual workload is going to suit that or not is another matter - Intel have really stepped up since the P4 and their efficiency is awesome, and the vast majority of home use is not heavy FP stuff.
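Back-of-the-envelope in C, with made-up figures purely for illustration (neither the clocks nor the IPC numbers are real benchmarks):

[code]
#include <stdio.h>

/* Toy numbers only: effective throughput = clock * instructions-per-clock.
 * A slower-clocked, wider chip can beat a faster-clocked, narrower one. */
int main(void)
{
    double p4_hz     = 3.2e9, p4_ipc     = 1.0;  /* illustrative */
    double athlon_hz = 2.2e9, athlon_ipc = 1.6;  /* illustrative */

    printf("P4:     %.2e instructions/s\n", p4_hz * p4_ipc);         /* 3.20e9 */
    printf("Athlon: %.2e instructions/s\n", athlon_hz * athlon_ipc); /* 3.52e9 */
    return 0;
}
[/code]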
I'm going to have a second attempt. Just ordered a used GT 220. Although they're not great GPUs, PhysX performance is supposed to be near a 9600GT.
Hoping this one will work in the second PCIe slot.
The interesting bit is going to be OS support for this.
If you have two FPU-heavy threads and some integer-heavy threads, you will win big time if the FPU threads are on different CPU modules and don't share an FPU. Windows XP won't ever know that, though.
I know you can set core affinity on a program; can you set core affinity on a thread?
Yes, and Windows 7 does this automatically - for example, to make sure that where possible threads run on separate physical cores rather than on two logical cores of the same hyperthreaded core.
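For what it's worth, you can also do the pinning yourself - a minimal Win32 sketch in C. SetThreadAffinityMask is a real API; the bit about which mask lands on which module is my assumption, since the numbering depends on how the chip exposes its cores:

[code]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Pin the current thread to logical CPU 0. For two FP-heavy
     * threads on a shared-FPU design you'd give them masks on
     * different modules, e.g. 0x1 and 0x4 IF modules pair up their
     * cores as 0/1, 2/3 (an assumption - check the actual topology). */
    DWORD_PTR old = SetThreadAffinityMask(GetCurrentThread(), 0x1);
    if (old == 0)
        printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
    else
        printf("pinned to CPU 0 (previous mask: %#llx)\n",
               (unsigned long long)old);
    return 0;
}
[/code]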
However, I suspect this will be OS-agnostic - integer loads will be internally parallelised where possible as part of the dispatch process rather than at the thread level, so you will just get more instructions per clock on integer work.
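Something like this, purely as an illustration - the two add chains below are independent of each other, so a wide core can issue them side by side without the OS knowing anything about it:

[code]
#include <stdio.h>

/* Instruction-level parallelism the dispatch logic finds on its own:
 * the two accumulator chains have no dependency on each other, so a
 * 4-issue core can run them in the same cycle. */
static int sum4(const int v[4])
{
    int a = v[0] + v[1];  /* chain 1 */
    int b = v[2] + v[3];  /* chain 2, independent of chain 1 */
    return a + b;         /* only this final add waits on both */
}

int main(void)
{
    int v[4] = { 1, 2, 3, 4 };
    printf("%d\n", sum4(v));  /* prints 10 */
    return 0;
}
[/code]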