Cheers, I've been looking for a die size comparison like that for ages.
So the codename for the 2014 Jaguar successor has now been leaked:
http://www.fudzilla.com/home/item/30...essor-is-beema
The change in the Phenom X4 line when it went from the C2 to the C3 stepping was quite noticeable, so that could be the only hardware change in Richland.
But the figures AMD are quoting are CPU *or* GPU benchmarks rather than game benchmarks, which makes me think they are better able to shift power between the CPU and GPU by boosting one while the other is under light load. That could make the extra watts available in the 100W bin more valuable and make the top part stand out.
Better balance between CPU and GPU could make external graphics cards work better too.
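To make the idea concrete, here is a toy sketch of sharing a fixed package power budget between the CPU and GPU by load. Everything here (the budget, the floor, the proportional split) is invented for illustration and is not AMD's actual Turbo Core algorithm:

```python
# Toy model: split a shared package power budget between CPU and GPU
# in proportion to their load. Numbers and policy are hypothetical.

TDP_WATTS = 100  # hypothetical total package budget


def split_budget(cpu_load, gpu_load, floor=15):
    """Give each block a minimum floor, then share the rest by load."""
    spare = TDP_WATTS - 2 * floor
    total = cpu_load + gpu_load
    if total == 0:
        return floor + spare / 2, floor + spare / 2
    cpu_w = floor + spare * cpu_load / total
    gpu_w = floor + spare * gpu_load / total
    return cpu_w, gpu_w


# When the GPU is nearly idle, the CPU can boost into most of the budget:
print(split_budget(cpu_load=1.0, gpu_load=0.1))
```

The point of the sketch is just that with a bigger bin (say 100W instead of 65W), the `spare` pool a lightly loaded chip can hand to the busy block grows, which is where the top part could stand out.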
From what I have seen so far, PS2 is a bit of a mess and needs serious fixing/optimising. It's the only game I can remember where I get a smoother framerate out in the open with gigantic draw distances than I do in a building where I can't see further than 20 feet. Very odd indeed.
A slightly excited article about HSA:
http://www.overclock.net/a/amd-fusio...-revolutionary
TH article about the history of AMD fusion and how it began:
http://www.tomshardware.com/reviews/...tory,3262.html
A range of CPUs tested with a GTX670:
http://translate.google.com/translat...omu-po-core-i7
The FX6200 does very well against the Intel Core i3 CPUs, and considering that the FX6300 is generally faster for games, it reinforces why I think it is the better CPU overall for a gaming rig.
Excited? It's like a puppy has scoffed a load of happy pills.
I really liked:
"HSA is very innovative because aside from the benefits it will provide, it has been designed to succeed from the start."
like anyone makes an effort to design new tech to fail from the start...
To be fair, I think they got quite a few right, they just aren't taken seriously until Intel start doing it as well.
Amusingly, the only one that really stuck quickly was the AMD64 instruction set, and that is supposedly only because Microsoft forced Intel to adopt it. That was kind of a game-*not*-changer. Mind you, looking at the dead-end alternative that is the Itanium architecture, I am at least glad that didn't take off.
Of course, if Itanium had taken off it probably wouldn't be a dead end now...
As to HSA, I think it's worth remembering - if I understand it correctly, anyway - that it won't magically make all current software 5x faster. Software will have to be developed specifically to benefit from it, and that's unlikely to happen (quickly) in the consumer market as AMD's market share is pretty small. After all, in the consumer market the software developers have no control at all over what hardware someone will try to run their software on.
It'll make a huge difference in the server and HPC markets though, where software development/procurement and hardware procurement are often intrinsically tied.
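To show what "software has to be developed specifically to benefit" means in practice, here is a toy dispatch pattern: the program itself detects an accelerated path and falls back to plain CPU code otherwise. `has_hsa()` and the accelerated branch are stand-ins, not a real HSA runtime API:

```python
# Toy sketch of opt-in acceleration: legacy code only ever takes the
# fallback branch, which is why HSA can't speed up existing binaries.
# has_hsa() is a hypothetical stand-in for a real runtime query.

def has_hsa():
    # Real code would query a runtime (e.g. an HSA/OpenCL platform list);
    # here we just pretend no accelerator is present.
    return False


def saxpy(a, xs, ys):
    """Compute a*x + y elementwise, dispatching to hardware if available."""
    if has_hsa():
        # An HSA-aware build would enqueue a GPU kernel over shared memory.
        raise NotImplementedError("accelerated path not sketched here")
    # Fallback path every legacy binary takes today.
    return [a * x + y for x, y in zip(xs, ys)]


print(saxpy(2.0, [1, 2, 3], [10, 20, 30]))  # [12.0, 24.0, 36.0]
```

Until developers ship the `has_hsa()` branch, the hardware sits idle, which is exactly the consumer-market chicken-and-egg problem above.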
Itanium was always a stupid idea.
Intel is struggling to get more parallelism out of the awful architecture that is x86.
The answer is to schedule in blocks of six instructions? To run two bundles at once, you then have to compare every resource used in one bundle against every resource used in the following bundle before issuing one out of order. That's *really* hard. So you rely on software scheduling at compile time to get it right? Some things you only know at run time.
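The cross-bundle check described above can be sketched as a toy hazard test. The instruction encoding and the hazard rules here are deliberately simplified inventions, nothing like real Itanium templates:

```python
# Toy model of cross-bundle hazard checking in a VLIW/EPIC-style design.
# An "instruction" is (dest_reg, src_regs); encoding and rules are
# invented for illustration and far simpler than real Itanium semantics.

def bundles_conflict(first, second):
    """True if issuing `second` alongside/before `first` would create a
    RAW, WAR, or WAW hazard on any register."""
    writes = {dst for dst, _ in first}
    reads = {r for _, srcs in first for r in srcs}
    for dst, srcs in second:
        if dst in writes or dst in reads:      # WAW or WAR hazard
            return True
        if any(s in writes for s in srcs):     # RAW hazard
            return True
    return False


b1 = [("r1", ("r2", "r3")), ("r4", ("r5",))]
b2 = [("r6", ("r1",))]  # reads r1, which b1 writes -> RAW hazard
print(bundles_conflict(b1, b2))  # True
```

Even in this toy form, every instruction pair across the two bundles has to be checked, and a compiler can only prove independence for values it can see at compile time, which is the run-time problem mentioned above.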
I just don't think Intel are very good at this stuff. The fact that people argue over whether Intel or AMD is better when they both sell the same x86 junk, it's like arguing whose farts smell best. Go sniff roses, for pity's sake; there is better out there.
I wish someone would get AMD's VCE working properly in some transcoding software. AFAIK, even software which claims to use it now, in fact only uses OpenCL to make use of some GPU cores, not the dedicated hardware. Intel with their juggernaut marketing approach managed to get software out for it fairly quickly, but it seems people don't tend to code for specific features for AMD nearly as quickly.
I'm hoping, if the rumours are right about the next Xbox/PS using HSA, it will be a good, large-market learning tool for developers, so it can carry over to PC.
Even when Intel had a clear performance deficit to AMD for ~7 years, they still held something like 80% of the x86 market share. Combined with incredibly biased benchmarks commonly used by big companies today, you need more than a performance advantage to gain market share unfortunately.
The thing is, though, that Adobe is starting to migrate its Creative Suite over to OpenCL acceleration. The Mac version has already ditched CUDA, and the PC version will probably do the same over the next year or so.
The HSA Foundation does have some big backers though.
I will make a prediction. I expect Haswell to get better compute abilities, and I expect that "all of a sudden" more reviewers and forums will start to see how "great" using IGPs for acceleration is, if Intel does it.
Agreed.
Somewhat off topic, but it seems Nvidia didn't bother with OpenCL/compute on Tegra4, and has lost customers because of it. Strange move for a company trying to push GPGPU?