Maybe they should have tested at `high`, as ultra is GPU-limited but medium isn't.
Kaveri is DDR3 only. No GDDR5 support planned.
http://www.fudzilla.com/home/item/33...port-on-kaveri
So Intel's latest CPU is actually a 486.
http://semiaccurate.com/2013/10/28/i...-little-quark/
Just when the world heads to 64-bit...
For Batman: Not overly surprising really; interestingly though, PhysX effects seem to run reasonably well on the CPU according to Techspot: http://www.techspot.com/review/733-b...rks/page4.html
I wonder if they've actually re-written it, or whether it's just a case of CPUs getting fast enough to brute-force their way through the code now?
For Kaveri GDDR5: Also not really surprising IMO, I did wonder how they planned to implement it. I could see it working for certain laptops, i.e. soldered-in memory. Maybe DDR4 will help with bandwidth.
I dunno, the PS4 uses external GDDR5 as main system memory, so they must reckon it gives them some advantage.
I guess chip density is the biggest factor: most graphics cards are currently using 2Gb chips with one or two chips per 32-bit channel, but that still limits you to 8GB of total RAM over a 512-bit interface. On a more reasonable 256-bit or 128-bit interface you're struggling to meet memory demands. Sony are, I believe, using 4Gb chips for the PS4, with 2 per channel, so they'll hit 8GB over 256 bits. On the more typical 128-bit interface of an APU, you could get 4GB of total system RAM using soldered-on GDDR5.
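Just to show the working behind those capacity numbers, here's a quick back-of-envelope sketch (my own arithmetic, assuming 32-bit GDDR5 channels and the chip densities mentioned above - not vendor specs):

```python
# Rough GDDR5 capacity sums, assuming 32-bit channels and the densities above.
def gddr5_capacity_gb(bus_width_bits, chip_density_gbit, chips_per_channel):
    """Total capacity in GB for a GDDR5 setup built from 32-bit channels."""
    channels = bus_width_bits // 32
    total_gbit = channels * chips_per_channel * chip_density_gbit
    return total_gbit / 8  # gigabits -> gigabytes

# High-end card: 512-bit bus, 2Gb chips, two per channel (clamshell)
print(gddr5_capacity_gb(512, 2, 2))  # 8.0 GB
# PS4: 256-bit bus, 4Gb chips, two per channel
print(gddr5_capacity_gb(256, 4, 2))  # 8.0 GB
# Typical APU: 128-bit bus, 4Gb chips, two per channel
print(gddr5_capacity_gb(128, 4, 2))  # 4.0 GB
```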
While I can see the logic for not pushing GDDR5 onto mainstream products, I do hope there'll be an OEM brave enough to take a punt on a semi-custom chip with ULV Kaveri CPU cores, maybe a wider GCN implementation (512/640/768 cores?) and a 128-bit GDDR5 memory interface. Stick it in a light, portable notebook chassis and you could have a very tasty product for gaming on the move.
Not that I'd be surprised if it were true, but yeah, that 'source' doesn't look too reliable. They claim 'latest news' but don't give any source material - or are they claiming to have unique access to leaked documents?
I think the nature of the console market helps Sony here.
If an APU was released using GDDR5, then the moment DDR4 comes out it is dead. Far more dead than DDR3 platforms, as DDR3 will at least have a cost advantage for a while.
I guess that's true from a platform perspective, but from a device perspective I don't think it's an issue. You can build a tablet or laptop with a console - or perhaps I should say consumer electronics - mentality: in the ultraportable market a lot of devices aren't upgradable anyway, so widespread adoption of a technology and platform longevity aren't really that important.
Of course, in laptops I'm not sure how much difference DDR4 will make anyway. As I understand it, its main advantage other than higher clocks is a point-to-point interface (one module per channel), and I can't see many laptop manufacturers wanting to give up extra chassis space to fit more than 2 SO-DIMMs. So even with the predicted clock speed increase for DDR4, 128-bit GDDR5 will provide equal or better bandwidth (at the cost of scalable memory capacity). Assuming graphics cards will stay with GDDR5 for a few years yet, there's every chance that higher density chips will start to become available, so a mid-term strategy of soldered-on GDDR5 might actually work for enterprising OEMs looking for product differentiation in the mobile gaming market.
On the desktop, of course, the flexibility of DDR4 and the ability to increase bandwidth by adding more modules easily surpass any short-term gains of using fixed GDDR5. And I suspect it will be with the move to DDR4 that we start seeing AMD adding more cores to their APU graphics: four DDR4 modules at 3200MT/s surpass the bandwidth available to a discrete 7790....
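To put some rough numbers on both the laptop and desktop comparisons (the transfer rates here are my own assumptions - e.g. 6GT/s effective for GDDR5 and DDR4-3200 - rather than quoted specs):

```python
# Peak bandwidth back-of-envelope: bus width (bits) x transfer rate (MT/s).
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

# Laptop: two DDR4 SO-DIMMs (2 x 64-bit) vs a 128-bit GDDR5 interface
print(peak_bandwidth_gbs(128, 3200))  # ~51.2 GB/s - dual-channel DDR4-3200
print(peak_bandwidth_gbs(128, 6000))  # ~96.0 GB/s - 128-bit GDDR5 at 6GT/s

# Desktop: four DDR4-3200 modules vs a discrete 7790 (128-bit GDDR5, ~6GT/s)
print(peak_bandwidth_gbs(256, 3200))  # ~102.4 GB/s - quad-channel DDR4-3200
```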
The advantage of DDR4 is expected to be density. It seems to be designed for stacked dies, allowing more on a package. That means fewer channels needed for a given amount of RAM, which drives down cost.
So in my earlier post I was thinking in terms of someone like Dell or HP: I wouldn't want to invest money in developing a product that will be obsolete as soon as DDR4 is released. That doesn't matter to Sony, partly because their chip is more graphics than CPU so GDDR5 is probably a better fit, and partly because you can't go and buy a competitive PS4 from another vendor as the design is fixed.
Now, don't echo any of this, as I don't want to become a source of possible rumours - it's just a discussion starter - but the gap between Intel's process nodes and everyone else's might not be as big as the marketing numbers make it seem.
Of course, the xxnm numbers we get don't necessarily refer to feature size - they're just names for the node - and different companies can have different definitions. Also, different features scale at different ratios, especially at these smaller nodes, although SRAM still seems to scale fairly well, i.e. close to 2:1 with a full node drop.
Density isn't purely down to min. feature size, but even excluding the dense GPU logic, Bobcat on TSMC 40nm was far denser than Atom at Intel 45nm (more than you'd expect for the half-node shrink).
I've been reading around and I've seen claims that 28nm could be considered 'smaller' than Intel's 22nm. Various sources claim a 25nm or 26nm gate length for '22nm', which is apparently what Samsung/GloFo also claim for their '28nm' nodes. Admittedly we're comparing planar to FinFETs, so it's not a perfect comparison, but still, the public node names can be deceptive it seems...
Also, I wonder if Intel choosing 14nm over 16nm actually means anything, or whether it's just marketing damage control in case other fabs hit 14nm first or close to when Intel hits 16nm, which would mean negative press?
One of the sources (worth reading IMO): http://semiaccurate.com/forums/showp...7&postcount=18
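As a toy illustration of the node-naming point, here's the idealised geometric area scaling you'd get if every dimension really shrank in line with the label (real processes won't scale this cleanly, so treat the numbers as indicative only):

```python
# Ideal area scaling if every feature really shrank in line with the node name.
def ideal_area_scaling(old_nm, new_nm):
    """Area ratio assuming a linear shrink of new_nm/old_nm in both dimensions."""
    return (new_nm / old_nm) ** 2

print(ideal_area_scaling(28, 22))  # ~0.62 - short of the ~0.5 a 'full node' drop implies
print(ideal_area_scaling(28, 20))  # ~0.51 - a 20nm label would be a true full-node shrink
print(ideal_area_scaling(22, 14))  # ~0.40 - the 14nm name promises better than 2:1 over 22nm
```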
Some pictures of a desktop Kaveri CPU:
http://vr-zone.com/articles/heres-lo...ple/62565.html
According to some speculation over on the SA forums, the CPU code might indicate a part with a 3.5GHz base clock speed.
LOL at TR:
http://techreport.com/review/25584/t...system-guide/2
http://techreport.com/review/24954/a...pus-reviewed/6
Look at the budget gaming box they specced! It's like they haven't even bothered to look at their own test results from the second review.
They do at least accept that the decision is contentious, and the benchmarks you've linked are with an older Core i3 against an FX6350, rather than the Haswell v. FX-6300 they discuss in the budget rig - a faster i3 v a slower FX-6 could be pretty close. Plus they give other considerations (although I'm not sure "no known upgrade path" is a valid one the way Intel have been changing sockets recently) like power consumption and platform features.
Personally, I'd recommend the FX-6300 every time at the minute, but the i3-4130 is at least sensibly priced and presumably competitive, particularly in the many older games that *don't* use many threads (then again, older games are unlikely to need that much CPU horsepower anyway). As TR say, the AM3+ platform is very long in the tooth, the chipsets aren't modern (AFAIK the 900 series are just rebadges of the 800 series?), and Intel still win the straight-line race by a mile. For longevity, I think AMD's extra cores outweigh those facts, but that's more of an opinion than an unassailable truth....