It seems AMD demoed a laptop running the Bulldozer based Trinity:
http://semiaccurate.com/2011/06/14/a...rinity-laptop/
Anandtech retested the A8 with faster DDR3 RAM:
http://www.anandtech.com/show/4448/a...ance-preview/3
The difference is massive!!
Considering that the HD6550D has its GPU clocked 20% lower than the HD5570, there seems to be little performance loss if fast DDR3 RAM is used.
I'm not surprised really: this is basically a discrete GPU design, so it will be very RAM-intensive.
Holy ...... words fail me (publishable ones, anyway!).
Mind you, I shouldn't be surprised: The HD4650 was massively bandwidth bottlenecked (as demonstrated when Sapphire released a DDR3 version) and that's a very similar GPU design to this, so I guess it's expected that pushing the RAM clocks on an A8 through the roof would result in something very similar.
Shame the A8-3500M only officially supports 1333MHz DDR3 - no speed boost for the 35W mobile version
EDIT: although yeah, not surprising. I noticed something similar when Hexus tested the 780G chipsets: their original test used an Athlon X2 4850e, so HT1; when they retested alongside a 790GX they used a Phenom X4 - i.e. an HT3 processor - and the results for the 780G jumped more than 20% thanks to the higher bandwidth available through the HT3 link. What really holds / held AMD IGPs back was bandwidth: I assume that's why they didn't bother updating beyond the HD3200 in later iterations: they knew they were bandwidth limited, so the extra transistors would go to waste...
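Rough numbers for the HT link comparison, assuming illustrative link clocks (around 1GHz for the older HT1-era link and 2GHz for an HT3 part - the exact clocks on those boards may have differed): per-direction HyperTransport bandwidth is just clock x 2 (DDR) x link width in bytes.

```python
# Rough HyperTransport link bandwidth: clock (MHz) x 2 (DDR) x width (bytes).
# The link clocks used below are illustrative assumptions, not measured
# values from the Hexus tests.

def ht_bandwidth_gbs(clock_mhz, width_bits=16):
    """Per-direction bandwidth in GB/s for a DDR HyperTransport link."""
    return clock_mhz * 2 * (width_bits // 8) / 1000

print(ht_bandwidth_gbs(1000))  # 4.0 GB/s per direction (HT1-era link)
print(ht_bandwidth_gbs(2000))  # 8.0 GB/s per direction (HT3-era link)
```

Even with these rough assumptions, doubling the link clock doubles the bandwidth the IGP can pull through the link, which fits the >20% jump the 780G saw with an HT3 CPU.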
Last edited by scaryjim; 15-06-2011 at 11:02 AM.
From the comments: http://www.anandtech.com/show/4448/a...ance-preview/3
But why would they retest that if they're such rampant Intel fans?
CAT-THE-FIFTH (15-06-2011)
Wow! Nearly a 40% increase across the board. Not bad for a 40% increase in memory bandwidth.
This shows the GPU is limited only by the RAM. I suspect going to 2133MHz RAM will see a further 15% increase.
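As a back-of-envelope check (assuming standard dual-channel DDR3, with each 64-bit channel moving 8 bytes per transfer), theoretical peak bandwidth scales linearly with the transfer rate:

```python
# Back-of-envelope theoretical bandwidth for dual-channel DDR3.
# Each 64-bit channel moves 8 bytes per transfer; speeds are in MT/s.
# This assumes the IGP scales roughly linearly with bandwidth, which
# the retest numbers suggest it nearly does at these speeds.

def ddr3_bandwidth_gbs(mts, channels=2, bytes_per_transfer=8):
    """Theoretical peak bandwidth in GB/s."""
    return mts * channels * bytes_per_transfer / 1000

for speed in (1333, 1600, 1866, 2133):
    bw = ddr3_bandwidth_gbs(speed)
    gain = (speed / 1333 - 1) * 100
    print(f"DDR3-{speed}: {bw:.1f} GB/s ({gain:+.0f}% vs DDR3-1333)")
```

DDR3-1866 works out to roughly 40% more bandwidth than DDR3-1333, which lines up with the "40% increase" figure; DDR3-2133 adds only about another 14% over 1866, so a mid-teens further gain is about the most you could hope for.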
Llano, compared to a Core i5, goes from being a very weak CPU with a GPU that wipes the floor with the Intel chip to being a very weak CPU with graphics that deny Intel the right to call what they have on their CPU a GPU.
I wonder if they'll introduce sideport memory or maybe a shared L3 cache between the CPU and GPU for the next gen?
Hard to see how they'd do sideport: they'd have to give the GPU its own memory bus & controller and that's going to increase the complexity of the die and also mean more traces, so a new socket. Similarly, while I can see them bringing in L3 for the CPU, I'm not sure that they could put enough cache on the die to make it worth sharing it with the GPU: but a decent L3 for the CPU should reduce the CPU's need to use the main memory, allowing more bandwidth to be allocated to the GPU.
I'd assume the next step will be faster memory, tbh: perhaps quad rate or double bit rate memory a la GDDR5 (don't know how quad rate memory is progressing for the desktop?).
Interesting how the Llano release has pushed Bulldozer out of this thread, though
A lot of people had pointed it out??
BTW, where did the A8 results for 1600MHz DDR3 RAM and at 1680x1050 come from?? The Anandtech preview did not test the A8 using 1600MHz DDR3 or at 1680x1050. I looked at the comments and could not find the link!
Edit!!
It seems to be a leaked slide from the A8 review!
Last edited by CAT-THE-FIFTH; 15-06-2011 at 11:29 AM.
They're not usually, which is why it was so surprising really. But like I said Anand does pay attention to comments and will correct stuff like this.
@badass: I don't see how it's a 'very weak CPU'. It's not meant to be an i7 killer; it's still a K10-derived chip and they've done a very impressive job with power draw and transistor density, i.e. instructions per watt. Just because it doesn't wipe the floor with a 2600K doesn't mean it's pointless. For the mainstream it's more than enough, and as AMD showed by delaying Bulldozer in favour of this, mainstream is more important to a chip maker than the few enthusiasts, or those who are misled by a salesman into thinking they need a 980X for web browsing.
Last edited by watercooled; 15-06-2011 at 11:34 AM.
TBH, in multithreaded applications an Athlon II X4 or Phenom II X4 tends to have similar or even greater speed than a Core i3. It is only single-core performance which is lower, and I cannot fathom why AMD has not implemented Turbo Core on desktop Llano processors.
Lack of TDP budget? Poor yield at GF means they can't bump the clock speeds above 3GHz without the power draw running away with itself?
Perhaps we'll see a "T" series in a few months' time as they smooth out the manufacturing process: after all, the early Phenom IIs topped out at 3.2GHz, but they got them up to 3.7 / 3.8GHz after a couple of years...