No boost on mobile? As well as testing the design, the main reason for GM107 seems to be to awe OEMs for this year's mobile refreshes (although with what CPU?).
Strange how in certain forums AMD get so much flak for any PR they do, yet Nvidia's big PR stunts never get mentioned. But initial impressions are very important. Because of the slick launch of the GTX 680, people still have the impression that Tahiti was very slow and hot (of course the 7970 GHz Edition at 1.25V was another AMD own-goal), and while the R9 290/290X looks a bit worse than the 780/780 Ti, until this 750 Ti release TPU's perf/watt chart was actually dominated by Pitcairn.
Also, Hawaii hasn't had any silicon revisions, while GK110 has probably had a few. But yes, binning every chip and building up inventories, although it goes against the just-in-time dogma that currently passes for accountancy wisdom, has served Nvidia very well. If the initial R9 290X had only used chips with very little leakage (high ASIC quality according to GPU-Z), they could have left a much better impression. Then later they could have released the leakier chips as an R9 290 and R9 285 or something. There is a reason Nvidia use their chips in such a large range of cards, and it's not just to confuse consumers!
Of course, the 290/290X has sold out anywhere people are scrypt mining, but AMD shouldn't rely on that.
As everyone knows, PCPER has close ties to Nvidia (the whole FCAT thing), but this is what I was talking about:
http://www.pcper.com/reviews/Graphic...PCs-Gaming-PCs
PCPER only uses 60-second run-throughs. That review should be doing extended testing over 15 to 30 minutes.
The cases will hardly get warm in a minute.
Nvidia wants to push the GTX 750 Ti for that segment, but if anyone has examined systems like that, the cases used by OEMs have very poor thermal properties: most have at most a single 80mm or 92mm exhaust fan for cooling, plus a top-mounted PSU. People with such systems are unlikely to service them, meaning they will be full of dust after a year or so.
The review world seems to be full of people who don't understand basic concepts. It's shoddy.
That is a great idea regarding a hot box.
AMD are fond of PR own-goals it seems. On the subject of Hawaii, as you say it could have been binned to give a better impression, but they could also have just let AIB partners fit their own coolers as usual rather than forcing reference-only, as we've been saying. Yeah, I get it was probably for a more rapid time-to-market, but they've damaged the reputation of a perfectly good GPU in the process:
How many people do you hear endlessly parroting how 'nuclear hot and power hungry' they are?
http://www.techpowerup.com/reviews/AMD/R9_290/24.html
Yes, that's nearly twice the power consumption of the 780 isn't it! /sarc
And that's for the hot-running reference model (leakage increases with temperature, etc.). Even running games literally 24/7, there might be something on the order of a £30 difference in electricity costs per year - the price difference between the cards themselves was far greater and more than offsets that (although the US price gouging changes things, of course), and of course nobody games like that.
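Just to put a rough number on that, here's a back-of-the-envelope sketch. The ~25W power delta and the ~14p/kWh tariff are assumptions for illustration, not measured figures:

[code]
# Back-of-the-envelope annual electricity cost difference between two cards.
# All inputs below are assumptions for illustration, not measured figures.

power_delta_w = 25      # assumed average difference in gaming power draw (watts)
hours_per_day = 24      # worst case: gaming literally 24/7
price_per_kwh = 0.14    # assumed UK tariff, roughly 14p per kWh

kwh_per_year = power_delta_w / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh

print(f"{kwh_per_year:.0f} kWh/year -> ~£{cost_per_year:.0f}/year")
# ~219 kWh/year -> ~£31/year, and that's the silly 24/7 case.
[/code]

At a more realistic few hours of gaming a day, the difference shrinks to pocket change.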
And the same people don't seem to understand that heat and temperature are fundamentally different things, deducing that because the die is at 90°C, there must be more heat released into the system. News flash! Heat output is directly related to power consumption, so if Hawaii is nuclear-hot, so must GK110 be! :O
As we've seen, the same non-reference cooler used on a GK110 and a Hawaii leads to similar temps, as expected based on the above, with an exception made for the higher heat density of Hawaii due to its smaller die, of course.
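A crude way to see why: treat the cooler as a single thermal resistance, so die temperature is roughly ambient plus power times that resistance. The °C/W values and power figures below are made-up illustrations, not measurements of any particular card:

[code]
# Crude steady-state model: T_die ≈ T_ambient + P * R_theta
# R_theta (°C per watt, die to air) and the power figures are illustrative
# assumptions, not measured values for any real cooler or card.

def die_temp(power_w, r_theta_c_per_w, t_ambient_c=25.0):
    return t_ambient_c + power_w * r_theta_c_per_w

aftermarket_cooler = 0.18   # assumed °C/W for a big triple-fan cooler
reference_blower = 0.25     # assumed °C/W for a small blower cooler

for name, power_w in [("GK110-ish card", 265), ("Hawaii-ish card", 270)]:
    print(f"{name}: aftermarket ~{die_temp(power_w, aftermarket_cooler):.0f}°C, "
          f"reference blower ~{die_temp(power_w, reference_blower):.0f}°C")
# Similar power through the same cooler gives similar temps; it's the weaker
# reference blower that pushes the die into 90°C+ territory, not some
# mysteriously larger amount of heat.
[/code]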
And as for the temperature thing, people seem to forget that multi-billion-pound companies probably know a lot more about semiconductor thermal management than forum posters who get all worked up about what is essentially an arbitrary number displayed in some software. People seem to be hitting 90°C with a 4770K under the stock cooler, and I don't see half as big a fuss being made about that!
Yes, the throttling caused by insufficient cooling was a concern, but the temperature in and of itself really isn't as huge a deal as some are making it out to be. That, and companies wouldn't release a line of cards knowing a load of them would come back for warranty replacement. Besides, a huge number of reference Hawaii cards seem to be coping fine with 24/7 cryptocoin mining!
Voltage is generally of more concern than temperature where semiconductors are concerned, although heat cycling can cause problems with certain solders, e.g. Nvidia's 'bumpgate' and the Xbox 360 failures, both of which IIRC involved fairly new lead-free solder combinations which tended to crack; i.e. it wasn't the actual chips failing, rather they were losing connection with the PCB.
I think these people would be terrified to hear how hot tons of other chips get, including things like mobile SoCs, laptop chips, etc.
The *sole* problem with Hawaii was the poor reference cooler. That's sorted now, and I've seen these cards sell at or below the price of reference models. Problem solved. Not that idiot fanboys let a silly thing like reality bother them.
/longer rant than I intended
Yes, so many people don't understand that power consumption = heat. Like somebody asking about wanting to mine, and during the conversation they mention that their electricity bill is already high as they use an electric heater, and it turns out they have a one-room flat...
Or someone who thinks computers somehow do 'work' so that if you input 100W you don't get 100W of heat back!
But yes, the consumption figures for the 780 Ti are pretty much the same as Hawaii's. It's only when overclocking like mad that there is a noticeable difference. Also, since Nvidia throttle under Furmark, some of the maximum power consumption figures are worthless.
TPU peak (gaming):
R9 290X (uber) 271W
780 Ti 269W
TPU max (Furmark):
780 Ti 260W
R9 290X (uber) 315W
Somehow, Nvidia's legion of stealth marketeers seem to dwell on the Furmark results. Wonder why nobody lists the Furmark performance score? A card which is obviously throttling should score less. Of course, Furmark is a pretty useless power virus anyway.
As for Haswell: the AVX2 version of Linpack plus the stock cooler seems to produce similar throttling. Strangely, while the FX-9590 gets ridiculed, Intel shipping a CPU whose stock cooler isn't up to the job gets no attention.
People have no clue about efficiency as a whole, especially when they defeat the whole objective of saving money by spending hundreds of quid to shave a few watts off a desktop. Then they never bother switching it off, defeating the whole objective again.
The thing is, heating, cooking, washing and drying clothes make up the vast majority of most households' energy costs in the UK. Saving a few watts on a computer amounts to almost nothing over a year, especially as most CPUs and GPUs downclock themselves at idle anyway.
People think synthetic load measurements are indicative of normal power use. It just shows how much people seem to know about computers, even the ones supposedly reviewing them.
tumble dryer = at least 1kW for 60 minutes minimum
washing machine = 3kW for each heating cycle
kettle = 3kW for each use
hoover = 1100W or so for as long as it's on
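Put those next to a computer and the scale of it is obvious. A rough comparison, where the usage patterns and the 10W 'saving' are purely assumed for illustration:

[code]
# Rough annual kWh for typical household loads vs a small desktop saving.
# Usage patterns below are assumptions for illustration only.

loads_kwh_per_year = {
    "tumble dryer (1kW, 1h, 3x/week)":              1.0 * 1 * 3 * 52,
    "washing machine (3kW heat, ~20min, 3x/week)":  3.0 * (20 / 60) * 3 * 52,
    "kettle (3kW, 3min, 5x/day)":                   3.0 * (3 / 60) * 5 * 365,
    "desktop 'saving' (10W less, 4h/day)":          0.010 * 4 * 365,
}

for name, kwh in loads_kwh_per_year.items():
    print(f"{name}: ~{kwh:.0f} kWh/year")
# The 'efficient' desktop upgrade saves on the order of 15 kWh/year, i.e.
# roughly £2 at ~14p/kWh - noise next to heating and the appliances above.
[/code]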
BTW the hardware.fr analysis of the GM107 hardware design is quite detailed:
http://translate.googleusercontent.c...TmnnxEcoqJPsbg
In its general communication, Nvidia presents Kepler as being architected around monolithic SMXs, in contrast to Maxwell, whose SMMs have been partitioned for greater efficiency.
This presentation, however, is not quite correct. More than a technical reality, it is a simplified story that is easy to sell, prepared by the marketing department to dress up the slides, and which may mislead many journalists.
The reality is different and more complex. The SMXs of Kepler GPUs are already partitioned into four, as in Maxwell; the main difference at this level lies in the resources that are shared between pairs of those partitions. To represent this, we have modified Nvidia's architecture diagrams to bring them closer to reality, to the best of our knowledge of the different architectures.
In reality, Nvidia has simplified the SMM on two main points:
- Pooling of the texturing units per pair of partitions, as in GK208 and GK20A (Tegra K1), which amounts to increasing the ratio of compute units to texture units, a natural evolution
- Removal of the block of extra compute units shared between two partitions in Kepler; it boosted theoretical throughput by 50%, but brought rather little gain in practice
So when Nvidia announces per-'core' performance up to 35% higher, this does not mean that the main compute units have been improved, but rather that this poorly-utilised block has been removed. The result is an SMM significantly smaller than the SMX, while under heavy compute workloads its performance is very close, on the order of 90% according to Nvidia. Of course, in games, when texturing becomes more important, the new SMM may have to settle for 50% of the SMX's performance.
Other small optimisations have been implemented too: reduced latency for the 32-bit units, fusion of the L1 and texture caches, larger shared memory, some faster instructions... enough to give GPU compute a boost.
It seems the design is not as big a step as Nvidia PR indicated. They basically took the Kepler design and shared parts between each pair of partitions in an SMX to save space and power. It also means a reduction in processing power for certain workloads.
So basically they took Kepler and gimped it even more to save power and transistor count. OTOH, it also means the higher-end chips might not be the same design, as they might be more focused on processing power than power consumption.
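The '+35% per core' headline falls out of simple arithmetic if you take the public core counts and Nvidia's own ~90% per-SM figure quoted above - a quick sanity check, nothing more:

[code]
# Where Nvidia's "+35% performance per core" claim roughly comes from.
# Public figures: 192 CUDA cores per Kepler SMX, 128 per Maxwell SMM
# (4 partitions of 32, with Kepler's extra shared 64-core block removed).

smx_cores = 192
smm_cores = 128
smm_vs_smx_perf = 0.90   # Nvidia's claimed SMM performance relative to an SMX

per_core_gain = smm_vs_smx_perf * (smx_cores / smm_cores) - 1
print(f"per-'core' gain: {per_core_gain:.0%}")   # ~35%
# i.e. the cores aren't individually 35% faster; the SMM just dropped the
# poorly-utilised shared block, so the remaining cores look better on paper.
[/code]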
The translation could be better, but it sounds like they're describing a VLIW5>VLIW4 type of improvement, improving utilisation?
Edit: And then I realised what the pictures are for. :facepalm:
Edit2: But yeah, from that it looks like they have essentially the same core building blocks, but re-arranged and scaled to allow better utilisation in gaming. Admittedly I've not done much reading into the arch yet.
Wow, the GTX 750 Ti review from TR is almost as meh as TH's:
http://techreport.com/review/26050/n...hics-processor
From the review: 'But the GTX 750 Ti can go places the R7 265 can't. The reference GTX 750 Ti is 5.75" long, doesn't need an auxiliary power input, and adds no more than 60W to a system's cooling load. The R7 265 is over eight inches long, requires a six-pin power input, and draws up to 150W of juice, which it then converts into heat.'

It's only a 36W load difference. I expected better from Scott over on TR, but it seems he has an inability to read his own review.

And then: 'I'm not sure what AMD can do to answer other than drop prices. Heck, I don't think we know much of anything about the future Radeon roadmap. AMD seemingly just finished a refresh with the R7 and R9 series. Looks like they're going to need something more than another rehash of GCN in order to stay competitive in 2014.'
The GTX 750 Ti is slower in almost every review than a GTX 660, R7 265 or R9 270. Why should AMD need to drop prices on anything other than the R7 260X??
This is simply one of the poorest reviews TR has ever done, and it makes me wonder what incentives Nvidia has offered to them and other reviewers.
Edit!!
Let's have a looky then.
HD6970
http://techreport.com/review/20126/a...ics-processors
December 15th 2010.
Totally new GPU. New uarch too.
HD7970
http://techreport.com/review/22192/a...hics-processor
December 2011. New uarch too.
AMD can replace a card with a brand-new one in about a year.
The HD6970 was made because 32nm was canned.
The R9 290X exists because 20nm was delayed.
When was the R9 290X launch??
October 2013, only a few months after the GTX 780.
You mean aside from having to not write their own review? I really wish sites would publish the review guidelines whenever they do a review, but I guess the PR departments have those 'copyrighted', as that is the current Orwellian thing to do. That first quote is definitely straight out of Nvidia PR's mouth.
Going back to the Hawaii launch, I really think the main reason AMD launched without AIBs was because they didn't want to show Nvidia their hand. It seems Nvidia PR know AMD plans almost as soon as AMD do.
The problem is that TR only has the balls to stand up to AMD. They moaned about Trinity, when AMD wanted a staged review with GPU performance first and CPU performance later; they even published private emails AMD had sent them.
Yet a few years before, they did a canned review for Intel, in one of Intel's labs, for one of their CPUs. That was OK.
This is getting like game and hifi reviewing.
You cannot even make it up.
I wish I was rich.
I would start my own site, buy all my own hardware and do some proper testing without AMD, Nvidia or Intel "review" guides, and without worrying about lost income.
PS:
I commented on the article.
I expect -1000 from the people over there and probably a scathing response from anyone answering me, but it pisses me off when a review says one thing and the figures say another.
I got a response. The review makes it sound like board power and actual power consumption are the same, yet his response to me indicated that they are not. Emm, so why mention it in the review then??
Surely if the R7 265 has a higher-rated board than it needs, it means it is somewhat overbuilt and should suit overclocking??
Plus, on a side note, I predicted what would happen when I posted, although one chap thought I was an alias of another person he hated and kept haranguing me. I should have realised US forums are a tad nutty like that after looking at Anandtech, and the chap appears to comment over there too.
I have never posted on TR in my life - I rarely comment on articles outside Hexus, and only do so when I see reviews that I think need clarification, like when Bit-tech changed that article (after trying to ban me to hide their crass mistake).
Plus my own long winded ranty posting style with multiple edits is my own!! MINE!! No one else can have it!!
All hail CAT!
P.S.: I can't even be bothered to read CAT's links, I just read through the sifted essence of each review extracted by CAT :-P
Are there any "real" reviewers who can't be bought?
I think Hexus is reasonably fair, as DR said they go to great lengths not to accept the "hospitality" of companies.
They have always been open to feedback, more than any other review site I have seen.
What a bunch of NV stooges though. Downvoted to -7 (atm). Thing is, NV do pay people to spin for them, and everything in that conclusion was straight from the NV PR department's script.
NV stock seems to be a favourite for talking up too, so as well as the spinners there are shareholders on the forums as well. After all these years I still can't believe how the only significant mass GPU failure, which probably caused over $1 billion worth of damage, was hushed up by a tech press too subservient to cover it properly.