Eco-Design law guidelines to come into force in early 2014.
From the comments section in the article.
Yeah, they've posted the wrong link (it's an outdated preliminary study), the actual draft of the policy is here: http://www.eup-network.de/fileadmin/...ect-to-ISC.PDF
Now it should be mentioned that whoever wrote this article either fails horribly at basic reading comprehension or is trying to grab attention by spewing sensationalist BS, because the draft is being misinterpreted horribly. There is no mention of actually trying to limit the total power consumption or performance of GPUs; they're just categorising them into different performance groups and setting reasonable limits for idle/sleep power draw for each category.
Perfectly reasonable if you ask me; it can only benefit the end user if their devices aren't being horribly inefficient when they're not actually putting all their resources to use. Apart from that, they're also seeking to enforce the 80 Plus Bronze standard for PSUs, another thing that actually benefits the consumer.
It's simple: more of this nonsense and Nvidia's dream will come true, cloud computing for everyone. While I run my PC at 400W tops and am not too happy with that, my kettle runs at 3000W and no one in the house cares. Cap the memory bandwidth? The what? For what reason? So when we move to 10nm or less and 4K displays, our PCs will suck and we'll have no choice but to head to cloud gaming. Don't get me wrong, cloud computing is the way to go, just leave some room for enthusiasts. The US gets the lowest prices and the highest energy demand per capita, and what do we do here in the EU? Drown in regulations.
“The commission wants to stop dedicated graphics cards of group G7 from going above 320 GB/s”
That sounds like a cap to me.
Anyway, it seems a little excessive going to all this effort in compiling the report when you consider the number of these cards that are actually in use, not everyone has a monster GPU. Having said that though a little extra efficiency would be nice.
Edit: Never mind, I've just seen Cat-the-Fifth's post.
Last edited by douglasb; 16-10-2012 at 03:35 PM.
It sounds like something similar to 'cap' but with an additional character to me. Look at the draft policy - the final category (D/G7) is "greater than or equal to 192-bit". That is not a cap.
Furthermore, if you have a powerful computer with greater than 320 GB/s bandwidth (among other factors), not only is it not capped, you are exempt from the classifications for a while:
When they then do apply, you just fit into the same cat D/G7 as the rest of them, which is, to be honest, plenty of power.

Category D desktop computers and integrated desktop
computers meeting all of the following technical parameters are
exempt from the requirements specified in points 1.1.1 and
1.1.2:
(a) a minimum of six physical cores in the central processing
unit (CPU); and
(b) discrete GPU(s) providing total frame buffer bandwidths
above 320 GB/s; and
(c) a minimum 16GB of system memory; and
(d) a PSU with a rated output power of at least 1000 W.
Last edited by kalniel; 16-10-2012 at 03:37 PM.
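The four exemption criteria quoted above are joined with "and", so all of them have to hold at once. A minimal sketch of that test (the function and parameter names are my own, not from the draft):

```python
# Sketch of the draft's Category D exemption test (points 1.1.1/1.1.2),
# based on the four criteria quoted above. Names are illustrative only.

def is_exempt(cpu_cores: int, gpu_bandwidth_gbps: float,
              ram_gb: int, psu_watts: int) -> bool:
    """All four conditions must hold; the draft joins them with 'and'."""
    return (cpu_cores >= 6
            and gpu_bandwidth_gbps > 320      # strictly above 320 GB/s
            and ram_gb >= 16
            and psu_watts >= 1000)

# A six-core build with a ~288 GB/s card still falls short on bandwidth:
print(is_exempt(6, 288, 16, 1000))   # False
print(is_exempt(6, 336, 16, 1200))   # True
```

Note that missing any single criterion (say, only four CPU cores) drops the machine back into the normal categories, however fast the GPU is.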
What happens with CrossFire? If they make a cap like that, I can see some lunatics doing 7-way SLI/CrossFire to get around it, i.e. each graphics card might be energy efficient but you still end up with a ridiculous total.
Think of it this way: an HD 5850 was pretty efficient and didn't consume that many watts. Two HD 5850s in CrossFire are basically an HD 5970, and that card was definitely not a low power draw one.
Well spotted Kalniel, I only had a quick skim through it (I have an essay in the background that I really should go back to writing...)
From what you have said (and how it reads), it seems that if you want to be exempt from rules 1.1.1 and 1.1.2 you have to go all out and build a very high-spec PC; otherwise you fall under the restrictions. Having said that, who is going to check if I were to build a PC in category D (for example) that exceeded 234 kWh/year?
Edit: having read it again, it looks like that 234 kWh/year for category D only covers its consumption when off (Poff), sleeping (Psleep) and at idle (Pidle). That makes it sound like if you turn everything to its lowest power setting for idle and sleep (so as not to exceed 234 kWh/year), you can draw as much power as you want at load, since that isn't taken into account?
Last edited by douglasb; 16-10-2012 at 03:56 PM.
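For a sense of scale, 234 kWh/year is a fairly generous allowance when only off/sleep/idle states count towards it. A back-of-envelope check (the duty-cycle split below is purely illustrative, not from the draft):

```python
# What does a 234 kWh/year allowance mean as a continuous draw?
HOURS_PER_YEAR = 365 * 24  # 8760

annual_kwh = 234
avg_watts = annual_kwh * 1000 / HOURS_PER_YEAR
print(f"{avg_watts:.1f} W average")  # ~26.7 W if idling around the clock

# With a more realistic split between states (hypothetical numbers):
p_off, p_sleep, p_idle = 1, 3, 40          # watts drawn in each state
h_off, h_sleep, h_idle = 4000, 2760, 2000  # hours per year in each state
kwh = (p_off * h_off + p_sleep * h_sleep + p_idle * h_idle) / 1000
print(f"{kwh:.2f} kWh/year")  # 92.28 kWh/year, well inside the allowance
```

So even a 40W idle draw for a few hours a day leaves plenty of headroom, which supports the reading that load power is simply outside the scope of this figure.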
I think so - it's like the existing regulation requiring TV equipment to draw less than a certain amount in standby mode, legislation that successfully countered the problem of standby modes eating almost as much electricity as the equipment being fully on. There's a large allowance for graphics cards because, until AMD's latest series, they did not enter low-power states when the computer was idle/sleeping.
That's covered - additional cards have a far smaller allowance than the first card. Again, AMD's ZeroCore addresses this perfectly; I wouldn't be surprised to see Nvidia follow suit.
Last edited by kalniel; 16-10-2012 at 04:48 PM.
Do we now have to import graphics cards from the US, then?
So after the big scary headline, there isn't really anything to worry about. As has been said already, encouraging lower idle power consumption isn't a bad thing, and it's already happening on both CPUs and GPUs; you only have to look a few generations back to find cards drawing 50W+ at idle. The HD 5000 and GTX 600 series made serious improvements to idle power consumption.
It would also be nice if the 80 Plus standard measured down to realistic idle load levels to encourage manufacturers to make more of an effort, rather than just meeting the current requirements and letting efficiency plummet below 20% load.
They are mad if they go through with this.