Quote:
Say hello to the 3rd Generation Core Processor Family.
The overclocking is disappointing....although not very surprising given the leaked info. I personally think the heat comes from the tri-gate tech; that's the biggest change to transistor density that I can see.
Still.....I think I would be content with one of these at 4.5GHz
"Generally speaking, the Tick refers to major architecture revamp, while the Tock insinuates a minor architecture refresh coupled with a shrink in manufacturing process."
I think you have that the wrong way round.
The heat it generates is a letdown. I put off my Sandy Bridge build to wait for these. I wasn't interested in the small performance gain so much as the USB 3.0 and PCIe 3.0 support.
I wouldn't mind seeing different overclocks.
I was planning on achieving 4.5GHz stable and running cool (to match Sandy Bridge tbh).
Now to wait for the i5-3570K review to see if it runs any cooler and to make my decision by the 29th when Scan stock Ivy Bridge.
Nice detailed review though.
Running the AMD 3870 with 1600 memory was unfair; we all know its GPU side runs much faster with 1866 memory, so the 36% lead over the 3770K could well have been 40% plus. Mind you, if you have a 3770K or 3570K I would at least be running an Nvidia 560 GPU; can't see it running with much less.
As for the heat, roll on Haswell, or whatever the next one is in a year.
I don't get how a 1.2GHz bump (33%!) isn't a good overclock. And 75°C under load is a perfectly acceptable temperature, especially considering the overclock level.
Should be expected though, smaller tighter CPU is going to have less wiggle room.
Personally I can't see me upgrading my CPU for at least another 5 years anyway.
Well, they have successfully persuaded all the 2500K owners not to bother changing a thing. IVB-E might be more interesting, though I still find it a bit of a con that you need to go "Extreme" to get more than 4 cores with Intel. I mean, we had the Q6600 over 6 years ago.
I for one will just keep using my i7 920 D0 for another generation. I skipped Sandy Bridge to see what Ivy had to offer. Not enough for me to part with my hard-earned.
Nothing has convinced me that i need to upgrade just yet.
It does look like the jumps in performance are getting smaller with each iteration though.
Also doesn't help that games are being dumbed down for the current generation of consoles :(
I am really disappointed. I expected either more performance or more efficiency. I don't see any real advantage over SB, in that if you can buy SB cheaper than IB why would you buy IB?
This is all AMD's fault! Just like with the GPUs, where Nvidia release within themselves, Intel don't need to do anything special. Now I'm thinking I'll wait till the next generation.
If it's simply because of AMD, then why does Intel's latest IGP still not perform as well as AMD's first gen? And AMD's drop over the last few years could be attributed to Intel's bribery with companies like Dell, which they got sued for. AMD perform below par, people complain. Intel perform below par, people blame AMD. Love it! :D
Intel are free to make CPUs as fast as they like and charge a premium for them, take the 2011 line. AMD will also have their new CPUs out soon so it would make no sense for Intel to make only minor improvements, they don't know how the AMD offering will perform so they're not going to sit on their laurels. Shrinking to 22nm in itself takes no small amount of effort, and fabs are running in to some serious limitations they need to overcome, to the point where cost/power reductions aren't nearly as great as they used to be with die shrinks. Intel deserves credit for rolling out 22nm so soon IMO, and although initial efficiency doesn't look excellent, things will probably improve a fair bit down the line with yields, but unfortunately not many places will re-review chips to show this. Don't forget a Tick is merely a die shrink, so aside from minor architectural tweaks, most of the difference in performance will come from clock speed. And it allows Intel to perfect the process for the Tock later down the line, so they have less to worry about.
I wouldn't say I'm shocked by the negativity, especially considering all the marketing/fanboy hype, but lets not forget this is the first like-for-like Tick we've seen since Penryn (Core 2) - the last one went from 45nm quad core with no graphics to a fairly different 32nm dual core with a separate IGP/MCH die on-package. But if you compare like-for-like, it's a similar story to SNB-IVB.
Have they!? I must've missed the Miles Ahead GPUs! :P In what aspect are we talking?
I actually agree with Scainer to a degree, it doesn't help with AMD not putting in a decent CPU (or decent value) at the top end any more, that relieves a lot of pressure off intel. Still a good chip for someone coming from 1366.
cheers
brasc
You clearly have been missing something if you didn't realise AMD is a mile ahead in GPUs vs Nvidia :p Sure, the 680 has closed the gap, but until then AMD was so far ahead it was basically embarrassing. Even at that, it's not like Nvidia can make anything of it due to bad yields on their chip, so AMD has free rein over the entire market anyway.
Ivy Bridge is as good as it can be - none of these companies is holding back and they never have been, and when AMD releases Piledriver in Q3 you'll realise it.
My Lynnfield (s1156) Core i7-860 is still plenty fast enough for what I do, so I will be waiting for Haswell at the very earliest before thinking about a new build. I'd say, if you haven't already, you're better off spending money on a decent SSD, which will be a massive boost if you're still on mechanical storage.
Overclocking temps are a tiny bit disappointing, but it should still be a nice upgrade to my E6750!
Ivy issues aside, I find it very annoying that a family of processors will be officially launched worldwide but only available to buy, in UK etailers etc, 6 days later....
I think it might be time to upgrade from my Core 2 Duo to one of these.
That T model could be a bit of a star. It looks as though the process is more beneficial in lower-power situations: down to 45W with only a few MHz taken off the turbo frequency.
Will build a new system at some point this year; it's likely that something from the range will end up in it.
To be fair, I think it's the hype surrounding the launch that has caused the disappointment. All the talk of improvements, and the most noticeable point to take from the reviews is that they run hot. I wanted IB to match SB but do so at a cooler temperature, so that a build would be quiet on air when fully loaded at about 4.5GHz.
I am now even more interested for a review on the Be Quiet Dark Rock 2 / Pro.
(Off topic: AMD have not dominated GFX cards since the 5### in 2009/10. Even then the 460 dominated mainstream sales, just as the 560 Ti has been doing, and I'd expect the same will happen this time around (rumours suggest the 680 was actually a lower-range card that they added some extra MHz to in order to outperform AMD's best).)
SPCR measured HD4000 power consumption and it is consuming more power than the HD6550D:
http://www.silentpcreview.com/article1259-page3.html
Obviously I can't say for sure yet, but based on past experience there's not a lot of point in going for one of the lower TDP models - they all idle about the same because of gating and the normal parts are often equally or even more efficient (i.e. performance/watt) under load - considering they're often more expensive, the only real reason to go for one IMO is if you're physically limited with the amount of power you can draw or amount of heat you can dissipate.
It's greatly attributed to the hype IMO, as I was saying in an earlier post.
They are just that - rumours. There may be a larger die based on the same architecture planned, but it doesn't mean it was ever intended to be released at the same time. If Nvidia could produce this larger die which massively outperforms anything at the same price point as existing AMD cards, why wouldn't they? And there wasn't much of a gap between AMD's release and Nvidia's - it would take a tad longer than that to scrap existing production runs, modify a smaller die and get it from data to silicon. ;) But this is a subject that's been done to death in other threads.
I noticed Anand left Llano power consumption measurements out of the CPU comparison tables...
I'll be missing this one out ! Looking forward to the next Gen...
Production samples have a larger IGP section than the review engineering samples:
http://www.chip-architect.com/news/2...es_Sandys.html
It might be to improve yields of the IGP, but I wonder if power consumption is also affected - unless production and engineering samples are tested side by side it will be hard to say.
It seems IB is actually around 183mm² as opposed to the 216mm² of SB.
Core i7-3770T <--- 8 threads and 45 watt TDP.
Wow! That's outstanding.
I can see why you guys may be a bit disappointed with the overclocking and heat results. But for people like me with more desktops than active directory can actually display, that kind of efficiency is a BIG bonus for new purchases.
Looks like I am the only person here that is actually pleased with Ivy Bridge :p
Butuz
@Butuz: Check my post 23, first paragraph. The 'energy saving' models generally aren't worth it.
Agree with you for home use. At home you can just get the K version (generally roughly the same price) and drop the multi and voltage if you want efficient.
In industry though, where every watt counts, we don't all want to spend all day under/overclocking and testing stability on our PCs, so the pre-underclocked, fully tested and warranted Intel ones actually are worthwhile.
When you've got a building pulling 600 amps through 600A fuses sometimes and you need to add 100 more PCs, the only option to avoid the frankly absurd cost of digging up roads etc. to upgrade the electric supply to one building/site can be using low-TDP CPUs, and paying a bit of a premium per CPU still works out vastly cheaper.
Home users don't even need to bat an eyelid at considerations like that. That's why I very much welcome 45w tdp 8 thread CPUs :-)
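The supply-headroom argument above can be sketched as back-of-envelope arithmetic. All the figures below are invented examples (a 230V UK supply, a hypothetical ~60W for the rest of each system), not measurements:

```python
# Rough sketch of the point above: how much extra supply current 100 new
# PCs need at different CPU TDPs. Example figures only, not measured values.

MAINS_VOLTAGE = 230  # volts, UK single phase

def extra_amps(num_pcs, watts_per_pc):
    """Current drawn by num_pcs machines each pulling watts_per_pc at the wall."""
    return num_pcs * watts_per_pc / MAINS_VOLTAGE

# Hypothetical worst-case wall draw per PC: CPU TDP plus ~60W for the
# rest of the system (board, drives, PSU losses).
low_tdp  = extra_amps(100, 45 + 60)   # 45W TDP parts, e.g. an i7-3770T
standard = extra_amps(100, 77 + 60)   # 77W TDP parts, e.g. an i7-3770K

print(f"100 PCs, 45W CPUs: {low_tdp:.0f} A extra")   # → 46 A
print(f"100 PCs, 77W CPUs: {standard:.0f} A extra")  # → 60 A
```

On a feed already near its fuse rating, a difference of that order is exactly what decides whether the extra machines fit without an infrastructure upgrade.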
Butuz
The lower-wattage Core i5 and Core i7 cost more and are not worth the price IMHO (unless you are going for a very small PC), and I have a mini-ITX build which would benefit from such a CPU too.
IIRC, with the Core i3 2100, if you simply dropped the clockspeed to that of a Core i3 2100T, power consumption was very similar.
The thing is, the Xeon E3 quad-core CPUs make more sense though. For example, an 80W TDP Xeon E3-1230 has slightly lower clockspeeds than a 95W TDP Core i7 2600K but has the IGP switched off, and yet costs far less - around £175. The E3-1230 V2 has a 69W TDP, runs 200MHz slower than a Core i7 3770K and lacks an IGP, but hopefully will be under £200.
IIRC, configurable TDPs are probably going to be more common as time progresses, and Intel and AMD are already starting to do this.
The thing is though, under idle and lower-load conditions power consumption between IB and SB is not massively different - it's under heavy load that IB does better. The issue is how often are you going to put your CPU under 100% load rendering and video encoding? Why don't more websites test gaming power consumption, for example, as many people will buy Core i5 CPUs and only game and do office tasks?
Too many websites don't test power consumption under multiple situations - at least SPCR and Tom's Hardware are doing this.
Yeah, I meant unless you have tight power draw limits. The low-power versions, like I say, usually idle within a watt or two of each other, but unless your PC is constantly loaded 24/7 regardless of task completion, you have to look at energy rather than power, i.e. a faster CPU will spend less time under load for a given task, so given the same efficiency it would draw the same amount of energy as the 'energy saving' chip. However, it's often the case that these chips are actually less efficient than their full-speed counterparts, so you have to compare carefully before assuming the 'green' model will save you money, as it could be both costing you more and making you wait longer for tasks to complete. Consider it more of a speed limit for chips.
However, it depends on the silicon; sample variance and bins have to be taken into account - remember the green models will be using the same die as the full-speed chips. Basically, unless you have strict worst-case limits you can't exceed (relying on circuit breakers for the rare occasion every PC in the building is fully loaded would be a better option in many cases), you're best waiting for reviews comparing efficiency, preferably of several samples; I'm basing this on previous chips, but IVB and the process it's made on could be most efficient at lower clock speeds.
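The energy-vs-power point above is easy to show with numbers. This toy calculation uses purely hypothetical chips (95W finishing a job in 20 minutes vs 65W taking 30, both idling at 5W) to show how a 'green' chip can save nothing at all over a fixed hour:

```python
# Toy illustration of energy vs power: a lower-power chip that takes
# longer per task can use exactly as much energy as the fast one.
# All figures are invented examples, not measurements.

def energy_wh(load_watts, task_seconds, idle_watts, window_seconds):
    """Energy over a fixed window: run the task, then idle for the rest."""
    idle_time = window_seconds - task_seconds
    return (load_watts * task_seconds + idle_watts * idle_time) / 3600

WINDOW = 3600  # one hour

# Hypothetical chips: full-speed finishes a render in 20 min at 95W;
# the 'green' model takes 30 min at 65W; both idle at 5W.
full_speed = energy_wh(95, 20 * 60, 5, WINDOW)
green      = energy_wh(65, 30 * 60, 5, WINDOW)

print(f"full-speed: {full_speed:.1f} Wh")  # → 35.0 Wh
print(f"green:      {green:.1f} Wh")       # → 35.0 Wh
```

Here they come out identical, and if the low-clocked part happens to be slightly less efficient per unit of work, it actually loses - which is why comparing on TDP alone is misleading.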
Looks like it doesn't mind being undervolted, so it could make sense to buy the standard one and do that for similar gains. They do however have the same list price rather than being more expensive, so the S version might be a nice compromise. Also, we have a case here where, because the heat is very concentrated, small changes in power result in big thermal differences; a small drop in power means fans can run slower, and as the relationship between speed and noise is non-linear - much quieter :).
Again though, it shouldn't really matter unless you're constantly loading the CPU - the idle power (where the CPU will be most of the time in most home systems) will be almost identical, if not exactly identical. And unless you're overclocking, the thermal characteristics of IVB look fine.
It seems that Intel is using TIM now under the heatspreader:
http://www.overclockers.com/ivy-bridge-temperatures
Previously they used solder, and the article suggests this is the reason it runs hot.
That doesn't make any kind of sense. Back to school*.
*The rate-limiting step in heat extraction is the metal/air interface of the reference cooler. As long as every previous step is faster than this rate, it doesn't matter what their performance relative to each other is.
Hmm, if that is the case then IVB may end up being a screaming overclocker as it makes it easy to remove the IHS and get really good temps.
We need answers! :(
Hmm, I suspected as much (it was at least obvious the die-heatspreader interface was the weak link), and this also has the potential to degrade with age. I wonder why they made the switch?
OK, please explain to me how the die-heatspreader interface can be anything like as poor as the metal-air one on a default OEM cooler, even fan-assisted?
Put another way, if thermal grease were the limiting factor, why do CPUs bother ramping up fan speed under load/when hot?
Because heatsinks have a massive surface area to transfer heat to the surrounding air.
Like I said, the article suggests the TIM, but TBH in the scheme of things it does not really matter. Whatever is said and done, Ivy Bridge does run hot above a certain voltage, and that is down to how Intel has designed the CPU and the packaging. It is something we need to take into consideration when comparing with the competition, i.e. the SB Core i5 and Core i7 CPUs.
Thinking about it some more, I am expecting the first IHS-less results to be not a lot better than the results with the IHS.
The most hardcore overclockers have been trying to remove the solder and replace it with TIM for a while....and when they do manage it without destroying the chip, they do run cooler.
Well, even de-lidding you're still relying on TIM to transfer heat from the tiny die to the heatsink, so while it might improve things a bit with a lapped heatsink and good, carefully applied TIM, it's still not going to be as good as a soldered heatspreader transferring heat much more efficiently to the larger surface area of the IHS (that's its purpose, besides the secondary one of protecting the fragile die). Like I said, it's curious why Intel did this.
Yeah, cost probably is the sole reason. I'm not impressed though; TIM dries up (and therefore becomes less effective at conducting heat) over time, and it's not exactly simple to replace the stuff under the IHS.
Well, until we know better, it's anyones guess. It could be that the density and tri-gate causes such hot-spotting on the chip that TIM was needed to make them run as cool as they do.
We really need someone to risk breaking their spanking new CPU on Monday.......although I am sure there will be a couple!
But TIM is far less conductive and is far more likely to leave gaps than solder.
Oh I don't doubt there will be a few, even with a soldered IHS people attempt it: http://www.xtremesystems.org/forums/...d.php?t=256092
and the solder tends to be a lot thicker than TIM.......
That was my point. When people remove the solder and IHS, temperatures drop. Is it because of the solder or the IHS?
Out of interest, what rig are you putting the new CPU into? It's not like your current ones seem deficient!! :p
Will be going in the main pc. Socket 1366 is way too old for my liking and now with PCI-E 3.0 I can finally get all the lanes I need on a desktop board.
And yeah, it isn't really needed but then neither was the starbucks I had this morning....and the upgrade itch has been getting to me.....can't remember the last time I had a motherboard last this long before replacement!
Just bagged a 600T in white for £90 delivered as well to put it all in, and have a Megahalem lying around......so going air at the same time and ditching the WC......the biggest changes I have made to my gaming rig in ~4 years!
Wouldn't the bandwidth be the same, as the Z77 motherboards support PCIe 3.0 8x, which is more or less the same as PCIe 2.0 16x in multi-card setups?
It doesn't matter; solder is just metal, so it has similar conductivity to the heatsink. TBH it is very thin - easily within the range of TIM, possibly thinner depending on how it's applied (when melted, solder is very fluid; it's not gloopy like TIM, so it doesn't need a lot of force to thin it out). OTOH, even the best TIM is a rubbish conductor vs solder if it's not spread incredibly thinly, and even best-case it's miles off. Solder is the best interface to use, but because having a heatsink soldered to a CPU would be impractical to say the least, relying on solder to conduct the heat to the much larger surface area of the IHS is a better solution, and one which has been used for years. We're not talking small differences either: http://en.wikipedia.org/wiki/List_of...conductivities - compare lead-free solder to silver-based thermal grease. Look at how many thermal pastes use metal or ceramic particles to improve conductivity, yet the conduction path is still likely to involve the oil the particles are suspended in, which is a relatively poor conductor; and if not, conduction through small particles pressed against each other still doesn't match solid metal. Hence the 'liquid metal' pads/TIMs trying to emulate solder by using alloys which are liquid near room temperature.
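To put rough numbers on the solder-vs-grease difference, here's a sketch using the one-dimensional conduction formula R = t / (k·A). The layer thickness, the die area and both conductivity values are ballpark textbook figures chosen for illustration, not measurements of Ivy Bridge:

```python
# Rough sketch: thermal resistance of a thin interface layer,
# R = thickness / (conductivity * area). Ballpark figures only.

def thermal_resistance(thickness_m, conductivity_w_mk, area_m2):
    """Thermal resistance (K/W) of a flat layer in 1D conduction."""
    return thickness_m / (conductivity_w_mk * area_m2)

DIE_AREA = 160e-6   # assume a ~160 mm^2 die, expressed in m^2
LAYER    = 50e-6    # assume a ~50 micron interface layer

r_solder = thermal_resistance(LAYER, 50, DIE_AREA)  # lead-free solder, k ~ 50 W/mK
r_grease = thermal_resistance(LAYER, 5, DIE_AREA)   # good grease, k ~ 5 W/mK

print(f"solder: {r_solder:.4f} K/W")
print(f"grease: {r_grease:.4f} K/W")

# With ~100W flowing through the die, the extra temperature drop across
# the grease layer vs solder is:
print(f"extra delta-T at 100W: {100 * (r_grease - r_solder):.1f} C")
```

With these assumed numbers the grease layer costs several extra degrees across the interface at full load, before accounting for voids or pump-out, which is in line with the overclocked temperature gap being discussed.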
Most people don't succeed in removing a soldered IHS, but I've not seen conclusive evidence that de-lidding a CPU actually improves temperatures over lapping the IHS for a soldered die. It doesn't stop people experimenting though.
Even running 2 580s on 2 8x PCIe2.0 lanes didn't seem to make any noticeable difference in most games IIRC, I'll try to find the article which I think was on Tom's...
Edit: This looks like the article but I seem to remember a more recent one, comparing more than the one card, but here's results for the 480 anyway. http://www.tomshardware.co.uk/pcie-g...iew-31964.html There's a small difference for some games but it's mostly negligible, not something you're likely to notice.
I wonder if in a future stepping they will use solder instead of TIM. Or if they are reserving solder for EE parts.
I was wondering that, considering they've had delays but the desktop line seemed unaffected, maybe they made up time by switching to TIM for a while?
The problem wasn't 2 GPUs....it's adding an 8x PCIe RAID controller as well.
Isn't there a total of 16 PCI-E 3.0 lanes??
Wouldn't that mean there would be none left after you use a pair of GTX580 cards??
Ah, fair enough, but don't forget the chipset is much the same as the Z68, providing 8 PCIe 2.0 lanes; anything else comes from the CPU's integrated controller and isn't dependent on the chipset - even the older boards will support 3.0 for the slots fed by the CPU. The cards will also need to support 3.0 if they're to take advantage of the extra bandwidth.
Edit: @CAT: Yeah, but you get another 8 2.0 lanes from the chipset, but they may be shared by on-board features e.g. extra SATA/USB controllers.
Ah,OK so there are an additional 8 PCI-E 2.0 lanes available!
Yeah, but unless there are no extra chips on board, the 3rd slot will likely be either 4x or switchable, i.e. choose between 8x or the SATA/USB controller.
Some boards have 3 PCI-E 3.0 slots plus some PCI-E 2.0 slots. I'm looking at the gigabyte that runs 8/4/4 @ PCI-E 3.0 (so 16/8/8 @ PCI-E 2.0)....plus some 1xPCI-E slots (ideal for soundcard)....which is exactly the same amount of lanes each card is getting on my x58 currently. You do need the IB chip to get the PCI-E 3.0 compliance and the 3rd slot to work though.
I have noticed that the Z77 boards differ significantly in how PCI-E lanes and 3.0 compliance works.....you really have to read the specs on every board carefully.
8/4/4 PCIe 3.0 does not mean you can use 2.0 cards as 16/8/8 BTW; you will still have the same number of lanes, just they will run at PCIe 2.0 spec instead. So, assuming the same 2.0 cards, it will be no different than running them on a SNB CPU. Yeah the 'main' PCIe controller is on the CPU, hence why the older boards often support 3.0, electrically the board doesn't have much to do with it.
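For reference, the per-lane figures behind the "3.0 x8 is roughly 2.0 x16" point can be checked with a quick calculation. PCIe 2.0 signals at 5 GT/s with 8b/10b encoding; PCIe 3.0 signals at 8 GT/s with the more efficient 128b/130b encoding:

```python
# Sanity check of the lane maths in the thread: one-way bandwidth of a
# PCIe link from transfer rate and encoding overhead.

def bandwidth_mb_s(lanes, gt_s, encoding_efficiency):
    """Approximate one-way bandwidth in MB/s for a PCIe link."""
    # 1 GT/s = 1e9 transfers/s, one bit per transfer per lane
    return lanes * gt_s * 1e9 * encoding_efficiency / 8 / 1e6

pcie2_x16 = bandwidth_mb_s(16, 5, 8 / 10)     # PCIe 2.0, 8b/10b
pcie3_x8  = bandwidth_mb_s(8, 8, 128 / 130)   # PCIe 3.0, 128b/130b

print(f"PCIe 2.0 x16: {pcie2_x16:.0f} MB/s")  # → 8000 MB/s
print(f"PCIe 3.0 x8:  {pcie3_x8:.0f} MB/s")   # → 7877 MB/s
```

So an x8 3.0 link really does land within a couple of percent of x16 2.0, which is why the number of physical lanes matters less than the spec they run at.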
Is this a retail chip or a review sample that you've been sent? There's been a lot of talk about Intel using thermal paste rather than fluxless solder to attach the IHS, accounting for the "huge" temperature differentials when overclocking vs Sandy Bridge.
Wrong thread?
I can see me moving to one of these, but I'm coming from AMD so it will be a step up for me.
It will be a step up for your electricity bill at the least :D
Butuz
Hmm, these temps put me off a bit. I'm on the brink of building a new machine and Ivy Bridge was in my mind; now I'm debating whether jumping onto 2011 is the way to go.
Yeah, it is a daft move by Intel; did they think no one would notice or something? It's not the initial temps that concern me, it's what they're going to be like a few years down the line when the paste starts to degrade. After all, lots of us clean and reapply TIM because of degradation. In an extreme case, a friend's P4 system, the stock TIM had degraded so far that the fan was constantly at (very loud) max speed, yet according to the BIOS the CPU was idling way above the max recommended temp. Cleaning and fresh paste sorted that, but it's not as simple if you have to remove the heatspreader to do it too!
I couldn't really recommend 2011 for most people, unless you have special requirements of some sort. Unless you're planning on heavy overclocking, IVB should be fine, if not I'd go SNB.
I'll be putting one of these in my first build. Will be a noticeable improvement over my quad core Q6600
Definitely going in my next build...love Intel :)