Doubles the pace of Moore's law, apparently.
Proof that Intel is losing the plot, more like.
There are theoretical limits to how small the shrinks can go, yes; 11nm seems to be the point where it can't go any further using the current methods.
Some are saying Intel have a five-year lead on rivals when it comes to FinFET technology, so competitors may have to deal with shrinking stencils whilst reducing static leakage in other ways.
So maybe it's worth Intel doing. It depends largely on what TSMC and GlobalFoundries can do, and whether the sheer pace of changing the node size will lead to failures in designs. Variability is going to be a growing problem as we get down towards 11nm and beyond.
Please let your processors have a sensible lifetime. At the moment, buying Intel processors isn't proving to be future-proof, as another one comes along very soon.
It depends how you look at it, I guess; just because they bring out a new one doesn't mean you have to buy it. I'm still using S775 and I have no NEED to upgrade, although it's been this way a good few years and it's only just starting to get a bit sluggish (in games, that is). I would suspect an 1156 i7 will be fine for a similar amount of time.
I agree with you that the approach is whack, though. ASRock proved that it's unnecessary with that motherboard based on the P67 chipset which took 1156 CPUs.
I don't mind new processors coming out, it's the damn socket changes which annoy me. Buying new Memory, Motherboard and CPU is costly compared to a simple CPU upgrade.
Anyway... I think focusing on the die shrink is an easy way to show consumers they're putting effort in, I doubt this will be the most important thing they improve over the next few years. Intel (and AMD) have a lot of work ahead of them if they are to maintain dominance over RISC architecture.
Perhaps legacy applications will hold people back, but sooner or later the benefits of RISC architecture will overcome that hindrance. I know I look forward to a Cell processor PC, even if I use Linux or Chrome OS instead of Windows.
My first thought was: "We've dug ourselves into a hole but we'll dig our way out !"
The irony is that for years x86 CPUs have been RISCy on the inside, with a huge chunk of decode logic on the front to convert x86 into their RISCish microcode.
@ Steve:
I think they did that in order to benefit from pipelines, their usual CISC methodology wasn't conducive to pipelines. I think they still have RISC elements.
CISC was popular because it was easier to code for. The benefits of having more control of your program through your code were eclipsed by the difficulty of creating the code. That locked the PC market into x86 architecture and it's been a struggle for RISC ever since.
Erm, yes and no. First off, I'm an ARM fanboy: I cut my teeth on a BBC, then moved on to an A5000, then a RISC PC.
But they are no panacea; ARMs use less power right now because of better design.
It shouldn't be thought of as a CISC being a RISC entity with an instruction decoder bolted on. In some ways it could be said to be a collection of RISC entities (I don't want to use the word cores, because they ain't) which get the instructions passed to them by a CISC decoder. This pipelining actually allows you to get more usage out of the building blocks. Now I'm not saying for a second you can't do this with a compiler, but when you have a simple RISC core it is just that, simple, so the opportunities for optimisation aren't there.
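To make that concrete, here's a rough sketch in Python (purely illustrative; the instruction names and the micro-op splits are invented, not Intel's actual ones) of cracking one CISC-style instruction into simple RISC-like micro-ops that separate execution units can then pipeline:

    # Toy decoder: crack CISC-style instructions into RISC-like micro-ops.
    # Both the instruction set and the micro-op names are made up for illustration.

    def decode(instr):
        """Return the list of micro-ops for one architectural instruction."""
        op, *args = instr
        if op == "ADD_MEM":                        # e.g. add [addr], reg (read-modify-write)
            addr, reg = args
            return [("LOAD",  "tmp", addr),        # handled by a simple load unit
                    ("ADD",   "tmp", "tmp", reg),  # handled by a simple ALU
                    ("STORE", addr,  "tmp")]       # handled by a simple store unit
        if op == "MOV_REG":                        # a register move is already simple
            dst, src = args
            return [("MOV", dst, src)]
        raise ValueError("unknown instruction: %r" % (op,))

    program = [("ADD_MEM", 0x1000, "eax"), ("MOV_REG", "ebx", "eax")]
    micro_ops = [u for instr in program for u in decode(instr)]
    print(micro_ops)  # the back end schedules these simple ops, which is where pipelining wins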
When you look at some of the mid-range Android handsets and how incredibly slowly they run despite plenty of MHz, you can get a feel for how much this matters.
What it's coming down to now is more a case of useful work done (again, I don't want to say 'instructions', because a million NOPs are useless) per unit of power, in time...
That said, going long on ARM and shorting Intel does strike me as an obvious move right now.
throw new ArgumentException (String, String, Exception)
Sounds to me like Intel intends to address the power usage gap with a hammer, i.e. with a process that is more efficient, rather than by changing the design/instruction set (because they can't).
(\__/) All I wanted in the end was world domination and a whole lot of money to spend. - NMA
(='.*=)
(")_(*)
Let's say we move from x86 (etc.) to ARM.
What speed of legacy emulation would be required to run 99% of apps?
A 1GHz Pentium M is more than enough for most people, so how/when do we get that kind of emulation speed?
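For a feel of where the cost comes from, here's a minimal interpreter sketch in Python (the three-instruction guest ISA is invented): every guest instruction takes a fetch, a decode and a dispatch on the host, which is why pure interpretation runs far slower than native code and why real emulators lean on dynamic binary translation instead.

    # Minimal interpreter for a made-up three-instruction guest ISA.
    # Each guest instruction costs several host-level steps (fetch, decode,
    # dispatch, execute) -- the basic source of emulation overhead.

    def run(program, regs):
        pc = 0
        while pc < len(program):
            op, a, b = program[pc]          # fetch + decode
            if op == "ADD":                 # dispatch + execute
                regs[a] += regs[b]
            elif op == "MOV":
                regs[a] = regs[b]
            elif op == "JNZ":               # jump to index b if register a is non-zero
                if regs[a] != 0:
                    pc = b
                    continue
            pc += 1
        return regs

    # Count r0 down to zero by repeatedly adding r1 (= -1), then copy it into r2.
    regs = {"r0": 3, "r1": -1, "r2": 99}
    print(run([("ADD", "r0", "r1"), ("JNZ", "r0", 0), ("MOV", "r2", "r0")], regs))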
That was true about 15 years ago, when the first Intel Pentium and AMD K5 came out and had a million or so transistors dedicated to converting CISC instructions into micro-ops.
These days, the number of transistors for that conversion has stayed roughly the same, while every other part of the CPU has expanded massively, especially various caches. The net result is that those micro op translators now consume a tiny fraction of the total die area.
Also, the fact that the instructions are translated to RISC internally to the CPU could be seen as an advantage, as it gives Intel and AMD the flexibility to change that internal instruction set any time they like without affecting anyone.
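As a toy illustration of that flexibility (Python again; the micro-op names and the two 'generations' are invented), the externally visible instruction stays the same while the internal sequence it decodes to can change from one design to the next, without anything being recompiled:

    # Two hypothetical CPU generations mapping the same architectural instruction
    # to different internal micro-op sequences. Software only ever issues "ADD_MEM",
    # so the internal change is invisible to existing binaries and compilers.

    DECODERS = {
        "gen1": {"ADD_MEM": ["load", "add", "store"]},
        "gen2": {"ADD_MEM": ["load_op_store_fused"]},  # fused into one internal op later on
    }

    def decode(generation, instruction):
        return DECODERS[generation][instruction]

    for gen in ("gen1", "gen2"):
        print(gen, decode(gen, "ADD_MEM"))  # same external instruction, different internals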
ARM meanwhile are stuck with their instruction set, and if they want to change it (which they have done about four times in the past 20 years), they force people to recompile. The result is that you often can't take an ARM binary for one device and expect to run it on another.
Intel already tried this with the Itanic... sorry, Itanium... The market said no, we are happy with x86, which the Itanium could only emulate. I doubt Intel will try it again.