But Intel asserts that it is accelerating its 10nm transition, with desktop CPUs due in H2 2021.
As a naive outsider to the market, it now seems obvious that Intel has lost the foundry war and just needs to focus on architecture, unless they're going to curveball us all with a 3/5nm node in 2021...
How can they keep operating their roadmap with all these delays?
Good news all round really; AMD will continue to gain market share.
Judging by this announcement, a decent portion over the next year (and likely more) will be at Intel's expense.
Should create a healthier market for consumers.
Do Intel really need to spend time and money on die shrinks? Why not let others do the groundwork and testing, then just jump to 7nm+ when the process is more efficient? I don't see them losing by any great strides still using 12nm or whatever they're on at the mo...
Old puter - still good enuff till I save some pennies!
To be fair, it won't hurt them much. There isn't really enough capacity around for AMD to jump in. Prices will rise as even more people jump to AMD, and then they have the same supply issues. The only thing Intel can do is chop prices, and again the consumers will win.
Well, if they don't outsource to the professionals, it would be hard to keep up otherwise. Think of 5nm and 3nm: AMD and others won't hold back, and the companies in this business keep popping up with new factories all the time. If Intel were to build new factories, would they then be able to keep the same competitive price on their products as well?
Eh!? Intel had fabs before most of the other companies were even a twinkle in their designers' eyes...
I think if you actually researched how many fabs there are, you'd be surprised at how few there are... and even more so at how few are actually shipping decent yields of chips on processes smaller than Intel's.
I wanna agree, because Intel getting shafted after years of being words-that'll-get-blanked to consumers is just tasty delicious. But competition is important, and if Intel lie down, AMD will slow down, raise prices, and stagnate. In business it's daft to supersede your own products when you can continue making money on them, so without competition they'll hold back on innovation.
I think AMD need some more time to dig in before Intel come swinging for them though, else we get back to Intel ruling the market with an iron bludgeon in no time. They don't have a history of tolerating competition, or playing nice with OEMs that don't stay nicely in line.
The quad core Athlons at the low end still being on 12nm might help AMD keep volume moving if Intel keep fumbling the ball.
Being vertically integrated, being able to push knowledge from chip design back to production and tweak the process so it fits your needs better, having full control over manufacturing capacity? All huge (theoretical) advantages.
Meanwhile in reality it seems like Apple's A14 is going to be the first commercial product on TSMC's 5nm node, probably around the time Intel launch their newest desktop processors... still on 14nm.
Zero mention of Xe, is that a surprise?
So, sort of what GF did when they licensed Samsung's 14nm, except in this case Intel would try to buy the know-how of a previous node once TSMC or Samsung no longer consider it that relevant?
Can't see that being that attractive, as they'd be constantly one to two nodes behind everyone else, and Intel still have very deep pockets, so there's no point in licensing an old node to save money.
Seems like either fabbing elsewhere or trying to license a working leading-edge node makes more sense. And fabbing elsewhere might mean a chiplet strategy like AMD's with Zen 2: IO on GF's old 12nm node, cores on 7nm. Expanding on that, the IO and GPU could probably be on Intel's own old node while the cores are done at TSMC.
If Intel are willing to swallow margins*, they could make any GPU part extra wide but run it slow, at or even below the node's sweet spot.
*The chase for fat margins is ultimately what probably got Intel into this mess. The biggest mistake they made in the last decade (yes, bigger than wasting $1 billion on Larrabee, $6 billion on McAfee, roughly $3 billion on contra revenue (dumping), and so on) was turning down Apple's request to make mobile SoCs for the first iPhone. That, and their general policy of crippling Atom in case it cannibalised their Core and Xeon cash cows. In the end, TSMC got where they are because they have all the volume of mobile, and since silicon fabs cost billions to build, big, constant volume is critically important.
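The wide-but-slow GPU idea rests on dynamic power scaling roughly as P ∝ C·V²·f, with voltage dropping alongside frequency below the node's sweet spot. A toy calculation (all unit counts, frequencies, and voltages below are made-up illustrative figures, not real Intel numbers):

```python
def dynamic_power(units, freq_ghz, volts, c=1.0):
    """Relative dynamic power: P ~ N * C * V^2 * f (arbitrary units)."""
    return units * c * volts ** 2 * freq_ghz

def throughput(units, freq_ghz):
    """Relative throughput, assuming work scales perfectly across units."""
    return units * freq_ghz

# Hypothetical comparison: narrow-and-fast vs. twice-as-wide-but-slow,
# where the slower clock also allows a lower supply voltage.
narrow = dynamic_power(units=1.0, freq_ghz=2.0, volts=1.10)
wide = dynamic_power(units=2.0, freq_ghz=1.0, volts=0.85)

print(throughput(1.0, 2.0), throughput(2.0, 1.0))  # equal throughput
print(narrow, wide)  # the wide, slow part draws noticeably less power
```

The catch, as the comment notes, is that the wider die costs more silicon per chip, which is exactly the margin hit being described.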
They might be behind by being on an older process, but how far behind are they actually in performance? I'm sure someone with an Excel sheet and time could estimate how much of a boost Intel would gain by being on the same process as AMD. Also think about how refined the 12nm (or whatever they're on) process is by now: I should imagine they're getting more good dies per wafer, but the flip side is fewer total dies per wafer due to them being bigger. I should imagine someone at Intel has a rather large spreadsheet and is watching that margin...
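That per-wafer trade-off can be sketched with the classic Poisson yield model (the die areas and defect densities here are made-up illustrative numbers, not anyone's actual figures):

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Rough gross-die estimate: wafer area over die area,
    minus a standard edge-loss correction term."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Hypothetical comparison on a 300mm wafer: a big die on a mature,
# low-defect node vs. a smaller die on a newer, more defect-prone node.
for area, d0 in [(200, 0.001), (100, 0.003)]:
    gross = dies_per_wafer(300, area)
    good = gross * poisson_yield(area, d0)
    print(f"{area} mm^2 at {d0}/mm^2: {gross} gross dies, ~{good:.0f} good")
```

The bigger, mature-node die yields a higher fraction of good chips, but starts from fewer candidates per wafer, which is exactly the margin balance being described.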