California startup employs several of the creators of RISC-V and is valued at about $2bn.
No, I also can't see any benefit for anyone except maybe SiFive's shareholders and Intel. Then again, Intel aren't very good at buying stuff, so maybe not even Intel. About the only benefit might be if Intel get to work on their compilers for RISC-V.
Then again, I couldn't see any benefit of Nvidia buying ARM either, aside from for Nvidia's shareholders.
Ian Cutress on his TechTechPotato channel had an interesting take.
SiFive is a big contributor to RISC-V so it won't stop RISC-V but when the largest contributor is swallowed up into someone like Intel/Nvidia, it can affect some things (for better or for worse).
And Intel hasn't exactly had a great track record with some of their more "diverse" acquisitions.
They do have a history involving a fair range of CPU instruction sets, some released with quite the fanfare. None really survive beyond amd64 and some clone 8051 parts afaik.
Now I can see that this could be done with the best of intentions. I'm right now coding for an Intel FPGA which has a pair of 32-bit ARM cores integrated onto it as hard cells. Intel could start doing parts with RISC-V cores instead, either at a discount or as 64 bits for the price of 32 or something. I'd be fine with RISC-V; they just need to keep that vomit-worthy x86 stuff off our design. That would be a mild poke in the eye for Nvidia, assuming the ARM sale goes ahead.
My main worry is that this would be the new Itanium. Heralded as the next great thing, and then quietly dropped into the dustbin of history along with i860 and all the rest. Someone in Intel deciding once again that they should only make x86 chips (I'm looking at you, Larrabee for GPUs, plus Atom and that funky 386 core for embedded) could axe SiFive; or, if the architects are working there partly because it isn't Intel, they might find everyone ups and quits.
Intel can't own RISC-V; they aren't really competing with it in any way other than on their ARM-based FPGA chips, so derailing RISC-V doesn't seem that useful. The likes of WD would keep tramping on with their internal designs, and compilers would keep getting updated.
I partly wonder why Intel wouldn't just design their own RISC-V from scratch. They could probably rip the ugly front end off an Atom core, and put a nice RISC-V decoder on there and instantly have the fastest RISC-V chip on the market outpacing anything SiFive have.
Don't even think this would annoy Xilinx that much. Again, AMD can just use ARM cores, and given the RISC-V ecosystem is still a bit young for embedded use, they have time to knock out a competitive part before it really matters, for a whole lot less than the $2bn this deal is supposed to cost.
What am I missing? I just don't get why this purchase makes any sense. Perhaps it is no more than a "Me too!" from Intel after Nvidia bagged ARM.
I imagine it's for the headstart. A $2bn acquisition is chump change for Intel, and if they see RISC-V becoming big, then even a couple of months' advantage could be worth far more than that further down the line; and that's before accounting for the headstart over competitors. It also means that they don't have to divert as much of their current resources or try to headhunt from the very small pool of RISC-V developers. If they put down the full $2bn, that would also kick out Qualcomm and SK Hynix, which would be an added benefit by setting back some of the competition.
All very good points, and my big worry, as is yours, is that this will be another Itanium.
Intel could box themselves in again and pursue an avenue of RISC-V that ultimately no one else is interested in.
Or they're just buying the talent to forge ahead with RISC-V on some of their systems. Interesting comment you've made about the Atom core; that could be a cool avenue for them to pursue.
My take is they're betting on NV upping ARM licence fees massively, so a RISC-V design would actually be more cost-effective.
But, what IP? RISC-V is an open source design, steered by an open committee. SiFive have a range of designs, but don't control anything as such.
SiFive are supposed to have a U84 core which looks pretty pokey, as well as the whole range leading up to that. That would fit in nicely with Intel's desire to provide foundry services, but they have a bad record for foundry services so far and I think the world could do with SiFive remaining foundry neutral really.
I'm really out of my league on this subject. Nevertheless, my "Way Back Machine" tells me that HP attempted a major push with RISC (reduced instruction set computing) several years back.
What I remember from their technical marketing literature was a major gain they expected from processing simpler instructions faster, as compared to a complex instruction set which is inherently slower.
As a simple example, I can see how loading less code into a large L3 cache, and executing it at higher speed, could make up for whatever a complex instruction set gains in exchange for its marginally slower decode.
There may also be significant gains to be derived from RISC when massive multi-core CPUs are getting crowded onto ever shrinking die sizes. Those did not exist when HP proposed a major RISC investment, as far as I can remember.
Google scientists just published a very revealing study showing how their production CPUs are producing faulty calculations. The simplicity of RISC may also have an edge as far as reliability and quality control over time are concerned.
Likewise, there may be worthwhile differences in power consumption of end products.
Forgive me if I write like an amateur, because I am one when it comes to RISC.
> this will be another itanium
... or another Optane on M.2 using only x2 PCIe lanes instead of native x4.
I get the feeling that some of Intel's products are actually designed by committees.
There was a skewed purpose for that: it was so that the Optane dimms could have a stronger market reach by being capable of both SATA and NVMe signalling. The first-generation Optane dimms were made during a time that NVMe was still quite early days, and it was more common to have a SATA-key-capable M.2 interface, or at least...cheaper.
I believe the term "Optane DIMM" refers to Intel's proprietary 3D XPoint persistent memory installed in DIMM slots, not M.2 slots. "DIMM" = dual in-line memory module. Even then, Patrick Kennedy found that "Optane DIMMs" result in down-clocking the entire DRAM subsystem to DDR4-2666: read, a huge performance penalty for large servers with large DRAM subsystems. I don't know if Patrick's finding has been replicated, but it certainly deserves serious consideration. See his website for details: a video plus lots of added comments. Sorry if this comment is off-topic (RISC).
p.s. Instead of a homogeneous DRAM subsystem, I would like to have seen a modification of the former triple-channel DRAM chipsets:
the third channel could be dedicated to Optane DIMMs which could easily host the OS partition and administrative partitions archiving one or more drive images of the Primary OS partition.
The Optane DIMMs could operate in "dual-channel" mode, using 2 x DIMM slots. Thus, a typical configuration would have a total of 6 x DIMM slots: 2 for Optanes running in dual-channel mode, and 4 for faster volatile DRAM running in quad-channel mode.
Then, it seems to me that such a third channel could support a clock speed that is different from the other 2 DRAM channels (4 x DIMM slots in quad-channel mode).
For example, using Patrick Kennedy's finding, the Optane channel could run at DDR4-2666 (for now, and hopefully improve with time), and the other 4 x DIMM slots could run at DDR4-3200+.
Call this a "heterogeneous" DRAM subsystem, instead of a homogeneous one.
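To put rough numbers on the idea: here's a back-of-the-envelope sketch (my own arithmetic, not from any vendor spec) using the theoretical peak of a 64-bit DDR4 channel, i.e. transfer rate in MT/s times 8 bytes per transfer, comparing the downclocked homogeneous case against the proposed 4+2 split.

```python
# Back-of-the-envelope peak bandwidth comparison (theoretical figures only).

def channel_gbs(mt_per_s):
    """Theoretical peak of one 64-bit DDR4 channel, in GB/s."""
    return mt_per_s * 8 / 1000  # MT/s * 8 bytes = MB/s, then scale to GB/s

# Homogeneous: Optane DIMMs drag all four DRAM channels down to DDR4-2666.
homogeneous = 4 * channel_gbs(2666)

# Heterogeneous: 4 DRAM channels at DDR4-3200, plus a separate
# dual-channel Optane bank at DDR4-2666.
dram = 4 * channel_gbs(3200)
optane = 2 * channel_gbs(2666)

print(f"homogeneous (downclocked): {homogeneous:.1f} GB/s")
print(f"heterogeneous DRAM only:   {dram:.1f} GB/s")
print(f"heterogeneous + Optane:    {dram + optane:.1f} GB/s")
```

Even ignoring the Optane channel entirely, the split configuration keeps the volatile DRAM at its full DDR4-3200 rate, which is the whole point of separating the channels.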
Because memory controllers have migrated into CPUs, a modern implementation of this triple-channel variation would now require architectural changes to those on-chip memory controllers.
We explored an older concept of this approach in a Provisional Patent Application that has now expired. Very briefly, it would add a "Format RAM" feature to a motherboard BIOS, permitting a fresh OS install to that ramdisk.
We were anticipating persistent DRAM as a feature of that expired Provisional Patent Application, although non-volatile DIMMs did not exist when we submitted that Application to USPTO.