It is basically a NUC on a dual-slot PCIe card. It can work with other cards on a backplane.
The 1970s called, they want their S-100 bus topology back.
But that's the bit that baffles me: PCIe isn't strictly speaking a bus, and if you put a CPU on a card it has x16 lanes to talk to the backplane, which then have to be multiplexed or divided up among the other slots, giving you a choice of a latency or a bandwidth penalty (rough numbers on that below). Put the CPU on the backplane, which you could call a "motherboard", and you get as many lanes to the PCIe slots as you want.
So AFAICS this is a way of limiting power, cooling and expansion, all while driving up cost. Someone please explain what I have missed?!
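To put rough numbers on that trade-off, here is a back-of-envelope sketch. It assumes PCIe 3.0 rates and a made-up four-slot backplane; neither figure comes from the article.

```python
# Back-of-envelope look at the lane-splitting trade-off described above.
# Assumes PCIe 3.0 (~0.985 GB/s usable per lane after 128b/130b encoding);
# the slot count is a hypothetical example, not from the article.

GB_PER_LANE_PCIE3 = 0.985  # approximate usable throughput per PCIe 3.0 lane

def per_slot_bandwidth(total_lanes: int, slots: int) -> float:
    """Bandwidth each slot gets if the CPU card's lanes are split evenly."""
    return (total_lanes // slots) * GB_PER_LANE_PCIE3

# CPU on a card with x16 to the backplane, shared across 4 other slots:
print(per_slot_bandwidth(16, 4))   # ~3.9 GB/s per slot (x4 each)

# CPU on the backplane ("motherboard") giving a full x16 to one slot:
print(per_slot_bandwidth(16, 1))   # ~15.8 GB/s
```

Either way, something (a PCIe switch or bifurcation) has to sit between the CPU card and the other slots, which is where the latency or bandwidth penalty comes from.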
So basically a 'blade server' that fits in/on another PC rather than in a rack...
I can see the logic of a low-power mITX server PC alongside a 'gaming PC' inside one case (case etc. allowing), and I can see the benefit of specialised processors to speed up certain functions such as encoding/decoding, but I just can't see the reason for a (low-power) PC that needs another PC to run in the first place... I might as well just use the main PC that is already on.
The only vague use case I can think of is multiple users each with their own desktop, but with multicore CPUs and virtualisation I'm not sure that's really needed either (Linus Tech Tips did a video on this and showed it's viable).
This feels like such a wasteful design compared to a blade rack with a networking backplane. Unless Intel are dramatically increasing their PCIe backplane interconnects, how many of these would you even be able to fit in a standard rack/tower chassis?
I think this might be a (perhaps misguided) attempt at cramming more CPUs into a workstation rather than a server part, actually. Or at least, I can't figure out why they'd put Wi-Fi onto the thing otherwise (Wi-Fi as a networking option for servers which have wired connections...?). I guess this makes a tiny bit of sense, kind of, in that it would let people get a few more CPUs into their workstation, although it's going to be a niche product.
It's probably most relevant to people who bought a workstation and need to add more CPU power to it but don't want to shell out for a new workstation. I mean, sure, Intel could go with the AMD strategy of letting people upgrade processors rather than introducing a new socket every processor revision, but that just wouldn't be the Intel way.
Well, it is a weird idea, but...
I am trying to find a proper use-case scenario.
So let's assume we have a base computer built around an efficient low-power ARM processor.
It will only be used for light tasks like browsing.
Then you have two expansion cards, a CPU and a GPU, that idle at ultra-low power.
Whenever you run a game or a heavy application, the cards kick in.
This makes only a little sense, but it does make some.
The biggest problem I see is that you have plenty of space for a cooling solution for the low-power CPU and not that much for the expansion cards.
Another use case:
A powerful base PC (content creator / developer).
Extension cards with CPUs acting as physically separate machines that behave like virtual ones.
An extension card with a GPU used by the PC running the VMs.
This makes sense from a financial standpoint:
less space used, less hardware overall (a single PSU / GPU).
Anyway, for this to actually be a viable PC contender it would need good hardware-virtualisation handling by the OS and software. I remember SLI / CrossFire: it gave a lot of computing power, but the restrictions were just too great.
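For what it's worth, the "GPU card used by the VMs" part is roughly what PCIe passthrough already does today. Below is a minimal sketch using the libvirt Python bindings; the VM name and the PCI address are hypothetical placeholders, and nothing here is specific to this Intel card.

```python
# Hedged sketch: hot-attach a PCIe device (e.g. a GPU on an expansion card)
# to a running VM via libvirt. "workstation-vm" and PCI address 0000:01:00.0
# are made-up placeholders; adjust to whatever lspci/virsh report.
import libvirt

HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
dom = conn.lookupByName("workstation-vm")    # find the target VM
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```

The restrictions are familiar, though: the device has to sit in its own IOMMU group to be handed over cleanly, which is exactly the kind of hardware/software support the post says would be needed.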
Did they not try this a decade ago?
Intel's throwaway-motherboard philosophy taken to its logical conclusion: just bundle the whole thing with the CPU! I'm surprised they won't allow custom PCBs; GPUs show that the board partners can do a pretty good job.
Xeon Phi already offers that, but without the USB ports (which aren't needed on a server).
Looks in PC, has a spare PCIe slot... challenge accepted...
Makes me think of an alternate dimension where Slot 1 processors became the way forward and evolved into this.
But seriously, I'm failing to understand this.
So this is how scared Intel is of AMD: their product design department is literally rubbishing itself, and this is the result!
I got a slightly different take on this review.
I read it as putting this onto a PCB which just has PCIe slots, not a normal PC motherboard.
As someone else sort of mentioned, "like a blade rack with networking backplane."
This way you could scale a system up or down as needed (and easily move cards around within the business to wherever they're needed most), with several of these working in unison via the backplane, and you could even add in some other PCIe cards (e.g. workstation video cards) for various computations or needs.
Thus making each workstation easily adaptable to present and changing circumstances.
Could something like this work for streamers - main PC for gaming and this thing used for streaming (and any required editing)?
Also wondering whether this would come with the newer PCIe 4.0 protocol for the increased bandwidth?
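On the PCIe 4.0 question, the raw per-lane rate roughly doubles over 3.0. A quick comparison with approximate figures (not taken from the article):

```python
# Approximate usable throughput per lane after 128b/130b encoding (GB/s).
PER_LANE_GBPS = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

for gen, rate in PER_LANE_GBPS.items():
    # A x4 link is the sort of share a card might get on a divided backplane;
    # x16 is the full-width case.
    print(f"{gen}: x4 ~ {4 * rate:.1f} GB/s, x16 ~ {16 * rate:.1f} GB/s")
```

So even a x4 slice of a PCIe 4.0 backplane would roughly match a x8 Gen3 link, which is why the generation matters for a shared-backplane design like this.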
What is leaving everyone confused is that it does seem like a strange product, especially as there is no mention of specific real-world applications.