Samsung also reveals its 3-bit per cell 3D V-NAND is weeks away from shipping.
I think we still have computers in the office doing non-critical tasks (disc duplication, legacy email etc) that probably have less CPU grunt (and RAM) than a modern SSD controller. Yep, just checked: the Samsung controller is a triple-core 300MHz ARM9, and the office machines are lesser-specced. I'm not suggesting we could suddenly turn our SSDs into desktop computers (Pi in the sky?), but that's a lot of CPU oomph just sat there idling most of the time.
The original iPhone used a 400MHz single-core ARM CPU, for reference...
why not run apps on your SSD controller?
Ughhh, I can think of several reasons why: security, tampering and stability being somewhere near the top of that list. This is the sort of reason why phones have completely isolated baseband processors, and yet another isolated system in the SIM card.
Why not? Because if you have hundreds of drives, they are in a storage rack on the other side of a NAS from where the main compute is taking place. You want to send a compute task over a 10GbE network aggregate, through the network switch, through the NAS, through its RAID controller, and over Fibre Channel to the drive? Lol. Too messy for the data centre, too dull for home use.
Hard drives have ARMs in them too. In fact, there are probably several ARM cores inside your desktop, laptop or server already. Any of them could run apps. They probably shouldn't, though, until we're much better at ensuring isolation of critical software and hardware components from non-essential stuff.
Why not move the CPU, GPU, cache, DRAM and SSD into modules (a mix of all of them stacked together in a single module)? Need more? Then just add whichever module suits your needs. A network of modules, with distributed computing, a distributed file system, etc. Data storage and movement is the priority, so move the processing closer to the memory.
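To make "move the processing closer to the memory" concrete, here's a minimal toy sketch of the idea, not any real smart-SSD API; the StorageNode class and run_kernel method are invented for illustration. The point is how much less data crosses the interconnect when the node filters locally:

```python
# Toy near-data processing sketch: instead of pulling a whole dataset
# across the "network" to filter it, ship the filter to the node that
# holds the data and pull back only the results.
# StorageNode and run_kernel are hypothetical names, not a real API.

class StorageNode:
    """Pretend module with its own storage and a small local core."""
    def __init__(self, blocks):
        self.blocks = blocks  # local "flash": list of byte strings

    def read_all(self):
        # Host-side filtering: every block crosses the interconnect.
        return self.blocks

    def run_kernel(self, predicate):
        # Near-data filtering: only matching blocks cross the interconnect.
        return [b for b in self.blocks if predicate(b)]


node = StorageNode([bytes([i]) * 4096 for i in range(256)])
wanted = lambda block: block[0] % 64 == 0

pulled = node.read_all()
host_side = [b for b in pulled if wanted(b)]
near_data = node.run_kernel(wanted)

print("bytes moved, host-side filter:", sum(len(b) for b in pulled))
print("bytes moved, near-data filter:", sum(len(b) for b in near_data))
assert host_side == near_data
```

With 256 blocks of 4KiB and a predicate that matches 4 of them, host-side filtering moves ~1MiB while near-data filtering moves ~16KiB, which is the whole appeal of putting compute next to the storage.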
But that is what we have now. My CPU is a thin square module with lots of pins on it; the GPU is a long, thick rectangular module that plugs into a PCIe socket. The motherboard connects them together in a star configuration with the CPU in the middle.
Unless you limit the amount of memory available by moving it into the CPU package, the RAM is directly wired, so it's already as close to the CPU as you can get.
The one and only big winning factor of the PC has been its ability to evolve how things plug into it. Evolution is a powerful driver, so you have to think very carefully about why things are how they are before you dismiss what we have now. For example, in the 386/486 days the PC did indeed have cache plugged into the motherboard, sometimes with the option of expanding it, so it was modular just as you suggest. The CPU has since evolved to bring that cache closer, first with Slot 1 and Slot A, and then onto the CPU die itself, because that makes it faster, and a small fast cache is generally better than a big slow one.
The further you have to move data, the more power is required to overcome interference and other losses. Integrate as much as you can and share the work with asymmetric multiprocessing.
Turn an SSD controller into an accessible CPU, add a GPU, and stack the RAM and SSD memory on top.
You can think of it like stacking a CPU core, part of a GPU, some RAM and some SSD into a single chip placed on a PCB that goes in a slot. Then integrate more logic on the edge of the RAM so that streams of instructions can be processed there by simple cores (e.g. ARM), so you don't need a separate CPU/GPU. The motherboard then has multiple slots, or stacking ability, to extend the processing and storage resources.
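As a rough sketch of that "motherboard full of compute+storage modules" idea, assuming nothing beyond the description above (the Module and Motherboard classes are made up for illustration): each module bundles local memory with a simple core, and plugging in another module grows storage and compute together.

```python
# Toy model of the modular idea: each Module pairs local memory with a
# simple core; the Motherboard scatters a kernel across modules and
# gathers the partial results. Module/Motherboard are hypothetical.
from concurrent.futures import ThreadPoolExecutor

class Module:
    def __init__(self, data):
        self.local_ram = data  # data lives with the core that processes it

    def run(self, kernel):
        # The "simple core" runs the instruction stream over local RAM only.
        return kernel(self.local_ram)

class Motherboard:
    def __init__(self):
        self.slots = []

    def plug_in(self, module):
        # Adding a module extends processing and storage at the same time.
        self.slots.append(module)

    def map_reduce(self, kernel, combine):
        with ThreadPoolExecutor(max_workers=len(self.slots)) as pool:
            partials = list(pool.map(lambda m: m.run(kernel), self.slots))
        result = partials[0]
        for p in partials[1:]:
            result = combine(result, p)
        return result

board = Motherboard()
for chunk in range(4):  # four modules, each holding a slice of the data
    board.plug_in(Module(list(range(chunk * 1000, (chunk + 1) * 1000))))

total = board.map_reduce(kernel=sum, combine=lambda a, b: a + b)
print(total)  # 0 + 1 + ... + 3999 == 7998000
```

Each kernel only ever touches the RAM sitting in its own module, which is the near-memory part; the host just combines small partial results.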