This storage system will enable in-memory computing with DRAM-like performance.
If this stuff's anything like as fast as it's purported to be, and as resistant to failure as conventional DRAM, then this could be the end of conventional SSDs and C:\ drives.
Just think: having 512GB (or more) of PCM RAM with your OS, programs and games on it, and then maybe a large storage drive beyond that to swap programs in and out of PCM.
But then again, we've been hearing variants of this story for years, decades even. "This is the next big thing in memory, it'll replace DRAM and is persistent like Flash... here, see, this is a white paper all about it..." and then we hear no more about it except as a footnote on a wiki page somewhere.
Cost is another major factor; if the cost is closer to DRAM then it won't be taking over from NAND any time soon, let alone HDDs.
Not in the consumer marketplace, but this is aimed at data centres, where long-term running costs have a more significant impact on the total budget than hardware costs. I think this and other similar technologies would eventually filter down to consumer and enthusiast level, but that's not a priority for developments such as this. At the moment it is interesting to see that they are considering alternatives, and it might just end up going nowhere: they could find that costs would be too high, there could be reliability issues or manufacturing difficulties, or someone could patent part of the process and then decide to stop others from using that design, etc.
This might be a game changer. Well, in the long run, anyway. For a long time there's been a dream of removing the third layer of storage (CPU caches, DRAM and slow storage). SSDs have upped the ante, but if this is anything like they say (even though there's a long way from 50ns DRAM latency to 2ms PCM latency), it's going to change quite a few things. Intel and Micron developed something similar. HP is already working on hardware and software that works inside RAM (which will be non-volatile). You can remove quite a bit of logic from the CPU that hides latency. You can make caches a bit smaller, given that you don't have that slow storage any more. You can have a shared memory pool between GPU and CPU and work on fast memory with no worry about power loss. In the not too distant future, you'll be able to have an SoC with everything on it; the board would only be there to route signals for IO.
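A quick back-of-envelope comparison of that gap, as a minimal sketch using only the figures quoted above (the 50ns DRAM and 2ms PCM numbers are simply the ones mentioned in this post, not measured values; real PCM latencies vary a lot between devices and papers):

[CODE]
# Back-of-envelope latency gap, using only the figures quoted above.
# These are the post's numbers, not measurements; published PCM latency
# figures vary widely by device and workload.

DRAM_LATENCY_NS = 50            # ~50 ns DRAM access, as quoted
PCM_LATENCY_NS  = 2_000_000     # the "2 ms" PCM figure quoted above, in ns

ratio = PCM_LATENCY_NS / DRAM_LATENCY_NS
print(f"PCM is ~{ratio:,.0f}x slower than DRAM at these figures")
# -> PCM is ~40,000x slower than DRAM at these figures
[/CODE]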
I may be dreaming (for now), but I do look forward to:
(a) PCIe 4.0 with 16 GT/s transfer rates and 128b/130b encoding;
(b) NVMe RAID controllers with x16 edge connectors and 32GB/s raw bandwidth (rough arithmetic on that figure below); and,
(c) 2 or 4 U.2 ports on such a RAID controller connecting 2 to 4 x 2.5" NVMe SSDs.
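A minimal sketch of where that 32GB/s figure comes from, assuming PCIe 4.0 signalling at 16 GT/s per lane, 16 lanes and 128b/130b encoding (the variable names are just illustrative, and protocol overhead is ignored):

[CODE]
# Rough PCIe 4.0 x16 bandwidth estimate.
# Assumptions: 16 GT/s per lane, 16 lanes, 128b/130b line encoding.
# TLP/DLLP headers and flow control are ignored, so real payload
# throughput would be lower still.

GT_PER_SEC_PER_LANE = 16e9       # 16 GT/s, one bit per transfer
LANES = 16
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b encoding

raw_bytes_per_sec = GT_PER_SEC_PER_LANE * LANES / 8
usable_bytes_per_sec = raw_bytes_per_sec * ENCODING_EFFICIENCY

print(f"raw:    {raw_bytes_per_sec / 1e9:.1f} GB/s")    # ~32.0 GB/s
print(f"usable: {usable_bytes_per_sec / 1e9:.1f} GB/s") # ~31.5 GB/s
[/CODE]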
If modern chipsets can feed 3 or 4 high-speed video cards, we should be able to feed one such RAID controller with the required number of PCIe 4.0 lanes, even if a PLX-type bridge chip is required on-card in the interim. Then the wiring topology will be identical to what the industry has already been providing to workstations using x8 PCIe RAID controllers and SFF-8087 "fan-out" cables. Ideally, upcoming motherboards will support multiple integrated U.2 ports, replacing SATA ports 1-for-1, similar to the disappearance of PATA/IDE ribbon cables.
One of the main reasons for my "dreaming" above is the extraordinary maturity that comes with integrated chipset support for a full range of modern RAID levels, e.g. RAID-5 and RAID-6. PCIe SSDs do suffer from a single point of failure, a weakness that is not being discussed at the industry websites I visit regularly.
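To illustrate why RAID levels like RAID-5 address that single point of failure, here is a minimal sketch of the parity idea only (not any particular controller's implementation; the stripe layout and helper name are just illustrative):

[CODE]
# RAID-5 style parity: a stripe of data blocks plus one XOR parity block.
# If any single drive (block) fails, its contents can be rebuilt from the
# surviving blocks - the redundancy a lone PCIe SSD cannot provide.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks striped across three drives, parity on a fourth.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing drive 1: rebuild it from the other data blocks + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)  # -> b'BBBB'
[/CODE]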