Phase-change memory could prove the faster successor to flash.
Regardless of current usefulness, I'm always a fan of innovation, not least because it provides more options (and competition). Plus, it's nice to see IBM is still doing significant research - I'd not heard much from them recently, and they have been hugely instrumental in the development of modern computing.
Not knowing anything about this tech, isn't it also possible that PCM will continue to be developed, in which case there might still be some benefit to using it? I'm kind of disappointed to see the word "enterprise" mentioned so frequently, as I would have thought increased longevity and reduced latency would be of interest to most, if not all, users of memory-resident storage. The article also doesn't expect products featuring PCM before 2016, by which time it's possible that innovations in flash will have negated the benefits of PCM altogether.
I'm sure I saw a report recently listing the top 5 patent filers in the US*. IBM was at the top, and it was pointed out that if you added #2-#5 together it was still less than their filings. It's a company I've got a good deal of respect for on a technical level, even though I work for a competitor.
(* disclaimer: yes, I know that means it was the US patent system - hopefully as few of their granted patents as possible were at the "fastening shoelaces" level of stupidity)
Enterprise = profit. They're not going to spend millions on the research unless they can be sure that big companies needing reliable and redundant server farms/clusters are going to buy them by the truckload. So yes, enthusiasts like us will be slavering over it and no doubt it'll find its way into the market, but really we're not the target market.
For consumers, current SSDs are more than reliable enough. To actually hit the write limit, you'd have to be continuously writing and re-writing data for a looong time. A 64GB drive written at 80MByte/s would supposedly burn out at around 50 years, and more recent drives are rated even higher. Given that most people switch out hard drives within 5 years when they change computers, or perhaps every 10 for people who migrate them, this isn't an issue and likely never will be. Even if you're using your disk as the OS drive, it's not a problem.
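The arithmetic behind estimates like this is easy to sketch, though the answer is extremely sensitive to the assumed program/erase (P/E) cycle rating - the 3,000 cycles below is my assumption (a common ballpark for consumer MLC flash), not a figure from this thread, and I'm also assuming perfect wear levelling with a write amplification of 1:

```python
# Back-of-envelope SSD endurance model. The P/E cycle count is an
# assumed parameter; real drives vary enormously.

def drive_lifetime_seconds(capacity_bytes, pe_cycles, write_rate_bps):
    """Time to exhaust every cell's program/erase budget, assuming
    ideal wear levelling and a write amplification factor of 1."""
    total_writable = capacity_bytes * pe_cycles
    return total_writable / write_rate_bps

# 64 GB drive, 3,000 assumed P/E cycles, continuous 80 MB/s writes:
seconds = drive_lifetime_seconds(64e9, 3000, 80e6)
print(f"{seconds / 86400:.0f} days of continuous writing")
```

Under those particular assumptions the drive wears out in weeks, not decades - which just shows how much the headline lifetime figures depend on the endurance rating and on how bursty real-world writes actually are.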
And that's just fine for home use - but obviously for active archival work you may well want/need more time (and peace of mind).
Unfortunately, these figures are (as ever) misleading. For example, it is possible to quickly (within weeks or months) kill an SSD by filling it to 98% of its capacity, and then writing/deleting/writing/deleting to the last few GB repeatedly. And it's not actually that unusual a scenario - especially with smaller drives, where after OS and a few games you're running very full capacity-wise, leaving the last little bit for you to juggle files on as you need to, or the OS to use for temporary files, or some such thing. Overprovisioning can only help so much (in this case by extending the e.g. 2GB of free space to e.g. 10GB and therefore buying five times as much time as if there was no overprovisioning).
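The overprovisioning arithmetic in that nearly-full worst case works out like this; the 3,000 P/E cycle figure is again an assumption of mine:

```python
# In the nearly-full-drive scenario, writes are confined to the free
# blocks, so only those cells absorb the wear. 3,000 P/E cycles is an
# assumed endurance rating, not a measured one.

def writable_before_wearout_gb(free_space_gb, pe_cycles=3000):
    # Total data you can rewrite into the free area before its cells die.
    return free_space_gb * pe_cycles

print(writable_before_wearout_gb(2))   # 2 GB free
print(writable_before_wearout_gb(10))  # 10 GB free: 5x the rewrites
```

So expanding 2GB of usable free space to 10GB buys exactly five times as many rewrites before wear-out, as described above - helpful, but it doesn't change the underlying problem.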
Don't drives try to re-arrange static data on the drive so that doesn't happen?
I believe they do, yes - but I thought it was through a more subtle method. I don't think they actually reshuffle physical data around; rather, they prioritise writing to cells which have had fewer write cycles, which averages things out over time but isn't much use if static data is sitting in the least-worn cells and blocking them.
I am happy to be proved wrong on this though as things may have changed in the last year or so.
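For what it's worth, the behaviour described above is usually called dynamic wear levelling; many controllers also do static wear levelling, which does migrate cold data off lightly-worn blocks. Here's a toy sketch of the dynamic version - the block structure and function names are invented for illustration, not any real FTL's API:

```python
# Toy model of dynamic wear levelling: each new write goes to the free
# block with the fewest erase cycles. Blocks pinned by static data are
# never touched, which is exactly the limitation described above.
from dataclasses import dataclass

@dataclass
class Block:
    erase_count: int = 0
    free: bool = True  # False = holding (possibly static) data

def pick_block(blocks):
    """Choose the least-worn free block for the next write."""
    return min((b for b in blocks if b.free), key=lambda b: b.erase_count)

def write(blocks):
    b = pick_block(blocks)
    b.erase_count += 1  # each rewrite eventually costs an erase cycle
    return b
```

Run enough writes through this and the wear spreads evenly across the free blocks - but a block occupied by static data keeps its low erase count forever and never shares the load, which is why static wear levelling exists.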