But it hobbles the demo comparison system with SATA-interfaced NAND SSDs.
Game changer
Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network: Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
Laptop: Dell Precision 5510 || Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive
I dunno, there's something off about those charts. The SSD-based one was clearly transferring way faster than the 284MB/s when it started. You'd need to analyse why the transfer slowed down so significantly to identify what was bottlenecking it.
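If someone had logged the transfer, a quick sketch like this would show where the knee is. All the sample numbers below are made up for illustration; they're not from Intel's demo:

```python
# Hypothetical sketch: spotting where a copy slows down, given a log of
# (elapsed_seconds, total_bytes_copied) samples. The data is invented.
samples = [
    (0, 0),
    (5, 2_500_000_000),   # ~500 MB/s at the start
    (10, 5_000_000_000),
    (15, 6_500_000_000),  # knee: speed drops here
    (20, 7_900_000_000),  # ~280 MB/s afterwards
]

for (t0, b0), (t1, b1) in zip(samples, samples[1:]):
    rate_mb_s = (b1 - b0) / (t1 - t0) / 1_000_000
    print(f"{t0:>2}s-{t1:>2}s: {rate_mb_s:.0f} MB/s")
```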
It's somewhat pitiful for them to compare it to a tablet or smartphone booting, though. Those things are painfully slow to start if they're not in a standby state (which a PC can already do). Let's hope Optane is a bit nippier than that! I fear that BIOS/UEFI time will end up being a more significant hog, plus Windows login time.
Well, tbh, I doubt anyone (with a few brain cells) will upgrade from a NAND-based SSD to XPoint for Windows boot-up times!
This is for sheer throughput.
A lot of TLC drives have a small pseudo-SLC buffer that lets a certain amount of writes proceed at a much higher speed than writing straight in TLC mode allows, and the controller then moves the buffered data into TLC in the background. Because the buffer is limited in size, once you exceed a certain amount of writes you fill it and start writing straight to TLC, which shows up as the drop in transfer speed in that picture (there's a rough sketch of the effect below the link).
E.g. from: http://www.anandtech.com/show/8520/s...0gb-ssd-review
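To make that concrete, here's a minimal model of how an SLC-cached write plays out. The buffer size and both speeds are assumptions for illustration, not any particular drive's specs:

```python
# Minimal model of a pseudo-SLC write cache; all figures are assumed.
SLC_BUFFER_GB = 8.0     # assumed pseudo-SLC buffer size
SLC_SPEED_MB_S = 500.0  # assumed burst speed into the buffer
TLC_SPEED_MB_S = 280.0  # assumed direct-to-TLC write speed

def write_time_s(transfer_gb: float) -> float:
    """Seconds to write transfer_gb, starting with an empty buffer."""
    fast_gb = min(transfer_gb, SLC_BUFFER_GB)
    slow_gb = transfer_gb - fast_gb
    return fast_gb * 1000 / SLC_SPEED_MB_S + slow_gb * 1000 / TLC_SPEED_MB_S

for size_gb in (4, 8, 16, 32):
    t = write_time_s(size_gb)
    print(f"{size_gb:>2} GB -> {t:6.1f}s (avg {size_gb * 1000 / t:.0f} MB/s)")
```

Small transfers never leave the fast buffer, which is why the drop-off only shows up on big sustained writes.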
So it's only really of benefit if you're writing very large files or making very large transfers? From the graphs you've posted it looks like you'd need a sustained transfer of over ~8GB to hit that point. Certainly far from impossible, but potentially quite niche. I wonder how durable it is from a backup/archive point of view; that's one use case where I can see it making a real difference...
That drop-off only really applies to TLC drives with an SLC cache; some TLC drives don't have any cache (and hence are consistently slower with writes) and MLC drives tend to be consistently fast.
If Intel had chosen MLC drives for the test it wouldn't have shown that sort of drop-off, so they're showing a bit of a worst-case result for NAND. However, I suspect they're going to be somewhat limited by PCIe bandwidth over Thunderbolt, so while it's a fun demo to show on stage I don't think it really does a good job of showing the differences, because even with PCIe NAND SSDs you can hit 2GB/s.
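Back-of-envelope on that ceiling, assuming Thunderbolt 3 carrying PCIe 3.0 x4 (an assumption about the demo rig, not a confirmed detail):

```python
# PCIe 3.0 x4 payload ceiling, bits -> bytes via 128b/130b encoding.
GT_PER_S_PER_LANE = 8   # PCIe 3.0 transfer rate per lane
ENCODING = 128 / 130    # 128b/130b line encoding
LANES = 4

payload_gb_s = GT_PER_S_PER_LANE * ENCODING * LANES / 8
print(f"PCIe 3.0 x4 payload ceiling: ~{payload_gb_s:.1f} GB/s")
# Protocol overhead (and Thunderbolt's own PCIe tunnelling budget) eats
# into that, so a ~2 GB/s PCIe NAND SSD is already close to the limit.
```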
AFAIK XPoint is going to sit somewhere between NAND and DRAM in cost and density, so it's not likely to be aimed at archival storage, at least initially. However, it supposedly has large benefits over NAND in terms of latency, not just copy speed, potentially adding another layer to the storage hierarchy. We probably won't see it replacing NAND SSDs any time soon; it seems Intel are thinking the same, given they're developing 3D NAND and XPoint in parallel.
That is more the point here.
There are lots of ways to drive flash, and with NAND being so much cheaper than XPoint you get a lot of channels of NAND to RAID together for the same money. That is why these new memory technologies have to be on DDR3 or DDR4 channels to differentiate themselves. But that doesn't stop it being a read-mostly medium: at memory-channel speeds, burning through a million writes to a DDR4 stick isn't going to take long.
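To put a number on that, a rough sketch assuming a million-write endurance per location and a ~100ns write path (both assumed figures for illustration):

```python
# Rough arithmetic behind "a million writes isn't going to take long".
ENDURANCE_WRITES = 1_000_000  # assumed write endurance per location
WRITE_LATENCY_NS = 100        # assumed time per write on a memory channel

seconds = ENDURANCE_WRITES * WRITE_LATENCY_NS / 1e9
print(f"Hammering one hot location: worn out in ~{seconds:.1f}s")
# Without aggressive wear-levelling, a hot location on a DDR4 channel
# could hit a million-write limit in a fraction of a second.
```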
It looks like a nice technology, but the price they are quoting just seems too high. Lower the price, get it in tablets and phones.