M.2 2280 form factor drives mix Optane and Intel QLC 3D flash. Available next month.
Optane will never get proper widespread adoption while Intel keeps locking it down to Intel-based platforms, because Optane is actually great: they're awesome write caches and brilliant read caches for QD1 4K random IO, and all my ZFS systems have at least one Optane write SLOG per pool.
But the proper benefits of Optane are still locked behind Intel's compatibility requirements, meaning it won't get wholehearted adoption, especially while AMD solutions (like Threadripper and TR Pro) are smoking the workstation circuit.
This kinda feels like hybrid SSD/HDDs, where you get the worst of all worlds. Maybe I'm just being cynical, but I won't be sacrificing system performance just so I can use a particular storage technology.
I was throwing 4K/60 video files around into an editing program on my PC yesterday and the SSD was absolutely not a bottleneck. What would have been a bottleneck was buying a similarly priced Intel-based system back when I purchased, which would have nearly doubled the rendering time.
I think it's more than a bit misleading of Intel to say these are good for gamers (Hexus seems to be quoting that). Last time I checked, Optane's major advantage was in random, not sequential, performance. Most games rely on sequential read speeds for loading, and this isn't going to give you MOAR FPS! Additionally, the Optane portion in this case is only 32GB - nowhere near big enough for modern games.
What solutions like this do is give you stunning performance for one thing and then unexpected slowdowns for others (like hybrid SSD/HDDs). Personally, I prefer a fairly middling and predictable responsiveness, not awesome speed and then wondering what's happening when the slowdown comes. The other question is whether the Optane is going to be limited by bus speeds, etc.
There are definitely specific use cases, but for the throwing around of large files, doesn't a high spec, PCI-e 4.0 TLC SSD work better?
Tabbykatze, I'd be interested in what you think having obviously adopted it.
PCIe 3.0 x4 interface
x2 for Optane
x2 for QLC NAND
They are not shared.
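For a rough sense of what that x2/x2 split means in bandwidth terms, here's a back-of-envelope sketch. It assumes the usual ~985 MB/s of usable throughput per PCIe 3.0 lane; these are estimates, not measured H20 figures.

```python
# Back-of-envelope bandwidth for the H20's x2/x2 lane split, assuming
# roughly 985 MB/s of usable throughput per PCIe 3.0 lane (8 GT/s,
# 128b/130b encoding). Estimates only, not measured H20 figures.
PCIE3_LANE_MB_S = 985

optane_lanes = 2
nand_lanes = 2

print(f"Optane cache: ~{optane_lanes * PCIE3_LANE_MB_S / 1000:.1f} GB/s")
print(f"QLC NAND:     ~{nand_lanes * PCIE3_LANE_MB_S / 1000:.1f} GB/s")
# Each half tops out around ~2 GB/s, well short of what a full x4 TLC
# drive can sustain - which is the bus-speed concern raised above.
```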
The previous version was more than twice the cost of a decent mid-range PCIe 3.0 x4 TLC-based SSD.
No thanks, Intel. I'll pass on that one.
"In a perfect world... spammers would get caught, go to jail, and share a cell with many men who have enlarged their penises, taken Viagra and are looking for a new relationship."
WRT the gaming side of things, a lot of modern AAA games actually stream assets (that's why we're getting Microsoft's DirectStorage API, which is really cool), so random IO is becoming far more important than sequential bandwidth.
I think straight Optane drives are excellent, a diamond-in-the-rough technology, because their extremely low latency QD1 writes (and, to a lesser extent, reads) are perfect for caching systems. The big issue is that getting them at any size that makes sense literally breaks the bank.
With ZFS using a SLOG, the cache is flushed to the drives every 10 seconds, so on a 10Gbps link you only need a 16GB drive to suit a 10GbE read/write iSCSI target. I can tell you now, though, that running an Optane SLOG for a VMware iSCSI target really cleaned up the VMs' storage performance. It's a great technology and completely destroys the proposition of a RAID card with a memory module and BBU.
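As a rough sketch of that sizing maths, using the 10-second flush interval and 10GbE figures quoted above (actual ZFS transaction group timings vary by configuration):

```python
# Rough SLOG sizing: the SLOG only ever needs to absorb about one flush
# interval's worth of incoming writes. Interval and link speed are the
# figures quoted above, not verified defaults for any given setup.
LINK_GBPS = 10          # 10GbE link, assumed saturated
FLUSH_INTERVAL_S = 10   # seconds between flushes to the pool (as quoted)

bytes_per_second = LINK_GBPS * 1e9 / 8          # ~1.25 GB/s of writes
slog_bytes_needed = bytes_per_second * FLUSH_INTERVAL_S

print(f"SLOG must absorb ~{slog_bytes_needed / 1e9:.1f} GB per interval")
# ~12.5 GB, so a 16GB Optane module comfortably covers a saturated 10GbE link.
```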
Tbh, I think Optane caches should certainly start being used on PCIe 4.0 SSDs for writes, and somewhat for reads, but then again SSDs these days normally have much better writes than reads until they're full anyway. So I guess, outside niche areas, I'm not really sure why on earth I'd buy an H20 over an XPG SX8200 or an equivalent Sabrent/Samsung.
Again, this feels that it's just yet another case of Intel making a solution for a problem that doesn't really exist for users.
Reminds me a little of SATA SSDs Vs NVMe. To the average end user, the difference just isn't there. Once you get away from the terrible random HDD performance, there's little to be gained once the bottleneck is removed.
As for streaming games, I considered this. Either the game can stream assets at an appropriate rate or it can't (see Boiling Point when you drove too fast on a 5,400rpm HDD - no such problems on the faster spinning rust of the time). To pitch that rate of asset streaming at anything more than a low-end SSD would ruin the game experience for a significant number of people. Even pitching it at a mid-range SSD would exclude 50% of people from playing your game without jerks while asset loading caught up. You certainly couldn't have a situation where only the most insanely fast drives allowed adequate performance.
Once you get over the random read threshold required to stream assets as they're needed, adding more storage chooch won't do anything, because you're already loading assets on demand. Loading more, or loading earlier, won't add to the frame rate. If anything, it would use more RAM and CPU resources handling the extra loaded doodads.
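As a toy illustration of that threshold argument (all the rates below are made-up placeholders, not benchmarks):

```python
# Toy model of asset streaming: the game requests assets at a fixed rate,
# so any drive that meets that rate is "fast enough" and extra throughput
# goes unused. All numbers are hypothetical placeholders.

def keeps_up(required_mb_s: float, drive_mb_s: float) -> bool:
    """True if the drive can stream assets as fast as the engine asks for them."""
    return drive_mb_s >= required_mb_s

GAME_STREAM_RATE = 150.0  # MB/s the engine actually requests (hypothetical)

for name, rate in [("5,400rpm HDD", 80.0),
                   ("SATA SSD", 500.0),
                   ("PCIe 4.0 NVMe", 7000.0)]:
    print(f"{name}: keeps up = {keeps_up(GAME_STREAM_RATE, rate)}")
# The SATA SSD and the NVMe both print True: past the threshold the extra
# throughput sits idle rather than turning into more FPS.
```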
I'd love an Optane drive of sufficient size to use properly, if only for the epeen and the big numbers. But before I'd spend on that, I could do with 16GB more RAM, a new GPU, a new sound card (so I can switch to Linux) and so on.
Mixed experience with the original H10 drive - I got one in an HP desktop with a mechanical drive. It certainly boosts bootup considerably, but do any large file transfers and boom. Plus, every so often it seems to just stop working and I have to disable and re-enable the H10 drive, and let's not talk about Windows updates <eek>.
I've got a load of old 16GB Optane drives that came in HP machines and were used as cache for a spinner. Useless things.
I can absolutely tell you that the difference between NVMe and a SATA SSD is almost the same as SSD against HDD.
A game that pushes the extremes for asset streaming is Arma 3: the faster your storage device in all areas, but especially random IO, the further you can max out your view distance. The same goes for games like MS FSX.
The other thing you've forgotten is that yes, you can have good random IO performance, but you also need to maximise bandwidth so you can push the data you've retrieved through the pipeline more rapidly. On top of that, the controllers in NVMe drives are far superior to SATA SSD controllers, because they're built to a much higher specification to cope with the higher bandwidth.
The leap from HDD to SSD was huge and, as you say, results in fast boot times, but moving from a Samsung 850 Pro to an MP600, while the maths would certainly suggest a leap, for my usage I just don't see it. Then again, I surf, work, YouTube and play Warzone on my PC, so increased sustained read/writes just don't benefit my use case. You, doing video editing, will see it because you're working with large files and chucking them about, but for the average user it's a lesser leap.
I think it's diminishing returns in play here. It is as good as the jump from HDD to SSD, but other factors come into play, which means it doesn't appear as much of a leap. I certainly wouldn't want to go back to SATA on my work and music PCs. The wife's gaming rig now has an NVMe drive, but it's not made half as much difference.
Old puter - still good enuff till I save some pennies!