Alder Stream SSDs will have double the density and offer better than double the performance.
Not that good considering it's supposed to be a cache between RAM and drives; if it's still based on NAND flash, the write longevity's not going to be up to much. I'd rather wait for MRAM; at least that has the same kind of rewrite longevity as DRAM, even if it's slower, and it's still pretty persistent.
Are there any advantages to SSDs using Optane? I thought that NAND memory hit the SATA3 barrier a long time ago...
Oh well, this slide (bandwidth/capacity) tells me nothing. Does it mean that Optane persistent memory will have 40% bigger capacity and only 15% worse bandwidth?
These PR slides...
Does this work with non-Intel CPUs already?
That was only a problem with the DIMMs, AFAIK. There are plenty of SSDs out there, and they should work on any computer, should you want blistering storage on your NVMe-modified Raspberry Pi 4.
OFC you would probably still be better off buying a truckload of conventional SSDs for the same money to give similar aggregate performance and way more storage.
Yay for the march of tech improvements. I would like to know the 4k QD1 numbers though.
Optane is a different kettle of fish: it is much better at small 0.5K-4K files than at the sort of multi-megabyte transfers of media files / photos / game files that fall better to traditional SSDs.
Windows and its make-up have a lot of small files, and that is where Optane shines for a very quick, responsive Windows system, but it lacks the raw performance lesser SSDs can give on larger file transfers. At those small file sizes Optane can be some 5x the speed of the best SSDs, but 4x slower on the larger files.
So horses for courses, so to speak: in an ideal world, Optane as the boot drive, then something like a Samsung Evo as a 2nd drive.
I am sure it will cost millions of pennies too.
I think Optane SSDs will continue to be "niche" products, chiefly
because of their superior latencies but high costs, with not much else
to set them apart from what's now available from the competition.
The evolution of "4x4" bifurcation using full x16 PCIe expansion slots
has enabled direct paths to multi-core CPUs, eliminating the need for
dedicated Input-Output Processors and "hardware" RAID controllers.
Just using mathematical probability, the flood of multi-core CPUs
has inevitably resulted in one or more relatively "idle" CPU cores
which can then be scheduled by the OS to process one or more x4 devices
wired directly to an x16 PCIe expansion slot.
Now that PCIe 4.0 has become standard, the efficiency of each x4 M.2 drive
will also depend more and more on the efficiencies of the chipset and device drivers.
We did a low-cost experiment recently, and installed a Highpoint SSD7103
in an empty PCIe 3.0 expansion slot in a refurbished HP Z220 workstation.
The RAID-0 array consisted of 4 x Samsung 970 EVO Plus M.2 SSDs.
[link not allowed here.]
All components were off-the-shelf, and the SSD7103 was the single
most expensive component. The only challenge was switching the BIOS to UEFI,
but we knew about that requirement in advance, and it only required
one change in the default factory BIOS settings.
The SSD7103 does not use "4x4" bifurcation, because it is designed
to support booting from industry-standard x16 PCIe 3.0 expansion slots
where 4x4 bifurcation is not supported.
We migrated Windows 10 to that RAID-0 array with the latest licensed
edition of Partition Wizard, which has worked wonderfully for us
every time.
On that system with all stock settings, CDM measured READs at 11,687.72 MB/second:
[link not allowed here]
It boots up really fast too.
Bottom Line: that experiment was a roaring success. If I had the money,
I would love to try a similar experiment with 4 of the latest Optane M.2 drives,
particularly if they support PCIe Gen4.
The thing is, Windows itself is just awful at small-file handling. That's one of the reasons that servers tend to run Linux, but with Linux the caching of filesystems and their metadata in RAM is so good that for most uses a standard SSD is just fine.
Optane is good for those cases where your data is too big to reasonably cache but you want more speed, so you can store or cache on Optane, which, whilst way more expensive than flash in conventional SSDs, is much cheaper than RAM. That is a pretty narrow use case.
> Optane is good for those cases where your data is too big to reasonably cache but you want more speed, so you can store or cache on Optane, which, whilst way more expensive than flash in conventional SSDs, is much cheaper than RAM. That is a pretty narrow use case.
Excellent point. It also appears to me that Optane R&D took longer
and cost more than Intel had projected. Then,
Intel's initial products were too small and only used x2 PCIe 3.0 lanes,
further restricting M.2 performance.
I can't prove this, but the outcome appears to have resulted from
a lack of sufficient understanding among Intel's product marketing
and advertising groups. That was the impression I was left with.
The latter groups did predict performance metrics that did not materialize,
and those disappointments were not quickly forgotten.
Then there was that devastating "dangle the dongle" review by Allyn Malventano,
during his evaluation of Intel's VROC implementation. Intel should have hired
Allyn during their initial Optane R&D!
Moreover, the last three years have also demonstrated very stiff competition
from Samsung et al., chiefly the Samsung 970 Pro NVMe M.2 models. The latter
came very close to saturating x4 PCIe 3.0 lanes @ ~3,500 MB/second READs.
(8,000 / 8.125) x 4 = 3,938.5 MB/second MAX HEADROOM
(PCIe 3.0 uses 128b/130b encoding: 130 line bits carry 16 payload bytes, i.e. 8.125 bits/byte)
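For anyone who wants to re-run that arithmetic, here is the same headroom calculation as a short Python sketch (the Gen4 line simply assumes the doubled 16 GT/s signalling rate with the same encoding):

```python
# Back-of-the-envelope check of the headroom math above.
gen3_mbit_per_lane = 8_000   # PCIe 3.0 signals 8 GT/s = 8,000 Mbit/s per lane
bits_per_byte = 130 / 16     # 128b/130b: 130 line bits per 16 payload bytes = 8.125
lanes = 4

per_lane_mb = gen3_mbit_per_lane / bits_per_byte            # ~984.6 MB/s per lane
print(f"x{lanes} Gen3 max: {per_lane_mb * lanes:,.1f} MB/s")       # 3,938.5 MB/s

# PCIe 4.0 doubles the signalling rate (16 GT/s) with the same encoding:
print(f"x{lanes} Gen4 max: {2 * per_lane_mb * lanes:,.1f} MB/s")   # 7,876.9 MB/s
```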
It should soon be very interesting to benchmark Optane devices that
also support the PCIe 4.0 standard. Intel could, in theory, expand
that small "niche" by upping capacity (which appears to be happening)
and maintaining the same or similar low latencies at Gen4.
Lastly, the prices will need to come down for a lot more potential buyers and
developers to take notice, and have a real incentive to put Gen4 Optanes
into production applications. Intel's competitors in this storage sector
are certainly not standing still.
Now that he's working at Intel, look for more exhaustive benchmarks
from Allyn Malventano.
Another experiment that I believe Intel can afford to do,
whether or not the results inspire Intel to offer a retail product,
is a variation on triple-channel DIMM slots.
The Optane DIMMs can be dedicated to the third channel,
and those Optane DIMMs can be formatted with a "Format RAM"
option in the UEFI/BIOS during a fresh OS install.
Years back we submitted a Provisional Patent Application
for a "Format RAM" option to enhance standard BIOS subsystems
(before UEFI became standard). But, that concept assumed
volatile DRAM hosting a memory-resident OS.
Allyn Malventano knows about that Patent Application,
because I shared it with him many months ago, and
his comment about "hybrid" RAM subsystems showed me
that he understood the basic idea completely.
Because Optane DIMMs are non-volatile, they appear to offer
a superior alternative for such a memory-resident OS.
Intel's experiment with such a setup should compare
the performance that can be achieved by hosting an OS
on a Gen4 "4x4" add-in card, e.g. by populating same
with a variety of different M.2 NVMe drives --
to enable apples-to-apples comparisons.
My reason for suggesting the latter was the CDM measurement
we did with G.SKILL DDR3-1600 in our refurbished HP Z220:
the Highpoint SSD7103 was actually FASTER than DDR3
by a factor of ~2-to-1, on certain metrics.
I'm going to address all your posts, not just this one.
1. RAID setups increase latency, and with a drive whose latency is as low as Optane's, the increase is significant. The oft-touted small-file random reads increase in latency by 1.5-2x after RAID. Sure, the sequential numbers increase with RAID 0. That's a trade-off the user has to be willing to accept.
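As a rough illustration of that trade-off, here is a small Python sketch using the figures from this post (the ~10us Optane QD1 read latency and the 1.5-2x RAID multiplier); the single-drive sequential figure is just an assumed placeholder:

```python
# Illustrative numbers only, taken from this post where given.
single_latency_us = 10.0     # ~10 us Optane NVMe QD1 read latency (per this post)
raid_overhead = (1.5, 2.0)   # RAID latency multiplier quoted above
drives = 4
seq_single_mbps = 2500.0     # assumed sequential read speed of one drive

seq_raid0 = seq_single_mbps * drives                   # sequential scales with stripes
lat_lo, lat_hi = (single_latency_us * k for k in raid_overhead)

print(f"Sequential: {seq_single_mbps:,.0f} -> ~{seq_raid0:,.0f} MB/s with RAID-0")
print(f"QD1 latency: {single_latency_us:.0f} -> ~{lat_lo:.0f}-{lat_hi:.0f} us with RAID-0")
```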
2. You are saying the performance claims did not materialize. They claimed 1000x over NAND. Actually, the NVMe (PCIe) devices fall way short of that; it's about a 10x gain (10us vs 100us).
How about the DIMMs?
Optane DIMMs are at 180-300ns latency. That's a 300-500x improvement against the fastest NAND SSDs, and maybe 1000x compared to SATA ones. Even against NVMe Optane SSDs it's a 30-50x improvement. The performance claims are met.
Of course, some less knowledgeable folk may have expected a 1000x increase in bandwidth. Not even HBM2 offers that much.
The latencies are the important part, and those sorts of gains are transformational, not just an evolution. Your 7GB/s PCIe 4.0 NVMe SSD still has 100us read latency. So what? Data still needs to be loaded into RAM first. Your SDRAM from 1995 can do things your 2020 SSDs can't do because of none other than latency.
Optane DIMMs will be fast enough to have a system where the OS is installed on them and boot times are near zero (hundreds of milliseconds), since the OS does not need to be loaded into memory first.
3. You cannot compare a PCIe RAID setup to Optane because PCIe and NAND will NEVER reach latencies possible with Optane on a DIMM interface.
DRAM has latencies of under 100ns. RAM-tweaked Intel platforms can reach 50ns. That's 1500-2000x better than the 980 Pro NVMe. Benchmarks like CrystalDiskMark are suited to storage, not for hyper-fast DRAM supplements.
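To put those tiers side by side, here is a short Python sketch that turns the latency figures quoted in this post into ratios (approximate quoted numbers, not measurements):

```python
# Approximate figures quoted in this post; not measurements.
latency_ns = {
    "DRAM (tweaked)":  50,
    "DRAM (typical)":  100,
    "Optane DIMM":     180,       # low end of the 180-300 ns range
    "Optane NVMe SSD": 10_000,    # ~10 us
    "NAND NVMe SSD":   100_000,   # ~100 us
}

nand = latency_ns["NAND NVMe SSD"]
for name, ns in latency_ns.items():
    print(f"{name:>16}: {ns:>7,} ns   ({nand / ns:,.0f}x vs NAND NVMe)")
```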
> You cannot compare a PCIe RAID setup to Optane
I'm not sure to what "comparison" you are referring there.
Can you clarify, please?
If the "comparison" is between the PCIe bus and DIMM channels,
then yes, I agree that such a comparison is "apples-to-oranges".
The clock rates are different, and the number of parallel wires
is also different, for starters.
I was simply referring to a "4x4" x16 add-in card
populated with 4 x Optane M.2 drives e.g.
ASRock Ultra Quad M.2 card and Hyper Quad M.2 card,
assuming a chipset that also supports bifurcation.
(Highpoint's SSD7103 is not an apples-to-apples comparison,
because it hosts its own "PLX-type" switch, which is
intended to eliminate the need for bifurcation support
in a chipset.)
Other "4x4" AIC vendors are now announcing Gen4 support
and the PCIe 5.0 standard isn't too far off.
Yes, Optane DIMMs are also a useful place to host a ramdisk
and standard file system.
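As a minimal sketch of that ramdisk idea, assuming the Optane DIMMs run in App Direct mode behind a DAX-mounted filesystem (the /mnt/pmem path is hypothetical), plain mmap gives load/store access from Python:

```python
# Minimal sketch: file-backed "ramdisk" on persistent memory, assuming the
# Optane DIMMs are exposed via a DAX-mounted filesystem at /mnt/pmem
# (hypothetical path -- adjust to your mount point).
import mmap
import os

path = "/mnt/pmem/scratch.bin"
size = 64 * 1024 * 1024          # 64 MiB working area

fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, size)           # size the backing file
buf = mmap.mmap(fd, size)        # with DAX, loads/stores hit the DIMMs directly

buf[:11] = b"hello pmem\n"       # ordinary memory writes
buf.flush()                      # persist the written range
buf.close()
os.close(fd)
```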
And, I continue to think a useful experiment would be
a "Format RAM" feature in a UEFI BIOS, e.g. dedicating
Optane DIMMs to a third channel in a triple-channel
memory subsystem hosting a memory-resident OS.
Many thanks for your valuable comments above!
p.s. The next comment was not mine:
> Optane is good for those cases where your data is too big to reasonably cache but you want more speed, so you can store or cache on Optane, which, whilst way more expensive than flash in conventional SSDs, is much cheaper than RAM. That is a pretty narrow use case.
FYI
Googling "optane dimm performance"
found 510,000 results today.
So, there is plenty of reading available
for anyone interested in such questions as the
number of "early adopters" -- particularly
large data centers / cloud service providers.
I was saddened to read how much money Intel
has lost on Optane during the preceding 3 calendar
years.