A couple of US retailers have started taking pre-orders for 128GB and 256GB modules.
Intel selling high-end parts at a price that appears to be good value? That cannot be right...
But it's still not RAM, and that's why it costs comparatively less.
I would be interested in an article on STH in 6/12 months about whether businesses are actually using these, because everywhere I look I see Intel going "Look at this, it's amazing, you want it, we forced our partners to install a bunch more DIMM slots for this feature, dramatically increasing board cost, you definitely need it". It seems to be more like this (Freeman: Intel, BendyFace Coddlesfart: customers):
[image]
Surely they have to sell it at a decent price to offset the drawbacks, one being that it limits the number of RAM sticks you can fit, since it occupies DIMM slots without performing as well as DRAM. I'm not familiar with the data centre environment, so please tell me if this is rubbish.
Thinking it might be useful for avoiding ruinous Windows 10 updates without having to hibernate...
Server motherboards usually have a crazy number of DIMM slots. From reading around, you can partition the storage between a RAM extension and general storage, so you can extend your RAM capacity, with your actual DRAM acting as a cache for the Optane sticks. That's basically using it as swap, just fast swap.
The example I read used the managed storage to create a /dev/ block device which was then partitioned, formatted with a filesystem and then mounted. So that's an SSD, just fast.
I'm guessing you can use the managed storage to do something more interesting, if anyone ever comes up with such a thing. But I have to wonder how this all ties in with modern containerised and virtualised services, where compute and storage are supposed to drift around your personal cloud.
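For what it's worth, the "more interesting" option on Linux appears to be DAX: mount a filesystem on the pmem device with the dax option, then mmap a file with MAP_SYNC so plain CPU stores go straight to the persistent media with no page cache in between. A minimal sketch of that, assuming a made-up /mnt/pmem mount point (nothing in the article shows this):

```c
/* Sketch only: direct load/store access to a file on a DAX-mounted
 * filesystem. The /mnt/pmem path is a hypothetical mount point. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Older glibc headers may not define these; the values match the kernel's. */
#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000
#endif

int main(void)
{
    const size_t len = 4096;
    int fd = open("/mnt/pmem/example.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

    /* MAP_SYNC is only accepted on a DAX mapping, which is the point:
     * it guarantees the mapping really is backed by persistent memory. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary stores, no read()/write() syscalls involved. */
    strcpy(p, "hello, persistent memory");

    /* Flush from CPU caches before claiming it's durable. */
    msync(p, len, MS_SYNC);

    munmap(p, len);
    close(fd);
    return 0;
}
```

You still have to flush the CPU caches before the data counts as persistent, which is where libraries like Intel's PMDK come in.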
That makes sense I suppose... When it comes to my own personal cloud, they can sod right off. It's my data, I'll take responsibility for it, thanks so much. I get far more problems from cloud-based services not working properly than I've ever had with local data storage. The only thing I use it for is backup and occasionally transferring data from one PC to another.
If you consider that the SSD is the slowest part of the system, just putting the entire OS C: drive onto that 128GB module would speed things up considerably. On top of that, you wouldn't need as much RAM, now that the OS drive, and consequently the swap/virtual memory, is much faster.
It's like when we first started swapping mechanical HDDs for SATA SSDs, remember? It threw out the age-old "upgrade the RAM" paradigm of how best to lengthen the lifespan of old PCs/laptops...
I think it's more like going from an old SATA SSD to a new NVMe SSD: nice, but no biggie. The switch from mechanical storage to SSD eliminated seek times, which was revolutionary; this is really just a speed upgrade.
On top of that, in an enterprise scenario the storage is networked to get the required redundancy. Optane should work fine for that, and indeed might have an advantage once integrated into the CEPH storage pool of your hyperconverged data center, but when data isn't counted as stored until there is a redundant copy elsewhere on the network, you lose quite a lot of the advantage. I'm still expecting that the stack of NVMe flash drives you could buy for the same price would offer a better user experience, with similar aggregate performance and a lot more capacity for the money.
Intel have been too precious with this technology.
They will release it mainstream in 2030.
By then it'll be too late.
Actually, STH said they'll use the Optane DC PMMs themselves.
Find the article at STH named: Why-amd-epyc-rome-2p-will-have-128-160-pcie-gen4-lanes-and-a-bonus
It has a lot of potential. VM servers, cloud servers, and in-memory databases can all use them. Initially, the Memory Mode that expands capacity will be more popular. Then, as time passes, App Direct mode, which is persistent (Memory Mode is not), will catch up as application support increases. Since this is the first time NVM is officially available, new usage scenarios will pop up over time. As the STH article puts it: "We will add Xeon Gold with Optane DCPMM in the coming quarter as well, simply because DCPMM memory mode is very useful to us."
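To give a feel for what App Direct programming looks like, here is a minimal sketch using PMDK's libpmem, with a made-up /mnt/pmem/counter path; it's my own illustration, not something from the article:

```c
/* Sketch only: a persistent counter kept in App Direct memory via PMDK's
 * libpmem. Build with -lpmem. The file path is hypothetical. */
#include <libpmem.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (or reopen) a 4 KiB file on a DAX filesystem and map it. */
    uint64_t *counter = pmem_map_file("/mnt/pmem/counter", 4096,
                                      PMEM_FILE_CREATE, 0644,
                                      &mapped_len, &is_pmem);
    if (counter == NULL) { perror("pmem_map_file"); return 1; }

    printf("counter was %llu\n", (unsigned long long)*counter);
    (*counter)++;

    /* Flush the updated value out of the CPU caches so it is durable. */
    if (is_pmem)
        pmem_persist(counter, sizeof(*counter));
    else
        pmem_msync(counter, sizeof(*counter));

    pmem_unmap(counter, mapped_len);
    return 0;
}
```

Run it twice and the count carries over between runs, without going through the block storage stack at all; that is the part applications have to be rewritten to take advantage of.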
There's also the option to use it as a block storage device, so it acts just like a super-fast SSD. The pricing per GB is actually comparable to the Optane DC P4800X, so I think it'll displace it completely in many applications. Of course, block storage mode has much higher latency than App Direct/Memory Mode, but it's still a fraction of the latency of the P4800X.
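For the block storage mode there's nothing new to learn at all; the module just shows up as another disk. A quick sketch assuming the namespace appears as a hypothetical /dev/pmem0 (a guess at the device name, depending on how the modules are configured):

```c
/* Sketch only: treating the pmem namespace as an ordinary block device.
 * /dev/pmem0 is assumed; run with sufficient privileges. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT wants a sector-aligned buffer and sector-sized transfers. */
    const size_t sector = 4096;
    void *buf;
    if (posix_memalign(&buf, sector, sector) != 0) return 1;
    memset(buf, 0xAB, sector);

    int fd = open("/dev/pmem0", O_RDWR | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* The same pwrite/fsync dance you'd do on any SATA or NVMe drive. */
    if (pwrite(fd, buf, sector, 0) != (ssize_t)sector) { perror("pwrite"); return 1; }
    fsync(fd);

    close(fd);
    free(buf);
    return 0;
}
```

In other words, from software's point of view it behaves just like a P4800X, only sitting on the memory bus instead of PCIe.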
Last edited by DavidC1; 10-04-2019 at 07:05 PM. Reason: Grammar and spelling
It's likely not something we as consumers will get to see directly.
Remember the previous Xeon generations? They were split into -EP and -EX parts. The -EX was basically for enterprise. Being able to support more than 2P was one reason -EX existed, but the other was greater memory capacity.
Historically, a lot has been done to increase memory capacity; you may remember FB-DIMMs. With Nehalem-EX, Intel moved the buffer into a separate chip on the board, the Scalable Memory Buffer. The SMBs, as they were called, obviously added cost, power consumption, and latency; Anand's testing of the -EX found that they doubled memory latency.
Yet the SMBs lived on all the way until Skylake-SP, the first generation where the distinction between enterprise and regular servers disappeared. It's also the first generation where the Optane PMMs were supposed to be out.
The DC PMMs increase memory capacity while being cheaper and keeping the platform simpler as well. There's definitely a market for just Memory Mode, never mind the combined market with App Direct. Obviously it won't be suitable for everything.
Last edited by DavidC1; 11-04-2019 at 10:16 AM.