Yep, my bad - I meant striping these gizmos wouldn't make much sense. As to the RAID1 comment (mirroring), I always assumed that it was there to give some protection against a disk failing prematurely.
My VM disk is RAID1'd - mainly to give me a window to back the data off somewhere else if one disk fails, but also because the RAID controller is clever enough to split read requests between both disks, which is desirable if you're running two VMs and they're both doing some disk IO.
Getting back to the M.2 device, I'm sure that there's someone who'd RAID these "just because they can".
I've complained about it too many times to count, but RAID only protects against a drive physically failing, not any number of other causes of data loss. The reason it annoys me is the number of times people have mistakenly relied upon it (for important uni work, for example), only to lose something really important in a way RAID doesn't help with, often after ignoring my advice to back up properly.
Yep, I remember being in a meeting (some time ago) where the architect had to very carefully explain to the assembled pointy-haired bosses that - for example - if you deleted a file on a mirrored system then all the mirror meant was that you'd deleted two copies of it rather than just one.
And as such, yes, they DID need to "waste" (huh) money on taking separate backups!
Isn't this just an mSATA drive running on the mPCIe controller?
For those who say it's going to be more expensive than corporate PCIe cards, it wouldn't be, because the PCIe variants are the equivalent of buying 4 or 5 SATA SSDs and RAIDing them.
What are the small file transfers like compared to the standard SATA SSDs? It seems that for 4K transfers etc. at low queue depths there hasn't been that much improvement for a few years now. When I looked up RAID-0ing them, it appeared that it actually made things worse. I didn't look into it too closely, so maybe it was just a few outliers, but it certainly seemed like we'd hit a bottleneck in that respect already.
It feels like we will have to wait for a new technology like MRAM before we see significant gains in that area.
Intel's NGFF (Next Generation Form Factor) should deal with this, given that it will fit into a PCI-E slot (with PCI-E x2 and x4 depending on the model, so 2GB/s or 4GB/s).
So, the standard exists, and I'd suspect that the PCI-E drives Samsung is making and Apple is using in the Mac Pro and MacBook Air are NGFF, but we'll have to wait for iFixit to take them apart to find out.
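Those 2GB/s and 4GB/s figures roughly follow from the per-lane rate. A back-of-envelope sketch, assuming PCI-E 3.0's 8 GT/s per lane with 128b/130b encoding (an assumption on my part; real-world throughput is lower once protocol overhead is counted):

```python
# Rough PCI-E bandwidth estimate. Assumptions: PCI-E 3.0 at 8 GT/s per lane
# with 128b/130b line encoding; ignores packet/protocol overhead.
GT_PER_LANE = 8.0       # gigatransfers per second, per lane (PCI-E 3.0)
ENCODING = 128 / 130    # fraction of raw bits that carry data

def lane_bandwidth_gbs(lanes: int) -> float:
    """Raw one-direction bandwidth in GB/s for a given lane count."""
    return lanes * GT_PER_LANE * ENCODING / 8  # divide by 8 bits per byte

print(f"x2: {lane_bandwidth_gbs(2):.2f} GB/s")  # ~1.97 GB/s
print(f"x4: {lane_bandwidth_gbs(4):.2f} GB/s")  # ~3.94 GB/s
```

So the quoted round numbers are the theoretical ceilings, give or take encoding and protocol overhead.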
I was under the impression that inside the SSDs, smaller bunches of RAM were RAID-0'd together anyway. I can't really see any case for anything other than RAID5/6 on a normal system.
Intel isn't the first company I'd rely on for a fair/open standard which benefits consumers TBH.
The NAND banks are often arranged into channels on the controller to allow faster, parallel access to data without needing to increase bus clocks. It's not the same as RAID0 but I suppose it's comparable at a high level.
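That channel parallelism can be pictured as round-robin striping of logical pages, much like RAID0 stripes blocks across disks. A toy sketch (the channel count and page mapping here are illustrative assumptions, not any real controller's layout):

```python
# Toy model of an SSD controller striping logical pages across NAND channels,
# round-robin, RAID0-style. CHANNELS is an assumed figure for illustration.
CHANNELS = 8

def channel_for_page(logical_page: int) -> int:
    """Round-robin: consecutive logical pages land on consecutive channels."""
    return logical_page % CHANNELS

# A sequential read of 8 consecutive pages touches each channel exactly once,
# so all 8 transfers can proceed in parallel rather than queueing on one bank.
print([channel_for_page(p) for p in range(8)])  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The high-level resemblance to RAID0 is just that: the controller also handles wear levelling and ECC per channel, which plain RAID0 doesn't.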
The controllers, or even the interface, are possible bottlenecks for performance; RAID-0 could theoretically increase performance despite that. That's not to say it would be worthwhile, of course.