An interesting thought. RAID-Z and Btrfs usually have an advantage during a rebuild in that they only write where there is actual data to rebuild, whereas RAID-5 rewrites every block. On a shingled drive, where multiple tracks may have to be rewritten for every write, I wonder if that advantage is lost.
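Just to put rough numbers on that trade-off (all assumptions mine, not benchmarks - in particular the 3x SMR penalty for scattered writes is purely a guess):

def rebuild_tb_written(capacity_tb, used_fraction, data_only, smr_penalty):
    # TB physically written to the replacement drive during a rebuild.
    # RAID-5 rewrites every block, but sequentially (which SMR copes with);
    # RAID-Z/Btrfs rewrite only allocated data, but scattered writes on SMR
    # can trigger multi-track read-modify-write cycles.
    logical_tb = capacity_tb * used_fraction if data_only else capacity_tb
    return logical_tb * smr_penalty

CAP = 10.0  # hypothetical 10TB drive
for used in (0.3, 0.7):
    raid5 = rebuild_tb_written(CAP, used, data_only=False, smr_penalty=1.0)
    raidz = rebuild_tb_written(CAP, used, data_only=True, smr_penalty=3.0)
    print(f"{used:.0%} full: RAID-5 ~{raid5:.0f}TB, RAID-Z ~{raidz:.0f}TB written")

On those made-up numbers, once the pool is more than about a third full the RAID-Z advantage is gone.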
I'd disagree. The problem with SSDs is that they're too expensive to replace HDDs across the enterprise. How many enterprise environments are running on nothing but flash? The likes of Pure Storage would like to change that, but there's a place for spinning rust as part of a tiered storage strategy, and there likely will be for a number of years to come.
Hadn't read up on shingled magnetic recording (SMR) when I made that comment. I was just thinking about what happens if this does something like 170MB/s sequential (like the 6TB Helium drives): 10TB = 10,000,000MB, and 10,000,000 divided by 170MB/s = 58,824 secs = 980 min = 16.3 hrs. And with a failure on a 2-drive setup, that's ~16hrs during which the other drive could also fail. Too risky for me.
If the SMR on these drives means the performance is a lot less than 170MB/s, then that just makes it even riskier.
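Here's that maths as a quick Python check, with a couple of pessimistic guesses at what SMR might do to the sequential rate (the 170MB/s is the assumed 6TB Helium figure from above):

def rebuild_hours(capacity_tb, throughput_mb_s):
    # Best case: the rebuild sustains full sequential throughput throughout.
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / throughput_mb_s / 3600

for speed in (170, 100, 60):  # 100 and 60 MB/s are guesses, not specs
    print(f"{speed}MB/s -> {rebuild_hours(10, speed):.1f} hours for 10TB")

That gives roughly 16.3, 27.8 and 46.3 hours respectively, so the risk window only grows if SMR slows things down.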
It depends on what the setup is for. For a home setup, 16 hours on the off chance that a drive goes faulty isn't a problem: just leave it running overnight or whilst at work. 2x 10TB drives instead of 20x 2TB drives or 10x 4TB drives surely means a lot less power, and a 2-bay NAS is usually a lot cheaper than even a 4-bay one.
He was talking about 12x 10TB drives in a rack at the end of a 1Gb WAN link, so I think we can be fairly sure it is a commercial setup.
OTOH, if you can stuff a couple more in the rack to have RAID 6 and a hot spare, then you should be OK. From hearing the storage guys at work talking, I would expect 12 bays to be a single RAID 6 stripe, though our NAS boxes do require big stripe sets to keep up with us, so perhaps we are a bit of a special case.
I hope if the helium escapes from these drives, you can take them to the local balloon shop for a refill!
I'm not saying this is going to be the same since they separated, but under Hitachi these were the least reliable drive manufacturers going. Even ignoring the IBM Deathstar fiasco, if a laptop hard drive failed and you took it out, it was a rarity if it didn't have a Hitachi sticker on it.
I'm sure their reliability is better now (it could barely be worse), but their history is atrocious. An SSD made of chocolate would have a fighting chance.
Have to say, in my own experience Hitachi drives have been my most reliable drives (touch wood, lol); Western Digital were my worst experience.
That would be hydrogen... I don't think a hydrogen-filled drive would last long.
Helium is inert.
Try too expensive and too small in capacity for many scenarios. SSDs are fine for all but the very highest write-IOPS jobs, and even then, if you don't care about the longevity of the drive, you can just burn through and replace them - hence RAID, because HDDs were never that reliable either.
Point me at a 4-8TB SSD which is anywhere near the same cost/GB as an HDD. Not all workloads need SSD performance and high write endurance, e.g. cold storage in cloud datacentres. Consider also a typical 2U SAN, DAS or NAS chassis, e.g. the HP P2000, which you can buy with a backplane to fit either 12x 3.5" or 24x 2.5" drives. Using 3.5" drives you can still achieve higher capacity (if that is your primary goal) because 2.5" drives top out at 2TB.
12x 6TB, leaving 1 hot spare and using RAID 6, means 9x 6TB with current drives, so 54TB.
24x 2TB, leaving 2x hot spares (same spare:used ratio) and RAID 6, means 20x 2TB, so 40TB.
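Same sums as a little Python snippet, in case anyone wants to try other chassis combinations (spare and parity counts taken straight from the figures above):

def usable_tb(bays, drive_tb, hot_spares, parity_drives=2):
    # Usable capacity of a single RAID 6 stripe after spares and parity.
    data_drives = bays - hot_spares - parity_drives
    return data_drives * drive_tb

print(usable_tb(bays=12, drive_tb=6, hot_spares=1))   # 54 (12x 3.5" 6TB)
print(usable_tb(bays=24, drive_tb=2, hot_spares=2))   # 40 (24x 2.5" 2TB)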