Range of "World's Fastest Hard Drives" will not be further developed, as SSDs take over.
I see why it's the last generation... 315 to 215MB/s
That's excellent for a mechanical drive though!
Old computer - still good enough till I save some pennies!
But for the same price you could get at least half-a-dozen bigger consumer drives and put 'em in raid 0+1, get 15TB worth of raid0 at 500MB/s on the inside track and 750MB/s at the edge.
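A rough back-of-envelope for that suggestion, as a Python sketch. The drive capacity and inner/outer transfer rates are assumptions picked to show how figures like 15TB and 500/750MB/s could arise from a six-drive RAID 0+1 set, not specs of any particular drive:

```python
# RAID 0+1 back-of-envelope: a 3-wide stripe, each member mirrored (6 drives total).
# All per-drive figures below are assumptions, not measurements.
STRIPE_WIDTH = 3        # drives striped (the RAID 0 part)
MIRRORS = 2             # each stripe member mirrored (the RAID 1 part)
DRIVE_TB = 5.0          # assumed consumer drive capacity
INNER_MBPS = 167        # assumed per-drive inner-track sequential read
OUTER_MBPS = 250        # assumed per-drive outer-track sequential read

usable_tb = STRIPE_WIDTH * DRIVE_TB      # mirrors add redundancy, not capacity
inner = STRIPE_WIDTH * INNER_MBPS        # sequential reads scale with stripe width
outer = STRIPE_WIDTH * OUTER_MBPS

print(f"{STRIPE_WIDTH * MIRRORS} drives -> {usable_tb:.0f}TB usable, "
      f"~{inner}MB/s inner track, ~{outer}MB/s at the edge")
```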
This is what I use as my main storage drive. As long as the game install is under 12GB I'll load it into a ramdrive and play it from there with near-zero load times; if it's over 12GB I'll copy it to SSD and take the load-time penalty whenever I want to play. At least until I double my RAM to 48GB, that is.
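A minimal Python sketch of that workflow, assuming a mounted RAM drive and an SSD games folder. The paths and the 12GB cutoff are placeholders for whatever a given setup actually uses:

```python
# Stage a game install onto a RAM drive if it fits, otherwise onto the SSD.
# Paths and the size limit are illustrative assumptions.
import shutil
from pathlib import Path

RAMDRIVE = Path("R:/")       # hypothetical mounted RAM drive
SSD = Path("D:/Games/")      # hypothetical SSD target folder
LIMIT_GB = 12                # what fits in spare RAM alongside the OS

def dir_size_gb(path: Path) -> float:
    """Total size of a directory tree in GB."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file()) / 1024**3

def stage_game(install: Path) -> Path:
    """Copy the install to the RAM drive if small enough, else to the SSD."""
    target_root = RAMDRIVE if dir_size_gb(install) <= LIMIT_GB else SSD
    target = target_root / install.name
    shutil.copytree(install, target, dirs_exist_ok=True)
    return target
```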
These drives would usually be deployed by the tray-full, set up as RAID 6 stripe sets. If you are reading from 20 drives at once then throughput isn't a problem, but latency still is, and with a 15K drive you have about half the average wait (compared with a typical 7,200rpm consumer drive) for your sector to appear under the heads.
However, with an SSD there is no rotational latency, which is why I struggle to see why there is still a market for these things.
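The rotational-latency point is easy to put numbers on: average rotational latency is half a revolution, so it depends only on spindle speed. A quick Python check:

```python
# Average rotational latency = half a revolution, in milliseconds.
def avg_rotational_latency_ms(rpm: int) -> float:
    return 0.5 * 60_000 / rpm   # 60,000 ms per minute / revolutions per minute

for rpm in (7_200, 10_000, 15_000):
    print(f"{rpm:>6} rpm: {avg_rotational_latency_ms(rpm):.2f} ms")

# ~4.17 ms at 7,200rpm, ~3.00 ms at 10K, ~2.00 ms at 15K.
# An SSD has no platter to wait for, so this term is simply zero.
```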
I think the only reason there's still a market is that a dead SSD is completely dead, with the data almost entirely unrecoverable, whereas with mechanical drives the data can be recovered with a lot less difficulty and expense. Though, as you say, these drives are used in massively redundant arrays, so the chance a drive would ever need to be recovered like that is remote.
If the data matters, it will not only be on backup but also automatically duplicated to a remote disaster-recovery server. Given there's near-line storage too, there might be a copy on slower bulk drives in a lower tier as well. If someone is dropping a million or so on their storage (the traditional users of these drives), they don't muck about.
Google's white paper suggests that redundant arrays, RAID, RAID rebuilding etc. are all hogwash.
Just use JBOD arrays and dumb drives (mechanical or SSD), then duplicate or triplicate the data, even onto slower tiers or near-line storage. Way cleaner management, faster performance, better data security and lower cost.
That's why I don't use any RAID devices either.
Lower cost?
Say you store on RAID 6 with a 6-data + 2-parity stripe. To store your six disks' worth of data, you have to buy another two disks.
Now switch to cloud-style storage. To get the same redundancy level as RAID 6 (data in three places), your six disks' worth of data now has to live on 18 disks.
Big companies don't buy one RAID stripe; they buy by the rack-full and spend millions. So a more realistic scenario is that instead of spending £800K on disks, you need £1.8M worth of drives. That's for the same redundancy level, so "just buy consumer drives" works equally well in both setups.
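The overhead arithmetic behind those numbers, sketched in Python. The £800K is just the example figure above; everything else follows from a 6+2 stripe versus three full copies:

```python
# Storage overhead: RAID 6 (6 data + 2 parity) versus three full replicas.
DATA_DISKS = 6

raid6_total = DATA_DISKS + 2        # 8 disks bought for 6 disks of data
replica_total = DATA_DISKS * 3      # 18 disks bought for 6 disks of data

raid6_overhead = raid6_total / DATA_DISKS        # ~1.33x
replica_overhead = replica_total / DATA_DISKS    # 3x

raid6_spend_k = 800                              # £K, example figure from the post
replica_spend_k = raid6_spend_k / raid6_overhead * replica_overhead

print(f"RAID 6 overhead : {raid6_overhead:.2f}x  (~£{raid6_spend_k:.0f}K)")
print(f"3-copy overhead : {replica_overhead:.2f}x  (~£{replica_spend_k:.0f}K)")
# Same raw data, same redundancy level: ~£800K of disks becomes ~£1.8M.
```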
Then there's performance. In the cloud system you find the nearest server and read the data from its disk. If the cloud is in one room and you have the latest protocols, you might read from three servers at once, tripling your throughput. In the RAID array, you read from all six drives in the stripe at the same time.
Google are different: they want their data in at least three countries, and individual task throughput isn't that important. But most companies aren't Google.