Need help with optimal server RAID configuration ...

thecomputerguy

I need to put a server together for a client with a minimum of 1TB of storage ... the more the better. I also need to take disaster recovery into account.

I was thinking of two different configurations:

1.) 3 x 600GB 10k drives [RAID 5] (1.2TB total usable). May also include an additional 600GB 10k drive as a hot spare.

2.) 2 x 500GB SAS drives [RAID 1] + 2 x 1TB SAS drives [RAID 1] (500GB + 1TB usable)

So ... capacity is in the same ballpark, disaster recovery should be easier with the RAID 1 + RAID 1 setup, but the RAID 5 setup should perform better and be better bang for the buck?
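Running the numbers quickly (just the standard RAID capacity math, nothing vendor-specific - a rough sketch, not anything from Dell's configurator):

```python
# Back-of-envelope check of the two proposed layouts.
# Drive sizes in GB; this is just the standard RAID capacity math.

def raid5_usable(drives_gb):
    # RAID 5 loses one drive's worth of capacity to parity.
    return sum(drives_gb) - max(drives_gb)

def raid1_usable(pair_gb):
    # A RAID 1 mirror pair yields one drive's worth of capacity.
    return min(pair_gb)

config1 = raid5_usable([600, 600, 600])                          # 3 x 600GB RAID 5
config2 = raid1_usable([500, 500]) + raid1_usable([1000, 1000])  # RAID1 + RAID1

print(f"Config 1 (RAID 5):        {config1} GB usable, survives 1 drive failure")
print(f"Config 2 (RAID1 + RAID1): {config2} GB usable, survives 1 failure per mirror")
# Config 1 (RAID 5):        1200 GB usable, survives 1 drive failure
# Config 2 (RAID1 + RAID1): 1500 GB usable, survives 1 failure per mirror
```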

All of this is going in a Dell T320 with either the H710 or H710p RAID card

Edit: I want Dell to be responsible for whatever drives they put in this system, which means going with SSDs isn't really an option when a 500GB SSD from Dell costs you a newborn child.

The server will be covered by Dell for 5 years of same-day service.
 
RAID 5 is only "faster" with reads; it is considerably slower than RAID 1 when it comes to writes.
It is the easiest to expand...you can feed it one drive at a time...and if the RAID controller you're using has online RAID expansion, simply "stretch" the volume...then rescan the disks in the host OS and expand the partition there.
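Rough numbers on why the writes suffer, if it helps - the per-spindle IOPS figure below is just a ballpark assumption for a 10k SAS drive, and the penalty factors are the textbook ones (2 I/Os per write for mirrors, 4 for RAID 5's read-modify-write):

```python
# Textbook write-penalty model for small random writes.
# ASSUMPTION: ~140 IOPS per 10k SAS spindle (ballpark only; real workloads vary).
SPINDLE_IOPS = 140

def effective_write_iops(n_drives, write_penalty):
    # Each logical write costs `write_penalty` physical I/Os:
    # RAID 1/10 -> 2 (hit both mirrors); RAID 5 -> 4 (read data + parity, write data + parity).
    return n_drives * SPINDLE_IOPS / write_penalty

print(f"2-drive RAID 1:  ~{effective_write_iops(2, 2):.0f} write IOPS")
print(f"3-drive RAID 5:  ~{effective_write_iops(3, 4):.0f} write IOPS")
print(f"4-drive RAID 10: ~{effective_write_iops(4, 2):.0f} write IOPS")
# 2-drive RAID 1:  ~140 write IOPS
# 3-drive RAID 5:  ~105 write IOPS
# 4-drive RAID 10: ~280 write IOPS
```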

Cheapest route....RAID1/RAID1. If you have to expand...replace one drive...let the mirror rebuild, then replace the other drive...let the mirror rebuild. Probably 1 hour for each rebuild. Then expand the RAID volume, then expand in the host OS.
Middle route...RAID1/RAID 5. Easy to expand...just keep plopping new drives in...start with 3 drives...then if you need to grow, toss in a 4th drive...toss in a 5th drive...6th drive...etc. RAID 5 keeps getting faster read performance with more drives.
Best performance route...RAID1/RAID 10. If they need to grow...well...toss in a pair of drives at a time...and once you fill all 8 bays of the cage, you've gotta replace drives one at a time. Can be done in a day (I just did that 2 weeks ago) without impacting the client.
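If it helps to visualize the growth paths, here's the plain capacity math as drives get added (illustrative only - assumes 600GB drives throughout):

```python
# Usable capacity as drives are added (illustrative; assumes 600GB drives throughout).
DRIVE_GB = 600

def raid5_usable(n_drives):
    return (n_drives - 1) * DRIVE_GB    # one drive's worth lost to parity

def raid10_usable(n_drives):
    return (n_drives // 2) * DRIVE_GB   # drives go in pairs; half is usable

for n in range(3, 9):
    line = f"{n} drives -> RAID 5: {raid5_usable(n)} GB"
    if n % 2 == 0:
        line += f", RAID 10: {raid10_usable(n)} GB"
    print(line)
# 3 drives -> RAID 5: 1200 GB
# 4 drives -> RAID 5: 1800 GB, RAID 10: 1200 GB
# ...
# 8 drives -> RAID 5: 4200 GB, RAID 10: 2400 GB
```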
 
I wouldn't go with RAID5 these days; I'd be much more comfortable with RAID6, particularly as you get up to larger-capacity drives. If you're looking at the 10k drives for responsiveness, be sure you're also spec'ing 2.5" drives - latency will be much better; IIRC, latency on a 2.5" drive is generally comparable to the next speed up in 3.5" drives.

Make sure you specify either 8 hot-swap 3.5" drives or 16 hot-swap 2.5" drives. The 8-drive config is the same price as the 4-drive one that's "Dell Recommended", and it gives you room to throw in hot spares or otherwise expand.

If you go with the 1TB 7.2k drives, go ahead and pay the extra $15/drive for the self-encrypting ones. For the smaller drives, it might make sense to go with 5 x 500GB SAS 7.2k in a RAID6, which gives you 1.5TB usable and the ability to lose 2 drives for $1300, vs 1.2TB and the ability to lose 1 drive with the 600GB 10k drives. Not sure how much latency difference there'd really be, particularly if you can use SSD caching (see below).
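Putting those two side by side (the $1300 is the figure above for the 5-drive option; the RAID 5 line just reuses the OP's 3 x 600GB config, no price attached):

```python
# Usable space vs. fault tolerance for the two options being compared.
# (label, drive_gb, drive_count, parity_drives)
options = [
    ("5 x 500GB 7.2k RAID 6", 500, 5, 2),
    ("3 x 600GB 10k RAID 5 ", 600, 3, 1),
]
for label, size_gb, count, parity in options:
    usable_tb = (count - parity) * size_gb / 1000
    print(f"{label}: {usable_tb:.1f} TB usable, tolerates {parity} drive failure(s)")
# 5 x 500GB 7.2k RAID 6: 1.5 TB usable, tolerates 2 drive failure(s)
# 3 x 600GB 10k RAID 5 : 1.2 TB usable, tolerates 1 drive failure(s)
```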

Will this be running 2012R2 on the metal or VMware? Either way, consider adding a drive-bay SSD or a PCIe SSD and using it for caching - e.g. https://redmondmag.com/articles/2013/10/28/ssd-write-back-cache.aspx

Edit: Also, if you have remote access to the network or just machines on it, I really like the iDRAC remote management capabilities - I'm not sure if you get remote console in Express or only in Enterprise, but you want it. Just 1-2 visits saved over the lifespan of the server likely pays for the upgraded capability.
 