Need help choosing a RAID configuration for a Business Client.

thecomputerguy

I sold a dental office a server about 6 years ago and it's time to replace it. It's going to be a TON of work because they have dramatically changed their workflow and potential data requirements.

Their main program is still Eaglesoft, but they are also now doing sleep studies, 3D imaging, and TMJ work, so their current server went from 1 or 2 SQL instances to 5 or 6 now between the various programs.

The previous server was configured in a RAID1/RAID10 configuration, RAID1 being the OS volume and RAID10 being their data storage.

The RAID10 consisted of 4x600GB 10k drives + 1 hot spare, which gave them about 1.08TB of usable storage that they are slowly running out of.

I am a Dell premier partner so I will be sticking with them.

I really do love RAID10 and its performance, but at a cost of 50% of the storage, I'm wondering if I should just move to RAID5 to get more potential storage for them.

I could drop in 4x1.8TB 10k hot-swaps at about $600 apiece in a RAID10, which would net 3.6TB of storage, but that still just feels crazy low by today's standards.

I'd love to go with SAS SSDs, but equivalent-sized drives through Dell are over $1500 apiece.

I want this server covered under Dell's 5-year, 4-hour warranty, so I will not be piecing anything together myself.

Am I looking at a huge performance loss if I just go with 4x1.8TB in a RAID5, netting me 5.4TB?

Is there any benefit anymore to splitting the volumes physically into an OS/DATA split?

I will be using the best RAID card Dell offers, the PERC H740P, because that is something I don't want to skimp on.

So many ideas, just not sure where to go.
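For reference, here's the raw-capacity arithmetic behind the options above as a minimal Python sketch. The formulas are standard RAID math, the drive counts and sizes are the ones mentioned in this post, and "usable" here means raw array capacity before formatting (which is why 4 x 600GB RAID10 shows 1.2TB against the ~1.08TB formatted figure quoted earlier).

```python
# Back-of-the-envelope usable capacity for the configurations discussed in this post.
# Raw capacity only -- filesystem/formatting overhead shaves a bit off each figure.

def usable_tb(level: str, count: int, size_tb: float) -> float:
    """Raw usable capacity in TB for a single array; hot spares are not counted."""
    if level == "RAID1":
        return size_tb                   # simple mirror: capacity of one drive
    if level == "RAID5":
        return (count - 1) * size_tb     # one drive's worth of parity
    if level == "RAID6":
        return (count - 2) * size_tb     # two drives' worth of parity
    if level == "RAID10":
        return (count // 2) * size_tb    # half the drives hold mirror copies
    raise ValueError(f"unknown RAID level: {level}")

configs = [
    ("RAID10", 4, 0.6),   # current array: 4 x 600GB 10k (hot spare not counted)
    ("RAID10", 4, 1.8),   # proposed: 4 x 1.8TB 10k
    ("RAID5",  4, 1.8),   # the alternative being weighed
    ("RAID6",  4, 1.8),   # for comparison
]

for level, count, size in configs:
    print(f"{count} x {size}TB {level:<7}-> {usable_tb(level, count, size):.1f} TB usable")
```

Running it prints 1.2, 3.6, 5.4, and 3.6 TB respectively, which matches the numbers quoted in the post.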
 
Just say no to any RAID 5. It almost never ends well. Here's another discussion on the matter.

 
Do you usually just stick with RAID1 or RAID10?

What about RAID6?

Just watched the video, and yeah, RAID10 was also on my mind, but I didn't know how much more worthwhile it was compared to RAID6.
 
I'd go with one big R10 with SATA SSD if SAS SSD is above their budget.

I used to be the R1/R1 or R1/R10 guy... separating the spindles for the OS and the guest OSes 'n data. But with SSDs... I'm just loving the simplicity and flexibility of one big RAID 10.
 

You haven't noticed any performance loss with the overhead of the OS sitting on the same array? What about performance gain?

I don't plan on virtualizing the system at this point in time.
 
I use RAID 10 only these days, for performance and reliability. And also basically all SSD.

And what performance loss? With SSDs, locking a drive to just the host OS load just locks that drive's IO into the host, where it goes to waste.

And R6 with four disks?!? That's the same loss as RAID 10, 50% storage loss... so...
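To make that comparison concrete, here's a small sketch. With four disks, RAID6 and RAID10 do give the same 50% usable capacity; where they differ is which two-drive failures the array survives (the specific mirror pairing below is an assumption for illustration).

```python
# Which two-drive failure combinations a 4-disk array survives, RAID10 vs RAID6.
from itertools import combinations

drives = ["d0", "d1", "d2", "d3"]
mirrors = [("d0", "d1"), ("d2", "d3")]   # assumed RAID10 pairing for this example

def raid10_survives(failed: set) -> bool:
    # RAID10 survives as long as no mirror pair loses both of its members.
    return all(not set(pair) <= failed for pair in mirrors)

def raid6_survives(failed: set) -> bool:
    # RAID6 tolerates any two simultaneous drive failures.
    return len(failed) <= 2

for pair in combinations(drives, 2):
    failed = set(pair)
    print(f"fail {sorted(failed)}: "
          f"RAID10 {'survives' if raid10_survives(failed) else 'lost'}, "
          f"RAID6 {'survives' if raid6_survives(failed) else 'lost'}")
```

Of the six possible two-drive failures, RAID10 survives four (it loses the array when both members of the same mirror die) while RAID6 survives all six.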
 
You haven't noticed any performance loss with the overhead of the OS sitting on the same array? What about performance gain?

Since moving to SSDs... I have not noticed any performance loss. Perhaps there is on paper, if you have a stopwatch measuring to the thousandths of a second... but since SSDs are so dang fast... the advantages of having just one huge bucket to work with just take over.
 
+1 for R10 with SATA SSD

I originally struggled to get out of the mindset of SATA = consumer, SAS = enterprise... but it just doesn't hold true for SSDs.

With server-grade SATA SSDs you get reliability on par with, if not higher than, traditional spinning SAS disks. And the performance is just leagues above.

SAS SSDs are only necessary for extremely high-performance requirements, or for something dual-ported like a SAN.
 
How many drives will your server (and RAID card) support? You can do 6 x 1TB SSDs in a RAID 10 (so 3 mirrors) for 3TB of usable space, or 8 x 1TB SSDs in a RAID 10 (so 4 mirrors) for 4TB of usable space... The 1TBs are much less expensive than the bigger ones, so maybe the math works out to lower the individual capacity and increase the drive count.
 
More drives would also provide better performance, so could be a double win.
 
Usually more drives are more expensive; the larger drives should have a lower $/GB ratio. But more drives do provide better performance, and more wear-leveling surface. So when you're building an array for a server, you have to consider what you want to do, and how you're going to carve that up to optimize the platform to the role it's going to play. And given the costs of everything involved, those decisions are the single largest factor in server price.
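As a rough way to weigh that drive-count vs. drive-size decision, here's a sketch with placeholder prices (the dollar figures are made up for illustration, not Dell quotes) that compares cost per usable TB and spindle count for a few hypothetical RAID10 layouts:

```python
# Cost per usable TB for a few hypothetical RAID10 layouts.
# Prices are placeholders -- plug in real quotes before drawing conclusions.

options = [
    # (description,        drive count, size in TB, price each in USD)
    ("8 x 1TB SATA SSD",   8,           1.0,        300),
    ("6 x 1TB SATA SSD",   6,           1.0,        300),
    ("4 x 2TB SATA SSD",   4,           2.0,        550),
]

for name, count, size, price in options:
    usable = (count // 2) * size        # RAID10: half the drives hold mirror copies
    total = count * price
    print(f"{name}: {usable:.0f} TB usable, ${total} total, "
          f"${total / usable:.0f} per usable TB, {count} drives sharing the IO load")
```

With these placeholder numbers the larger drives edge out slightly on cost per usable TB, but the smaller-drive builds get more spindles sharing the IO load and more wear-leveling surface, which is exactly the trade-off described above.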
 
So when you're building an array for a server, you have to consider what you want to do, and how you're going to carve that up to optimize the platform to the role it's going to play. And given the costs of everything involved, those decisions are the single largest factor in server price.

In the SMB market, unless you are building a dedicated server for one of their LOB apps, this is the hardest decision - for me at least. The "what you want/need to do" question is almost always answered "a little bit of everything" - ha. I guess there is greater tolerance for getting this "wrong" with SMB, though. I tend to overbuild if the budget will allow; that gives more room to reallocate resources down the road if necessary. Even if you are working with a budget, you can get one CPU but make sure the MB will support 2, put in a smaller amount of RAM but make sure the MB will support more, put in a smaller number of drives but make sure the case/backplane/RAID card will support more, etc.

I have an architect client that needed a dedicated server in 2017 for their Ajera software, I think it is. Anyway, we did just this, because we knew that their aging SBS 2011 DC (new in 2013) was going to need to be replaced this year. So now, instead of getting a new server, we just need to add some RAM and spin up a new VM. Since the new server is set up as a Hyper-V host with a single VM now, we don't even need a new OS license. This is a real cost savings, and our work creating the thing can largely be done remotely with no downtime. Win-win.
 
@HCHTech, agreed... it is indeed the hard part. Which is also why Microsoft is abandoning SBS server software entirely for the cloud. Because if, instead of buying a server, you're leasing space in the cloud, you can scale up or down based on needs, and you can do that on an HOURLY basis! Which means the small business only pays for what they need, no more, no less. They can work with their technology partner to upgrade the equipment in a single evening if necessary, at the cost of a reboot.

You just can't do that with bare metal, but if the client wants bare metal... you're back to the storage question. I tend to default to a four disk RAID 10 at a minimum, but again even that can fall to a mirror if budget requires it.
 