What would you recommend for server running out of hard drive space?

Magic Miguel

It's a Dell T100 tower running SBS 2008 that hosts their SharePoint and Exchange. The main storage is 2x 300GB 15K RPM SAS drives in RAID 1. The person who originally set up the server partitioned the RAID volume into an 80GB C: drive and a 192GB data drive. The C: drive is completely full, and there's only about 50GB left on the D: (data) partition.

Ideally I would upgrade the two drives to 600GB 15K RPM SAS drives to double their space, but that comes at an extremely high cost: around $732.00 for the two drives at my cost without markup, plus the labor involved in imaging the drive, installing the two new drives, setting up the RAID, and re-applying the image onto the new hard drives.

Is there anything else you would recommend for someone who is conscious of their budget?
 
SBS 2008 has multiple issues with log files consuming extreme amounts of disk space over time if you don't clear them out manually. SBS 2011 "fixed" that issue by adding scheduled tasks that run monthly to remove the log files automatically.

http://blogs.technet.com/b/sbs/arch...he-c-drive-in-small-business-server-2008.aspx

Read through that; I bet a combination of large SharePoint database files & logs plus WSUS log files is taking up a good portion of that 80GB of disk space.

http://www.sbsfaq.com/?p=1597
http://www.sbsfaq.com/?p=1598

Those two links have a batch file to remove the log files more easily.
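
If you'd rather not click through, here is a minimal sketch of the kind of cleanup those batch files perform on SBS 2008. It is not the exact script from the links; the folder paths are the usual default locations and are assumptions you should verify on the server (and back up) before deleting anything:

```bat
@echo off
rem Rough cleanup sketch for SBS 2008 -- verify every path before running.
rem Deletes *.log files older than 30 days from the usual offenders.

rem SBS 2008 component logs (monitoring, console, backup, etc.)
forfiles /p "C:\Program Files\Windows Small Business Server\Logs" /s /m *.log /d -30 /c "cmd /c del @path"

rem WSUS / Update Services log files
forfiles /p "C:\Program Files\Update Services\LogFiles" /s /m *.log /d -30 /c "cmd /c del @path"

rem IIS logs quietly grow to gigabytes over a few years
forfiles /p "C:\inetpub\logs\LogFiles" /s /m *.log /d -30 /c "cmd /c del @path"
```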
 
SBS on a single spindle = horrible performance.
Pick up another pair of 300 gig drives...and create a second RAID 1 volume out of them. So now you have a server with 4x drives...a pair of RAID 1 volumes. RAID 1 for a C drive, and a second spindle..another RAID 1...for the data volume D drive.
Now using an image cloning program clone the D drive off of the first RAID 1 volume...onto the entire second RAID 1 volume.

Now stretch that 80 gig system volume to utilize the entire original RAID 1 volume.

Add a pagefile to that second spindle.
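
To make the "stretch" step concrete: once the data has been cloned off to the new pair and the old D: partition on the first RAID 1 has been deleted (so the free space sits directly behind C:), diskpart on Server 2008 can extend the system volume online. A minimal sketch, with the volume letter as an assumption to double-check first:

```bat
@echo off
rem Sketch only -- assumes the old D: partition on the original RAID 1 volume
rem has already been deleted, leaving contiguous unallocated space right after C:.
(
  echo select volume C
  echo extend
  echo exit
) > "%TEMP%\extend_c.txt"

diskpart /s "%TEMP%\extend_c.txt"
del "%TEMP%\extend_c.txt"
```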
 
SBS on a single spindle = horrible performance.
Pick up another pair of 300 gig drives...and create a second RAID 1 volume out of them. So now you have a server with 4x drives...a pair of RAID 1 volumes. RAID 1 for a C drive, and a second spindle..another RAID 1...for the data volume D drive.
Now using an image cloning program clone the D drive off of the first RAID 1 volume...onto the entire second RAID 1 volume.

Now stretch that 80 gig system volume to utilize the entire original RAID 1 volume.

Add a pagefile to that second spindle.

I like that idea.

Hank Arnold (Microsoft MVP)
 
And this solution may seem cheesy to some guys, but Buffalo has some external hard drives that I think run about $300 for 4TB. They're USB, but they have two 2TB drives inside, and you can arrange them so they mirror each other. I think you can hot-swap them as well; they support USB 3.0 and come with a monitoring utility. If your budget is tight, it might be worth a quick look.
 
SBS on a single spindle = horrible performance.
Pick up another pair of 300 gig drives...and create a second RAID 1 volume out of them. So now you have a server with 4x drives...a pair of RAID 1 volumes. RAID 1 for a C drive, and a second spindle..another RAID 1...for the data volume D drive.
Now using an image cloning program clone the D drive off of the first RAID 1 volume...onto the entire second RAID 1 volume.

Now stretch that 80 gig system volume to utilize the entire original RAID 1 volume.

Add a pagefile to that second spindle.

This is exactly what I usually end up doing.
 
Thanks guys. Didn't want to leave you in the dark. I did what jzukerman recommended, but saw that the main WSUS database, which was over 20GB, had never been moved even though the rest of the WSUS files/location had been. I made the move and that freed up the 20GB. I plan to give them the estimate for the additional 300GB drives as well. Thanks again!
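
For anyone hitting the same thing, the move boils down to detaching SUSDB from the Windows Internal Database instance, moving the files, and re-attaching them from the new path. A rough sketch of that kind of move follows; the instance pipe, service names, and file paths are the usual SBS 2008 defaults, so treat them as assumptions, and take a backup before touching anything:

```bat
@echo off
rem Sketch only -- relocate the WSUS database (SUSDB) from C: to D: on SBS 2008.
rem Assumes sqlcmd is available and WSUS lives in the Windows Internal Database;
rem verify the paths below and back up before running.
set WID=np:\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query

net stop WsusService
net stop W3SVC

sqlcmd -S %WID% -E -Q "ALTER DATABASE SUSDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
sqlcmd -S %WID% -E -Q "EXEC sp_detach_db 'SUSDB'"

mkdir "D:\WSUS\SUSDB"
move "C:\WSUS\SUSDB\UpdateServicesDbFiles\SUSDB.mdf" "D:\WSUS\SUSDB\"
move "C:\WSUS\SUSDB\UpdateServicesDbFiles\SUSDB_log.ldf" "D:\WSUS\SUSDB\"

sqlcmd -S %WID% -E -Q "CREATE DATABASE SUSDB ON (FILENAME='D:\WSUS\SUSDB\SUSDB.mdf'), (FILENAME='D:\WSUS\SUSDB\SUSDB_log.ldf') FOR ATTACH"
sqlcmd -S %WID% -E -Q "ALTER DATABASE SUSDB SET MULTI_USER"

net start W3SVC
net start WsusService
```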
 
SBS on a single spindle = horrible performance.
Pick up another pair of 300 gig drives...and create a second RAID 1 volume out of them. So now you have a server with 4x drives...a pair of RAID 1 volumes. RAID 1 for a C drive, and a second spindle..another RAID 1...for the data volume D drive.
Now using an image cloning program clone the D drive off of the first RAID 1 volume...onto the entire second RAID 1 volume.

Now stretch that 80 gig system volume to utilize the entire original RAID 1 volume.

Add a pagefile to that second spindle.
Wouldn't you rather do a 300x4 Raid 10? Better performance
 
Wouldn't you rather do a 300x4 Raid 10? Better performance

I do prefer RAID 10 over RAID 1...but this is a Dell T100...it's basically a Dimension desktop. Cramming 6x drives in her....eh....and it doesn't even have a "true" RAID card.

For immediate relief, slap in a pair of drives to add a second spindle...move that second existing partition to that spindle.

Now...budget in another year or two for a better server. ;)
 
I completely agree with Yeolde. You've got a failure in future scalability and sustainability here. With Yeolde's fix (adding the HDDs), you are basically putting a band-aid on a problem that ultimately requires a complete replacement. But you have to fix the current problem (low on space) first. Getting those drives in there and setting up a second RAID 1 like Yeolde proposed is your best short-term option.

Once you get the client on board and they see that a replacement needs to be brought in within 2 or 3 years, you can sell them on a tower server with a RAID 1 C:\ (2x 250GB should do it, depending on size) and a RAID 5 D:\ (I don't know how much space you'll need for that, so use one of the RAID calculators you can find online). That's my personal opinion.
 
Once you get the client on board and they see that a replacement needs to be brought in within 2 or 3 years, you can sell them on a tower server with a RAID 1 C:\ (2x 250GB should do it, depending on size) and a RAID 5 D:\ (I don't know how much space you'll need for that, so use one of the RAID calculators you can find online). That's my personal opinion.

I would completely go against Raid 5. Old hardware, rebuild could easily fail, data loss a real possibility.

Raid 1 or Raid 10. Personally, if it is going to be in circulation for a few more years, I would lean toward Raid 10. For them, I'm not sure it would matter. But One Big Raid 10 (OBR10) is the go-to for a lot of enterprises.
 
I would completely go against Raid 5. Old hardware, rebuild could easily fail, data loss a real possibility.

Raid 1 or Raid 10. Personally, if it is going to be in circulation for a few more years, I would lean toward Raid 10. For them, I'm not sure it would matter. But One Big Raid 10 (OBR10) is the go-to for a lot of enterprises.

I was talking about the newer server. The old server needs the RAID 1. A new one (in 2 to 3 years) should use RAID 1 for the C: and RAID 5 for the D:
 
I would stay away from SAS drives. I have been doing server networks for over 20 years. I know why so many like them, but in my recent experience with three different servers that crashed, the resulting cost and extra time needed for recovery was not worth the slight performance gain. Also, after talking with some very large data recovery shops, SAS drives make data recovery, if it's ever necessary, very hard. You can set up a SATA server, and with the money saved you can set up another device, a server or a NAS, that is mirrored. Just my experience. Even my very large radiation oncology practices have avoided using SAS drives.
 
I was talking about the newer server. The old server needs the RAID 1. A new one (in 2 to 3 years) should use RAID 1 for the C: and RAID 5 for the D:

Still no. Raid 5 is too risky, and storage is a lot cheaper nowadays. Raid 10 for performance, Raid 6 if you need redundancy + additional storage space.

There are links that support what I say, but I don't want to look them up. Basically, in the corporate IT community, Raid 5 is never suggested anymore due to the increased risk when a disk fails.

When a disk fails in a Raid 5, the server will still function. However, the array is degraded and it crawls along (it will be slower with a failed drive). When you put a new HDD into a live system (which is typically supported), the array rebuilds while online, and the chances of the Raid rebuild failing are quite high. Raid 5 is complex compared to Raid 1 or Raid 10.

With Raid 1 or 10, each disk has a mirror, and rebuilding is a lot simpler, as it is just copying data from disk to disk. Easy peasy.

Rebuilding a Raid 5 is quite complicated. Data is striped across all the disks, and rebuilding requires reading all of them. I'm bored, so I will write this out:

Disk 0 (data):   11000101
Disk 1 (data):   10011110
Disk 2 (parity): 01011011

(Of course the parity rotates among all the drives, but I digress.)

{Parity basically compares the data bits from the other two disks: if both are 0 or both are 1, the parity bit is 0; if they differ, the parity bit is 1. In other words, XOR.}

To rebuild, the controller has to read every other disk (no matter the size of the array) to reconstruct each block. Doing that while the server is in production means the rebuild has a real chance of failing. The chance of this happening on a Dell, HP, or Lenovo server with a dedicated RAID card is lower, but still not worth the risk.

Don't look at the here and now; look at WHEN (not if) a hard drive fails: do you want to gamble on a Raid 5 rebuild, or do you want to do a much safer Raid 1 or Raid 10 rebuild?
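
To put numbers on it, parity here is just XOR, and "rebuilding" a lost block is XORing everything that survived. A throwaway illustration using the example bytes above:

```bat
@echo off
rem Toy illustration of RAID 5 parity using the bytes from the example above.
rem 0xC5 = 11000101 (disk 0), 0x9E = 10011110 (disk 1).
set /a disk0=0xC5
set /a disk1=0x9E

rem Parity is the XOR of the data blocks: 0x5B = 01011011.
set /a parity=disk0^^disk1

rem If disk 1 dies, its block comes back by XORing the surviving disk with parity.
set /a rebuilt=disk0^^parity

echo Parity byte (decimal):          %parity%
echo Rebuilt disk 1 block (decimal): %rebuilt% -- should equal %disk1%
```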
 
Use good tier-1 servers with a real RAID controller....the rebuild is painless. While I agree RAID 10 or even 6 is better than RAID 5.....over hundreds of RAID 5 setups out there we've really not had problems. A drive blows...hot swap replace it...in a few hours the rebuild is done..all is fine again.
 
Use good tier-1 servers with a real RAID controller....the rebuild is painless. While I agree RAID 10 or even 6 is better than RAID 5.....over hundreds of RAID 5 setups out there we've really not had problems. A drive blows...hot swap replace it...in a few hours the rebuild is done..all is fine again.
Sorry Stonecat, I will have to disagree. Here is a thread that goes over the pitfalls of Raid 5:

http://community.spiceworks.com/topic/446858-raid-5-is-it-that-bad?page=1

This thread is a good read for everyone. In it, you will read a few articles from smbitjournal.com that I feel like every tech should read. It explains in detail why you should look at Raid 1, Raid 10, Raid 6, or even Raid 0 instead of Raid 5. The thread has a lot of questions from people contesting that logic and it is answered very well in all cases.
 
Almost fell asleep with that thread.
It's basically 1x guy that hates RAID 5 and bashes everyone else for using it. Only spewing forth his opinion. I don't see any illustrated factual history. Sounds like someone that studied and did a bunch of paper certs..but does not have any years of real work experience "in the trenches".

I'm thinking back across almost 20 years of many..many servers. Enterprise grade servers. RAID 5 did its job. With 3 or 4 or 5 or more drives...faithfully doing its job.

Only time I've seen RAID 5 fail...is in some setups with the minimum (3x drives)...and someone failed to pay attention to 1x drive that blew out...and many months or years kept passing by and a second drive blew. That's not the fault of the RAID, that's the fault of the moron that failed to watch the server. 2x drives in a RAID 10 can fail and take it down (depending on which drives).

I also am not a fan of "one big RAID 10".....like that article preaches. Maybe for those that don't really dig high performance...I still prefer a pair of drives RAID 1 for the Hypervisor OS and some system volumes of the guests, and then jam the big data volumes on the big RAID 10 made up of 6 or so drives.

There's also no consideration for the types of data on the server...if it's heavy read, or heavy write, there's other factors that don't even get factored in. Properly configuring a server (should at least) consider many factors other than just spewing for "Oh I gots me a nice big RAID 10".

With today's servers going more towards a big hypervisor server hosting many guests...yeah we belt out RAID 10's all the time now, RAID 5 is kind of a thing of the past...hardly done anymore. But to hear someone say RAID 5 is dangerous or risky...sorry, not in my long past with servers. Course I stuck with tier-1 products...back then most of them higher HP Proliants with bigger SmartArray RAID controllers. Wondering if those that had bad experiences with RAID 5 did it with software RAID on cheezy onboard controllers or cheap RAID controllers..and cheap non-enterprise drives. Or...were one of the suckers that failed to monitor the drives..and 1 drive failed..then years later the 2nd drive failed..and they blamed the RAID 5 instead of the appropriate target...themselves.

Regardless..RAID 5 is not part of the original topic of this thread....not in that T-100 class server...so back to RAID 1 and various ways to increase his capacity.
 
Almost fell asleep with that thread.
It's basically 1x guy that hates RAID 5 and bashes everyone else for using it. Only spewing forth his opinion. I don't see any illustrated factual history. Sounds like someone that studied and did a bunch of paper certs..but does not have any years of real work experience "in the trenches".

My opinion is based on thousands of other more experienced IT professionals that have done the testing and have determined that Raid 5 doesn't make sense anymore. In the old days, when servers and disk space were expensive, you had to stretch that dollar, and Raid 5 was a great way to get the most disk space.

I'm thinking back across almost 20 years of many..many servers. Enterprise grade servers. RAID 5 did its job. With 3 or 4 or 5 or more drives...faithfully doing its job.

Sure, and at the time it was the best way to go. Again, Raid 5 made sense in the past. Now, adding that 4th, 5th, 6th hard drive is not that much more expensive, and Raid 5 isn't the proper way to go. Raid 10 is the performance king, not Raid 5.


Only time I've seen RAID 5 fail...is in some setups with the minimum (3x drives)...and someone failed to pay attention to 1x drive that blew out...and many months or years kept passing by and a second drive blew. That's not the fault of the RAID, that's the fault of the moron that failed to watch the server. 2x drives in a RAID 10 can fail and take it down (depending on which drives).

Even when one drive fails in Raid 5, performance takes a hit (the same as with any Raid), but in Raid 5 the rebuild takes a big hit. The chances of a Raid 5 rebuild failing are a lot higher than with any other Raid, and rebuilds take a while. You need to assume that a Raid 5 rebuild will fail and have a decent backup as Plan B, although I wouldn't even chance the rebuild and would just do a bare metal restore.


I also am not a fan of "one big RAID 10".....like that article preaches. Maybe for those that don't really dig high performance...I still prefer a pair of drives RAID 1 for the Hypervisor OS and some system volumes of the guests, and then jam the big data volumes on the big RAID 10 made up of 6 or so drives.

If you want good performance you should go Raid 10! There are also software vendors that don't support Raid 5 setups.

There's also no consideration for the types of data on the server...if it's heavy read, or heavy write, there's other factors that don't even get factored in. Properly configuring a server (should at least) consider many factors other than just spewing for "Oh I gots me a nice big RAID 10".

http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage/





With today's servers going more towards a big hypervisor server hosting many guests...yeah we belt out RAID 10's all the time now, RAID 5 is kind of a thing of the past...hardly done anymore. But to hear someone say RAID 5 is dangerous or risky...sorry, not in my long past with servers. Course I stuck with tier-1 products...back then most of them higher HP Proliants with bigger SmartArray RAID controllers. Wondering if those that had bad experiences with RAID 5 did it with software RAID on cheezy onboard controllers or cheap RAID controllers..and cheap non-enterprise drives. Or...were one of the suckers that failed to monitor the drives..and 1 drive failed..then years later the 2nd drive failed..and they blamed the RAID 5 instead of the appropriate target...themselves.

That is not the case. Mike Smith from the Mike Tech Show said basically the same thing about Raid 5 (3x 3TB in a Raid 5). I actually emailed him on the subject and he BASHED my e-mail hard, stating again that Raid 5 is the way to go. I've never heard him bash anything that hard, so I was quite surprised.

On the next episode he apologized for being out of touch with today's corporate IT standards and blindly choosing Raid 5. He admitted that he will use Raid 10 going forward, as that is the new standard in the IT community. (Raid 1 as well, but that is for a different type of server.)

Regardless..RAID 5 is not part of the original topic of this thread....not in that T-100 class server...so back to RAID 1 and various ways to increase his capacity.

Right, and I'm sorry to go off on this tangent of mine. Raid 5 should not be used going forward. If you already have Raid 5 that's fine, but you should be sure to have good backups and plan on the rebuilding process not working.

Link Bomb of Raid stuff. Recommended for all to read:

http://www.smbitjournal.com/2012/12/the-history-of-array-splitting

http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage

http://www.smbitjournal.com/2012/11/choosing-raid-for-hard-drives-in-2013

http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count

http://www.smbitjournal.com/2012/11/hardware-and-software-raid

http://www.smbitjournal.com/2012/08/nearly-as-good-is-not-better

http://www.smbitjournal.com/2012/07/hot-spare-or-a-hot-mess

http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable

http://www.smbitjournal.com/2011/09/spotlight-on-smb-storage

Again, sorry for blowing this thread up.
 
Raid 10 is the performance king, not Raid 5.

Even when one drive fails in Raid 5, performance takes a hit (the same as with any Raid), but in Raid 5 the rebuild takes a big hit. The chances of a Raid 5 rebuild failing are a lot higher than with any other Raid, and rebuilds take a while. You need to assume that a Raid 5 rebuild will fail and have a decent backup as Plan B, although I wouldn't even chance the rebuild and would just do a bare metal restore.

Nobody is trying to say RAID 5 performs better than RAID 10. We're all aware of that. RAID 10 performs better than 5...this is not news. And RAID 6 has some features that 5 was missing. That's not disputed.

We haven't experienced bad rebuild times, nor seen rebuilds fail. You make a claim that rebuilds "easily fail"...my point is, "no, they don't easily fail". An increased chance of failure over RAID 1...yes. But "easily fail"..I disagree. Suppose lots of this is based on the RAID controllers and drives themselves; the better RAID controllers rebuild quite well. My point is...the couple of people in those threads that hate RAID 5....got burned by it because of poor monitoring, or someone failed to recognize that a drive blew out a long time ago..and now a second drive went tango uniform. I don't share the bad experiences those peeps in those threads expressed. Anyways..old passé technology, it ran great for us; now back to troubleshooting an Exchange server.
 
So this thread has derailed a bit...

Here is the thing: get that current server fixed up with a pair of RAID 1s. Start saving up for a new server; you can get away with some nice tower servers, or, if they can afford it, a rack-mounted server, and do it up super right. Hot-swappable is key for this to fully work. Most tower servers either don't do hot-swap or do it poorly. HP has some really good hot-swappable tower servers, but they aren't the best. Dells have given me problems in the past; they say "hot-swappable," but the one they send you doesn't come with that feature.

For the new server, once you've got it all figured out, you can put the OS on two drives in a RAID 1. Then put your D: on either a RAID 5, 6, or 10. I prefer a RAID 5 for the smaller companies without a lot of heavy lifting, or if it's a tower server. Tower servers sometimes don't have that many hot-swap bays. Your OS won't need that much space, so if you can plug those drives in on the inside in the cold-swap bays, go for it, and put the RAID 5 in the hot-swap bays. The way I look at it is like this: if the bay is inside where I need to remove the panel, it's cold-swap; if I can pull the drive out of the front, it's hot-swap. Typically I will use enterprise (WD RE4) drives for servers, only because when I buy direct from Dell or HP, or whoever, they send me a server with crap drives...

The WD enterprise 250GB would work for just the OS partition, and they have a 5-year warranty (SWEET!!!). Then anything from 500GB up to 2TB drives for the data partition. Again, a 5-year warranty. This makes it worth the money, because you basically have a warranty for the life of the server. Remember to have a good backup plan in motion as well. Too many times I've had a system call it quits; luckily there has been a good backup in place in most of those situations.

If you get a rack mount, RAID 10 the whole thing.
 