Home lab server - Storage Spaces, JBOD or other storage ideas?

timeshifter

I have a decent Lenovo TS440 server with 20GB of RAM. I've been using it with VMware ESXi v6.x to run one Windows Home Server 2011 instance. I've played around with a couple of other OSs, but that's all it runs on a regular basis. I don't have any clients running VMware and don't expect any anytime soon, but I do have several running Hyper-V.

My plan is to install Windows Server 2016 as a Hyper-V host, migrate my WHS 2011 over, and then have a nice platform to "plug in" other Hyper-V machines, like copies of my clients' servers, or use viBoot from Macrium to spin up regular PCs that I may have worked on (thanks @Moltuae).

I'll probably get a 256GB, 512GB, or 1TB SSD for the main drive. I'd also like another 3-4TB or more of spinning storage for things that are typically large, don't need speed, and that I just like to keep around.

Also noteworthy - this machine won't run anything mission critical. So if it crashes or is offline for a week, not a big deal.

SO... MY QUESTION:

How do I slap together a mish-mash of the different spinning drives I have lying around? I've got several 250GB, 500GB, 1TB, and 2TB drives that have no purpose in life other than as dust collectors.

I'd heard of JBOD, so I Googled it. Storage Spaces came up. Looks like that might be a good way to do what I want. Is it? Or do you guys have any other suggestions or tips?

Thanks!

https://www.windowscentral.com/how-use-storage-spaces-windows-10
https://docs.microsoft.com/en-us/wi...storage-spaces/storage-spaces-direct-overview
 
Have a play with Storage Spaces; it's a kind of software RAID and will give you a chance to learn another technology.
 
Storage Spaces are pretty neat. I've never used them in production, just in the lab. If you want to stay all-Windows (i.e. not looking to delve into Linux solutions here), then I think that's the way to go. I wouldn't do JBOD; that's just asking for trouble. And there are still risks with RAID 5 even when it's software RAID ('Single Parity' in Storage Spaces = RAID 5), but it's probably an acceptable risk in your case if you want a bit more storage than mirroring would provide.

Also, assuming this part of the licensing is the same as Server 2012: if you use Storage Spaces, your Hyper-V install will count as one licence used, and then you will only have one left for a Windows Server VM. Still, I think pairing Hyper-V with Storage Spaces probably makes sense, though most of the time I would keep Hyper-V as the only role on bare metal.

EDIT: Clarified that I was still talking about Storage Spaces when bringing up RAID 5
 
Storage Spaces is nice to work with; however:
1. Deployment may be tricky (watch your number of columns),
2. Expansion (adding drives for moar capacity) may be downright ugly if the deployment wasn't good enough,
3. Once it fails, data recovery is tricky, if possible at all.

Now if you have a backup, Storage Spaces is great and versatile.
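The column gotcha above is easy to see with a little arithmetic. As a sketch (the column counts below are illustrative, not from the thread): in a single-parity space with C columns, each stripe spends one column on parity, so the usable fraction is (C-1)/C, and extending the virtual disk later needs free space on as many drives as there are columns.

```python
# Sketch of why NumberOfColumns matters for a single-parity Storage Space.
# With C columns, 1 column per stripe holds parity, so usable capacity is
# (C-1)/C of raw -- and expanding later takes C drives at a time.
def parity_efficiency(columns: int) -> float:
    """Usable fraction of raw capacity for single parity with this many columns."""
    return (columns - 1) / columns

for c in (3, 4, 7):
    print(f"{c} columns: {parity_efficiency(c):.0%} usable, "
          f"expand in batches of {c} drives")
```

So a wide column count gives better efficiency up front, but makes that "moar capacity" expansion much more expensive per step.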
 
The problem with drives of different sizes is avoiding wasted space: RAID 5 and 6 will size every member down to the smallest drive. Personally I don't like JBOD, but it's probably the most effective way to use a mix of drives in Windoze Server without wasting space. No redundancy, though. I've not used Storage Pools/Spaces, but the description reminds me of FreeNAS ZFS, which does a pretty good job of combining different-size drives.
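The "smallest drive rules" point is worth putting numbers on. A quick sketch, using a drive mix like the one mentioned in the opening post (the exact mix here is an assumption):

```python
# Rough usable-capacity comparison for a pile of mixed drives (sizes in GB).
# Assumptions: RAID 5 sizes every member to the smallest drive and spends one
# drive's worth of space on parity; JBOD (spanning) simply concatenates them.
drives = [250, 500, 1000, 1000, 2000]  # hypothetical mix of leftover drives

raid5_usable = min(drives) * (len(drives) - 1)   # smallest drive rules
jbod_usable = sum(drives)                        # everything, no redundancy

print(f"RAID 5: {raid5_usable} GB usable ({sum(drives) - raid5_usable} GB wasted)")
print(f"JBOD:   {jbod_usable} GB usable (0 GB wasted, 0 redundancy)")
```

With a mix that lopsided, classic RAID 5 throws away most of the raw capacity, which is exactly why spanning (or something ZFS-like) looks tempting for a drawer full of odd drives.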
 
JBOD is dangerous for your data. Been there and done that too often in my early days. A hard drive can recover from a bad sector or two; JBOD cannot. It breaks the stripe set and there goes your data, with little chance of recovery. To do JBOD with new drives is a risk. To do it with used drives is ridiculous.
 
Sorry Diggs, but that's not correct.

1. JBOD does not have stripes. There is no stripe set in a JBOD.
2. JBOD recovers from a bad sector (or a few bad sectors) the same as a single drive does.
3. JBOD has a higher failure probability, which increases with the number of member disks, just as with any RAID.

Generally speaking, JBOD has about the same reliability as RAID 0. Given that data loss is acceptable, or given backups: RAID 0 is used with drives of the same size, or where speed matters over capacity; JBOD is used with drives of different capacities, where capacity matters over speed.
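The "reliability drops as disks are added" point can be made concrete. A sketch, assuming independent failures and a made-up per-drive annual failure rate (the 5% figure is purely illustrative, not a claim about any real drive):

```python
# Probability that a JBOD/RAID 0 volume is lost, assuming independent drive
# failures with the same annual failure rate p per drive. Any single drive
# failure kills the whole volume, so risk grows with the member count.
def volume_failure_prob(p: float, n: int) -> float:
    """Chance at least one of n drives fails, losing the spanned volume."""
    return 1 - (1 - p) ** n

p = 0.05  # hypothetical 5% annual failure rate per (used) drive
for n in (1, 2, 4, 8):
    print(f"{n} drives: {volume_failure_prob(p, n):.1%} chance of losing the volume")
```

The same formula applies to RAID 0, which is why the two sit in the same reliability class: neither survives a single member failure.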
 
Sorry Diggs, but that's not correct.

1. JBOD does not have stripes. There is no stripe set in a JBOD.
2. JBOD recovers from a bad sector (or a few bad sectors) the same as a single drive does.
3. JBOD has a higher failure probability, which increases with the number of member disks, just as with any RAID.

Generally speaking, JBOD has about the same reliability as RAID 0. Given that data loss is acceptable, or given backups: RAID 0 is used with drives of the same size, or where speed matters over capacity; JBOD is used with drives of different capacities, where capacity matters over speed.

How much JBOD have you run? Are you quoting specs, or have you dealt with JBOD failures in the past? (I shouldn't have said "stripe set", but I thought it would be more informative to most, since any break in the volume set of the drives results in loss.) I have never been able to recover a JBOD volume after one of the drives experienced an issue. Sure, maybe through ddrescue or advanced recovery, but it only takes one drive to hiccup and things go bad quickly for the user.

Per wiki (just for reference)- "A spanned volume provides no redundancy, so failure of a single hard drive amounts to failure of the whole logical volume.[5][6] Redundancy for resilience and/or bandwidth improvement may be provided, in software, at a higher level."
 
I am in the data recovery business. So, I normally don't run RAIDs, but I recover them quite often, JBODs included. JBODs specifically, I'd say I've seen a couple dozen cases. In fact, most modern NAS units may have a JBOD inside; they just don't tell you that, and the automated tooling handles it during recovery, so you don't have to bother with it yourself.

All in all, JBOD is by far not the worst thing to recover.
 
I am in the data recovery business. So, I normally don't run RAIDs, but I recover them quite often, JBODs included. JBODs specifically, I'd say I've seen a couple dozen cases. In fact, most modern NAS units may have a JBOD inside; they just don't tell you that, and the automated tooling handles it during recovery, so you don't have to bother with it yourself.

All in all, JBOD is by far not the worst thing to recover.

Thank you @Alexey . You've changed my thinking on JBOD. (But I still will never use it again.)
 
Diggs is one great fellow. It's the subject, storage reliability and fault tolerance, that is goddamn complex.
 
The main questions I have are:
  1. Do you pay for your own electricity? If so, a couple of newer high-capacity drives may save you money over a bunch of old drives just in power costs. This is a topic that comes up regularly, though not so much on here.
  2. How much is your time worth, and is this something that you'd consider doing at a customer so you're trying to get experience with it? If you figure you're going to spend a few hours getting everything going and maybe a few more over the course of the year, is it worth it?
 
Do you pay for your own electricity? If so, a couple of newer high-capacity drives may save you money over a bunch of old drives just in power costs
I've done exactly this. A couple of clients had RAID setups running up to 8 old drives, all chewing power like there's no tomorrow.
We swapped them for (much) larger-capacity new drives that use less power, run quieter (2 versus 8), and generate far less heat. The 2 new drives use less power than just one of the old drives! (well, almost :p)
Our power is the most expensive in the world now, so any saving no matter how small adds up on systems that run 24/7.
 
The main questions I have are:
  1. Do you pay for your own electricity? If so, a couple of newer high-capacity drives may save you money over a bunch of old drives just in power costs. This is a topic that comes up regularly, though not so much on here.
  2. How much is your time worth, and is this something that you'd consider doing at a customer so you're trying to get experience with it? If you figure you're going to spend a few hours getting everything going and maybe a few more over the course of the year, is it worth it?
I do pay for my own electricity, about $0.10/kWh. I think it would be tough to justify buying new drive(s) on power savings alone. It would take a LONG time.

Not considering doing this for a customer. I've got a bunch of drives lying around AND I'd like to have a server with some capacity. Don't know much about JBOD or Storage Spaces. Will be a good opportunity to learn, and maybe a little fun at the same time.
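The back-of-the-envelope math behind "it would take a LONG time" is easy to sketch at the $0.10/kWh rate mentioned above. The drive counts and wattages here are rough assumptions, not measurements:

```python
# Back-of-the-envelope 24/7 power cost: several old drives vs. two new ones,
# at the $0.10/kWh rate from the thread. Wattages are rough assumptions.
RATE = 0.10          # $/kWh
HOURS = 24 * 365     # hours per year, running 24/7

def annual_cost(n_drives: int, watts_each: float) -> float:
    """Yearly electricity cost in dollars for n drives drawing watts_each."""
    return n_drives * watts_each * HOURS / 1000 * RATE

old = annual_cost(6, 8.0)   # six older drives at ~8 W each (assumed)
new = annual_cost(2, 5.0)   # two modern high-capacity drives at ~5 W (assumed)
print(f"old: ${old:.2f}/yr, new: ${new:.2f}/yr, savings: ${old - new:.2f}/yr")
```

At roughly $30-something per year in savings under these assumptions, a pair of new multi-TB drives would indeed take years to pay for themselves on power alone, which matches the conclusion above. At the much higher rates some posters face, the math shifts considerably.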
 