Synology Hybrid RAID (SHR) vs. RAID 10: which to use?

thecomputerguy

I have a photographer client who is in desperate need of more space, so I sold him on a 12-bay Synology NAS, specifically this one: http://a.co/aHMgEbo

We also decided to max out the memory on the NAS.

I ordered 2 Samsung SSDs for read/write caching and 10 x Western Digital 6TB Reds.

Question is, does anyone have experience working with SHR vs. RAID 10?

The difference is 54TB of available space using SHR vs 30TB available space using RAID 10.

The original plan was to use RAID 10, and that is probably what I am sticking with. They will be more than happy with 30TB of storage, and I'm assuming RAID 10 should be quite a bit faster than SHR, and it looks like the failure protection should be about the same.

Their primary issue right now is that their current 10TB NAS is nearly full and the transfer speeds are absolutely terrible, not to mention some of the client workstations are wireless, which I will also correct during the installation.

So ... am I missing out on anything by going with a standard RAID 10 vs using SHR?
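For reference, the usable-capacity figures above fall straight out of the layout arithmetic. A rough sketch (assuming all ten drives are the same size; real volumes lose a little extra to filesystem overhead):

```python
# Usable capacity for N equal-size drives under each layout.
# With identical drives, SHR behaves like RAID 5 and SHR-2 like RAID 6.
def raid10_tb(n, drive_tb):
    return (n // 2) * drive_tb      # mirrored pairs, striped

def shr_tb(n, drive_tb):
    return (n - 1) * drive_tb       # one drive's worth of parity

def shr2_tb(n, drive_tb):
    return (n - 2) * drive_tb       # two drives' worth of parity

print(raid10_tb(10, 6))  # 30
print(shr_tb(10, 6))     # 54
print(shr2_tb(10, 6))    # 48
```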
 
This is bulk storage, not live video editing, correct? Others may have more knowledge and contradict me on this, but I'd go with a 9-drive RAID 6 with the tenth drive as a hot spare. That'd give you 42TB and stay alive through the failure of any 2 drives at once (or 3, if there was enough of a gap for the hot spare to come into play). Have the Synology set to email you and the owner on any drive failure, and probably set an annual reminder to test that.

For as much as they're sinking into this, you might also look at $50 for Cloudberry plus backup to Glacier or something comparable. An on-demand restore of all of it would really suck price-wise, but for a photographer I suspect that most of it is accessed within the first 3-6 months after it's shot, then rarely if ever after that.
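To put a rough number on "suck price-wise": a back-of-envelope cost for a full cold-storage pull. The per-GB rates below are placeholders, not current AWS pricing — look up real numbers before quoting anything to the client.

```python
# Rough full-restore cost from cold storage.
# retrieval_per_gb and egress_per_gb are assumed placeholder rates.
def restore_cost_usd(size_tb, retrieval_per_gb=0.01, egress_per_gb=0.09):
    size_gb = size_tb * 1000            # decimal TB
    return size_gb * (retrieval_per_gb + egress_per_gb)

print(round(restore_cost_usd(30)))      # 3000 -- at the assumed rates
```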
 

Interesting ... this is a good idea ... I guess RAID 10 isn't totally necessary, since the NAS really won't be under constant load ...

Is there a difference between Cloudberry and the built-in Glacier backup from the Synology Store?
 
Not a clue, I didn't know the Synology one existed.

Also, with as much capacity overhead as you're looking at there, versioning on the NAS may be a good idea. Assuming it's only manageable from the web dashboard, it may be very helpful as protection against malware/ransomware hitting mapped drives, at least until they start using a significant percentage of the capacity.

Protect against
  • Human failure ("Wups, deleted the wrong file..." or ransomware) (versioning)
  • Software failure ("Windows ate my drive!" or "Synology nuked my volumes!") (versioning or offsite)
  • Hardware failure ("Wait, how many drives have failed?") (offsite, data recovery)
  • Structure failure ("The firemen say they don't think much survived") (offsite)
 
Yeah, definitely a good idea ...

I couldn't get them to bite on a secondary NAS with a cheaper array to back up locally using rsync ... they have a guy there who is somewhat knowledgeable (the one who got me the job), and moving to this setup from their old one is going to be a HUGE upgrade for them anyway, and he said he'd handle the backup.
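If their guy does eventually go the rsync route, the nightly job is only a couple of lines. A hypothetical sketch — the share path and backup hostname are made up, and you'd schedule it from DSM's Task Scheduler or cron:

```python
# Build the nightly rsync mirror command for a second NAS.
# SRC and DEST are placeholder paths -- adjust for the real shares.
SRC = "/volume1/photos/"                    # trailing slash: sync contents
DEST = "backupnas:/volume1/photos-mirror/"  # hypothetical backup host

def build_cmd(src, dest):
    # -a preserves times/permissions, --delete mirrors deletions,
    # --partial lets interrupted large transfers resume.
    return ["rsync", "-a", "--delete", "--partial", src, dest]

print(" ".join(build_cmd(SRC, DEST)))
```

One caveat: `--delete` mirrors deletions too, so it also faithfully mirrors ransomware damage — which is exactly why the versioning/snapshot suggestions above still matter alongside it.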
 
Looks like it's probably going to be "Snapshot Replication", which requires the Btrfs file system ... sound about right?
 
Hm. For the existing 10TB NAS, is it at capacity as far as drive sizes? If not, then with replacement drives (possibly with less redundancy?) it might be a viable backup destination, at least for the first year or two.

I'm looking at an unexpectedly soon need for capacity upgrade myself - a recent server upgrade as part of an office and equipment move was spec'd based on their traditional ~2.5GB/day for imaging studies, but they also got some new imaging equipment for the new location and storage on the imaging system is growing at 10GB/day instead. Discovering that did not make me happy.
 
On cloud backup: make sure you get a good handle on restore options with the various vendors. I did some research on this last year. Many offer a drive-shipment option, but it's limited to a common drive size like 1TB, etc. Downloading, say, 10TB would take days if not weeks.
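The "days if not weeks" claim is easy to sanity-check with a quick transfer-time calculation (assuming a sustained download speed, which real WAN links rarely hold):

```python
# Time to pull a full restore down a WAN link at a sustained speed.
def restore_days(size_tb, link_mbps):
    bits = size_tb * 8e12               # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6)
    return seconds / 86400

print(round(restore_days(10, 100), 1))  # 9.3 days at a sustained 100 Mbit/s
print(round(restore_days(10, 20), 1))   # 46.3 days on a 20 Mbit/s line
```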
 
I'd go with a big SHR-2 ... which is based on RAID 6.
Gives you more fault tolerance ... and gives you the flexibility of upgrading drives down the line, since with either SHR flavor you can mix and match drives.
While "yes," RAID 10 is faster ... even RAID 5 or 6 with over half a dozen drives will rip along very fast. Striping on reads is crazy fast with R5 or R6 and a whole bunch of drives. It's not like you're comparing R10 with 4 drives against R5 with just 3 drives.

With over half a dozen drives, especially with 10 or 12 drives, you will not notice a difference. Our Synology RS2212+ with just 4x WD Reds in there on original SHR keeps up pretty well with our gigabit connection through a ProCurve switch ... she'll hold around 78-80 MB/s on a big, long transfer.
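For context on that 78-80 MB/s figure, gigabit Ethernet's raw line rate works out as follows (a rough sketch; real-world SMB transfers top out lower after protocol overhead):

```python
# Raw gigabit Ethernet throughput in decimal MB/s.
line_rate = 1_000_000_000 / 8 / 1_000_000
print(line_rate)    # 125.0 MB/s raw line rate
# After Ethernet/IP/TCP framing and SMB overhead, sustained transfers
# usually land somewhere around 100-115 MB/s at best, so 78-80 MB/s
# is already using a large share of a gigabit link.
```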
 
The extra space for them beyond 30TB is most likely unimportant; they are at 10TB after a decade of data storage. I'm thinking it might be a good idea to reserve one of the drives as a hot spare, so the setup would look as follows:

2x 250GB SSD (cache)
9x 6TB WD Reds in RAID 6 (42TB total storage) or SHR (48TB)
1x 6TB WD Red (hot spare)

All of this is going into a gigabit switch with link aggregation enabled on all 4 ports ...

What do you think about this setup?
 
Sounds like a good setup. Software RAID 6 can be a bit slow on the performance end at times, but not nearly as bad as it once was. Newer RAID controllers can handle it at almost the same speed as RAID 5, but I'm not sure how well Synology units handle it. I've got a couple of them, one of which I think is in RAID 6, but I don't use it for anything IO-intensive. I've only got a single Gb NIC in use, despite the NAS having four of them.
 
Sounds pretty zippy, although I'd choose SHR-2 over plain SHR for the second drive of redundancy.
What's the horsepower on the client end? Are they doing 10 gig?
 
Right now the whole company is two people, and they operate over 802.11g wireless even though the computers are sitting right next to their $60 Linksys router. There is also leftover equipment indicating that at one point they were running the network over powerline (Ethernet-over-AC) adapters, so they are a mess. The plan is to hook them up with a link-aggregation-capable gigabit switch, which will be a substantial upgrade over what they have now.

Also, I think I'm going to plan on the Btrfs file system so I can take system snapshots (which, per the Synology website, isn't supported on ext4), to add an extra layer of redundancy, since their primary backup method is going to be through S3.

Sounds good?
 
Small, (probably) cheap client; don't get too fancy with filesystems and configurations.

If you don't already have the drives, consider also whether any of the options you're looking at can simply be expanded by adding drives later, then start them out with ~20-30 TB of capacity and a few empty bays.
 
G wireless ... Linksys router ... are they on one of those old WRT54G models?
Yeah, I'd toss a UniFi gateway 'n UniFi switch and AP in there!

I also agree with FencePost ... start off with fewer drives (put that money towards a new router/switch/AP) ... and one of the beauties of Synology's native RAID is the ability to toss in disks and instantly grow/expand.
 