High Availability Hyper-V Server Cluster (Dual-Site)

Has anyone here done this kind of setup before?

I'm at the early stages of figuring out the best way to do this, so please forgive the lack of detail and specific connectivity info; most of this has yet to be decided and is still very much open to suggestions.

[Attached diagram: High-Avail-Config-01.png]

Ok, so we've got two sites (about 15 miles apart), linked by a 1Gbps Ethernet point-to-point connection (aka a BTnet 'Short Haul'): http://business.bt.com/networking/ethernet/

The plan is to duplicate the hardware and configure server clusters to provide Highly Available VMs that can be accessed by either site.

Presently, I'm considering installing Windows Server 2012 R2 Datacenter Ed. (full or core, not yet decided) on both HP servers to host the VMs (using Hyper-V). The VMs will be stored on the Synology RackStation units (connected as iSCSI-attached SANs), and the intention is to use the Synology mirroring feature to provide storage failover protection.
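
For reference, hooking each host up to the RackStation LUNs would presumably look something like this (the portal address is a placeholder, and the target/LUN setup on the Synology side is done in DSM, so it isn't shown):

# On each Hyper-V host; assumes the iSCSI target is already published on the Synology
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

New-IscsiTargetPortal -TargetPortalAddress "10.0.0.50"       # RackStation IP (placeholder)
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true    # reconnect after reboots
Get-Disk | Where-Object BusType -eq "iSCSI"                  # confirm the LUN is visible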

What I'm trying to achieve is a private cloud of Highly Available VPSes that will be unaffected by maintenance or failure of any one server or SAN unit.

For DNS, I'm considering using the Synology units (is this wise?), and for the DC I'm still undecided. As I understand it, it wouldn't be wise to use the VM host as a DC, so I'm considering a separate pair of servers for this (any suggestions?), again clustered to provide failover protection.


I'm probably going about this entirely the wrong way. Any suggestions, ideas or criticisms would be extremely welcome!
 
The VM host should never be used for anything other than hosting, especially if it's M$. Personally, if it was VMware I'd use it for DNS, but that won't work for M$ AD.

On the storage side, Synology is great, but I might be concerned about the support aspect if very challenging problems won't go away. It's been my experience in situations like that, with multiple vendors, that the finger-pointing party gets kicked off early on, and the person/company that made the recommendation usually ends up being the target. While an HP SAN device will cost more, I doubt it will be an order of magnitude more.

Also, I'm a huge fan of building a lab for proof-of-concept testing. That's the only way to properly test the various applications. Database apps can be particularly cranky if the developers haven't taken the necessary precautions.
 
You generally start with local storage; when that's not flexible enough, you go to DAS, then SAN.

Synology/NAS just adds another level of failure, and the support is pretty bad.

So all my clients are on local storage with SAS drives under a 3-year NBD warranty. I would never store a VM on iSCSI to a Synology.
 
Thanks for the input guys. That's very helpful! :)

The VM host should never be used for anything other than hosting, especially if it's M$. Personally, if it was VMware I'd use it for DNS, but that won't work for M$ AD.
So I believe. It's a shame really, since Hyper-V allows the VMs to access software on the host. In cases where the software is common to all VMs, it would be very handy to install it on the host and share it with the VMs, though I do understand why that's generally not a good idea. Do you think there are any exceptions, such as MS Office maybe?

On the storage side, Synology is great, but I might be concerned about the support aspect if very challenging problems won't go away. It's been my experience in situations like that, with multiple vendors, that the finger-pointing party gets kicked off early on, and the person/company that made the recommendation usually ends up being the target. While an HP SAN device will cost more, I doubt it will be an order of magnitude more.

Also, I'm a huge fan of building a lab for proof-of-concept testing. That's the only way to properly test the various applications. Database apps can be particularly cranky if the developers haven't taken the necessary precautions.
Good point on mixing brands. I might look at an all-HP solution, although the ProLiants are out of warranty, so I'll be on my own supporting those anyway. Everything will be thoroughly tested before going into production though, both on my bench and on site. It will run alongside their existing solution for a while too, until everything's working perfectly.

I'm actually now considering using one of the VMs as a DC, which you would think would be impossible (especially since the host needs to be a member of the domain), but after some research and testing, it seems to work very well!

You generally start with local storage; when that's not flexible enough, you go to DAS, then SAN.

Synology/NAS just adds another level of failure, and the support is pretty bad.

So all my clients are on local storage with SAS drives under a 3-year NBD warranty. I would never store a VM on iSCSI to a Synology.
There will be a significant amount of local storage, but the main reason I'm looking at SAN/iSCSI is to create storage that is common to both servers (which will be separated by the 1Gbps Ethernet point-to-point connection), to allow for a High Availability setup. What would you recommend storing HA VMs on in that case?
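
From my reading so far (untested on my part, and all names/IPs below are placeholders), the clustered flavour of this would be built roughly like so:

# On both hosts:
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Then, from either host:
Test-Cluster -Node "HOST1","HOST2"                  # validate first and fix any warnings
New-Cluster -Name "HVCLUSTER" -Node "HOST1","HOST2" -StaticAddress 10.0.0.60
Get-ClusterAvailableDisk | Add-ClusterDisk          # present the shared iSCSI LUN to the cluster
Add-ClusterSharedVolume -Name "Cluster Disk 1"      # VMs then live under C:\ClusterStorage\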

Thanks again. Much appreciated. :)
 
I will use the hosts for "storage"....such as ISOs, software repositories, etc. Be it MS Hyper-V or VMware ESX, using them for storage has zero impact on stability/performance.
I will never put any "roles" on MS Hyper-V...all my Hyper-V hosts are in workgroup mode; I never even join them to the domain. Putting roles on the host can impact its performance and stability. The CPU should be as close to 100% free and available as possible to focus on the guests.

Got iSCSI running on a Syn in our office...haven't done it in production for clients myself, although my colleague has. Although I do love Synology products, IMO (like Dan mentioned above) it's another point of failure and, IMO, a higher risk than going with a full enterprise solution like HP's ProLiant MSA (fiber SANs). If a client is big enough to be doing clustering of servers, IMO they should be sticking to true higher-end tier-1 server components with good support packages.
Haven't done MS HV clustering so no input there.
 
Thanks Stonecat.

Good point about not joining the HV host to the domain; I hadn't really considered that, and I suppose there's no need for the host to be a domain member in most situations. From what I understand about HA VMs, though, it seems to be a requirement that the hosts are members of the same domain, so that the VMs can be live-migrated and moved freely between servers. I haven't actually got that far in my testing yet, to be honest.
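
From what I've read (again, untested by me, and all the names here are placeholders), enabling live migration between two domain-joined hosts goes along these lines:

# On both hosts:
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos   # Kerberos needs constrained delegation set up in AD if you manage the hosts remotely
Add-VMMigrationNetwork -Subnet "10.0.1.0/24"                     # dedicated migration subnet (placeholder)

# Then, from HOST1, move a running VM (and its storage) to HOST2:
Move-VM -Name "TestVM" -DestinationHost "HOST2" -IncludeStorage -DestinationStoragePath "D:\VMs\TestVM"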
Haven't done MS HV clustering so no input there.
Have to admit, I've always used VMware in the past. Even Hyper-V is relatively new to me, although I have played around with it a little previously in Server 2008. The clustering feature and the ability to live-migrate VMs is very cool though. Take a look at this (skip to about 3 minutes in, to bypass the setup steps):

[embedded video]
My thinking with the Synology units was to use the two of them to mirror each other, to remove the single point of failure on the storage side, although I'm not entirely sure how that would work as far as automatic failover goes (if at all). At the very least it should provide a hot standby, even if some amount of manual reconfiguration is necessary.

The budget isn't tight on this job, but it's not unlimited. Their minimum requirement is really just a bunch of VMs at each site, accessible from either site. The VMs aren't absolutely critical, in that the business won't quite grind to a halt if they go down (though a number of staff will be left with nothing to do but swivel around in their swivelly chairs), but it would be nice to provide HA VMs to give them a better experience, and to give me the convenience of being able to take a server down without disrupting the workflow .... hence the slightly half-arsed approach to achieving this.
 
Yeah, I can see where joining the domain needs to be done in the MS world so there's a level of trust/authentication, whereas in the VMware world nothing touches Active Directory; you do that in vSphere.

I tend to stick with MS Hyper-V in small clients, with 1 physical host, and perhaps 2 or 3 guests. I'm not crazy about the idea of having to bounce the host to get a buncha Windows updates in....so I don't do larger virtual setups with MS Hyper-V.
Coming from the old days....being old school, I still prefer to have as much local/direct hardware as possible. We use the HP MSAs....I recently did a nice 2040 with a buncha drives. http://www8.hp.com/us/en/products/disk-storage/product-detail.html?oid=5386548
8 gig fiber connection. I just wouldn't have the warm 'n fuzzy trust in the iSCSI-to-Synology approach.

Your goal...to have the HA so you could bounce hosts....I'd wonder how long that actual hot migration across the 1 gig pipe would take.....would it be practical...
 
Thanks. I'll take a look at the HP MSAs ... if the budget will stretch that far.

Your goal...to have the HA so you could bounce hosts....I'd wonder how long that actual hot migration across the 1 gig pipe would take.....would it be practical...
Yeah, that did cross my mind too ... I need to do some simulated tests. As long as it's reliable, though, and the speed of migration doesn't have any major adverse effects (such as data corruption!), the migration time shouldn't matter much, since the VM should remain up throughout (as I understand it). If the worst-case scenario is half an hour of outage while a VM or two are moved, they can live with that.
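
As a rough sanity check on the timings (assuming ~70% effective throughput on the 1Gbps link, which is just a guess):

# 1Gbps ≈ 125 MB/s raw; assume ~70% of that is usable
$throughputMBs = 125 * 0.7

$ramGB  = 8      # a live migration copies the VM's active RAM
$diskGB = 100    # add this if the VHDX has to move as well

"RAM only : {0:N0} seconds" -f (($ramGB * 1024) / $throughputMBs)
"With disk: {0:N1} minutes" -f ((($ramGB + $diskGB) * 1024) / $throughputMBs / 60)

That works out to roughly a minute and a half to live-migrate an 8GB-RAM VM, but over 20 minutes if a 100GB VHDX has to move with it, which is why I'm keen on the storage being shared (or pre-replicated) between the sites.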

If I can make it work the way I'm hoping to, though, migration will for the most part be a handover of ownership between servers rather than a transfer of the entire VM. Ideally, the VMs will be duplicated, with changes continuously synchronised between the two sites ... but I think that plan may be a bit ambitious.

I will probably keep it simple to begin with .... starting with one site and one VM host ... but I'm trying to plan ahead by introducing shared storage, etc. now.
 
If you need help setting up a Hyper-V failover cluster, just let me know. We have several Hyper-V clusters in our production environment, and I have experience setting them up with both Server 2008 and 2012.
 
If you need help setting up a Hyper-V failover cluster, just let me know. We have several Hyper-V clusters in our production environment, and I have experience setting them up with both Server 2008 and 2012.
That's good to know. Thanks! :)

Setting up the clusters will probably come last (assuming we continue down that route), once I've established the best way to go about it hardware-wise.
 
You do not need the hosts to be in a domain for Hyper-V Replica.
I just did a setup the other week: host1 on RAID 1/RAID 10 with SAS drives, running 4 VMs; host2 on RAID 5 with SATA. Hyper-V Replica runs every 15 minutes.

If you do iSCSI you still have one point of failure.
If you want a SAN to be redundant, you need two of them.
For a small business that's not necessary.
Hyper-V Replica gives you the data on two boxes, and then you back up with Altaro.

This is my only Hyper-V Replica setup; most of my clients do not have a second host floating around.

But my standard setup is RAID 1 (host OS/storage) / RAID 10 (VMs), backed up with Altaro to a USB drive and to an Altaro server offsite.
 
Thanks Marley ... Sorry, I just realised you were talking about Hyper-V Replica, not Hyper-V clusters.

I was actually toying with Hyper-V clusters, not Hyper-V Replica, but now that you mention it, Replica might be the way to go ..... I think I need to do a lot more reading! :confused:

So, if I understand this correctly, using Replica I can place the VMs on local storage instead (backed up to SAN/NAS/offsite/whatever) and configure the two host servers (which don't need to be domain members?) to replicate, keeping a synchronised copy of each VM on each server?
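
Something like this, I'm guessing (every name below is a placeholder, and since the hosts would be workgroup machines I'm assuming certificate-based authentication, with $thumb being the thumbprint of a certificate installed on both boxes):

# On the replica server (host2):
$thumb = "<certificate-thumbprint>"   # placeholder
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Certificate -CertificateThumbprint $thumb `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"

# On the primary (host1), per VM; -ReplicationFrequencySec is 2012 R2 only (plain 2012 is fixed at 5 minutes)
Enable-VMReplication -VMName "SRV-APP1" -ReplicaServerName "host2" `
    -ReplicaServerPort 443 -AuthenticationType Certificate `
    -CertificateThumbprint $thumb -ReplicationFrequencySec 900
Start-VMInitialReplication -VMName "SRV-APP1"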
 
That is correct. It's a sync roughly every 15 minutes.

Just remember this isn't a backup; you still need a backup method.

How big is the client's network? If it's under 50 users, I can't believe they require this.

My client that I just deployed this on had free hardware so we did it.

You can also look at Starwind products for cluster to keep local storage.
 
Great, thanks.

No, I understand; I wouldn't dream of not providing some sort of backup system.

They're probably at about 50-60 users, both sites combined. They're a very forward-thinking company though, already planning further expansion to 3 or more sites in the near future, so they're not afraid to invest in any tech they believe will provide a benefit now or over the next few years.
 
That is nothing. You may want to look at this software: http://www.starwindsoftware.com/starwind-virtual-san/

If you have a Dell T320 or higher server with 8 hot-swap bays, your ideal drive configuration is RAID 1 for the host. You can put SATA or SAS drives in; this is just for Server 2012 to load, plus your management tools (Altaro, etc.). The other 6 drives should be filled with SAS in RAID 10.

Then create the cluster or replica server. I can't see the purpose of a cluster in your case. Most likely you are doing a single DC with a file server, a single app server, and maybe a terminal server.

Have fun!
 
That is nothing. You may want to look at this software: http://www.starwindsoftware.com/starwind-virtual-san/
Thanks, I'll look into that.

If you have a Dell T320 or higher server with 8 hot-swap bays, your ideal drive configuration is RAID 1 for the host. You can put SATA or SAS drives in; this is just for Server 2012 to load, plus your management tools (Altaro, etc.). The other 6 drives should be filled with SAS in RAID 10.
That's pretty much exactly what I was planning, except I was pondering RAID 50 for the remaining 6 drives ... I will probably end up sticking with RAID 10 though. The server we already have is an HP ProLiant DL380 G7.


Then create the cluster or replica server. I can't see the purpose of a cluster in your case. Most likely you are doing a single DC with a file server, a single app server, and maybe a terminal server.

Have fun!
Thanks. And yeah, I think you're right. I'm leaning towards Replica, though I'll try to leave my options open where possible and plan/cable for a cluster configuration in case we need to go down that route at a later date.
 