LBFO vs. SET teams

HCHTech

I don't do enough of these to have anything beyond a passing curiosity, but I have a couple of Hyper-V servers to set up this month and the topic has reared its head again.

In the past, when you set up a NIC team in Server Manager, you could then choose that team when setting up your virtual switch for the VMs. The team that Server Manager created was an "LBFO" team = Load Balancing and Failover.

Sometime after Server 2022 was released, Microsoft decided that LBFO teams were no bueno, but so far hasn't chosen to update Server Manager. So if you follow the "old" procedure and make the team in Server Manager first, you'll have trouble creating your virtual switch because "attaching a virtual switch to an LBFO team has been deprecated". MS instead wants you to use a "SET" team = Switch Embedded Teaming.

You need to use PowerShell to make a SET team, and the one command creates both the team and the virtual switch at the same time.
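
For reference, a minimal sketch of what that one-step command can look like (the switch and adapter names here are placeholders, and the load-balancing tweak at the end is optional):

Code:
# Creating the switch with more than one adapter and -EnableEmbeddedTeaming
# builds the SET team and the vSwitch in a single step.
New-VMSwitch -Name "vSwitch-SET" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Optional: choose the load-balancing algorithm afterwards (HyperVPort or Dynamic).
Set-VMSwitchTeam -Name "vSwitch-SET" -LoadBalancingAlgorithm HyperVPort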

Searching for why SET > LBFO, you get non-specific things like "better performance" and "stability", but nothing specific, and certainly no warning that LBFO shouldn't be used because of some bad thing happening. I have several Server 2019 Hyper-V hosts out there using LBFO teams that have always worked, and continue to work, perfectly fine. No hint at all of networking issues that might be blamed on the LBFO setup.

Further, you can still make an LBFO team work with a virtual switch today if you use a specific PowerShell command to enable the bit that MS disabled.
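
For anyone curious, the override is just a parameter on New-VMSwitch, along these lines (team and switch names are examples only, and this is the deprecated path):

Code:
# Attach a vSwitch to an existing LBFO team by overriding the deprecation block
New-VMSwitch -Name "vSwitch-LBFO" -NetAdapterName "NIC-Team" -AllowNetLbfoTeams $true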

Maybe Server Manager in Server 2025 will have gotten its act together on this, but for now, I guess I'm wondering if anyone has any experience in choosing one way over the other. I'm using SET teams for the two new servers I'm setting up, but I'm only doing it "because MS says so". It's not a terrible reason, just not one based on any real knowledge, which is never my default path.

Anyone?
 
Copy/pasting your question into ChatGPT gave this response...

You're definitely not alone in questioning the shift from LBFO to SET teams for virtual switches in Hyper-V. Microsoft's move towards encouraging the use of Switch Embedded Teaming (SET) instead of LBFO teams reflects a more modern, flexible, and feature-rich approach, though it isn't immediately obvious why they pushed for this change, especially when LBFO teams have worked fine for many environments.

Here's a bit more insight into the key differences and the reasoning behind the shift:

1. What SET brings to the table

  • SET (Switch Embedded Teaming) is designed to simplify and streamline the process of NIC teaming within the context of Hyper-V. SET teams automatically integrate the virtual switch functionality directly with the NIC teaming, which provides better support for things like network performance optimization, load balancing, and failover without needing separate configurations.
  • Unlike LBFO, SET eliminates the need to create an LBFO team and then separately configure a virtual switch. It allows you to create both the team and virtual switch in a single operation, ensuring tighter integration and improved management.
  • SET supports Hyper-V Virtual Switch Extensions, enabling features like NIC virtualization and support for hardware-based offloading. This can result in better scalability and potentially enhanced performance over time, especially in more complex environments.

2. Where LBFO stands

  • LBFO (Load Balancing and Failover) teams are still valid in Server 2019 and 2022, and if you've been running Hyper-V hosts using LBFO teams without issue, you're likely in the clear for the time being. The reason for Microsoft's push to move away from LBFO is to bring more advanced features under one umbrella, and SET provides better integration with newer Hyper-V features.
  • While you can still get LBFO teams to work with virtual switches (by using specific PowerShell commands to enable functionality), it's not the "preferred" method anymore, and it’s possible that future versions of Windows Server (2025 and beyond) may deprecate or outright remove support for LBFO teams.

3. Why Microsoft is pushing SET

  • Microsoft hasn’t provided in-depth technical reasons, but there are several potential benefits to using SET teams over LBFO:
    • Unified Management: SET provides better integration with Hyper-V, meaning it can reduce configuration complexity and potential errors when setting up the virtual network.
    • Improved Resilience: SET is designed to be more resilient to failures, offering improved fault tolerance and easier troubleshooting due to its tighter integration with the virtual switch.
    • Performance Optimizations: SET teams are better optimized for things like live migration and other Hyper-V-specific workloads. SET can also provide better support for high-performance networking scenarios where traditional LBFO might struggle with certain features (like virtual machine port mirroring).
    • Future-proofing: By using SET, you're positioning your setup in alignment with future Hyper-V features and best practices. While LBFO is stable now, it's possible that Microsoft will no longer fully support LBFO teams in future versions of Hyper-V or Windows Server.

4. Practical recommendation

  • While there may not be immediate, catastrophic issues with using LBFO teams, SET teams are the "recommended" configuration by Microsoft. This aligns with modern best practices and is likely to be more stable and reliable long-term.
  • It's also a more scalable approach. As your infrastructure grows, SET is designed to scale more gracefully, especially if you're deploying large numbers of VMs or planning to take advantage of newer Hyper-V features.
  • Compatibility with Newer Features: As new Hyper-V features are released, they are likely to be tested and optimized with SET, meaning you're less likely to run into compatibility issues in the future.

5. Example PowerShell command

You mentioned using PowerShell to create SET teams, which is the recommended method. Here’s a simple example of how you might do it:

Code:
# Create the SET team and virtual switch in one step
# (-NetAdapterName creates an external switch; listing two adapters with -EnableEmbeddedTeaming forms the SET team)
New-VMSwitch -Name "VirtualSwitchName" -NetAdapterName "Ethernet1", "Ethernet2" -EnableEmbeddedTeaming $true
 
The adapter teaming baked into Windows Server has an overhead that's borderline catastrophic for Hyper-V.

The updated process, yes... it requires PowerShell to deploy, but it also offers an almost 80% reduction in RAM utilization, and crushes the CPU load to levels that are barely measurable. These changes were essential in solving 10Gbit performance issues, and in opening the door to 25Gbit and beyond on the vSwitch.

Look at it this way: LBFO terminates the team on the host OS, SET terminates it on the VIRTUAL SWITCH! The frame flow is very similar to how two physical switches communicate once it's online. LBFO sort of emulates this via the host OS, and slows things down in the process. And yes, it does work, and it's still supported, just not ideal.
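
If you want to see that difference on a live host, you can query the team straight off the switch object (the switch name below is just an example):

Code:
# A SET team is a property of the virtual switch itself, so you query the switch...
Get-VMSwitchTeam -Name "vSwitch-SET"

# ...while an LBFO team is a host-OS construct that exists independently of any vSwitch.
Get-NetLbfoTeam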

These improvements were leveraged HARD in Server 2025's updates, and are exposed via the current edition's Azure Arc control plane in a more reasonable way. But if you're managing things locally... yeah... PowerShell time. Microsoft has moved Hyper-V's control plane to Arc in Windows Server 2025, which allows deployment of Azure vNets to the local Hyper-V; the converged control is... a tectonic shift when you get used to it. Deploying things in Azure and getting them on prem, or wherever the Bicep / JSON / Terraform demands, is pretty wild.

The only real downside is the loss of LACP, but that's not the huge deal you'd think it is. LACP is often a liability, and having your hosts be switch-portable will save your rear end when the excrement elevates. There are ways to get it back, but again the idea is converged management, and the crazier you get with the networking, the less portable the configuration is.

The old way of teaming leads to intermittent connectivity issues, and excessive CPU / RAM consumption at 10Gbit and greater bandwidth levels. If you've only got a VM or two and you're using a team of gigabit NICs, you will feel some intermittent issues, but it's rare enough that you may not even notice! My Hyper-V server still operates this way in my lab, and the only real side effect I have is having to reboot VMs every so often because the IP stack falls over. This has always happened while I'm puttering around in there, so it hasn't caused any sort of service outage. But all servers going into production get a SET team.

Note: this process also allows TPM-enabled VMs to extend coverage to their virtual NICs too, which has value in various contexts, but for most SMB deployments probably doesn't matter much.

Relevant Learn article: https://learn.microsoft.com/en-us/a...-server/context/windows-server-virtualization

P.S. I consider copy / pasting GPT output into a forum thread to be extremely rude. If we wanted the input of an LLM, we'd go get it ourselves. It has no place in discussions like this unless we're trying to correct its hallucinations. What was the goal here? Padding post count?
 
Thank you both, that all makes sense. However, if SET is the new standard and the way things should be done (for the reasons you have clearly explained), then it should be doable in the GUI without having to a) get the error message, b) research the concept to understand the difference (which, as my question implied, didn't really produce results to explain it clearly) and c) look up the PowerShell command to do things the "recommended" way.

Once you look it up, for example, it isn't clear that the one command replaces both steps. If you are going to use SET teaming, you don't need to create the team first before you even get to creating the vSwitch. You learn that by trial and error - more time spent.

Just make the link in Server Manager create a SET team in the first place. That will get more folks to do it the recommended way, and will lower the frustration level and time required to set the thing up. Says I.
 
The old way of teaming leads to intermittent connectivity issues, and excessive CPU / RAM consumption at 10Gbit and greater bandwidth levels.

I guess this begs the question of whether I should "fix" the deployments I have using LBFO teaming. Most are teaming multiple 1GbE NICs, but two are teaming 10GbE NICs and the "big" one is teaming 25GbE NICs. Absolutely none of them have presented any networking issues.
 
I guess this begs the question of whether I should "fix" the deployments I have using LBFO teaming. Most are teaming multiple 1GbE NICs, but two are teaming 10GbE NICs and the "big" one is teaming 25GbE NICs. Absolutely none of them have presented any networking issues.
As always, if it ain't broke... don't fix it.

Just keep it in the back of your head for the future should the need arise. I've seen the LBFO setup work for some people for years. My NOC inherits them on the regular, and we support them as is. We won't change it until the host is replaced, or there's a project to implement multiple new virtual machines during which we have a maintenance window to rebuild the team.
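
When that window does come around, the rebuild itself is only a few lines, roughly like this (names are placeholders, and the VMs lose connectivity until they're reattached):

Code:
# Tear down the old vSwitch and the LBFO team underneath it...
Remove-VMSwitch -Name "vSwitch-Old" -Force
Remove-NetLbfoTeam -Name "NIC-Team" -Confirm:$false

# ...then recreate it as a SET team and reattach the VMs (assumes every VM belongs on the new switch).
New-VMSwitch -Name "vSwitch-SET" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true
Get-VM | Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName "vSwitch-SET"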

As for the UI issues, yes... Microsoft has devolved into a command-line company; PowerShell is the only UI that can do all the things, and the UI updates are not going into the OS, they are going into Azure. I would NOT expect Server Manager to improve, because it's no longer in active development. Microsoft has plenty of customers, and the comfort of technical assets is just not on their care list. On premises is dying anyway, so they aren't going to put money there.

That means you and I get to make money because we KNOW THESE THINGS. It's a barrier to entry that separates those that are trained, vs those that are not.

I'm going through something similar with an off-boarding customer right now. They expect me to train them on RBAC, and how to grant administration rights to various accounts, and even how GDAP works! Because their new provider doesn't understand Entra ID 101 crap, nor are they effectively and properly involved in a Microsoft partner program to gain the required support. Not my job to do either, but I do it on the regular here to try to help everyone because... well... If you screw up authentication... nothing else matters.

But I have to eat too!
 
Once you look it up, for example, it isn't clear that the one command replaces both steps. If you are going to use SET teaming, you don't need to create the team first before you even get to creating the vSwitch. You learn that by trial and error - more time spent.
Every time I work with a new Microsoft thing, I have to fix their documentation. Part of the cost of using Microsoft products.
 
Every time I work with a new Microsoft thing, I have to fix their documentation. Part of the cost of using Microsoft products.
The documentation is perfect!

...

If you have the right document...
And it was updated relatively recently...
And the LLM that updated it wasn't on crack...

...

Seriously, I find Learn.Microsoft.com to be invaluable, but goodness... it's a slog to find the right article sometimes!
 
Or better yet, you find the exact answer to your question on some Reddit post, with a link to the MS article, which no longer exists! Just follow the steps HERE! Always with replies stating "You're a lifesaver! I've been struggling with that for months!"
 
Or better yet, you find the exact answer to your question on some Reddit post, with a link to the MS article, which no longer exists! Just follow the steps HERE! Always with replies stating "You're a lifesaver! I've been struggling with that for months!"
This happens because things change. Old answers aren't always relevant, especially when they are dealing with a cloud service.

Good news, AI screws this stuff up because of those links. Which means we get to keep our jobs a few years longer.
 
Think I only used it at one client...but it was back when you'd use HP's software to do it...if I recall, the OS itself wasn't doing it yet.

I much preferred the approach of having "quad NICs"...usually in addition to the two onboard NICs the server came with.
I always wanted Intel NICs...versus the Broadcom or white-label NICs (that were still Broadcom under the hood) that were OEM'd into the mobo.

This way I could create multiple vSwitches and spread the load of the guests across them.
Started doing 10 gig NICs (with an uplink to a 10 gig SFP+ port on the switch)...when 10 gig came out. But to be honest...even at my client with a bunch of heavy software...a 1 gig uplink from the server was not a bottleneck.

Anyways, nothing I typed answered the question. Just a quick mention...I prefer the quad NIC option when ordering servers...and spread the load of guests across those. I've only seen a server cook a NIC once....."once"...and that was many years ago, at an exclusive golf club where the office building took a direct lightning hit...building was made with lots of huge steel beams.....huge hole blown through the building, it was such a strong hit. Everything got really roasted bad in that building...including the server's NIC. Installed a massive APC NetProtect system after that! But doubt it would survive a direct hit anyways.
 
In the past, when you set up a NIC team in Server Manager, you could then choose that team when setting up your virtual switch for the VMs. The team that Server Manager created was an "LBFO" team = Load Balancing and Failover.

Sometime after Server 2022 was released, Microsoft decided that LBFO teams were no bueno, but so far hasn't chosen to update Server Manager. So if you follow the "old" procedure and make the team in Server Manager first, you'll have trouble creating your virtual switch because "attaching a virtual switch to an LBFO team has been deprecated". MS instead wants you to use a "SET" team = Switch Embedded Teaming.

You need to use PowerShell to make a SET team, and the one command creates both the team and the virtual switch at the same time.
Late to this thread as I am not as active on these forums as I used to be, but I use LBFO when setting up servers. These are the PowerShell commands I use after I have created the teams in Server Manager under the Local Server section. I create two NIC teams. I've been using the Dell PowerEdge R450.

New-VMSwitch -Name "vSwitch-01" -NetAdapterName "NIC-TEAM-01" -AllowNetLbfoTeams $true -AllowManagementOS $true
New-VMSwitch -Name "vSwitch-02" -NetAdapterName "NIC-TEAM-02" -AllowNetLbfoTeams $true -AllowManagementOS $true
 