Hyper-V server 2019 slow network

RoadRunner

Location: Australia
Hi,

Just set up a new server with a Server 2019 host, one Server 2019 VM and one Win10 VM. The VM isn't doing much, just normal DHCP/DNS and some dental software. We replace this server every 3-5 years and not much changes. This server VM now has very slow network speed. It's a Lenovo server with SSDs and we used the onboard Broadcom card. Found the usual advice about Broadcom and disabling queueing (VMQ). Done that, no change.
Ordered an Intel dual 10G card (also only running at 1G in our case, as the Unifi switch only has 1G SFP modules) and nothing changed. If anything it feels slower.

I also have a Win10 VM on the server that they use to connect remotely. Everything works fine on that machine, so it can't be the hardware/SSDs/install, as it works fine directly on the server. Copying files off the server to workstations as a test is very slow as well. All the networking is brand-new Unifi hardware; we installed it a few weeks ago with the old server still in place, and there were no problems with the network hardware on the old server.

Any thoughts?

It feels like there's some magic PowerShell command to disable some magic MS feature, but I can't find anything online.
 
Did someone fart with the MTU? This happens if you enable Jumbo Frames on the switch or the server, but not on the other side.
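A quick way to check for an MTU mismatch from the host (adapter name and target IP below are placeholders, substitute your own):

```powershell
# Check whether Jumbo Frames are enabled on the physical NIC
# ("Ethernet" is a placeholder adapter name - use Get-NetAdapter to find yours)
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet"

# Don't-fragment ping at jumbo payload size (8972 = 9000 byte MTU minus
# 28 bytes of IP/ICMP headers). If this fails while the switch claims jumbo
# support, something in the path is still at 1500.
ping -f -l 8972 192.168.1.1
```

If everything in the path is at the default 1500, both ends should simply agree and this isn't your problem.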
 
Presumably Bandwidth Management isn't enabled on the virtual network adapter?
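You can check that from an elevated PowerShell prompt on the host; a minimal sketch:

```powershell
# List any bandwidth limits configured on each VM's virtual NIC.
# A non-null BandwidthSetting (minimum weight/absolute or maximum)
# means bandwidth management is active on that adapter.
Get-VM | Get-VMNetworkAdapter | Format-List VMName, Name, BandwidthSetting
```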

Can you post some screenshots of your virtual switch and virtual network adapter configuration?
 
Can you layout the setup?
What drive/RAID config? These days, especially with SSDs, one big RAID volume is fine.
How much RAM total? How much for each guest?
What CPU(s)?
How are the guest drives set up? What generation are the VMs?
Specify the details of what you did to change VMQ.
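On the VMQ point: it's worth confirming from PowerShell that the GUI change actually applied to the adapter bound to the vSwitch. A sketch ("Ethernet" is a placeholder name):

```powershell
# Show VMQ state per physical adapter - Enabled $true here means the GUI
# change didn't take, or was applied to a different adapter
Get-NetAdapterVmq

# Disable VMQ explicitly on the adapter the virtual switch is bound to
Disable-NetAdapterVmq -Name "Ethernet"
```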
 
Brand new Lenovo ST250
2x 1.96TB SSD mirror + hot spare
64GB RAM; I think 48GB for the server VM and 8GB for the Win10 VM
SSDs are not partitioned; the host is just one server install on the full drive, with the VMs in their default location
VMQ was just disabled through the network adapter GUI
Just did a bit more testing and it seems like one big file, e.g. a 1GB test file, copies at full speed, but heaps of little files copy at 100KB/s.
Copying the same test folder with 5000 little files between workstations I only get 550KB/s.
 
Try disabling RSC on the Hyper-V host, then reboot and test again.

netsh int tcp set global rsc=disabled
Get-NetAdapterRsc | Disable-NetAdapterRsc
Get-VMSwitch | Set-VMSwitch -EnableSoftwareRsc $false
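After the reboot you can confirm the changes stuck; a sketch (the SoftwareRscEnabled property name assumes Server 2019's Hyper-V module):

```powershell
# "Receive Segment Coalescing State" should read "disabled"
netsh int tcp show global

# IPv4Enabled/IPv6Enabled columns should both be False
Get-NetAdapterRsc

# Software RSC on the virtual switch should be off as well
Get-VMSwitch | Select-Object Name, SoftwareRscEnabled
```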
 
Done this and it seems way better. Same speed as between workstations now, which still seems slow, but I think that's more of a Windows thing: it really doesn't like heaps of small files.
 

Loads of small files being slower to transfer than one large file is just a thing in general, Windows or Linux, any version old or new.

Each file requires a new SMB call, a new filesystem command, a check that it doesn't already exist, and possibly AV scanning of every new file. Add on the natural latency of a network connection and it all adds up.
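A rough back-of-envelope (illustrative numbers, not measurements) shows why the per-file cost dominates:

```powershell
# Assume ~2 ms of per-file overhead (SMB open/close, existence check,
# AV hook) and 5000 files averaging 20 KB each - both figures are
# illustrative guesses, not measured values.
$files = 5000; $perFileMs = 2; $avgKB = 20
$overheadSec = $files * $perFileMs / 1000   # 10 s of pure per-file cost
$dataMB = $files * $avgKB / 1024            # ~98 MB of actual data
$wireSec = $dataMB * 8 / 940                # under 1 s at gigabit line rate
"{0}s of overhead vs {1:n1}s of wire time" -f $overheadSec, $wireSec
```

Even with these conservative numbers the transfer spends far longer on per-file bookkeeping than on actually moving bytes, which matches the 100-550KB/s you measured.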

There isn't much you can do about it past a certain point. Even with NVMe and 10Gb NICs at both ends you still won't see great performance copying folders with tens of thousands of small files.

RDMA does a good job of tackling this (RoCE, InfiniBand, SMB Direct, etc.) but you won't see it on desktop hardware any time soon, if ever. It's mostly reserved for block storage purposes: SANs, S2D, vSAN, etc.
 