Terminal Server Questions

HCHTech

My biggest client is due for new servers at the end of the year (between Thanksgiving & Xmas is their traditional slow time), and I'm starting to scope out configurations and costs.

For background, here is their current setup:

45 workstations & 3 physical servers in the main office, and 5 workstations in a satellite office, which is connected to the main office with a full-time VPN tunnel. All workers in the satellite office RDP into virtual workstations at the main office to run software and share resources.

Server 1 VMs: Domain Controller, App server (runs SQL & holds the main data store), 6 workstations
Server 2 VMs: Backup DC, Main backup server running Quest Rapid Recovery
Server 3 VMs: Located in owner's home, duplication server for Rapid Recovery

Their main LOB app uses a SQL database, with client software installed on all workstations. Database is about 30GB.

Their secondary LOB app uses an Access database that is housed on the AppServer and accessed by client software from the workstations as well.

All employees have the option of working from home part of the time. Currently, these employees make VPN connections to the office and then RDP into their workstation.

===

The lease on their current space for the main office will be up in February next year and they are considering moving to a smaller space and supporting more full-time remote workers. They want to eliminate the need to have a physical workstation for every worker in the main office. That means setting up a terminal server on premises or in the cloud to support their LOB application.

I don't believe they are a candidate for a cloud-only solution. They have a TON of data, and storage costs would eat them alive. If we offer a cloud solution, I think it will have to be only for their LOB app. If we go with an on-premise terminal server, then I think performance will be better and it could be used for all of their apps, not just the SQL LOB app.

Despite being fairly well versed in most server setups I run into in my corner of the shallow end of this pool, I have never had to set up or maintain a terminal server before, so I don't know the real-world pros and cons of that solution.

For example, exactly where is the client software installed in a terminal server setup? Does each remotely connected worker run the same installation of that software with the OS doing the magic of allowing multiple sessions? This would seem to necessitate some pretty beefy hardware requirements for that box, yes? How do you quantify these requirements when configuring a new build?

With TS, is the end-user experience the same as with a regular workstation? Each worker would have their own desktop with their own shortcuts & Outlook configured for their email, etc.? Are these profiles all stored on the TS itself? Anything special required w/r/t backups for this box?

Additionally, I would need RD CALs in addition to regular user CALs for each remote worker, yes?

What can we expect w/r/t real-world performance (compared to working at a physical workstation in the office as well as compared to working over a VPN/RDP like they are using now for remote workers)? I don't see how a TS setup could ever be faster, but maybe I'm wrong. I'll stop here before I show too much of my ignorance!
 
$0.075/GB is a "ton"? How much storage do you need? Two TB of SSD storage in Azure is ~$150/month.

Yes, you need RDS User CALs. And I would think long and hard about nuking that second DC, consolidating into two Azure hosted VMs. You can use the Azure AD service too if you want to move the AD off the VMs entirely.

RDS environments can be highly customized; by default they're very similar to a normal desktop environment.
 
$0.075/GB is a "ton"? How much storage do you need? Two TB of SSD storage in Azure is ~$150/month

Their backup footprint is just shy of 9TB.

If the Azure rates are linear, that's roughly $675/mo in storage alone. I'm not dismissing it out of hand, but it's not pocket change.

And I would think long and hard about nuking that second DC, consolidating into two Azure hosted VMs. You can use the Azure AD service too if you want to move the AD off the VMs entirely.

Well, I'm clearly old school, then, because the thought of moving a business that size completely to the cloud scares the hell out of me. A ton of work, a ton of money, and I have no confidence they would see any advantages over a completely on-prem solution. Maybe? Sure. But guaranteed? Never. I've seen how many problems we've had taking a few tiny businesses to SharePoint when we decommissioned an old server. I would want to see firsthand a business of this size running well in the cloud before I even thought of proposing such a radical change for them.
 
The advantage is... they can move to new premises with ZERO secured and air-conditioned space for servers. Their entire onsite infrastructure congeals into a wall-mounted rack enclosure.

The larger the business is, the greater that benefit manifests. If they're already used to using RDS, there is zero reason to have things on premise. Cloud that thing up!

But you're right, it isn't for everyone. If they're willing to make the investment to do it right on prem, Azure should still be cheaper. Datto + Dell financing costs more for replacement... every time. The decision boils down to how reliable their Internet connection is.
 

For example, exactly where is the client software installed in a terminal server setup? Does each remotely connected worker run the same installation of that software with the OS doing the magic of allowing multiple sessions? This would seem to necessitate some pretty beefy hardware requirements for that box, yes? How do you quantify these requirements when configuring a new build?

With TS, is the end-user experience the same as with a regular workstation? Each worker would have their own desktop with their own shortcuts & Outlook configured for their email, etc.? Are these profiles all stored on the TS itself? Anything special required w/r/t backups for this box?

Additionally, I would need RD CALs in addition to regular user CALs for each remote worker, yes?

Don't let your brain overcomplicate it. You know how you can take a workstation and have different user profiles on it? Say we set up a computer that has QuickBooks and MS Office installed on it. And we set up profiles for you, Rob (Sky-Knight), and me. On a workstation, since Microsoft made the desktop OS allow only a single console session at a time, you, Rob, and I can set up our own unique profiles. So if you browse the C drive, you have C:\Users\HCHTech, and C:\Users\Rob, and C:\Users\Brian (me). Under each are \Documents, \Desktop, \Pictures, etc. And also a unique user registry hive...and unique ProgramData, etc. Each profile has its own settings. My Outlook goes to my O365 mailbox, yours goes to yours, so on and so forth.

Yet we can all run QuickBooks and open the QBW file from the Q drive, even though it was installed only once. Granted, when installing many programs you need to tell the Terminal Server you're installing a program by doing "change user /install" at a CMD prompt and then running the install. That lets the OS handle older software with legacy install habits (like .INI files) so it works properly for many concurrent users. And when done, do "change user /execute" to tell the Terminal Server to go back to run/app mode.

Terminal Server allows all of this to run at once. No "one at a time" rules.
We can all run QuickBooks, and Word, and Excel, and Outlook, and browsers, and print, etc.
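To put that in concrete terms, here's roughly what the install dance looks like from an elevated prompt on the session host (the "change user" commands are built into Windows; the installer name is a made-up example):

Code:
# Switch the session host to install mode before running a legacy installer
change user /install

# ... run the application's setup here, e.g. .\QuickBooksSetup.exe ...

# Switch back to execute/run mode when the install finishes
change user /execute

# Check which mode the server is currently in
change user /query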
 
As YOS says, think of a TS (or to use the correct, modern term, a Remote Desktop Session Host) as a desktop PC that has multiple users. You (the admin) install all the required software and the users log in to their accounts to access it. The only differences being that, with an RDSH server, the users log in remotely and (usually) concurrently. An RDSH allows users to log in concurrently to their Windows accounts/profiles, however, you will need to individually check/test any required software's ability to run multiple instances. Most do, but some require additional configuration or licencing. For example, O365 must be the ProPlus edition and users will need E3 licences. And quite often I find that LoB software refuses to run multiple instances out-of-the-box. But in most cases a call to the software vendor and an .ini file tweak is all that's needed to fix it.

I'd recommend creating an RDSH 'collection' (previously known as a TS 'farm'), even if it's only a collection of 'one' for now. And, most importantly, virtualise everything. You'll need to assign the RD roles (Gateway, Connection Broker, Licencing) to a different (preferably virtual) server, then create a collection and add your virtual RD session host(s) to it. The reason for doing it this way is that you can easily add more RDSHs later, should you need to. For smaller businesses I usually run a minimum of 2 RDSH servers, hosted on separate physical servers, for load balancing and availability (this also enables you to block new logins to one of the RDSH servers for maintenance). If the users require a lot of different software and you need to add more RDSH servers in future, you can just clone an existing RDSH server and re-licence all of the software instead of having to re-install and reconfigure everything. Additionally, if a physical server needs to be replaced or requires maintenance, any virtual RDSH servers that it hosts can quickly and easily be live-migrated to a different physical server, with zero downtime.
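If you'd rather script that build order, here's a minimal sketch using the RemoteDesktop PowerShell module (all server and collection names here are hypothetical placeholders):

Code:
Import-Module RemoteDesktop

# Create the deployment: broker + web access on one VM, first session host on another
New-RDSessionDeployment -ConnectionBroker "rdbroker.corp.local" `
    -WebAccessServer "rdbroker.corp.local" `
    -SessionHost "rdsh1.corp.local"

# Create the collection of 'one' for now
New-RDSessionCollection -CollectionName "MainOffice" `
    -SessionHost "rdsh1.corp.local" `
    -ConnectionBroker "rdbroker.corp.local"

# Adding a second RDSH later is just two more lines
Add-RDServer -Server "rdsh2.corp.local" -Role "RDS-RD-SERVER" `
    -ConnectionBroker "rdbroker.corp.local"
Add-RDSessionHost -CollectionName "MainOffice" -SessionHost "rdsh2.corp.local" `
    -ConnectionBroker "rdbroker.corp.local"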

Here's an overview of one of my setups, which might help to explain ...

[Screenshot: RDS_Overview.PNG - overview of the RDS deployment and roles]

There are 2 RDSH 'collections' in this case (for 2 different sites) and 4 RDSH servers (assigned 2 per site). All the other RD roles you see here are installed on a single (and separate) virtual server, which runs on a single host server and uses Hyper-V Replica to maintain an offline copy of the VS on another physical server. Load balancing automatically splits users between all available RDSH servers (unless you choose to block access to one or more RDSH servers for maintenance).




The advantage is... they can move to new premises with ZERO secured and air-conditioned space for servers. Their entire onsite infrastructure congeals into a wall-mounted rack enclosure.

Agreed, but I think it's important to point out the MAJOR disadvantages too: With cloud services you're building your (or your customer's) entire infrastructure on someone else's land. Now, while it might be unlikely that services run by the likes of Microsoft will disappear overnight, you're still putting a lot of trust in such services, often without any backup plan should all of that data and infrastructure become inaccessible, for whatever reason (catastrophic failure, natural disaster, political, hack/compromise, etc). Cloud services are custodial services that will (all being well) grant you access to the infrastructure and data they host for you. You don't technically 'own' the data held on cloud services and access to that data can be taken away (intentionally or unintentionally) at any time.
Your servers, your data -- Their servers, their data.
 
Wow, thanks for the overviews - that does help a lot. I've gotta get one of these set up as a lab so I can play with it. Based on my past conversations with this client, I'm pretty sure they will lean towards an on-prem setup, but that may be my own bias sneaking through. I did move their email to O365 a couple of years ago, but that was a long conversation. This will be a big job no matter how we do it, good for the balance sheet, but outside of my box, that's for sure. Fun times ahead.
 
For my setups, I've not gotten as fancy as Moltuae.
I've combined the gateway role on the same box as the terminal server itself (guest instance). I haven't done multiple servers for redundancy...every client I have a terminal server at was on Datto.

I believe in giving them adequate horsepower. The biggest one I have, doing both the TSGateway role and the terminal server itself, can have as many as 35 concurrent users at peak (typically in the mid-teens though)...running a pretty bloated (heavy) healthcare app on the Progress database engine. If they were to grow more I would likely add 2x more servers...1x to move the TSGateway role to, and the other to add to the farm like he mentions. I always wondered at what point the gateway role really needs to be on its own server for performance purposes...but to be honest, the box I mention here runs dang good. This client got purchased by a big statewide hospital, so...pretty soon not my problem anymore. Had them for over 20 years...hate losing them.

Anyways, as I'm sure you know, do not open/forward port 3389 (nor any alternate ports for direct RDP).....it'll get pillaged and plundered quickly from exploits. ONLY allow it through TSGateway, that has a proper SSL cert, and if you allow access from the outside world, it comes in over port 443 only. I'm not aware of any exploits on TSGateway yet. Or if you want to force them to VPN in first...and then RDP in...even better. Down the road not too far along I will be closing my port 443 to TSGateway and forcing VPN to be used...as I'm sure TSGateway will be exploited eventually.
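If you want a belt-and-braces layer on top of not forwarding 3389 at the perimeter, you can also restrict RDP on the session host itself to the gateway's internal address. A sketch only - the gateway IP is a placeholder, and you'll likely want another rule for your management subnet so you don't lock yourself out:

Code:
# On the session host: disable the default any-source RDP rules...
Set-NetFirewallRule -DisplayGroup "Remote Desktop" -Enabled False
# ...then allow 3389 only from the RD Gateway's internal IP
New-NetFirewallRule -DisplayName "RDP from RD Gateway only" -Direction Inbound `
    -Protocol TCP -LocalPort 3389 -RemoteAddress "192.168.1.10" -Action Allow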

Performance-wise...depends on how much horsepower you throw at it. Can be damn snappy! The LOB app may have a "terminal server specific" setup. The server I mention above runs on Progress. The regular workstation users run it from a dedicated server for this app...they map a drive and their client works with the Progress database on that server. For terminal servers, however, they have a specific install...where the "heavy code" sits locally on the terminal server, so the TS clients don't pull much across the network; it runs most of it locally on the TS's disk subsystem. Helps performance. And it maintains replication with the primary database server.

Should fly with SSDs and lots of RAM and cores, and if virtualized....hopefully multiple good Intel NICs on the host that you can spread the load with.

I do some GPO settings which apply via loopback processing to users when they're on the terminal server, restricting access to certain things (like Control Panel, CMD prompt, registry, browsing the local C drive, etc). Lots of guides out there on the internet on making policies to "lock down terminal server". Oh...and removing the ability to "shut down". Don't want users accidentally shutting down the server. I put a "Log Off" icon on the "public desktop"...making it easy for everyone to log out.
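If you'd rather script that logoff icon than create it by hand, something like this does it (logoff.exe ships with Windows):

Code:
# Drop a "Log Off" shortcut on the public desktop of the terminal server
$shell = New-Object -ComObject WScript.Shell
$lnk = $shell.CreateShortcut("C:\Users\Public\Desktop\Log Off.lnk")
$lnk.TargetPath = "C:\Windows\System32\logoff.exe"
$lnk.Save()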
 
I put a "Log Off" icon on the "public desktop"...making it easy for everyone to log out.
I do that too. And I encourage the users to save their work and log off (rather than disconnect) when they're done.

I don't use any access restriction GPOs however, because all of the users have standard/non-admin accounts, so they're already very restricted in what they can do, but I do use lots of GPOs for things like shortcuts, drive maps and printer deployment.
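For anyone who wants to enforce that rather than just encourage it, session limits can be set per collection. A sketch, with a hypothetical collection name and example time limits:

Code:
# Disconnect idle sessions after 8 hours; log off disconnected sessions after 2
Set-RDSessionCollectionConfiguration -CollectionName "MainOffice" `
    -IdleSessionLimitMin 480 `
    -DisconnectedSessionLimitMin 120 `
    -ConnectionBroker "rdbroker.corp.local"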
 
When you have multiple TS VMs for load balancing, does the TS Gateway VM just handle that (directing new connections appropriately)?
Yes. It does so both for 'full' Remote Desktop logins and RemoteApps. If a user is already logged in (ie a disconnected session), they will be directed to the RDSH server they were previously using, to allow them to continue their session, otherwise the gateway (technically the broker, as I understand it), will log the user into the least 'loaded' RDSH. Speaking of RemoteApps, that's another useful feature of this setup. Even if you don't need full Remote Desktop access for all users, domain users can be given access to software installed on the RDSH servers via RemoteApps. However, bear in mind that all RDSH servers in the collection must have the software installed for it to work and that any software which attempts to launch another app (eg Accounting software that spawns Excel/Outlook instances) may not work as expected.
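Publishing a RemoteApp, if you go that route, is a one-liner per app. The names and path below are hypothetical, and as noted above the executable must exist at the same path on every RDSH in the collection:

Code:
New-RDRemoteApp -CollectionName "MainOffice" -DisplayName "LOB App" `
    -FilePath "C:\Program Files\LOBApp\LOBApp.exe" `
    -ConnectionBroker "rdbroker.corp.local"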

You can see the load balancing working in 'Collections' (below). Here you can also right-click an RDSH server and choose 'Do not allow new connections', which is handy for maintenance or for forcing new logins onto a different RDSH server. I use this feature frequently if a user calls to say they're having issues. It's much easier (and quicker for the user) to ask them to log out and back in than to attempt to fix the issue there and then. If logging into a different RDSH fixes the issue they're having, I'll just keep all new logins to the problem RDSH disabled until I have time to look at it.
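That right-click option maps to a single cmdlet if you'd rather script your maintenance windows (host/broker names hypothetical; valid values are Yes, NotUntilReboot and No):

Code:
# Drain a session host: existing sessions stay, new logins go to the other host(s)
Set-RDSessionHost -SessionHost "rdsh1.corp.local" -NewConnectionAllowed No `
    -ConnectionBroker "rdbroker.corp.local"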

This screenshot is from the same setup as before, which contains 4 RDSH servers and 2 collections (1 per site). With a single collection, load-balancing is automatic across all connected RDSH servers. By creating multiple collections, you can introduce some degree of manual control if needed. I have a collection here for each physical site (of the same company). Users are given 2 shortcuts, named according to site, and are asked to use the appropriate shortcut. When all 4 RDSH servers are online, the shortcut will direct them to the relevant collection, where they will be directed to one of two RDSH servers. All of the RDSH servers are identical however, so the setup also enables users to self-diagnose any issues by logging in using the other site's shortcut when necessary. Additionally, staff will sometimes travel from one site to another. Having the ability to log into either collection enables them to continue working where they left off when they arrive at the other site.

[Screenshot: RDS_Collections.PNG - the Collections view showing load balancing across the RDSH servers]
 
I was just doing a pricing comparison of a RDS setup on a hosting provider vs Azure's WVD using Windows 10 for 150 users, and Windows 10 E3 licencing looks very attractive vs licencing servers and getting RDS CALs.
 
If you're doing that, you might want to look into Azure VDI hosting too. It can save money relative to a large RDS deployment because the virtual desktops power down when not in use.
 
I'll touch on hardware since it's not really been discussed yet. The usual disclaimer - requirements are going to vary massively depending on what your users do, what apps they run etc. My recommendations are for "general use" so MS Office Apps, web browsing, some average LoB apps etc. Also basing it on 50 users.


CPU
Higher core count > Higher GHz.
Terminal servers naturally have a lot of concurrent tasks. Just think about it... 50 users all working simultaneously, each with their own tasks running.

I'd probably look at dual Xeon Silvers. A couple of the 8c/16t models would fit nicely and keep you within Windows licencing. Anything over 16 cores requires additional licencing, if you weren't already aware.


Memory
The more the better. Memory is shared between everyone using the TS and you have no way of limiting memory usage by session. This means a couple of heavy users can eat up all available memory, causing everyone else to suffer.

My baseline is usually 1GB per concurrent user + 8GB for the OS and background tasks. Then round up a bit. So for 50 users that would be 58GB... round it up to 64GB and that sounds good to me.

By most standards this is considered overkill but at this scale the cost saving isn't worth the risk. Throwing in an extra 16GB or 32GB of RAM isn't going to break the bank so just get more than enough and never have to worry about it.
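The rule of thumb is easy to sanity-check for any headcount; a throwaway calculation:

Code:
# 1GB per concurrent user + 8GB for the OS, rounded up to a sensible DIMM total
$users = 50
$calcGB = $users * 1 + 8                                          # 58GB
$dimmTotals = 32, 48, 64, 96, 128, 192
$recommend = ($dimmTotals | Where-Object { $_ -ge $calcGB })[0]   # 64GB
"$users users -> ${calcGB}GB calculated, ${recommend}GB recommended"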


Storage
SSD. Always SSD.
A terminal server shouldn't require much storage. You have the OS and apps on the C drive, then almost everything else can be stored elsewhere. We have a terminal server for 40ish users and it has a single 126GB C drive.

Biggest storage hog is going to be user profiles if left unmanaged. 50 users' worth of AppData, Desktop, Documents, Downloads, etc. can add up really fast.

If you already have redirected folders, the easy option is using GPO to disable offline files. You would also want to prevent Outlook from running in Cached mode.

The preferred option is User Profile Disks. Too complex to go into real detail here, but the short story is that user profiles get saved in a VHD located on a separate fileserver. The VHD is mounted when they log in, so profiles follow them across different servers.
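Enabling UPDs is a per-collection setting. A sketch, assuming a hypothetical collection name and file share (the share must grant the RDSH computer accounts full control):

Code:
Set-RDSessionCollectionConfiguration -CollectionName "MainOffice" `
    -EnableUserProfileDisk -MaxUserProfileDiskSizeGB 20 `
    -DiskPath "\\fileserver\UserProfileDisks" `
    -ConnectionBroker "rdbroker.corp.local"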


GPU
Often overlooked but can be vital. The onboard graphics in a server were never designed for anything demanding.
Try watching a few HD YouTube videos on a Terminal Server with onboard graphics. Besides eating bandwidth, it's going to be choppy and a poor experience.
Try using Photoshop or AutoCAD and... you get the idea. It won't end well.

If you go this route you want a real enterprise/professional-grade GPU: Nvidia Quadro, AMD FirePro, etc. Stay away from standard consumer graphics cards as they don't have the proper driver support. For example, you can't use Discrete Device Assignment (DDA) to pass a consumer GPU into a VM.
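For reference, this is the shape of a DDA passthrough on a Hyper-V host (Server 2016+), following Microsoft's documented steps. The VM name and PCIe location path are placeholders - pull the real location path from Device Manager, and disable the device on the host first:

Code:
$vm = "RDSH1"
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"   # hypothetical - look up yours

# MMIO settings Microsoft recommends for GPUs (run with the VM powered off)
Set-VM -Name $vm -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB

# Dismount the GPU from the host, then assign it to the VM
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vm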
 
If you're doing that, you might want to look into Azure VDI hosting too. It can save money relative to a large RDS deployment because the virtual desktops power down when not in use.

I think we're talking about the same thing; I'm not aware of anything other than WVD being offered. And the last I checked, the session-aware up/down was just an automation script.

The LOB app on these things updates nightly, so we would have to put a bit more work into getting an automation script to boot up at night and run the update.
 
I think we're talking about the same thing; I'm not aware of anything other than WVD being offered. And the last I checked, the session-aware up/down was just an automation script.

The LOB app on these things updates nightly, so we would have to put a bit more work into getting an automation script to boot up at night and run the update.

I would push for 3yr reserved instances if budget / accounts allow it. You get enough of a discount that it's usually worthwhile.


Quick comparison of compute costs from the Azure calculator:

4x D16 v3 instances (16 cores, 64GB RAM)

3yr reserved instance = $1,163.77/mo
310 hours per month (roughly 10 hours per day) = $1,150.72/mo
 
Time to dig this thread back up. We got a 6-month reprieve on this project due to COVID, so I am setting up a lab server to be able to illustrate the experience to the client, as well as provide me with some no-risk setup experience. I've rejuvenated a cast-off R720: dual Xeon 16-core E's, 196GB RAM, with (so far) a single pair of 2TB SSDs in a RAID 1. Not sure if this is enough space, but it's not that hard to add another array if necessary. Getting some use out of that MAPS subscription! I've also gotten the LOB vendor to waive additional site fees for the lab project, so I will be able to install the app here along with a copy of their database.

I've read through 4 or 5 TS setup tutorials on the web, and my first reaction is boy, this would be easier if they didn't change the terminology for everything with the 2016 refresh - ha. If anyone has successfully used a particular one, links would be helpful, otherwise I'll take the normal route and just pick one and deal with problems as they occur.

So on my host, I think I need a DC VM, an RDS Gateway VM & a TS VM. I'll be putting this on its own VLAN and assigning it one of my unused public IPs. I have a budget from the client for things like SSL certs, etc. to make sure this will work like a real setup; otherwise this is on my own education nickel.
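Once the base deployment exists (per Moltuae's notes earlier in the thread), adding the gateway and licensing roles is scriptable too. A sketch with made-up lab names:

Code:
# Add the RD Gateway and Licensing roles to the gateway VM
Add-RDServer -Server "rdgw.lab.local" -Role "RDS-GATEWAY" `
    -ConnectionBroker "rdgw.lab.local" -GatewayExternalFqdn "remote.example.com"
Add-RDServer -Server "rdgw.lab.local" -Role "RDS-LICENSING" `
    -ConnectionBroker "rdgw.lab.local"
Set-RDLicenseConfiguration -LicenseServer "rdgw.lab.local" -Mode PerUser `
    -ConnectionBroker "rdgw.lab.local" -Force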

Comments welcome - I'm steeling myself for what I'm sure will be a load of self-inflicted frustration.
 
I've read through 4 or 5 TS setup tutorials on the web, and my first reaction is boy, this would be easier if they didn't change the terminology for everything with the 2016 refresh - ha. If anyone has successfully used a particular one, links would be helpful, otherwise I'll take the normal route and just pick one and deal with problems as they occur.
I prefer a good technical reference book. On my bookshelf I have Mastering Windows Server 2012 R2 (by Mark Minasi) and Windows Server 2012 Unleashed (Rand Morimoto, et al). I should update them to 2016/2019 versions sometime but most of the 2012 stuff is still relevant. When I have the time, I like to read a book cover to cover in order to gain a good and thorough understanding, rather than pick up disjointed pieces of information from online guides. Call me old-fashioned, but I also find reading a book much more enjoyable and relaxing, so I usually understand and absorb the information better as a result. I tend to use online guides if I'm in a rush or if I need to fill in some gaps in my knowledge.

So on my host, I think I need a DC VM, an RDS Gateway VM & a TS VM.
I'd recommend making the RDSH (TS) VM part of a 'Collection' (previously called a 'farm'), even if you only require one RDSH server currently. It makes adding more RDSH servers later easy. Having 2 or more RDSH servers is good for load balancing but is also very handy for redundancy, diagnosing issues and for maintenance, since it enables you to select which server(s) the users can log in to.
 
I prefer a good technical reference book.

Fair point - it's high time we updated our library as well - Ordered 'Mastering Server 2019' plus 'Mastering Windows Server 2016 Hyper-V' for good measure!

I've got "Exchange Server 2000", "Exchange Server 2010" and "SBS 2011 Administrator's Companion" holding up my monitors now. They were well-used in their day. Sadly, my "Master Red Hat Linux...Visually!" tome sits forlornly on the shelf, never opened...
 