Drive mapping with group policy

HCHTech

I've been doing a lot of this lately, and I'm wondering if I'm doing it right. I'm using Server 2019 for my latest go round, but Server 2016 is largely similar.

When you first create the group policy, you have to choose whether the policy has a "Create" action or an "Update" action. If I'm replacing an old DC with a new DC, I have been choosing "Update" because the workstations will already have those maps pointing to the old server (on the old domain profile, so maybe that doesn't matter).

If, on the other hand, I'm putting in a new server, then I have been choosing the "Create" action.

Then, on the Common tab of the policy object, I have been checking "Run in logged on user's security context" box, and then hitting the "Targeting" button to assign that map to the applicable security groups in AD.

Then, once this is done, I link the policy to the .local domain in GP manager, and usually, it works after running gpupdate /force on the workstations. If that doesn't work, then it usually works after a reboot of the workstation.

I have had some problems where the policy doesn't appear to get linked correctly to the .local domain. If I delete the policy and start over, this time, right-clicking on the .local domain in GP Manager and choosing "Create a GPO in this domain and link it here", then creating the policy again, it seems to work every time. The wizard involved in this case must be doing something I'm not when linking it manually.

I'd like to understand how this works a bit more. Is there something special needed when you manually link a policy to the domain, as opposed to using the "create and link" wizard? Sometimes the manual way works, so I think my procedure is good. But sometimes it doesn't, so after cursing at myself for not using the wizard in the first place, I delete the policy and start over with the wizard, and it always seems to work.
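For what it's worth, the wizard's create-and-link behavior can be reproduced from PowerShell, which sidesteps any manual-linking missteps entirely. A sketch, assuming the GroupPolicy module on a DC; the GPO name and the domain DN are placeholders for your environment:

```powershell
# Create the GPO and link it to the domain root in one pipeline --
# roughly what "Create a GPO in this domain and link it here" does.
Import-Module GroupPolicy

New-GPO -Name "Map S Drive" -Comment "Maps S: for Accounting" |
    New-GPLink -Target "DC=corp,DC=local" -LinkEnabled Yes
```

You'd still open the resulting GPO in the editor to add the drive-map preference item and its targeting, but the link itself is created in the same operation as the GPO.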

Lastly, when I have policies that only apply to a select group of users in the same domain, I create a new security group in AD and make sure all of the folks for the map are in that group. Then, in the Targeting screen of the policy, I apply the map to that security group. Inevitably, though, it will work for all but one user in the group. That same security group is used to give access rights to the directory in question, and I can manually browse to the directory from that user's workstation, so I know the rights are correct. Usually, reboots all around fix this, or maybe removing that user from the security group and then re-adding them. I don't know, it all seems a bit wonky. Shouldn't this stuff just work? In the old days, logon scripts just always worked, you know?
 
We've phased out mapped drives in favour of network locations, which require more GPO work, but in the end come with fewer complications.

This is how I remember the complications with drive mapping working. If I'm remembering wrong, someone can correct me.

When a mapped drive is changed via GPO, at least one synchronous GP update operation is required. So if you are not enforcing synchronous GPO updates at login, I believe it is possible for the asynchronous update to wipe out the old mapped drive, and then the new mapped drive will only appear when the synchronous update happens. If that is not enforced, it won't necessarily be at the next login.
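If you do want to force synchronous foreground processing, the usual knob is the "Always wait for the network at startup and logon" policy (Computer Configuration > Administrative Templates > System > Logon). Under the hood it sets the registry value below; a sketch for a test workstation — in production you'd set the policy itself rather than touch the registry directly:

```
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\CurrentVersion\Winlogon" /v SyncForegroundPolicy /t REG_DWORD /d 1 /f
```

The trade-off is slower logons, since the workstation waits for the network and processes policy before presenting the desktop.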
 
If your domain is correctly deployed, and you're using the "Update" action via Group Policy PREFERENCES to make the drives, then a change to a user policy takes effect on the next policy application cycle. Which by default is roughly every 90 minutes.

Now computer preferences are a bit annoying because they happen on reboot. But neither of them process on login, though the user preferences will appear to do so.

If you've got stations that aren't working you need to figure out what's up with that station. This situation is also indicative of a DNS fault supporting the domain itself. Basically, you've got a crack in your foundation and AD isn't healthy. The OP's presentation of the process of doing this is correct. Something else is going on.
 
I will check that out, thanks, @trevm999 . In this last install, it was an architect using Autodesk's Revit software. Fun fact, unless you exactly replicate both the drive map and UNC path of the old server, all of the central models will have to be recreated. That means, the new server has to have the same name as the old, same IP address as the old, the data share has to have the same name and path as the old, and, if you were using mapped drives on the old server (in our case, it was SBS2011, and we were using mapped drives), the drive maps on the workstations have to be the same as they were on the old domain.

When I expressed my disbelief to Autodesk support, they just shrugged - "Yeah, sorry. That's just the way it is". Awesome - just awesome. Somehow, I doubt they say anything like "Hope you like your current server name, cause you'll be keeping that name forever" when they onboard a new client.
 
When you make a change to a user policy it takes effect on the next policy application cycle. Which by default is roughly every 90 minutes.

It might just be that simple. When you're replacing a server, there just isn't time to say "Ok, let's wait an hour to see if that worked" - haha. I (incorrectly) assumed that running a gpupdate /force on the workstations would pick up the changes. Won't be the first time my lack of patience got me in trouble.
 
It might just be that simple. When you're replacing a server, there just isn't time to say "Ok, let's wait an hour to see if that worked" - haha. I (incorrectly) assumed that running a gpupdate /force on the workstations would pick up the changes. Won't be the first time my lack of patience got me in trouble.

gpupdate /force simply forces a policy update cycle right now. It's a terrific way to get the changes to the station at this magic moment, and there's nothing wrong with doing that. But, if you're using user policy preferences to map drives, you don't need to login/logout, or reboot after that, it should just happen.
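If you want to confirm on the spot that the refresh actually delivered the policy, the built-in tools cover it. A sketch, run on the workstation as the affected user (the report path is a placeholder):

```
:: Refresh user policy only, right now
gpupdate /target:user /force

:: Summary of applied GPOs and the user's security group membership
gpresult /r /scope:user

:: Or a full HTML report, which also shows preference items
gpresult /h C:\Temp\gp-report.html
```

The /r summary is the quick check for "did my GPO apply, and is the user actually in the targeted group"; the HTML report is the deep dive when it didn't.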

Of course, if you used DFS, then when you install a new server you just configure DFS to point at the new server and leave your policies alone... MUCH less work.

Oh, and one more catch... if you ever have more than one domain controller, you also have AD propagation delays to contend with.
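When you're in a hurry and can't wait out those propagation delays, you can push replication manually with the built-in repadmin tool. A sketch, from an elevated prompt on a DC:

```
:: Push replication of all partitions to all partners, across site links
repadmin /syncall /AdeP

:: Then sanity-check replication health
repadmin /replsummary
```

That won't fix a broken replication topology, but it removes "we just haven't converged yet" from the list of suspects.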
 
The only things that shouldn't be mapped via DFS (IMO) are things like a Q drive for Quickbooks, pointing back to the server that the data actually lives on. For anything else, the ability to add a new peer, replicate (with seeding if appropriate) then change policy so DFS only goes to the new server is great. You can leave the old server in place for a while and check for connections to it in case there were any manually mapped drives at some point.
 
I'm rapidly coming to the conclusion that Quickbooks shouldn't use a mapped drive. It's vastly more stable using UNC paths to the file, the problem is the users... they HATE using UNC paths, so it winds up being more work.

But I've run Quickbooks through DFS with no issues for ages. Now, one target for the share... NO replication... Just the simple DFS path goes to this share over here type stuff.

And to be clear, the users still use a mapped drive. Q for Quickbooks is still there, Group Policy makes it, the policy simply references the DFS path.
 
You should definitely try to keep an abstraction layer between the client and server, but depending on the situation, a DNS CNAME record might make more sense than DFS-N if there is just one server and one site. Using DFS-N namespaces adds another thing to manage.
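For reference, the CNAME approach is a one-liner on a Windows DNS server; all the names below are placeholders. One caveat worth knowing: SMB on the target server has historically rejected connections to an aliased name unless strict name checking is relaxed, hence the second (registry) tweak, applied on the file server itself:

```powershell
# Alias "files.corp.local" to the real file server; repoint this one
# record at migration time instead of touching every mapped drive.
Add-DnsServerResourceRecordCName -ZoneName "corp.local" `
    -Name "files" -HostNameAlias "fs01.corp.local"

# Classic DisableStrictNameChecking tweak so the server accepts
# SMB connections addressed to the alias (run on the file server)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
    -Name DisableStrictNameChecking -Type DWord -Value 1
```

Clients then map against \\files.corp.local\share, and the record travels with whatever server currently answers to it.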
 
You can avoid that nightmare with the magic of DFS.

While there are two servers here, one is only a DC. All of the shared files are on the app server. I don't see any advantage to using DFS when there aren't multiple servers. All of my clients are small enough that I've never run into a situation where I thought DFS might be the answer. I could be wrong, of course, but this seems like unneeded complexity for no benefit.
 
While there are two servers here, one is only a DC. All of the shared files are on the app server. I don't see any advantage to using DFS when there aren't multiple servers. All of my clients are small enough that I've never run into a situation where I thought DFS might be the answer. I could be wrong, of course, but this seems like unneeded complexity for no benefit.

Configure DFS, and in the future when they need a new file server, you setup shares on the new platform, hook them into DFS as targets, replicate the files and when you're ready, disable the original target. None of your scripts need updated, no GPO mods... it all just works because the namespace is still intact along with the domain.

I just wish you could put printers in there...

File server dies? Restore the stuff onto another server in a mad panic, setup shares and change targets... BOOOM network online.

IMHO not using DFS is just signing yourself up for more work.
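That migration dance maps onto a couple of cmdlets from the DFSN module. A sketch, assuming an existing namespace; every path here is a placeholder:

```powershell
# Assumes an existing namespace folder \\corp.local\files\data
# currently targeting the old server fs01.
Import-Module DFSN

# Add the new server as a second target for the folder...
New-DfsnFolderTarget -Path "\\corp.local\files\data" -TargetPath "\\fs02\data"

# ...then, once the files are replicated or seeded across,
# retire the old target without deleting it
Set-DfsnFolderTarget -Path "\\corp.local\files\data" `
    -TargetPath "\\fs01\data" -State Offline
```

Leaving the old target offline rather than removed gives you an instant rollback while you watch for stragglers still hitting the old server directly.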
 
Configure DFS, and in the future when they need a new file server, you setup shares on the new platform, hook them into DFS as targets, replicate the files and when you're ready, disable the original target. None of your scripts need updated, no GPO mods... it all just works because the namespace is still intact along with the domain.

I just wish you could put printers in there...

File server dies? Restore the stuff onto another server in a mad panic, setup shares and change targets... BOOOM network online.

IMHO not using DFS is just signing yourself up for more work.

But what value is DFS-N providing that a CNAME record does not provide? Using DFS-N means that you now have to think about migrating the namespace server role as a separate task when swapping servers, and it's not like it's fully stored in AD. Abstractions add complexity, but I think using DFS-N instead of DNS in simple environments adds unnecessary complexity.
 
But what value is DFS-N providing that a CNAME record does not provide? Using DFS-N means that you now have to think about migrating the namespace server role as a separate task when swapping servers, and it's not like it's fully stored in AD. Abstractions add complexity, but I think using DFS-N instead of DNS in simple environments adds unnecessary complexity.

More rapid targeting changes. DNS takes time: clients have to time out or the record has to expire before you can redirect the flow. DFS is instantaneous.

It's the difference between an abstraction layer that's designed for this purpose, and one that isn't. And DFS isn't much complexity at all, it's two additional worries and if the service is running on the DC itself, barely noticeable. It's certainly no more noticeable than deciphering DNS.

Oh, and one more thing: SMB v3 has encryption support, and using a DNS CNAME breaks this. Though I'm not sure if there are any real issues attached to that at this time. The only time you really need that encryption support is when you're connecting to an SMB server over the Internet. Something that used to be only for the insane, but now is possible, safe, and directly supported by Azure.
 
I don't think I've ever tried DFS with Quickbooks, and for the clients I'm dealing with I'll probably just stick with direct to the server. Pretty sure all QB is using it for is to locate the server so it can talk to its own service anyway, and I could see DFS breaking that.

As for "what's the advantage if there's only one server" there isn't much - until you need there to be a second server, even if only temporarily during things like an OS EOL event (bye 2008R2!).
 
@fencepost, exactly. I used to be the guy that never used DFS because single server why?!? Then I became an MSP, and an MSP wants to be lazy. Server migrations with DFS in place are so much easier! But at this point it's almost moot because I'm not going to migrate anyone to this platform anymore, new migrations are to Teams.

And if your server exists just to share Quickbooks, manually juggling a single share to 5-10 computers is pretty painless. Sure DFS is painless too, but it's not a time saver either at that scale.
 
I am that guy, so I guess I'll have to play around with this and see. Something to add to the list for after the W7EOL deluge is over.

For tiny shops, the one last thing it's great for are those places that have shares on workstations for whatever reason. DFS can unify the name space, so you aren't having to remember all that junk anymore. \\domain.local\files, oh look... my stuff. Great place to put a shortcut for the owner too, because now he or she can easily navigate every file storage folder in the company.

But yeah, I inherited one and now every network that has a domain controller gets one.
 