Preserving UNC Path after Azure Files Migration using DFS-N


Overview

During a file migration to Azure, whether the target is Azure Files, a File Server, Azure NetApp Files (ANF), or any other system accessed through a Universal Naming Convention (UNC) path, it is often desirable to retain the original UNC path. Typically, Administrators have to do one or more of the following:

  • Update Logon Scripts
  • Update GPOs
  • Provide manual instructions to users on remapping their Network Drives to the new UNC Path
  • Etc…

With any of the above methods, there is often downtime involved. After the file migration has completed and users are ready for the new UNC path, they must wait for the Logon Scripts to finish running and remap the network drive, or wait for the GPO to apply. In the case of manual instructions, there is the potential for users not following them or something else going wrong. Each of these carries an inherent risk of failure that generates calls to the helpdesk.

There is a more seamless solution: the Root Consolidation feature of Distributed File System (DFS) Namespaces. This feature allows you to remap an old UNC Path to DFS-N, which then points to the new file shares as folder targets. The scope of this article isn't to explain what DFS-N is; it is a long-established technology and there are many articles covering it. See this page for an overview: DFS Namespaces overview | Microsoft Docs.

When using DFS-N, there are two deployment methods for your DFS Namespace Root Servers:

  • Domain-based namespace such as \\domain.com\
  • Standalone namespace such as \\server\

A Domain-based namespace allows you to have multiple root servers to provide automatic High Availability for your namespace. Unfortunately, Domain-based namespaces do not support Root Consolidation. DFS Root Consolidation takes an old UNC Path and remaps it to your DFS Namespace, so the DFS Namespace then answers for the old UNC Path. Because Domain-based namespaces do not support DFS Root Consolidation, we will need to leverage a Standalone Namespace. And the way to make a Standalone Namespace Highly Available is a Windows Server Failover Cluster.

Therefore, in this article, I will walk through how to:

  1. Standing up a Windows Server Cluster in Azure with 2 DFS Namespace Standalone Servers with the following characteristics:
    1. Windows Server 2022 as the Operating System.
    2. Joined to our Active Directory Domain
    3. Two Shared Data Disks: one for the quorum disk and one for DFS Namespace
  2. Standing up an Azure Load Balancer to Front-End our 2 DFS Namespace Standalone Servers. This is required because Azure does not support Gratuitous ARP for the Windows Cluster Client Access Point.
  3. Setting up a DFS Namespace to target an Azure Files UNC Path
  4. Remapping an old Windows File Server's FQDN to the DFS Namespace and configuring DFS Root Consolidation

If you are following along, you will want to deploy the following as we will not be walking through these:

  • An Active Directory Domain Controller (can be deployed in Azure or On-Premises). Be sure to set its Private IP to Static.
  • Two Azure Virtual Machines for our DFS Cluster that are joined to your Active Directory Domain. I will walk through creating and attaching the two shared Data Disks. Be sure to set their Private IPs to Static (see the snippet after this list).
  • Azure Files with a share with some sample data (can simply be a single empty .txt file).
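
If you prefer to set the private IPs to Static with PowerShell instead of the portal, here is a minimal sketch; the NIC name is hypothetical, so adjust it and repeat for each VM's NIC.

# Hypothetical NIC name; repeat for the second VM's NIC
$nic = Get-AzNetworkInterface -ResourceGroupName 'DFSClusterBlog' -Name 'DFSClusBlog01-nic'
$nic.IpConfigurations[0].PrivateIpAllocationMethod = 'Static'
$nic | Set-AzNetworkInterface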

DFS Cluster Preparation – Quorum Disk

You should have 2 Virtual Machines created with no Data Disks. Let’s go ahead and create our first Shared Managed Disk that will become our Quorum Disk. We will create this as a Standard SSD Disk. After our Cluster is built, we’ll create a second Shared Disk as a Premium SSD and add it to the Cluster.

Let’s create our Quorum Disk with the following parameters:

  • Same Region as our Virtual Machines. For us, that is CentralUS
  • CreateOption as Empty
  • MaxSharesCount as 2 as we have 2 DFS Namespace Servers
  • DiskSizeGB as 8 GB as we will only be leveraging it for Quorum. You can even go lower.

$quorumDiskConfig = New-AzDiskConfig -Location 'CentralUS' -DiskSizeGB 8 -AccountType StandardSSD_LRS -CreateOption Empty -MaxSharesCount 2

New-AzDisk -ResourceGroupName 'DFSClusterBlog' -DiskName 'QuorumDataDisk' -Disk $quorumDiskConfig 

After creating our disk, you will see the disk's object information returned in the PowerShell window.


You can also confirm the disk was created in your Resource Group.


Let’s attach the Quorum Disk to both our Virtual Machines, using the following commands:

$resourceGroup = "DFSClusterBlog"
$vm1Name = "DFSClusBlog01"
$vm2Name = "DFSClusBlog02"
$quorumDiskName = "QuorumDataDisk"

# Get the VM objects and the shared Quorum Disk
$vm1 = Get-AzVM -ResourceGroupName $resourceGroup -Name $vm1Name
$vm2 = Get-AzVM -ResourceGroupName $resourceGroup -Name $vm2Name

$quorumDisk = Get-AzDisk -ResourceGroupName $resourceGroup -DiskName $quorumDiskName

# Attach the shared Quorum Disk to both VMs on LUN 0
$vm1 = Add-AzVMDataDisk -VM $vm1 -Name $quorumDiskName -CreateOption Attach -ManagedDiskId $quorumDisk.Id -Lun 0
$vm2 = Add-AzVMDataDisk -VM $vm2 -Name $quorumDiskName -CreateOption Attach -ManagedDiskId $quorumDisk.Id -Lun 0

# Push the updated VM configurations to Azure
Update-AzVM -VM $vm1 -ResourceGroupName $resourceGroup
Update-AzVM -VM $vm2 -ResourceGroupName $resourceGroup

After running the preceding commands, you should see no errors, and the output should show the Shared Data Disk was successfully added to both Virtual Machines.


Go onto both Virtual Machines, open Disk Management, initialize the disk (GPT is fine), create a partition, and format it. We'll assign the Drive Letter Q for Quorum. Again, do this on both Servers. Later on, when we set up the Cluster, it will take care of Disk Ownership.
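
If you prefer PowerShell over Disk Management, a minimal sketch of the same step follows; it assumes the new shared disk is the only RAW disk on the server.

# Initialize, partition, and format the new shared disk as Q: (run on each node)
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter Q -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Quorum' -Confirm:$false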


DFS Cluster Setup

Now that we have our Quorum Disk on both DFS Servers, let’s go ahead and set up the Windows Cluster. On both DFS Servers, open Windows PowerShell as Administrator and run the following command:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

Once installation has completed, hop on your first DFS Server and open Administrative Tools > Failover Cluster Manager.


Add both servers:


Run through Cluster Validation and make sure there are no significant warnings or errors. Then give your Cluster a name of 15 characters or less.


Create the cluster and verify the cluster was created successfully.
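
For reference, the validation and cluster creation can also be scripted; here is a minimal sketch with a hypothetical cluster name and static IP (the IP must be an unused address in your subnet).

# Validate the nodes, then create the cluster
Test-Cluster -Node 'DFSClusBlog01','DFSClusBlog02'
New-Cluster -Name 'DFSCluster' -Node 'DFSClusBlog01','DFSClusBlog02' -StaticAddress '10.0.0.99'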


Verify both nodes are up.


Verify the disk is Online and it is owned by one of the DFS Cluster Nodes.


I double-clicked on Cluster Disk 1, which is assigned as the Disk Witness, and gave it a new name.


DFS Cluster Preparation – DFS Data Disk

Now that we have our Cluster operational with our Quorum Disk, let’s go ahead and create our second Shared Managed Disk that will become our DFS Data Disk. We will create this as a Premium SSD Disk.

Let’s create our DFS Data Disk with the following parameters:

  • Same Region as our Virtual Machines. For us, that is CentralUS
  • CreateOption as Empty
  • MaxSharesCount as 2 as we have 2 DFS Namespace Servers
  • DiskSizeGB as 256 GB (P15).

$dfsDiskConfig = New-AzDiskConfig -Location 'CentralUS' -DiskSizeGB 256 -AccountType Premium_LRS -CreateOption Empty -MaxSharesCount 2

New-AzDisk -ResourceGroupName 'DFSClusterBlog' -DiskName 'DFSDataDisk' -Disk $dfsDiskConfig

After creating our disk, you will see the disk's object information returned in the PowerShell window.


You can also confirm the disk was created in your Resource Group.


Let's attach the DFS Data Disk to both our Virtual Machines, using the following commands:

$resourceGroup = "DFSClusterBlog"
$vm1Name = "DFSClusBlog01"
$vm2Name = "DFSClusBlog02"
$dfsDiskName = "DFSDataDisk"

# Get the VM objects and the shared DFS Data Disk
$vm1 = Get-AzVM -ResourceGroupName $resourceGroup -Name $vm1Name
$vm2 = Get-AzVM -ResourceGroupName $resourceGroup -Name $vm2Name

$dfsDataDisk = Get-AzDisk -ResourceGroupName $resourceGroup -DiskName $dfsDiskName

# Attach the shared DFS Data Disk to both VMs on LUN 1
$vm1 = Add-AzVMDataDisk -VM $vm1 -Name $dfsDiskName -CreateOption Attach -ManagedDiskId $dfsDataDisk.Id -Lun 1
$vm2 = Add-AzVMDataDisk -VM $vm2 -Name $dfsDiskName -CreateOption Attach -ManagedDiskId $dfsDataDisk.Id -Lun 1

# Push the updated VM configurations to Azure
Update-AzVM -VM $vm1 -ResourceGroupName $resourceGroup
Update-AzVM -VM $vm2 -ResourceGroupName $resourceGroup

After running the preceding commands, you should see no errors, and the output should show the Shared Data Disk was successfully added to both Virtual Machines.


Go onto both Virtual Machines, open Disk Management, initialize the disk (GPT is fine), create a partition, and format it. We'll assign the Drive Letter N for DFS Namespace. Again, do this on both Servers. Later on, when we add DFS as a role to the Cluster, it will take care of Disk Ownership.


Cluster DFS Role Setup

On both DFS Servers, open Windows PowerShell as Administrator and run the following command:

Install-WindowsFeature "FS-DFS-Namespace", "RSAT-DFS-Mgmt-Con"

Once installation has completed, go onto one of your DFS Cluster Nodes, open your Cluster's Disks, and choose Add Disk.


Select the new disk.


The new disk has been added to the Cluster.
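
The same step can also be done from PowerShell on either node; this one-liner clusters any eligible disks that are not yet part of the cluster.

# Add disks that are visible to the cluster but not yet clustered
Get-ClusterAvailableDisk | Add-ClusterDisk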


Just as we did with the Quorum Disk, let’s double-click the new disk and give it a new name.


Now let’s go ahead and set up DFS in our Cluster. Right-Click Roles and choose Configure Role


Choose DFS Namespace Server


Give your DFS Client Access Point a name of 15 characters or less. This is the UNC name users would connect to if you were not using DFS Root Consolidation.


Select the new Cluster Disk we made available to the Cluster.


Create your first Namespace. As you will see later, we will create additional Namespaces, one for each UNC Path for which we want to leverage DFS Root Consolidation. It's ok if you don't quite know how DFS Root Consolidation works yet; you will gain an understanding of it as you read on.


Go ahead and complete the installation of your DFS Role. We will see it is in a failed state. There are two reasons for this:

  • We need to configure a Static IP
  • We will need to configure an Azure Load Balancer leveraging the same Private IP Address we configure on our DFS Role's Client Access Point. This is because Azure does not support Gratuitous ARP and thus we need an Azure Load Balancer to handle the Static Private IP of our DFS Client Access Point, DFSBlog.shudnow.net.

Let's open our DFSBlog Client Access Point, select Resources, and select our IP Address.


Double-click our IP Address and assign a Static IP Address. This will need to be an available IP Address in your Azure subnet. Again, this is the same IP Address for which we will create an Azure Load Balancer.
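
If you would rather script this step, a minimal sketch follows; the IP Address resource name, address, and subnet mask are hypothetical and must match your environment.

# Hypothetical resource name and addressing; adjust to match your DFS role's IP Address resource
Get-ClusterResource -Name "IP Address 10.0.0.0" |
    Set-ClusterParameter -Multiple @{ Address = '10.0.0.100'; SubnetMask = '255.255.255.0'; EnableDhcp = 0 }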


Now that we have a Static IP configured for our DFS Role, start the DFS Role.
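
The role can also be started from PowerShell with the FailoverClusters module; the group name below assumes the role is named DFSBlog.

# Bring the DFS role (cluster group) online
Start-ClusterGroup -Name 'DFSBlog'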


Our DFS Role is now running with all resources showing as Online.


In DNS, we can also see the appropriate DNS records have been created.


However, if you try to access \\DFSBlog\, nothing will appear. Again, this is because the Cluster is in Azure, which does not support Gratuitous ARP, so we will need an Azure Load Balancer.

Azure Load Balancer Setup

When creating our Load Balancer, make sure to deploy in the same Azure Region as your DFS Cluster. We’ll be creating a Standard Internal Load Balancer.


For the Front End, use the same IP Address that our DFSBlog Client Access Point was configured for.


For our Back End, we will add both our DFS Cluster Nodes.


Our DFS Load Balancing Rule will use HA Ports to allow any ports to flow to the DFS Servers.


The Health Probe will be configured for TCP port 135 (Remote Procedure Call, RPC).
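
If you prefer to create the Load Balancer with PowerShell, here is a minimal sketch; the virtual network, subnet, resource names, and Front End IP are hypothetical and must match your environment, and you will still need to add both DFS nodes' NICs to the backend pool afterwards.

# Look up the VNet/subnet that the DFS Cluster Nodes live in (hypothetical names)
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'DFSClusterBlog' -Name 'DFSClusterVNet'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default'

# Front End uses the same static IP as the DFS Client Access Point
$frontEnd = New-AzLoadBalancerFrontendIpConfig -Name 'DFSBlogFrontEnd' -PrivateIpAddress '10.0.0.100' -Subnet $subnet
$backEnd = New-AzLoadBalancerBackendAddressPoolConfig -Name 'DFSBackEndPool'
$probe = New-AzLoadBalancerProbeConfig -Name 'RPCProbe' -Protocol Tcp -Port 135 -IntervalInSeconds 5 -ProbeCount 2

# HA Ports rule: Protocol All with FrontendPort and BackendPort set to 0
$rule = New-AzLoadBalancerRuleConfig -Name 'DFSHAPortsRule' -FrontendIpConfiguration $frontEnd -BackendAddressPool $backEnd -Probe $probe -Protocol All -FrontendPort 0 -BackendPort 0

New-AzLoadBalancer -ResourceGroupName 'DFSClusterBlog' -Name 'DFSLoadBalancer' -Location 'CentralUS' -Sku Standard -FrontendIpConfiguration $frontEnd -BackendAddressPool $backEnd -Probe $probe -LoadBalancingRule $rule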


Create the Load Balancer resource and once created, in the Load Balancer, navigate to Insights.


Verify our two Backend Servers (our two DFS Cluster nodes) show as Healthy (green checkmarks).


Go back to the same server from which you previously tried, and failed, to access \\DFSBlog\ and retry.


In the existing DFS Namespace, named DFSNamespace, let's try adding our target. I will be using an AD DS-connected File Share that I use for FSLogix in my AVD Lab. On one of my DFS Cluster Nodes, let's open the DFS Management Console, which is also under Administrative Tools. We will notice our Namespace automatically shows up.


Let’s create a new folder in our namespace that will point to our FSLogix File Share.
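
The same folder can be created with the DFSN PowerShell module; this is a minimal sketch assuming a namespace root named DFSNamespace, a folder named Target, and a hypothetical Azure Files share path.

# Create a DFS folder pointing at the Azure Files share (hypothetical paths)
New-DfsnFolder -Path '\\DFSBlog\DFSNamespace\Target' -TargetPath '\\storageaccount.file.core.windows.net\fslogix'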


During creation, you will be prompted with the following error. Go ahead and select yes.


Our DFS Namespace now has our Target folder that has our FSLogix File Share as its target.


If we go back to our other server and browse to \\DFSBlog\Target, we should see some data.


Bingo! Our DFS Cluster is functional and DFS Targets are working. But how do I use DFS Namespaces to retain my original UNC Paths after migrating to Azure Files, a File Server in Azure, or Azure NetApp Files? We'll need to configure DFS Root Consolidation for that. Read on…

DFS Namespace Root Consolidation

DFS Root Consolidation is a feature that allows you to:

  • Remap a Server FQDN (we’re going to use the FQDN OldServer.shudnow.net) to point to the DFS Client Access Point (dfsblog.shudnow.net for us).
  • For the DFS Client Access Point, modify the Service Principal Name to contain the old server’s NetBIOS name and FQDN
  • Configure a Namespace to match the old server so DFS will accept the old server's NetBIOS name and FQDN as a Namespace.

On each DFS Cluster Node, run the following PowerShell commands as an Administrator and then reboot each server. Do them one by one and monitor Failover Cluster Manager to make sure the node you rebooted is operational prior to moving on to the next server.

# Create the registry key used by DFS Root Consolidation (-Force creates any missing parent keys)
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" -Force

# Enable Root Consolidation retries
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" -Name "ServerConsolidationRetry" -Value 1

On one of the DFS Cluster Nodes, open the DFS Management Console and create a new Namespace.

Select your DFS Cluster’s Client Access Point as the namespace server.


The Namespace Name will be the NetBIOS name of our old server with a # in front of it. It is important you add the # so DFS knows this Namespace is leveraging Root Consolidation.


Create your Namespace and validate the new Namespace has been created.


In Failover Cluster Manager, we can also see that, under our DFS Client Access Point, our new Namespace has automatically been added to the Cluster.


In the DFS Management Console, let’s add a new folder to our new #oldserver Namespace pointing to the same target we previously used, our FSLogix File Share.


Let's hop into the DNS Management Console on our Domain Controller, delete the oldserver DNS record, and create a new A record pointing to the Front End IP of our Azure Load Balancer, which is also the IP Address of our DFS Cluster Client Access Point.
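
If you prefer to do this with the DnsServer PowerShell module on the Domain Controller, a minimal sketch follows; the IP address is hypothetical, so use your Load Balancer Front End IP.

# Remove the old host record and re-create it pointing at the Load Balancer Front End IP
Remove-DnsServerResourceRecord -ZoneName 'shudnow.net' -Name 'oldserver' -RRType A -Force
Add-DnsServerResourceRecordA -ZoneName 'shudnow.net' -Name 'oldserver' -IPv4Address '10.0.0.100'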


Using a Domain Administrator account, run the following commands in an elevated Command Prompt to delete the Service Principal Names from the old computer object for OldServer.

setspn -d HOST/oldserver oldserver
setspn -d HOST/oldserver.shudnow.net oldserver 

And run the following commands to add the Service Principals to the DFS Client Access Point:

setspn -a HOST/oldserver DFSBlog
setspn -a HOST/oldserver.shudnow.net DFSBlog

In Active Directory Users and Computers, as long as View > Advanced Features is turned on, open the DFSBlog Client Access Point Computer Object, go to Attribute Editor, servicePrincipalName, and verify the two new Service Principal Names have been added.
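
You can also verify from the command line; setspn -l lists the SPNs registered on the DFSBlog computer object.

setspn -l DFSBlog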


Now let’s try going to the \\oldserver\target UNC Path and we should see our Azure File Share data.


Conclusion

As you can see, DFS Root Consolidation can be a handy tool to leverage if you require, or desire, to keep your old UNC Paths around as you migrate your data to Azure. If you have any questions, feel free to add a comment.
