Microsoft SQL Server Always On Failover Cluster Instances (FCI) let you run a single SQL Server instance across multiple Windows Server Failover Cluster (WSFC) nodes. At any point in time, one of the cluster nodes actively hosts the SQL instance. In the event of a failure, WSFC automatically transfers ownership of the instance's resources to another node.
SQL Server FCI requires data to be located on shared storage so that it can be accessed across all WSFC nodes. This guide describes how you can deploy a SQL Server 2022 failover cluster instance and use persistent disks in multi-writer mode as shared storage.
In an on-premises environment, you can let WSFC perform ARP announcements if a failover occurs to notify network equipment about an IP address change. Google Cloud, however, disregards ARP announcements. Consequently, you must implement one of the following two options:
- Internal load balancer (see Running Windows Server Failover Clustering)
- Distributed network name (DNN) (see Configure a DNN for failover cluster instance)
This guide assumes that you have already deployed Active Directory on Google Cloud and that you have basic knowledge of SQL Server, Active Directory, and Compute Engine.
Objectives
- Deploy a WSFC comprising two SQL Server VM instances and a third VM instance that acts as a file share witness.
- Deploy a SQL Server FCI on the WSFC.
- Configure a load balancer or distributed network name (DNN) to route traffic to the SQL Server FCI.
- Verify that the cluster is working by simulating a failover.
Costs
This tutorial uses billable components of Google Cloud, including:
- Compute Engine
- Cloud Load Balancing (not necessary with a DNN configuration)
Use the pricing calculator to generate a cost estimate based on your projected usage.
Before you begin
To complete this guide, you need the following:
- An Active Directory domain with at least one domain controller. You can create an Active Directory domain by using Managed Microsoft AD. Alternatively, you can deploy a custom Active Directory environment on Compute Engine and set up a private DNS forwarding zone that forwards DNS queries to your domain controllers.
- An Active Directory user that has permission to join computers to the domain and can log in by using RDP. If you're using Managed Microsoft AD, you can use the `setupadmin` user.
- A Google Cloud project and VPC with connectivity to your Active Directory domain controllers.
- A subnet to use for the WSFC VM instances.
Make sure you've reviewed the restrictions of persistent disks in multi-writer mode and select a zone in which persistent disks in multi-writer mode are available.
To complete the guide, you also need a Google Cloud project:
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.
Preparing the project and network
To prepare your Google Cloud project and VPC for the deployment of SQL Server FCI, do the following:
In the Google Cloud console, open Cloud Shell by clicking the Activate Cloud Shell button.
Initialize the following variables:
VPC_NAME=VPC_NAME
SUBNET_NAME=SUBNET_NAME

Where:

- VPC_NAME: the name of your VPC
- SUBNET_NAME: the name of your subnet
Set your default project ID:
gcloud config set project PROJECT_ID

Replace PROJECT_ID with the ID of your Google Cloud project.

Set your default zone:

gcloud config set compute/zone ZONE

Replace ZONE with the name of the zone that you want to deploy in.
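Several later commands in this guide derive the region name from the configured zone, either with `${ZONE::-2}` or with a `rev | cut -c 3- | rev` pipeline. Both rely on the fact that a zone name is its region name plus a one-letter suffix (for example, `us-central1-f` is in `us-central1`). As a minimal sketch of the same derivation (the zone value here is hypothetical):

```shell
# Hypothetical zone, for illustration only
ZONE="us-central1-f"

# Strip the trailing "-<letter>" zone suffix to obtain the region name
REGION="${ZONE%-*}"

echo "$REGION"   # us-central1
```

The `${ZONE%-*}` expansion removes the shortest trailing `-…` segment, which is equivalent to the two idioms used later for standard zone names.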
Create firewall rules
To allow clients to connect to SQL Server, to allow communication between the WSFC nodes, and to enable the load balancer to perform health checks, you need to create several firewall rules. To simplify the creation of these firewall rules, you use network tags:
- The two WSFC nodes are annotated with the `wsfc-node` tag.
- All servers (including the witness) are annotated with the `wsfc` tag.
Create firewall rules that use these network tags:
- Return to your existing Cloud Shell session.
Create firewall rules for the WSFC nodes:
SUBNET_CIDR=$(gcloud compute networks subnets describe $SUBNET_NAME \
  --format=value\('ipCidrRange'\))

gcloud compute firewall-rules create allow-all-between-wsfc-nodes \
  --direction=INGRESS \
  --action=allow \
  --rules=tcp,udp,icmp \
  --enable-logging \
  --source-tags=wsfc \
  --target-tags=wsfc \
  --network=$VPC_NAME \
  --priority 10000

gcloud compute firewall-rules create allow-sql-to-wsfc-nodes \
  --direction=INGRESS \
  --action=allow \
  --rules=tcp:1433 \
  --enable-logging \
  --source-ranges=$SUBNET_CIDR \
  --target-tags=wsfc-node \
  --network=$VPC_NAME \
  --priority 10000
Create a firewall rule that allows health checks from the IP ranges of the Google Cloud probers (not necessary with a DNN configuration):

gcloud compute firewall-rules create allow-health-check-to-wsfc-nodes \
  --direction=INGRESS \
  --action=allow \
  --rules=tcp \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=wsfc-node \
  --network=$VPC_NAME \
  --priority 10000
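The `allow-sql-to-wsfc-nodes` rule admits any client whose IP address falls inside the subnet's CIDR range (`$SUBNET_CIDR`). If you need to check whether a particular client address is covered by a range, the membership test is plain bitwise arithmetic. This sketch (with hypothetical addresses, not tied to any gcloud feature) shows the logic:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed if $1 (an IP) falls inside $2 (a CIDR such as 10.0.0.0/24)
in_cidr() {
  local ip net bits mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 10.0.0.25 10.0.0.0/24 && echo "in range"
```

An address is in the range when its network bits, after masking, match the network bits of the CIDR's base address.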
Create VM instances
You now deploy two VM instances for the failover cluster. At any point in time, only one of these VMs serves as the active FCI node while the other node serves as the failover node. The two VM instances must:
- be located in the same zone so that they can access the same persistent disks.
- have Windows Server Failover Clustering and SQL Server installed.
- have Compute Engine WSFC support enabled.
You use a SQL Server premium image which has SQL Server 2022 preinstalled.
To provide a tie-breaking vote and achieve a quorum for the failover scenario, you deploy a third VM that serves as a file share witness.
- Return to your existing Cloud Shell session.
Create a spread placement policy that spreads the WSFC nodes across the underlying data center infrastructure so that they don't share the same host or power system:
ZONE=$(gcloud config get-value compute/zone)

gcloud compute resource-policies create group-placement spread-placement \
  --availability-domain-count 2 \
  --region ${ZONE::-2}
Create a specialize script for the WSFC nodes. The script installs the required Windows features and creates firewall rules for WSFC and SQL Server:

cat << "EOF" > specialize-node.ps1
$ErrorActionPreference = "stop"

# Install required Windows features
Install-WindowsFeature Failover-Clustering -IncludeManagementTools
Install-WindowsFeature RSAT-AD-PowerShell

# Open firewall for WSFC
netsh advfirewall firewall add rule name="Allow SQL Server health check" dir=in action=allow protocol=TCP localport=59997

# Open firewall for SQL Server
netsh advfirewall firewall add rule name="Allow SQL Server" dir=in action=allow protocol=TCP localport=1433
EOF
Create the VM instances and enable the Windows Server Failover Clustering agent on the WSFC nodes by setting the metadata key `enable-wsfc` to `true`:

MACHINE_TYPE=n2-standard-8

gcloud compute instances create node-1 \
  --resource-policies spread-placement \
  --machine-type $MACHINE_TYPE \
  --subnet $SUBNET_NAME \
  --image-family sql-ent-2022-win-2022 \
  --image-project windows-sql-cloud \
  --tags wsfc,wsfc-node \
  --boot-disk-size 50 \
  --boot-disk-type pd-ssd \
  --boot-disk-device-name "node-1" \
  --metadata enable-wsfc=true \
  --metadata-from-file=sysprep-specialize-script-ps1=specialize-node.ps1

gcloud compute instances create node-2 \
  --resource-policies spread-placement \
  --machine-type $MACHINE_TYPE \
  --subnet $SUBNET_NAME \
  --image-family sql-ent-2022-win-2022 \
  --image-project windows-sql-cloud \
  --tags wsfc,wsfc-node \
  --boot-disk-size 50 \
  --boot-disk-type pd-ssd \
  --boot-disk-device-name "node-2" \
  --metadata enable-wsfc=true \
  --metadata-from-file=sysprep-specialize-script-ps1=specialize-node.ps1

gcloud compute instances create "witness" \
  --machine-type n2-standard-2 \
  --subnet $SUBNET_NAME \
  --image-family=windows-2022 \
  --image-project=windows-cloud \
  --tags wsfc \
  --boot-disk-size 50 \
  --boot-disk-type pd-ssd \
  --metadata sysprep-specialize-script-ps1="add-windowsfeature FS-FileServer"
To join the 3 VM instances to Active Directory, do the following for each of the 3 VM instances:
Monitor the initialization process of the VM by viewing its serial port output:
gcloud compute instances tail-serial-port-output NAME

Replace NAME with the name of the VM instance.

Wait about 3 minutes until you see the output `Instance setup finished`, then press Ctrl+C. At this point, the VM instance is ready to be used.

Create a username and password for the VM instance.
Connect to the VM by using Remote Desktop and log in using the username and password created in the previous step.
Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
Confirm the elevation prompt by clicking Yes.
Join the computer to your Active Directory domain and restart:
Add-Computer -Domain DOMAIN -Restart

Replace DOMAIN with the DNS name of your Active Directory domain.

Wait approximately 1 minute for the restart to complete.
Create persistent disks in multi-writer mode
You now create 3 persistent disks in multi-writer mode and attach each of them to both WSFC nodes.
- Return to your existing Cloud Shell session.
Create 3 shared persistent disks:
PD_SIZE=50

gcloud beta compute disks create datadisk-1 \
  --size $PD_SIZE \
  --type pd-ssd \
  --multi-writer \
  --zone $(gcloud config get-value compute/zone)

gcloud beta compute disks create datadisk-2 \
  --size $PD_SIZE \
  --type pd-ssd \
  --multi-writer \
  --zone $(gcloud config get-value compute/zone)

gcloud beta compute disks create datadisk-3 \
  --size $PD_SIZE \
  --type pd-ssd \
  --multi-writer \
  --zone $(gcloud config get-value compute/zone)
Attach the disks to `node-1`:

gcloud compute instances attach-disk node-1 --disk datadisk-1
gcloud compute instances attach-disk node-1 --disk datadisk-2
gcloud compute instances attach-disk node-1 --disk datadisk-3

Attach the disks to `node-2`:

gcloud compute instances attach-disk node-2 --disk datadisk-1
gcloud compute instances attach-disk node-2 --disk datadisk-2
gcloud compute instances attach-disk node-2 --disk datadisk-3
Reserve cluster IP addresses
You now reserve two static IP addresses in your VPC. One IP address is used as the WSFC cluster IP address, the other is used by the internal load balancer.
Reserve a static IP address for the internal load balancer and capture the address in a new environment variable named `LOADBALANCER_ADDRESS`:

gcloud compute addresses create wsfc \
  --region $(gcloud config get-value compute/zone | rev | cut -c 3- | rev) \
  --subnet $SUBNET_NAME

LOADBALANCER_ADDRESS=$(gcloud compute addresses describe wsfc \
  --region $(gcloud config get-value compute/zone | rev | cut -c 3- | rev) \
  --format=value\(address\)) && \
echo "Load Balancer IP: $LOADBALANCER_ADDRESS"
Note the IP address; you need it later.
Reserve another static IP address to use as the cluster IP address (not necessary with a DNN configuration):

gcloud compute addresses create wsfc-cluster \
  --subnet $SUBNET_NAME \
  --region $(gcloud config get-value compute/zone | rev | cut -c 3- | rev) && \
CLUSTER_ADDRESS=$(gcloud compute addresses describe wsfc-cluster \
  --region $(gcloud config get-value compute/zone | rev | cut -c 3- | rev) \
  --format=value\(address\)) && \
echo "Cluster IP: $CLUSTER_ADDRESS"

Note the IP address; you need it later.
Your project and VPC are now ready for the deployment of the WSFC and SQL Server.
Create a witness file share
To prepare `witness` to serve as the file share witness, create a file share and grant yourself and the two WSFC nodes access to it:
- Connect to `witness` by using Remote Desktop. Sign in with your domain user account.
- Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
- Confirm the elevation prompt by clicking Yes.
Create the witness folder and share it:

New-Item "C:\QWitness" -Type Directory

icacls C:\QWitness\ /grant 'node-1$:(OI)(CI)(M)'
icacls C:\QWitness\ /grant 'node-2$:(OI)(CI)(M)'

New-SmbShare `
  -Name QWitness `
  -Path "C:\QWitness" `
  -Description "SQL File Share Witness" `
  -FullAccess $env:username,node-1$,node-2$
Deploying the failover cluster
You now use the VM instances to deploy a WSFC and SQL Server.
Deploy WSFC
You are now ready to create the failover cluster:
- Connect to `node-1` by using Remote Desktop. Sign in with your domain user account.
- Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
- Confirm the elevation prompt by clicking Yes.
Create a new cluster:
- For a load balancer configuration:

  New-Cluster `
    -Name sql-cluster `
    -Node node-1,node-2 `
    -NoStorage `
    -StaticAddress CLUSTER_ADDRESS

  Replace CLUSTER_ADDRESS with the cluster IP address that you reserved earlier.

- For a DNN configuration:

  New-Cluster `
    -Name sql-cluster `
    -Node node-1,node-2 `
    -NoStorage `
    -ManagementPointNetworkType Distributed
Return to the PowerShell session on `witness` and grant the cluster's virtual computer object permission to access the file share:

icacls C:\QWitness\ /grant 'sql-cluster$:(OI)(CI)(M)'

Grant-SmbShareAccess `
  -Name QWitness `
  -AccountName 'sql-cluster$' `
  -AccessRight Full `
  -Force
Return to the PowerShell session on `node-1` and configure the cluster to use the file share on `witness` as the cluster quorum:

Set-ClusterQuorum -FileShareWitness \\witness\QWitness
Verify that the cluster was created successfully:
Test-Cluster
You might see some warnings that can be safely ignored:
WARNING: System Configuration - Validate All Drivers Signed: The test reported some warnings.
WARNING: Network - Validate Network Communication: The test reported some warnings.
WARNING: Test Result: HadUnselectedTests, ClusterConditionallyApproved
Testing has completed for the tests you selected. You should review the warnings in the Report.
A cluster solution is supported by Microsoft only if you run all cluster validation tests, and all tests succeed (with or without warnings).
You can also launch the Failover Cluster Manager MMC snap-in to review the cluster's health by running `cluadmin.msc`.

If you're using Managed Microsoft AD, add the computer account used by WSFC to the Cloud Service Domain Join Accounts group so that it can join computers to the domain:

Add-ADGroupMember `
  -Identity "Cloud Service Domain Join Accounts" `
  -Members sql-cluster$
Create a storage pool
You now create a storage pool that combines the three persistent disks that you created earlier, and use the storage pool to create a cluster shared volume:
- Return to the PowerShell session on `node-1`.

Create a new storage pool that uses the three persistent disks:

$NodeName = [System.Net.Dns]::GetHostName()

$ClusterDisks = Get-PhysicalDisk -CanPool $True |
  Where-Object { ($_ | Get-PhysicalDiskStorageNodeView |
    Select-Object -Property StorageNodeObjectId) -like ('*' + $NodeName + '*') }

$Pool = New-StoragePool `
  -StorageSubsystemFriendlyName 'Clustered*' `
  -FriendlyName FciPool `
  -PhysicalDisks $ClusterDisks `
  -ResiliencySettingNameDefault Simple `
  -Verbose
On the storage pool, create a new volume that uses the cluster shared volume versions of ReFS and a 64 KB cluster size:
$Pool | New-Volume `
  -FriendlyName FciVolume `
  -FileSystem CSVFS_ReFS `
  -Size 100GB `
  -AllocationUnitSize 65536
The volume automatically shows up on `node-2` because the underlying persistent disks are attached to both VM instances.

Open the Failover Cluster Manager MMC snap-in:
cluadmin.msc
In the left window pane, navigate to Failover Cluster Manager > sql-cluster > Storage > Disks.
Right-click Cluster Virtual Disk (FciVolume) and select Remove From Cluster Shared Volumes.
Select Cluster Virtual Disk (FciVolume).
In the Volumes tab at the bottom, right-click the volume and select Change drive letter.
Select the drive letter D and click OK.
Testing storage pool failover
Optionally, you can now test whether the storage pool failover works properly:
- Connect to `node-2` by using Remote Desktop. Sign in with your domain user account.
- Right-click the Start button (or press Win+X) and select Run.
- Enter `cluadmin.msc` and select OK.

In the left window pane, navigate to Failover Cluster Manager > sql-cluster > Storage > Pools.
You should see a pool named Cluster Pool 1 with Owner Node set to `node-1`.

Return to Cloud Shell and reset the `node-1` VM to simulate a failover:

gcloud compute instances reset node-1
Return to the Failover Cluster Manager on `node-2`.

Observe the status of the storage pool by repeatedly pressing F5 to refresh the view. After about 30 seconds, the owner node should automatically switch to `node-2`.
Remove the default SQL Server installation
You now remove the default SQL Server installation from the two nodes and replace it with a new FCI configuration.
For each of the two WSFC nodes, `node-1` and `node-2`, perform the following steps:
- Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
- Confirm the elevation prompt by clicking Yes.
Remove the default SQL Server instance:
C:\sql_server_install\Setup.exe /Action=Uninstall /FEATURES=SQL,AS,IS,RS /INSTANCENAME=MSSQLSERVER /Q
Remove Microsoft OLE Driver:
Get-Package -Name "Microsoft OLE*" | Uninstall-Package -Force
Remove Microsoft ODBC Driver:
Get-Package -Name "Microsoft ODBC*" | Uninstall-Package -Force
Restart the computer:
Restart-Computer
Wait for approximately 1 minute for the restart to complete.
Install SQL Server FCI
Before you install the new FCI configuration, verify that `node-1` is the active node in the cluster:
- Reconnect to `node-1` by using Remote Desktop and sign in using your domain user account.
- Right-click the Start button (or press Win+X) and select Run.
- Enter `cluadmin.msc` and select OK.

In the left window pane, navigate to Failover Cluster Manager > sql-cluster.
Verify that the current host server is set to `node-1`. If it is set to `node-2`, right-click sql-cluster in the left window pane, select More actions > Move core cluster resources > Select node… > node-1, and click OK.

In the left window pane, navigate to Failover Cluster Manager > sql-cluster > Storage > Pools.
Verify that the owner node of Cluster Pool 1 is set to `node-1`. If it is set to `node-2`, right-click the pool, select Move > Select Node > node-1, and click OK.
You now create a new SQL Server failover cluster installation on `node-1`:
- Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
- Confirm the elevation prompt by clicking Yes.
Create a domain user account for SQL Server and the SQL Server Agent, and assign a password:

$Credential = Get-Credential -UserName sql_server -Message 'Enter password'

New-ADUser `
  -Name "sql_server" `
  -Description "SQL Agent and SQL Admin account." `
  -AccountPassword $Credential.Password `
  -Enabled $true `
  -PasswordNeverExpires $true
Start the SQL Server setup:
& c:\sql_server_install\setup.exe
In the menu on the left, select Installation.
Select New SQL Server failover cluster installation.
On the Edition page, check I have a SQL Server license only and select Next.
On the License Terms page, review the terms and, if you accept, select Next.
On the Microsoft Update page, select Next to start the installation.
On the Install Failover Cluster Rules page, you might see warnings about MSCS cluster verification and the Windows firewall. You can ignore these warnings and select Next.
On the Feature Selection page, select Database Engine Services and select Next.
On the Instance Configuration page, enter `sql` as the network name and select Next.

On the Cluster Resource Group page, keep the defaults and select Next.
On the Cluster Disk Selection page, select Cluster Virtual Disk (FciVolume) and select Next.
On the Cluster Network Configuration page, configure the following settings, then select Next:
- DHCP: clear
- IP address: enter the IP address of the internal load balancer.
On the Server configuration page, configure the following settings for both SQL Server Agent and SQL Server Database Engine:
- Account name: DOMAIN\sql_server, where DOMAIN is the NetBIOS name of your Active Directory domain
- Password: enter the password that you created earlier
Select the Collation tab and select the collation that you want to use. Then click Next.
On the Database Engine Configuration page, select Add current user to designate the current user as SQL Server administrator. Then select Next.
On the Ready to Install page, review the settings, then select Install.
After the installation completes, select Close.
Now add `node-2` to the SQL Server failover cluster:
- Connect to `node-2` by using Remote Desktop and sign in using your domain user account.
- Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
- Confirm the elevation prompt by clicking Yes.
Start the SQL Server setup:
& c:\sql_server_install\setup.exe
In the menu on the left, select Installation.
Select Add node to a SQL Server failover cluster.
Follow the instructions of the installation wizard and accept the default settings until you reach the Service Accounts page.
On the Service Accounts page, enter the password that you created earlier for both SQL Server Agent and SQL Server Database Engine. Then select Next.
On the Ready to Install page, review the settings, then select Install.
After the installation completes, select Close.
Configure health checks
As a final step, configure the cluster to expose a health check endpoint that can be used by an internal load balancer:
- Return to the PowerShell session on `node-2`.

Initialize a variable with the IP address of the load balancer:

$LoadBalancerIP = 'IP_ADDRESS'

Replace IP_ADDRESS with the `wsfc` static IP address that you reserved earlier.

Configure the failover cluster to respond to the health check service:
$SqlGroup = Get-ClusterGroup |
  Where-Object {$_.Name.StartsWith("SQL Server")}

$SqlIpAddress = Get-ClusterResource |
  Where-Object {$_.Name.StartsWith("SQL IP Address")}

$SqlIpAddress | Set-ClusterParameter -Multiple @{
  'Address'=$LoadBalancerIP;
  'ProbePort'=59997;
  'SubnetMask'='255.255.255.255';
  'Network'=(Get-ClusterNetwork).Name;
  'EnableDhcp'=0;
}
Restart the cluster resource:
$SqlIpAddress | Stop-ClusterResource
$SqlIpAddress | Start-ClusterResource
Restart the cluster group:
$SqlGroup | Stop-ClusterGroup
$SqlGroup | Start-ClusterGroup
Create an internal load balancer
To provide a single endpoint for SQL Server clients, you now deploy an internal load balancer. The load balancer uses a health check to ensure that traffic is directed to the active node of the WSFC.
- Return to your existing Cloud Shell session.
Create an unmanaged instance group, and add the two nodes to the group:
gcloud compute instance-groups unmanaged create wsfc-group

gcloud compute instance-groups unmanaged add-instances wsfc-group \
  --instances node-1,node-2
Create a health check that the load balancer can use to determine which node is the active node:
gcloud compute health-checks create tcp wsfc-healthcheck \
  --check-interval="2s" \
  --healthy-threshold=1 \
  --unhealthy-threshold=2 \
  --port=59997 \
  --timeout="1s"
The health check probes port 59997, which is the port you previously configured as the ProbePort on the SQL Server cluster IP address resource.

Create a backend service and add the instance group:
gcloud compute backend-services create wsfc-backend \
  --load-balancing-scheme internal \
  --region $(gcloud config get-value compute/zone | rev | cut -c 3- | rev) \
  --health-checks wsfc-healthcheck \
  --protocol tcp

gcloud compute backend-services add-backend wsfc-backend \
  --instance-group wsfc-group \
  --instance-group-zone $(gcloud config get-value compute/zone) \
  --region $(gcloud config get-value compute/zone | rev | cut -c 3- | rev)
Create the internal load balancer:
gcloud compute forwarding-rules create wsfc-sql \
  --load-balancing-scheme internal \
  --address $LOADBALANCER_ADDRESS \
  --ports 1433 \
  --network $VPC_NAME \
  --subnet $SUBNET_NAME \
  --region $(gcloud config get-value compute/zone | rev | cut -c 3- | rev) \
  --backend-service wsfc-backend
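Conceptually, the health check is a plain TCP connection attempt: the prober tries to open a connection on the probe port, and only the node that currently hosts the SQL Server cluster IP resource accepts it. You can reproduce the same test from any Linux host on the VPC; this is a sketch (the host address is hypothetical, and the `/dev/tcp` redirection requires bash):

```shell
HOST="10.0.0.2"   # hypothetical address of a WSFC node
PORT=59997        # the ProbePort configured on the cluster IP resource

# A successful TCP connect means this node currently owns the SQL Server role
if timeout 1 bash -c "exec 3<>/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  echo "healthy: ${HOST} is the active node"
else
  echo "unhealthy: ${HOST} is not accepting probe connections"
fi
```

During a failover, the probe starts failing on the old node and succeeding on the new one, which is what moves the load balancer's traffic.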
Configure a DNN resource and DNN DNS name
Similar to the internal load balancer, the DNN resource acts as a single endpoint for SQL Server clients. During a failover, the cluster seamlessly routes traffic to the active SQL Server FCI node. Clients connect to the SQL Server FCI by using the DNN DNS name.
- Return to the PowerShell session on `node-1`.

Execute the following script to create the DNN resource:

$DNNResourceName='fci-dnn'
$DNN_DNSName='fcidnn'

# Create the DNN resource
Add-ClusterResource -Name $DNNResourceName `
  -ResourceType 'Distributed Network Name' `
  -Group 'SQL Server (MSSQLSERVER)'

# Set the DNS name of the DNN resource
Get-ClusterResource -Name $DNNResourceName |
  Set-ClusterParameter -Name DnsName -Value $DNN_DNSName

# Start the DNN resource
Start-ClusterResource -Name $DNNResourceName
Restart `node-1` and `node-2`.
Testing the failover cluster
You've completed the installation of the failover cluster, but you still have to test whether the cluster works correctly.
Prepare a client
Create a new VM instance which you can use to connect to the failover cluster:
- Return to your existing Cloud Shell session.
Create a new VM instance:
gcloud compute instances create sqlclient \
  --machine-type n2-standard-2 \
  --subnet $SUBNET_NAME \
  --image-family sql-ent-2022-win-2022 \
  --image-project windows-sql-cloud \
  --boot-disk-size 50 \
  --boot-disk-type pd-ssd
Monitor the initialization process of the VM by viewing its serial port output:
gcloud compute instances tail-serial-port-output sqlclient
Wait about 3 minutes until you see the output Instance setup finished, then press Ctrl+C. At this point, the VM instance is ready to be used.
Create a username and password for the VM instance
Connect to the VM by using Remote Desktop and sign in using the username and password created in the previous step.
Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
Confirm the elevation prompt by clicking Yes.
Join the computer to your Active Directory domain:
Add-Computer -Domain DOMAIN

Replace DOMAIN with the DNS name of your Active Directory domain.

Restart the computer:
Restart-Computer
Wait for approximately 1 minute for the restart to complete.
Run the test
Use the `sqlclient` VM to test that you can connect to the failover cluster and to verify that failover works correctly:
- Connect to `sqlclient` by using Remote Desktop and sign in using your domain user account.
- Right-click the Start button (or press Win+X) and click Windows PowerShell.
Connect to the SQL Server cluster by using its network name and query the `sys.dm_os_cluster_nodes` view:

& "$env:ProgramFiles\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE" `
  -S SQL_SERVER_NAME -E -Q "SELECT * FROM sys.dm_os_cluster_nodes"

Replace SQL_SERVER_NAME with `sql` for a load balancer configuration or `fcidnn` for a DNN configuration.

The output should look like this:
NodeName    status    status_description    is_current_owner
----------  --------  --------------------  ----------------
NODE-1      0         up                    1
NODE-2      0         up                    0

(2 rows affected)
Notice that `node-1` is the current owner of the SQL Server failover cluster resource.

Return to Cloud Shell and stop the `node-1` VM to test the failover scenario:

gcloud compute instances stop node-1
Repeat the query:
& "$env:ProgramFiles\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE" `
  -S SQL_SERVER_NAME -E -Q "SELECT * FROM sys.dm_os_cluster_nodes"

Replace SQL_SERVER_NAME with `sql` for a load balancer configuration or `fcidnn` for a DNN configuration.

The output should now look like this:
NodeName    status    status_description    is_current_owner
----------  --------  --------------------  ----------------
NODE-1      1         down                  0
NODE-2      0         up                    1

(2 rows affected)
Notice that despite the loss of `node-1`, the query succeeds and shows that `node-2` is now the current owner of the failover cluster.
Clean up
After you finish the tutorial, you can clean up the resources that you created so that they stop using quota and incurring charges. The following sections describe how to delete or turn off these resources.
Deleting the project
The easiest way to eliminate billing is to delete the project that you created for the tutorial.
To delete the project:
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
What's next
- Read more about persistent disks in multi-writer mode.
- Learn how you can configure a SQL Server failover cluster instance that uses Storage Spaces Direct.