Configuring SQL Server Failover Cluster Instances

If you need high availability and redundancy for your Microsoft SQL Server on Compute Engine, you can configure Always On Failover Cluster Instances (FCI) with Storage Spaces Direct (S2D).

Always On FCI provides a single Microsoft SQL Server instance that is installed across Windows Server Failover Cluster (WSFC) nodes. Only one node in the WSFC actively hosts the SQL instance. In the event of a failover, the WSFC service transfers ownership of the instance's resources to a designated failover node.

S2D provides a software-based virtual SAN that can use Compute Engine VM data disks to store the SQL database.

This tutorial provides an example setup. At the end of this tutorial, you will have created an Always On SQL Server FCI with S2D on Compute Engine.

Objectives

  • Set up a VPC network with a Windows domain controller.
  • Create two Windows SQL Server VM instances to act as cluster nodes.
  • Set up the failover cluster, including cluster storage.
  • Set up an internal load balancer to direct traffic to the active node.
  • Test the failover operation to verify that the cluster is working.

Costs

This tutorial uses Compute Engine images that include Windows Server licenses. This means the cost to run this tutorial can be significant if you leave VMs running. It's a good idea to stop the VMs when you're not using them.

See the Pricing Calculator for an estimate of the costs to complete this tutorial.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Google Cloud Platform project. Learn how to confirm billing is enabled for your project.

  4. Enable the Compute Engine API.

    Enable the API

  5. Start an instance of Cloud Shell where you can run all of the gcloud commands in this tutorial.
  6. Optionally, review and request an increase to your SSD persistent disk quota. You need 500 GB for this tutorial.

Overview

This example setup includes the following components and IP addresses:

  • Custom VPC network: 10.0.0.0/24
  • Active Directory domain controller: 10.0.0.3
  • Two WSFC nodes with attached S2D storage
    • node-1: 10.0.0.4
    • node-2: 10.0.0.5
  • Windows Server Failover Cluster (for cluster management): 10.0.0.201
  • Internal load balancer and SQL FCI listener (for SQL connections): 10.0.0.200

The load balancer listens for requests and, based on health checks, directs traffic to the active SQL Server node. If the active node fails, the FCI failover node takes over, the health check detects the newly healthy node, and the load balancer redirects traffic to it. The database, stored on S2D volumes, remains available.

Architecture diagram showing two Compute Engine VMs in a failover cluster

Setting up the VPC network

While you can use an existing network for this tutorial, it is a best practice to isolate systems into different networks and subnets with firewall rules that limit traffic.

  1. Create a custom mode VPC network, which gives you complete control over its subnets and IP ranges.

    gcloud compute networks create wsfcnet --subnet-mode custom
    
  2. Add a subnet, wsfc-subnet1, which will contain all of the VMs in this tutorial.

    gcloud compute networks subnets create wsfc-subnet1 \
        --network wsfcnet \
        --region us-central1 \
        --range 10.0.0.0/24
    
  3. Create a firewall rule to allow internal traffic between the VMs in this network.

    gcloud compute firewall-rules create allow-internal-ports \
        --network wsfcnet \
        --allow tcp:1-65535,udp:1-65535,icmp \
        --source-ranges 10.0.0.0/24
    
  4. Create a firewall rule to allow incoming traffic for RDP on port 3389. Notice that you are opening tcp:3389 to all IP addresses (0.0.0.0/0); in a production environment you would limit access to approved IPs.

    gcloud compute firewall-rules create allow-rdp \
        --network wsfcnet \
        --allow tcp:3389 \
        --source-ranges 0.0.0.0/0
    
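Optionally, verify the subnet and firewall rules from Cloud Shell before moving on:

    gcloud compute networks subnets describe wsfc-subnet1 --region us-central1
    gcloud compute firewall-rules list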

Creating and configuring a Windows domain controller

An Active Directory domain is required for the domain name services, Windows Failover Clustering, and Storage Spaces Direct features used in this tutorial. Hosting the AD domain controller in the same VPC network is not a requirement; it simply keeps this tutorial self-contained.

In this tutorial, the domain is gontoso.com. The domain controller VM is named dc-windows (by default, the Windows computer name matches the VM name). The VM is created in the wsfc-subnet1 subnet at 10.0.0.3.

  1. Create a VM to use as the domain controller. While you can specify any machine type and Windows server that you need, for this example use the following parameters.

    gcloud compute instances create "dc-windows" \
        --zone "us-central1-a" \
        --machine-type "n1-standard-2" \
        --subnet "wsfc-subnet1" \
        --private-network-ip "10.0.0.3" \
        --can-ip-forward \
        --image-family "windows-2016" \
        --image-project "windows-cloud" \
        --boot-disk-size "200" \
        --boot-disk-type "pd-standard" \
        --boot-disk-device-name "dc-windows"
    
  2. Generate a password so that you can connect to the domain controller VM using a local account. Note the username and password for future use.
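
    For example, you can generate (or reset) a local Windows account and password from Cloud Shell:

    gcloud compute reset-windows-password dc-windows --zone us-central1-a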

  3. Using RDP, connect to the domain controller VM with your local account username and password.

  4. On the VM, run PowerShell as administrator to open an elevated PowerShell prompt.

  5. Set up an Administrator user.

    1. Run the following command, then enter a password, which you will use with the Administrator account.

      PS C:\> $Password = Read-Host -AsSecureString
      

      Note the password that you entered. You will use this password for the Administrator account.

    2. Set the Administrator account password.

      PS C:\> Set-LocalUser -Name Administrator -Password $Password
      
    3. Enable the Administrator account.

      PS C:\> Enable-LocalUser -Name Administrator
      
  6. Set up an SQL Server service account, which you will use as both the SQL Agent and SQL Admin account.

    1. Securely enter a password for use with the SQL Server service account.

      PS C:\> $Password = Read-Host -AsSecureString
      

      Note the password that you entered. You will use this password for the SQL Server service account.

    2. Create the SQL Server service account and set its password.

      PS C:\> New-LocalUser -Name "service_account" -Description "SQL Agent and SQL Admin account."  -Password $Password
      
  7. Log off, then reconnect to the domain controller VM using RDP with the Administrator account and password.

  8. In an elevated PowerShell, set the following variables, which will be used to configure the Active Directory forest.

    PS C:\> $DomainName = "gontoso.com";
    PS C:\> $DomainMode = "Win2012R2";
    PS C:\> $ForestMode = "Win2012R2";
    PS C:\> $DatabasePath = "C:\Windows\NTDS";
    PS C:\> $LogPath = "C:\Windows\NTDS";
    PS C:\> $SysvolPath = "C:\Windows\SYSVOL";
    
  9. Install Active Directory services and tools.

    PS C:\> Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
    
    Success Restart Needed Exit Code      Feature Result
    ------- -------------- ---------      --------------
    True    No             Success        {Active Directory Domain Services, Group P...
    
    
  10. Install and configure the Active Directory forest. This command will prompt you for credentials; use the Administrator username and password that you created in a previous step.

    PS C:\> Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath $DatabasePath -LogPath $LogPath -SysvolPath $SysvolPath -DomainName $DomainName -DomainMode $DomainMode -ForestMode $ForestMode -InstallDNS:$true -NoRebootOnCompletion:$false -SafeModeAdministratorPassword ((Get-Credential).Password) -Force:$true;
    
    cmdlet Get-Credential at command pipeline position 1
    Supply values for the following parameters:
    Credential
    ...
    
    

    The machine will reboot.

  11. After the VM restarts, reconnect to it using RDP with your original local account credentials.
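
    Optionally, confirm that the forest and DNS services are up; a quick check using the Active Directory PowerShell module that was installed with the management tools:

    PS C:\> Get-ADDomain | Select-Object DNSRoot, DomainMode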

Creating the cluster's VMs and data disks

Create two SQL Server VMs with additional data disks. One VM serves as the active FCI node, the other the failover node. The additional disks serve as S2D volumes.

Each SQL Server VM must meet the following requirements.

  • It is created in the subnet that you defined earlier, wsfc-subnet1.
  • It has a private IP in the subnet range, 10.0.0.0/24.
  • The Windows Server Failover Clustering feature is installed.
  • The Compute Engine WSFC agent, which helps the load balancer's health check identify the active node in the failover cluster, is enabled.
  1. From Cloud Shell, create two SQL Server VMs with additional data disks. The disks will be used as S2D capacity and cache volumes.

    gcloud compute instances create "node-1" \
        --zone "us-central1-f" \
        --machine-type "n1-highmem-4" \
        --subnet "wsfc-subnet1" \
        --private-network-ip "10.0.0.4" \
        --can-ip-forward \
        --image-family "sql-ent-2016-win-2016" \
        --image-project "windows-sql-cloud" \
        --boot-disk-size "50" \
        --boot-disk-type "pd-ssd" \
        --boot-disk-device-name "node-1" \
        --create-disk=name=node-1-datadisk-1,size=50,type=pd-ssd,auto-delete=no \
        --create-disk=name=node-1-datadisk-2,size=50,type=pd-ssd,auto-delete=no \
        --create-disk=name=node-1-datadisk-3,size=50,type=pd-ssd,auto-delete=no \
        --create-disk=name=node-1-datadisk-4,size=50,type=pd-ssd,auto-delete=no \
        --metadata enable-wsfc=true,sysprep-specialize-script-ps1="Install-WindowsFeature Failover-Clustering -IncludeManagementTools;"
    
    gcloud compute instances create "node-2" \
        --zone "us-central1-a" \
        --machine-type "n1-highmem-4" \
        --subnet "wsfc-subnet1" \
        --private-network-ip "10.0.0.5" \
        --can-ip-forward \
        --image-family "sql-ent-2016-win-2016" \
        --image-project "windows-sql-cloud" \
        --boot-disk-size "50" \
        --boot-disk-type "pd-ssd" \
        --boot-disk-device-name "node-2" \
        --create-disk=name=node-2-datadisk-1,size=50,type=pd-ssd,auto-delete=no \
        --create-disk=name=node-2-datadisk-2,size=50,type=pd-ssd,auto-delete=no \
        --create-disk=name=node-2-datadisk-3,size=50,type=pd-ssd,auto-delete=no \
        --create-disk=name=node-2-datadisk-4,size=50,type=pd-ssd,auto-delete=no \
        --metadata enable-wsfc=true,sysprep-specialize-script-ps1="Install-WindowsFeature Failover-Clustering -IncludeManagementTools;"
      
  2. Optionally, connect to each node using RDP and, from an elevated PowerShell, verify that Failover Clustering is enabled.

    PS C:\> Get-WindowsFeature Failover-Clustering
    Display Name                                            Name                       Install State
    ------------                                            ----                       -------------
    [X] Failover Clustering                                 Failover-Clustering            Installed

Configuring the cluster VMs' network

  1. Configure the first node.

    1. Generate a password for node-1. Note the username and password for future use.

    2. Connect to node-1 using RDP.

    3. Open an elevated PowerShell.

    4. Add a firewall rule to open a port for the health check service. This tutorial will use tcp:59997. You can change this to a different port, but it must match the health checker port that you will define later. The health check process periodically pings the agent on each cluster node to determine its availability.

      PS C:\> netsh advfirewall firewall add rule name="Open port for Health Check" dir=in action=allow protocol=TCP localport=59997
      
    5. Add a firewall rule to open a port for the SQL Server instance, which normally runs over tcp:1433.

      PS C:\> netsh advfirewall firewall add rule name="Open SQL Server port" dir=in action=allow protocol=TCP localport=1433
      
    6. Set the DNS to the Domain Controller.

      PS C:\> netsh interface ip set dns Ethernet static 10.0.0.3
      
    7. Update the node to use a static IP address and set the netmask.

      PS C:\> netsh interface ip set address name=Ethernet static 10.0.0.4 255.255.0.0 10.0.0.1 1
      

      You will be disconnected from the VM. Using RDP, reconnect with your local username and password.

    8. Add the node to the Active Directory domain that you created earlier.

      PS C:\> Add-Computer -DomainName gontoso.com -Restart -Force -Credential gontoso.com\Administrator
      

      You will be prompted for credentials. Use the Administrator username and password that you set when you configured the domain controller VM in a previous step.

      The machine will reboot.

  2. Configure the second node.

    1. Generate a password for node-2. Note the username and password for future use.

    2. Connect to node-2 using RDP.

    3. Open an elevated PowerShell.

    4. Add a firewall rule to open a port for the health checker. The port must match the health checker port that you will define later.

      PS C:\> netsh advfirewall firewall add rule name="Open port for Health Check" dir=in action=allow protocol=TCP localport=59997
      
    5. Add a firewall rule to open a port for the SQL Server instance, which normally runs over tcp:1433.

      PS C:\> netsh advfirewall firewall add rule name="Open SQL Server port" dir=in action=allow protocol=TCP localport=1433
      
    6. Set the DNS to the Domain Controller.

      PS C:\> netsh interface ip set dns Ethernet static 10.0.0.3
      
    7. Update the node to use a static IP address and set the netmask.

      PS C:\> netsh interface ip set address name=Ethernet static 10.0.0.5 255.255.0.0 10.0.0.1 1
      

      You will be disconnected from the VM. Using RDP, reconnect with your local username and password.

    8. Add the node to the Active Directory domain that you created earlier.

      PS C:\> Add-Computer -DomainName gontoso.com -Restart -Force -Credential gontoso.com\Administrator
      

      You will be prompted for credentials. Use the Administrator username and password that you set when you configured the domain controller VM in a previous step.

      The machine will reboot.
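
Optionally, on each node you can confirm the DNS and domain settings from an elevated PowerShell; the expected values are shown as comments:

    PS C:\> Resolve-DnsName dc-windows.gontoso.com         # should resolve to 10.0.0.3
    PS C:\> (Get-CimInstance Win32_ComputerSystem).Domain   # should report gontoso.com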

Creating a file share witness

To provide a tie-breaking vote and achieve a quorum for the failover scenario, create a file share that will act as a witness. For the purposes of this tutorial, you will create the file share witness on the domain controller VM. In a production environment, you would create it elsewhere.

  1. Using RDP, connect to the domain controller VM, dc-windows, with the Administrator account.

  2. Open an elevated PowerShell.

  3. Create the witness folder.

    PS C:\> New-Item "C:\QWitness" -type directory
    
  4. Share the folder.

    PS C:\> New-SmbShare -Name "QWitness" -Path "C:\QWitness" -Description "SQL File Share Witness" -FullAccess "gontoso.com\Administrator", "gontoso.com\node-1$", "gontoso.com\node-2$"
    
  5. Optionally, using RDP, connect to either node as gontoso.com\Administrator and verify that you can access the shared directory.

    PS C:\> dir \\dc-windows\QWitness
    

Creating an internal load balancer

An internal load balancer provides a single IP address for the SQL Server. The load balancer listens for requests and routes network traffic to the active cluster node. It knows which node is active because a health check runs against each node, and only the active node responds as healthy. If the active node goes down, the SQL FCI failover node activates, the health check detects it, and the load balancer redirects traffic to it.

  1. Create two instance groups, and add one SQL Server node to each group. These instance groups act as backends that the load balancer can direct traffic to.

    gcloud compute instance-groups unmanaged create wsfc-group-f --zone us-central1-f
    gcloud compute instance-groups unmanaged add-instances wsfc-group-f --instances node-1 --zone us-central1-f
    
    gcloud compute instance-groups unmanaged create wsfc-group-a --zone us-central1-a
    gcloud compute instance-groups unmanaged add-instances wsfc-group-a --instances node-2 --zone us-central1-a
    
  2. Create a health check that the load balancer can use to determine which is the active node.

    gcloud compute health-checks create tcp sql-healthcheck \
        --check-interval="2s" \
        --healthy-threshold=1 \
        --unhealthy-threshold=2 \
        --port=59997 \
        --request=10.0.0.200 \
        --timeout="1s" \
    
  3. Add a firewall rule to allow the health check.

    gcloud compute firewall-rules create allow-health-check \
        --network wsfcnet \
        --source-ranges 130.211.0.0/22,35.191.0.0/16 \
        --allow tcp
    
  4. Create one backend service and add the two backend instance groups.

    gcloud compute backend-services create wsfcbackend \
        --load-balancing-scheme internal \
        --region us-central1 \
        --health-checks sql-healthcheck \
        --protocol tcp
    
    gcloud compute backend-services add-backend wsfcbackend \
        --instance-group wsfc-group-a \
        --instance-group-zone us-central1-a \
        --region us-central1
    
    gcloud compute backend-services add-backend wsfcbackend \
        --instance-group wsfc-group-f \
        --instance-group-zone us-central1-f \
        --region us-central1
    
  5. Create an internal load balancer to forward requests to the active node in the SQL FCI. The FCI will be created later and will be configured to listen at 10.0.0.200.

    gcloud compute forwarding-rules create wsfc-forwarding-rule \
        --load-balancing-scheme internal \
        --ports 1433 \
        --network wsfcnet \
        --subnet wsfc-subnet1 \
        --region us-central1 \
        --backend-service wsfcbackend \
        --address 10.0.0.200
    

Creating a Windows Failover Cluster

  1. Using RDP, reconnect to node-1 as gontoso.com\Administrator. You may need to delete previous certificates in order to reconnect.

  2. Open an elevated PowerShell.

  3. Create the cluster.

    PS C:\> New-Cluster -Name windows-fci -Node node-1, node-2 -StaticAddress 10.0.0.201 -NoStorage
    
    PS C:\> Set-ClusterQuorum -FileShareWitness \\dc-windows\QWitness
    
  4. Test the cluster. You will see some warnings that can be ignored.

    PS C:\> Test-Cluster
    WARNING: System Configuration - Validate All Drivers Signed: The test reported some warnings..
    WARNING: Network - Validate Network Communication: The test reported some warnings..
    WARNING:
    Test Result:
    HadUnselectedTests, ClusterConditionallyApproved
    Testing has completed for the tests you selected. You should review the warnings in the Report.  A cluster solution is
    supported by Microsoft only if you run all cluster validation tests, and all tests succeed (with or without warnings).
    Test report file path: C:\Users\Administrator\AppData\Local\Temp\Validation Report 2018.05.01 At 19.25.01.htm
    
    Mode                LastWriteTime         Length Name
    ----                -------------         ------ ----
    -a----         5/1/2018   7:26 PM         747166 Validation Report 2018.05.01 At 19.25.01.htm
    
    

    You can also review the cluster's health in the Failover Cluster Manager GUI by running cluadmin.msc.
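
    To confirm the cluster state from PowerShell, you can also run the following commands; both nodes should report as Up, and the quorum configuration should show the file share witness:

    PS C:\> Get-ClusterNode
    PS C:\> Get-ClusterQuorum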

Enabling Storage Spaces Direct

  1. Using RDP, connect to either node-1 or node-2 and open an elevated PowerShell.

  2. Enable Storage Spaces Direct.

    PS C:\> Enable-ClusterS2D
    
    Confirm
    Are you sure you want to perform this action?
    Performing operation 'Enable Cluster Storage Spaces Direct' on Target 'windows-fci'.
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): Y
    WARNING: 2018/05/01-19:36:41.315 Node node-1: No disks found to be used for cache
    WARNING: 2018/05/01-19:36:41.330 Node node-2: No disks found to be used for cache
    WARNING: 2018/05/01-19:36:41.362 C:\Windows\Cluster\Reports\Enable-ClusterS2D on 2018.05.01-19.36.41.362.htm
    
    

    Accept the default when prompted to confirm. You will see some warnings that you can ignore.

    • Optionally, if you want better disk performance, you can add local SSDs to your cluster's VMs in addition to the SSD persistent disks. The local SSDs serve as the S2D caching layer. Make the number of capacity drives (the SSD persistent disks in this case) a multiple of the number of local SSDs. Run the following command to enable S2D with caching:

      PS C:\> Enable-ClusterS2D -CacheDeviceModel "EphemeralDisk"
      
    • You can also enable the Cluster Shared Volume (CSV) in-memory cache for better read throughput:

      PS C:\> (Get-Cluster).BlockCacheSize = 2048
      
  3. Because S2D requires 2 GB per volume for management, and there are four 50 GB volumes per node, create a volume in the S2D storage pool and set its size to 192 GB (4 × 50 GB − 4 × 2 GB).

    PS C:\> New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VDisk01 -FileSystem CSVFS_REFS -Size 192GB
    
  4. Verify that the storage volume exists in read/write mode.

    PS C:\> dir C:\ClusterStorage\Volume1\ | Get-ACL
    
    Directory: C:\ClusterStorage\Volume1
    
    Path                Owner                  Access
    ----                -----                  ------
    MSSQL13.MSSQLSERVER BUILTIN\Administrators BUILTIN\Administrators Allow  FullControl...
    
    
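Optionally, you can also confirm that the Cluster Shared Volume is online:

    PS C:\> Get-ClusterSharedVolume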

Configuring high availability for the SQL Server using FCI

  1. Remove the default SQL Server instance from the first node. You will replace it later with a new FCI configuration.

    1. Using RDP, connect to node-1 as gontoso.com\Administrator.

    2. Open an elevated PowerShell.

    3. Remove the default SQL Server instance.

      PS C:\> C:\sql_server_install\Setup.exe /Action=Uninstall /FEATURES=SQL,AS,IS,RS /INSTANCENAME=MSSQLSERVER /Q
      
    4. Reboot the VM.

      PS C:\> shutdown /r
      
  2. Remove the default SQL Server instance from the second node. You will replace it later with a new FCI configuration.

    1. Using RDP, log into node-2 as gontoso.com\Administrator.

    2. Open an elevated PowerShell.

    3. Remove the default SQL Server instance.

      PS C:\> C:\sql_server_install\Setup.exe /Action=Uninstall /FEATURES=SQL,AS,IS,RS /INSTANCENAME=MSSQLSERVER /Q
      
    4. Reboot the VM.

      PS C:\> shutdown /r
      
  3. Reinstall SQL Server with FCI on the first node.

    1. Using RDP, log into node-1 as gontoso.com\Administrator.

    2. Open a text editor, such as Notepad.

    3. Copy the contents of the fci_install_master_node.ini configuration file into the text editor on node-1. Line breaks might not be preserved, but the file still works.

    4. Update the configuration file with your gontoso.com\service_account password. The password must be updated in two places: AGTSVCPASSWORD and SQLSVCPASSWORD. Search for changeme and replace it with the password that you defined, either in the text editor or with the PowerShell sketch after this procedure. Save the file as a plain text file named C:\fci_install_master_node.ini.

    5. Open an elevated PowerShell and install the SQL Server using the configuration file.

      PS C:\> C:\sql_server_install\Setup.exe /CONFIGURATIONFILE=C:\fci_install_master_node.ini
      
  4. Reinstall SQL Server with FCI on the second node.

    1. Using RDP, log into node-2 as gontoso.com\Administrator.

    2. Open a text editor, such as Notepad.

    3. Copy the contents of the fci_add_node.ini configuration file into the text editor on node-2.

    4. Update the configuration file with your gontoso.com\service_account password. The password must be updated in two places: AGTSVCPASSWORD and SQLSVCPASSWORD. Search for changeme and replace it with the password that you defined, as you did on the first node. Save the file as a plain text file named C:\fci_add_node.ini.

    5. Open an elevated PowerShell and install the SQL Server.

      PS C:\> C:\sql_server_install\Setup.exe /CONFIGURATIONFILE=C:\fci_add_node.ini
      
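If you prefer to script the password replacement instead of editing the .ini files by hand, a minimal sketch (the $SvcPassword variable name is arbitrary; run it in an elevated PowerShell on each node, pointing at that node's file):

    PS C:\> $SvcPassword = Read-Host "gontoso.com\service_account password"
    PS C:\> (Get-Content C:\fci_install_master_node.ini) -replace 'changeme', $SvcPassword | Set-Content C:\fci_install_master_node.ini
    PS C:\> # On node-2, use C:\fci_add_node.ini instead.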

Configuring the cluster for the health check

  1. Using RDP, reconnect to node-1 as gontoso.com\Administrator.

  2. Run the following script in an elevated PowerShell to let the Windows failover cluster respond to the health check service:

    $cluster_network_name = 'Cluster Network 1'
    $ip_resource_name = 'SQL IP Address 1 (SQL2016FCI)'
    $load_balancer_ip = '10.0.0.200'
    [int]$health_check_port = 59997
    Get-ClusterResource $ip_resource_name |
      Set-ClusterParameter -Multiple @{ 'Address'=$load_balancer_ip;
                                        'ProbePort'=$health_check_port;
                                        'SubnetMask'='255.255.255.255';
                                        'Network'=$cluster_network_name;
                                        'EnableDhcp'=0}
    
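    To confirm that the settings took effect, read back the parameters on the IP address resource. If the clustered IP was already online, you might need to take it offline and bring it back online for the probe port to take effect.

    PS C:\> Get-ClusterResource $ip_resource_name | Get-ClusterParameter Address,ProbePort,SubnetMask,EnableDhcp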

Testing the failover scenario

Preparing a client

  1. From Cloud Shell, create a new VM to serve as an SQL client.

    gcloud compute instances create "sqlclient-1" \
    --zone "us-central1-a" \
    --machine-type "n1-standard-2" \
    --subnet "wsfc-subnet1" \
    --private-network-ip "10.0.0.10" \
    --can-ip-forward \
    --image-family "sql-ent-2016-win-2016" \
    --image-project "windows-sql-cloud" \
    --boot-disk-size "200" \
    --boot-disk-type "pd-standard" \
    --boot-disk-device-name "sqlclient-1"
    
  2. Using RDP, connect to sqlclient-1.

  3. Open an elevated PowerShell.

  4. Configure the client.

    1. Set the DNS to the Domain Controller.

      PS C:\> netsh interface ip set dns Ethernet static 10.0.0.3
      
    2. Add the VM to the Active Directory domain that you created earlier.

      PS C:\> Add-Computer -DomainName gontoso.com -Restart -Force -Credential gontoso.com\Administrator
      

      You will be prompted for credentials. Use the Administrator username and password that you set when you configured the domain controller VM in a previous step.

      The machine will reboot.

    3. Using RDP, reconnect using the Administrator account.

    4. Add the gontoso.com\service_account to the local Administrators group.

      PS C:\> Add-LocalGroupMember -Group "Administrators" -Member "gontoso.com\service_account"
      
    5. Log off.

      PS C:\> logoff
      

Running the test

  1. Using RDP, reconnect to sqlclient-1 using the gontoso.com\service_account username and password.

  2. In PowerShell, run a test query against the sys.dm_os_cluster_nodes system view.

    PS C:\> & "c:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn\SQLCMD.EXE" -S 10.0.0.200 -E -Q "SELECT * FROM sys.dm_os_cluster_nodes"
    
    NodeName                       status      status_description is_current_owner
    ------------------------------ ----------- ------------------ ----------------
    NODE-1                                   0 up                                1
    NODE-2                                   0 up                                0
    
    (2 rows affected)
    
    

    Notice that node-1 is the current owner of the SQL Server failover cluster resource.

  3. Using Cloud Shell, bring down the node-1 VM to test the failover scenario.

    gcloud compute instances stop node-1 --zone us-central1-f
    
  4. On the sqlclient-1 VM, run the test query again.

    PS C:\> & "c:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn\SQLCMD.EXE" -S 10.0.0.200 -E -Q "SELECT * FROM sys.dm_os_cluster_nodes"
    
    NodeName                       status      status_description is_current_owner
    ------------------------------ ----------- ------------------ ----------------
    NODE-1                                   1 down                              0
    NODE-2                                   0 up                                1
    
    (2 rows affected)
    
    

    Despite the loss of node-1, the query succeeds, and shows that node-2 is now the current owner of the failover cluster.
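
  5. Optionally, when you're done testing, restart node-1 from Cloud Shell so that it can rejoin the cluster. You can then fail the SQL Server role back to node-1 in Failover Cluster Manager if you want.

    gcloud compute instances start node-1 --zone us-central1-f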

Limitations

  • Storage Spaces Direct (S2D) is supported only on Windows Server 2016 and later.
  • With S2D, each disk contains only a partial view of the overall data, so taking a snapshot of a single persistent disk isn't enough to back up your data. Use native SQL Server backups instead.

Cleaning up

After you've finished the SQL Server FCI tutorial, clean up the resources that you created on GCP so that they won't take up quota and you won't be billed for them in the future. The following sections describe how to delete or turn off these resources.

Deleting the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the GCP Console, go to the Manage resources page.

    Go to the Manage resources page

  2. In the project list, select the project you want to delete and click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Deleting instances

To delete a Compute Engine instance:

  1. In the GCP Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click the checkbox for the instance you want to delete.
  3. Click Delete to delete the instance.

Deleting persistent disks

To delete a persistent disk:

  1. In the GCP Console, go to the Disks page.

    Go to the Disks page

  2. Select the checkbox next to the name of the disk you want to delete.

  3. Click the Delete button at the top of the page.
