Deploying a multi-subnet SQL Server 2016 Always On availability group on Compute Engine

This tutorial is the second part of a series that helps you deploy a highly available Windows environment on Google Cloud with Microsoft Active Directory, SQL Server 2016, and Internet Information Services (IIS). In this tutorial, you set up two SQL Server instances in different zones of the same region and configure them as a multi-subnet Always On SQL Server availability group.

Each tutorial in the series builds on the infrastructure that you create in the preceding one.

Objectives

  • Create two network subnets spanning two zones.
  • Deploy a Microsoft SQL Server 2016 virtual machine (VM) in each zone.
  • Configure each SQL Server instance to join your Active Directory domain.
  • Create a Windows Server failover cluster.
  • Create and configure a SQL Server bookshelf sample database.
  • Set up the SQL Server Always On availability group.

Costs

In this tutorial, you continue using Google Cloud resources that you configured in earlier tutorials. You use the following billable component:

  • Compute Engine

The Pricing Calculator estimates the cost of this environment at around $98 per day, including the SQL Server 2016 Enterprise licenses and the two-server Active Directory environment.

Before you begin

The configuration that you use in this tutorial requires you to have access to a Windows domain controller and a working Active Directory domain. If you don't already have this environment, complete the preceding tutorial in this series, which sets up the two-server Active Directory environment, before you continue.

Initializing common variables

You must define several environment variables that control where elements of the infrastructure are deployed.

  1. Using a text editor, create a script that sets the following environment variables to specify your project ID, along with the region and zones you want to use.

    The variables set the region to us-east1. If you used a different region in the previous tutorial, change the region in this script to match what you used before.

    region=us-east1
    zone_1=${region}-b
    zone_2=${region}-c
    vpc_name=webappnet
    project_id=your-project-id
    

    Replace your-project-id with the ID of the Cloud project that you're using.

  2. In the Cloud Console, open Cloud Shell.

  3. Copy the script into your Cloud Shell window and run it.

  4. Set the default region and project ID so you don't have to specify these values in every subsequent command:

    gcloud config set compute/region ${region}
    gcloud config set project ${project_id}
    

Creating the network infrastructure

After you've defined the infrastructure variables, you create the network subnets that SQL Server uses.

  1. Add new subnets to your existing virtual private cloud (VPC) network:

    gcloud compute networks subnets create private-sql-1 \
        --network ${vpc_name} \
        --range 10.3.0.0/24
    
    gcloud compute networks subnets create private-sql-2 \
        --network ${vpc_name} \
        --range 10.4.0.0/24
    
  2. Create an internal firewall rule to allow traffic between the private-ad* subnets that you created previously and the new private-sql* subnets:

    gcloud compute firewall-rules create allow-internal-ports-private-sql \
        --network ${vpc_name} \
        --allow tcp:0-65535,udp:0-65535,icmp \
        --source-ranges 10.4.0.0/24,10.3.0.0/24,10.2.0.0/24,10.1.0.0/24
    

Creating SQL Server instances

Next you create two SQL Server 2016 Enterprise edition VMs by using Compute Engine preconfigured images with the following properties:

  • Names: bookshelf-sql1 and bookshelf-sql2
  • Boot disk size: 200GB
  • Machine type: n1-highmem-4

If you intend to use this environment beyond this tutorial and require additional compute resources, you can customize the machine type for these instances later. You can also add disks and resize the existing persistent disks.

In this tutorial, you use a SQL Server on Windows Server public image for your VMs, which means that you are billed for SQL Server usage. Alternatively, you can bring your existing SQL Server licenses to Compute Engine by using License Mobility through Software Assurance, and apply those licenses after you install SQL Server on one of the Windows Server public images.

  1. Create a Compute Engine instance of Windows Server 2016 with SQL Server 2016 Enterprise:

    gcloud compute instances create bookshelf-sql1 \
        --machine-type n1-highmem-4 \
        --boot-disk-type pd-ssd \
        --boot-disk-size 200GB \
        --image-project windows-sql-cloud \
        --image-family sql-ent-2016-win-2016 \
        --zone ${zone_1} \
        --network-interface=network=${vpc_name},subnet=private-sql-1,private-network-ip=10.3.0.2,aliases=10.3.0.4/31
    
  2. Create the second instance:

    gcloud compute instances create bookshelf-sql2 \
        --machine-type n1-highmem-4 \
        --boot-disk-type pd-ssd \
        --boot-disk-size 200GB \
        --image-project windows-sql-cloud \
        --image-family sql-ent-2016-win-2016 \
        --zone ${zone_2} \
        --network-interface=network=${vpc_name},subnet=private-sql-2,private-network-ip=10.4.0.2,aliases=10.4.0.4/31
    

Configure the bookshelf-sql2 instance

Next you configure the bookshelf-sql2 instance. You start with the second instance because additional work is required to set up the database on bookshelf-sql1, and configuring bookshelf-sql2 first requires fewer Remote Desktop Protocol (RDP) sessions.

  1. Create a password for the Windows instance bookshelf-sql2:

    gcloud compute reset-windows-password bookshelf-sql2 \
        --zone ${zone_2} --quiet
    

    The username is your Google account username. Note the username and password for future use.

  2. Use RDP to connect to the SQL Server instance by using the credentials you previously created.

  3. In the instance, open a PowerShell terminal as Administrator. (Click Start, type PowerShell, and then press Control+Shift+Enter.)

  4. Set the following variables:

    $DomainName = "example-gcp.com"
    $DNS1 = "10.1.0.100"
    $DNS2 = "10.2.0.100"
    $LocalStaticIp = "10.4.0.2"
    $DefaultGateway = "10.4.0.1"
    
  5. Configure the primary DNS server:

    netsh interface ip set dns Ethernet static $DNS1
  6. Configure the secondary DNS server:

    netsh interface ip add dns Ethernet $DNS2 index=2
  7. Set the IP address and default gateway:

    netsh interface ip set address name=Ethernet static $LocalStaticIp 255.255.255.0 $DefaultGateway 1
  8. Add the VM to the Windows domain:

    Add-Computer -DomainName $DomainName `
        -Credential (Get-Credential "example-gcp\Administrator")
    
  9. When you're prompted to provide a password for the Administrator account, use the Administrator credentials that you defined when you installed an Active Directory forest in the previous tutorial. For the username, add the domain name as a prefix, as in EXAMPLE-GCP\Administrator.

  10. Open Windows firewall ports for the SQL Server availability group:

    netsh advfirewall firewall add rule name="5022 for Avail Groups" dir=in action=allow protocol=TCP localport=5022
    netsh advfirewall firewall add rule name="1433 for SQL Server" dir=in action=allow protocol=TCP localport=1433
    
  11. Install the Failover Clustering feature:

    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
  12. Reboot the instance:

    Restart-Computer
  13. Use RDP to connect to the SQL Server instance by using the credentials for the EXAMPLE-GCP\Administrator account.

  14. Open a PowerShell console window as Administrator.

  15. In the PowerShell console window, create the folders for your SQL Server data and log files:

    New-Item -ItemType directory -Path C:\SQLData
    New-Item -ItemType directory -Path C:\SQLLog
    
  16. Create a new folder for database backups and share it:

    New-Item -ItemType directory -Path C:\SQLBackup
    New-SMBShare -Name SQLBackup -Path C:\SQLBackup -FullAccess "Authenticated Users"
    
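
Optionally, before you move on, you can confirm from this instance that the domain join, DNS settings, and backup share are in place. The following is a minimal check, assuming the example-gcp.com domain and the SQLBackup share that you configured in the preceding steps:

    # Confirm that the computer is joined to the Active Directory domain.
    Get-CimInstance Win32_ComputerSystem | Select-Object Name, Domain

    # Confirm that the domain name resolves through the configured DNS servers.
    Resolve-DnsName example-gcp.com

    # Confirm that the SQLBackup share is exported with the expected path.
    Get-SmbShare -Name SQLBackup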

Configure the bookshelf-sql1 instance

Next you configure the bookshelf-sql1 instance.

  1. In Cloud Shell, create a password for the Windows instance bookshelf-sql1:

    gcloud compute reset-windows-password bookshelf-sql1 \
        --zone ${zone_1} --quiet

    The username is your Google account username. Note the username and password for future use.

  2. Use RDP to connect to the SQL Server instance by using the credentials you created for the bookshelf-sql1 instance.

  3. Open a PowerShell console window as Administrator.

  4. In the PowerShell console window, set the following variables:

    $DomainName = "example-gcp.com"
    $DNS1 = "10.1.0.100"
    $DNS2 = "10.2.0.100"
    $LocalStaticIp = "10.3.0.2"
    $DefaultGateway = "10.3.0.1"
    
  5. Configure the primary DNS server:

    netsh interface ip set dns Ethernet static $DNS1
  6. Configure the secondary DNS server:

    netsh interface ip add dns Ethernet $DNS2 index=2
  7. Set the IP address and default gateway:

    netsh interface ip set address name=Ethernet static $LocalStaticIp 255.255.255.0 $DefaultGateway 1
  8. Add the VM to the domain:

    Add-Computer -DomainName $DomainName `
        -Credential (Get-Credential "example-gcp\Administrator")
    
  9. When you're prompted to provide a password for the Administrator account, use the Administrator credentials that you defined when you installed an Active Directory forest in the previous tutorial. For the username, add the domain name as a prefix, as in EXAMPLE-GCP\Administrator.

  10. Add Windows firewall rules:

    netsh advfirewall firewall add rule name="5022 for Avail Groups" dir=in action=allow protocol=TCP localport=5022
    netsh advfirewall firewall add rule name="1433 for SQL Server" dir=in action=allow protocol=TCP localport=1433
    
  11. Install the Failover Clustering feature:

    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
  12. Reboot the instance:

    Restart-Computer
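
Optionally, after the instance restarts and you reconnect, you can confirm that the clustering prerequisites are in place. This is a minimal check that uses the firewall rule names from the preceding steps:

    # Confirm that the Failover Clustering feature is installed.
    Get-WindowsFeature Failover-Clustering

    # Confirm that the firewall rules for the availability group and SQL Server are enabled.
    Get-NetFirewallRule -DisplayName "5022 for Avail Groups", "1433 for SQL Server" |
        Select-Object DisplayName, Enabled, Direction, Action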

Configure the Windows Server failover cluster

  1. Use RDP to connect to the bookshelf-sql1 instance. When you're prompted to provide a password for the Administrator account, use the Administrator credentials you defined when installing an Active Directory forest in the previous tutorial. For the username, add the domain name as a prefix, as in EXAMPLE-GCP\Administrator.

  2. Open a PowerShell console window as Administrator.

  3. In the PowerShell console window, set the following variables:

    $node1 = "bookshelf-sql1"
    $node2 = "bookshelf-sql2"
    
  4. Set the name of the cluster:

    $nameWSFC = "bookshelf-dbclus"
  5. Set the IP addresses for the cluster and for the availability group listener:

    $ipWSFC1 = "10.3.0.4"
    $ipWSFC2 = "10.4.0.4"
    $ipWSListener1 = "10.3.0.5"
    $ipWSListener2 = "10.4.0.5"
    
  6. Make sure that the bookshelf-sql1 SQL Server instance is ready to join the cluster. Run the following script, which tries to connect to the SQL Server instance:

    $SQLServer1IsReady=$False
    For ($i=0; $i -le 30; $i++) {
        $SqlCatalog = "master"
        $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
        $SqlConnection.ConnectionString = "Server = $node1;" + `
             "Database = $SqlCatalog; Integrated Security = True"
        try {
            $SqlConnection.Open()
            Write-Host "Connection to the server $node1 was successful"
            $SQLServer1IsReady=$True
            $SqlConnection.Close()
            break
        }
        catch {
            Write-Host "SQL server $node1 is not ready, waiting for 60s"
            Start-Sleep -s 60 #Wait for 60 seconds
        }
    }
    if($SQLServer1IsReady -eq $False) {
        Write-Error "$node1 is not responding. Was it deployed correctly?"
    }
    
  7. Make sure that the bookshelf-sql2 SQL Server instance is ready to join the cluster:

    $SQLServer2IsReady=$False
    For ($i=0; $i -le 30; $i++) {
        $SqlCatalog = "master"
        $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
        $SqlConnection.ConnectionString = "Server = $node2;" + `
            "Database = $SqlCatalog; Integrated Security = True"
        try {
            $SqlConnection.Open()
            Write-Host "Connection to the server $node2 was successful"
            $SQLServer2IsReady=$True
            $SqlConnection.Close()
            break
        }
        catch {
            Write-Host "SQL server $node2 is not ready, waiting for 60s"
            Start-Sleep -s 60 #Wait for 60 seconds
        }
    }
    if($SQLServer2IsReady -eq $False) {
        Write-Error "$node2 is not responding. Was it deployed correctly?"
    }
    
  8. Create the new cluster:

    New-Cluster -Name $nameWSFC -Node $node1, $node2 -NoStorage `
        -StaticAddress $ipWSFC1, $ipWSFC2
    
  9. Enable the Always On availability groups feature on both nodes:

    Enable-SqlAlwaysOn -ServerInstance $node1 -Force
    Enable-SqlAlwaysOn -ServerInstance $node2 -Force
    
  10. Create folders for the database data and log files:

    New-Item -ItemType directory -Path C:\SQLData
    New-Item -ItemType directory -Path C:\SQLLog
    
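
Before you set up the database, you can optionally confirm that the cluster formed correctly and that both nodes are online. The following is a quick check that uses the FailoverClusters cmdlets installed with the feature:

    # Confirm that the cluster exists and that both nodes report as Up.
    Get-Cluster -Name $nameWSFC
    Get-ClusterNode -Cluster $nameWSFC | Select-Object Name, State

    # Confirm that the cluster name and IP address resources are online.
    Get-ClusterResource -Cluster $nameWSFC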

Set up the database

This tutorial uses the bookshelf example database, which you create in order to test that your configuration is working correctly.

  1. In the PowerShell console window where you've been working, run the following SQL statement to create the bookshelf database:

    $CreateDatabaseSQLScript =  @"
    CREATE DATABASE bookshelf ON PRIMARY
        (NAME = 'bookshelf_Data', FILENAME='C:\SQLData\bookshelf_data.mdf', SIZE = 256MB, MAXSIZE = UNLIMITED, FILEGROWTH = 256MB)
        LOG ON (NAME = 'bookshelf_log', FILENAME='C:\SQLLog\bookshelf_log.ldf', SIZE = 256MB, MAXSIZE = UNLIMITED, FILEGROWTH = 256MB)
    GO
    USE [bookshelf]
    Exec dbo.sp_changedbowner @loginame = 'sa', @map = false;
    ALTER DATABASE [bookshelf] SET RECOVERY FULL;
    GO
    BACKUP DATABASE bookshelf TO DISK = '\\bookshelf-sql2\SQLBackup\bookshelf.bak' WITH INIT
    GO
    "@
    Invoke-Sqlcmd -Query $CreateDatabaseSQLScript -ServerInstance $node1
    
  2. Run the following SQL statement to create the Books table:

    $CreateTableSQLScript =  @"
    USE [bookshelf]
    GO
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[Books] (
                [Id] [bigint] IDENTITY(1,1) NOT NULL,
                [Title] [nvarchar](max) NOT NULL,
                [Author] [nvarchar](max) NULL,
                [PublishedDate] [datetime] NULL,
                [ImageUrl] [nvarchar](max) NULL,
                [Description] [nvarchar](max) NULL,
                [CreatedById] [nvarchar](max) NULL,
    CONSTRAINT [PK_dbo.Books] PRIMARY KEY CLUSTERED
        ([Id] ASC) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
    IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    "@
    Invoke-Sqlcmd -Query $CreateTableSQLScript -ServerInstance $node1
    
  3. Set the following variables:

    $Domain = "example-gcp.com"
    $ADControllerFQDN = "ad-dc1.$Domain"
    $ServiceAccountName = "sql.service"
    $ServiceAccountPrincipalName = "$ServiceAccountName@$Domain"
    
  4. Install the RSAT-AD-PowerShell feature, which provides the Active Directory module for Windows PowerShell:

    Add-WindowsFeature RSAT-AD-PowerShell

    You need Remote Server Administration Tools in order to create a service account from the VM.

  5. Add a database service account named sql.service within the domain:

    New-ADUser -Name $ServiceAccountName `
        -Server $ADControllerFQDN `
        -SamAccountName $ServiceAccountName `
        -UserPrincipalName $ServiceAccountPrincipalName `
        -AccountPassword (Read-Host -AsSecureString "AccountPassword") `
        -PassThru | Enable-ADAccount
    
  6. When you are prompted to enter the password for the sql.service account, use a strong password and store the password in a safe location for future use.

  7. Run the following SQL statement to add the EXAMPLE-GCP\sql.service account as a database administrator:

    $AddSQLServiceAccountScript =  @"
    USE [master]
    GO
        CREATE LOGIN [EXAMPLE-GCP\sql.service] FROM WINDOWS WITH
    DEFAULT_DATABASE=[bookshelf], DEFAULT_LANGUAGE=[us_english]
    GO
    USE [bookshelf]
    GO
    CREATE USER [EXAMPLE-GCP\sql.service] FOR LOGIN [EXAMPLE-GCP\sql.service]
    GO
    USE [bookshelf]
    GO
    ALTER ROLE [db_accessadmin] ADD MEMBER [EXAMPLE-GCP\sql.service]
    GO
    USE [bookshelf]
    GO
    ALTER ROLE [db_backupoperator] ADD MEMBER [EXAMPLE-GCP\sql.service]
    GO
    USE [bookshelf]
    GO
    ALTER ROLE [db_datareader] ADD MEMBER [EXAMPLE-GCP\sql.service]
    GO
    USE [bookshelf]
    GO
    ALTER ROLE [db_datawriter] ADD MEMBER [EXAMPLE-GCP\sql.service]
    GO
    USE [bookshelf]
    GO
    ALTER ROLE [db_ddladmin] ADD MEMBER [EXAMPLE-GCP\sql.service]
    GO
    USE [bookshelf]
    GO
    ALTER ROLE [db_owner] ADD MEMBER [EXAMPLE-GCP\sql.service]
    GO
    USE [bookshelf]
    GO
    ALTER ROLE [db_securityadmin] ADD MEMBER [EXAMPLE-GCP\sql.service]
    GO
    "@
    Invoke-Sqlcmd -Query $AddSQLServiceAccountScript -ServerInstance $node1
    
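
Optionally, you can verify that the database, the Books table, and the service account were created as expected. The following is a minimal check that reuses the variables set earlier in this session:

    # Confirm that the sql.service account exists in Active Directory.
    Get-ADUser -Identity $ServiceAccountName -Server $ADControllerFQDN

    # Confirm that the bookshelf database uses the full recovery model and that the Books table is queryable.
    Invoke-Sqlcmd -Query "SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'bookshelf'" -ServerInstance $node1
    Invoke-Sqlcmd -Query "SELECT COUNT(*) AS BookCount FROM dbo.Books" -Database bookshelf -ServerInstance $node1

    # Confirm that the sql.service login was added to the SQL Server instance.
    Invoke-Sqlcmd -Query "SELECT name FROM sys.server_principals WHERE name = 'EXAMPLE-GCP\sql.service'" -ServerInstance $node1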

Set up the SQL Server availability group

  1. While still connected to the bookshelf-sql1 instance, in the PowerShell session where you've been working, set the following variables:

    $Database="bookshelf"
    $BackupFolder="SQLBackup"
    $SharedLocation="\\$node2\$BackupFolder"
    $ListenerName="sql-listener"
    $AGName="MainAG"
    $PrimaryServer = "$node1.$Domain\MSSQLSERVER"
    $SecondaryServer = "$node2.$Domain\MSSQLSERVER"
    
  2. Back up the database on the primary server:

    Backup-SqlDatabase `
        -Database $Database -Initialize `
        -BackupFile "$SharedLocation\$Database.bak" `
        -ServerInstance $node1
    
  3. Restore the database on the secondary server:

    Restore-SqlDatabase `
        -Database $Database `
        -BackupFile "$SharedLocation\$Database.bak" `
        -ServerInstance $node2 `
        -NORECOVERY
    
  4. Back up the database log on the primary server:

    Backup-SqlDatabase `
        -Database $Database -Initialize `
        -BackupFile "$SharedLocation\$Database.log" `
        -ServerInstance $node1 `
        -BackupAction Log
    
  5. Restore the database log on the secondary server:

    Restore-SqlDatabase `
        -Database $Database `
        -BackupFile "$SharedLocation\$Database.log" `
        -ServerInstance $node2 `
        -RestoreAction Log `
        -NORECOVERY
    
  6. Create and then start the endpoints:

    $endpoint1=New-SqlHADREndpoint -Port 5022 -Owner sa `
        -Encryption Supported -EncryptionAlgorithm Aes `
        -Name AlwaysonEndpoint1 `
        -Path "SQLSERVER:\SQL\$node1.$Domain\Default"
    Set-SqlHADREndpoint -InputObject $endpoint1 -State Started
    $endpoint2=New-SqlHADREndpoint -Port 5022 -Owner sa `
        -Encryption Supported -EncryptionAlgorithm Aes `
        -Name AlwaysonEndpoint2 `
        -Path "SQLSERVER:\SQL\$node2.$Domain\Default"
    Set-SqlHADREndpoint -InputObject $endpoint2 -State Started
    

    The output is similar to the following:

    AlwaysonEndpoint1       STARTED            5022 - for node1
    AlwaysonEndpoint2       STARTED            5022 - for node2
    
  7. Verify the configuration by running the following commands in PowerShell:

    $node1 = "bookshelf-sql1"
    $node2 = "bookshelf-sql2"
    Invoke-Sqlcmd -Query "select name, state_desc, port FROM sys.tcp_endpoints" -ServerInstance $node1
    Invoke-Sqlcmd -Query "select name, state_desc, port FROM sys.tcp_endpoints" -ServerInstance $node2
    

    If the configuration is correct, you see output like the following:

    name                       state_desc port
    ----                       ---------- ----
    Dedicated Admin Connection STARTED       0
    TSQL Default TCP           STARTED       0
    AlwaysonEndpoint1          STARTED    5022
    Dedicated Admin Connection STARTED       0
    TSQL Default TCP           STARTED       0
    AlwaysonEndpoint2          STARTED    5022
    
  8. Set the endpoint URLs:

    $EndpointUrlSQLServer1="TCP://" + $node1 + "." + $Domain + ":5022"
    $EndpointUrlSQLServer2="TCP://" + $node2 + "." + $Domain + ":5022"
    
  9. Create an in-memory representation of the replicas:

    $PrimaryReplica = New-SqlAvailabilityReplica -Name $node1 `
        -EndpointUrl $EndpointUrlSQLServer1 `
        -FailoverMode "Automatic" `
        -AvailabilityMode "SynchronousCommit" `
        -AsTemplate -Version 13
    $SecondaryReplica = New-SqlAvailabilityReplica -Name $node2 `
        -EndpointUrl $EndpointUrlSQLServer2 `
        -FailoverMode "Automatic" `
        -AvailabilityMode "SynchronousCommit" `
        -AsTemplate -Version 13
    
  10. Create the availability group:

    New-SqlAvailabilityGroup -Name $AGName `
        -AvailabilityReplica @($PrimaryReplica, $SecondaryReplica) `
        -Path "SQLSERVER:\SQL\$node1.$Domain\Default" `
        -Database $Database
    
  11. Join the secondary instance to the availability group:

    Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\$node2\Default" -Name $AGName
  12. Join the secondary database to the availability group:

    Add-SqlAvailabilityDatabase `
        -Path "SQLSERVER:\SQL\$node2\Default\AvailabilityGroups\$AGName" `
        -Database $Database
    
  13. Create the listener:

    New-SqlAvailabilityGroupListener -Name $ListenerName `
        -Port 1433 `
        -StaticIp @("$ipWSListener1/255.255.255.0","$ipWSListener2/255.255.255.0") `
        -Path SQLSERVER:\Sql\$node1\Default\AvailabilityGroups\$AGName
    
  14. Remove the shared folder, which is no longer needed:

    For ($i=0; $i -le 30; $i++) {
        try {
            Remove-Item \\$node2\c$\$BackupFolder -Force -Recurse -ErrorAction Stop
            Write-Host "Shared folder $SharedLocation was removed"
            break
        }
        catch {
            Write-Host "\\$node2\c$\$BackupFolder is in use; waiting 60 seconds before retrying"
            Start-Sleep -s 60
        }
    }
    
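
The availability group is now in place. Optionally, you can verify its health and confirm that the listener accepts connections. The following is a minimal check that reuses the $ListenerName variable set earlier in this session:

    # Confirm that the listener name resolves and accepts connections on port 1433.
    Test-NetConnection -ComputerName $ListenerName -Port 1433

    # Check the role and synchronization health of both replicas by querying through the listener.
    Invoke-Sqlcmd -ServerInstance $ListenerName -Query "SELECT r.replica_server_name, s.role_desc, s.synchronization_health_desc FROM sys.dm_hadr_availability_replica_states AS s JOIN sys.availability_replicas AS r ON s.replica_id = r.replica_id"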

Cleaning up

If you want to continue to the next tutorial in this series (Deploying load-balanced IIS web servers), keep the resources that you created in this tutorial. However, if you don't intend to use the environment that you created in this tutorial, go ahead and clean up the resources you created on Google Cloud so you won't be billed for them. The following sections describe how to delete or turn off these resources.

Deleting the project

  1. In the Cloud Console, go to the Manage resources page.

  2. In the project list, select the project that you want to delete, and then click Delete.

  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Deleting instances

If you don't want to delete your project entirely, you can delete the specific resources you created.

To delete a Compute Engine instance:

  1. In the Cloud Console, go to the VM instances page.

  2. Select the instances you want to delete.

  3. At the top of the page, click Delete to delete the instances.

Deleting VPC networks

To delete the VPC network, subnets, and firewall rules:

  1. In the Cloud Console, go to the VPC networks page.

  2. Select the VPC network you created.

  3. At the top of the page, click Delete to delete the network.

What's next