This tutorial shows how to create a Compute Engine VM instance running SQL Server that is optimized for performance. It walks you through creating the instance and then configuring SQL Server for optimal performance on Google Cloud, covering a number of configuration options that help you tune the system.
This tutorial uses SQL Server 2014 Standard Edition, so not every configuration option presented in this guide applies to every edition, and not all of them provide noticeable performance benefits for every workload.
In this tutorial, you will walk through:
- Setting up the Compute Engine instance and disks.
- Configuring the Windows operating system.
- Configuring SQL Server.
This tutorial uses billable components of Google Cloud, including:
- Compute Engine high-memory instance
- Compute Engine SSD persistent disk storage
- Compute Engine Local SSD disk storage
- SQL Server Standard preconfigured image
The Pricing Calculator can generate a cost estimate based on your projected usage. The products used in this tutorial can cost over 4 US dollars per hour and over 3,000 US dollars per month. New Google Cloud users might be eligible for a free trial.
Before you begin
Sign in to your Google Account.
If you don't already have one, sign up for a new account.
In the Cloud Console, on the project selector page, select or create a Google Cloud project.
Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
- If you aren't using Windows on your local machine, install a third-party RDP client such as Chrome RDP by FusionLabs.
Creating the Compute Engine instance and disks
Create the Compute Engine instance with SQL Server and two persistent disks.
A local SSD provides a high-performance location for tempdb and the Windows pagefile.
There are some important considerations to note when using a local SSD. When you shut down your instance from Windows or reset it by using the API, the local SSD is removed. This action renders the instance unbootable. To get the machine running again, you would need to detach your persistent disks, create a new instance with them, and then define a new local SSD. After startup you would also need to format the new disk and reboot. Therefore, you should not permanently store critical data on a local SSD, or power off the instance, unless you are prepared to rebuild it.
An SSD persistent disk provides high-performance storage for the database files.
SSD persistent disk performance is based on a calculation that uses the number of vCPUs and the size of the disk. With 32 vCPUs and a 1 TB disk, performance peaks at 40,000 read operations per second (IOPS) and 30,000 write IOPS. The total sustained throughput is 800 MB/s for reads and 400 MB/s for writes. These limits apply to the sum of all the persistent disks attached to the virtual machine, including the C:\ drive. This is why you should create a local SSD to offload the IOPS needed for the paging file, tempdb, staging data, and backups.
To read more about disk performance, see Block storage performance.
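The scaling described above can be sketched as a quick calculation. The per-GB rates and instance-level caps below are assumptions based on the figures quoted in this section; check the current Block storage performance documentation for authoritative numbers.

```python
# Illustrative model of SSD persistent disk limits: per-disk performance
# scales with size, capped by instance-level limits. The constants are
# assumptions taken from the figures quoted above, not authoritative values.

PER_GB_READ_IOPS = 30            # assumed SSD PD read IOPS per GB
PER_GB_WRITE_IOPS = 30           # assumed SSD PD write IOPS per GB
INSTANCE_READ_IOPS_CAP = 40_000  # instance-level read cap quoted above
INSTANCE_WRITE_IOPS_CAP = 30_000 # instance-level write cap quoted above

def ssd_pd_limits(size_gb):
    """Return (read_iops, write_iops) for an SSD persistent disk of size_gb."""
    read = min(size_gb * PER_GB_READ_IOPS, INSTANCE_READ_IOPS_CAP)
    write = min(size_gb * PER_GB_WRITE_IOPS, INSTANCE_WRITE_IOPS_CAP)
    return read, write

print(ssd_pd_limits(500))    # the 500 GB data disk used in this tutorial
print(ssd_pd_limits(2000))   # a large disk is limited by the instance caps
```

The takeaway is that a small disk is limited by its size, while a large disk hits the instance-level ceiling, which is shared across every persistent disk on the VM.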
Creating the Compute Engine instance
Create a VM that has SQL Server 2014 Standard preinstalled on Windows Server 2012 R2.
In the Google Cloud Console, go to the VM Instances page.
Click the Create instance button.
Name your instance "ms-sql-server".
Set Machine configuration to n1-highmem-16 (16 vCPUs, 104 GB memory).
In the Boot disk section, click Change to begin configuring your boot disk.
In the Application images tab, choose SQL Server 2014 Standard on Windows Server 2012 R2.
In the Boot disk type section, select Standard persistent disk.
In the Size (GB) section, set the boot disk size to 50 GB.
Expand Management, security, disks, networking, sole tenancy.
Under Additional disks, click Add new disk to create a new additional disk.
Leave the Name field unchanged.
Under Type, select Local SSD scratch disk (maximum 8).
Click Done to finish creating this disk.
Under Additional disks, click Add new disk again to create a second additional disk.
Leave the Name field unchanged.
Under Type, select SSD persistent disk.
Under Source type, select Blank disk.
Under Size (GB), enter 500.
Click Done to finish creating the second disk.
Click Create to create the instance.
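The console steps above can also be sketched with the gcloud CLI. This is a hedged example: the zone, image family, and disk name are assumptions chosen to match this tutorial, so verify them against the current gcloud reference before running it.

```shell
# Hypothetical gcloud equivalent of the console steps above.
# Verify the image family, project, and zone before use.
gcloud compute instances create ms-sql-server \
    --zone=us-central1-f \
    --machine-type=n1-highmem-16 \
    --image-family=sql-std-2014-win-2012-r2 \
    --image-project=windows-sql-cloud \
    --boot-disk-size=50GB \
    --boot-disk-type=pd-standard \
    --local-ssd=interface=SCSI \
    --create-disk=name=ms-sql-server-data,size=500GB,type=pd-ssd
```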
Now that you have a working instance running SQL Server, connect to your instance and configure the Windows operating system. After that, you learn to configure SQL Server in an upcoming section.
Connect to your instance
Go to the VM Instances page in the Cloud Console.
Under the Name column, click the name of your instance.
At the top of the instance's details page, click the Set Windows Password button.
Specify a username.
Click Set to generate a new password for this Windows instance.
Note the username and password so you can log into the instance.
Connect to your instance using RDP:
- If you installed Chrome RDP by FusionLabs, click the RDP button at the top of the instance's details page.
- If you're using a different RDP client, such as the Windows Remote Desktop Connection, click the RDP button's overflow menu and download the RDP file. Open the RDP file by using your client.
Setting up disk volumes
Create and format the volumes:
- From the Start menu, search for "Server Manager" and then open it.
- Select File and Storage Services and then select Disks.
The local SSD is named Google EphemeralDisk. Both the local SSD and the persistent SSD are marked as having an unknown partition layout, because they have not been formatted yet.
Right-click the 375 GB local SSD disk named Google EphemeralDisk, and then select New Volume.
Proceed with the defaults and choose P: for the drive letter, because this will be the paging-file disk.
When you get to the file-system-settings step, change the Allocation unit size to 8192 and enter "pagefile" for the Volume label.
Repeat the same steps for the second, SSD persistent disk, with the following three changes:
- Choose D: for the drive letter.
- Set the Allocation unit size to 32K. Microsoft recommends formatting the SQL Server data and log disks as 64K, but the persistent disk technology in Google Cloud aligns better with 32K. This setting also decreases the number of disk operations that count toward your persistent disk I/O limit.
- Enter "sqldata" for the Volume label.
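The Server Manager steps above can also be sketched with the Windows Storage cmdlets. The disk numbers (1 and 2) are assumptions; run Get-Disk first and confirm which number maps to which disk before formatting anything.

```powershell
# Hedged PowerShell sketch of the volume-creation steps above.
# Confirm disk numbers with Get-Disk before running.
Get-Disk | Format-Table Number, FriendlyName, Size, PartitionStyle

# Local SSD -> P:, 8K allocation units, labeled "pagefile"
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter P
Format-Volume -DriveLetter P -FileSystem NTFS -AllocationUnitSize 8192 -NewFileSystemLabel "pagefile"

# SSD persistent disk -> D:, 32K allocation units, labeled "sqldata"
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter D
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 32768 -NewFileSystemLabel "sqldata"
```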
Failed to mount path - Invalid Parameter error
If you encounter this error, follow these steps:
- Click Close.
- Click the refresh disks icon in the upper right.
- Click the 500 GB persistent disk in the list.
- In the Volumes panel, right-click the volume and then choose Manage Drive Letter and Access Paths.
- Select D: for the drive letter.
Moving the Windows paging file
Now that the new volumes are created and mounted, move the Windows paging file onto the local SSD, which frees up persistent disk IOPS and improves the access time of your virtual memory.
- From the Start menu, search for View advanced system settings and then open the dialog.
- Click the Advanced tab, and in the Performance section, click Settings.
- In the Virtual memory section, click the Change button.
- Uncheck the box Automatically manage paging file size for all drives. The system should have already set up your paging file on the C:\ drive, and you need to move it.
- Click C: and then click the No paging file radio button.
- Click the Set button.
- To create the new paging file, click the P: drive, and then click the System managed size radio button.
- Click the Set button.
- Click OK three times to exit the advanced system properties.
Microsoft Support has published additional tips for virtual memory settings.
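The dialog-based steps above can also be scripted through WMI. This is a hedged sketch using the standard Win32_ComputerSystem and Win32_PageFileSetting classes; test it on a non-production VM first, because a misconfigured paging file can affect boot behavior.

```powershell
# Hedged WMI sketch of the paging-file move described above.
# Disable automatic paging-file management.
$cs = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
$cs.AutomaticManagedPagefile = $false
$cs.Put()

# Remove the existing paging file on C:.
Get-WmiObject Win32_PageFileSetting |
    Where-Object { $_.Name -like 'C:*' } |
    ForEach-Object { $_.Delete() }

# Create a system-managed paging file on P:
# (InitialSize = 0 and MaximumSize = 0 means "system managed").
Set-WmiInstance -Class Win32_PageFileSetting -Arguments @{
    Name = 'P:\pagefile.sys'; InitialSize = 0; MaximumSize = 0
}
```

A reboot is required before the new paging file takes effect.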
Setting the power profile
Set the power profile to High Performance instead of the default Balanced plan.
- From the Start menu, search for "Choose a Power Plan", and then open the power options.
- Select the High Performance radio button.
- Exit the dialog.
Configuring SQL Server
Use SQL Server Management Studio to perform most administrative tasks. The preconfigured images for SQL Server 2014 come with Management Studio already installed, but if you are using the SQL Server 2016 image you need to download and install it manually. After installation, launch Management Studio and then click Connect to connect to the default database.
Moving the data and log files
The preconfigured image for SQL Server comes with everything installed on the C:\ drive, including the system databases. To optimize your setup, move those files to the new D:\ drive you created, and remember to create all new databases on the D:\ drive as well. Because you are using an SSD persistent disk, you do not need to store the data files and log files on separate disks.
There are two ways to move the installation to the secondary disk: using the installer or moving the files manually.
Using the installer
To use the installer, run c:\setup.exe and select a new installation path on your secondary disk.
Moving the files manually
Move the system databases and configure SQL Server to save the data and log files on the same volume:
- Create a new folder named D:\SQLData.
- Open a Command Window.
- Enter the following command to grant the SQL Server service account full access to the new folder:
icacls D:\SQLData /Grant "NT Service\MSSQLServer":(OI)(CI)F
If you plan on using Report Server features, move the ReportServer and ReportServerTempDB files as well.
After you move the master files and restart, you need to configure the system to point to the new location for the model and MSDB databases. Here is a helper script to run in Management Studio:
ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'D:\SQLData\model.mdf')
ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'D:\SQLData\modellog.ldf')
ALTER DATABASE msdb MODIFY FILE (NAME = MSDBData, FILENAME = 'D:\SQLData\MSDBData.mdf')
ALTER DATABASE msdb MODIFY FILE (NAME = MSDBlog, FILENAME = 'D:\SQLData\MSDBLog.ldf')
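The ALTER DATABASE script handles the model and MSDB databases, but the master database location is configured through the service startup parameters instead. The following is a sketch assuming the D:\SQLData folder created earlier: in SQL Server Configuration Manager, open the SQL Server service properties and set the startup parameters to point at the new files.

```
-dD:\SQLData\master.mdf
-lD:\SQLData\mastlog.ldf
```

The -d switch gives the path to the master data file and -l the path to its log file; both are standard SQL Server startup options.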
After you execute these commands:
- Use the services.msc snap-in to stop the SQL Server database service.
- Use the Windows file explorer to move the physical files from the C:\ drive, where the master database was located, to the new D:\SQLData folder.
- Start the SQL Server database service.
Setting system permissions
After moving the system databases, modify some additional settings, starting with permissions for the Windows user account created to run your SQL Server process, which is named NT Service\MSSQLSERVER.
Lock Pages in Memory permission
The group policy Lock Pages in Memory permission prevents Windows from moving pages in physical memory to virtual memory. To keep physical memory free and organized, Windows tries to swap old, rarely modified pages to the virtual-memory paging file on disk.
SQL Server stores important information in memory, such as table structures, execution plans, and cached queries. Some of this information rarely changes, so it becomes a target for the paging file. If this information gets moved to the paging file, SQL Server performance can degrade. Granting the Lock Pages in Memory permission to SQL Server's service account prevents this from happening.
Follow these steps:
- Click Start and then search for Edit Group Policy to open the console.
- Expand Local Computer Policy > Computer Configuration > Windows Settings > Local Policies > User Rights Assignment.
- Search for and then double-click Lock pages in memory.
- Click Add User or Group.
- Search for "NT Service\MSSQLSERVER".
- If you see multiple names, double-click the MSSQLSERVER name.
- Click OK twice.
- Keep the Group Policy Editor console open.
Perform volume maintenance tasks permission
By default, when an application requests a slice of disk space from Windows, the operating system locates an appropriately sized chunk of disk space, and then zeroes out the entire chunk of disk, before handing it back to the application. Because SQL Server is good at growing files and filling disk space, this behavior is not optimal.
There is a separate API for allocating disk space to an application, often referred to as instant file initialization. This setting only works for data files; log-file growth is covered in an upcoming section. Instant file initialization requires the service account running the SQL Server process to have another group policy permission, called Perform volume maintenance tasks.
- In the Group Policy Editor, search for "Perform volume maintenance tasks".
- Add the "NT Service\MSSQLSERVER" account as you did in the previous section.
- Restart the SQL Server process to activate both settings.
Moving the TempDB files
It used to be a best practice to optimize SQL Server CPU usage by creating one TempDB file per CPU. However, because CPU counts have grown over time, following this guideline can cause performance to decrease. As a good starting point, use four TempDB files. As you measure your system's performance, in rare cases you might need to incrementally increase the number of TempDB files, to a maximum of eight.
You can run a T-SQL script inside SQL Server Management Studio to move the TempDB files to a folder on the P: drive:
- Create the directory p:\tempdb.
- Grant full security access to the "NT Service\MSSQLSERVER" user account:
icacls p:\tempdb /Grant "NT Service\MSSQLServer":(OI)(CI)F
Run the following script inside SQL Server Management Studio to move the TempDB data file and log file:
USE Master
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME = tempdev, FILENAME = 'p:\tempdb\tempdb.mdf')
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME = templog, FILENAME = 'p:\tempdb\templog.ldf')
GO
Restart SQL Server.
Run the following script to modify the file sizes and create three additional data files for the new TempDB location:
ALTER DATABASE [tempdb] MODIFY FILE (NAME = tempdev, FILENAME = 'p:\tempdb\tempdb.mdf', SIZE = 8GB)
ALTER DATABASE [tempdb] MODIFY FILE (NAME = templog, FILENAME = 'p:\tempdb\templog.ldf', SIZE = 2GB)
ALTER DATABASE [tempdb] ADD FILE (NAME = 'tempdev1', FILENAME = 'p:\tempdb\tempdev1.ndf', SIZE = 8GB, FILEGROWTH = 0);
ALTER DATABASE [tempdb] ADD FILE (NAME = 'tempdev2', FILENAME = 'p:\tempdb\tempdev2.ndf', SIZE = 8GB, FILEGROWTH = 0);
ALTER DATABASE [tempdb] ADD FILE (NAME = 'tempdev3', FILENAME = 'p:\tempdb\tempdev3.ndf', SIZE = 8GB, FILEGROWTH = 0);
GO
If you use SQL Server 2016, there are three additional TempDB files to remove after you complete the previous steps:
ALTER DATABASE [tempdb] REMOVE FILE temp2; ALTER DATABASE [tempdb] REMOVE FILE temp3; ALTER DATABASE [tempdb] REMOVE FILE temp4;
Restart SQL Server again, and then delete the old tempdb files from the original location on the C:\ drive.
You successfully moved your TempDB files onto the local SSD partition. This move carries some risks, mentioned earlier, but if the files are lost for any reason, SQL Server rebuilds them. Moving TempDB gives you the added performance of the local SSD and decreases the IOPS used on your SSD persistent disk.
max degree of parallelism
The recommended default setting for max degree of parallelism is to match it to the number of CPUs on the server. However, there is a point where executing a query in 16 or 32 parallel chunks and merging the results is much slower than running it in a single process. If you are using a 16- or 32-core instance, you can set the max degree of parallelism value to 8 by using the following T-SQL:
USE Master
GO
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sp_configure 'max degree of parallelism', 8
GO
RECONFIGURE WITH OVERRIDE
GO
max server memory
This setting defaults to a very high number, but you should set it to the amount of available physical RAM in megabytes, minus a couple of gigabytes for the operating system and overhead. The following T-SQL example adjusts max server memory to 100 GB. Modify the value to match your instance. Review the Server memory server configuration options document for more information.
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sp_configure 'max server memory', 100000
GO
RECONFIGURE WITH OVERRIDE
GO
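The guideline above (physical RAM minus headroom for the operating system) can be sketched as a quick calculation. The 4 GB default headroom here is an assumption, not an official figure; adjust it for your workload.

```python
def max_server_memory_mb(physical_ram_gb, os_headroom_gb=4):
    """Suggested 'max server memory' in MB: physical RAM minus OS headroom.

    The 4 GB default headroom is an assumption for illustration; tune it
    based on what else runs on the instance.
    """
    return (physical_ram_gb - os_headroom_gb) * 1024

# n1-highmem-16 has 104 GB of RAM
print(max_server_memory_mb(104))  # 102400
```

The tutorial's example value of 100000 is simply a round number in the same range; either way, the goal is to stop SQL Server from starving the operating system of memory.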
Restart the instance one more time to make sure all of the new settings take effect. Your SQL Server system is configured and you are ready to create your own databases and start testing your specific workloads. Review the SQL Server Best Practices guide for more information on operational activities, other performance considerations, and Enterprise Edition capabilities.
After you've finished the SQL Server tutorial, you can clean up the resources that you created on GCP so they won't take up quota and you won't be billed for them in the future. The following sections describe how to delete or turn off these resources.
Deleting the project
The easiest way to eliminate billing is to delete the project that you created for the tutorial.
To delete the project:
- In the Cloud Console, go to the Manage resources page.
- In the project list, select the project you want to delete and click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Deleting the instance
To delete a Compute Engine instance:
- In the Cloud Console, go to the VM Instances page.
- Click the checkbox for the instance you want to delete.
- Click Delete to delete the instance.
Deleting persistent disks
To delete a persistent disk:
In the Cloud Console, go to the Disks page.
Select the checkbox next to the name of the disk you want to delete.
Click the Delete button at the top of the page.