
Monitoring slow queries in MySQL with Cloud Logging and Monitoring

Author(s): @jwlee98 @jpatokal, Published: 2019-07-10

Jungwoon Lee and Jani Patokallio | Google

Contributed by Google employees.

This tutorial describes how to log and monitor Cloud SQL for MySQL slow queries using Cloud Logging and Monitoring. This tutorial is intended for database administrators and DevOps personnel responsible for monitoring and maintaining MySQL databases.

The slow query log feature in MySQL enables you to log queries that exceed a predefined time limit. This greatly simplifies the task of finding inefficient or time-consuming queries. By integrating these logs with Cloud Logging and Monitoring, you can keep an eye on database performance, allowing you to identify which queries need optimization and, optionally, be alerted when there are too many slow queries.

[Architecture diagram: Cloud SQL for MySQL slow query logs flow into Cloud Logging, where log-based metrics feed Cloud Monitoring.]

This tutorial can also be used with self-managed MySQL instances running on Compute Engine.

See the appendix at the end of this document for instructions on how to configure a self-managed instance to send slow query logs to Cloud Logging.

Objectives

  • Learn how to enable slow query logs in Cloud SQL.
  • Learn how to access slow query logs in Cloud Logging.
  • Learn how to create log-based metrics for alerting and troubleshooting.

Costs

This tutorial uses the following billable components of Google Cloud:

  • Cloud SQL for MySQL
  • Cloud Logging
  • Cloud Monitoring

You can use the pricing calculator to generate a cost estimate based on your projected usage. For example, if your slow query log contains 10 distinct SQL statements every minute, generating about 0.033 MB of metrics per month, your total monthly cost is $0.00 if you are within the 150 MB free quota, or $0.78 if you have already exceeded it. For details, see this pricing estimate.

Turn on slow query logs for Cloud SQL for MySQL

Cloud SQL for MySQL is a fully managed database service that makes it easy to set up, maintain, manage, and administer your MySQL databases on Google Cloud. Cloud SQL comes with built-in Cloud Logging integration, so MySQL logs are available in the Logs Viewer.

  1. Open Cloud Shell and set a default zone:

    gcloud config set compute/zone us-central1-a
  2. Enable the SQL Admin API:

    gcloud services enable sqladmin.googleapis.com
  3. Create a test instance of Cloud SQL, with the appropriate MySQL configuration flags to enable slow query logs:

    gcloud sql instances create cloudsql001 --zone us-central1-a --root-password=[PASSWORD] --database-flags=log_output='FILE',slow_query_log=on,long_query_time=2

    Replace [PASSWORD] with a secure password of your choice.

The slow query log defaults to logging SQL statements that take more than two seconds. You can control this filtering by adjusting the value of the long_query_time flag.

By default, slow query logs are not enabled in MySQL. If you are using an existing database instance that does not have slow queries enabled, you can use gcloud sql instances patch to set the flags:

gcloud sql instances patch cloudsql001 --database-flags=log_output='FILE',slow_query_log=on,long_query_time=2

View the slow query log

To view the slow query logs in the Logging viewer, do the following:

  1. Go to the Logs Viewer page.
  2. From the leftmost dropdown, select Cloud SQL and the cloudsql001 database.
  3. From the All logs dropdown, select cloudsql.googleapis.com/mysql-slow.log.


Generate sample data with a load test

You now have slow query logs enabled, but your new database does not have any slow logs yet. In this section, you run a load test to create a test database and some slow queries to monitor.

If you are using an existing database that already has slow queries visible in the Logs Viewer, you can skip this section and proceed directly to the next section.

MySQL comes with a diagnostic tool called mysqlslap. This highly configurable tool is designed for benchmarking database performance. You will use this tool to perform a load test by simulating simultaneous connections to the database server.

  1. In Cloud Shell, clone an open source Git repository containing sample data provided by MySQL:

    git clone https://github.com/datacharmer/test_db
    cd test_db
  2. Import the employees database to your MySQL instance:

    gcloud sql connect cloudsql001 --user=root < employees.sql

    Enter the root password you selected earlier when prompted to do so.

    Expected output:

    Connecting to database with SQL user [root]. Enter password:
    storage engine: InnoDB
    LOADING departments
    LOADING employees
    LOADING dept_emp
    LOADING dept_manager
    LOADING titles
    LOADING salaries
  3. Create a file with a list of resource-intensive SQL queries.

    cat << EOF >select_query.sql
    use employees;
    SELECT * FROM salaries WHERE salary > 0 LIMIT 100000;
    SELECT departments.dept_name,dept_emp.from_date,employees.* FROM departments LEFT JOIN dept_emp ON (departments.dept_no = dept_emp.dept_no) LEFT JOIN employees ON (dept_emp.emp_no = employees.emp_no) WHERE to_date = '9999-01-01' LIMIT 100000;
    SELECT employees.* FROM employees LEFT JOIN dept_emp ON ( dept_emp.emp_no = employees.emp_no ) LEFT JOIN salaries ON ( salaries.emp_no = employees.emp_no ) WHERE employees.first_name LIKE '%Jo%' AND salaries.from_date > '1993-01-21' AND salaries.to_date < '1998-01-01' LIMIT 1000000;
    EOF
  4. Grant the public IP address of your Cloud Shell instance access to your Cloud SQL instance:

    gcloud sql instances patch cloudsql001 --authorized-networks=$(dig +short myip.opendns.com @resolver1.opendns.com)

    This command will overwrite and deauthorize any previously authorized networks. If you are using an existing Cloud SQL instance, include any other networks or IP addresses that need access.

    If your Cloud Shell instance times out and you reconnect, it may have a new public IP. Run this command again to re-enable access.

  5. Extract the public IP address of your Cloud SQL instance into an environment variable:

    MYSQL_IP=$(gcloud sql instances list --filter="name:cloudsql001" --format="value(PRIMARY_ADDRESS)")
  6. Start the load test, with mysqlslap issuing 50 queries in total (option --number-of-queries) from the file above over 5 concurrent connections (option --concurrency). Replace [PASSWORD] with your database root password.

    mysqlslap --no-defaults --user=root --password=[PASSWORD] --host=${MYSQL_IP} --concurrency=5 --number-of-queries=50 --create-schema=employees --query="./select_query.sql" --delimiter=";" --verbose --iterations=1

    Sample output:

        Average number of seconds to run all queries: 103.712 seconds
        Minimum number of seconds to run all queries: 103.712 seconds
        Maximum number of seconds to run all queries: 103.712 seconds
        Number of clients running queries: 5
        Average number of queries per client: 10

    The test may take several minutes.

  7. After the test has completed, return to the Cloud Logging Logs Viewer page.


Slow queries from the load test starting with SELECT should now be visible.

Create log-based metrics with Cloud Monitoring

Logs-based metrics are Cloud Monitoring metrics that are based on the content of log entries. For example, metrics can record the number of log entries containing particular messages, or they can extract latency information reported in log entries. In this section, you create a metric that measures the number of MySQL slow queries.

Filter the Cloud Logging view

To create a logs-based metric, start by filtering your Cloud Logging view to match the logs you want to measure:

  1. Click the small triangle at the far right side of the Filter by label or text search search box and choose Convert to advanced filter.

  2. Add the following row to limit the results to slow query log entries that contain SQL statements:


    If you are targeting MySQL on Compute Engine, the log format is different, so add textPayload:"# User@Host" instead.

  3. Click Submit Filter.


Create the log-based metric

  1. Click the Create Metric button at the top left to create a logs-based metric.

  2. Enter slow_sql for Name, and leave other fields at their default values.

  3. Click Create Metric.

Generate new data and view the new metric

Metrics only start recording data after they have been created.

  1. In Cloud Shell, rerun the earlier mysqlslap command to generate some new data:

    mysqlslap --no-defaults --user=root --password=[PASSWORD] --host=${MYSQL_IP} --concurrency=5 --number-of-queries=50 --create-schema=employees --query="./select_query.sql" --delimiter=";" --verbose --iterations=1

    The new metric is now visible under User-defined metrics.

  2. Click the three dots icon to the left of the user/slow_sql metric to open the menu, and then click View in Metrics Explorer to see the new metric in Cloud Monitoring.


    It may take several minutes for the resulting data to appear in Cloud Monitoring.


You now have a metric that measures the number of slow query logs. You can use this to, for example, trigger an alert if the number of slow queries exceeds a given threshold.

The y-axis of the chart measures the number of slow queries, not how long the slow queries took. To check the execution time of a single slow query, see the Query_time parameter in the slow log with Cloud Logging. You can also create a distribution metric on the regular expression Query_time: ([0-9]+\.[0-9]), which can be visualized as a heatmap.
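As a quick local sanity check, you can run that distribution regular expression over a sample log line with grep. The log line below is a made-up example in MySQL's standard slow-log header format:

```shell
# Hypothetical slow-log header line (standard MySQL slow query log format)
log_line='# Query_time: 11.272929  Lock_time: 0.000322 Rows_sent: 100000  Rows_examined: 2844047'

# -o prints only the part matched by the distribution pattern
echo "$log_line" | grep -oE 'Query_time: [0-9]+\.[0-9]'
```

Note that grep prints the whole match (Query_time: 11.2), while the distribution metric extracts only the captured numeric group (11.2) as the recorded value.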

Visualize slow query logs with Cloud Monitoring

The previously created metric measures the number of slow queries, but it does not show their content. You can make this metric more useful by parsing the slow SQL statements into metric labels, which are visible in the dashboard legend in Cloud Monitoring.

Return to the User-Defined Metrics view. Click the three dots icon to the left to open the menu, then select Edit metric. Choose Add item under Labels and enter the following values:

  • Name: sql
  • Label type: String
  • Field name: textPayload
  • Extraction regular expression: \b(SELECT.* WHERE|INSERT.* VALUES|UPDATE.* WHERE|CREATE.*|DELETE.*).*\z


This regular expression instructs Cloud Monitoring to find strings beginning with the common SQL operations SELECT, INSERT, UPDATE, CREATE, or DELETE. For CREATE and DELETE, text is extracted until the end of the line. For SELECT, INSERT, and UPDATE, text is extracted until a keyword (WHERE or VALUES) that is typically followed by values that are likely to change in every execution. This groups together similar queries.
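To preview what the label values will look like, you can approximate the extraction locally with grep -oE. The alternation below is the core of the pattern (grep's POSIX ERE does not support the \b and \z anchors, so they are dropped here); the SELECT comes from the load test, while the INSERT is a hypothetical example:

```shell
# Approximate the label extraction for two sample statements
pattern='SELECT.* WHERE|INSERT.* VALUES|UPDATE.* WHERE|CREATE.*|DELETE.*'

echo "SELECT * FROM salaries WHERE salary > 0 LIMIT 100000;" | grep -oE "$pattern"
echo "INSERT INTO salaries VALUES (10001, 60117, '1986-06-26', '1987-06-26');" | grep -oE "$pattern"
```

The first statement yields the label SELECT * FROM salaries WHERE and the second INSERT INTO salaries VALUES, so every variant of the same query (differing only in its values) is grouped under one label.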

To test the expression, click Build to open the Regular Expression Editor, which shows what will be extracted from the log. Matching parts of SQL statements are highlighted in blue; discarded parts are highlighted in yellow.


Click Done and then Update metric.

With this regular expression, every distinct SQL statement in the slow query log creates a new label value and a corresponding time series. The time series are sparse: data points are written only when matching slow queries occur, and you are charged for the ingested data points once you exceed your free quota. In addition, having too many time series for a single metric can cause throttling. You can raise the MySQL long_query_time setting or tighten the regular expression to reduce the number of labels created.

Click the three dots icon to the left to open the menu, and then click View in Metrics Explorer to see the updated metric in Cloud Monitoring.

Click Show Advanced Options and add sql in Legend Template:


Rerun the earlier mysqlslap command once more to generate some new data with labels:

mysqlslap --no-defaults --user=root --password=[PASSWORD] --host=${MYSQL_IP} --concurrency=5 --number-of-queries=50 --create-schema=employees --query="./select_query.sql" --delimiter=";" --verbose --iterations=1

The dashboard now shows slow MySQL queries, broken down by SQL statement. You can also hover over the graph to see the per-second query rate for each statement. This allows you to quickly identify the culprit when alerted about database slowness.


Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, run the following commands in Cloud Shell to delete the test database instance, the log-based metrics, and the test repository when you are finished:

gcloud sql instances delete cloudsql001
gcloud logging metrics delete slow_sql
rm -r ~/test_db

Appendix: Configuring self-managed MySQL on Compute Engine to send slow query logs to Cloud Logging

The following instructions let you send slow query logs from a previously deployed self-managed MySQL instance running on Compute Engine to Cloud Logging.

The commands below assume that you already have a VM containing a self-managed MySQL database, and that the VM is running Debian/Ubuntu. Commands and file locations may vary in other Linux distributions. You can quickly deploy a new VM for test purposes by launching a MySQL Google Click to Deploy instance from Marketplace.

Enable Cloud Logging

Capturing MySQL slow query logs in Cloud Logging requires installing the logging agent. Log into the VM running MySQL and check if the agent is already installed:

sudo service google-fluentd status

Sample output if already running:

google-fluentd.service - LSB: data collector for Treasure Data
Loaded: loaded (/etc/init.d/google-fluentd; generated; vendor preset: enabled)
Active: active (running) since Mon 2019-07-01 06:01:14 UTC; 2min 38s ago

If the agent is already running, skip to the next section. Otherwise, run the following commands:

curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh

Confirm that the google-fluentd process is active:

sudo service google-fluentd status

The log collection agent, which is based on Fluentd, comes with a collection of predefined configuration files for collecting the logs of various third-party applications, including MySQL. To start collecting MySQL logs, move the configuration into the /etc/google-fluentd folder:

cd /etc/google-fluentd
sudo cp config.d/mysql.conf mysql.conf

Restart the google-fluentd agent to read the updated configuration:

sudo service google-fluentd restart

Turn on MySQL slow query logs

By default, MySQL does not log slow queries. To enable them, connect to MySQL as root, entering the root password when prompted:

mysql -u root -p

Type the following commands at the mysql> prompt:

SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
SET GLOBAL long_query_time = 2;
SET GLOBAL slow_query_log = 'ON';

MySQL will now log all queries that take longer than 2 seconds. You can change this threshold by adjusting the value of long_query_time.

Confirm that slow query logs are now being written to the location where google-fluentd expects them:

sudo cat /var/log/mysql/mysql-slow.log

Expected output:

/usr/sbin/mysqld, Version: 5.7.26-log (MySQL Community Server (GPL)). started with:
Tcp port: 0 Unix socket: /var/run/mysqld/mysqld.sock
Time         Id Command  Argument

These settings are temporary and will be lost if MySQL is restarted. To persist them, locate your MySQL option file (typically at /etc/mysql/mysql.conf.d/mysqld.cnf), edit the file as root, and add or uncomment the following lines:

slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
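If you script this step as part of provisioning, a minimal idempotent sketch might look like the following. It is demonstrated here on a scratch file created with mktemp; to apply it for real, set CNF to your actual option file path (such as /etc/mysql/mysql.conf.d/mysqld.cnf) and run as root:

```shell
# Demonstration on a scratch file; point CNF at your real option file
# (as root) to apply the settings for real.
CNF=$(mktemp)
for line in 'slow_query_log = 1' \
            'slow_query_log_file = /var/log/mysql/mysql-slow.log' \
            'long_query_time = 2'; do
  # Append each setting only if it is not already present verbatim
  grep -qxF "$line" "$CNF" || echo "$line" >> "$CNF"
done
cat "$CNF"
```

Because each line is appended only when absent, re-running the loop leaves the file unchanged.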

You have now configured MySQL and Cloud Logging for slow query logs. Proceed to the "Generate sample data with a load test" section above.

