Instances: demoteMaster

Requires authorization

Demotes the stand-alone instance to be a Cloud SQL read replica for an external database server.

Request

HTTP request

POST https://www.googleapis.com/sql/v1beta4/projects/project/instances/instance/demoteMaster

Parameters

Parameter name Value Description
Path parameters
instance string Cloud SQL instance name.
project string ID of the project that contains the instance.

Authorization

This request requires authorization with at least one of the following scopes (read more about authentication and authorization).

Scope
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/sqlservice.admin
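
As a minimal sketch, credentials carrying one of these scopes can be loaded with the google-auth library and used to build a Cloud SQL Admin API client with the Google API Python client. The key file path below is a placeholder; any authorization flow that yields one of the scopes above works equally well.

from google.oauth2 import service_account
from googleapiclient import discovery

# Load service account credentials with the Cloud SQL admin scope.
# "key.json" is a placeholder for your service account key file.
credentials = service_account.Credentials.from_service_account_file(
    "key.json",
    scopes=["https://www.googleapis.com/auth/sqlservice.admin"],
)

# Build a client for the Cloud SQL Admin API, version v1beta4.
service = discovery.build("sqladmin", "v1beta4", credentials=credentials)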

Request body

In the request body, supply data with the following structure:

{
  "demoteMasterContext": {
    "kind": "sql#demoteMasterContext",
    "masterInstanceName": string,
    "replicaConfiguration": {
      "kind": "sql#demoteMasterConfiguration",
      "mysqlReplicaConfiguration": {
        "kind": "sql#demoteMasterMysqlReplicaConfiguration",
        "username": string,
        "password": string,
        "caCertificate": string,
        "clientCertificate": string,
        "clientKey": string
      }
    },
    "verifyGtidConsistency": boolean
  }
}
Property name Value Description Notes
demoteMasterContext nested object Contains details about the demoteMaster operation.
demoteMasterContext.kind string This is always sql#demoteMasterContext.
demoteMasterContext.masterInstanceName string The name of the instance that will act as the on-premises master in the replication setup. writable
demoteMasterContext.replicaConfiguration nested object Configuration specific to read replicas replicating from the on-premises master. writable
demoteMasterContext.replicaConfiguration.kind string This is always sql#demoteMasterConfiguration.
demoteMasterContext.replicaConfiguration.mysqlReplicaConfiguration nested object MySQL specific configuration when replicating from a MySQL on-premises master. Replication configuration information such as the username, password, certificates, and keys are not stored in the instance metadata. The configuration information is used only to set up the replication connection and is stored by MySQL in a file named master.info in the data directory. writable
demoteMasterContext.replicaConfiguration.mysqlReplicaConfiguration.kind string This is always sql#demoteMasterMysqlReplicaConfiguration.
demoteMasterContext.replicaConfiguration.mysqlReplicaConfiguration.username string The username for the replication connection. writable
demoteMasterContext.replicaConfiguration.mysqlReplicaConfiguration.password string The password for the replication connection. writable
demoteMasterContext.replicaConfiguration.mysqlReplicaConfiguration.caCertificate string PEM representation of the trusted CA's x509 certificate. writable
demoteMasterContext.replicaConfiguration.mysqlReplicaConfiguration.clientCertificate string PEM representation of the slave's x509 certificate. writable
demoteMasterContext.replicaConfiguration.mysqlReplicaConfiguration.clientKey string PEM representation of the slave's private key. The corresponding public key is encoded in the client's certificate. The format of the slave's private key can be either PKCS #1 or PKCS #8. writable
demoteMasterContext.verifyGtidConsistency boolean Verify GTID consistency for demote operation. Default value: True. Second Generation instances only. Setting this flag to false enables you to bypass GTID consistency check between on-premises master and Cloud SQL instance during the demotion operation but also exposes you to the risk of future replication failures. Change the value only if you know the reason for the GTID divergence and are confident that doing so will not cause any replication issues. writable
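
Putting the pieces together, the following sketch sends a demoteMaster request with the Google API Python client. The project, instance, and replication settings are placeholders, and the client is assumed to be authorized as described in the Authorization section (or through Application Default Credentials).

from googleapiclient import discovery

# Assumes Application Default Credentials (or the credentials built in the
# Authorization sketch above) are available in the environment.
service = discovery.build("sqladmin", "v1beta4")

# Request body mirroring the structure documented above; every value here
# is a placeholder for an on-premises MySQL master.
body = {
    "demoteMasterContext": {
        "kind": "sql#demoteMasterContext",
        "masterInstanceName": "on-prem-master",
        "replicaConfiguration": {
            "kind": "sql#demoteMasterConfiguration",
            "mysqlReplicaConfiguration": {
                "kind": "sql#demoteMasterMysqlReplicaConfiguration",
                "username": "replication-user",
                "password": "replication-password",
                "caCertificate": "-----BEGIN CERTIFICATE-----\n...",
                "clientCertificate": "-----BEGIN CERTIFICATE-----\n...",
                "clientKey": "-----BEGIN RSA PRIVATE KEY-----\n...",
            },
        },
        "verifyGtidConsistency": True,
    }
}

operation = (
    service.instances()
    .demoteMaster(project="my-project", instance="my-instance", body=body)
    .execute()
)
print(operation["name"], operation["status"])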

Response

If successful, this method returns a response body with the following structure:

{
  "kind": "sql#operation",
  "selfLink": string,
  "targetProject": string,
  "targetId": string,
  "targetLink": string,
  "targetLink": string,
  "name": string,
  "operationType": string,
  "status": string,
  "user": string,
  "insertTime": datetime,
  "startTime": datetime,
  "endTime": datetime,
  "error": {
    "kind": "sql#operationErrors",
    "errors": [
      {
        "kind": "sql#operationError",
        "code": string,
        "message": string
      }
    ]
  },
  "importContext": {
    "kind": "sql#importContext",
    "fileType": string,
    "uri": string,
    "database": string,
    "importUser": string,
    "csvImportOptions": {
      "table": string,
      "columns": [
        string
      ]
    }
  },
  "exportContext": {
    "kind": "sql#exportContext",
    "fileType": string,
    "uri": string,
    "databases": [
      string
    ],
    "sqlExportOptions": {
      "tables": [
        string
      ],
      "schemaOnly": boolean,
      "mysqlExportOptions": {
        "masterData": integer
      }
    },
    "csvExportOptions": {
      "selectQuery": string
    }
  }
}
Property name Value Description Notes
kind string This is always sql#operation.
targetProject string The project ID of the target instance related to this operation.
targetId string Name of the database instance related to this operation.
name string An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
operationType string The type of the operation. Valid values are CREATE, DELETE, UPDATE, RESTART, IMPORT, EXPORT, BACKUP_VOLUME, RESTORE_VOLUME, CREATE_USER, DELETE_USER, CREATE_DATABASE, DELETE_DATABASE.
status string The status of an operation. Valid values are PENDING, RUNNING, DONE, UNKNOWN.
user string The email address of the user who initiated this operation.
insertTime datetime The time this operation was enqueued in UTC timezone in RFC 3339 format, for example 2012-11-15T16:19:00.094Z.
startTime datetime The time this operation actually started in UTC timezone in RFC 3339 format, for example 2012-11-15T16:19:00.094Z.
endTime datetime The time this operation finished in UTC timezone in RFC 3339 format, for example 2012-11-15T16:19:00.094Z.
error nested object If errors occurred during processing of this operation, this field will be populated.
error.kind string This is always sql#operationErrors.
error.errors[] list The list of errors encountered while processing this operation.
error.errors[].kind string This is always sql#operationError.
error.errors[].code string Identifies the specific error that occurred.
error.errors[].message string Additional information about the error encountered.
importContext nested object The context for import operation, if applicable.
importContext.kind string This is always sql#importContext.
importContext.fileType string The file type for the specified uri.
SQL: The file contains SQL statements.
CSV: The file contains CSV data.
importContext.uri string Path to the import file in Cloud Storage, in the form gs://bucketName/fileName. Compressed gzip files (.gz) are supported when fileType is SQL. The instance must have write permissions to the bucket and read access to the file.
importContext.database string The target database for the import. If fileType is SQL, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If fileType is CSV, one database must be specified.
importContext.importUser string The PostgreSQL user for this import operation. PostgreSQL instances only.
importContext.csvImportOptions object Options for importing data as CSV.
importContext.csvImportOptions.table string The table to which CSV data is imported.
importContext.csvImportOptions.columns[] list The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
exportContext nested object The context for export operation, if applicable.
exportContext.kind string This is always sql#exportContext.
exportContext.fileType string The file type for the specified uri.
SQL: The file contains SQL statements.
CSV: The file contains CSV data.
exportContext.uri string The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form gs://bucketName/fileName. If the file already exists, the request succeeds, but the operation fails. If fileType is SQL and the filename ends with .gz, the contents are compressed.
exportContext.databases[] list Databases to be exported.
MySQL instances: If fileType is SQL and no database is specified, all databases are exported, except for the mysql system database. If fileType is CSV, you can specify one database, either by using this property or by using the csvExportOptions.selectQuery property, which takes precedence over this property.
PostgreSQL instances: Specify exactly one database to be exported. If fileType is CSV, this database must match the database used in the csvExportOptions.selectQuery property.
exportContext.sqlExportOptions object Options for exporting data as SQL statements.
exportContext.sqlExportOptions.tables[] list Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
exportContext.sqlExportOptions.schemaOnly boolean Export only schemas.
exportContext.csvExportOptions object Options for exporting data as CSV.
exportContext.csvExportOptions.selectQuery string The select query used to extract the data.
exportContext.sqlExportOptions.mysqlExportOptions object Options for exporting from MySQL.
exportContext.sqlExportOptions.mysqlExportOptions.masterData integer Option to include SQL statement required to set up replication. If set to 1, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates. If set to 2, the CHANGE MASTER TO statement is written as a SQL comment, and has no effect. All other values are ignored.
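
Because demoteMaster returns a long-running operation, a typical follow-up is to poll the Operations resource until status is DONE. A minimal polling sketch, assuming the same Python client and placeholder names as above:

import time

from googleapiclient import discovery

# Assumes Application Default Credentials; the project and operation name
# are placeholders, with the operation name taken from the demoteMaster
# response ("name" field).
service = discovery.build("sqladmin", "v1beta4")

operation_name = "OPERATION_NAME"
while True:
    op = service.operations().get(
        project="my-project", operation=operation_name).execute()
    if op["status"] == "DONE":
        if "error" in op:
            raise RuntimeError(op["error"]["errors"])
        break
    time.sleep(5)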