Manage and monitor AlloyDB Omni

This page describes how to manage AlloyDB Omni user roles, monitor the activity of your AlloyDB Omni server, and update or remove your AlloyDB Omni installation.

Manage user roles

AlloyDB Omni uses the same set of predefined PostgreSQL user roles that AlloyDB includes, with the following differences:

  • AlloyDB Omni does not have an alloydbiamuser role.

  • AlloyDB Omni includes a superuser role named alloydbadmin.

As with AlloyDB, it's a best practice to follow these steps when setting up a database:

  1. Define or import your databases using the postgres user role. In a new installation, this role has database-creation and role-creation privileges, and no password.

  2. Create new user roles that have the correct level of access to your application's tables, again using the postgres user role.

  3. Configure your application to connect to the database using these new, limited-access roles.

You can create and define as many new user roles as you need. Don't modify or delete any of the user roles with which AlloyDB Omni ships.
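For example, connecting with psql as the postgres role, the setup in the preceding steps might look like the following sketch. The database and role names (app_db, app_user) are illustrative only; adapt the grants to your application's tables.

```sql
-- Run these statements in psql as the postgres role (example names throughout).
CREATE DATABASE app_db;
CREATE ROLE app_user WITH LOGIN PASSWORD 'change-me';

-- Switch to the new database, then grant the limited-access role
-- only the privileges that your application needs.
\c app_db
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
```

Your application then connects as app_user rather than as postgres or alloydbadmin.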

For more information, see Manage AlloyDB user roles.

Monitor AlloyDB Omni

Monitoring your AlloyDB Omni installation means reading and analyzing its log files.

AlloyDB Omni running on Kubernetes also has a set of basic metrics available as Prometheus endpoints. For a list of available metrics, see AlloyDB Omni metrics.

Single-server

AlloyDB Omni logs its activity in two locations:

  • AlloyDB Omni logs database activity at data/log/postgres, relative to the directory that you defined in your dataplane.conf configuration file.

    You can customize the name and format of this log file through the various log_* directives defined in /var/alloydb/config/postgresql.conf. For more information, see Error Reporting and Logging.

  • AlloyDB Omni logs its installation, startup, and shutdown activity at /var/alloydb/logs/alloydb.log.
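As a sketch of the log_* directives mentioned for the database log, settings like the following in /var/alloydb/config/postgresql.conf (the values are examples, not defaults) write a dated log file under the data directory:

```
logging_collector = on
log_directory = 'log/postgres'
log_filename = 'postgresql-%Y-%m-%d.log'
log_line_prefix = '%m [%p] %u@%d '
log_min_messages = warning
```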

To check the immediate running status of your server, see Check the status of AlloyDB Omni.

Kubernetes

Find your database cluster log files

You can find postgresql.audit and postgresql.log files on the file system of the database pod. To access these files, follow these steps:

  1. Define an environment variable containing the name of the database pod.

    export DB_POD=$(kubectl get pod -l alloydbomni.internal.dbadmin.goog/dbcluster=DB_CLUSTER_NAME,alloydbomni.internal.dbadmin.goog/task-type=database -o jsonpath='{.items[0].metadata.name}')
    

    Replace DB_CLUSTER_NAME with the name of your database cluster. It's the same database cluster name you declared when you created it.

  2. Run a shell on the database pod as root.

    kubectl exec ${DB_POD} -it -- /bin/bash
    
  3. Find the log files in the /obs/diagnostic/ directory:

    • /obs/diagnostic/postgresql.audit
    • /obs/diagnostic/postgresql.log

List monitoring services

v1.0

When you create a database cluster, AlloyDB Omni creates the following monitoring service for each instance custom resource (CR) of the database cluster, in the same namespace:

al-INSTANCE_NAME-monitoring-system

To list the monitoring services, run the following command:

kubectl get svc -n NAMESPACE | grep monitoring

Replace NAMESPACE with the namespace where your database cluster is located.

The following example response shows the al-1060-dbc-monitoring-system, al-3de6-dbc-monitoring-system, and al-4bc0-dbc-monitoring-system services. Each service corresponds to one instance.

al-1060-dbc-monitoring-system   ClusterIP   10.0.15.227   <none>        9187/TCP   7d20h
al-3de6-dbc-monitoring-system   ClusterIP   10.0.5.205    <none>        9187/TCP   7d19h
al-4bc0-dbc-monitoring-system   ClusterIP   10.0.15.92    <none>        9187/TCP   7d19h

Version < 1.0

When you create a database cluster, AlloyDB Omni creates the following monitoring services in the same namespace as the database cluster:

  • DB_CLUSTER-monitoring-db

  • DB_CLUSTER-monitoring-system

To list the monitoring services, run the following command:

kubectl get svc -n NAMESPACE | grep monitoring

Replace NAMESPACE with the namespace where your database cluster is located.

The following example response shows the al-2953-dbcluster-foo7-monitoring-system and al-2953-dbcluster-foo7-monitoring-db services.

al-2953-dbcluster-foo7-monitoring-db           ClusterIP   10.36.3.243    <none>        9187/TCP   44m
al-2953-dbcluster-foo7-monitoring-system       ClusterIP   10.36.7.72     <none>        9187/TCP   44m

View Prometheus metrics from the command line

Port 9187 is named metricsalloydbomni on all monitoring services.

  1. Set up port forwarding from your local environment to the monitoring service.

    kubectl port-forward service/MONITORING_SERVICE -n NAMESPACE MONITORING_METRICS_PORT:metricsalloydbomni
    

    Replace the following:

    • MONITORING_SERVICE: The name of the monitoring service that you want to forward, for example al-1060-dbc-monitoring-system.

    • NAMESPACE: The namespace where your cluster belongs.

    • MONITORING_METRICS_PORT: An available local TCP port.

    The following response shows that the service is being forwarded.

    Forwarding from 127.0.0.1:9187 -> 9187
    Forwarding from [::1]:9187 -> 9187
    
  2. While the previous command runs, you can access monitoring metrics through HTTP on the port that you specified. For example, you can use curl to see all of the metrics as plain text:

    curl http://localhost:MONITORING_METRICS_PORT/metrics
    

View metrics using the Prometheus API

The alloydbomni.internal.dbadmin.goog/task-type label key and the metricsalloydbomni port are available by default on all monitoring services in AlloyDB Omni. You can use them together in a single ServiceMonitor custom resource to select all monitoring services across all namespaces in your database cluster.

For more information about using the Prometheus API, see the Prometheus Operator documentation.

The following is an example spec field of a ServiceMonitor custom resource that includes the task-type label key and the metricsalloydbomni port. The ServiceMonitor custom resource monitors and collects metrics from the matching Kubernetes services in all namespaces.

For more information about the complete ServiceMonitor definition, see the ServiceMonitor custom resource definition.

v1.0

    spec:
      selector:
        matchLabels:
          alloydbomni.internal.dbadmin.goog/task-type: monitoring
      namespaceSelector:
        any: true
      endpoints:
        - port: metricsalloydbomni

Version < 1.0

    spec:
      selector:
        matchExpressions:
        - key: alloydbomni.internal.dbadmin.gdc.goog/task-type
          operator: Exists
          values: []
      namespaceSelector:
        any: true
      endpoints:
      - port: metricsalloydbomni

Upgrade AlloyDB Omni

Single-server

These instructions apply only to AlloyDB Omni version 15.2.0 and later.

Before you begin

Your machine must have version 1.2 or later of the AlloyDB Omni CLI installed.

Perform the upgrade

To upgrade your AlloyDB Omni installation, run the following command:

sudo alloydb database-server upgrade

Kubernetes

Upgrading a Kubernetes-based AlloyDB Omni installation involves uninstalling and then re-installing the AlloyDB Omni Kubernetes Operator after backing up your data. Follow these steps:

  1. List all of your database clusters:

    kubectl get dbclusters.alloydbomni.dbadmin.goog --all-namespaces
    
  2. For each database cluster, use the pg_dumpall command to export all of its data.

  3. Uninstall the AlloyDB Omni Operator. This includes deleting all of your database clusters.

  4. Reinstall the AlloyDB Omni Operator. You can use the same commands that you used to install the previous version of the AlloyDB Omni Operator, with no need to specify a new version number.

  5. Recreate your database clusters. You can use the same manifest files that you used when previously creating your database clusters.

  6. Use pg_restore or the \i command in psql to import your previously exported data into the recreated clusters.
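As a sketch of steps 2 and 6, assuming the database pod name is stored in DB_POD (as in the log-file steps earlier on this page) and using example file and variable names, the export and import might look like this:

```shell
# Step 2: export all databases from the old pod to a local SQL file.
kubectl exec ${DB_POD} -- pg_dumpall -U postgres > all-databases.sql

# Step 6: after reinstalling the operator and recreating the cluster,
# stream the dump into the new database pod with psql.
# NEW_DB_POD is a placeholder for the recreated pod's name.
kubectl exec -i ${NEW_DB_POD} -- psql -U postgres -f - < all-databases.sql
```

Because pg_dumpall emits plain SQL, the import uses psql; pg_restore applies only to archives created by pg_dump in a non-plain format.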

Roll back an upgrade

These instructions apply only to AlloyDB Omni version 15.2.1 and later. They don't apply to Kubernetes-based deployments of AlloyDB Omni.

To roll back AlloyDB Omni to its previously installed version, run this command:

sudo alloydb database-server rollback

Uninstall AlloyDB Omni

Single-server

To uninstall AlloyDB Omni, run the following command:

sudo alloydb database-server uninstall

Your data directory remains on your file system after you uninstall AlloyDB Omni. You can move, archive, or delete this directory, depending on whether and how you want to preserve your data after uninstalling AlloyDB Omni.
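For example, to archive the data directory before removing it (the path is an example; substitute the directory you configured in dataplane.conf, and prefix the command with sudo if the directory is root-owned):

```shell
# Example path only; use your configured data directory.
DATA_DIR=/home/$USER/alloydb-data

# Create a compressed archive; afterwards, the directory can be deleted.
tar -czf alloydb-data-backup.tar.gz -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"
```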

Kubernetes

Delete your database cluster

To delete your database cluster, set isDeleted to true in its manifest. You can accomplish this with the following command.

kubectl patch dbclusters.alloydbomni.dbadmin.goog DB_CLUSTER_NAME -p '{"spec":{"isDeleted":true}}' --type=merge

Replace DB_CLUSTER_NAME with the name of your database cluster. It's the same database cluster name you declared when you created it.

Uninstall the AlloyDB Omni Operator

To uninstall the AlloyDB Omni Kubernetes Operator from your Kubernetes cluster, take the following steps:

  1. Delete all of your database clusters:

    for ns in $(kubectl get dbclusters.alloydbomni.dbadmin.goog --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}'); do
    for cr in $(kubectl get dbclusters.alloydbomni.dbadmin.goog -n $ns -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'); do
    kubectl patch dbclusters.alloydbomni.dbadmin.goog $cr -n $ns --type=merge -p '{"spec":{"isDeleted":true}}'
    done
    done
    
  2. Wait for the AlloyDB Omni Kubernetes Operator to delete all of your database clusters. Use the following command to check whether any database resources remain:

    kubectl get dbclusters.alloydbomni.dbadmin.goog --all-namespaces
    
  3. Delete other resources that the AlloyDB Omni Kubernetes Operator created:

    kubectl delete failovers.alloydbomni.dbadmin.goog --all --all-namespaces
    kubectl delete restores.alloydbomni.dbadmin.goog --all --all-namespaces
    kubectl delete switchovers.alloydbomni.dbadmin.goog --all --all-namespaces
    
  4. Uninstall the AlloyDB Omni Kubernetes Operator:

    helm uninstall alloydbomni-operator --namespace alloydb-omni-system
    
  5. Clean up secrets, custom resource definitions, and namespaces related to the AlloyDB Omni Kubernetes Operator:

    kubectl delete certificate -n alloydb-omni-system --all
    kubectl get secrets --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,ANNOTATION:.metadata.annotations.cert-manager\.io/issuer-name | grep -E 'alloydbomni|dbs-al' | awk '{print $1 " " $2}' | xargs -n 2 kubectl delete secret -n
    kubectl delete crd -l alloydb-omni=true
    kubectl delete ns alloydb-omni-system
    

Resize your Kubernetes-based database cluster

To resize the CPU, memory, or storage of your Kubernetes-based database cluster, update the resources field of the manifest that defines the database cluster. The AlloyDB Omni Kubernetes Operator applies the new specifications to your database pod immediately.
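For example, the resources field in a DBCluster manifest might look like the following sketch. The field layout follows the Create a database cluster page, but the apiVersion, names, and values here are examples; check them against your installed custom resource definition before applying.

```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: my-db-cluster          # example name
spec:
  databaseVersion: "15.5.0"    # example version
  primarySpec:
    resources:
      cpu: 4                   # increased CPU
      memory: 16Gi             # increased memory
      disks:
      - name: DataDisk
        size: 20Gi             # disks can only grow, and only if the storageClass supports expansion
```

Applying the updated manifest with kubectl apply triggers the resize.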

For more information about the AlloyDB Omni Operator manifest syntax, see Create a database cluster.

The following restrictions apply to modifying a running database cluster's resources:

  • You can increase a disk's size only if the specified storageClass supports volume expansion.
  • You can't decrease a disk's size.