
Monitor a Micronaut JVM application on GKE using Micrometer

Author: @viniciusccarvalho, Published: 2020-09-15

Vinicius Carvalho | Customer Engineer | Google

Contributed by Google employees.

Google Kubernetes Engine (GKE) offers built-in capabilities to monitor containers, providing insights into memory, CPU, and I/O resources. JVM applications, however, have different memory configurations (heap versus non-heap), and each memory space is split into several parts (such as eden, tenured, and survivor). Java developers often face issues with memory configuration, so the ability to inspect an application's memory utilization is essential.
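The heap and non-heap spaces described above can be inspected from inside any JVM with the standard java.lang.management API. Here is a minimal sketch (the class name MemoryPools and the heapUsed helper are illustrative, not part of the tutorial's code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MemoryPools {

    // Returns the current heap usage in bytes for the whole JVM.
    static long heapUsed() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        // Each pool corresponds to a memory space such as Eden, Survivor,
        // or the old (tenured) generation, depending on the collector in use.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            System.out.printf("%-35s type=%-15s used=%d bytes%n",
                    pool.getName(), pool.getType(), usage.getUsed());
        }
        System.out.println("total heap used: " + heapUsed() + " bytes");
    }
}
```

The pool names you see depend on the garbage collector the JVM selected; these are the same spaces that Micrometer reports as the `id` tag on `jvm.memory.used`.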

Conventional APM tools make use of a Java agent that's added to the class path of the application. In certain environments, such as Kubernetes and App Engine, configuring a Java agent is not always possible; this is where metrics frameworks such as Micrometer are useful.

In this tutorial, you learn how to use Micrometer's integration with Cloud Monitoring to publish metrics without adding a Java agent to your class path. You create a Micronaut microservice, deploy it to GKE, and create a dashboard to monitor the Java memory heap.

The GitHub repository for this tutorial includes the complete working source code, which you can use as a reference as you work through the steps.

Before you begin

For this tutorial, you must set up a Google Cloud project to host your Micronaut application, and you must have Docker and the Cloud SDK installed. We recommend that you create a new project for this tutorial, which makes the cleanup at the end easier.

  1. Use the Cloud Console to create a new Google Cloud project. Remember the project ID; you will need it later.

  2. Enable billing for your project.

  3. Install the Google Cloud SDK. Make sure that you initialize the SDK and set the default project to the new project that you created.

  4. Install JDK 11 or higher if you do not already have it.

Prepare your environment

You need a new GKE cluster with Workload Identity enabled.

Run the following commands to set up your environment:

export PROJECT_ID=[YOUR_PROJECT_ID]
export CLUSTER_NAME=metrics-demo
export GSA=micronaut-application
export KSA=$GSA
export NAMESPACE=default

gcloud config set project $PROJECT_ID

gcloud services enable container.googleapis.com monitoring.googleapis.com
gcloud iam service-accounts create ${GSA} --project=${PROJECT_ID}

gcloud container clusters create ${CLUSTER_NAME} \
  --release-channel regular \
  --zone "us-central1-c" \
  --workload-pool=${PROJECT_ID}.svc.id.goog

gcloud container clusters --zone "us-central1-c" get-credentials ${CLUSTER_NAME}

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${GSA}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/monitoring.metricWriter"

gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA}]" \
  ${GSA}@${PROJECT_ID}.iam.gserviceaccount.com

kubectl create serviceaccount --namespace $NAMESPACE $KSA

kubectl annotate serviceaccount \
  --namespace ${NAMESPACE} \
  ${KSA} \
  iam.gke.io/gcp-service-account=${GSA}@${PROJECT_ID}.iam.gserviceaccount.com

Create a new application

  1. Go to the Micronaut Launch page.
  2. Add features:
    1. Click Features.
    2. In the Search Features field, search for jib and then click to add the Jib component.
    3. In the Search Features field, search for micrometer-stackdriver and then click to add the Micrometer component.
    4. Click Done.
  3. In the Base package section, enter com.google.example.
  4. In the Name section, enter micronaut-jvm-metrics.
  5. Click Generate Project.
  6. Download and extract the ZIP file and use it as your base directory for the rest of this tutorial.

Set up the application

In this section, you set up the artifacts and necessary classes to get the application running.

  1. Open your build.gradle file and locate the line containing jib.to.image = 'gcr.io/micronaut-jvm-metrics/jib-image'. Replace that line with the following:

        jib {
            from {
                image = "gcr.io/distroless/java:11"
            }
            to {
                image = "gcr.io/[PROJECT_ID]/micronaut-jvm-metrics"
            }
        }

    Replace [PROJECT_ID] with the project ID for the project that you created at the beginning of this tutorial.

    The application uses Jib to build and push your images to gcr.io. This code forces a JDK 11 base image and sets the target image that is used in the deployment.yml file.

  2. Replace your application.yml file with the following command:

    cat << EOL > src/main/resources/application.yml
    micronaut:
      application:
        name: micronautJvmMetrics
      metrics:
        export:
          stackdriver:
            enabled: true
            projectId: $PROJECT_ID
            step: PT1M
        enabled: true
    endpoints:
      health:
        enabled: true
        sensitive: false
    EOL
  3. Add the following class to your src/main/java/com/google/example directory:

    package com.google.example;

    import io.micrometer.core.instrument.MeterRegistry;
    import io.micronaut.configuration.metrics.aggregator.MeterRegistryConfigurer;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    import javax.inject.Singleton;
    import java.util.Optional;

    @Singleton
    public class ApplicationMeterRegistryConfigurer implements MeterRegistryConfigurer {

        private final Logger logger = LoggerFactory.getLogger(ApplicationMeterRegistryConfigurer.class);

        @Override
        public void configure(MeterRegistry meterRegistry) {
            String instanceId = Optional.ofNullable(System.getenv("HOSTNAME")).orElse("localhost");
            logger.info("Publishing metrics for pod " + instanceId);
            meterRegistry.config().commonTags("instance_id", instanceId);
        }

        @Override
        public boolean supports(MeterRegistry meterRegistry) {
            return true;
        }
    }

    This class adds a label that makes the metrics unique to each application instance. Each container in Kubernetes gets a hostname equal to its unique pod name. If you don't add a unique label and you run multiple replicas of your application, you can run into concurrency issues with Cloud Monitoring, because multiple agents report the same time series within a window shorter than the configured step. Also, without a unique identifier, you wouldn't be able to filter or group metrics by instance on the dashboard.

  4. Create a deployment.yml file:

    cat << EOL > deployment.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: "micronaut-jvm-metrics"
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: "micronaut-jvm-metrics"
      template:
        metadata:
          labels:
            app: "micronaut-jvm-metrics"
        spec:
          serviceAccountName: micronaut-application
          containers:
            - name: "micronaut-jvm-metrics"
              image: "gcr.io/[PROJECT_ID]/micronaut-jvm-metrics"
              ports:
                - name: http
                  containerPort: 8080
              readinessProbe:
                httpGet:
                  path: /health
                  port: 8080
                initialDelaySeconds: 5
                timeoutSeconds: 3
              livenessProbe:
                httpGet:
                  path: /health
                  port: 8080
                initialDelaySeconds: 5
                timeoutSeconds: 3
                failureThreshold: 10
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: "micronaut-jvm-metrics-svc"
    spec:
      selector:
        app: "micronaut-jvm-metrics"
      type: LoadBalancer
      ports:
        - protocol: "TCP"
          port: 80
          targetPort: 8080
    EOL

    Remember to replace [PROJECT_ID] with your project ID here, too.
  5. Build and push the container image with Jib, and then deploy the application:

    ./gradlew jib
    kubectl apply -f deployment.yml
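The instance_id fallback used by ApplicationMeterRegistryConfigurer can be checked in isolation. Here is a small sketch; the resolveInstanceId helper is hypothetical, extracted only so the fallback logic is testable outside a pod:

```java
import java.util.Optional;

public class InstanceIdResolver {

    // Mirrors the configurer's logic: use the pod's HOSTNAME when present,
    // otherwise fall back to a local default.
    static String resolveInstanceId(String hostnameEnv) {
        return Optional.ofNullable(hostnameEnv).orElse("localhost");
    }

    public static void main(String[] args) {
        // Inside a GKE pod, HOSTNAME is the unique pod name.
        System.out.println(resolveInstanceId(System.getenv("HOSTNAME")));
    }
}
```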

Wait a few minutes before creating the dashboard in the next section, so that you can see the metrics.
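The `step: PT1M` value in application.yml is an ISO-8601 duration: a one-minute publishing interval, which is one reason metrics take a little while to appear. You can confirm how the JVM parses it:

```java
import java.time.Duration;

public class StepInterval {
    public static void main(String[] args) {
        // "PT1M" is an ISO-8601 duration; Micrometer uses it as the
        // interval at which metrics are pushed to Cloud Monitoring.
        Duration step = Duration.parse("PT1M");
        System.out.println("publish interval: " + step.getSeconds() + " seconds");
    }
}
```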

Create the dashboard

  1. Go to Cloud Monitoring and create a workspace.

  2. Go to the Metrics Explorer and add a few charts:

    • Heap memory chart

      • Chart title: Heap memory
      • For Metric, use custom/jvm/memory/used
      • For Resource, use Global
      • For Filter, use area = heap and instance_id = starts_with("micronaut")
      • Group by: id and instance_id
    • JVM memory chart

      • Chart title: JVM memory
      • For Metric, use custom/jvm/memory/used
      • For Resource, use Global
      • For Filter, use instance_id = starts_with("micronaut")
      • Group by: instance_id
      • Aggregator: sum
    • Container memory chart

      • Chart title: Container memory
      • For Metric, use Memory usage
      • For Resource, use k8s_container
      • For Filter, use container_name = micronaut-jvm-metrics
      • Group by: pod_name
      • Aggregator: sum
  3. Add the charts to a new dashboard.

After letting your application run for a while, your dashboard should look similar to this:

JVM metrics

Check the various heap spaces on the Heap memory chart, such as the eden, tenured, and survivor spaces. You should see that the chart follows the sawtooth pattern expected from garbage collection.
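If you want to reproduce the rising edge of that sawtooth locally, allocating short-lived objects fills eden until a minor collection runs. The following is a minimal, illustrative sketch (class and method names are not part of the tutorial's code):

```java
import java.lang.management.ManagementFactory;

public class AllocationDemo {

    static long usedHeap() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
    }

    // Allocates short-lived garbage so eden fills and minor collections run,
    // producing the rising edge of the sawtooth on the heap chart.
    static void churn(int iterations) {
        byte[] sink = null;
        for (int i = 0; i < iterations; i++) {
            sink = new byte[64 * 1024]; // 64 KiB, unreachable on the next iteration
        }
        if (sink == null) {
            throw new IllegalStateException("unreachable");
        }
    }

    public static void main(String[] args) {
        long before = usedHeap();
        churn(10_000);
        long after = usedHeap();
        System.out.println("heap used before=" + before + " after=" + after);
    }
}
```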

(Optional) Add some load to simulate memory pressure

You can generate traffic with the ab (Apache Bench) tool to create memory pressure that's visible on your charts:

export APPLICATION_URL=$(kubectl get service micronaut-jvm-metrics-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

ab -n 50000 -c 100 http://$APPLICATION_URL/health

Clean up

When you're done with this tutorial, you can delete the project to avoid incurring additional costs for the resources that you created for the tutorial.

To delete a project, do the following:

  1. In the Cloud Console, go to the Manage resources page.

    Go to the Manage resources page

  2. In the project list, select the project that you want to delete and then click Delete.

  3. In the dialog, type the project ID and then click Shut down to delete the project.


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.