Monitor a Micronaut JVM application on GKE using Micrometer
Contributed by Google employees.
Google Kubernetes Engine (GKE) offers built-in capabilities to monitor containers, providing insights into memory, CPU, and I/O resources. JVM applications, however, have different memory configurations (heap versus non-heap), and each memory space is split into several parts (such as eden, tenured, and survivor). Java developers often face issues with memory configuration, so being able to inspect an application's memory utilization is essential.
Conventional APM tools make use of a Java agent that's added to the class path of the application. In certain environments, such as Kubernetes and App Engine, configuring a Java agent is not always possible; this is where metrics frameworks such as Micrometer are useful.
In this tutorial, you learn how to use the Micrometer integration with Cloud Monitoring to publish metrics without adding a Java agent to your class path.
You create a Micronaut microservice application, deploy it to GKE, and create a dashboard to monitor the Java memory heap.
The GitHub repository for this tutorial includes the complete working source code, which you can use as a reference as you go through the steps.
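As a minimal illustration of how this works (not part of the project that you generate in this tutorial; the controller, path, and metric names below are hypothetical), the following Micronaut controller records a custom counter through an injected Micrometer MeterRegistry. The configured exporter, Stackdriver in this tutorial, periodically publishes everything in that registry to Cloud Monitoring, which is why no agent is needed on the class path.

package com.google.example;

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

// Hypothetical example: meters registered with the MeterRegistry bean are
// published in the background by the configured exporter; no javaagent needed.
@Controller("/hello")
public class HelloController {

    private final Counter requestCounter;

    public HelloController(MeterRegistry meterRegistry) {
        // Creates (or looks up) a counter named "hello.requests".
        this.requestCounter = Counter.builder("hello.requests")
                .description("Number of /hello requests served")
                .register(meterRegistry);
    }

    @Get
    public String hello() {
        requestCounter.increment();
        return "Hello, metrics!";
    }
}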
Before you begin
For this tutorial, you must set up a Google Cloud project to host your Micronaut application, and you must have Docker and the Cloud SDK installed. We recommend that you create a new project for this tutorial, which makes the cleanup at the end easier.
Use the Cloud Console to create a new Google Cloud project. Remember the project ID; you will need it later.
Enable billing for your project.
Install the Google Cloud SDK. Make sure that you initialize the SDK and set the default project to the new project that you created.
Install JDK 11 or higher if you do not already have it.
Prepare your environment
You need a new GKE cluster with Workload Identity enabled.
Run the following commands to set up your environment:
export CLUSTER_NAME=metrics-demo
export PROJECT_ID=[PROJECT_ID]
export GSA=micronaut-application
export KSA=$GSA
export NAMESPACE=default
gcloud config set project $PROJECT_ID
gcloud services enable container.googleapis.com \
containerregistry.googleapis.com
gcloud iam service-accounts create ${GSA} --project=${PROJECT_ID}
gcloud container clusters create ${CLUSTER_NAME} \
--release-channel regular \
--zone "us-central1-c" \
--workload-pool=${PROJECT_ID}.svc.id.goog
gcloud container clusters get-credentials ${CLUSTER_NAME} --zone "us-central1-c"
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member="serviceAccount:${GSA}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/monitoring.metricWriter"
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA}]" \
${GSA}@${PROJECT_ID}.iam.gserviceaccount.com
kubectl create serviceaccount --namespace $NAMESPACE $KSA
kubectl annotate serviceaccount \
--namespace ${NAMESPACE} \
${KSA} \
iam.gke.io/gcp-service-account=${GSA}@${PROJECT_ID}.iam.gserviceaccount.com
Create a new application
- Go to the Micronaut Launch page.
- Add features:
  - Click Features.
  - In the Search Features field, search for `jib`, and then click to add the Jib component.
  - In the Search Features field, search for `micrometer-stackdriver`, and then click to add the Micrometer component.
  - Click Done.
- In the Base package section, enter `com.google.example`.
- In the Name section, enter `micronaut-jvm-metrics`.
- Click Generate Project.
- Download and extract the ZIP file, and use it as your base directory for the rest of this tutorial.
Set up the application
In this section, you set up the artifacts and necessary classes to get the application running.
Open your `build.gradle` file and locate the line containing `jib.to.image = 'gcr.io/micronaut-jvm-metrics/jib-image'`. Replace that line with the following:

jib {
    from {
        image = "gcr.io/distroless/java:11"
    }
    to {
        image = "gcr.io/[PROJECT_ID]/micronaut-jvm-metrics"
    }
}
Replace `[PROJECT_ID]` with the project ID for the project that you created at the beginning of this tutorial.

The application uses Jib to build and push your images to `gcr.io`. This code forces a JDK 11 base image and sets the target image that is used in the `deployment.yml` file.

Replace your `application.yml` file with the following command:

cat << EOL > src/main/resources/application.yml
micronaut:
  application:
    name: micronautJvmMetrics
  metrics:
    export:
      stackdriver:
        enabled: true
        projectId: $PROJECT_ID
        step: PT1M
    enabled: true
endpoints:
  health:
    enabled: true
    sensitive: false
EOL
Add the following class to your `src/main/java/com/google/example` directory:

package com.google.example;

import io.micrometer.core.instrument.MeterRegistry;
import io.micronaut.configuration.metrics.aggregator.MeterRegistryConfigurer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.inject.Singleton;
import java.util.Optional;

@Singleton
public class ApplicationMeterRegistryConfigurer implements MeterRegistryConfigurer {

    private final Logger logger = LoggerFactory.getLogger(ApplicationMeterRegistryConfigurer.class);

    @Override
    public void configure(MeterRegistry meterRegistry) {
        String instanceId = Optional.ofNullable(System.getenv("HOSTNAME")).orElse("localhost");
        logger.info("Publishing metrics for pod " + instanceId);
        meterRegistry.config().commonTags("instance_id", instanceId);
    }

    @Override
    public boolean supports(MeterRegistry meterRegistry) {
        return true;
    }
}
This class adds a label that makes the metrics unique for each application instance. Each container inside Kubernetes gets a hostname that matches the unique pod name. If you don't add a unique label and you run multiple replicas of your application, you can run into concurrency issues with Cloud Monitoring, because multiple instances would report the same time series more frequently than the configured interval allows. Also, without a unique identifier, you wouldn't be able to filter or group the metrics on the dashboard by instance.
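If you want to see the effect of the configurer in isolation, the following sketch (hypothetical; not part of the generated project) applies it to a plain Micrometer SimpleMeterRegistry and prints the instance_id tag that is then attached to a newly created counter:

package com.google.example;

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

// Hypothetical sketch: shows that the common tag added in configure() is
// applied to every meter created afterward.
public class CommonTagsExample {

    public static void main(String[] args) {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        new ApplicationMeterRegistryConfigurer().configure(registry);

        Counter counter = registry.counter("demo.counter");
        counter.increment();

        // Prints the instance_id tag added by the configurer.
        System.out.println(counter.getId().getTag("instance_id"));
    }
}

On GKE, the printed value is the pod name taken from the HOSTNAME environment variable; elsewhere, it falls back to localhost.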
Create a `deployment.yml` file:

cat << EOL > deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "micronaut-jvm-metrics"
spec:
  selector:
    matchLabels:
      app: "micronaut-jvm-metrics"
  template:
    metadata:
      labels:
        app: "micronaut-jvm-metrics"
    spec:
      serviceAccount: micronaut-application
      containers:
        - name: "micronaut-jvm-metrics"
          image: "gcr.io/[PROJECT_ID]/micronaut-jvm-metrics"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            failureThreshold: 10
  replicas: 2
---
apiVersion: v1
kind: Service
metadata:
  name: "micronaut-jvm-metrics-svc"
spec:
  selector:
    app: "micronaut-jvm-metrics"
  type: LoadBalancer
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
EOL

Replace `[PROJECT_ID]` in the image field with your project ID before applying the file.
Deploy the application:
kubectl apply -f deployment.yml
Wait a few minutes before creating the dashboard in the next section, so that you can see the metrics.
Create the dashboard
Go to Cloud Monitoring and create a workspace.
Go to the Metrics Explorer and add a few charts:
Heap memory chart

- Chart title: Heap memory
- Metric: `custom/jvm/memory/used`
- Resource type: Global
- Filter: `area` = `heap` and `instance_id` = `starts_with("micronaut")`
- Group by: `id` and `instance_id`
JVM memory chart

- Chart title: JVM memory
- Metric: `custom/jvm/memory/used`
- Resource type: Global
- Filter: `instance_id` = `starts_with("micronaut")`
- Group by: `instance_id`
- Aggregator: sum
Container memory chart

- Chart title: Container memory
- Metric: Memory usage
- Resource type: `k8s_container`
- Filter: `container_name` = `micronaut-jvm-metrics`
- Group by: `pod_name`
- Aggregator: sum
Add the charts to a new dashboard.
After your application has been running for a while, the charts on your dashboard are populated with data.
Check the various heap spaces on the Heap memory chart, such as the eden, tenured, and survivor spaces. You should see that the chart follows the sawtooth pattern expected from garbage collection.
(Optional) Add some load to simulate memory pressure
You can use the ab (ApacheBench) tool to simulate traffic and create memory pressure that shows up on your charts:
export APPLICATION_URL=$(kubectl get service micronaut-jvm-metrics-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
ab -n 50000 -c 100 $APPLICATION_URL/health
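The /health endpoint does very little work, so the resulting heap movement can be modest. If you want more pronounced garbage-collection activity, one option (a sketch, not part of the generated project; the controller and path are hypothetical, and you would need to rebuild and redeploy the image after adding it) is an endpoint that allocates short-lived objects on each request:

package com.google.example;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

import java.util.ArrayList;
import java.util.List;

// Hypothetical endpoint that creates short-lived garbage so that eden-space
// usage and minor collections show up clearly on the heap chart.
@Controller("/load")
public class LoadController {

    @Get
    public String generateGarbage() {
        List<String> junk = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            junk.add("payload-" + i);
        }
        return "allocated " + junk.size() + " short-lived strings";
    }
}

You could then point ab at $APPLICATION_URL/load instead of /health to drive the allocations.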
Clean up
When you're done with this tutorial, you can delete the project to avoid incurring additional costs for the resources that you created for the tutorial.
To delete a project, do the following:
In the Cloud Console, go to the Manage resources page.
In the project list, select the project that you want to delete and then click Delete.
In the dialog, type the project ID and then click Shut down to delete the project.