By Dan Roscigno, Customer Success Programs Manager, Elastic
This tutorial describes how to install and use the Elastic Stack (Elasticsearch and Kibana) to monitor Kubernetes apps running on-premises with Anthos clusters on VMware, a component of Anthos. Anthos lets you take advantage of Kubernetes and cloud technology in your data center. You get the Google Kubernetes Engine (GKE) experience with managed installs and upgrades validated by Google Cloud. With the Elastic Stack, your logs and metrics are indexed, stored, analyzed, and visualized fully on-premises.
The tutorial is intended for admins who need access to logs and monitoring. In this tutorial, you deploy DaemonSets containing Beats, lightweight shippers for logs, metrics, and network data. The Beats autodiscover your apps and Anthos clusters on VMware infrastructure by synchronizing with the Kubernetes API.
This tutorial assumes that you're familiar with Kubernetes and meet the following technical requirements:
- You're an Anthos customer or participate in the Anthos Free Trial program.
- You have Anthos clusters on VMware installed and configured with a running user cluster.
- You're currently running the Elastic Stack on-premises in your organization.
The following diagram shows the high-level infrastructure of using the Elastic Stack to monitor traditional and Anthos clusters on VMware environments.
In the preceding diagram, Kibana and Elasticsearch collect logs and metrics from your Anthos clusters on VMware and traditional environments.
Apps are monitored by using pre-packaged collections of configuration details called modules. For a full list of available modules, see the documentation for Filebeat and Metricbeat.
Objectives
- Deploy a sample guestbook app running Apache HTTP Server and Redis.
- Configure the Elastic Stack to use the Apache and Redis modules to collect the sample app's logs and metrics.
- View logs and metrics with the Kibana dashboard.
Costs
This tutorial uses billable components of Google Cloud, including:
- GKE
- Anthos clusters on VMware
To generate a cost estimate based on your projected usage, see the Anthos pricing overview.
Before you begin
- Make sure that you have the Elastic Stack with Elasticsearch and Kibana deployed and configured.
- Make sure that you have an Anthos clusters on VMware user cluster deployed and registered in the Google Cloud Console. This tutorial assumes that the cluster has three nodes, but that number isn't required. Follow the Anthos clusters on VMware documentation to deploy this Anthos component, or work with your Anthos technical contact to get Anthos clusters on VMware up and running.
- Ensure that there is network connectivity between your Anthos clusters on VMware cluster and both of the Elastic Stack components (Elasticsearch and Kibana).
Understanding the Anthos clusters on VMware and Elastic Stack architecture
In this example, the Anthos clusters on VMware cluster has three nodes. Within each node, there are app pods and Beats pods. The Beats collect logs and metrics from their associated Kubernetes node, and from the containers deployed on their associated Kubernetes node.
This tutorial focuses on a single three-node Anthos clusters on VMware user cluster and the connections from that cluster to the Elasticsearch cluster, as illustrated in the following diagram.
Elasticsearch nodes and Kibana are running outside of Anthos clusters on VMware. Within Anthos clusters on VMware are your apps and Elastic Beats. Beats are lightweight shippers, and are deployed as Kubernetes DaemonSets.
When you deploy DaemonSets, there is one instance of each Beat deployed on each Kubernetes node. This architecture facilitates efficient processing of the logs and metrics from each node, and from each pod deployed on that node. As your Anthos clusters on VMware clusters grow in node count, Beats are deployed along with those nodes. There are three types of Beats used in this tutorial:
- Metricbeat: Collects monitoring metrics from the app's pods, the Kubernetes nodes, and the Kubernetes infrastructure. Metricbeat can also collect metrics from many popular apps, such as Apache and Redis.
- Filebeat: Collects app logs. In this tutorial, Filebeat collects Redis and Apache logs from the sample app's pods.
- Journalbeat: Collects systemd-journald entries. The Kubernetes nodes use journald for logging, and Journalbeat collects those logs.
Within each Anthos clusters on VMware node, there are one or more app pods and the Beats, plus the standard Kubernetes pods, such as kube-dns.
Preparing the Kubernetes environment
Logging and metrics tools, such as kube-state-metrics, Filebeat, Journalbeat, fluentd, Metricbeat, and Prometheus, are deployed in the kube-system namespace and have access to all namespaces. To facilitate this access, run the following commands as a user with the cluster role cluster-admin. For more information on role-based access control in Anthos clusters on VMware, see Logging in to a cluster.
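Before continuing, you can confirm that your current user has cluster-admin privileges. A minimal sketch, assuming kubectl is already configured against your user cluster (the script exits gracefully when no cluster is reachable):

```shell
# Ask the API server whether the current user can perform any verb on any
# resource in any namespace -- exactly what cluster-admin grants.
if command -v kubectl >/dev/null 2>&1; then
  CAN_ADMIN=$(kubectl auth can-i '*' '*' --all-namespaces 2>/dev/null) || true
fi
# Fall back to a placeholder when kubectl is missing or no cluster responds.
[ -n "${CAN_ADMIN:-}" ] || CAN_ADMIN="unknown (no cluster access)"
echo "cluster-admin check: ${CAN_ADMIN}"
```

If the check prints `no`, ask your cluster administrator for a cluster-admin role binding before running the commands in this section.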
kube-state-metrics is a service that exposes metrics and events about the state of the nodes, pods, and containers. The Metricbeat Kubernetes module connects to kube-state-metrics.
In a terminal configured to access your Anthos clusters on VMware environment, check whether kube-state-metrics is already running:

kubectl get pods --namespace=kube-system | grep kube-state

If kube-state-metrics is already running, upgrade to a current version of Anthos clusters on VMware that runs kube-state-metrics in a separate namespace before continuing.

Install kube-state-metrics:

git clone https://github.com/kubernetes/kube-state-metrics.git
kubectl create -f kube-state-metrics/examples/standard
kubectl get pods --namespace=kube-system | grep kube-state
Deploying a sample app
In this section, you deploy a custom version of the Kubernetes sample guestbook app. The YAML files are concatenated into a single manifest, with some changes made to serve as an example for enabling Beats to autodiscover the components of the app. The remaining steps refer to files from this repository.
Clone the Elastic examples GitHub repository
In your terminal, install git and clone the Elastic examples repository from GitHub:
git clone https://github.com/elastic/examples.git
Change into the Anthos clusters on VMware logging and metrics directory:
cd examples/GKE-On-Prem
Configure the app load balancer IP address
As shown in the following diagram, the sample app exposes a port to the network of your on-premises cluster.
Edit the loadBalancerIP field of the guestbook.yaml file to a VIP address on your Anthos clusters on VMware load balancer, which is accessible within your environment:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: guestbook
    tier: frontend
  loadBalancerIP: load-balancer-ip
Add metadata labels for Beats autodiscover
The Kubernetes metadata facilitates the Beats autodiscover functionality. In the example manifest file, there are metadata labels assigned to the deployments, and the Filebeat and Metricbeat configurations are updated to expect this metadata.
The guestbook.yaml manifest file has already been modified to add the app: redis label to the Redis deployments:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
The app: redis
label is added to the metadata for the Kubernetes deployment
and, therefore, is applied to each pod in the deployment.
The following lines from the filebeat-kubernetes.yaml
manifest
file configure Filebeat to autodiscover Redis pods that have the appropriate
label:
filebeat.autodiscover:
  providers:
  - type: kubernetes
    templates:
    - condition.contains:
        kubernetes.labels.app: redis
      config:
      - module: redis
Here, the condition.contains
key specifies that the condition is looking for a
substring and not an exact match. kubernetes.labels.app
specifies the label
to inspect, and redis
is the substring to look for. Finally, module: redis
defines the module to use when collecting, parsing, indexing, and visualizing
logs from pods that meet the condition.
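To see why the substring semantics matter, note that the condition also fires for labels such as app: redis-master. The shell sketch below mimics that matching logic; it is an illustration of the semantics only, not Filebeat code:

```shell
# Mimic condition.contains: print "match" when $2 is a substring of $1.
contains() {
  case "$1" in
    *"$2"*) echo "match" ;;
    *)      echo "no match" ;;
  esac
}

contains "redis"        "redis"   # an exact value also matches
contains "redis-master" "redis"   # a substring match fires
contains "guestbook"    "redis"   # an unrelated label does not
```

If you need an exact match instead, Filebeat also supports an equals condition in autodiscover templates.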
Deploy and test the sample app
In your terminal, deploy the sample app:
kubectl create -f guestbook.yaml
Check that the app has deployed successfully:
kubectl get deployments
When the deployment finishes, the output is similar to the following:
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
frontend       1         1         1            1           30s
redis-master   1         1         1            1           30s
redis-slave    1         1         1            1           30s
When the deployment becomes available, the sample app is accessible at the load balancer IP address that you configured earlier. Verify with curl:
curl load-balancer-ip
The output is the raw content of a web page beginning with the following:
<html ng-app="redis">
<head>
  <title>Guestbook</title>
[...]
Deploying Beats
Next, you deploy Beats to start collecting data from Anthos clusters on VMware to Elasticsearch and Kibana.
Verify connectivity from Anthos clusters on VMware to Elasticsearch and Kibana
In your terminal, identify the frontend pod of the guestbook app running on Anthos clusters on VMware:
POD=$(kubectl get pods \
    --selector=tier=frontend \
    -o=jsonpath='{.items[0].metadata.name}')
Ensure that Anthos clusters on VMware has connectivity to Elasticsearch and Kibana. If you have multiple Elasticsearch nodes, test all of them.
kubectl exec $POD \
    -- curl -I http://elastic-ip:9200
kubectl exec $POD \
    -- curl -I http://kibana-ip:5601/app/kibana
Replace the following:
- elastic-ip: your Elasticsearch local IP address.
- kibana-ip: your Kibana local IP address.
If the output for both commands doesn't contain HTTP/1.1 200 OK, do the following:
- Check your firewalls.
- If you're using a new install of Elasticsearch and Kibana, check that they're configured to listen on external network interfaces:
  - In elasticsearch.yml, check the network.host line.
  - In kibana.yml, check the server.host line.
Store Elasticsearch and Kibana endpoints as Kubernetes secrets
Rather than putting the Elasticsearch and Kibana endpoints into the manifest files, they're provided to the Beat pods as Kubernetes secrets.
In the GKE-On-Prem directory, edit the elasticsearch-hosts-ports file to point to your Elasticsearch host IP addresses and ports. The final file should resemble the following, replacing elastic-ipN with the IP addresses for your Elasticsearch nodes:

["http://elastic-ip1:9200", "http://elastic-ip2:9200"]
Edit the kibana-host-port file to point to your Kibana host and port. The file should resemble the following, replacing kibana-ip with the IP address for your Kibana server:

"http://kibana-ip:5601"
Create the secret from these files:
kubectl create secret generic elastic-stack \
    --from-file=./elasticsearch-hosts-ports \
    --from-file=./kibana-host-port \
    --namespace=kube-system
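With --from-file, the secret stores each file's contents under a data key named after the file. You can confirm that both keys exist before deploying the Beats; a sketch assuming kubectl access, degrading gracefully without a cluster:

```shell
# List the data keys of the elastic-stack secret; expect to see
# elasticsearch-hosts-ports and kibana-host-port.
if command -v kubectl >/dev/null 2>&1; then
  SECRET_KEYS=$(kubectl get secret elastic-stack -n kube-system \
      -o jsonpath='{.data}' 2>/dev/null) || true
fi
# Fall back to a placeholder when kubectl is missing or the secret is absent.
[ -n "${SECRET_KEYS:-}" ] || SECRET_KEYS="unknown (secret not found or no cluster access)"
echo "elastic-stack secret data: ${SECRET_KEYS}"
```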
Deploy index patterns, visualizations, dashboards, and machine learning jobs
Filebeat and Metricbeat modules (collectively, Beats modules) provide configurations for common services such as web servers, caches, proxies, operating systems, container environments, and databases. By deploying these configurations, you populate Elasticsearch and Kibana with index patterns, visualizations, dashboards, and machine learning jobs.
In your terminal, deploy the Filebeat and Metricbeat configurations:
kubectl create -f filebeat-setup.yaml
kubectl create -f metricbeat-setup.yaml
Verify setup of the Filebeat and Metricbeat pods
Verify that the Filebeat and Metricbeat setup pods complete successfully.
In your terminal, list the Filebeat pods:
kubectl get pods -n kube-system | grep filebeat-setup
The output is similar to the following:
filebeat-setup-7dj2b 0/1 Completed 0 1m47s
Examine the logs from the setup pod:
kubectl logs -f filebeat-setup-7dj2b -n kube-system | grep "dashboards\|template"
In the output, look for the following records to verify the Filebeat setup:
2019-06-20T15:50:42.464Z INFO [index-management] idxmgmt/std.go:272 Loaded index template.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
2019-06-20T15:51:19.032Z INFO instance/beat.go:741 Kibana dashboards successfully loaded.
List the Metricbeat setup pod:
kubectl get pods -n kube-system | grep metricbeat-setup
The output is similar to the following:
metricbeat-setup-7mkvx 0/1 Completed 0 2m
Examine the logs from the setup pod:
kubectl logs metricbeat-setup-7mkvx -n kube-system | grep "template\|dashboard"
In the output, look for the following records to verify the Metricbeat setup:
2019-06-20T16:01:15.740Z INFO [index-management] idxmgmt/std.go:272 Loaded index template.
Loading dashboards (Kibana must be running and reachable)
2019-06-20T16:01:47.872Z INFO instance/beat.go:741 Kibana dashboards successfully loaded.
Deploy the Beat DaemonSets
Deploy the Beat DaemonSets:

kubectl create -f filebeat-kubernetes.yaml
kubectl create -f metricbeat-kubernetes.yaml
kubectl create -f journalbeat-kubernetes.yaml
Check for the running DaemonSets. Verify that there is one Filebeat, Metricbeat, and Journalbeat pod running per Kubernetes node:

kubectl get pods -n kube-system | grep beat
The output is similar to the following:
filebeat-dynamic-6zl9d        1/1   Running     0   57m
filebeat-dynamic-prh7v        1/1   Running     0   57m
filebeat-dynamic-z5p2l        1/1   Running     0   57m
filebeat-setup-ddzbd          0/1   Completed   0   57m
metricbeat-7fc47c7cdf-qtw87   1/1   Running     0   46m
metricbeat-nxwh4              1/1   Running     1   46m
metricbeat-setup-ddzbd        0/1   Completed   0   46m
metricbeat-vzdkn              1/1   Running     1   46m
metricbeat-w6lnq              1/1   Running     0   46m
journalbeat-nxwh4             1/1   Running     1   46m
journalbeat-vzdkn             1/1   Running     1   46m
journalbeat-w6lnq             1/1   Running     0   46m
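Because each Beat runs as a DaemonSet, the count of filebeat-dynamic pods (and likewise journalbeat pods) should equal the number of Kubernetes nodes. A quick cross-check, assuming kubectl access and degrading gracefully without a cluster:

```shell
# Compare the node count to the filebeat-dynamic pod count; the two
# numbers should match when the DaemonSet is fully scheduled.
if command -v kubectl >/dev/null 2>&1; then
  NODE_COUNT=$(kubectl get nodes --no-headers 2>/dev/null | wc -l | tr -d ' ')
  BEAT_COUNT=$(kubectl get pods -n kube-system --no-headers 2>/dev/null \
      | grep -c filebeat-dynamic)
else
  # Placeholders when kubectl is unavailable.
  NODE_COUNT=0
  BEAT_COUNT=0
fi
echo "nodes: ${NODE_COUNT}, filebeat-dynamic pods: ${BEAT_COUNT}"
```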
Viewing your logs and metrics in Kibana
You can now visualize your logs and metrics in the Kibana Discover app at
http://kibana-ip:5601
and in dashboards provided by the
Beats modules that you're using.
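Before opening Kibana, you can also confirm that data is arriving by asking Elasticsearch for its Beats-related indices. A sketch using the _cat/indices API, keeping the tutorial's elastic-ip placeholder (replace it with a real node address); it degrades gracefully when the host is unreachable:

```shell
# Replace elastic-ip with one of your Elasticsearch node addresses.
ES_HOST="http://elastic-ip:9200"

# _cat/indices lists index name, document count, and size; filebeat-* and
# metricbeat-* indices appearing here means the Beats are shipping data.
if command -v curl >/dev/null 2>&1; then
  INDICES=$(curl -s --max-time 5 \
      "${ES_HOST}/_cat/indices/filebeat-*,metricbeat-*?v" 2>/dev/null) || true
fi
# Placeholder when curl is missing or the host does not respond.
[ -n "${INDICES:-}" ] || INDICES="no response (check ES_HOST and connectivity)"
echo "$INDICES"
```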
Data for the sample app is visible in the following dashboards:
- Apache
- Redis
- Kubernetes
- System
For more information about Kibana, see the getting started guide. If you're collecting logs and metrics from your own app, see the dashboards for the modules related to your app.
Cleaning up
After completing this tutorial, follow these steps to remove the created resources:
In the examples/GKE-On-Prem directory, run the following commands:

kubectl delete -f filebeat-kubernetes.yaml
kubectl delete -f filebeat-setup.yaml
kubectl delete -f guestbook.yaml
kubectl delete -f journalbeat-kubernetes.yaml
kubectl delete -f metricbeat-kubernetes.yaml
kubectl delete -f metricbeat-setup.yaml
In the directory where you cloned kube-state-metrics, run the following command:

kubectl delete -f kube-state-metrics/examples/standard
What's next
- To extend this tutorial to manage logs and metrics from your own app, examine your pods for existing labels and update the Filebeat and Metricbeat autodiscover configuration in the filebeat-kubernetes.yaml and metricbeat-kubernetes.yaml files. For more information, see the documentation for configuring Filebeat autodiscover and Metricbeat autodiscover.
- Review the list of Filebeat modules and Metricbeat modules.
- Learn more about Elastic products and solutions:
- Elastic Cloud is a growing family of software as a service (SaaS) offerings that make it easy to deploy, operate, and scale Elastic products and solutions in the cloud, and it runs on Google Cloud.
- Elastic Cloud Enterprise brings those capabilities to your own data center.
- Elastic Cloud on Kubernetes is the official way to streamline and automate the deployment, provisioning, management, and orchestration of Elasticsearch and Kibana on Kubernetes.
- Elastic Stack supports on-premises deployments using the downloaded binaries.
- Try out other Google Cloud features for yourself. Have a look at our tutorials.