Creating an Autopilot cluster


This page explains how to create a Google Kubernetes Engine (GKE) cluster in Autopilot mode. The Autopilot mode of operation is a hands-off Kubernetes experience that lets you focus on your services and applications, while Google takes care of node management and infrastructure. You can schedule your Pods without having to plan your node usage. After you create an Autopilot cluster, you can deploy your workload and scale your application as needed. GKE provisions, configures, and manages the resources and hardware to run your workload.

Before you begin

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone for zonal clusters or a region for regional or Autopilot clusters.

Using gcloud config

  • Set your default project ID:
    gcloud config set project PROJECT_ID
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone COMPUTE_ZONE
  • If you are working with Autopilot or regional clusters, set your default compute region:
    gcloud config set compute/region COMPUTE_REGION
  • Update gcloud to the latest version:
    gcloud components update
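
For example, assuming a hypothetical project named example-project and the us-central1 region, the commands look like the following:

gcloud config set project example-project
gcloud config set compute/region us-central1
gcloud components update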

Create an Autopilot cluster

You can create an Autopilot cluster by using the gcloud command-line tool or the Google Cloud Console.

gcloud

To create a public Autopilot cluster using the gcloud command-line tool, run the following command:

gcloud container clusters create-auto CLUSTER_NAME \
    --region REGION \
    --project=PROJECT_ID

Replace the following:

  • CLUSTER_NAME: the name of your new Autopilot cluster.
  • REGION: the region for your cluster, such as us-central1.
  • PROJECT_ID: your project ID.
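
For example, the following command, which uses hypothetical name, region, and project values, creates a public Autopilot cluster:

gcloud container clusters create-auto example-cluster \
    --region us-central1 \
    --project=example-project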

You can specify additional options when creating your Autopilot cluster, such as the network to connect to or whether to create a private cluster; a combined example follows the list.

  • --network specifies the network that your cluster connects to.
  • --subnetwork specifies the subnetwork that your cluster connects to.
  • --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
  • --master-authorized-networks specifies the CIDR ranges that are authorized to access the public endpoint.
  • --cluster-ipv4-cidr specifies the Pod address range.
  • --services-ipv4-cidr specifies the Service address range.
  • --enable-private-nodes creates a private cluster whose nodes have no external IP addresses.
  • --master-ipv4-cidr specifies an internal address range for the control plane (for private clusters).
  • --enable-private-endpoint indicates that the cluster is managed using the private IP address of the control plane API endpoint (for private clusters).
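
For example, the following sketch, which uses hypothetical network, subnet, and address-range values, combines several of these flags to create a private cluster whose public endpoint accepts traffic only from an authorized range:

gcloud container clusters create-auto example-private-cluster \
    --region us-central1 \
    --project=example-project \
    --network example-network \
    --subnetwork example-subnet \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29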

For the full list of options, run the following command:

gcloud container clusters create-auto --help

Console

To create an Autopilot cluster with the Google Cloud Console, perform the following tasks:

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. In the Autopilot section, click Configure.

  4. Enter the Name for your cluster.

  5. Select a region for your cluster.

  6. Choose a public or private cluster.

  7. (Optional) Expand Networking Options to specify network settings:

    1. If you choose a private cluster:
      1. To create a control plane that is accessible from authorized external IP addresses, select the Access control plane using its external IP address checkbox. Clear this checkbox to disable public endpoint access.
      2. (Optional) Set the Control plane IP range, for example 172.16.0.0/28.
    2. If you want to create a cluster with limited access to the public endpoint, select the Enable control plane authorized networks checkbox.
      1. Click Add Authorized Network to grant access to a specific set of addresses that you designate.
      2. For Name, enter the desired name for the network.
      3. For Network, enter a CIDR range that you want to grant allowed access to your cluster control plane.
      4. Click Done. Add additional authorized networks as needed.
    3. Enter a Network and Node subnet, or accept the default setting, which generates a subnet for your cluster.
    4. In the Pod address range field, enter a Pod address range or mask, or accept the default (example: 10.0.0.0/14).
    5. In the Service address range field, enter a Service address range or mask, or accept the default (example: 10.4.0.0/19).
  8. (Optional) Expand Advanced options to specify more settings:

    1. Select a release channel for the control plane.
    2. Click Enable Maintenance Window to control when automatic cluster maintenance occurs on your clusters.
      1. Click Add maintenance exclusion. For weekly maintenance, select the start time and length, and then select the days of the week on which the maintenance window occurs. To edit the rule directly, switch to the custom editor.
    3. In the Metadata field, enter a description of your cluster.
    4. Click Add label to add key-value pairs to help organize your clusters.
  9. Click Create.

Enabling outbound internet access on private clusters with Cloud NAT

By default, Autopilot clusters are public. If you created a private Autopilot cluster, its nodes do not have external IP addresses. To make outbound internet connections from your cluster, for example to pull images from Docker Hub, you must configure Cloud NAT. Cloud NAT lets private clusters send outbound packets to the internet and receive the corresponding established inbound response packets. Perform the following tasks to create a NAT configuration on a Cloud Router.

gcloud

To configure Cloud NAT for your cluster by using the gcloud command-line tool, run the following commands (a combined example with sample values follows the steps):

  1. Create a Cloud Router:

    gcloud compute routers create NAT_ROUTER \
        --network NETWORK \
        --region REGION \
        --project=PROJECT_ID
    

    Replace the following:

    • NAT_ROUTER: the name of your Cloud Router.
    • NETWORK: the name of the network that you want to create the Cloud Router in. For example, to NAT your default network, use default as the network name.
    • REGION: the region for your cluster, such as us-central1.
    • PROJECT_ID: your project ID.
  2. Add a configuration to the router. This configuration allows all instances in the region to use Cloud NAT for all primary and alias IP ranges, and automatically allocates the external IP addresses for the NAT gateway. For more options, see the gcloud command-line interface documentation:

    gcloud compute routers nats create NAT_CONFIG \
        --region REGION \
        --router NAT_ROUTER \
        --nat-all-subnet-ip-ranges \
        --auto-allocate-nat-external-ips \
        --project=PROJECT_ID
    

    Replace the following:

    • NAT_CONFIG: the name of your NAT configuration.
    • REGION: the region for your cluster, such as us-central1.
    • NAT_ROUTER: the name of your Cloud Router.
    • PROJECT_ID: your project ID.
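
For example, with hypothetical router and configuration names, and assuming the cluster uses the default network in us-central1, the two commands might look like the following:

gcloud compute routers create example-nat-router \
    --network default \
    --region us-central1 \
    --project=example-project

gcloud compute routers nats create example-nat-config \
    --region us-central1 \
    --router example-nat-router \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips \
    --project=example-project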

Console

  1. Go to the Cloud NAT page on Cloud Console.

    Go to Cloud NAT

  2. Click Get started or Create NAT gateway.

  3. Enter a Gateway name.

  4. Choose a VPC network.

  5. Set the Region for the NAT gateway.

  6. Select or create a Cloud Router in the region.

  7. Click Create.

Connecting to the cluster

After creating your cluster, you need to get authentication credentials to connect to the cluster.

gcloud

gcloud container clusters get-credentials CLUSTER_NAME \
    --region REGION \
    --project=PROJECT_ID

Replace the following:

  • CLUSTER_NAME: the name of your new Autopilot cluster.
  • REGION: the region for your cluster, such as us-central1.
  • PROJECT_ID: your project ID.

This command configures kubectl to use the cluster you created.
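
To confirm that kubectl now points at the new cluster, you can, for example, check the current context and list the cluster's namespaces:

kubectl config current-context
kubectl get namespaces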

Console

  1. Go to the Google Kubernetes Engine page on Cloud Console.

    Go to Google Kubernetes Engine

  2. In the cluster list, beside the cluster that you want to connect to, click Actions, and then click Connect.

  3. Click Run in Cloud Shell when prompted. The generated command is copied into your Cloud Shell, for example:

    gcloud container clusters get-credentials autopilot-cluster --region us-east1 --project autopilot-test
    
  4. Press Enter to run the command.

Verifying the cluster mode

You can verify that your cluster is an Autopilot cluster by using the gcloud command-line tool or the Google Cloud Console.

gcloud

To verify that your cluster is created in Autopilot mode, run the following command:

gcloud container clusters describe CLUSTER_NAME \
    --region REGION

Replace the following:

  • CLUSTER_NAME: the name of your Autopilot cluster.
  • REGION: the region for your cluster, such as us-central1.

The output of the command contains the following:

autopilot:
  enabled: true
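
If you prefer a single value instead of the full cluster description, you can filter the output with gcloud's --format flag; for example, the following command prints True for an Autopilot cluster:

gcloud container clusters describe CLUSTER_NAME \
    --region REGION \
    --format="value(autopilot.enabled)"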

Console

To verify that your cluster is created in Autopilot mode:

  1. Go to the Google Kubernetes Engine page on Cloud Console.

    Go to Google Kubernetes Engine

  2. Find your cluster in the cluster list. In the Mode column, the status shows Autopilot.

Verifying the cluster configuration

To see all of your resources across namespaces, run the following command:

kubectl get all --all-namespaces

You'll see the cluster's new resources, such as Pods, Services, Deployments, and DaemonSets.
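
To narrow the output to the system components that GKE manages for you, you can, for example, list only the Pods in the kube-system namespace:

kubectl get pods --namespace kube-system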

What's next