This page shows you how to create a standalone cluster, which is a self-managing cluster that runs workloads. Standalone clusters do not manage other clusters, eliminating the need for running a separate admin cluster in resource-constrained scenarios. Furthermore, standalone clusters offer two installation profiles to choose from:
- Default: The default profile has limited resource requirements.
- Edge: The edge profile has significantly reduced system resource requirements and is recommended for edge devices with high resource constraints.
Before creating a standalone cluster, consider the tradeoff between reducing resources and overall security. Since standalone clusters manage themselves, running workloads on the same cluster increases the risk of exposing sensitive administrative data, like SSH keys.
Before you begin
Before you can create a standalone cluster, ensure the following:
- bmctl is downloaded (gs://anthos-baremetal-release/bmctl/1.8.4/linux-amd64/bmctl) from Cloud Storage (a download sketch follows this list).
- Workstation running bmctl has network connectivity to all nodes in the target standalone cluster.
- Workstation running bmctl has network connectivity to the control plane VIP of the target standalone cluster.
- SSH key used to create the standalone cluster is available to root, or there is SUDO user access on all nodes in the target standalone cluster.
- Connect-register service account is configured for use with Connect.
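If you haven't downloaded bmctl yet, the following is a minimal sketch of fetching the 1.8.4 Linux binary with the gsutil CLI; it assumes gsutil is installed and authenticated, and downloads into the current directory:

# Copy the bmctl binary from Cloud Storage and make it executable.
gsutil cp gs://anthos-baremetal-release/bmctl/1.8.4/linux-amd64/bmctl .
chmod a+x bmctl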
If you want to enable SELinux to secure your containers, you must make sure that SELinux is enabled in Enforced mode on the host machines before installing Anthos clusters on bare metal. SELinux is enabled by default on RHEL and CentOS systems. If SELinux is disabled in your clusters or you aren't sure, see Securing your containers using SELinux for instructions on how to enable it.
Anthos clusters on bare metal supports SELinux only on RHEL and CentOS systems.
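To confirm the SELinux mode on a host before installing, you can run the standard SELinux utility getenforce on each node; it should report Enforcing:

# Prints Enforcing, Permissive, or Disabled.
getenforce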
Create a standalone cluster
You can create a standalone cluster that has a single control plane node using the bmctl command. This type of configuration reduces resource consumption but does not provide high availability (HA), and the resulting cluster has a single point of failure.
You can also create an HA standalone cluster. In HA mode, if one node fails, other nodes take its place. To create an HA standalone cluster, you must specify at least three nodes for the control plane.
The bmctl command can typically be run on a separate workstation or on one of the standalone cluster nodes. However, if you are creating a standalone cluster with the edge profile enabled and have the minimum required resources configured, we recommend running bmctl on a separate workstation.
Log into gcloud as a user:

gcloud auth application-default login
You need to have a Project Owner or Editor role to use the automatic API enablement and Service Account creation features described below.
You can also add the following IAM roles to the user:
- Service Account Admin
- Service Account Key Admin
- Project IAM Admin
- Compute Viewer
- Service Usage Admin
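As a sketch, you can grant one of these roles to a user with gcloud; the project ID, user email, and role ID shown here are illustrative (Service Account Admin corresponds to roles/iam.serviceAccountAdmin), and you would repeat the command for the remaining roles:

# Grant the Service Account Admin role to a user on the project.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:USER_EMAIL \
    --role=roles/iam.serviceAccountAdmin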
Alternatively, if you already have a service account with those roles, run:

export GOOGLE_APPLICATION_CREDENTIALS=JSON_KEY_FILE

Replace JSON_KEY_FILE with the path to your service account JSON key file.
Get your Google Cloud project ID to use with cluster creation:
export CLOUD_PROJECT_ID=$(gcloud config get-value project)
Create a standalone cluster config file
After you've logged into gcloud and have your project set up, you can create the cluster config file with the bmctl command. In this example, all service accounts are automatically created by the bmctl create config command:
bmctl create config -c STANDALONE_CLUSTER_NAME --enable-apis \
    --create-service-accounts --project-id=$CLOUD_PROJECT_ID
Replace the following:
- STANDALONE_CLUSTER_NAME with the name of the standalone cluster that you want to create.
The following command creates a config file for a standalone cluster named standalone1 associated with project ID my-gcp-project:

bmctl create config -c standalone1 --create-service-accounts --project-id=my-gcp-project
The file is written to the bmctl workspace directory.
As an alternative to automatically enabling APIs and creating service accounts, you can also provide your existing service accounts if you have the proper IAM permissions. This way, you can skip the automatic service account creation in the previous step in the bmctl create config command:

bmctl create config -c standalone1
Edit the cluster config file
Now that you have a cluster config file, make the following changes to it:
Add the SSH private key to access the standalone cluster nodes:
# bmctl configuration variables. Because this section is valid YAML but not a valid Kubernetes
# resource, this section can only be included when using bmctl to
# create the initial admin/hybrid cluster. Afterwards, when creating user clusters by directly
# applying the cluster and node pool resources to the existing cluster, you must remove this
# section.
gcrKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-gcr.json
sshPrivateKeyPath: /path/to/your/ssh_private_key
gkeConnectAgentServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-connect.json
gkeConnectRegisterServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-register.json
cloudOperationsServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-cloud-ops.json
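If an SSH key hasn't been distributed to the nodes yet, the following is a minimal sketch of generating a key pair and installing the public key for root on one node; the key path matches the example above, and the node address is illustrative:

# Generate an RSA key pair without a passphrase, then copy the public key to a node.
ssh-keygen -t rsa -N "" -f /path/to/your/ssh_private_key
ssh-copy-id -i /path/to/your/ssh_private_key.pub root@10.200.0.4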
Register your clusters with your project fleet using Connect.
- If you created your config file using automatic API enablement and Service Account creation features, you can skip this step.
- If you created the config file without using the automatic API enablement and Service Account creation features, reference the downloaded service account JSON key files in the corresponding key path fields (for example, gkeConnectRegisterServiceAccountKeyPath) of the cluster config file.
Change the config to specify a cluster type of standalone instead of admin. If you want to enable the edge profile to minimize resource consumption, specify edge as the profile:
spec:
  # Cluster type. This can be:
  # 1) admin: to create an admin cluster. This can later be used to create user clusters.
  # 2) user: to create a user cluster. Requires an existing admin cluster.
  # 3) hybrid: to create a hybrid cluster that runs admin cluster components and user workloads.
  # 4) standalone: to create a cluster that manages itself, runs user workloads, but does not manage other clusters.
  type: standalone
  # Edge profile minimizes the resource consumption of Anthos clusters on bare metal. It is only available for standalone clusters.
  profile: edge
(Optional) Change the config to specify a multi-node, high-availability control plane. Specify an odd number of nodes to be able to have a majority quorum for HA:
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: 10.200.0.4
      - address: 10.200.0.5
      - address: 10.200.0.6
If you have an even number of nodes temporarily while adding or removing nodes for maintenance or replacement, your deployment maintains HA as long as you have quorum. For example, with four control plane nodes, quorum requires three healthy members, so the cluster tolerates one failure during the transition.
Specify the pod density of cluster nodes and the container runtime:
....
  # NodeConfig specifies the configuration that applies to all nodes in the cluster.
  nodeConfig:
    # podDensity specifies the pod density configuration.
    podDensity:
      # maxPodsPerNode specifies at most how many pods can be run on a single node.
      maxPodsPerNode: 250
    # containerRuntime specifies which container runtime to use for scheduling containers on nodes.
    # containerd and docker are supported.
    containerRuntime: containerd
....
For standalone clusters, allowable values for maxPodsPerNode are 32-250 for HA clusters and 64-250 for non-HA clusters. The default value if unspecified is 110. Once the cluster is created, this value cannot be updated.
Pod density is also limited by your cluster's available IP resources. For details, see Pod networking.
If you are enabling the edge profile with the minimum resource requirements configured, we recommend using containerd as the container runtime.
Create the standalone cluster with the cluster config
Run the bmctl command to deploy the standalone cluster:
bmctl create cluster -c CLUSTER_NAME
Replace CLUSTER_NAME with the name of the cluster you created in the previous section.
The following shows an example of the command to create a cluster named standalone1:

bmctl create cluster -c standalone1
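Once the command completes, you can verify the new cluster with kubectl; the kubeconfig path below assumes bmctl's default workspace layout for a cluster named standalone1 and might differ in your environment:

# Point kubectl at the kubeconfig written by bmctl, then list the cluster nodes.
export KUBECONFIG=bmctl-workspace/standalone1/standalone1-kubeconfig
kubectl get nodes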
Sample complete standalone cluster config
The following is a sample standalone cluster configuration file created by the bmctl command. In this sample configuration, placeholder cluster names, VIPs, and addresses are used. They may not work for your network.
gcrKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-gcr.json
sshPrivateKeyPath: /bmctl/bmctl-workspace/.ssh/id_rsa
gkeConnectAgentServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-connect.json
gkeConnectRegisterServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-register.json
cloudOperationsServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-cloud-ops.json
---
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-standalone1
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: standalone1
  namespace: cluster-standalone1
spec:
  # Cluster type. This can be:
  # 1) admin: to create an admin cluster. This can later be used to create user clusters.
  # 2) user: to create a user cluster. Requires an existing admin cluster.
  # 3) hybrid: to create a hybrid cluster that runs admin cluster components and user workloads.
  # 4) standalone: to create a cluster that manages itself, runs user workloads, but does not manage other clusters.
  type: standalone
  # Anthos cluster version.
  anthosBareMetalVersion: 1.8.4
  # GKE connect configuration
  gkeConnect:
    projectID: $GOOGLE_PROJECT_ID
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: 10.200.0.4
  # Cluster networking configuration
  clusterNetwork:
    # Pods specify the IP ranges from which pod networks are allocated.
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    # Services specify the network ranges from which service virtual IPs are allocated.
    # This can be any RFC 1918 range that does not conflict with any other IP range
    # in the cluster and node pool resources.
    services:
      cidrBlocks:
      - 10.96.0.0/20
  # Load balancer configuration
  loadBalancer:
    # Load balancer mode can be either 'bundled' or 'manual'.
    # In 'bundled' mode a load balancer will be installed on load balancer nodes during cluster creation.
    # In 'manual' mode the cluster relies on a manually-configured external load balancer.
    mode: bundled
    # Load balancer port configuration
    ports:
      # Specifies the port the load balancer serves the Kubernetes control plane on.
      # In 'manual' mode the external load balancer must be listening on this port.
      controlPlaneLBPort: 443
    # There are two load balancer virtual IP (VIP) addresses: one for the control plane
    # and one for the L7 Ingress service. The VIPs must be in the same subnet as the load balancer nodes.
    # These IP addresses do not correspond to physical network interfaces.
    vips:
      # ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
      # This address must not be in the address pools below.
      controlPlaneVIP: 10.200.0.71
      # IngressVIP specifies the VIP shared by all services for ingress traffic.
      # Allowed only in non-admin clusters.
      # This address must be in the address pools below.
      ingressVIP: 10.200.0.72
    # AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
    # All addresses must be in the same subnet as the load balancer nodes.
    # Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
    addressPools:
    - name: pool1
      addresses:
      # Each address must be either in the CIDR form (1.2.3.4/24)
      # or range form (1.2.3.4-5.6.7.8).
      - 10.200.0.72-10.200.0.90
    # A load balancer node pool can be configured to specify nodes used for load balancing.
    # These nodes are part of the Kubernetes cluster and run regular workloads as well as load balancers.
    # If the node pool config is absent then the control plane nodes are used.
    # Node pool configuration is only valid for 'bundled' LB mode.
    # nodePoolSpec:
    #   nodes:
    #   - address: <Machine 1 IP>
  # Proxy configuration
  # proxy:
  #   url: http://[username:password@]domain
  #   # A list of IPs, hostnames or domains that should not be proxied.
  #   noProxy:
  #   - 127.0.0.1
  #   - localhost
  # Logging and Monitoring
  clusterOperations:
    # Cloud project for logs and metrics.
    projectID: $GOOGLE_PROJECT_ID
    # Cloud location for logs and metrics.
    location: us-central1
    # Whether collection of application logs/metrics should be enabled (in addition to
    # collection of system logs/metrics which correspond to system components such as
    # Kubernetes control plane or cluster management agents).
    # enableApplication: false
  # Storage configuration
  storage:
    # lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
    # These disks need to be formatted and mounted by the user, which can be done before or after
    # cluster creation.
    lvpNodeMounts:
      # path specifies the host machine path where mounted disks will be discovered and a local PV
      # will be created for each mount.
      path: /mnt/localpv-disk
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-disks
    # lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
    # These subdirectories are automatically created during cluster creation.
    lvpShare:
      # path specifies the host machine path where subdirectories will be created on each host. A local PV
      # will be created for each subdirectory.
      path: /mnt/localpv-share
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-shared
      # numPVUnderSharedPath specifies the number of subdirectories to create under path.
      numPVUnderSharedPath: 5
  # NodeConfig specifies the configuration that applies to all nodes in the cluster.
  nodeConfig:
    # podDensity specifies the pod density configuration.
    podDensity:
      # maxPodsPerNode specifies at most how many pods can be run on a single node.
      maxPodsPerNode: 250
    # containerRuntime specifies which container runtime to use for scheduling containers on nodes.
    # containerd and docker are supported.
    containerRuntime: containerd
  # KubeVirt configuration, uncomment this section if you want to install kubevirt to the cluster
  # kubevirt:
  #   # if useEmulation is enabled, hardware accelerator (i.e relies on cpu feature like vmx or svm)
  #   # will not be attempted. QEMU will be used for software emulation.
  #   # useEmulation must be specified for KubeVirt installation
  #   useEmulation: false
  # Authentication; uncomment this section if you wish to enable authentication to the cluster with OpenID Connect.
  # authentication:
  #   oidc:
  #     # issuerURL specifies the URL of your OpenID provider, such as "https://accounts.google.com". The Kubernetes API
  #     # server uses this URL to discover public keys for verifying tokens. Must use HTTPS.
  #     issuerURL: <URL for OIDC Provider; required>
  #     # clientID specifies the ID for the client application that makes authentication requests to the OpenID
  #     # provider.
  #     clientID: <ID for OIDC client application; required>
  #     # clientSecret specifies the secret for the client application.
  #     clientSecret: <Secret for OIDC client application; optional>
  #     # kubectlRedirectURL specifies the redirect URL (required) for the gcloud CLI, such as
  #     # "http://localhost:[PORT]/callback".
  #     kubectlRedirectURL: <Redirect URL for the gcloud CLI; optional, default is "http://kubectl.redirect.invalid">
  #     # username specifies the JWT claim to use as the username. The default is "sub", which is expected to be a
  #     # unique identifier of the end user.
  #     username: <JWT claim to use as the username; optional, default is "sub">
  #     # usernamePrefix specifies the prefix prepended to username claims to prevent clashes with existing names.
  #     usernamePrefix: <Prefix prepended to username claims; optional>
  #     # group specifies the JWT claim that the provider will use to return your security groups.
  #     group: <JWT claim to use as the group name; optional>
  #     # groupPrefix specifies the prefix prepended to group claims to prevent clashes with existing names.
  #     groupPrefix: <Prefix prepended to group claims; optional>
  #     # scopes specifies additional scopes to send to the OpenID provider as a comma-delimited list.
  #     scopes: <Additional scopes to send to OIDC provider as a comma-separated list; optional>
  #     # extraParams specifies additional key-value parameters to send to the OpenID provider as a comma-delimited
  #     # list.
  #     extraParams: <Additional key-value parameters to send to OIDC provider as a comma-separated list; optional>
  #     # proxy specifies the proxy server to use for the cluster to connect to your OIDC provider, if applicable.
  #     # Example: https://user:firstname.lastname@example.org:8888. If left blank, this defaults to no proxy.
  #     proxy: <Proxy server to use for the cluster to connect to your OIDC provider; optional, default is no proxy>
  #     # deployCloudConsoleProxy specifies whether to deploy a reverse proxy in the cluster to allow Google Cloud
  #     # Console access to the on-premises OIDC provider for authenticating users. If your identity provider is not
  #     # reachable over the public internet, and you wish to authenticate using Google Cloud Console, then this field
  #     # must be set to true. If left blank, this field defaults to false.
  #     deployCloudConsoleProxy: <Whether to deploy a reverse proxy for Google Cloud Console authentication; optional>
  #     # certificateAuthorityData specifies a Base64 PEM-encoded certificate authority certificate of your identity
  #     # provider. It's not needed if your identity provider's certificate was issued by a well-known public CA.
  #     # However, if deployCloudConsoleProxy is true, then this value must be provided, even for a well-known public
  #     # CA.
  #     certificateAuthorityData: <Base64 PEM-encoded certificate authority certificate of your OIDC provider; optional>
  # Node access configuration; uncomment this section if you wish to use a non-root user
  # with passwordless sudo capability for machine login.
  # nodeAccess:
  #   loginUser: <login user name>
---
# Node pools for worker nodes
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-1
  namespace: cluster-standalone1
spec:
  clusterName: standalone1
  nodes:
  - address: 10.200.0.5
  - address: 10.200.0.6