The following sample configuration files are for Anthos running in disconnected mode. They are intended to help you understand the product APIs used to manage clusters or to configure Anthos features.
Admin cluster and NodePool
The following is a sample admin cluster configuration file for Anthos running in disconnected mode.
Notes about the sample:
- The actl configuration variables at the beginning of the file are valid YAML but not a valid Kubernetes resource; they can only be included when using actl to create the initial admin cluster (a hedged command sketch follows this list).
- The cluster must be named admin.
- The spec.type field must be admin.
- Configuration parameters do not support shell expansion; you must specify absolute paths.
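The exact actl command is not reproduced in this document, so treat the invocation below as an assumption (the subcommand and flag spelling may differ by release) and confirm it against actl --help:

# Assumption: illustrative invocation only; verify the subcommand for your release.
actl clusters create --config=admin-cluster.yaml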
# actl configuration variables. Because this section is valid YAML but not a
# valid Kubernetes resource, this section can only be included when using actl
# to create the initial admin cluster. Afterwards, when creating user clusters
# by directly applying the cluster and node pool resources to the existing admin
# cluster, you must remove this section.
#
sshPrivateKeyPath: <path to SSH private key, used for node access>
registryMirrors:
# Registry endpoint to pull images from. If the registry has a namespace, append
# 'v2' after the registry IP or hostname.
# Example: https://registry.example.com/v2/library
- endpoint: <private registry>
# Example: /home/USER/.docker/config.json
pullCredentialConfigPath: <private registry config file>
# Not needed for trusted domain.
# Example: /etc/docker/certs.d/registry.example.com/ca.crt
caCertPath: <private registry TLS cert>
---
apiVersion: v1
kind: Namespace
metadata:
name: cluster-admin
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
name: admin
namespace: cluster-admin
annotations:
baremetal.cluster.gke.io/private-mode: "true"
spec:
# Cluster type. This can only be admin for an admin cluster.
type: admin
# Anthos cluster version.
anthosBareMetalVersion: 1.11.3
# NodeConfig specifies the configuration that applies to all nodes in the cluster.
nodeConfig:
containerRuntime: containerd
# Control plane configuration
controlPlane:
nodePoolSpec:
nodes:
# Control plane node pools. Typically, this is either a single machine
# or 3 machines if using a high availability deployment.
- address: <Machine 1 IP>
# Cluster networking configuration
clusterNetwork:
# Pods specify the IP ranges from which Pod networks are allocated.
pods:
cidrBlocks:
- 192.168.0.0/16
# Services specify the network ranges from which service VIPs are allocated.
# This can be any RFC 1918 range that does not conflict with any other IP range
# in the cluster and node pool resources.
services:
cidrBlocks:
- 10.96.0.0/12
# Load balancer configuration
loadBalancer:
# Load balancer mode can only be 'bundled'.
# In 'bundled' mode a load balancer will be installed on load balancer nodes during cluster creation.
mode: bundled
# Load balancer port configuration
ports:
# Specifies the port the LB serves the kubernetes control plane on.
# In 'manual' mode the external load balancer must be listening on this port.
controlPlaneLBPort: 443
# The VIPs must be in the same subnet as the load balancer nodes.
vips:
# ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
# This address must not be in the address pools below.
controlPlaneVIP: <control plane VIP>
# AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
# All addresses must be in the same subnet as the load balancer nodes.
# Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
addressPools:
- name: pool1
addresses:
# Each address must be either in the CIDR form (1.2.3.0/24)
# or range form (1.2.3.1-1.2.3.5).
- <VIP address pools>
# A load balancer nodepool can be configured to specify nodes used for load balancing.
# These nodes are part of the kubernetes cluster and run regular workloads as well as load balancers.
# If the node pool config is absent then the control plane nodes are used.
# It's recommended to have the LB node pool for non-admin clusters.
# Node pool configuration is only valid for 'bundled' LB mode.
# nodePoolSpec:
# nodes:
# - address: <Machine 1 IP>
# Proxy configuration
# proxy:
# url: http://[username:password@]domain
# # A list of IPs, hostnames or domains that should not be proxied.
# noProxy:
# - 127.0.0.1
# - localhost
# Storage configuration
storage:
# lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
# These disks need to be formatted and mounted by the user, which can be done before or after
# cluster creation.
lvpNodeMounts:
# path specifies the host machine path where mounted disks will be discovered and a local PV
# will be created for each mount.
path: /mnt/localpv-disk
# storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
# is created during cluster creation.
storageClassName: local-disks
# lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
# These subdirectories are automatically created during cluster creation.
lvpShare:
# path specifies the host machine path where subdirectories will be created on each host. A local PV
# will be created for each subdirectory.
path: /mnt/localpv-share
# storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
# is created during cluster creation.
storageClassName: local-shared
# numPVUnderSharedPath specifies the number of subdirectories to create under path.
numPVUnderSharedPath: 5
# Node access configuration; to use a non-root user with passwordless sudo capability for machine login.
nodeAccess:
loginUser: <login user name>
---
# Node pools for worker nodes
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
name: node-pool-1
namespace: cluster-admin
spec:
clusterName: admin
nodes:
- address: <Machine 2 IP>
- address: <Machine 3 IP>
User cluster and NodePool
The following is a sample user cluster configuration file for Anthos running in disconnected mode.
For the full Anthos on Bare Metal documentation, visit https://cloud.google.com/anthos/clusters/docs/bare-metal.
Notes:
- This is very similar to the admin cluster, but with a few different default values.
- You can apply the cluster and node pool resources directly to the admin cluster, as shown in the command sketch after this list.
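A minimal sketch of that step, assuming the resources below are saved as user-cluster.yaml and ADMIN_KUBECONFIG points at the admin cluster:

# Apply the user cluster and node pool resources to the existing admin cluster.
kubectl --kubeconfig ADMIN_KUBECONFIG apply -f user-cluster.yaml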
apiVersion: v1
kind: Namespace
metadata:
name: cluster-<cluster-name>
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
name: <cluster-name>
namespace: cluster-<cluster-name>
annotations:
baremetal.cluster.gke.io/private-mode: "true"
spec:
# Cluster type. This can only be user for a user cluster.
type: user
# Anthos cluster version.
anthosBareMetalVersion: 1.11.3
# NodeConfig specifies the configuration that applies to all nodes in the cluster.
nodeConfig:
containerRuntime: containerd
# Control plane configuration
controlPlane:
nodePoolSpec:
nodes:
# Control plane node pools. Typically, this is either a single machine
# or 3 machines if using a high availability deployment.
- address: <Machine 4 IP>
# Cluster networking configuration
clusterNetwork:
# Pods specify the IP ranges from which Pod networks are allocated.
pods:
cidrBlocks:
- 192.168.0.0/16
# Services specify the network ranges from which service VIPs are allocated.
# This can be any RFC 1918 range that does not conflict with any other IP range
# in the cluster and node pool resources.
services:
cidrBlocks:
- 10.96.0.0/12
# Credentials specify the secrets that hold SSH key and image pull credential for the new cluster.
# credentials:
# # Optionally override default ssh key secret inherited from the admin cluster.
# sshKeySecret:
# name: SSH_KEY_SECRET
# namespace: cluster-<cluster-name>
# # Optionally override default image pull secret inherited from the admin cluster.
# imagePullSecret:
# name: IMAGE_PULL_SECRET
# namespace: cluster-<cluster-name>
# Load balancer configuration
loadBalancer:
# Load balancer mode can only be 'bundled'.
mode: bundled
# Load balancer port configuration
ports:
# Specifies the port the LB serves the kubernetes control plane on.
# In 'manual' mode the external load balancer must be listening on this port.
controlPlaneLBPort: 443
# The VIPs must be in the same subnet as the load balancer nodes.
vips:
# ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
# This address must not be in the address pools below.
controlPlaneVIP: <control plane VIP>
# AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
# All addresses must be in the same subnet as the load balancer nodes.
# Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
addressPools:
- name: pool1
addresses:
# Each address must be either in the CIDR form (1.2.3.0/24)
# or range form (1.2.3.1-1.2.3.5).
- <VIP address pools>
# A load balancer nodepool can be configured to specify nodes used for load balancing.
# These nodes are part of the kubernetes cluster and run regular workloads as well as load balancers.
# If the node pool config is absent then the control plane nodes are used.
# Node pool configuration is only valid for 'bundled' LB mode.
# nodePoolSpec:
# nodes:
# - address: <Machine 7 IP>
# Proxy configuration
# proxy:
# url: http://[username:password@]domain
# # A list of IPs, hostnames or domains that should not be proxied.
# noProxy:
# - 127.0.0.1
# - localhost
# Storage configuration
storage:
# lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
# These disks need to be formatted and mounted by the user, which can be done before or after
# cluster creation.
lvpNodeMounts:
# path specifies the host machine path where mounted disks will be discovered and a local PV
# will be created for each mount.
path: /mnt/localpv-disk
# storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
# is created during cluster creation.
storageClassName: local-disks
# lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
# These subdirectories are automatically created during cluster creation.
lvpShare:
# path specifies the host machine path where subdirectories will be created on each host. A local PV
# will be created for each subdirectory.
path: /mnt/localpv-share
# storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
# is created during cluster creation.
storageClassName: local-shared
# numPVUnderSharedPath specifies the number of subdirectories to create under path.
numPVUnderSharedPath: 5
# Node access configuration; to use a non-root user with passwordless sudo capability for machine login.
nodeAccess:
loginUser: <login user name>
---
# Node pools for worker nodes
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
name: <cluster-name>-worker-node-pool
namespace: cluster-<cluster-name>
spec:
clusterName: <cluster-name>
nodes:
- address: <Machine 5 IP>
- address: <Machine 6 IP>
Admin Operator
The following is a sample Admin Operator configuration file for Anthos running in disconnected mode. This configuration file controls the Management Center.
apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: AdminOperator
metadata:
name: admin-operator
spec:
billingInfo:
projectNumber: <your Google Cloud Platform project number>
# FreeTrialExpiration indicates if the project has a free trial and the time
# when that free trial ends. Format: date-time in RFC 3339.
# It's not a free trial by default when not specified.
# freeTrialExpiration: <2021-07-01T00:00:00Z>
# UpdateConfigOverride can be optionally provided to override the default
# update configuration for components.
# All the components will be running on the same version as the admin operator
# by default, unless an override is set via this field.
updateConfigOverride:
policies:
- name: "<component name, for example: anthos-config-management>"
versionConstraint: "<=1.9.0"
InventoryMachine
The following is a sample InventoryMachine configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides the machines required to create user clusters.
apiVersion: baremetal.cluster.gke.io/v1alpha1
kind: InventoryMachine
metadata:
name: <Machine IP address>
# Optional: used by the Management Center to inform customers
labels:
key1: value1
key2: value2
spec:
# Address specifies the default IPv4 address for SSH access and Kubernetes node.
# Routable from the admin cluster.
# Example: 192.168.0.1
# This field is immutable.
# This field is required.
address: <Machine IP address>
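A filled-in sketch of this resource, using a hypothetical address and labels:

apiVersion: baremetal.cluster.gke.io/v1alpha1
kind: InventoryMachine
metadata:
  # The resource name matches the machine's IP address.
  name: 192.168.0.5
  labels:
    rack: rack-1
    disk: ssd
spec:
  address: 192.168.0.5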
AddressPool
The following is a sample AddressPool configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides the IP addresses required to create user clusters.
apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: AddressPool
metadata:
# Don't change the name, only `anthos-addresspool-default` allowed.
name: anthos-addresspool-default
spec:
description: <description text>
addresses:
# All addresses below are a list of non-overlapping IP ranges.
# Address Range, must be in the single IP address form (1.2.3.4),
# CIDR form (1.2.3.0/24) or range form (1.2.3.1-1.2.3.5).
- <VIP address range>
- <VIP address>
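The three accepted address forms can be mixed within a single pool; a sketch with hypothetical values:

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: AddressPool
metadata:
  name: anthos-addresspool-default
spec:
  description: VIPs reserved for user clusters
  addresses:
    - 10.200.0.10              # single IP form
    - 10.200.1.0/28            # CIDR form
    - 10.200.2.1-10.200.2.20   # range form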
BootstrapService
The following is a sample BootstrapService configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides bootstrap services required to create user clusters (for example, a third-party storage service or a GPU driver).
The ConfigMap can be created with kubectl create configmap <name of configmap> --from-file=<name of manifest>.yaml.
apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: BootstrapService
metadata:
name: <name of the bootstrap service>
namespace: anthos-management-center
spec:
# If set to True, this configuration can be applied to many user clusters,
# e.g. a GPU driver configuration. If False, this configuration can only be
# applied to a single user cluster, e.g. a CSI Driver + StorageClass
# combination which is intended for exclusive use by a single user cluster.
# Defaults to False.
isReusable: False
configMapRef:
name: <name of configmap>
namespace: anthos-management-center
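For example, to create the ConfigMap referenced by configMapRef from a hypothetical driver manifest:

# csi-driver.yaml is a placeholder for your manifest file.
kubectl create configmap csi-driver \
    --from-file=csi-driver.yaml \
    --namespace anthos-management-center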
BootstrapServiceBinding
The following is a sample BootstrapServiceBinding configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and binds the BootstrapService to the target clusters when those clusters are created.
apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: BootstrapServiceBinding
metadata:
name: <name of the bootstrap service binding>
namespace: anthos-management-center
spec:
configs:
- configRef:
name: <name of the bootstrap service>
namespace: anthos-management-center
placement:
clusterIDs:
- "<cluster-name>"
ConfigManagementFeatureSpec
The following is a sample ConfigManagementFeatureSpec configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides the spec definition for Anthos Config Management.
For the full Anthos Config Management documentation, visit https://cloud.google.com/anthos/config-management.
apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: ConfigManagementFeatureSpec
metadata:
name: <name of config management spec>
namespace: anthos-management-center
spec:
version: "1.7.1"
git:
syncRepo: "git@<YOUR_GIT_REPO>.git"
policyDir: "."
secretType: "ssh"
syncBranch: "master"
syncRev: "HEAD"
syncWait: 15
# See https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/how-to/unstructured-repo
# for the difference between `hierarchy` and `unstructured` source format.
sourceFormat: unstructured
# See https://cloud.google.com/anthos-config-management/docs/concepts/policy-controller
# for more about Policy Controller.
policyController:
enabled: true
# See https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/concepts/hierarchy-controller
# for more background regarding Hierarchy Controller.
hierarchyController:
enabled: true
# [Optional] The Secret on the admin cluster to access the config-management repo.
# If set, the secret referenced will be copied to user clusters to allow ACM to access the Git repo.
# If not set, users will need to create the Git credential secret on the user cluster by themselves.
secretRef:
name: git-creds
namespace: anthos-management-center
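Because secretType is ssh in this sample, the git-creds Secret referenced above would hold an SSH private key. A sketch following the common Config Management convention of storing the key under the ssh key name; the path is a placeholder:

# Create the Git credential secret referenced by secretRef.
kubectl create secret generic git-creds \
    --from-file=ssh=/path/to/private-key \
    --namespace anthos-management-center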
ConfigManagementBinding
The following is a sample ConfigManagementBinding configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and installs Anthos Config Management on user clusters.
apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: ConfigManagementBinding
metadata:
name: <name of config management binding>
namespace: anthos-management-center
spec:
configs:
- configRef:
name: <name of config management spec>
namespace: anthos-management-center
placement:
clusterIDs:
- "<cluster-name>"
ServiceMeshFeatureSpec
The following is a sample ServiceMeshFeatureSpec configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides the spec definition for Anthos Service Mesh.
For the full Anthos Service Mesh documentation, visit https://cloud.google.com/anthos/service-mesh.
apiVersion: managementcenter.anthos.cloud.google.com/v1alpha1
kind: ServiceMeshFeatureSpec
metadata:
name: <name of service mesh spec>
namespace: anthos-management-center
spec:
version: 1.9.6-asm.1
ServiceMeshBinding
The following is a sample ServiceMeshBinding configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and installs Anthos Service Mesh on user clusters.
apiVersion: managementcenter.anthos.cloud.google.com/v1alpha1
kind: ServiceMeshBinding
metadata:
name: <name of service mesh binding>
namespace: anthos-management-center
spec:
configs:
- configRef:
name: <name of service mesh spec>
namespace: anthos-management-center
placement:
clusterIDs:
- "<cluster-name>"
Anthos Identity Service
The following is a sample ClientConfig configuration file for Anthos running in disconnected mode.
This file is applied to the admin cluster and provides the client identity configuration.
apiVersion: authentication.gke.io/v2alpha1
kind: ClientConfig
spec:
authentication:
- name: https://accounts.google.com
oidc:
clientID: <redacted>
clientSecret: <redacted>
cloudConsoleRedirectURI: http://cloud.console.not.enabled
extraParams: prompt=consent,access_type=offline
issuerURI: https://accounts.google.com
kubectlRedirectURI: http://localhost:9879/callback
scopes: email
userClaim: email
certificateAuthorityData: <DO NOT CHANGE>
name: <DO NOT CHANGE>
server: <DO NOT CHANGE>
DomainConfig
The following is a sample DomainConfig configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and configures, per domain name, the authentication method name and the certificate used to secure HTTPS connections to the web endpoints of Anthos running in disconnected mode. Once this is configured, the login redirector automatically redirects unauthenticated requests to the appropriate login page based on the domain name used in the request.
apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: DomainConfig
metadata:
# name is the domain name used to serve the Anthos web endpoints.
# This should be a valid fully qualified domain name.
# It should not include the protocol such as http or https.
# Example of incorrect domain names: http://anthos, anthos, anthos*.com
# Example of correct domain names: anthos.example.com
name: <name of the domain>
spec:
# authMethodName is the name of the authentication configured
# in the Anthos Identity Service's ClientConfig that should be used for
# this domain name.
authMethodName: <name in ClientConfig.Spec.Authentication.Name>
# If not specified, a self-signed certificate (untrusted) will be used.
# To configure the TLS certificate, copy the certificate in a secret in
# istio-system namespace and reference the name of the secret in certSecretName.
# The referred secret must be of the type "kubernetes.io/tls".
# The referred secret must be in the istio-system namespace.
certSecretName: <cert secret name>
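A Secret of type kubernetes.io/tls can be created with kubectl; the secret and file names below are placeholders:

# Create the TLS secret in istio-system, then set certSecretName to its name.
kubectl create secret tls anthos-domain-cert \
    --cert=domain.crt \
    --key=domain.key \
    --namespace istio-system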
Logmon and ConfigMaps for additional configuration
The following sample Logmon configuration file is used in Anthos running in disconnected mode to manage monitoring and logging in the cluster.
Notes about the sample:
- The name of the Logmon resource must be logmon-default.
- The namespace of the Logmon resource must be kube-system.
- The configuration in the ConfigMaps listed under fluentbitConfigmaps must follow the fluent-bit output plugin syntax.
- The configuration in the ConfigMaps listed under alertmanagerConfigurationConfigmaps must follow the alertmanager configuration syntax.
- The configuration in the ConfigMaps listed under prometheusRulesConfigmaps must follow the Prometheus recording rule and alerting rule syntax.
apiVersion: addons.gke.io/v1alpha1
kind: Logmon
metadata:
# Don't change the name
name: logmon-default
# Don't change the namespace
namespace: kube-system
spec:
system_logs:
outputs:
additionalOutput:
fluentbitConfigmaps:
# Same syntax as fluent-bit output plugins, see 'Sample fluentbitConfigmaps' below as example
- "<customized-system-logs-fluent-bit-output-config>"
# Schema: []v1.VolumeMount
volumeMounts:
- ...
- ...
# Schema: []v1.Volume
volumes:
- ...
- ...
default_loki:
deployment:
components:
loki:
storageSize: 20Gi # "<storage-size>"
retentionPolicy:
retentionTime: 720h # "<retention-time>"
storageClassName: anthos-system # "<storage-class-name>"
system_metrics:
outputs:
default_prometheus:
deployment:
components:
alertmanager:
alertmanagerConfigurationConfigmaps:
# Same syntax as alertmanager configuration, see 'Sample alertmanagerConfigurationConfigmaps' below as example
- "<customized-alertmanager-configmap-name>"
storageSize: 1Gi # "<storage-size>"
grafana:
storageSize: 1Gi # "<storage-size>"
prometheus:
prometheusRulesConfigmaps:
# Same syntax as prometheus recording rules and prometheus alerting rules, see 'Sample prometheusRulesConfigmaps' below as example
- "<customized-prometheus-rules-configmap-name>"
storageSize: 20Gi # "<storage-size>"
retentionPolicy:
retentionTime: 720h # "<retention-time>"
storageClassName: anthos-system # "<storage-class-name>"
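The referenced ConfigMaps must exist in kube-system before the Logmon resource points at them. A workflow sketch; the manifest file name is a placeholder, and the logmon resource name used with kubectl is an assumption based on the kind:

# Create the customized ConfigMaps first (see the samples below).
kubectl apply -f customized-fluent-bit-output-configmap.yaml
# Then update the Logmon resource to reference them.
kubectl edit logmon logmon-default -n kube-system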
Sample fluentbitConfigmaps
Notes about the sample:
- The namespace must be kube-system.
- The logmon label is required.
- The key in the ConfigMap must be output.conf.
apiVersion: v1
kind: ConfigMap
metadata:
name: <customized-system-logs-fluent-bit-output-config>
# Don't change the namespace
namespace: kube-system
labels:
# This label is required.
logmon: system_logs
data:
# The file name must be output.conf
output.conf: |
# Please fill customized fluent-bit output plugin configuration below
    [OUTPUT]
        Name stdout
        Match *
Sample alertmanagerConfigurationConfigmaps
Notes about the sample:
- The namespace must be kube-system.
- The logmon label is required.
- The key in the ConfigMap must be alertmanager.yml.
apiVersion: v1
kind: ConfigMap
metadata:
name: <customized-alertmanager-configmap-name>
# Don't change the namespace
namespace: kube-system
labels:
# This label is required.
logmon: system_metrics
data:
# The file name must be alertmanager.yml
alertmanager.yml: |
# Please fill customized alertmanager configuration below
global:
# Also possible to place this URL in a file.
# Ex: `slack_api_url_file: '/etc/alertmanager/slack_url'`
slack_api_url: '<slack_webhook_url>'
route:
receiver: 'slack-notifications'
group_by: [alertname, datacenter, app]
receivers:
- name: 'slack-notifications'
slack_configs:
- channel: '#alerts'
text: 'https://internal.myorg.net/wiki/alerts/'
Sample prometheusRulesConfigmaps
Notes about the sample:
- The namespace must be kube-system.
- The logmon label is required.
- If multiple ConfigMaps are listed under prometheusRulesConfigmaps in the Logmon resource, the keys must be unique across all of the ConfigMaps.
apiVersion: v1
kind: ConfigMap
metadata:
name: <customized-prometheus-rules-configmap-name>
# Don't change the namespace
namespace: kube-system
labels:
# This label is required.
logmon: system_metrics
data:
# The file name must be unique across all customized prometheus rule files.
<a-unique-file-name>: |
# Please fill customized recording rules below
groups:
- name: kubernetes-apiserver
rules:
- alert: KubeAPIDown
annotations:
message: KubeAPI has disappeared from Prometheus target discovery.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeapidown
expr: |
absent(up{job="kube-apiserver"} == 1)
for: 15m
labels:
severity: critical
# The file name must be unique across all customized prometheus rule files.
<a-unique-file-name>: |
# Please fill customized alerting rules below
groups:
- name: node.rules
rules:
- expr: |
topk by(cluster, namespace, pod) (1,
max by (cluster, node, namespace, pod) (
label_replace(kube_pod_info{job="kube-state-metrics",node!=""}, "pod", "$1", "pod", "(.*)")
))
record: 'node_namespace_pod:kube_pod_info:'