This document helps you plan, design, and implement the migration of your projects from OpenShift to GKE Enterprise. Moving your workloads from one environment to another can be a challenging task, so plan and execute your migration carefully.
This document is part of a multi-part series about migrating to Google Cloud. If you're interested in an overview of the series, see Migration to Google Cloud: Choosing your migration path.
This document is part of a series that discusses migrating containers to Google Cloud:
- Migrating containers to Google Cloud: Migrating Kubernetes to Google Kubernetes Engine (GKE)
- Migrating containers to Google Cloud: Migrating from OpenShift to GKE Enterprise
- Migrating containers to Google Cloud: Migrate OpenShift projects to GKE Enterprise (this document)
- Migrating from OpenShift to GKE Enterprise: Migrate OpenShift SCCs to Policy Controller Constraints
This document is useful if you're planning to migrate OpenShift projects to GKE Enterprise. This document is also useful if you're evaluating the opportunity to migrate and want to explore what it might look like.
This document relies on concepts that are discussed in Migration to Google Cloud: Getting started, in Migrating containers to Google Cloud: Migrating Kubernetes to GKE, in Migrating containers to Google Cloud: Migrating from OpenShift to GKE Enterprise, and in Best practices for GKE networking. This document links to the preceding documents where appropriate.
The guidance in this document assumes that you want to execute a lift and shift migration of your workloads. In a lift and shift migration, you apply only the minimum changes that you need for your workloads to operate in the target GKE Enterprise environment.
To migrate OpenShift resources to GKE Enterprise, you map and convert them to their Kubernetes equivalents. This document describes the migration to GKE Enterprise of the following OpenShift project configuration resources, which you need in order to deploy and operate your workloads:
- OpenShift projects
- Resource quotas for each OpenShift project and resource quotas across multiple OpenShift projects
- Roles and ClusterRoles
- RoleBindings and ClusterRoleBindings
- OpenShift network namespaces configuration and NetworkPolicies
To migrate OpenShift project configuration and related resources to GKE Enterprise, we recommend that you do the following:
- Export OpenShift project configuration resource descriptors.
- Map the OpenShift project configuration resources to Kubernetes resources.
- Create Kubernetes resources that map to OpenShift project configuration resources.
- Manage the Kubernetes resources using Config Sync.
This document provides examples of how you can complete the migration steps.
Export OpenShift project configuration resource descriptors
To export the OpenShift project configuration resources, we recommend that you do the following:
- Export OpenShift project descriptors.
- Export cluster-scoped resource descriptors.
- Export project-scoped resource descriptors.
The descriptors that you export from an OpenShift cluster include fields that describe the configuration and the status of each resource, such as the spec field and the status field. The descriptors also include fields that hold dynamically generated resource status information, such as the metadata.managedFields field. Kubernetes and OpenShift manage the fields that hold resource status information, and their values, for you. To simplify the assessment of OpenShift resource descriptors, we recommend that you do the following for each resource descriptor:
1. Record the fields that hold dynamically generated resource status information, along with their values, such as the following:
   - Any field nested under metadata.annotations that starts with the openshift.io prefix
   - metadata.creationTimestamp
   - metadata.generation
   - metadata.managedFields
   - metadata.resourceVersion
   - metadata.selfLink
   - metadata.uid
   - Any field nested under status
2. Remove the fields that hold dynamically generated resource status information from the resource descriptor.
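You can script this cleanup instead of editing each descriptor by hand. The following is a minimal sketch that uses the yq v4 CLI; RESOURCE_FILE.yaml is a placeholder for any descriptor that you exported, and you might need to guard the last expression for descriptors that have no annotations:

# Remove the dynamically generated resource status fields in place.
yq -i '
  del(.metadata.creationTimestamp) |
  del(.metadata.generation) |
  del(.metadata.managedFields) |
  del(.metadata.resourceVersion) |
  del(.metadata.selfLink) |
  del(.metadata.uid) |
  del(.status) |
  .metadata.annotations |= with_entries(select(.key | test("^openshift.io") | not))
' RESOURCE_FILE.yaml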
To export OpenShift project configuration resource descriptors, you use the OpenShift command-line interface (oc CLI). To export resource descriptors by using the oc CLI, you need to authenticate with the cluster-admin role. For a list of all the OpenShift resources that the oc CLI supports, run the oc api-resources command.
Export OpenShift project descriptors
This section describes how to export project descriptors. We recommend that you exclude OpenShift projects that run system components, such as the istio-system component, and OpenShift projects whose names start with openshift-, kube-, or knative-. OpenShift manages these projects for you, and they're out of the scope of this migration because you don't use them to deploy your workloads. To export OpenShift project descriptors, do the following for each OpenShift cluster:
In a terminal that has access to the OpenShift cluster, get the list of OpenShift projects by using the oc get command:

oc get projects

The output is similar to the following:

NAME              DISPLAY NAME   STATUS
example-project                  Active
...

The output displays a list of the OpenShift projects that are currently set up in your OpenShift cluster.

For each OpenShift project in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing by using the tee command. For example, export the descriptor of an example-project OpenShift project:

oc get project example-project -o yaml | tee project-example-project.yaml

The output is similar to the following:

apiVersion: project.openshift.io/v1
kind: Project
metadata:
  annotations:
  name: example-project
spec:
  finalizers:
  - kubernetes

The output displays the descriptor of the example-project OpenShift project in YAML format. The output is saved to the project-example-project.yaml file.
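If your cluster has many OpenShift projects, you can script this export instead of running the command for each project by hand. The following is a minimal sketch that skips the OpenShift-managed projects that are out of scope; adjust the exclusion patterns to your environment:

for project in $(oc get projects -o custom-columns=NAME:.metadata.name --no-headers); do
  case "$project" in
    # Skip OpenShift-managed projects that are out of the scope of the migration.
    openshift-*|kube-*|knative-*|istio-system) continue ;;
  esac
  oc get project "$project" -o yaml | tee "project-${project}.yaml"
done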
Export cluster-scoped resource descriptors
This section describes how to export the descriptors for resources that have a cluster scope, not including security context constraints. For information about migrating security policies, see Migrating from OpenShift to GKE Enterprise: Migrate OpenShift SCCs to Policy Controller Constraints. To export other resource descriptors, do the following for each OpenShift cluster:
In your terminal, get the list of ClusterResourceQuotas:

oc get clusterresourcequotas

The output is similar to the following:

NAME       AGE
for-name   6m15s
for-user   4s
...

The output displays a list of ClusterResourceQuotas that are currently set up in your OpenShift cluster.

For each ClusterResourceQuota in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the for-name ClusterResourceQuota:

oc get clusterresourcequota for-name -o yaml | tee clusterresourcequota-for-name.yaml

The output is similar to the following:

apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: for-name
spec:
  quota:
    hard:
      pods: "10"
      secrets: "20"
  selector:
    annotations: null
    labels:
      matchLabels:
        name: frontend

The output displays the descriptor of the for-name ClusterResourceQuota in YAML format. The output is saved to the clusterresourcequota-for-name.yaml file.

Get the list of ClusterRoles:
oc get clusterroles
The output is similar to the following:
NAME                 CREATED AT
admin                2021-02-02T06:17:02Z
aggregate-olm-edit   2021-02-02T06:17:59Z
aggregate-olm-view   2021-02-02T06:18:01Z
alertmanager-main    2021-02-02T06:48:26Z
basic-user           2021-02-02T06:26:42Z
...
The output displays a list of ClusterRoles that are currently set up in your OpenShift cluster. The list of ClusterRoles includes OpenShift default ClusterRoles, and ClusterRoles that refer to OpenShift system components. We recommend that you assess all the ClusterRoles in the list to evaluate which roles you need to migrate, and which roles aren't applicable in the target GKE Enterprise environment.
For each ClusterRole in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the admin ClusterRole:

oc get clusterrole admin -o yaml | tee clusterrole-admin.yaml

The output is similar to the following:

aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.authorization.k8s.io/aggregate-to-admin: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: admin
rules:
- apiGroups:
  - operators.coreos.com
  resources:
  - subscriptions
  verbs:
  - create
  - update
  - patch
  - delete
...

The output displays the descriptor of the admin ClusterRole in YAML format. The output is saved to the clusterrole-admin.yaml file.

Get the list of ClusterRoleBindings:
oc get clusterrolebindings
The output is similar to the following:
NAME                                    ROLE                                       AGE
alertmanager-main                       ClusterRole/alertmanager-main              21d
basic-users                             ClusterRole/basic-user                     21d
cloud-credential-operator-rolebinding   ClusterRole/cloud-credential-operator-role 21d
cluster-admin                           ClusterRole/cluster-admin                  21d
cluster-admins                          ClusterRole/cluster-admin                  21d
cluster-autoscaler                      ClusterRole/cluster-autoscaler             21d
cluster-autoscaler-operator             ClusterRole/cluster-autoscaler-operator    21d
cluster-monitoring-operator             ClusterRole/cluster-monitoring-operator    21d
...
The output displays a list of ClusterRoleBindings that are currently set up in your OpenShift cluster. The list of ClusterRoleBindings includes ClusterRoleBindings that refer to OpenShift system components. We recommend that you assess all the ClusterRoleBindings in the list to evaluate which bindings you need to migrate, and which bindings aren't applicable in the target GKE Enterprise environment.
For each ClusterRoleBinding in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the cluster-admin ClusterRoleBinding:

oc get clusterrolebinding cluster-admin -o yaml | tee clusterrolebinding-cluster-admin.yaml

The output is similar to the following:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters

The output displays the descriptor of the cluster-admin ClusterRoleBinding in YAML format. The output is saved to the clusterrolebinding-cluster-admin.yaml file.
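You can script these exports in the same way. The following sketch exports every ClusterResourceQuota, ClusterRole, and ClusterRoleBinding to its own file; it doesn't filter out the OpenShift system resources, so you still need to assess which of the exported files to migrate:

for kind in clusterresourcequota clusterrole clusterrolebinding; do
  for name in $(oc get "$kind" -o custom-columns=NAME:.metadata.name --no-headers); do
    # Name each file after the resource kind and the resource name.
    oc get "$kind" "$name" -o yaml | tee "${kind}-${name}.yaml"
  done
done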
Export customized NetNamespaces
This section describes how to assess the configuration of multi-tenant isolation. This section applies if you created customized NetNamespaces for any OpenShift project in your cluster to isolate or join network namespaces. If you didn't create customized NetNamespaces, skip to Export project-scoped resource descriptors.
OpenShift automatically creates and manages NetNamespaces for managed OpenShift projects. NetNamespaces for OpenShift-managed projects are out of the scope of this migration.
To export customized NetNamespaces, do the following:
Get the list of NetNamespaces:
oc get netnamespaces
The output is similar to the following:

NAME                                     NETID      EGRESS IPS
default                                  0
kube-node-lease                          13240579
kube-public                              15463168
kube-system                              16247265
openshift                                9631477
openshift-apiserver                      12186643
openshift-apiserver-operator             6097417
openshift-authentication                 2862939
openshift-authentication-operator        723750
openshift-cloud-credential-operator      11544971
openshift-cluster-csi-drivers            7650297
openshift-cluster-machine-approver       7836750
openshift-cluster-node-tuning-operator   7531826
...
The output displays a list of NetNamespaces that are currently set up in your OpenShift cluster.
For each NetNamespace in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the example-project NetNamespace:

oc get netnamespace example-project -o yaml | tee netnamespace-example-project.yaml

For NetNamespaces that don't have the same netid value, the output is similar to the following:

apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: example-project
netid: 1234
netname: example-project

The output displays the descriptor of the example-project NetNamespace in YAML format. The output is saved to the netnamespace-example-project.yaml file.

For NetNamespaces that have the same netid value, the output is similar to the following:

apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: example-project
netid: 1234
netname: example-project
---
apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: example-project-2
netid: 1234
netname: example-project-2
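To find which OpenShift projects are joined in the same overlay network, you can group NetNamespaces by their netid values. The following is a minimal sketch that assumes the netid and netname fields are at the top level of the NetNamespace descriptor, as shown in the preceding output:

# Print each netid that more than one NetNamespace shares.
oc get netnamespaces -o custom-columns=NETID:.netid,NAME:.netname --no-headers \
  | awk '{ count[$1]++; names[$1] = names[$1] " " $2 }
         END { for (id in count) if (count[id] > 1) print "netid " id ":" names[id] }'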
Export project-scoped resource descriptors
To export the descriptors for resources that have a project scope, do the following for each OpenShift project:

In your terminal, select the OpenShift project that you want to assess. For example, select the example-project OpenShift project:

oc project example-project
Get the list of ResourceQuotas:
oc get resourcequotas
The output is similar to the following:
NAME        AGE   REQUEST                        LIMIT
gpu-quota   6s    requests.nvidia.com/gpu: 1/1
...

The output displays a list of ResourceQuotas that are currently set up in your OpenShift cluster for the selected OpenShift project.

For each ResourceQuota in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the gpu-quota ResourceQuota:

oc get resourcequota gpu-quota -o yaml | tee resourcequota-gpu-quota.yaml

The output is similar to the following:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: example-project
spec:
  hard:
    requests.nvidia.com/gpu: "1"

The output displays the descriptor of the gpu-quota ResourceQuota in YAML format. The output is saved to the resourcequota-gpu-quota.yaml file.

Get the list of Roles:
oc get roles
The output is similar to the following:
NAME      CREATED AT
example   2021-02-02T06:48:27Z
...
The output displays a list of Roles that are currently set up in your OpenShift cluster for the selected OpenShift project. The list of Roles includes Roles that refer to OpenShift system components. We recommend that you assess all the Roles in the list to evaluate which roles you need to migrate, and which roles aren't applicable in the target GKE Enterprise environment.
For each Role in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the example Role:

oc get role example -o yaml | tee role-example.yaml

The output is similar to the following:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example
  namespace: example-project
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch

The output displays the descriptor of the example Role in YAML format. The output is saved to the role-example.yaml file.

Get the list of RoleBindings:
oc get rolebindings
The output is similar to the following:
NAME                               ROLE                                           AGE
machine-config-controller-events   ClusterRole/machine-config-controller-events   21d
machine-config-daemon-events       ClusterRole/machine-config-daemon-events       21d
example                            Role/example                                   21d
system:deployers                   ClusterRole/system:deployer                    21d
system:image-builders              ClusterRole/system:image-builder               21d
system:image-pullers               ClusterRole/system:image-puller                21d
...
The output displays a list of RoleBindings that are set up in your OpenShift cluster for the selected OpenShift project. The list of RoleBindings includes RoleBindings that refer to OpenShift system components. We recommend that you assess all the RoleBindings in the list to evaluate which bindings you need to migrate, and which bindings aren't applicable in the target GKE Enterprise environment.
For each RoleBinding in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the example RoleBinding:

oc get rolebinding example -o yaml | tee rolebinding-example.yaml

The output is similar to the following:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example
  namespace: example-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: example
subjects:
- kind: ServiceAccount
  name: example
  namespace: example-ns

The output displays the descriptor of the example RoleBinding in YAML format. The output is saved to the rolebinding-example.yaml file.

Get the list of EgressNetworkPolicies:
oc get egressnetworkpolicies
The output is similar to the following:
NAME      AGE
default   2m2s
...
The output displays a list of EgressNetworkPolicies that are currently set up in your OpenShift cluster for the selected OpenShift project.
For each EgressNetworkPolicy in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the default EgressNetworkPolicy:

oc get egressnetworkpolicy default -o yaml | tee egressnetworkpolicy-default.yaml

The output is similar to the following:

apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: example-project
spec:
  egress:
  - to:
      cidrSelector: 1.2.3.0/24
    type: Allow
  - to:
      dnsName: www.example.com
    type: Allow
  - to:
      cidrSelector: 0.0.0.0/0
    type: Deny

The output displays the descriptor of the default EgressNetworkPolicy in YAML format. The output is saved to the egressnetworkpolicy-default.yaml file.

Get the list of NetworkPolicies:
oc get networkpolicies
The output is similar to the following:
NAME          POD-SELECTOR   AGE
test-policy   app=mongodb    3s
...
The output displays a list of NetworkPolicies that are currently set up in your OpenShift cluster for the selected OpenShift project.
For each NetworkPolicy in the list, export its descriptor in YAML format, display the output, and save it to a file for later processing. For example, export the descriptor of the test-policy NetworkPolicy:

oc get networkpolicy test-policy -o yaml | tee networkpolicy-test-policy.yaml

The output is similar to the following:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-policy
  namespace: example-project
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: app
    ports:
    - port: 27017
      protocol: TCP
  podSelector:
    matchLabels:
      app: mongodb
  policyTypes:
  - Ingress

The output displays the descriptor of the test-policy NetworkPolicy in YAML format. The output is saved to the networkpolicy-test-policy.yaml file.
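As with the cluster-scoped resources, you can script the project-scoped exports. The following sketch exports every ResourceQuota, Role, RoleBinding, EgressNetworkPolicy, and NetworkPolicy in the example-project OpenShift project, one file per resource:

PROJECT=example-project
for kind in resourcequota role rolebinding egressnetworkpolicy networkpolicy; do
  for name in $(oc get "$kind" -n "$PROJECT" -o custom-columns=NAME:.metadata.name --no-headers); do
    # Prefix each file with the project name so that files from different projects don't collide.
    oc get "$kind" "$name" -n "$PROJECT" -o yaml | tee "${PROJECT}-${kind}-${name}.yaml"
  done
done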
Map OpenShift project configuration resources to Kubernetes resources
After you complete the inventory of the OpenShift project configuration resources, you assess those resources as follows:
- Evaluate which resources in the inventory are Kubernetes resources and which are OpenShift resources.
- Map OpenShift resources to their Kubernetes, GKE, and GKE Enterprise equivalents.
The following list helps you to evaluate which resources that you provisioned in your OpenShift clusters are Kubernetes resources and which resources are available in OpenShift only:
- OpenShift projects are Kubernetes Namespaces with additional annotations.
- Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings are Kubernetes resources.
- ResourceQuotas are Kubernetes resources.
- NetworkPolicies are Kubernetes resources.
- ClusterResourceQuotas aren't Kubernetes resources; they are available in OpenShift only.
- NetNamespaces and EgressNetworkPolicies aren't Kubernetes resources; they are available in OpenShift only.
The following table provides a summary of how to map OpenShift project configuration resources to the resources that you use in GKE Enterprise.

| OpenShift | GKE Enterprise |
| --- | --- |
| Projects | Convert to Kubernetes Namespaces with additional annotations |
| Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings | Kubernetes RBAC resources |
| ResourceQuotas | Kubernetes ResourceQuotas |
| ClusterResourceQuotas | Convert to ResourceQuotas or use hierarchical resource quotas |
| NetworkPolicies | Kubernetes NetworkPolicies |
| NetNamespaces, EgressNetworkPolicies | Convert to NetworkPolicies |
After you evaluate the resources from your OpenShift environment, map the OpenShift resources to resources that you can provision and configure in GKE and GKE Enterprise. You don't need to map the Kubernetes resources that you're using in your OpenShift clusters because GKE Enterprise supports them directly. As described in Summary of OpenShift to GKE Enterprise capability mapping, we recommend that you map the following:
- OpenShift projects to Kubernetes Namespaces.
- ClusterResourceQuotas to ResourceQuotas.
- NetNamespaces and EgressNetworkPolicies to NetworkPolicies.
Create Kubernetes resources that map to OpenShift project configuration resources
After you complete the mapping, you create the Kubernetes resources that you mapped to your OpenShift resources. We recommend that you create the following:
- One Kubernetes Namespace for each OpenShift project.
- One ResourceQuota for each Kubernetes Namespace that your ClusterResourceQuotas are limiting.
- NetworkPolicies to match your NetNamespaces and EgressNetworkPolicies.
Create Kubernetes Namespaces
OpenShift projects are Kubernetes Namespaces with additional annotations. The OpenShift project API closely matches the Kubernetes Namespace API. To migrate your OpenShift projects, we recommend that you create a Kubernetes Namespace for each OpenShift project. The APIs are compatible, so you can create a Kubernetes Namespace from an OpenShift project.
To create a Kubernetes Namespace from an OpenShift project, we recommend that you update the project descriptor to the corresponding Kubernetes Namespace API for each OpenShift project. To do so, you change the apiVersion field value in the OpenShift project descriptor from the OpenShift Project object API version to the corresponding Kubernetes Namespace object API version, and you change the kind field value accordingly.
For example, the OpenShift project that you assessed in the previous section
is similar to the following:
apiVersion: project.openshift.io/v1
kind: Project
metadata:
annotations:
name: default
spec:
finalizers:
- kubernetes
To migrate the project, change the apiVersion field value from project.openshift.io/v1 to v1, and change the kind field value from Project to Namespace:
apiVersion: v1
kind: Namespace
metadata:
annotations:
name: default
spec:
finalizers:
- kubernetes
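You can script this conversion too. The following is a minimal sketch that rewrites the exported descriptor in place with the yq v4 CLI and then creates the Namespace; it assumes that your kubectl context points at the target GKE cluster:

# Rewrite the OpenShift Project descriptor as a Kubernetes Namespace.
yq -i '.apiVersion = "v1" | .kind = "Namespace"' project-example-project.yaml

# Create the Namespace in the target cluster.
kubectl apply -f project-example-project.yaml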
Create Kubernetes ResourceQuotas
OpenShift ClusterResourceQuotas let you share quotas across multiple OpenShift projects. When you create a ClusterResourceQuota, you define the quota and you define the selector to match OpenShift projects for which you want to enforce the quota. In this section, you migrate your OpenShift ClusterResourceQuotas to Kubernetes ResourceQuotas in the Namespaces that you created earlier.
To migrate your ClusterResourceQuotas, we recommend that you do the following, for each ClusterResourceQuota:
Map the ClusterResourceQuota to OpenShift projects by assessing the spec.quota field and the spec.selector field of the ClusterResourceQuota. For example, the for-name ClusterResourceQuota that you exported in the previous section looks like the following:

apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: for-name
spec:
  quota:
    hard:
      pods: "10"
      secrets: "20"
  selector:
    annotations: null
    labels:
      matchLabels:
        name: frontend
The for-name ClusterResourceQuota enforces Pod and Secret quota limits. The spec.selector field enforces those limits on the frontend OpenShift project.

In Create Kubernetes Namespaces earlier, you created Kubernetes Namespaces that correspond to your OpenShift projects. Use that information to map the ClusterResourceQuota to the new Kubernetes Namespaces. For example, the for-name ClusterResourceQuota applies to the frontend OpenShift project. The project corresponds to the frontend Kubernetes Namespace, so you map the for-name ClusterResourceQuota to the frontend Kubernetes Namespace.

For each quota definition in the quota field of the ClusterResourceQuota, divide the quota amount among the Kubernetes Namespaces that you mapped the ClusterResourceQuota to, according to the criteria of interest. For example, you can divide the quota amounts that the for-name ClusterResourceQuota sets equally among the Kubernetes Namespaces that you mapped it to.

For each Kubernetes Namespace that you mapped to the ClusterResourceQuota, create a Kubernetes ResourceQuota that enforces the quota on that Namespace. Set the quota amounts according to the information that you gathered in the previous step. For example, create a ResourceQuota for the frontend Kubernetes Namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-secrets
  namespace: frontend
spec:
  hard:
    pods: "10"
    secrets: "20"
As another example, suppose that you map the for-name ClusterResourceQuota to two distinct Kubernetes Namespaces, example-1 and example-2. To complete the mapping, divide the resources equally between them by creating a ResourceQuota for each Kubernetes Namespace. Allocate the first half of the ResourceQuota:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-secrets
  namespace: example-1
spec:
  hard:
    pods: "5"
    secrets: "10"

After you allocate the first half of the ResourceQuota, allocate the second half:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-secrets
  namespace: example-2
spec:
  hard:
    pods: "5"
    secrets: "10"
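After you create the ResourceQuota descriptors, apply them to the target cluster and verify the enforced limits. The following is a sketch that assumes you saved the descriptors as resourcequota-example-1.yaml and resourcequota-example-2.yaml (hypothetical file names):

kubectl apply -f resourcequota-example-1.yaml
kubectl apply -f resourcequota-example-2.yaml

# Show the hard limits and the current usage for each Namespace.
kubectl describe resourcequota pods-secrets --namespace example-1
kubectl describe resourcequota pods-secrets --namespace example-2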
This approach lets you enforce limits on each Kubernetes Namespace that you create ResourceQuotas in, rather than setting a single limit for multiple Kubernetes Namespaces.
Using ResourceQuotas to enforce quotas on your Kubernetes Namespaces isn't the same as using a ClusterResourceQuota to enforce one quota on all Kubernetes Namespaces. Dividing a cluster-scoped quota among different Kubernetes Namespaces might be suboptimal: the division might over-provision quotas for some Kubernetes Namespaces, and under-provision quotas for other Kubernetes Namespaces.
We recommend that you optimize the quota allocation by dynamically tuning the configuration of the ResourceQuotas in your Kubernetes Namespaces, while respecting the total amount of quota that the corresponding ClusterResourceQuota established. For example, you can dynamically increase or decrease the quota amounts that the pods-secrets ResourceQuotas enforce to avoid over-provisioning or under-provisioning quota amounts for the example-1 and example-2 Kubernetes Namespaces. The total quota amounts of the pods-secrets ResourceQuotas shouldn't exceed the quota amounts in the corresponding ClusterResourceQuota.
When you configure your ResourceQuotas, consider GKE quotas and limits and Google Distributed Cloud quotas and limits. These quotas and limits might impose lower limits than your ResourceQuotas. For example, GKE limits the maximum number of Pods per node, regardless of how you configured your ResourceQuotas.
Create NetworkPolicies
OpenShift NetNamespaces let you configure network isolation between OpenShift projects. OpenShift EgressNetworkPolicies let you regulate outbound traffic leaving your OpenShift clusters. This section relies on network traffic restriction concepts that are discussed in Best practices for GKE networking.
To migrate your NetNamespaces and EgressNetworkPolicies, do the following:
Assess your NetNamespaces and EgressNetworkPolicies to understand how they regulate network traffic between OpenShift projects and outbound traffic that's leaving your OpenShift clusters. For example, assess the NetNamespaces and the EgressNetworkPolicy that you exported in the previous section:

apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: example-project
netid: 1234
netname: example-project
---
apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: example-project-2
netid: 1234
netname: example-project-2
---
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: example-project
spec:
  egress:
  - to:
      cidrSelector: 1.2.3.0/24
    type: Allow
  - to:
      cidrSelector: 0.0.0.0/0
    type: Deny

The example-project and example-project-2 NetNamespaces define an overlay network with the same netid value of 1234. Therefore, Pods in the example-project OpenShift project can communicate with Pods in the example-project-2 OpenShift project, and the other way around.

The default EgressNetworkPolicy defines the following outbound network traffic rules:

- Allow outbound traffic to the 1.2.3.0/24 subnet.
- Deny outbound traffic that doesn't match other rules.
Create NetworkPolicies and OPA policies to match your network traffic restriction requirements. For example, the default-np and default-np-2 NetworkPolicies in the following example implement these policies:

- The policies that the default EgressNetworkPolicy enforces.
- The policies that the example-project and example-project-2 NetNamespaces enforce, on Namespaces that have the netid label set to example-project and example-project-2.

The policies are similar to the following:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-np
  namespace: example-project
spec:
  policyTypes:
  - Ingress
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          netid: example-project-2
    - podSelector: {}
    - ipBlock:
        cidr: 1.2.3.0/24
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          netname: example-project-2
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-np-2
  namespace: example-project-2
spec:
  policyTypes:
  - Ingress
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          netid: example-project-2
    - podSelector: {}
    - ipBlock:
        cidr: 1.2.3.0/24
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          netname: example-project
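The namespaceSelectors in the preceding NetworkPolicies match only Namespaces that carry the corresponding labels, so label the target Namespaces before you apply the policies. The following is a minimal sketch; the label values mirror the selectors in the example, and networkpolicies.yaml is a hypothetical file that contains both policies:

# Label the Namespaces so that the namespaceSelectors can match them.
kubectl label namespace example-project netid=example-project netname=example-project
kubectl label namespace example-project-2 netid=example-project-2 netname=example-project-2

# Apply the NetworkPolicies to the target cluster.
kubectl apply -f networkpolicies.yaml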
Manage Kubernetes resources using Config Sync
To manage the Kubernetes resources and the configuration of your GKE clusters, we recommend that you use Config Sync.
To learn how to enable Config Sync on your GKE clusters, see Install Config Sync.
After you provision and configure Config Sync in your GKE Enterprise environment, you use it to create and automatically apply configuration to your GKE clusters.
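For example, after you commit the Namespace, ResourceQuota, RBAC, and NetworkPolicy manifests that you created in the preceding sections to a Git repository, you can point Config Sync at that repository with a RootSync resource. The following is a minimal sketch; the repository URL, branch, and directory are placeholders for your own values:

apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    # Placeholder values: point these fields at the repository that
    # holds the manifests that you created in the preceding sections.
    repo: https://example.com/your-org/cluster-config.git
    branch: main
    dir: clusters/target-cluster
    auth: none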
What's next
- Read about how to get started with your migration to Google Cloud.
- Learn best practices for GKE networking.
- Understand how to harden your cluster's security.
- Read the GKE security overview.
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.