This page explains how you can enhance network security and traffic control within your cluster by configuring multi-network network policies that apply specifically to a designated Pod network. These multi-network network policies control traffic by using firewall rules at the Pod level, and they control traffic flow between Pods and Services.
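For context, a multi-network network policy targets Pods that are attached to the named Pod network. The following is a minimal, hypothetical sketch of such a Pod, assuming the multi-network Pod annotations from the multi-network setup guide and an existing Pod network named blue-pod-network; the interface names, labels, and container image are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: sample-multinet-pod
  labels:
    app: test-app-2 # Matches the podSelector in the sample policy below
  annotations:
    networking.gke.io/default-interface: 'eth0' # Primary interface on the default network
    networking.gke.io/interfaces: |
      [
        {"interfaceName":"eth0","network":"default"},
        {"interfaceName":"eth1","network":"blue-pod-network"}
      ]
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9 # Placeholder container image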
FQDN network policy and CiliumClusterWide network policy are not supported:
If you use an FQDN network policy or a CiliumClusterWide network policy on a Pod that's connected to multiple networks, the policies affect all of the Pod's connections, including connections where the policies aren't applied.
Configure multi-network network policies
To use multi-network network policies, do the following:
Create a cluster with multi-networking enabled on GKE.
Create a node pool and a Pod network.
Reference the Pod network.
Create a network policy to be enforced that references the same Pod network used by your workload.
If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Create a network policy
To create a network policy that enforces rules on the same Pod network as your workload, reference the specific Pod network in the network policy definition.
To define the selected ingress traffic rules and target Pods based on labels or other selectors, create a standard network policy.
Save the following sample manifest as sample-ingress-network-policy1.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-network-policy
  namespace: default
  annotations:
    networking.gke.io/network: blue-pod-network # GKE-specific annotation for network selection
spec:
  podSelector:
    matchLabels:
      app: test-app-2 # Selects pods with the label "app: test-app-2"
  policyTypes:
  - Ingress # Specifies the policy applies only to incoming traffic
  ingress:
  - from: # Allow incoming traffic only from...
    - podSelector:
        matchLabels:
          app: test-app-1 # ...pods with the label "app: test-app-1"
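Apply the sample-ingress-network-policy1.yaml manifest:

kubectl apply -f sample-ingress-network-policy1.yaml

To define the selected egress traffic rules and target Pods based on labels or other selectors, create a standard network policy.

Save the following sample manifest as sample-egress-network-policy2.yaml:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-network-policy-2
  namespace: default
  annotations:
    networking.gke.io/network: blue-pod-network # GKE-specific annotation (optional)
spec:
  podSelector:
    matchLabels:
      app: test-app-2
  policyTypes:
  - Egress # Only applies to outgoing traffic
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: test-app-3

Apply the sample-egress-network-policy2.yaml manifest:

kubectl apply -f sample-egress-network-policy2.yaml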
Troubleshoot multi-network network policies
If you experience issues with network policies, whether or not they are applied to specific Pod networks, you can diagnose and troubleshoot the problem by running the following commands (one way to run them is shown in the sketch after this list):
kubectl get networkpolicy: lists all network policy objects and information about them.
iptables-save: retrieves and lists all iptables chains on a particular node. You must run this command on the node as root.
cilium bpf policy get <endpoint-id>: retrieves and lists the allowed IP addresses from each endpoint's policy map.
cilium policy selectors: prints the identities and the associated policies that have selected them.
cilium identity list: shows mappings from identities to IP addresses.
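As a hypothetical sketch of running these commands, assuming the GKE Dataplane V2 agent runs as the anetd DaemonSet in the kube-system namespace and that ANETD_POD_NAME and ENDPOINT_ID are placeholders you replace with your own values:

# List network policies in the current namespace.
kubectl get networkpolicy

# Find the Dataplane V2 agent Pod running on the relevant node.
kubectl get pods -n kube-system -o wide | grep anetd

# Run the cilium commands inside that agent Pod.
kubectl -n kube-system exec -it ANETD_POD_NAME -- cilium policy selectors
kubectl -n kube-system exec -it ANETD_POD_NAME -- cilium identity list
kubectl -n kube-system exec -it ANETD_POD_NAME -- cilium bpf policy get ENDPOINT_ID

# Dump iptables chains as root on the node itself (for example, over SSH).
sudo iptables-save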
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-04 UTC."],[],[],null,["# Set up multi-network network policies\n\n[Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)\n\n*** ** * ** ***\n\n|\n| **Preview**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nThis page explains how you can enhance network security and traffic control\nwithin your cluster by configuring multi-network network policies that apply\nspecifically to a designated Pod network. These multi-network network policies\ncontrol traffic by using firewall rules at the Pod level, and they control\ntraffic flow between Pods and Services.\n\nTo understand how multi-network network policies work, see [how Network\nPolicies\nwork](/kubernetes-engine/docs/concepts/about-multi-networking-policies#how-it-works)\nwith Pod networks.\n\nRequirements\n------------\n\nTo use multi-network network policies, consider the following requirements:\n\n- Google Cloud CLI version 459 and later.\n- You must have a GKE cluster running one of the following versions:\n - 1.28.5-gke.1293000 or later\n - 1.29.0-gke.1484000 or later\n- Your cluster must use [GKE Dataplane V2](/kubernetes-engine/docs/concepts/dataplane-v2).\n\nLimitations\n-----------\n\n**FQDN network policy and CiliumClusterWide network policy are not supported**:\nIf you use an FQDN network policy and a CiliumClusterWide network policy on a\nPod that's connected to multiple networks, the policies affect all the Pod's\nconnections, including connections where the policies aren't applied.\n\nConfigure multi-network network policies\n----------------------------------------\n\nTo use multi-network network policies, do the following:\n\n1. Create a cluster with [multi-network enabled GKE](/kubernetes-engine/docs/how-to/setup-multinetwork-support-for-pods#create-a-gke-cluster).\n2. Create a [node pool](/kubernetes-engine/docs/how-to/setup-multinetwork-support-for-pods#create-gke-node-pool) and a [Pod network](/kubernetes-engine/docs/how-to/setup-multinetwork-support-for-pods#create-pod-network).\n3. [Reference the Pod network](/kubernetes-engine/docs/how-to/setup-multinetwork-support-for-pods#reference-the-prepared-network).\n4. [Create a network policy](#create-network-policy) to be enforced that references the same Pod network utilized by the workload.\n\nBefore you begin\n----------------\n\nBefore you start, make sure that you have performed the following tasks:\n\n- Enable the Google Kubernetes Engine API.\n[Enable Google Kubernetes Engine API](https://console.cloud.google.com/flows/enableapi?apiid=container.googleapis.com)\n- If you want to use the Google Cloud CLI for this task, [install](/sdk/docs/install) and then [initialize](/sdk/docs/initializing) the gcloud CLI. 
If you previously installed the gcloud CLI, get the latest version by running `gcloud components update`. **Note:** For existing gcloud CLI installations, make sure to set the `compute/region` [property](/sdk/docs/properties#setting_properties). If you use primarily zonal clusters, set the `compute/zone` instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: `One of [--zone, --region] must be supplied: Please specify location`. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.\n\nCreate network policy\n---------------------\n\n1. To create a network policy that enforces rules on the same Pod network as\n your workload, reference the specific Pod network in the network policy\n definition.\n\n2. To define the selected ingress traffic rules and target Pods based on labels\n or other selectors, create a standard Network Policy.\n\n Save the following sample manifest as `sample-ingress-network-policy1.yaml`: \n\n apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: sample-network-policy\n namespace: default\n annotations:\n networking.gke.io/network: blue-pod-network # GKE-specific annotation for network selection\n spec:\n podSelector:\n matchLabels:\n app: test-app-2 # Selects pods with the label \"app: test-app-2\"\n policyTypes:\n - Ingress # Specifies the policy applies only to incoming traffic\n ingress:\n - from: # Allow incoming traffic only from...\n - podSelector:\n matchLabels:\n app: test-app-1 # ...pods with the label \"app: test-app-1\"\n\n3. Apply the `sample-ingress-network-policy1.yaml` manifest:\n\n kubectl apply -f sample-ingress-network-policy1.yaml\n\n4. To define the selected egress traffic rules and target Pods based on labels\n or other selectors, create a standard network policy.\n\n Save the following sample manifest as `sample-egress-network-policy2.yaml`: \n\n apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: sample-network-policy-2\n namespace: default\n annotations:\n networking.gke.io/network: blue-pod-network # GKE-specific annotation (optional)\n spec:\n podSelector:\n matchLabels:\n app: test-app-2\n policyTypes:\n - Egress # Only applies to outgoing traffic\n egress:\n - to:\n - podSelector:\n matchLabels:\n app: test-app-3\n\n5. Apply the `sample-egress-network-policy2.yaml` manifest:\n\n kubectl apply -f sample-egress-network-policy2.yaml\n\nTroubleshoot multi-network network policies\n-------------------------------------------\n\nIf you experience issues with network policies, whether they are applied to\nspecific Pod networks or not, you can diagnose and troubleshoot the problem by\nrunning the following commands:\n\n1. `kubectl get networkpolicy`: lists all network policy objects and information about them.\n2. `iptables-save`: retrieves and lists all IP address tables chains for a particular node. You must run this command on the node as root.\n3. `cilium bpf policy get \u003cendpoint-id\u003e`: retrieves and lists allowed IP addresses from each endpoint's policy map.\n4. `cilium policy selectors`: prints out the [identities](https://docs.cilium.io/en/latest/gettingstarted/terminology/#identity) and the associated policies that have selected them.\n5. 
`cilium identity list`: shows mappings from identity to IP address.\n\nWhat's next\n-----------\n\n- Read [About multi-network network policies](/kubernetes-engine/docs/concepts/about-multi-networking-policies)\n- Read [Set up multi-network support for Pods](/kubernetes-engine/docs/how-to/setup-persistent-ip-addresses-on-gke-pods)"]]