This page explains how you can enhance network security and traffic control within your cluster by configuring multi-network network policies that apply specifically to a designated Pod network. These multi-network network policies control traffic by using firewall rules at the Pod level, and they control traffic flow between Pods and Services.
FQDN network policy and CiliumClusterWide network policy are not supported: If you use an FQDN network policy and a CiliumClusterWide network policy on a Pod that's connected to multiple networks, the policies affect all the Pod's connections, including connections where the policies aren't applied.
If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Create network policy
To create a network policy that enforces rules on the same Pod network as your workload, reference the specific Pod network in the network policy definition.
To define the selected ingress traffic rules and target Pods based on labels or other selectors, create a standard network policy.
Save the following sample manifest as sample-ingress-network-policy1.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-network-policy
  namespace: default
  annotations:
    networking.gke.io/network: blue-pod-network # GKE-specific annotation for network selection
spec:
  podSelector:
    matchLabels:
      app: test-app-2 # Selects pods with the label "app: test-app-2"
  policyTypes:
  - Ingress # Specifies the policy applies only to incoming traffic
  ingress:
  - from: # Allow incoming traffic only from...
    - podSelector:
        matchLabels:
          app: test-app-1 # ...pods with the label "app: test-app-1"
Apply the sample-ingress-network-policy1.yaml manifest:
kubectl apply -f sample-ingress-network-policy1.yaml
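Before applying the manifest on a cluster, you can sanity-check it locally. The following sketch is illustrative only (the /tmp path and the echoed messages are arbitrary choices, not part of the official workflow): it writes a copy of the manifest and uses grep to confirm that the GKE network annotation and the ingress-only policy type are present.

```shell
# Illustrative pre-flight check: write a copy of the manifest, then verify
# the fields that scope the policy. Path and messages are arbitrary.
cat > /tmp/sample-ingress-network-policy1.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-network-policy
  namespace: default
  annotations:
    networking.gke.io/network: blue-pod-network
spec:
  podSelector:
    matchLabels:
      app: test-app-2
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: test-app-1
EOF

# Confirm the policy targets a specific Pod network via the GKE annotation.
grep -q 'networking.gke.io/network: blue-pod-network' \
  /tmp/sample-ingress-network-policy1.yaml && echo "network annotation present"

# Confirm the policy applies only to incoming traffic.
grep -q '^  - Ingress' /tmp/sample-ingress-network-policy1.yaml \
  && echo "ingress-only policy"
```

If either check prints nothing, the manifest is missing the corresponding field and would not behave as intended once applied.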
To define the selected egress traffic rules and target Pods based on labels or other selectors, create a standard network policy.
Save the following sample manifest as sample-egress-network-policy2.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-network-policy-2
  namespace: default
  annotations:
    networking.gke.io/network: blue-pod-network # GKE-specific annotation (optional)
spec:
  podSelector:
    matchLabels:
      app: test-app-2
  policyTypes:
  - Egress # Only applies to outgoing traffic
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: test-app-3
Apply the sample-egress-network-policy2.yaml manifest:
kubectl apply -f sample-egress-network-policy2.yaml
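Both policies must name the same Pod network as the workload they govern. As a quick illustrative check (the /tmp path is an arbitrary choice, and this sketch only inspects the manifest text; it does not talk to a cluster), you can extract the Pod network that the egress manifest targets and compare it with the network your workload references:

```shell
# Write a local copy of the egress manifest for inspection.
cat > /tmp/sample-egress-network-policy2.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-network-policy-2
  namespace: default
  annotations:
    networking.gke.io/network: blue-pod-network
spec:
  podSelector:
    matchLabels:
      app: test-app-2
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: test-app-3
EOF

# Extract the Pod network the policy is scoped to; it should match the
# network used by the workload. Prints: blue-pod-network
sed -n 's|.*networking.gke.io/network: *||p' /tmp/sample-egress-network-policy2.yaml
```

If the printed network differs from the one your workload is attached to, the policy will not apply to that workload's traffic on the intended network.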
Troubleshoot multi-network network policies
If you experience issues with network policies, whether they are applied to specific Pod networks or not, you can diagnose and troubleshoot the problem by running the following commands:
kubectl get networkpolicy: lists all network policy objects and information about them.
iptables-save: retrieves and lists all IP address table chains for a particular node. You must run this command on the node as root.
cilium bpf policy get <endpoint-id>: retrieves and lists allowed IP addresses from each endpoint's policy map.
cilium policy selectors: prints out the identities and the associated policies that have selected them.
cilium identity list: shows mappings from identity to IP address.
Last updated 2025-09-04 UTC.