This page is an overview of Packet Mirroring.
Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers. The capture can be configured for both egress and ingress traffic, only ingress traffic, or only egress traffic.
The mirroring happens on the virtual machine (VM) instances, not on the network. Consequently, Packet Mirroring consumes additional bandwidth on the VMs.
Packet Mirroring is useful when you need to monitor and analyze your security status. It exports all traffic, not only the traffic between sampling periods. For example, you can use security software that analyzes mirrored traffic to detect all threats or anomalies. Additionally, you can inspect the full traffic flow to detect application performance issues. For more information, see the example use cases.
How it works
Packet Mirroring copies traffic from mirrored sources and sends it to a collector destination. To configure Packet Mirroring, you create a packet mirroring policy that specifies the source and destination.
Mirrored sources are Compute Engine VM instances that you can select by specifying subnets, network tags, or instance names. If you specify a subnet, all existing and future instances in that subnet are mirrored. You can specify one or more source types; if an instance matches at least one of them, it's mirrored.
Packet Mirroring collects traffic from an instance's network interface in the network where the packet mirroring policy applies. In cases where an instance has multiple network interfaces, the other interfaces aren't mirrored unless another policy has been configured to do so.
A collector destination is an instance group that is behind an internal load balancer. Instances in the instance group are referred to as collector instances. For the instance group, we recommend that you use managed instance groups because they provide autoscaling and autohealing capabilities.
When you specify the collector destination, you enter the name of a forwarding rule that is associated with the internal TCP/UDP load balancer. Google Cloud then forwards the mirrored traffic to the collector instances. An internal load balancer for Packet Mirroring is similar to other internal load balancers except that the forwarding rule must be configured for Packet Mirroring. Any non-mirrored traffic that is sent to the load balancer is dropped.
By default, Packet Mirroring collects all traffic of mirrored instances. Instead of collecting all traffic, you can use filters to narrow the traffic that is mirrored. Using filters can help you limit bandwidth usage on mirrored instances.
You can configure filters to collect traffic based on protocol, IP address ranges, direction of traffic (ingress-only, egress-only, or both) or a combination.
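To illustrate how these filter dimensions combine, here is a minimal Python sketch. The `MirrorFilter` class and its names are hypothetical stand-ins for illustration, not part of any Google Cloud API:

```python
import ipaddress

# Hypothetical model of a packet mirroring filter: a packet is mirrored
# when the traffic direction, protocol, and peer IP address all match.
class MirrorFilter:
    def __init__(self, cidr_ranges, protocols, direction="both"):
        self.networks = [ipaddress.ip_network(c) for c in cidr_ranges]
        self.protocols = set(protocols)   # e.g. {"tcp", "udp"}
        self.direction = direction        # "ingress", "egress", or "both"

    def matches(self, peer_ip, protocol, direction):
        return (
            self.direction in ("both", direction)
            and protocol in self.protocols
            and any(ipaddress.ip_address(peer_ip) in net
                    for net in self.networks)
        )

# Mirror only ingress TCP traffic from 198.51.100.0/24.
f = MirrorFilter(["198.51.100.0/24"], ["tcp"], direction="ingress")
print(f.matches("198.51.100.7", "tcp", "ingress"))  # True
print(f.matches("198.51.100.7", "udp", "ingress"))  # False
print(f.matches("203.0.113.9", "tcp", "ingress"))   # False
```

Narrowing any of the three dimensions reduces the volume of mirrored traffic, and therefore the extra bandwidth consumed on the mirrored instance.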
Multiple packet mirroring policies can apply to an instance. Depending on each policy's filter, Google Cloud chooses one for each flow. If you have distinct policies, Google Cloud uses the policy that matches the mirrored traffic. For example, you might have one policy that has the filter 198.51.100.3/24:TCP and another policy that has the filter 203.0.113.2/24:UDP. Because the policies are distinct, there's no ambiguity about which policy Google Cloud uses.
However, if you have overlapping policies, Google Cloud evaluates their filters to choose one. For example, you might have two policies, one that has a filter for 10.0.0.0/24:TCP and another for 10.0.0.0/16:TCP. These policies overlap because their CIDR ranges overlap. When choosing a policy, Google Cloud prioritizes policies by comparing their filters' CIDR range sizes.
Google Cloud chooses a policy based on its filter, as follows:
- If policies have different but overlapping CIDR ranges and the same exact protocols, Google Cloud selects the policy that uses the most specific IP address range. Suppose the destination for a TCP packet leaving a mirrored instance is 10.240.1.4, and there are two policies with the filters 10.240.1.0/24:ALL and 10.240.0.0/16:TCP. Because the most specific match for 10.240.1.4 is 10.240.1.0/24, Google Cloud uses the policy that has the filter 10.240.1.0/24:ALL.
- If policies specify the same exact CIDR range with overlapping protocols, Google Cloud picks the policy with the most specific protocol. For example, the filters 10.240.1.0/24:TCP and 10.240.1.0/24:ALL have the same range but overlapping protocols. For matching TCP traffic, Google Cloud uses the 10.240.1.0/24:TCP policy; the 10.240.1.0/24:ALL policy applies to matching traffic for all other protocols.
- If policies have the same exact CIDR range but distinct protocols, these policies don't overlap. Google Cloud uses the policy that corresponds to the mirrored traffic's protocol. For example, you might have a policy for 10.240.1.0/24:TCP and another for 10.240.1.0/24:UDP. Depending on the mirrored traffic's protocol, Google Cloud uses either the TCP or UDP policy.
- If overlapping policies have the same exact filter, Google Cloud picks one using a non-deterministic method. Each time matching traffic is re-evaluated against these policies, Google Cloud might pick the same policy or a different one.
In cases where the selected policy is not deterministic, it's possible that Google Cloud captures mirrored traffic across multiple load balancers. To predictably and consistently send mirrored traffic to a single load balancer, create policies that have filters with non-overlapping address ranges. If ranges overlap, set unique filter protocols.
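The selection order described above can be sketched in Python. `select_policy` and the policy tuples are illustrative stand-ins under the assumption that matching prefers the longest CIDR prefix, then a specific protocol over ALL; this is not a Google Cloud API:

```python
import ipaddress

def select_policy(policies, dest_ip, protocol):
    """policies: list of (name, cidr, proto), proto is 'tcp', 'udp', or 'all'."""
    addr = ipaddress.ip_address(dest_ip)
    # Keep only policies whose filter matches this packet.
    candidates = [
        (name, ipaddress.ip_network(cidr), proto)
        for name, cidr, proto in policies
        if addr in ipaddress.ip_network(cidr) and proto in (protocol, "all")
    ]
    if not candidates:
        return None
    # Longest prefix wins; ties go to a specific protocol over 'all'.
    candidates.sort(key=lambda c: (c[1].prefixlen, c[2] != "all"), reverse=True)
    return candidates[0][0]

policies = [
    ("wide-tcp", "10.240.0.0/16", "tcp"),
    ("narrow-all", "10.240.1.0/24", "all"),
]
print(select_policy(policies, "10.240.1.4", "tcp"))  # narrow-all
print(select_policy(policies, "10.240.2.9", "tcp"))  # wide-tcp
```

For 10.240.1.4, the /24 range is the more specific match, so the sketch picks narrow-all, mirroring the example in the text.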
VPC Flow Logs
If mirrored instances are in a subnet that also has VPC Flow Logs enabled, VPC Flow Logs doesn't report the cloned packets. VPC Flow Logs only logs non-mirrored traffic.
However, if collector instances are in a subnet that has VPC Flow Logs enabled, VPC Flow Logs captures the flow between the mirrored and collector instances. The logs show the source and destination IP addresses as the mirrored and collector instances. To stop collecting logs on mirrored traffic, disable VPC Flow Logs on the collector instances' subnet.
For more information, see Using VPC Flow Logs.
Key properties
The following list describes constraints and behaviors of Packet Mirroring that are important to understand before you use it:
Each packet mirroring policy defines mirrored sources and a collector destination. You must adhere to the following rules:
- All mirrored sources must be in the same project, VPC network, and Google Cloud region.
- A collector destination must be in the same region as the mirrored sources. A collector destination can be located in either the same VPC network as the mirrored sources or a VPC network connected to the mirrored sources' network using VPC Network Peering.
- Each mirroring policy can only reference a single collector destination. However, a single collector destination can be referenced by multiple mirroring policies.
Packet Mirroring supports all layer 4 protocols over IPv4.
You cannot mirror and collect traffic on the same network interface of a VM instance because doing this would cause a mirroring loop.
To mirror traffic passing between Pods on the same Google Kubernetes Engine (GKE) node, you must enable Intranode visibility for the cluster.
Mirroring traffic consumes bandwidth on the mirrored instance. For example, if a mirrored instance experiences 1 Gbps of ingress traffic and 1 Gbps of egress traffic, the total traffic on the instance is 1 Gbps of ingress and 3 Gbps of egress (1 Gbps of normal egress traffic and 2 Gbps of mirrored egress traffic). To limit what traffic is collected, you can use filters.
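The bandwidth arithmetic in that example works out as follows; the figures are the ones from the example above:

```python
# Mirroring both directions clones every mirrored packet as extra egress
# traffic on the VM itself, because the copies are sent to the collector.
normal_ingress_gbps = 1.0
normal_egress_gbps = 1.0

# Copies of both the ingress and the egress traffic leave the VM as egress.
mirrored_egress_gbps = normal_ingress_gbps + normal_egress_gbps

total_ingress_gbps = normal_ingress_gbps
total_egress_gbps = normal_egress_gbps + mirrored_egress_gbps

print(total_ingress_gbps, total_egress_gbps)  # 1.0 3.0
```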
The cost of Packet Mirroring varies depending on the amount of egress traffic traveling from a mirrored instance to an instance group and whether the traffic travels between zones.
Packet Mirroring applies to both the ingress and egress directions. If two VM instances that are being mirrored send traffic to each other, Google Cloud collects two versions of the same packet. You can alter this behavior by specifying that only ingress or only egress packets are mirrored.
There is a maximum number of packet mirroring policies that you can create for a project. For more information, see the per-project quotas on the quotas page.
For each packet mirroring policy, the maximum number of mirrored sources that you can specify depends on the source type:
- 5 subnets
- 5 tags
- 50 instances
The maximum number of packet mirroring filters is 30, which is the number of IP address ranges multiplied by the number of protocols. For example, you can specify 30 ranges and 1 protocol, which would be 30 filters. However, you cannot specify 30 ranges and 2 protocols, which would be 60 filters and exceed the maximum.
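The filter limit is a simple product check, sketched here; the `filter_count_ok` helper is hypothetical:

```python
# Effective filter count = (number of IP address ranges) x (number of
# protocols); the limit stated above is 30.
MAX_FILTERS = 30

def filter_count_ok(num_ranges, num_protocols):
    return num_ranges * num_protocols <= MAX_FILTERS

print(filter_count_ok(30, 1))  # True: 30 filters, at the maximum
print(filter_count_ok(30, 2))  # False: 60 filters exceeds the maximum
print(filter_count_ok(15, 2))  # True: 30 filters
```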
You are charged for the amount of data processed by Packet Mirroring. For details, see Packet Mirroring pricing.
You are also charged for all the prerequisite components and egress traffic that are related to Packet Mirroring. For example, the instances that collect traffic are charged at the regular rate. Also, if packet mirroring traffic travels between zones, you are charged for the egress traffic. For pricing details, see the related pricing page.
Mirrored traffic is encrypted only if the VM encrypts that traffic at the application layer. While VM-to-VM connections within VPC networks and peered VPC networks are encrypted, the encryption and decryption happens in the hypervisors. From the perspective of the VM, this traffic is not encrypted.
Example use cases
The following sections describe real-world scenarios that demonstrate why you might use Packet Mirroring.
Security and network monitoring
Security and network engineering teams must ensure that they are catching all anomalies and threats that might indicate security breaches and intrusions. They mirror all traffic so that they can complete a comprehensive inspection of suspicious flows. Because attacks can span multiple packets, security teams must be able to get all packets for each flow.
For example, the following security tools require you to capture multiple packets:
Intrusion detection system (IDS) tools require multiple packets of a single flow to match a signature so that the tools can detect persistent threats.
Deep Packet Inspection engines inspect packet payloads to detect protocol anomalies.
Network forensics for PCI compliance and other regulatory use cases require that most packets be examined. Packet Mirroring provides a solution for capturing different attack vectors, such as infrequent communication or attempted but unsuccessful communication.
Application performance monitoring
Network engineers can use mirrored traffic to troubleshoot performance issues reported by application and database teams. To check for networking issues, network engineers can view what's going over the wire rather than relying on application logs.
For example, network engineers can use data from Packet Mirroring to complete the following tasks:
Analyze protocols and behaviors so that they can find and fix issues, such as packet loss or TCP resets.
Analyze (in real time) traffic patterns from remote desktop, VoIP, and other interactive applications. Network engineers can search for issues that affect the application's user experience, such as multiple packet resends or more than expected reconnections.
Example collector destination topologies
You can use Packet Mirroring in various setups. The following examples show the location of collector destinations and their policies for different packet mirroring configurations, such as VPC Network Peering and Shared VPC.
Collector destination in the same network
The following example shows a packet mirroring configuration where the mirrored source and collector destination are in the same VPC network.
In the preceding diagram, the packet mirroring policy is configured to mirror
mirrored-subnet and send mirrored traffic to the internal TCP/UDP load balancer.
Google Cloud mirrors the traffic on existing and future instances in the
subnet. All traffic to and from the internet, on-premises hosts, or Google
services is mirrored.
Collector destination in a peer network
You can build a centralized collector model, where instances in different VPC networks send mirrored traffic to a collector destination in a central VPC network. That way, you can use a single destination collector.
In the following example, the collector-load-balancer internal TCP/UDP load balancer is located in the us-central1 region in the network-a VPC network in project-a. This destination collector can be used by two packet mirroring policies:
- policy-1 collects packets from mirrored sources in the us-central1 region in the network-a VPC network in project-a and sends them to the destination collector.
- policy-2 collects packets from mirrored sources in the us-central1 region in the network-b VPC network in project-b and sends them to the same destination collector.
Two mirroring policies are required because mirrored sources exist in different VPC networks.
In the preceding diagram, the collector destination collects mirrored traffic from subnets in two different networks. All resources (the source and destination) must be in the same region. The setup in network-a is similar to the example where the mirrored source and collector destination are in the same network: policy-1 is configured to collect traffic from subnet-a and send it to the collector destination. policy-2 is configured in project-a but specifies subnet-b as a mirrored source. Because network-a and network-b are peered, the destination collector can collect traffic from subnet-b.
The networks are in different projects and might have different owners. Either owner can create the packet mirroring policy if they have the right permissions:
- If the owners of project-a create the packet mirroring policy, they must have the compute.packetMirroringAdmin role on the network, subnet, or instances to mirror in project-b.
- If the owners of project-b create the packet mirroring policy, they must have the compute.packetMirroringUser role in project-a.
For more information about enabling private connectivity across two VPC networks, see VPC Network Peering.
Shared VPC
In the following Shared VPC scenarios, the mirrored instances and the collector destination are all in the same Shared VPC network. Even though the resources are all in the same network, they can be in different projects, such as the host project or several different service projects. The following examples show where packet mirroring policies must be created and who can create them.
If both the mirrored sources and collector destination are in the same project, either in a host project or service project, the setup is similar to having everything in the same VPC network. The project owner can create all the resources and set the required permissions in that project.
For more information, see Shared VPC overview.
Collector destination in service project
In the following example, the collector destination is in a service project that uses a subnet in the host project. In this case, the policy is also in the service project. The policy could also be in the host project.
In the preceding diagram, the service project contains the collector instances that use the collector subnet in the Shared VPC network. The packet mirroring policy was created in the service project and is configured to mirror instances that have a network interface in the mirrored subnet of the Shared VPC network.
Service or host project users can create the packet mirroring policy. To do so,
users must have the
compute.packetMirroringUser role in the service project
where the collector destination is located. Users must also have the
compute.packetMirroringAdmin role on the mirrored sources.
Collector destination in host project
In the following example, the collector destination is in the host project and mirrored instances are in the service projects.
This example might apply to scenarios where developers deploy applications in service projects and use the Shared VPC network. They don't have to manage the networking infrastructure or Packet Mirroring. Instead, a centralized networking or security team, who have control over the host project and Shared VPC network, are responsible for provisioning packet mirroring policies.
In the preceding diagram, the packet mirroring policy is created in the host project, where the collector destination is located. The policy is configured to mirror instances in the mirrored subnet. VM instances in service projects can use the mirrored subnet, and their traffic is mirrored.
Service or host project users can create the packet mirroring policy. To do so, users in the service project must have the compute.packetMirroringUser role in the host project. Users in the host project require the compute.packetMirroringAdmin role for mirrored sources in the service projects.
Multi-interface VM instances
You can include VM instances that have multiple network interfaces in a packet mirroring policy. Because a policy can mirror resources from only a single network, you cannot create one policy that mirrors traffic for all network interfaces of an instance. To mirror more than one network interface of a multi-interface instance, you must create one packet mirroring policy for each interface, because each interface connects to a unique VPC network.
What's next
- To create and manage packet mirroring policies, see Using Packet Mirroring.
- To view metrics and check your existing packet mirroring policies, see Monitoring Packet Mirroring.
- For information about internal TCP/UDP load balancers, see Internal TCP/UDP Load Balancing concepts.
- For a list of partner providers, see Packet mirroring partner providers.