This document provides steps and guidance to implement your chosen network design after you review Decide the network design for your Google Cloud landing zone. If you have not already done so, review Landing zone design in Google Cloud before you choose an option.
These instructions are intended for network engineers, architects, and technical practitioners who are involved in creating the network design for your organization's landing zone.
Network design options
Based on your chosen network design, complete one of the following:
- Create option 1: Shared VPC network for each environment
- Create option 2: Hub-and-spoke topology with centralized appliances
- Create option 3: Hub-and-spoke topology without appliances
- Create option 4: Expose services in a consumer-producer model with Private Service Connect
Create option 1: Shared VPC network for each environment
If you have chosen to create the Shared VPC network for each environment in "Decide the network design for your Google Cloud landing zone", follow this procedure.
The following steps create a single instance of a landing zone. If you need more than one landing zone, perhaps one for development and one for production, repeat the steps for each landing zone.
Limit external access by using an organization policy
We recommend that you limit direct access to the internet to only the resources that need it. Resources without external addresses can still access many Google APIs and services through Private Google Access. Private Google Access is enabled at the subnet level and lets resources interact with key Google services, while isolating them from the public internet.
For usability, the default functionality of Google Cloud lets users create resources in all projects, as long as they have the correct IAM permissions. For improved security, we recommend that you restrict the default permissions for resource types that can cause unintended internet access. You can then authorize specific projects only to allow the creation of these resources. Use the instructions at Creating and managing organization policies to set the following constraints.
Restrict Protocol Forwarding Based on type of IP Address
Protocol forwarding establishes a forwarding rule resource with an external IP address and lets you direct the traffic to a VM.
The Restrict Protocol Forwarding Based on type of IP Address constraint prevents the creation of forwarding rules with external IP addresses for the entire organization. For projects authorized to use external forwarding rules, you can modify the constraint at the folder or project level.
Set the following values to configure this constraint:
- Applies to: Customize
- Policy enforcement: Replace
- Policy values: Custom
- Policy type: Deny
- Custom value: IS:EXTERNAL
Define allowed external IPs for VM instances
By default, individual VM instances can acquire external IP addresses, which allows both outbound and inbound connectivity with the internet.
Enforcing the Define allowed external IPs for VM instances constraint prevents the use of external IP addresses with VM instances. For workloads that require external IP addresses on individual VM instances, modify the constraint at a folder or project level to specify the individual VM instances. Or, override the constraint for the relevant projects.
- Applies to: Customize
- Policy enforcement: Replace
- Policy values: Deny All
Disable VPC External IPv6 usage
The Disable VPC External IPv6 usage constraint, when set to True, prevents the configuration of VPC subnets with external IPv6 addresses for VM instances.
- Applies to: Customize
- Enforcement: On
Disable default network creation
When a new project is created, a default VPC is automatically created. This is useful for quick experiments that don't require specific network configuration or integration with a larger enterprise networking environment.
Configure the Skip default network creation constraint to disable default VPC creation for new projects. You can manually create the default network within a project, if needed.
- Applies to: Customize
- Enforcement: On
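The four constraints above can also be set with the gcloud CLI. The following sketch uses the Org Policy v2 `set-policy` format; the organization ID is a placeholder, and you need the Organization Policy Administrator role for these commands to succeed.

```shell
# Placeholder organization ID -- replace with your own.
ORG_ID="123456789012"

# Restrict Protocol Forwarding Based on type of IP Address:
# deny forwarding rules that use external IP addresses.
cat > restrict-protocol-forwarding.yaml <<EOF
name: organizations/${ORG_ID}/policies/compute.restrictProtocolForwardingCreationForTypes
spec:
  rules:
  - values:
      deniedValues:
      - IS:EXTERNAL
EOF
gcloud org-policies set-policy restrict-protocol-forwarding.yaml

# Define allowed external IPs for VM instances: deny all.
cat > vm-external-ip.yaml <<EOF
name: organizations/${ORG_ID}/policies/compute.vmExternalIpAccess
spec:
  rules:
  - denyAll: true
EOF
gcloud org-policies set-policy vm-external-ip.yaml

# Boolean constraints: Disable VPC External IPv6 usage and
# Skip default network creation.
for constraint in compute.disableVpcExternalIpv6 compute.skipDefaultNetworkCreation; do
  cat > "${constraint}.yaml" <<EOF
name: organizations/${ORG_ID}/policies/${constraint}
spec:
  rules:
  - enforce: true
EOF
  gcloud org-policies set-policy "${constraint}.yaml"
done
```

To exempt an authorized project later, set a narrower policy on that project with the same constraint name; the lower-level policy replaces the inherited one when you use the `Replace` enforcement behavior described above.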
Design firewall rules
Firewall rules let you allow or deny traffic to or from your VMs based on a configuration you define. Hierarchical firewall policies are implemented at the organization and folder levels, and network firewall policies are implemented at the VPC network level in the resource hierarchy. Together, these provide an important capability to help secure your workloads.
Regardless of where the firewall policies are applied, use the following guidelines when designing and evaluating your firewall rules:
- Implement least-privilege (also referred to as microsegmentation) principles. Block all traffic by default and only allow the specific traffic you need. This includes limiting the rules to only the protocols and ports you need for each workload.
- Enable firewall rules logging for visibility into firewall behavior and to use Firewall Insights.
- Define a numbering methodology for allocating firewall rule priorities. For example, it's best practice to reserve a range of low numbers in each policy for rules needed during incident response. We also recommend that you prioritize more specific rules higher than more general rules, to ensure that the specific rules aren't shadowed by the general rules. The following example shows a possible approach for firewall rule priorities:
| Firewall rule priority range | Purpose |
|---|---|
| 0-999 | Reserved for incident response |
| 1000-1999 | Always blocked traffic |
| 2000-1999999999 | Workload-specific rules |
| 2000000000-2100000000 | Catch-all rules |
| 2100000001-2147483643 | Reserved |
Configure hierarchical firewall policies
Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. For examples of using hierarchical firewall policies, see Hierarchical firewall policy examples.
Define hierarchical firewall policies to implement the following network access controls:
- Identity-Aware Proxy (IAP) for TCP forwarding. IAP for TCP forwarding is allowed through a security policy that permits ingress traffic from IP range 35.235.240.0/20 for TCP ports 22 and 3389.
- Health checks for Cloud Load Balancing. The well-known ranges that are used for health checks are allowed.
- For most Cloud Load Balancing instances (including Internal TCP/UDP Load Balancing, Internal HTTP(S) Load Balancing, External TCP Proxy Load Balancing, External SSL Proxy Load Balancing, and HTTP(S) Load Balancing), a security policy is defined that allows ingress traffic from the IP ranges 35.191.0.0/16 and 130.211.0.0/22 for ports 80 and 443.
- For Network Load Balancing, a security policy is defined that enables legacy health checks by allowing ingress traffic from IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22 for ports 80 and 443.
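These controls can be sketched as a hierarchical firewall policy with the gcloud CLI. The policy short name and organization ID below are placeholder values, and the rule priorities follow the example numbering scheme from the table above.

```shell
ORG_ID="123456789012"            # placeholder organization ID

# Create an organization-level hierarchical firewall policy.
gcloud compute firewall-policies create \
    --organization="${ORG_ID}" \
    --short-name="base-fw-policy" \
    --description="Baseline ingress allowances"

# Allow IAP for TCP forwarding (SSH and RDP) from Google's IAP range.
gcloud compute firewall-policies rules create 2000 \
    --firewall-policy="base-fw-policy" \
    --organization="${ORG_ID}" \
    --direction=INGRESS --action=allow \
    --src-ip-ranges="35.235.240.0/20" \
    --layer4-configs="tcp:22,tcp:3389"

# Allow load balancer health checks from the well-known ranges.
gcloud compute firewall-policies rules create 2001 \
    --firewall-policy="base-fw-policy" \
    --organization="${ORG_ID}" \
    --direction=INGRESS --action=allow \
    --src-ip-ranges="35.191.0.0/16,130.211.0.0/22" \
    --layer4-configs="tcp:80,tcp:443"

# Associate the policy with the organization node so it applies everywhere.
gcloud compute firewall-policies associations create \
    --firewall-policy="base-fw-policy" \
    --organization="${ORG_ID}"
```

A similar rule with the Network Load Balancing legacy health check ranges can be added at another priority if you use that load balancer type.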
Configure your Shared VPC environment
Before implementing a Shared VPC design, decide how to share subnets with service projects. You attach a service project to a host project. To determine which subnets are available for the service project, you assign IAM permissions to the host project or individual subnets. For example, you can choose to dedicate a different subnet to each service project, or share the same subnets between service projects.
- Create a new project for the Shared VPC. Later in this process, this project becomes the host project and contains the networks and networking resources to be shared with the service projects.
- Enable the Compute Engine API for the host project.
- Configure Shared VPC for the project.
- Create the custom-mode VPC network in the host project.
- Create subnets in the regions where you plan to deploy workloads. For each subnet, enable Private Google Access to allow VM instances without external IP addresses to reach Google services.
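The steps above can be sketched with the gcloud CLI as follows. The project ID, network name, region, and IP range are all placeholder values chosen for illustration.

```shell
HOST_PROJECT="lz-dev-host"       # placeholder host project ID
REGION="us-central1"             # example region

# Enable the Compute Engine API and designate the host project.
gcloud services enable compute.googleapis.com --project="${HOST_PROJECT}"
gcloud compute shared-vpc enable "${HOST_PROJECT}"

# Custom-mode VPC: no subnets are created automatically.
gcloud compute networks create shared-vpc \
    --project="${HOST_PROJECT}" \
    --subnet-mode=custom

# One subnet per workload region, with Private Google Access enabled
# so VMs without external IPs can still reach Google APIs.
gcloud compute networks subnets create "workloads-${REGION}" \
    --project="${HOST_PROJECT}" \
    --network=shared-vpc \
    --region="${REGION}" \
    --range="10.10.0.0/20" \
    --enable-private-ip-google-access
```

Repeat the subnet command for each additional region where you plan to deploy workloads.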
Configure Cloud NAT
Follow these steps if the workloads in specific regions require outbound internet access—for example, to download software packages or updates.
- Create a Cloud NAT gateway in the regions where workloads require outbound internet access. You can customize the Cloud NAT configuration to only allow outbound connectivity from specific subnets, if needed.
- At a minimum, enable Cloud NAT logging for the gateway with the ERRORS_ONLY log filter. To include logs for translations performed by Cloud NAT, configure each gateway to log ALL.
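A minimal Cloud NAT setup might look like the following. The project, router, and gateway names are placeholders; Cloud NAT requires a Cloud Router in the same region and network.

```shell
HOST_PROJECT="lz-dev-host"       # placeholder host project ID
REGION="us-central1"

# Cloud Router that the NAT gateway attaches to.
gcloud compute routers create "nat-router-${REGION}" \
    --project="${HOST_PROJECT}" \
    --network=shared-vpc \
    --region="${REGION}"

# NAT gateway for all subnets in the region, logging errors only.
# Change --log-filter to ALL to also log successful translations,
# or use --nat-custom-subnet-ip-ranges to limit it to specific subnets.
gcloud compute routers nats create "nat-gw-${REGION}" \
    --project="${HOST_PROJECT}" \
    --router="nat-router-${REGION}" \
    --region="${REGION}" \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges \
    --enable-logging \
    --log-filter=ERRORS_ONLY
```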
Configure hybrid connectivity
You can use Dedicated Interconnect, Partner Interconnect, or Cloud VPN to provide hybrid connectivity to your landing zone. The following steps create the initial hybrid connectivity resources required for this design option. If you're using Dedicated Interconnect, do the following. If you're using Partner Interconnect or Cloud VPN, you can skip these steps.
- For each region where you're terminating hybrid connectivity in the VPC network, do the following:
- Create two Dedicated or Partner VLAN attachments, one for each edge availability zone. As part of this process, you select Cloud Routers and create BGP sessions.
- Configure the peer network (on-premises or other cloud) routers.
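For the Dedicated Interconnect case, the per-region steps can be sketched as follows. The interconnect name, ASN, and project ID are assumptions; the loop creates one VLAN attachment in each edge availability domain for redundancy.

```shell
HOST_PROJECT="lz-dev-host"            # placeholder project ID
REGION="us-central1"
INTERCONNECT="dc1-interconnect"       # assumed existing Dedicated Interconnect

# Cloud Router that terminates the BGP sessions (example private ASN).
gcloud compute routers create "hybrid-router-${REGION}" \
    --project="${HOST_PROJECT}" \
    --network=shared-vpc \
    --region="${REGION}" \
    --asn=64514

# One VLAN attachment per edge availability domain.
for domain in availability-domain-1 availability-domain-2; do
  gcloud compute interconnects attachments dedicated create "vlan-${domain}" \
      --project="${HOST_PROJECT}" \
      --interconnect="${INTERCONNECT}" \
      --router="hybrid-router-${REGION}" \
      --region="${REGION}" \
      --edge-availability-domain="${domain}"
done
```

After the attachments are created, you configure matching VLAN interfaces and BGP peers on the on-premises routers using the VLAN IDs and addresses that Google Cloud allocates.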
Configure workload projects
Create a separate service project for each workload:
- Create a new project to function as one of the service projects for the Shared VPC.
- Enable the Compute Engine API for the service project.
- Attach the project to the host project.
- Configure access to all subnets in the host project or some subnets in the host project.
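The service project setup can be sketched like this. The project IDs and the IAM group are placeholders; the subnet-level binding shows the more restrictive option of sharing a single subnet rather than the whole host project.

```shell
HOST_PROJECT="lz-dev-host"       # placeholder host project ID
SERVICE_PROJECT="lz-dev-app1"    # placeholder service project ID
REGION="us-central1"

gcloud services enable compute.googleapis.com --project="${SERVICE_PROJECT}"

# Attach the service project to the Shared VPC host project.
gcloud compute shared-vpc associated-projects add "${SERVICE_PROJECT}" \
    --host-project="${HOST_PROJECT}"

# Grant subnet-level access: members of this (hypothetical) group can
# deploy resources only into this subnet.
gcloud compute networks subnets add-iam-policy-binding "workloads-${REGION}" \
    --project="${HOST_PROJECT}" \
    --region="${REGION}" \
    --member="group:app1-admins@example.com" \
    --role="roles/compute.networkUser"
```

To share all subnets instead, grant roles/compute.networkUser on the host project rather than on individual subnets.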
Configure observability
Network Intelligence Center provides a cohesive way to monitor, troubleshoot, and visualize your cloud networking environment. Use it to ensure that your design functions with the desired intent.
The following configurations enable the collection and analysis of logs and metrics:
- You must enable the Network Management API before you can run Connectivity Tests, whether you use the API directly, the Google Cloud CLI, or the Google Cloud console.
- You must enable the Firewall Insights API before you can perform any tasks using Firewall Insights.
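Both APIs can be enabled in one command, and a Connectivity Test can then verify a path through the design. The project, zone, and instance names below are hypothetical.

```shell
HOST_PROJECT="lz-dev-host"       # placeholder project ID

# Enable the APIs that back Connectivity Tests and Firewall Insights.
gcloud services enable \
    networkmanagement.googleapis.com \
    firewallinsights.googleapis.com \
    --project="${HOST_PROJECT}"

# Example connectivity test between two hypothetical VMs on port 443.
gcloud network-management connectivity-tests create vm-to-vm-test \
    --project="${HOST_PROJECT}" \
    --source-instance="projects/lz-dev-app1/zones/us-central1-a/instances/vm-a" \
    --destination-instance="projects/lz-dev-app1/zones/us-central1-a/instances/vm-b" \
    --protocol=TCP \
    --destination-port=443
```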
Next steps
The initial configuration for this network design option is now complete. You can now either repeat these steps to configure an additional instance of the landing zone environment, such as a staging or production environment, or continue to Decide the security for your Google Cloud landing zone.
Create option 2: Hub-and-spoke topology with centralized appliances
If you have chosen to create the hub-and-spoke topology with centralized appliances in "Decide the network design for your Google Cloud landing zone", follow this procedure.
The following steps create a single instance of a landing zone. If you need more than one landing zone, perhaps one for development and one for production, repeat the steps for each landing zone.
Limit external access by using an organization policy
We recommend that you limit direct access to the internet to only the resources that need it. Resources without external addresses can still access many Google APIs and services through Private Google Access. Private Google Access is enabled at the subnet level and lets resources interact with key Google services, while isolating them from the public internet.
For usability, the default functionality of Google Cloud lets users create resources in all projects, as long as they have the correct IAM permissions. For improved security, we recommend that you restrict the default permissions for resource types that can cause unintended internet access. You can then authorize specific projects only to allow the creation of these resources. Use the instructions at Creating and managing organization policies to set the following constraints.
Restrict Protocol Forwarding Based on type of IP Address
Protocol forwarding establishes a forwarding rule resource with an external IP address and lets you direct the traffic to a VM.
The Restrict Protocol Forwarding Based on type of IP Address constraint prevents the creation of forwarding rules with external IP addresses for the entire organization. For projects authorized to use external forwarding rules, you can modify the constraint at the folder or project level.
Set the following values to configure this constraint:
- Applies to: Customize
- Policy enforcement: Replace
- Policy values: Custom
- Policy type: Deny
- Custom value: IS:EXTERNAL
Define allowed external IPs for VM instances
By default, individual VM instances can acquire external IP addresses, which allows both outbound and inbound connectivity with the internet.
Enforcing the Define allowed external IPs for VM instances constraint prevents the use of external IP addresses with VM instances. For workloads that require external IP addresses on individual VM instances, modify the constraint at a folder or project level to specify the individual VM instances. Or, override the constraint for the relevant projects.
- Applies to: Customize
- Policy enforcement: Replace
- Policy values: Deny All
Disable VPC External IPv6 usage
The Disable VPC External IPv6 usage constraint, when set to True, prevents the configuration of VPC subnets with external IPv6 addresses for VM instances.
- Applies to: Customize
- Enforcement: On
Disable default network creation
When a new project is created, a default VPC is automatically created. This is useful for quick experiments that don't require specific network configuration or integration with a larger enterprise networking environment.
Configure the Skip default network creation constraint to disable default VPC creation for new projects. You can manually create the default network within a project, if needed.
- Applies to: Customize
- Enforcement: On
Design firewall rules
Firewall rules let you allow or deny traffic to or from your VMs based on a configuration you define. Hierarchical firewall policies are implemented at the organization and folder levels, and network firewall policies are implemented at the VPC network level in the resource hierarchy. Together, these provide an important capability to help secure your workloads.
Regardless of where the firewall policies are applied, use the following guidelines when designing and evaluating your firewall rules:
- Implement least-privilege (also referred to as microsegmentation) principles. Block all traffic by default and only allow the specific traffic you need. This includes limiting the rules to only the protocols and ports you need for each workload.
- Enable firewall rules logging for visibility into firewall behavior and to use Firewall Insights.
- Define a numbering methodology for allocating firewall rule priorities. For example, it's best practice to reserve a range of low numbers in each policy for rules needed during incident response. We also recommend that you prioritize more specific rules higher than more general rules, to ensure that the specific rules aren't shadowed by the general rules. The following example shows a possible approach for firewall rule priorities:
| Firewall rule priority range | Purpose |
|---|---|
| 0-999 | Reserved for incident response |
| 1000-1999 | Always blocked traffic |
| 2000-1999999999 | Workload-specific rules |
| 2000000000-2100000000 | Catch-all rules |
| 2100000001-2147483643 | Reserved |
Configure hierarchical firewall policies
Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. For examples of using hierarchical firewall policies, see Hierarchical firewall policy examples.
Define hierarchical firewall policies to implement the following network access controls:
- Identity-Aware Proxy (IAP) for TCP forwarding. IAP for TCP forwarding is allowed through a security policy that permits ingress traffic from IP range 35.235.240.0/20 for TCP ports 22 and 3389.
- Health checks for Cloud Load Balancing. The well-known ranges that are used for health checks are allowed.
- For most Cloud Load Balancing instances (including Internal TCP/UDP Load Balancing, Internal HTTP(S) Load Balancing, External TCP Proxy Load Balancing, External SSL Proxy Load Balancing, and HTTP(S) Load Balancing), a security policy is defined that allows ingress traffic from the IP ranges 35.191.0.0/16 and 130.211.0.0/22 for ports 80 and 443.
- For Network Load Balancing, a security policy is defined that enables legacy health checks by allowing ingress traffic from IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22 for ports 80 and 443.
Configure your VPC environment
The transit and hub VPC networks provide the networking resources to enable connectivity between workload spoke VPC networks and on-premises or multi-cloud networks.
- Create a new project for transit and hub VPC networks. Both VPC networks are part of the same project to support connectivity through the virtual network appliances.
- Enable the Compute Engine API for the project.
- Create the transit custom mode VPC network.
- In the transit VPC network, create a subnet in the regions where you plan to deploy the virtual network appliances.
- Create the hub custom mode VPC network.
- In the hub VPC network, create a subnet in the regions where you plan to deploy the virtual network appliances.
- Configure global or regional network firewall policies to allow ingress and egress traffic for the network virtual appliances.
- Create a managed instance group for the virtual network appliances.
- Configure the internal TCP/UDP load balancing resources for the transit VPC. This load balancer is used for routing traffic from the transit VPC to the hub VPC through the virtual network appliances.
- Configure the internal TCP/UDP load balancing resources for the hub VPC. This load balancer is used for routing traffic from the hub VPC to the transit VPC through the virtual network appliances.
- Configure Private Service Connect for Google APIs for the hub VPC.
- Modify VPC routes to send all traffic through the network virtual appliances:
  - Delete the 0.0.0.0/0 route with next-hop default-internet-gateway from the hub VPC.
  - Configure a new route with destination 0.0.0.0/0 and a next-hop of the forwarding rule for the load balancer in the hub VPC.
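The route swap in the hub VPC can be sketched as follows. The project ID, network name, forwarding rule name, and region are placeholders; the default route's auto-generated name is looked up rather than assumed.

```shell
HUB_PROJECT="lz-dev-hub"         # placeholder project for hub/transit VPCs

# Find and delete the auto-created default route to the internet gateway.
DEFAULT_ROUTE=$(gcloud compute routes list \
    --project="${HUB_PROJECT}" \
    --filter="network:hub-vpc AND nextHopGateway:default-internet-gateway" \
    --format="value(name)")
gcloud compute routes delete "${DEFAULT_ROUTE}" \
    --project="${HUB_PROJECT}" --quiet

# Replace it with a default route whose next hop is the internal
# load balancer that fronts the network virtual appliances.
gcloud compute routes create hub-default-to-nva \
    --project="${HUB_PROJECT}" \
    --network=hub-vpc \
    --destination-range="0.0.0.0/0" \
    --next-hop-ilb=nva-ilb-forwarding-rule \
    --next-hop-ilb-region=us-central1
```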
Configure Cloud NAT
Follow these steps if the workloads in specific regions require outbound internet access—for example, to download software packages or updates.
- Create a Cloud NAT gateway in the regions where workloads require outbound internet access. You can customize the Cloud NAT configuration to only allow outbound connectivity from specific subnets, if needed.
- At a minimum, enable Cloud NAT logging for the gateway with the ERRORS_ONLY log filter. To include logs for translations performed by Cloud NAT, configure each gateway to log ALL.
Configure hybrid connectivity
You can use Dedicated Interconnect, Partner Interconnect, or Cloud VPN to provide hybrid connectivity to your landing zone. The following steps create the initial hybrid connectivity resources required for this design option. If you're using Dedicated Interconnect, do the following. If you're using Partner Interconnect or Cloud VPN, you can skip these steps.
- For each region where you're terminating hybrid connectivity in the VPC network, do the following:
- Create two Dedicated or Partner VLAN attachments, one for each edge availability zone. As part of this process, you select Cloud Routers and create BGP sessions.
- Configure the peer network (on-premises or other cloud) routers.
- Configure custom advertised routes in the Cloud Routers for the subnet ranges in the hub and workload VPCs.
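Custom route advertisement on a Cloud Router might look like the following sketch. The router name, region, and advertised ranges are example values standing in for your hub and workload subnet CIDRs.

```shell
HUB_PROJECT="lz-dev-hub"         # placeholder project ID
REGION="us-central1"

# Advertise the hub and workload subnet ranges (example ranges) to the
# on-premises peer, instead of only the local VPC's subnets.
gcloud compute routers update "hybrid-router-${REGION}" \
    --project="${HUB_PROJECT}" \
    --region="${REGION}" \
    --advertisement-mode=custom \
    --set-advertisement-ranges="10.10.0.0/20,10.20.0.0/20"
```

This is needed because Cloud Router advertises only its own VPC's subnets by default, so on-premises networks would not otherwise learn routes to the peered workload spokes.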
Configure workload projects
Create a separate spoke VPC for each workload:
- Create a new project to host your workload.
- Enable the Compute Engine API for the project.
- Configure VPC Network Peering between the workload spoke VPC and hub VPC with the following settings:
- Enable custom route export on the hub VPC.
- Enable custom route import on the workload spoke VPC.
- Create subnets in the regions where you plan to deploy workloads. For each subnet, enable Private Google Access to allow VM instances with only internal IP addresses to reach Google services.
- Configure Private Service Connect for Google APIs.
- To route all traffic through the virtual network appliances in the hub VPC, delete the 0.0.0.0/0 route with next-hop default-internet-gateway from the workload spoke VPC.
- Configure global or regional network firewall policies to allow ingress and egress traffic for your workload.
Configure observability
Network Intelligence Center provides a cohesive way to monitor, troubleshoot, and visualize your cloud networking environment. Use it to ensure that your design functions with the desired intent.
The following configurations enable the collection and analysis of logs and metrics:
- You must enable the Network Management API before you can run Connectivity Tests, whether you use the API directly, the Google Cloud CLI, or the Google Cloud console.
- You must enable the Firewall Insights API before you can perform any tasks using Firewall Insights.
Next steps
The initial configuration for this network design option is now complete. You can now either repeat these steps to configure an additional instance of the landing zone environment, such as a staging or production environment, or continue to Decide the security for your Google Cloud landing zone.
Create option 3: Hub-and-spoke topology without appliances
If you have chosen to create the hub-and-spoke topology without appliances in "Decide the network design for your Google Cloud landing zone", follow this procedure.
The following steps create a single instance of a landing zone. If you need more than one landing zone, perhaps one for development and one for production, repeat the steps for each landing zone.
Limit external access by using an organization policy
We recommend that you limit direct access to the internet to only the resources that need it. Resources without external addresses can still access many Google APIs and services through Private Google Access. Private Google Access is enabled at the subnet level and lets resources interact with key Google services, while isolating them from the public internet.
For usability, the default functionality of Google Cloud lets users create resources in all projects, as long as they have the correct IAM permissions. For improved security, we recommend that you restrict the default permissions for resource types that can cause unintended internet access. You can then authorize specific projects only to allow the creation of these resources. Use the instructions at Creating and managing organization policies to set the following constraints.
Restrict Protocol Forwarding Based on type of IP Address
Protocol forwarding establishes a forwarding rule resource with an external IP address and lets you direct the traffic to a VM.
The Restrict Protocol Forwarding Based on type of IP Address constraint prevents the creation of forwarding rules with external IP addresses for the entire organization. For projects authorized to use external forwarding rules, you can modify the constraint at the folder or project level.
Set the following values to configure this constraint:
- Applies to: Customize
- Policy enforcement: Replace
- Policy values: Custom
- Policy type: Deny
- Custom value: IS:EXTERNAL
Define allowed external IPs for VM instances
By default, individual VM instances can acquire external IP addresses, which allows both outbound and inbound connectivity with the internet.
Enforcing the Define allowed external IPs for VM instances constraint prevents the use of external IP addresses with VM instances. For workloads that require external IP addresses on individual VM instances, modify the constraint at a folder or project level to specify the individual VM instances. Or, override the constraint for the relevant projects.
- Applies to: Customize
- Policy enforcement: Replace
- Policy values: Deny All
Disable VPC External IPv6 usage
The Disable VPC External IPv6 usage constraint, when set to True, prevents the configuration of VPC subnets with external IPv6 addresses for VM instances.
- Applies to: Customize
- Enforcement: On
Disable default network creation
When a new project is created, a default VPC is automatically created. This is useful for quick experiments that don't require specific network configuration or integration with a larger enterprise networking environment.
Configure the Skip default network creation constraint to disable default VPC creation for new projects. You can manually create the default network within a project, if needed.
- Applies to: Customize
- Enforcement: On
Design firewall rules
Firewall rules let you allow or deny traffic to or from your VMs based on a configuration you define. Hierarchical firewall policies are implemented at the organization and folder levels, and network firewall policies are implemented at the VPC network level in the resource hierarchy. Together, these provide an important capability to help secure your workloads.
Regardless of where the firewall policies are applied, use the following guidelines when designing and evaluating your firewall rules:
- Implement least-privilege (also referred to as microsegmentation) principles. Block all traffic by default and only allow the specific traffic you need. This includes limiting the rules to only the protocols and ports you need for each workload.
- Enable firewall rules logging for visibility into firewall behavior and to use Firewall Insights.
- Define a numbering methodology for allocating firewall rule priorities. For example, it's best practice to reserve a range of low numbers in each policy for rules needed during incident response. We also recommend that you prioritize more specific rules higher than more general rules, to ensure that the specific rules aren't shadowed by the general rules. The following example shows a possible approach for firewall rule priorities:
| Firewall rule priority range | Purpose |
|---|---|
| 0-999 | Reserved for incident response |
| 1000-1999 | Always blocked traffic |
| 2000-1999999999 | Workload-specific rules |
| 2000000000-2100000000 | Catch-all rules |
| 2100000001-2147483643 | Reserved |
Configure hierarchical firewall policies
Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. For examples of using hierarchical firewall policies, see Hierarchical firewall policy examples.
Define hierarchical firewall policies to implement the following network access controls:
- Identity-Aware Proxy (IAP) for TCP forwarding. IAP for TCP forwarding is allowed through a security policy that permits ingress traffic from IP range 35.235.240.0/20 for TCP ports 22 and 3389.
- Health checks for Cloud Load Balancing. The well-known ranges that are used for health checks are allowed.
- For most Cloud Load Balancing instances (including Internal TCP/UDP Load Balancing, Internal HTTP(S) Load Balancing, External TCP Proxy Load Balancing, External SSL Proxy Load Balancing, and HTTP(S) Load Balancing), a security policy is defined that allows ingress traffic from the IP ranges 35.191.0.0/16 and 130.211.0.0/22 for ports 80 and 443.
- For Network Load Balancing, a security policy is defined that enables legacy health checks by allowing ingress traffic from IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22 for ports 80 and 443.
Configure the hub VPC environment
The hub VPC provides the networking resources to enable connectivity between workload spoke VPC networks and on-premises or multi-cloud networks.
- Create a new project for the hub VPC network.
- Enable the Compute Engine API for the project.
- Create the hub custom mode VPC network.
- Configure Private Service Connect for Google APIs for the hub VPC.
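A Private Service Connect endpoint for Google APIs in the hub VPC can be sketched as follows. The project ID, network name, and the reserved internal address are assumptions; the chosen address must be outside your subnet ranges, and the endpoint name must be 20 characters or fewer.

```shell
HUB_PROJECT="lz-dev-hub"         # placeholder project ID

# Reserve an internal address for the Private Service Connect endpoint.
# 10.255.255.254 is an example address outside the subnet ranges.
gcloud compute addresses create psc-googleapis \
    --project="${HUB_PROJECT}" \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --network=hub-vpc \
    --addresses=10.255.255.254

# Create the endpoint that fronts the all-apis bundle.
gcloud compute forwarding-rules create pscgapis \
    --project="${HUB_PROJECT}" \
    --global \
    --network=hub-vpc \
    --address=psc-googleapis \
    --target-google-apis-bundle=all-apis
```

You then point DNS for the relevant Google API domains at the reserved address, typically through a private Cloud DNS zone shared with the spoke networks.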
Configure hybrid connectivity
You can use Dedicated Interconnect, Partner Interconnect, or Cloud VPN to provide hybrid connectivity to your landing zone. The following steps create the initial hybrid connectivity resources required for this design option. If you're using Dedicated Interconnect, do the following. If you're using Partner Interconnect or Cloud VPN, you can skip these steps.
- For each region where you're terminating hybrid connectivity in the VPC network, do the following:
- Create two Dedicated or Partner VLAN attachments, one for each edge availability zone. As part of this process, you select Cloud Routers and create BGP sessions.
- Configure the peer network (on-premises or other cloud) routers.
- Configure custom advertised routes in the Cloud Routers for the subnet ranges in the hub and workload VPCs.
Configure workload projects
Create a separate spoke VPC for each workload:
- Create a new project to host your workload.
- Enable the Compute Engine API for the project.
- Configure VPC Network Peering between the workload spoke VPC and hub VPC, with the following settings:
- Enable custom route export on the hub VPC.
- Enable custom route import on the workload spoke VPC.
- Create subnets in the regions where you plan to deploy workloads. For each subnet, enable Private Google Access to allow VM instances with only internal IP addresses to reach Google services.
- Configure Private Service Connect for Google APIs.
Configure Cloud NAT
Follow these steps if the workloads in specific regions require outbound internet access—for example, to download software packages or updates.
- Create a Cloud NAT gateway in the regions where workloads require outbound internet access. You can customize the Cloud NAT configuration to only allow outbound connectivity from specific subnets, if needed.
- At a minimum, enable Cloud NAT logging for the gateway with the ERRORS_ONLY log filter. To include logs for translations performed by Cloud NAT, configure each gateway to log ALL.
Configure observability
Network Intelligence Center provides a cohesive way to monitor, troubleshoot, and visualize your cloud networking environment. Use it to ensure that your design functions with the desired intent.
The following configurations enable the collection and analysis of logs and metrics:
- You must enable the Network Management API before you can run Connectivity Tests, whether you use the API directly, the Google Cloud CLI, or the Google Cloud console.
- You must enable the Firewall Insights API before you can perform any tasks using Firewall Insights.
Next steps
The initial configuration for this network design option is now complete. You can now either repeat these steps to configure an additional instance of the landing zone environment, such as a staging or production environment, or continue to Decide the security for your Google Cloud landing zone.
Create option 4: Expose services in a consumer-producer model with Private Service Connect
If you have chosen to expose services in a consumer-producer model with Private Service Connect for your landing zone, as described in "Decide the network design for your Google Cloud landing zone", follow this procedure.
The following steps create a single instance of a landing zone. If you need more than one landing zone, perhaps one for development and one for production, repeat the steps for each landing zone.
Limit external access by using an organization policy
We recommend that you limit direct access to the internet to only the resources that need it. Resources without external addresses can still access many Google APIs and services through Private Google Access. Private Google Access is enabled at the subnet level and lets resources interact with key Google services, while isolating them from the public internet.
For usability, the default functionality of Google Cloud lets users create resources in all projects, as long as they have the correct IAM permissions. For improved security, we recommend that you restrict the default permissions for resource types that can cause unintended internet access. You can then authorize specific projects only to allow the creation of these resources. Use the instructions at Creating and managing organization policies to set the following constraints.
Restrict Protocol Forwarding Based on type of IP Address
Protocol forwarding establishes a forwarding rule resource with an external IP address and lets you direct the traffic to a VM.
The Restrict Protocol Forwarding Based on type of IP Address constraint prevents the creation of forwarding rules with external IP addresses for the entire organization. For projects authorized to use external forwarding rules, you can modify the constraint at the folder or project level.
Set the following values to configure this constraint:
- Applies to: Customize
- Policy enforcement: Replace
- Policy values: Custom
- Policy type: Deny
- Custom value:
IS:EXTERNAL
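The constraint above can also be set with the Google Cloud CLI. The following sketch assumes a hypothetical organization ID (`123456789012`); substitute your own.

```shell
# Deny creation of forwarding rules with external IP addresses
# across the whole organization.
gcloud resource-manager org-policies deny \
    compute.restrictProtocolForwardingCreationForTypes IS:EXTERNAL \
    --organization=123456789012
```

To exempt a specific project that legitimately needs external forwarding rules, run the equivalent `allow` command with `--project` instead.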
Define allowed external IPs for VM instances
By default, individual VM instances can acquire external IP addresses, which allows both outbound and inbound connectivity with the internet.
Enforcing the Define allowed external IPs for VM instances constraint prevents the use of external IP addresses with VM instances. For workloads that require external IP addresses on individual VM instances, modify the constraint at a folder or project level to specify the allowed VM instances, or override the constraint for the relevant projects.
- Applies to: Customize
- Policy enforcement: Replace
- Policy values: Deny All
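A deny-all list policy like this one can be applied from the CLI by supplying a policy file. The organization ID below is hypothetical.

```shell
# Policy file that denies external IPs for all VM instances.
cat > vm-external-ip-policy.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY
EOF

gcloud resource-manager org-policies set-policy vm-external-ip-policy.yaml \
    --organization=123456789012
```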
Disable VPC External IPv6 usage
The Disable VPC External IPv6 usage constraint, when set to True, prevents the configuration of VPC subnets with external IPv6 addresses for VM instances.
- Applies to: Customize
- Enforcement: On
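Because this is a boolean constraint, enforcement can be turned on directly. The organization ID is a placeholder.

```shell
# Enforce the constraint that blocks external IPv6 on VPC subnets.
gcloud resource-manager org-policies enable-enforce \
    compute.disableVpcExternalIpv6 \
    --organization=123456789012
```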
Disable default network creation
When a new project is created, a default VPC is automatically created. This is useful for quick experiments that don't require specific network configuration or integration with a larger enterprise networking environment.
Configure the Skip default network creation constraint to disable default VPC creation for new projects. You can manually create the default network within a project, if needed.
- Applies to: Customize
- Enforcement: On
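This boolean constraint can likewise be enforced from the CLI. The organization ID is a placeholder.

```shell
# Skip automatic creation of the default VPC in new projects.
gcloud resource-manager org-policies enable-enforce \
    compute.skipDefaultNetworkCreation \
    --organization=123456789012
```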
Design firewall rules
Firewall rules let you allow or deny traffic to or from your VMs based on a configuration you define. Hierarchical firewall policies are implemented at the organization and folder levels, and network firewall policies are implemented at the VPC network level in the resource hierarchy. Together, these provide an important capability to help secure your workloads.
Regardless of where the firewall policies are applied, use the following guidelines when designing and evaluating your firewall rules:
- Implement least-privilege (also referred to as microsegmentation) principles. Block all traffic by default and only allow the specific traffic you need. This includes limiting the rules to only the protocols and ports you need for each workload.
- Enable firewall rules logging for visibility into firewall behavior and to use Firewall Insights.
- Define a numbering methodology for allocating firewall rule priorities. For example, it's best practice to reserve a range of low numbers in each policy for rules needed during incident response. We also recommend that you prioritize more specific rules higher than more general rules, to ensure that the specific rules aren't shadowed by the general rules. The following example shows a possible approach for firewall rule priorities:
| Firewall rule priority range | Purpose |
| --- | --- |
| 0-999 | Reserved for incident response |
| 1000-1999 | Always blocked traffic |
| 2000-1999999999 | Workload-specific rules |
| 2000000000-2100000000 | Catch-all rules |
| 2100000001-2147483643 | Reserved |
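As a sketch of the priority scheme above, a catch-all rule can be placed in the 2000000000-2100000000 range so that any later workload-specific rule takes precedence. The policy name is hypothetical.

```shell
# Catch-all deny rule, logged, in the catch-all priority range.
# Workload-specific rules (priority 2000-1999999999) override it.
gcloud compute firewall-policies rules create 2000000000 \
    --firewall-policy=example-fw-policy \
    --organization=123456789012 \
    --action=deny \
    --direction=INGRESS \
    --src-ip-ranges=0.0.0.0/0 \
    --layer4-configs=all \
    --enable-logging
```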
Configure hierarchical firewall policies
Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. For examples of using hierarchical firewall policies, see Hierarchical firewall policy examples.
Define hierarchical firewall policies to implement the following network access controls:
- Identity-Aware Proxy (IAP) for TCP forwarding. IAP for TCP forwarding is allowed through a security policy that permits ingress traffic from IP range 35.235.240.0/20 for TCP ports 22 and 3389.
- Health checks for Cloud Load Balancing. The well-known ranges that are used for health checks are allowed.
- For most Cloud Load Balancing instances (including Internal TCP/UDP Load Balancing, Internal HTTP(S) Load Balancing, External TCP Proxy Load Balancing, External SSL Proxy Load Balancing, and HTTP(S) Load Balancing), a security policy is defined that allows ingress traffic from the IP ranges 35.191.0.0/16 and 130.211.0.0/22 for ports 80 and 443.
- For Network Load Balancing, a security policy is defined that enables legacy health checks by allowing ingress traffic from IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22 for ports 80 and 443.
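The access controls above can be sketched as a hierarchical firewall policy with one rule per control. The organization ID, folder ID, and policy name are hypothetical; the IP ranges and ports come from the list above.

```shell
# Create an organization-level hierarchical firewall policy.
gcloud compute firewall-policies create \
    --organization=123456789012 \
    --short-name=example-hfw-policy

# Allow IAP for TCP forwarding (SSH and RDP) from Google's IAP range.
gcloud compute firewall-policies rules create 2000 \
    --firewall-policy=example-hfw-policy \
    --organization=123456789012 \
    --action=allow --direction=INGRESS \
    --src-ip-ranges=35.235.240.0/20 \
    --layer4-configs=tcp:22,tcp:3389 \
    --enable-logging

# Allow Cloud Load Balancing health checks.
gcloud compute firewall-policies rules create 2001 \
    --firewall-policy=example-hfw-policy \
    --organization=123456789012 \
    --action=allow --direction=INGRESS \
    --src-ip-ranges=35.191.0.0/16,130.211.0.0/22 \
    --layer4-configs=tcp:80,tcp:443 \
    --enable-logging

# Associate the policy with a folder (hypothetical folder ID).
gcloud compute firewall-policies associations create \
    --firewall-policy=example-hfw-policy \
    --organization=123456789012 \
    --folder=345678901234
```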
Configure the VPC environment
The transit VPC provides the networking resources to enable connectivity between workload spoke VPC networks and on-premises or multi-cloud networks.
- Create a new project for the transit VPC network.
- Enable the Compute Engine API for the project.
- Create the transit custom mode VPC network.
- Create a Private Service Connect subnet in each region where you plan to publish services running in your hub VPC or on-premises environment. Consider Private Service Connect subnet sizing when deciding your IP addressing plan.
- For each on-premises service you want to expose to workloads running in Google Cloud, create an internal HTTP(S) or TCP proxy load balancer and expose the services using Private Service Connect.
- Configure Private Service Connect for Google APIs for the transit VPC.
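Steps 1 through 4 above might look like the following, assuming hypothetical project, folder, network, and subnet names and an example IP range; adjust these to your own addressing plan.

```shell
# Create the transit project and enable the Compute Engine API.
gcloud projects create transit-vpc-prj --folder=345678901234
gcloud services enable compute.googleapis.com --project=transit-vpc-prj

# Create the transit custom mode VPC network.
gcloud compute networks create transit-vpc \
    --project=transit-vpc-prj \
    --subnet-mode=custom

# Create a Private Service Connect subnet in a region where you
# plan to publish services (repeat per region).
gcloud compute networks subnets create psc-subnet-us-central1 \
    --project=transit-vpc-prj \
    --network=transit-vpc \
    --region=us-central1 \
    --range=10.10.0.0/24 \
    --purpose=PRIVATE_SERVICE_CONNECT
```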
Configure hybrid connectivity
You can use Dedicated Interconnect, Partner Interconnect, or Cloud VPN to provide hybrid connectivity to your landing zone. The following steps create the initial hybrid connectivity resources required for this design option:
- If you're using Dedicated Interconnect, do the following. If you're using Partner Interconnect or Cloud VPN, you can skip these steps.
- For each region where you're terminating hybrid connectivity in the VPC network, do the following:
- Create two Dedicated or Partner VLAN attachments, one for each edge availability zone. As part of this process, you select Cloud Routers and create BGP sessions.
- Configure the peer network (on-premises or other cloud) routers.
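For Dedicated Interconnect, the Cloud Router and one of the two VLAN attachments might be created as follows. All names, the ASN, and the Interconnect name are hypothetical; repeat the attachment command for the second edge availability zone.

```shell
# Cloud Router for BGP sessions in the transit VPC.
gcloud compute routers create transit-cr-us-central1 \
    --project=transit-vpc-prj \
    --network=transit-vpc \
    --region=us-central1 \
    --asn=65001

# VLAN attachment on an existing Dedicated Interconnect
# (one per edge availability zone).
gcloud compute interconnects attachments dedicated create vlan-attach-zone1 \
    --project=transit-vpc-prj \
    --region=us-central1 \
    --router=transit-cr-us-central1 \
    --interconnect=example-interconnect-zone1
```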
Configure workload projects
Create a separate VPC for each workload:
- Create a new project to host your workload.
- Enable the Compute Engine API for the project.
- Create a custom-mode VPC network.
- Create subnets in the regions where you plan to deploy workloads. For each subnet, enable Private Google Access to allow VM instances with only internal IP addresses to reach Google services.
- Configure Private Service Connect for Google APIs.
- For each workload you're consuming from a different VPC or your on-premises environment, create a Private Service Connect consumer endpoint.
- For each workload you're producing for a different VPC or your on-premises environment, create an internal load balancer and service attachment for the service. Consider Private Service Connect subnet sizing when deciding your IP addressing plan.
- If the service should be reachable from your on-premises environment, create a Private Service Connect consumer endpoint in the transit VPC.
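The workload-project steps above can be sketched as follows. The project, folder, network, subnet, address, and service attachment names are hypothetical, as is the IP range; the consumer endpoint is a forwarding rule that targets a producer's service attachment.

```shell
# Create the workload project and enable the Compute Engine API.
gcloud projects create workload-a-prj --folder=345678901234
gcloud services enable compute.googleapis.com --project=workload-a-prj

# Custom mode VPC and a subnet with Private Google Access enabled.
gcloud compute networks create workload-a-vpc \
    --project=workload-a-prj --subnet-mode=custom
gcloud compute networks subnets create workload-a-us-central1 \
    --project=workload-a-prj \
    --network=workload-a-vpc \
    --region=us-central1 \
    --range=10.20.0.0/24 \
    --enable-private-ip-google-access

# Reserve an internal IP and create a Private Service Connect
# consumer endpoint for a service produced in another VPC.
gcloud compute addresses create psc-endpoint-ip \
    --project=workload-a-prj \
    --region=us-central1 \
    --subnet=workload-a-us-central1 \
    --addresses=10.20.0.10
gcloud compute forwarding-rules create psc-endpoint \
    --project=workload-a-prj \
    --region=us-central1 \
    --network=workload-a-vpc \
    --address=psc-endpoint-ip \
    --target-service-attachment=projects/producer-prj/regions/us-central1/serviceAttachments/example-svc
```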
Configure Cloud NAT
Follow these steps if the workloads in specific regions require outbound internet access—for example, to download software packages or updates.
- Create a Cloud NAT gateway in the regions where workloads require outbound internet access. You can customize the Cloud NAT configuration to only allow outbound connectivity from specific subnets, if needed.
- At a minimum, enable Cloud NAT logging for the gateway to log ERRORS_ONLY. To include logs for translations performed by Cloud NAT, configure each gateway to log ALL.
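A Cloud NAT gateway with error-only logging might be created as follows; the router and gateway names are hypothetical, and the router must exist in the same region and network as the workloads.

```shell
# Cloud Router that hosts the NAT gateway.
gcloud compute routers create workload-a-cr-us-central1 \
    --project=workload-a-prj \
    --network=workload-a-vpc \
    --region=us-central1

# NAT gateway for all subnets in the region, logging errors only.
# Use --log-filter=ALL to also log translations.
gcloud compute routers nats create workload-a-nat \
    --project=workload-a-prj \
    --router=workload-a-cr-us-central1 \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges \
    --enable-logging \
    --log-filter=ERRORS_ONLY
```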
Configure observability
Network Intelligence Center provides a cohesive way to monitor, troubleshoot, and visualize your cloud networking environment. Use it to ensure that your design functions with the desired intent.
The following configurations support the analysis of the logs and metrics that you enabled in the previous steps.
- You must enable the Network Management API before you can run Connectivity Tests, whether you use the API directly, the Google Cloud CLI, or the Google Cloud console.
- You must enable the Firewall Insights API before you can perform any tasks using Firewall Insights.
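Both APIs can be enabled in one command; the project ID below is a placeholder for the project where you run Connectivity Tests and Firewall Insights.

```shell
# Enable the Network Management and Firewall Insights APIs.
gcloud services enable \
    networkmanagement.googleapis.com \
    firewallinsights.googleapis.com \
    --project=transit-vpc-prj
```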
Next steps
The initial configuration for this network design option is now complete. You can now either repeat these steps to configure an additional instance of the landing zone environment, such as a staging or production environment, or continue to Decide the security for your Google Cloud landing zone.
What's next
- Decide the security for your Google Cloud landing zone (next document in this series).
- Read Best practices for VPC network design.
- Read more about Private Service Connect.