Developers & Practitioners

Two networking patterns for secure intra-cloud access - Networking Architecture

January 9, 2023
Ammett Williams

Developer Relations Engineer

In a previous article, 6 building blocks for cloud networking, we discussed an easy-to-remember breakdown of cloud networking. This model is made up of network connectivity, network security, service networking, service security, content delivery, and observability.

In this article we look at two simple networking patterns for secure intra-cloud access. The document Networking for secure intra-cloud access: Reference architectures goes into more detail on this topic; the two patterns discussed here provide an interesting introduction to it.

The purpose of secure intra-cloud access 

Intra-cloud communication covers workloads that reside in a customer's Virtual Private Cloud (VPC) and need to connect to other resources in Google Cloud, which can span multiple VPCs. Securing intra-cloud communication can provide separation of concerns, isolate vulnerabilities, and create secure boundaries.

Secure communication can be achieved with either native tools or third-party capabilities. Each organization configures this based on its security compliance needs and the type of workloads it runs.

The Patterns

The document Networking for secure intra-cloud access: Reference architectures describes several patterns, of which I found these two particularly interesting:

  • Network Virtual Appliance (NVA)

  • Private Service Connect (PSC)

Each of these designs requires a VPC, firewall rules, and the correct IAM permissions. Let's look at them in more detail below.

Network Virtual Appliance

Customers may need a centralized VM appliance that traffic is routed through. Common use cases are third-party next-generation firewalls, intrusion detection systems (IDS), web application firewalls (WAF), NAT, and transparent proxies.

The design below shows an NVA in a Shared VPC, connected to multiple subnets through multiple NICs.

https://storage.googleapis.com/gweb-cloudblog-publish/images/network-virtual-appliance.max-2200x2200.png

The design elements are as follows.

Shared VPC
Shared VPC is set up in a host and service project manner.

  • VM appliance - located in the host project with IP forwarding enabled; it has multiple network interface cards (nic0-nic4).

  • Subnets - several subnets with private IP address ranges are configured in the host project's Shared VPC. The VM appliance has a separate NIC connecting to each subnet.
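As a minimal sketch, the appliance VM above could be created with `gcloud`. The project, zone, network, and subnet names here are hypothetical, not taken from the reference architecture; note that each standard vNIC on a VM must attach to a different VPC network.

```shell
# Create the appliance VM with IP forwarding enabled and multiple NICs.
# --can-ip-forward lets the VM send/receive packets with non-matching
# source/destination IPs, which routed traffic requires.
gcloud compute instances create nva-appliance \
    --project=host-project \
    --zone=us-central1-a \
    --can-ip-forward \
    --network-interface=network=vpc-net-0,subnet=subnet-0,no-address \
    --network-interface=network=vpc-net-1,subnet=subnet-1,no-address
```

Without `--can-ip-forward`, Google Cloud drops any packet the VM forwards on behalf of other hosts.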

Service projects
Service projects connect to the Shared VPC subnets:

  • Service project resources - the service project admins have created VMs in subnets they have access to. 

  • Next-hop routing - the next-hop route has to be set by using either the next-hop-instance name or the next-hop-address IP.
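The next-hop route described above can be sketched with `gcloud compute routes create`, using either of the two forms the article mentions. Route, network, and instance names here are hypothetical.

```shell
# Option 1: next hop by instance name.
gcloud compute routes create route-via-nva \
    --network=vpc-net-1 \
    --destination-range=0.0.0.0/0 \
    --next-hop-instance=nva-appliance \
    --next-hop-instance-zone=us-central1-a

# Option 2: next hop by the appliance's internal IP address
# (10.0.1.10 is a placeholder for the NIC's IP in that network).
# gcloud compute routes create route-via-nva-ip \
#     --network=vpc-net-1 \
#     --destination-range=0.0.0.0/0 \
#     --next-hop-address=10.0.1.10
```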

Public Connections
The host project connects externally as follows:

  • VM appliance - has a NIC attached to the network-perimeter subnet and also a public virtual IP (VIP).

Overall, this design allows you to filter and inspect traffic between the service project networks connected to each NIC of the VM appliance.

Private Service Connect

Private Service Connect allows you to connect to services that reside in Google, Google Cloud, or external locations, all via an internal IP address in your VPC. There are several patterns, but we will look at the consumer-producer connection via an internal load balancer.
https://storage.googleapis.com/gweb-cloudblog-publish/images/psc-w-vpc.max-2200x2200.png

Producer network
In the producer network the configuration is as follows:

  • Managed instance group (MIG) - provides autoscaling and provisioning of application VM servers in subnet 202, which is assigned the 10.101.0.0/16 address range.

  • Internal load balancer (ILB) - exposes the service via one internal IP address and connects to the MIG backend. An Envoy-based ILB requires a proxy-only subnet to run its Envoy proxies.

  • Private Service Connect service attachment - publishes the service in a region. You control who can access it by allowing either anyone with the service address or specific consumers.
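The producer-side publishing step above can be sketched as a `service-attachments` command. The attachment, forwarding rule, NAT subnet, and consumer project names are assumptions for illustration; the ILB's forwarding rule and a PSC NAT subnet must already exist.

```shell
# Publish the ILB-fronted service as a PSC service attachment.
# ACCEPT_MANUAL restricts access to explicitly listed consumer
# projects (with a connection limit per project).
gcloud compute service-attachments create my-service-attachment \
    --region=us-central1 \
    --producer-forwarding-rule=my-ilb-forwarding-rule \
    --connection-preference=ACCEPT_MANUAL \
    --consumer-accept-list=consumer-project=10 \
    --nat-subnets=psc-nat-subnet
```

Using `--connection-preference=ACCEPT_AUTOMATIC` instead would allow any consumer that knows the service attachment URI to connect.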

Consumer network
In the consumer network the configuration is as follows:

  • The clients are located in subnet 101 and in this case they are VMs.

  • The Private Service Connect endpoint is located in subnet 102. It is assigned an internal IP address (10.100.7.9) and makes a connection to the advertised service attachment created in the producer's network. Once this connection is established, whenever a client wants to access the service it just needs to point to the private IP assigned to the PSC endpoint.

  • Service Directory - this is not shown in the diagram, but a Service Directory namespace is created as part of the PSC endpoint setup. This namespace can be linked to a Cloud DNS private zone so that the service can be resolved by name.
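The consumer-side endpoint described above can be sketched as two commands: reserve the internal address (10.100.7.9 from the diagram), then create a forwarding rule targeting the producer's service attachment. Resource and project names are hypothetical.

```shell
# Reserve the endpoint's internal IP in the consumer subnet.
gcloud compute addresses create psc-endpoint-ip \
    --region=us-central1 \
    --subnet=subnet-102 \
    --addresses=10.100.7.9

# Create the PSC endpoint: a forwarding rule whose target is the
# producer's published service attachment.
gcloud compute forwarding-rules create psc-endpoint \
    --region=us-central1 \
    --network=consumer-vpc \
    --address=psc-endpoint-ip \
    --target-service-attachment=projects/producer-project/regions/us-central1/serviceAttachments/my-service-attachment
```

After this, clients in the consumer VPC reach the producer's service simply by connecting to 10.100.7.9.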

Overall, this design limits exposure to the internet and avoids the need for complex configurations. Two main benefits are as follows:

  • There is no exposure of ports or VMs on the producer side, since the consumer can only access the specified published service.

  • Consumers can choose their own private IP addresses. These are independent of the producer's IPs, so the problem of overlapping IP addresses is avoided.

Deeper look

It is worth checking out the Networking for secure intra-cloud access: Reference architectures document to see more patterns. 

https://storage.googleapis.com/gweb-cloudblog-publish/original_images/intra-doc.gif

More on architecture

To learn more about networking architecture, I recommend reading the following documents:

Want to ask a question, find out more or share a thought? Please connect with me on Linkedin or Twitter: @ammettw.
