Hybrid Connectivity Using Your Own Public IP Addresses on Compute Engine

This tutorial demonstrates how to create and use your own public IP addresses, or custom IP addresses, on virtual machines (VMs) in Compute Engine on Google Cloud Platform (GCP). This tutorial assumes that you have a provider-independent IP space that you can advertise by using your own routing equipment. It also assumes that you have some experience with Compute Engine.

You might want to use custom IP addresses because:

  • Your own IP addresses are hard-coded in applications that cannot be easily changed.
  • Your own IP addresses have an established reputation, so destinations accept traffic from these endpoints. This is essential for mail-sending applications.

The following diagram shows the steps to add a custom IP address to a VM and advertise it to the internet through Dedicated Interconnect.

The numbers in the graphic correspond to the following steps:

  1. Add your custom IP addresses to each VM.
  2. Add your public IP prefixes to advertise to your peering router by using custom route advertising.
  3. Add static routes that direct traffic for the custom IP addresses back to the VM instance.
  4. Set up an internet transit connection and advertise your public IP prefixes.

At the end of the tutorial, you test the connection by:

  1. Pinging an interconnected VM.
  2. Verifying that the requests to the custom IP address are routed correctly.
  3. Confirming that the custom IP address is forwarded as the source for outgoing requests.

This tutorial uses Cloud Router custom route advertisements, which you can use to restrict or extend IP ranges in Border Gateway Protocol (BGP) advertisements globally or on a per-neighbor basis. You can withhold one or more subnet ranges from being advertised, or you can advertise ranges that your Cloud Router doesn't advertise by default.
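As a sketch of the per-neighbor variant, the following command restricts a custom advertisement to a single BGP peer. It assumes a Cloud Router named cloud-router with a peer named bgp-peer-cloud, the names that this tutorial creates later; the tutorial itself uses the per-router form in a later step.

    gcloud compute routers update-bgp-peer cloud-router \
      --peer-name bgp-peer-cloud \
      --advertisement-mode CUSTOM \
      --set-advertisement-groups ALL_SUBNETS \
      --set-advertisement-ranges 198.51.100.10/32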

Objectives

  • Add a custom IP address to a VM.
  • Use custom route advertising to advertise the IP address from your Cloud Router through Dedicated Interconnect or Cloud VPN.
  • Add a static route that directs traffic for the custom IP address back to the VM instance.
  • Test the connectivity.

Costs

This tutorial uses billable components of Cloud Platform, including:

  • Compute Engine
  • Dedicated Interconnect or Cloud VPN

Use the Pricing Calculator to generate a cost estimate based on your projected usage. New GCP users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a Google Cloud Platform project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. Enable the Compute Engine API.

    Enable the API

  5. Establish Dedicated Interconnect or Cloud VPN from your on-premises infrastructure to your GCP project.

Setting up your environment

In this section you configure the project settings that you need in order to complete the tutorial.

Start a Cloud Shell instance

Open Cloud Shell

You work through the rest of the tutorial in Cloud Shell.

Configure project settings

To simplify using the gcloud command-line tool, you can set up the following properties so that you don't have to supply them with each command:

  1. Set your default project by using your project ID for [PROJECT_ID]:

    gcloud config set project [PROJECT_ID]
  2. Set your default Compute Engine region by using your preferred region for [REGION]:

    gcloud config set compute/region [REGION]
    export REGION=[REGION]
    
  3. Set your default Compute Engine zone by using your preferred zone for [ZONE]:

    gcloud config set compute/zone [ZONE]
    export ZONE=[ZONE]
    
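To confirm these defaults before you continue, you can print the active gcloud configuration:

    gcloud config list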

(Optional) Create interconnected virtual private clouds (VPCs)

When you implement this solution in a production environment, your on-premises environment is connected to your GCP project through Cloud VPN or Dedicated Interconnect.

If you want to try this method for this tutorial, but you don't have an established Cloud VPN or dedicated connection, you can use the following steps to create two interconnected VPCs in the same project. One VPC houses the custom IP address and the other acts as the on-premises environment.

  1. Create a VPC that includes a custom subnetwork for the VM that will host the custom IP address:

    gcloud compute networks create custom-ip-vpc --subnet-mode custom
    gcloud compute networks subnets create subnet-custom-ip \
       --network custom-ip-vpc \
       --range 10.10.20.0/24
    
  2. Create a second VPC that includes a custom subnetwork to represent your on-premises network. Create this second VPC in the same project as the first VPC. You use 172.16.0.0/16 as the address range:

    gcloud compute networks create on-premises-vpc --subnet-mode custom
    gcloud compute networks subnets create subnet-on-premises \
       --network on-premises-vpc \
       --range 172.16.0.0/16
    
  3. Create one Cloud Router for each VPC. The autonomous system number (ASN) must be unique throughout your peered network. You can choose any private ASN from 64512 to 65534 or from 4200000000 to 4294967294.

    gcloud compute routers create cloud-router \
       --network custom-ip-vpc \
       --asn 65001
    gcloud compute routers create on-premises-router \
       --network on-premises-vpc \
       --asn 65002
    
  4. Create a Cloud VPN gateway for each VPC:

    gcloud compute target-vpn-gateways create cloud-vpn \
        --network custom-ip-vpc
    gcloud compute target-vpn-gateways create on-premises-vpn \
        --network on-premises-vpc
    
  5. Reserve static IP addresses to use for the peering points of the Cloud VPN tunnel:

    gcloud compute addresses create cloud-vpn-static-ip --region $REGION
    gcloud compute addresses create on-premises-vpn-static-ip --region $REGION
    
  6. Retrieve the reserved IP addresses:

    gcloud compute addresses list
    

    The output looks like this:

     NAME                      REGION        ADDRESS     STATUS
     cloud-vpn-static-ip       europe-west1  203.0.113.1 RESERVED
     on-premises-vpn-static-ip europe-west1  203.0.113.2 RESERVED
  7. Create rules that forward IPsec-specific traffic toward the Cloud VPN gateway: the Encapsulating Security Payload (ESP) protocol and User Datagram Protocol (UDP) ports 500 and 4500. Use the static IP addresses that you reserved in a previous step.

    gcloud compute forwarding-rules create fr-esp-cloud \
      --address cloud-vpn-static-ip \
      --target-vpn-gateway cloud-vpn \
      --ip-protocol ESP \
      --region $REGION
    
    gcloud compute forwarding-rules create fr-udp500-cloud \
      --address cloud-vpn-static-ip \
      --target-vpn-gateway cloud-vpn \
      --ip-protocol UDP \
      --ports 500 \
      --region $REGION
    
    gcloud compute forwarding-rules create fr-udp4500-cloud \
       --address cloud-vpn-static-ip \
       --target-vpn-gateway cloud-vpn \
       --ip-protocol UDP \
       --ports 4500 \
       --region $REGION
    
    Create the same forwarding rules for the on-premises Cloud VPN gateway, adjusting the IP address and target gateway:

    gcloud compute forwarding-rules create fr-esp-on-premises \
      --address on-premises-vpn-static-ip \
      --target-vpn-gateway on-premises-vpn \
      --ip-protocol ESP \
      --region $REGION
    
    gcloud compute forwarding-rules create fr-udp500-on-premises \
      --address on-premises-vpn-static-ip \
      --target-vpn-gateway on-premises-vpn \
      --ip-protocol UDP \
      --ports 500 \
      --region $REGION
    
    gcloud compute forwarding-rules create fr-udp4500-on-premises \
      --address on-premises-vpn-static-ip \
      --target-vpn-gateway on-premises-vpn \
      --ip-protocol UDP \
      --ports 4500 \
      --region $REGION
    
  8. Verify that you've created all the rules:

    gcloud compute forwarding-rules list

    The output looks like this:

    NAME                    REGION        IP_ADDRESS    IP_PROTOCOL TARGET
    fr-esp-cloud            europe-west1  203.0.113.1   ESP europe-west1/targetVpnGateways/cloud-vpn
    fr-udp500-cloud         europe-west1  203.0.113.1   UDP europe-west1/targetVpnGateways/cloud-vpn
    fr-udp4500-cloud        europe-west1  203.0.113.1   UDP europe-west1/targetVpnGateways/cloud-vpn
    fr-esp-on-premises      europe-west1  203.0.113.2   ESP europe-west1/targetVpnGateways/on-premises-vpn
    fr-udp500-on-premises   europe-west1  203.0.113.2   UDP europe-west1/targetVpnGateways/on-premises-vpn
    fr-udp4500-on-premises  europe-west1  203.0.113.2   UDP europe-west1/targetVpnGateways/on-premises-vpn
    
  9. Create the Cloud VPN tunnels that connect the two gateways.

    For the --peer-address flag, use the static IP address that you assigned to the other Cloud VPN gateway. You can retrieve the address that you reserved in a previous step inline with the gcloud tool, as shown here.

    gcloud compute vpn-tunnels create tunnel-cloud \
      --ike-version 2 \
      --target-vpn-gateway cloud-vpn \
      --peer-address $(gcloud compute addresses list \
        --format="value(address)" --filter="name=on-premises-vpn-static-ip") \
      --shared-secret SHAREDSECRET \
      --router cloud-router
    
    gcloud compute vpn-tunnels create tunnel-on-premises \
      --ike-version 2 \
      --target-vpn-gateway on-premises-vpn \
      --peer-address $(gcloud compute addresses list \
        --format="value(address)" --filter="name=cloud-vpn-static-ip") \
      --shared-secret SHAREDSECRET \
      --router on-premises-router
    

    You have allocated the resources for the Cloud VPN tunnel, but no traffic is forwarded yet.

  10. Verify that the tunnels are created:

    gcloud compute vpn-tunnels list

    The output looks like this:

    NAME                REGION        GATEWAY         PEER_ADDRESS
    tunnel-cloud        europe-west1  cloud-vpn       203.0.113.2
    tunnel-on-premises  europe-west1  on-premises-vpn 203.0.113.1
  11. Update the Cloud Router configuration to add a virtual interface to establish a BGP session. Do this update for both peering points:

    gcloud compute routers add-interface cloud-router \
      --interface-name if-cloud \
      --ip-address 169.254.1.1 \
      --mask-length 30 \
      --vpn-tunnel tunnel-cloud
    
    gcloud compute routers add-interface on-premises-router \
      --interface-name if-on-premises \
      --ip-address 169.254.1.2 \
      --mask-length 30 \
      --vpn-tunnel tunnel-on-premises
    

    The IP addresses that you use here must belong to the link-local address block 169.254.0.0/16. Every tunnel in your peered network requires a unique pair of IP addresses.
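    For example, if you later added a second pair of tunnels (hypothetical names tunnel-cloud-2 and tunnel-on-premises-2), you could assign the next /30 pair from the link-local block:

    gcloud compute routers add-interface cloud-router \
      --interface-name if-cloud-2 \
      --ip-address 169.254.1.5 \
      --mask-length 30 \
      --vpn-tunnel tunnel-cloud-2

    gcloud compute routers add-interface on-premises-router \
      --interface-name if-on-premises-2 \
      --ip-address 169.254.1.6 \
      --mask-length 30 \
      --vpn-tunnel tunnel-on-premises-2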

  12. Update both routers to add the BGP peer to the interfaces:

    gcloud compute routers add-bgp-peer cloud-router \
      --peer-name bgp-peer-cloud \
      --interface if-cloud \
      --peer-ip-address 169.254.1.2 \
      --peer-asn 65002
    
    gcloud compute routers add-bgp-peer on-premises-router \
      --peer-name bgp-peer-on-premises \
      --interface if-on-premises \
      --peer-ip-address 169.254.1.1 \
      --peer-asn 65001
    
  13. Verify the setting by providing the respective Cloud Router resource name:

    gcloud compute routers describe [CLOUD_ROUTER_NAME]
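    You can also check whether the BGP sessions are established by querying the router status; for example, for the Cloud Router that is named cloud-router:

    gcloud compute routers get-status cloud-router

    In the output, look for the bgpPeerStatus entries; a working session reports the state Established.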
  14. Create firewall rules that allow inbound traffic from the peered on-premises VPC:

    gcloud compute firewall-rules create vpnrule \
        --network custom-ip-vpc \
        --allow tcp,udp,icmp \
        --source-ranges $(gcloud compute networks subnets list \
        --format="value(ipCidrRange)" \
        --filter="name=subnet-on-premises") \
        --target-tags=custom-ip
    

Creating a VM with a custom IP address

In this section you create a VM and add a custom IP address to that VM. Then you configure the Cloud Router and your VPC to allow traffic that uses this custom IP address. To test the setup, you create a VM in the on-premises network or in the VPC that represents it. To connect through SSH to the VM that hosts the custom IP address, you need a bastion host.

The following diagram shows the result of the preceding steps and includes the VMs mentioned in this section.

  1. Create a VM.

    Use the VPC custom-ip-vpc that you created in the preceding steps. If you use your own VPC, you must change the name and IP addresses accordingly throughout this tutorial.

    gcloud compute instances create "custom-ip-vm" \
      --machine-type "n1-standard-1" \
      --subnet "subnet-custom-ip" \
      --no-address \
      --can-ip-forward \
      --tags=custom-ip,ssh
    

    To communicate by using the custom IP address, you must enable IP forwarding. Otherwise, GCP drops traffic that doesn't originate from the IP addresses that are assigned to the VM.
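    If you want to confirm the forwarding setting on an existing instance, you can query the instance resource; the command prints True when IP forwarding is enabled. Note that you can't change this setting after the VM is created.

    gcloud compute instances describe custom-ip-vm \
      --format="value(canIpForward)"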

  2. Create the bastion host:

    gcloud compute instances create "bastion-host" \
      --machine-type "n1-standard-1" \
      --subnet "subnet-custom-ip" \
      --can-ip-forward \
      --tags=ssh
    

    The bastion host resides in the same VPC as the VM that hosts the custom IP address.

  3. Create a firewall rule that allows SSH connections to the bastion host VM and to the VM that will host the custom IP address:

    gcloud compute firewall-rules create allow-ssh-custom-ip \
      --network custom-ip-vpc \
      --allow tcp:22 \
      --source-ranges 0.0.0.0/0 \
      --target-tags=ssh
    
  4. Establish a connection through the bastion host to the VM that will host the custom IP address:

    eval "$(ssh-agent -s)"
    ssh-add .ssh/google_compute_engine
    gcloud compute ssh --ssh-flag="-A" bastion-host
    ssh custom-ip-vm
    
  5. Add the custom IP address and routes to the VM by using a subinterface:

    sudo nano /etc/network/interfaces

    Add the following lines to the end of the file:

    auto eth0:0
    iface eth0:0 inet static
    address 198.51.100.10
    netmask 255.255.255.255
    up ip route add 10.0.0.0/8 via 10.10.20.1 dev eth0 src 10.10.20.2
    up ip route change default via 10.10.20.1 src 198.51.100.10
    

    The example uses 198.51.100.10 as a non-RFC 1918 IP address, which is part of the RFC 5737 block that is reserved for documentation purposes. Replace this IP address with your custom IP address.

    Changing the default route makes the custom IP address the source for all outgoing traffic that isn't destined for the 10.0.0.0/8 range. The added 10.0.0.0/8 route keeps traffic to instances in that range on the standard interface eth0, sourced from the internal IP address of the VM. You use this internal IP address to communicate with the VM from other instances within your GCP project.

    By changing the default route, you get the benefits of the custom IP address without having to change anything on your application. The route settings must always comply with your use case requirements and might differ from the settings that are used in this tutorial.
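    If your image doesn't use /etc/network/interfaces (for example, if it manages networking with systemd-networkd or netplan), the following sketch applies the same address and routes at runtime with iproute2. It assumes the same addresses as the example above, and the changes don't persist across reboots:

    # Add the custom IP address as a secondary address on eth0.
    sudo ip addr add 198.51.100.10/32 dev eth0 label eth0:0
    # Keep traffic to the 10.0.0.0/8 range sourced from the internal IP address.
    sudo ip route add 10.0.0.0/8 via 10.10.20.1 dev eth0 src 10.10.20.2
    # Source all other outgoing traffic from the custom IP address.
    sudo ip route change default via 10.10.20.1 src 198.51.100.10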

  6. Save your changes and then restart the networking services:

    sudo /etc/init.d/networking restart
  7. After the networking services restart, verify that the VM is using the custom IP address:

    ip addr

    The output confirms that the VM is using the custom IP address through the subinterface eth0:0 that you defined previously:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 42:01:0a:0a:14:02 brd ff:ff:ff:ff:ff:ff
        inet 10.10.20.2/32 brd 10.10.20.2 scope global eth0
           valid_lft forever preferred_lft forever
        inet 198.51.100.10/32 brd 198.51.100.10 scope global eth0:0
           valid_lft forever preferred_lft forever
        inet6 fe80::4001:aff:fe0a:1402/64 scope link
           valid_lft forever preferred_lft forever
    
  8. Verify that you set the routes correctly:

    ip route show

    The output looks something like the following:

    default via 10.10.20.1 dev eth0 src 198.51.100.10
    10.0.0.0/8 via 10.10.20.1 dev eth0 src 10.10.20.2
    10.10.20.1 dev eth0 scope link
    

    The first line shows that the updated default route uses your custom IP address as the source for outgoing traffic. The second line shows a route that allows all traffic within the 10.0.0.0/8 range to use the standard IP address of the VM as the source. This route is used for internal traffic that isn't facing the public internet.

  9. Close the SSH connection to the VM:

    exit
  10. Close the SSH connection to the bastion host:

    exit

Creating a route for the custom IP address

Now that you have set up the custom IP address on the VM, you need to set routing rules on GCP to make sure that traffic to and from this VM is routed correctly.

  1. Create a route that takes all requests that are sent to the custom IP address and redirects those requests to the VM that you configured in the previous step:

    gcloud compute routes create custom-ip-vm-route \
      --network=custom-ip-vpc \
      --destination-range=198.51.100.10/32 \
      --next-hop-instance=custom-ip-vm
    
  2. Create another route that redirects all outgoing traffic that is sent from the instance with the custom IP address to the internet through the Cloud VPN tunnel or dedicated connection:

    gcloud compute routes create default-route \
      --network=custom-ip-vpc \
      --priority=100 \
      --tags=custom-ip \
      --destination-range=0.0.0.0/0  \
      --next-hop-vpn-tunnel=tunnel-cloud
    
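    To verify both routes, you can list the routes in the VPC. The route that is named default-route applies only to instances that carry the custom-ip network tag:

    gcloud compute routes list \
      --filter="network:custom-ip-vpc"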

Configuring custom route advertisements

You can use custom route advertisements over a Cloud Router BGP session to selectively advertise specific subnets through BGP dynamic routing. Instead of restricting the advertisement of subnets, you advertise your custom IP address by populating the --set-advertisement-ranges flag with the custom IP address that you added to your newly created VM.

  1. Update the advertisement configuration for your Cloud Router:

    gcloud compute routers update cloud-router \
      --advertisement-mode=CUSTOM \
      --set-advertisement-groups=ALL_SUBNETS \
      --set-advertisement-ranges=198.51.100.10/32
    

    The custom IP address is advertised to your peering router, or to the Cloud Router in the VPC that simulates the on-premises environment.
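    To inspect the resulting advertisement configuration, you can describe the router and select its BGP fields:

    gcloud compute routers describe cloud-router \
      --format="yaml(bgp.advertiseMode,bgp.advertisedGroups,bgp.advertisedIpRanges)"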

  2. Verify that firewall rules allow incoming traffic to your new VM:

    gcloud compute firewall-rules list

    Make sure that traffic is allowed through the Cloud VPN tunnel or dedicated connection to the VM that will host the custom IP address. If you followed the optional create interconnected VPCs steps, the firewall rule that is named vpnrule regulates this traffic.
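    If you created the interconnected VPCs earlier, you can inspect that rule directly:

    gcloud compute firewall-rules describe vpnrule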

Testing the setup

To test whether your VM can handle incoming and outgoing traffic that's using the custom IP address, follow these steps:

  • Create a VM in the interconnected VPC or on-premises network.
  • Ping the custom IP address to verify that incoming traffic gets routed correctly.
  • Ping the VM that resides in the interconnected VPC or on-premises network from the custom IP address.
  • Use tcpdump on the interconnected VPC or on-premises network to verify that the custom IP address is forwarded as the source address for outbound traffic.

If you didn't follow the create interconnected VPCs steps, you can work with a machine in your on-premises environment in a similar way. Replace all IP addresses accordingly.

  1. Create a VM in the interconnected VPC or on-premises network:

    gcloud compute instances create "on-premises-vm" \
      --machine-type "n1-standard-1" \
      --subnet "subnet-on-premises" \
      --tags "on-premises-vm"
    
  2. Create firewall rules that allow SSH connections to the VM that you created and ICMP traffic from the custom IP address:

    gcloud compute firewall-rules create allow-ssh-on-premises \
      --network on-premises-vpc \
      --allow tcp:22 \
      --source-ranges 0.0.0.0/0 \
      --target-tags=on-premises-vm
    
    gcloud compute firewall-rules create allow-icmp-on-premises \
      --network on-premises-vpc \
      --allow icmp \
      --source-ranges=198.51.100.10/32
    
  3. Ping the custom IP address from the VM that resides in the interconnected VPC to verify that incoming traffic is routed correctly:

    gcloud compute ssh on-premises-vm
    ping 198.51.100.10 -c 3
    

    At this stage the request should be routed correctly to the VM that is using the custom IP address.

  4. Install tcpdump on the interconnected VM:

    sudo apt-get update
    sudo apt-get install tcpdump -y
    
  5. Run tcpdump to verify that the custom IP address is forwarded as the source address for outbound traffic:

    sudo /usr/sbin/tcpdump icmp -n -c 5

    Keep this window open.

  6. From a separate window, connect to the VM that hosts the custom IP address by using SSH agent forwarding through the bastion host, in the same way that you did previously, and ping the on-premises VM. The following steps walk through this process.

  7. Retrieve the IP address of the simulated on-premises VM:

    gcloud compute instances list  \
      --format="value(networkInterfaces[0].networkIP)" \
      --filter="name=on-premises-vm"
    

    The output is the IP address of the on-premises VM.

  8. Use the bastion host to log in to the VM that is using the custom IP address:

    eval "$(ssh-agent -s)"
    ssh-add .ssh/google_compute_engine
    gcloud compute ssh --ssh-flag="-A" bastion-host
    ssh custom-ip-vm
    
  9. Ping the on-premises VM. Replace [IP_ADDRESS] with the IP address of the on-premises VM.

    ping [IP_ADDRESS] -c 3

    The output of tcpdump looks something like this:

    15:00:22.448708 IP 198.51.100.10 > 172.16.0.2: ICMP echo request, id 26718, seq 1, length 64
    15:00:22.448744 IP 172.16.0.2 > 198.51.100.10: ICMP echo reply, id 26718, seq 1, length 64
    15:00:23.449013 IP 198.51.100.10 > 172.16.0.2: ICMP echo request, id 26718, seq 2, length 64
    15:00:23.449046 IP 172.16.0.2 > 198.51.100.10: ICMP echo reply, id 26718, seq 2, length 64
    15:00:24.450382 IP 198.51.100.10 > 172.16.0.2: ICMP echo request, id 26718, seq 3, length 64
    

    This output shows that the custom IP address 198.51.100.10 is being used as the source IP address.

Advertising the custom IP address to the internet isn't covered by the preceding steps. If you want to advertise the custom IP address to the internet, configure your on-premises equipment to do so.

To route traffic that is destined for a custom IP address, you must create a route for that address. Because the number of routes per project is limited by your quota, you cannot use this solution for more custom IP addresses than your route quota allows.
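To see how much of the routes quota you're already using, one quick sketch is to filter the project quotas in the JSON output:

    gcloud compute project-info describe --format=json \
      | grep -A 2 '"metric": "ROUTES"'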

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  • Delete all Compute Engine instances created in this tutorial.
  • Delete all Cloud VPN tunnels created in this tutorial.

Delete the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the GCP Console, go to the Projects page.

    Go to the Projects page

  2. In the project list, select the project that you want to delete and click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
