Set up Google Cloud to work with your Bare Metal Solution environment

When your Bare Metal Solution environment is ready, you are notified by Google Cloud. The notification includes the internal IP addresses of your new servers.

These instructions show you how to do the following tasks that are required to connect to your Bare Metal Solution environment:

  • Create redundant VLAN attachments to the Bare Metal Solution environment.
  • Create a jump host VM instance in your VPC network.
  • Use SSH or RDP to log in to your Bare Metal Solution servers from the jump host VM instance.

After you are connected to your servers, validate the configuration of your Bare Metal Solution order.

Before you begin

To connect to and configure your Bare Metal Solution environment, you need:

  • A Google Cloud project with billing enabled. You can create a project on the project selector page in the Google Cloud console.
  • A Virtual Private Cloud (VPC) network. This is the VPC network that you named when you placed your order for Bare Metal Solution. If you need to create the VPC network, see Using VPC networks.
  • The following information that is provided to you by Google Cloud when your Bare Metal Solution is ready:
    • The IP addresses of your bare-metal servers.
    • The temporary passwords for each of your bare-metal servers.

Create the VLAN attachments for the Cloud Interconnect connection

To access your Bare Metal Solution servers, you need to create VLAN attachments in the same region as your servers and pre-activate them. When you create the VLAN attachments, the system generates pairing keys that you need to share with Google Cloud. Google Cloud uses these pairing keys to activate the connection between your Bare Metal Solution environment and your VPC network.

The VLAN attachments (also known as InterconnectAttachments) allocate VLANs on the Partner Interconnect connection.

Currently, individual Bare Metal Solution interconnect VLAN attachments support a maximum speed of 10 Gbps. To achieve higher throughput into a VPC network, you can configure multiple attachments into the VPC network. For each BGP session, you should use the same MED values to allow the traffic to use ECMP over all of the configured interconnect attachments.
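For example, you can set the same advertised route priority (which Cloud Router uses as the BGP MED) on both sessions. A sketch, assuming hypothetical router names my-router-1 and my-router-2 and hypothetical BGP session names bmx-peer-1 and bmx-peer-2 in us-central1:

```shell
# Set the same MED (advertised route priority) on both BGP sessions so
# that the Bare Metal Solution side can use ECMP across both attachments.
# Router, peer, and region names are hypothetical; substitute your own.
gcloud compute routers update-bgp-peer my-router-1 \
    --peer-name bmx-peer-1 \
    --region us-central1 \
    --advertised-route-priority 100
gcloud compute routers update-bgp-peer my-router-2 \
    --peer-name bmx-peer-2 \
    --region us-central1 \
    --advertised-route-priority 100
```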

Console

  1. If you don't already have Cloud Router instances in the network and region that you are using with Bare Metal Solution, you need to create one for each VLAN attachment. When you create the routers, specify 16550 as the ASN for each Cloud Router.

    For instructions, see Creating Cloud Routers.

  2. Go to the Cloud Interconnect VLAN attachments tab in the Google Cloud console.
    Go to VLAN attachments tab

  3. Click Create VLAN attachment at the top of the Google Cloud console.

  4. Select Partner Interconnect to create Partner VLAN attachments, and then click Continue.

  5. Click I already have a service provider.

  6. Select Create a redundant pair of VLANs. Both attachments can serve traffic, and you can route the traffic to load-balance between them. If one attachment goes down, for example during scheduled maintenance, the other attachment continues to serve traffic. For more information, see the Redundancy section in the Partner Interconnect Overview page.

  7. For the Network and Region fields, select the VPC network and the Google Cloud region that your attachments connect to.

  8. Specify the details for each of your VLAN attachments.

    • Cloud Router — A Cloud Router to associate with this attachment.
      • You can only choose a Cloud Router in the VPC network and region that you selected with an ASN of 16550.
      • You can assign only one Cloud Router per attachment. For a pair of VLAN attachments, you need two Cloud Routers.
    • VLAN attachment name — A name for each attachment. The names are displayed in the Google Cloud console and used by the Google Cloud CLI to reference the attachments, such as my-attachment-1 and my-attachment-2.
    • Description — Information about each VLAN attachment.
    • Maximum transmission unit (MTU) — The maximum packet size for network transmission. The default size is 1440.
  9. Click Create to create the attachments, which takes a few moments to complete.

  10. After creation is complete, copy the pairing keys. Each key includes an alphanumeric code, the name of the region, and the number of the network availability zone, for example /1 or /2. You'll share these keys with Google Cloud.

  11. Pre-activate both attachments by selecting Enable. When you pre-activate the attachments, they start passing traffic immediately after Google Cloud completes your Bare Metal Solution configuration.

  12. Click OK to view a list of your VLAN attachments.

  13. After Google Cloud notifies you that your Bare Metal Solution servers are ready, go to the VLAN attachments tab in the Google Cloud console.
    Go to VLAN attachments tab

  14. Look for the Status column, which should appear as Up for your attachments. If the status of your attachments appears as Down, enable the attachments as follows:

    1. Click the name of the first VLAN attachment to view its details page.
    2. Click Enable.
    3. Click VLAN attachment details to return to the main VLAN attachments tab.
    4. Click the name of the second VLAN attachment to view its details page.
    5. Click Enable.

gcloud

  1. If you don't already have Cloud Router instances in the network and region that you are using with Bare Metal Solution, create one for each VLAN attachment. Use 16550 as the ASN:

    gcloud compute routers create router-name \
    --network vpc-network-name \
    --asn 16550 \
    --region region

    For more information, see Creating Cloud Routers.

  2. Create an InterconnectAttachment of type PARTNER, specifying the name of your Cloud Router and the edge availability domain (EAD) of the VLAN attachment. Also, add the --admin-enabled flag to pre-activate the attachments and send traffic immediately after Google Cloud completes the Bare Metal Solution configuration.

    gcloud compute interconnects attachments partner create first-attachment-name \
      --region region \
      --router first-router-name \
      --edge-availability-domain availability-domain-1 \
      --admin-enabled
    gcloud compute interconnects attachments partner create second-attachment-name \
      --region region \
      --router second-router-name \
      --edge-availability-domain availability-domain-2 \
      --admin-enabled

    Google Cloud automatically adds an interface and a BGP peer on the Cloud Router. The attachment generates a pairing key that you'll need to share with Google Cloud later.

    The following example creates redundant attachments, one in EAD availability-domain-1 and another in EAD availability-domain-2. Each is associated with a separate Cloud Router, my-router-1 and my-router-2, respectively. They are both in the us-central1 region.

    gcloud compute interconnects attachments partner create my-attachment-1 \
     --region us-central1 \
     --router my-router-1 \
     --edge-availability-domain availability-domain-1 \
     --admin-enabled
    gcloud compute interconnects attachments partner create my-attachment-2 \
     --region us-central1 \
     --router my-router-2 \
     --edge-availability-domain availability-domain-2 \
     --admin-enabled
  3. Describe each attachment to retrieve its pairing key. You'll share the key with Google Cloud after you open a change request to create the connection to the Bare Metal Solution environment.

    gcloud compute interconnects attachments describe my-attachment-1 \
      --region us-central1

    adminEnabled: true
    edgeAvailabilityDomain: AVAILABILITY_DOMAIN_1
    creationTimestamp: '2017-12-01T08:29:09.886-08:00'
    id: '7976913826166357434'
    kind: compute#interconnectAttachment
    labelFingerprint: 42WmSpB8rSM=
    name: my-attachment-1
    pairingKey: 7e51371e-72a3-40b5-b844-2e3efefaee59/us-central1/1
    region: https://www.googleapis.com/compute/v1/projects/customer-project/regions/us-central1
    router: https://www.googleapis.com/compute/v1/projects/customer-project/regions/us-central1/routers/my-router-1
    selfLink: https://www.googleapis.com/compute/v1/projects/customer-project/regions/us-central1/interconnectAttachments/my-attachment-1
    state: PENDING_PARTNER
    type: PARTNER
    • The pairingKey field contains the pairing key that you need to copy and share with your service provider. Treat the pairing key as sensitive information until your VLAN attachment is configured.
    • The state of the VLAN attachment is PENDING_PARTNER until Google Cloud completes your VLAN attachment configuration. Afterwards, the state of the attachment is INACTIVE or ACTIVE, depending on whether you chose to pre-activate your attachments.

    When you request connections from Google Cloud, you must select the same metro (city) for both attachments for them to be redundant. For more information, see the Redundancy section in the Partner Interconnect Overview page.

  4. If the VLAN attachments do not come up after Google Cloud completes your Bare Metal Solution order, activate each VLAN attachment:

    gcloud compute interconnects attachments partner update attachment-name \
    --region region \
    --admin-enabled

You can check the status of the Cloud Routers and your advertised routes in the Cloud console. For more information, see Viewing Router Status and Advertised Routes.
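You can also check BGP session status from the command line. A sketch, assuming a hypothetical router named my-router-1 in us-central1:

```shell
# Show the BGP peer status for a Cloud Router (names are hypothetical).
# The status of each peer should be UP once the attachments are active.
gcloud compute routers get-status my-router-1 \
    --region us-central1 \
    --format="flattened(result.bgpPeerStatus[].name,result.bgpPeerStatus[].status)"
```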

Set up routing between Bare Metal Solution and Google Cloud

As soon as your VLAN attachments are active, your BGP sessions come up and the routes from the Bare Metal Solution environment are received over the BGP sessions.

Add a custom advertisement for a default IP range to your BGP sessions

To set up routing for traffic from the Bare Metal Solution environment, we recommend adding a custom advertisement for a default route, such as 0.0.0.0/0, on your BGP sessions to the Bare Metal Solution environment.

To specify advertisements on an existing BGP session:

Console

  1. Go to the Cloud Router page in the Google Cloud console.
    Cloud Router list
  2. Select the Cloud Router that contains the BGP session to update.
  3. In the Cloud Router's detail page, select the BGP session to update.
  4. In the BGP session details page, select Edit.
  5. For Routes, select Create custom routes.
  6. Select Add custom route to add an advertised route.
  7. Configure the route advertisement.
    • Source — Select Custom IP range to specify a custom IP range.
    • IP address range — Specify the custom IP range by using CIDR notation.
    • Description — Add a description to help you identify the purpose of this route advertisement.
  8. After you're done adding routes, select Save.

gcloud

You can add to existing custom advertisements, or you can set a new custom advertisement, which replaces any existing custom advertisements with the new one.

To set a new custom advertisement for a default IP range, use the --set-advertisement-ranges flag:

gcloud compute routers update-bgp-peer router-name \
   --peer-name bgp-session-name \
   --advertisement-mode custom \
   --set-advertisement-ranges 0.0.0.0/0

To append the default IP range to the existing ones, use the --add-advertisement-ranges flag. This flag requires the Cloud Router's advertisement mode to already be set to custom. The following example adds the 0.0.0.0/0 range to the Cloud Router's advertisements:

gcloud compute routers update-bgp-peer router-name \
   --peer-name bgp-session-name \
   --add-advertisement-ranges 0.0.0.0/0

Optionally, set the VPC Network Dynamic Routing Mode to global

If you have Bare Metal Solution servers in two different regions, consider enabling global routing mode on the VPC network so that your Bare Metal Solution regions can communicate directly over the VPC network.

The global routing mode is also needed to enable communications between an on-premises environment that is connected to one Google Cloud region and a Bare Metal Solution environment in another Google Cloud region.

To set the global routing mode, see Setting the VPC Network Dynamic Routing Mode.
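The change amounts to a single CLI command. A sketch, assuming a hypothetical network named my-vpc:

```shell
# Switch the VPC network to global dynamic routing so Cloud Routers
# advertise routes learned in one region to all other regions.
# The network name is hypothetical; substitute your own.
gcloud compute networks update my-vpc \
    --bgp-routing-mode=global
```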

VPC firewall setup

New VPC networks come with active default firewall rules that restrict most traffic in the VPC network.

To connect to your Bare Metal Solution environment, network traffic must be enabled between:

  • Your Bare Metal Solution environment and network destinations on Google Cloud.
  • Your local environment and your resources on Google Cloud, such as any jump host VM instance you might use to connect to your Bare Metal Solution environment.

Within your Bare Metal Solution environment, if you need to control network traffic between the bare-metal servers or between the servers and destinations not on Google Cloud, you need to implement a control mechanism yourself.
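One option is to filter traffic with host firewalls on the servers themselves. A minimal iptables sketch, assuming a hypothetical peer server at 192.168.1.10 whose inbound traffic you want to limit to SSH (run as root on the server being protected):

```shell
# Host-level filtering on a bare-metal server (sketch; addresses are
# hypothetical). Allow SSH from the peer server, drop everything else
# from it. Rules are evaluated in order, so ACCEPT must come first.
iptables -A INPUT -s 192.168.1.10 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -s 192.168.1.10 -j DROP
```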

To create a firewall rule in your VPC network on Google Cloud:

Console

  1. Go to the Firewall rules page:

    Go to Firewall rules

  2. Click Create firewall rule.

  3. Define the firewall rule.

    1. Name the firewall rule.
    2. In the Network field, select the network where your VM is located.
    3. In the Targets field, specify either Specified target tags or Specified service account.
    4. Specify the target network tag or service account in the appropriate fields.
    5. In the Source filter field, specify IP ranges to allow incoming traffic from your Bare Metal Solution environment.
    6. In the Source IP ranges field, specify the IP addresses of the servers or devices in your Bare Metal Solution environment.
    7. In the Protocols and ports section, specify the protocols and ports that are required in your environment.
    8. Click Create.

gcloud

The following command creates a firewall rule that defines the source by using an IP range and the target by using the network tag of an instance. Modify the command for your environment as necessary.

gcloud compute firewall-rules create rule-name \
    --project=your-project-id \
    --direction=INGRESS \
    --priority=1000 \
    --network=your-network-name \
    --action=ALLOW \
    --rules=protocol:port \
    --source-ranges=ip-range \
    --target-tags=instance-network-tag

For more information about creating firewall rules, see Creating firewall rules.

Connecting to your bare-metal server

The servers in your Bare Metal Solution environment are not provisioned with external IP addresses.

After you have created a firewall rule to allow traffic into your VPC network from the Bare Metal Solution environment, you can connect to your server by using a jump host VM instance.

Create a jump host VM instance on Google Cloud

To quickly connect to your bare-metal servers, create a Compute Engine virtual machine (VM) to use as a jump host. Create the VM in the same Google Cloud region as your Bare Metal Solution environment.

If you need a more secure connection method, see Connect using a bastion host.

To create a jump host VM instance, choose the instructions below based on the operating system you are using in your Bare Metal Solution environment.

For more information about creating Compute Engine VM instances, see Creating and starting a VM instance.
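If you prefer the CLI to the console steps that follow, the jump host can also be created with a single command. A sketch with hypothetical names and values; adjust the zone, machine type, network, and image for your environment:

```shell
# Create a small jump host VM in the same region as your Bare Metal
# Solution environment. All names and values here are hypothetical.
gcloud compute instances create my-jump-host \
    --zone=us-central1-a \
    --machine-type=e2-small \
    --network=my-vpc \
    --subnet=my-subnet \
    --image-family=debian-12 \
    --image-project=debian-cloud
```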

Linux

Create a virtual machine instance

  1. In the Google Cloud console, go to the VM Instances page:

    Go to the VM Instances page

  2. Click Create instance.

  3. In the Name field, specify a name for the VM instance.

  4. Under Region, select the region of your Bare Metal Solution environment.

  5. In the Boot disk section, click Change.

    1. In the Operating systems field, select an OS of your choice.
    2. In the Version field, select the OS version.
  6. Click Management, security, disks, networking, sole tenancy to expand the section.

  7. Click Networking to display the networking options.

    • Optionally, under Network tags, define one or more network tags for the instance.
    • Under Network interfaces, confirm that the proper VPC network is displayed.
  8. Click Create.

Allow a short time for the instance to start. After the instance is ready, it is listed on the VM instances page with a green status icon.

Connect to your jump host VM instance

  1. If you need to create a firewall rule to allow access to your jump host VM instance, see Firewall setup.

  2. In the Google Cloud console, go to the VM instances page:

    Go to the VM Instances page

  3. In the list of VM instances, click SSH in the row that contains your jump host.


You now have a terminal window with your jump host VM instance, from which you can connect to your bare-metal server by using SSH.
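You can also open the same session from your workstation with the gcloud CLI. A sketch, assuming a hypothetical jump host named my-jump-host in zone us-central1-a:

```shell
# SSH to the jump host from your workstation (names are hypothetical).
gcloud compute ssh my-jump-host --zone=us-central1-a
```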

Logging in to a Bare Metal Solution server for the first time

Linux

  1. Connect to your jump host VM instance.

  2. On the jump host VM instance, open a command-line terminal and confirm that you can reach your Bare Metal Solution server:

    ping bare-metal-ip

    If your ping is unsuccessful, check the status of your VLAN attachments and verify that your VPC firewall rules allow traffic between the jump host and your Bare Metal Solution environment.

  3. From the jump host VM instance, SSH into the Bare Metal Solution server by using the customeradmin user ID and the IP address of the server:

    ssh customeradmin@bare-metal-ip
  4. When prompted, enter the password provided to you by Google Cloud.

  5. On first login, you are required to change the password for your Bare Metal Solution server.

  6. Set a new password and store it in a safe location. After resetting the password, the server logs you out automatically.

  7. Log back into the Bare Metal Solution server using the customeradmin user ID and your new password:

    ssh customeradmin@bare-metal-ip
  8. We recommend that you also change the root user password. Start by logging in as the root user:

    sudo su -
  9. To change the root password, issue the passwd command and follow the prompts:

    passwd
  10. To return to the customeradmin user prompt, exit the root user prompt:

    exit
  11. Remember to store your passwords in a safe place for recovery purposes.

  12. Confirm that your server configuration matches your order. The things to check include:

    • The server configuration, including the number and type of CPUs, the sockets, and the memory.
    • The operating system or hypervisor software, including vendor and version.
    • The storage, including type and amount.
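On a Linux server, you can gather these details with standard commands. A sketch:

```shell
# Inspect the server to compare against your Bare Metal Solution order.
nproc                        # number of logical CPUs
grep MemTotal /proc/meminfo  # installed memory
cat /etc/os-release          # OS vendor and version
df -h                        # mounted storage and sizes
```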

Set up access to the public internet

Bare Metal Solution doesn't come with access to the internet. You can choose from the following methods to set up access depending on various factors, including your business requirements and existing infrastructure:

Access internet using a Compute Engine VM and Cloud NAT

The following instructions set up a NAT gateway on a Compute Engine VM to connect the servers in a Bare Metal Solution environment to the internet for purposes such as receiving software updates.

The instructions use the default internet gateway of your VPC network to access the internet.

The Linux commands that are shown in the following instructions are for the Debian operating system. If you use a different operating system, the commands you need to use might also be different.

In the VPC network that you are using with your Bare Metal Solution environment, perform the following steps:

  1. Open the Cloud Shell:

    Go to the Cloud Shell

  2. Create and configure a Compute Engine VM to serve as a NAT gateway.

    1. Create a VM:

      gcloud compute instances create instance-name \
        --machine-type=machine-type-name \
        --network vpc-network-name \
        --subnet=subnet-name \
        --can-ip-forward \
        --zone=your-zone \
        --image-family=os-image-family-name \
        --image-project=os-image-project \
        --tags=natgw-network-tag \
        --service-account=optional-service-account-email
      

      In later steps, you use the network tag that you define in this step to route traffic to this VM.

      If you don't specify a service account, remove the --service-account= flag. Compute Engine uses the default service account of the project.

    2. SSH into the NAT gateway VM and configure the iptables:

      $ sudo sysctl -w net.ipv4.ip_forward=1
      $ sudo iptables -t nat -A POSTROUTING \
         -o $(/bin/ip -o -4 route show to default | awk '{print $5}') -j MASQUERADE
      

      The first sudo command tells the kernel that you want to allow IP forwarding. The second sudo command masquerades packets received from internal instances as if they were sent from the NAT gateway instance.

    3. Check the iptables:

      $ sudo iptables -v -L -t nat
    4. To retain your NAT gateway settings across a reboot, execute the following commands on the NAT gateway VM:

      $ sudo -i
      
      $ echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/70-natgw.conf
      
      $ apt-get install iptables-persistent
      
      $ exit
      
  3. In Cloud Shell, create a route to 0.0.0.0/0 with the default internet gateway as the next hop. Specify the network tag that you defined in the previous step in the --tags argument. Assign the route a higher priority than any other default route.

    gcloud compute routes create default-internet-gateway-route-name \
        --destination-range=0.0.0.0/0 \
        --network=network-name \
        --priority=800 \
        --tags=natgw-network-tag \
        --next-hop-gateway=default-internet-gateway
    
  4. Add the network tag that you just created to any existing VMs in your VPC network that need internet access, so that they can continue to access the internet after you create a new default route that your Bare Metal Solution servers can also use.

  5. Optional: Remove routes to the internet that existed before the route you created in the previous step, including those created by default.

  6. Confirm that any existing VMs in your network and the NAT gateway VM can access the internet by pinging an external IP address, such as 8.8.8.8 (Google Public DNS), from each VM.

  7. Create a default route to 0.0.0.0/0 with the NAT gateway VM as the next hop. Give the route a lower priority than the priority that you specified for the first route that you created.

    gcloud compute routes create natgw-route-name \
        --destination-range=0.0.0.0/0 \
        --network=network-name \
        --priority=900 \
        --next-hop-instance=natgw-vm-name \
        --next-hop-instance-zone=natgw-vm-zone
    
  8. Log in to your Bare Metal Solution servers and ping an external IP address to confirm that they can access the internet.

    If the ping is not successful, make sure that you have created a firewall rule that allows access from your Bare Metal Solution environment to your VPC network.

Access internet using redundant Compute Engine VMs, Cloud NAT, internal passthrough Network Load Balancer, and policy-based routing

This section shows how to set up internal passthrough Network Load Balancer with Compute Engine VMs and Cloud NAT configured as the backend. Policy-based routing forwards the internet traffic to the frontend of the internal passthrough Network Load Balancer.

The following diagram shows this setup.

Setup for using redundant Compute Engine VMs, Cloud NAT, internal passthrough Network Load Balancer, and policy-based routing to access internet.

In the VPC network of your Bare Metal Solution environment, perform the following steps:

  1. Create and configure a Compute Engine VM to serve as a NAT gateway. Complete the steps described in Access internet using a Compute Engine VM and Cloud NAT.

    You can use a lightweight HTTP server on the VM to respond to health checks from the internal passthrough Network Load Balancer. The following commands are for RHEL-based systems; on Debian-based systems, install apache2 with apt-get instead.

    # Install the HTTP server
    sudo yum install httpd
    sudo systemctl restart httpd

    # Test it
    curl http://127.0.0.1:80

  2. Create an instance group.

    gcloud compute instance-groups unmanaged create INSTANCE_GROUP_NAME \
        --project=PROJECT_ID \
        --zone=ZONE

    Replace the following:

    • INSTANCE_GROUP_NAME: the name of the instance group
    • PROJECT_ID: the ID of the project
    • ZONE: the zone in which to create the instance group
  3. Add the VM to the instance group.

    gcloud compute instance-groups unmanaged add-instances INSTANCE_GROUP_NAME \
        --project=PROJECT_ID \
        --zone=ZONE \
        --instances=VM_NAME

    Replace the following:

    • INSTANCE_GROUP_NAME: the name of the instance group
    • PROJECT_ID: the ID of the project
    • ZONE: the zone of the instance group
    • VM_NAME: the name of the VM
  4. Create an internal passthrough Network Load Balancer:

    1. In the Google Cloud console, go to the Load balancing page.

      Go to Load balancing

    2. Click Create load balancer.

    3. On the Network Load Balancer (TCP/SSL) card, click Start configuration.

      1. For Internet facing or internal only, select Only between my VMs.
      2. For Multiple regions or single region, select Single region only.
      3. For Load balancer type, select Pass-through.
      4. Click Continue.
    4. Enter a Name for your load balancer.

    5. Click Backend configuration and make the following changes:

      1. Region: Select a region.
      2. Network: Select a network.
      3. To add backends, do the following:
        1. Under New Backend, to handle IPv4 traffic only, select the IP stack type as IPv4 (single-stack).
        2. Select your instance group and click Done.
      4. Select a health check. To create a health check, enter the following information, and then click Save:

        • Name: Enter a name for the health check.
        • Protocol: HTTP
        • Port: 80
        • Proxy protocol: NONE
        • Request path: /
    6. Click Frontend configuration. In the New Frontend IP and port section, make the following changes:

      1. Ports: Choose Multiple, and enter 80,8008,8080,8088 in the Port numbers field.
      2. Click Done.
    7. Click Review and finalize.

    8. Review your load balancer configuration settings.

    9. Click Create.

  5. Create a policy-based route for the internet.

    gcloud network-connectivity policy-based-routes create ROUTE_NAME \
     --source-range=SOURCE_RANGE \
     --destination-range=0.0.0.0/0 \
     --ip-protocol=ALL \
     --network="projects/PROJECT_ID/global/networks/NETWORK" \
     --next-hop-ilb-ip=NEXT_HOP \
     --description="DESCRIPTION" \
     --priority=PRIORITY \
     --interconnect-attachment-region=REGION
    

    Replace the following:

    • ROUTE_NAME: the name of the policy-based route
    • SOURCE_RANGE: the source IP CIDR range. In this case, this is the IP address range of your Bare Metal Solution environment.
    • PROJECT_ID: the ID of the project
    • NETWORK: the network to which the policy-based route is applied
    • NEXT_HOP: the IPv4 address of the route's next hop. In this case, this is the IP address of the frontend of the internal passthrough Network Load Balancer.
    • DESCRIPTION: a description of the route
    • PRIORITY: the priority of the policy-based route compared to other policy-based routes
    • REGION: the region of the VLAN attachment
  6. Create a policy-based route to skip the internet policy-based route for on-premises subnets and local subnets.

     gcloud network-connectivity policy-based-routes create ROUTE_NAME \
     --source-range=SOURCE_RANGE/32 \
     --destination-range=DESTINATION_RANGE \
     --ip-protocol=ALL \
     --network="projects/PROJECT_ID/global/networks/VPC_NAME" \
     --next-hop-other-routes="DEFAULT_ROUTING" \
     --description="DESCRIPTION" \
     --priority=PRIORITY \
     --interconnect-attachment-region=REGION
    

    Replace the following:

    • ROUTE_NAME: the name of the policy-based route
    • SOURCE_RANGE: the source IP CIDR range. In this case, this is the IP address range of your Bare Metal Solution environment.
    • DESTINATION_RANGE: the destination IP CIDR range. In this case, this is the on-premises subnet or a local subnet.
    • PROJECT_ID: the ID of the project
    • VPC_NAME: the name of the VPC network
    • DESCRIPTION: a description of the route
    • PRIORITY: the priority of the policy-based route compared to other policy-based routes. The priority of this policy-based route must be less than or equal to the priority of the policy-based route for the internet.
    • REGION: the region of the VLAN attachment
  7. Update the firewall to allow HTTP port 80 on the VM.

    The health check might fail if you don't update the firewall.
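For example, the following sketch allows health-check probes, which originate from Google Cloud's documented health-check ranges 130.211.0.0/22 and 35.191.0.0/16, to reach port 80 on the backend VMs. The rule name, network, and target tag are hypothetical:

```shell
# Allow Google Cloud health-check probes to reach the HTTP server on the
# backend VMs. 130.211.0.0/22 and 35.191.0.0/16 are the documented
# health-check source ranges; the other names are hypothetical.
gcloud compute firewall-rules create allow-health-checks \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=nat-backend
```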

Access internet using redundant Compute Engine VMs, Cloud NAT, internal passthrough Network Load Balancer, and policy-based routing in a separate VPC

If you don't want to add policy-based routes for local subnets, you can use this method to access the internet. However, to use this method, you need to create an additional VLAN attachment and a separate VPC network to connect to the Bare Metal Solution environment.

The following diagram shows this setup.

Set up for using redundant Compute Engine VMs, Cloud NAT, internal passthrough Network Load Balancer, and policy-based routing in a separate VPC.

Follow these steps:

  1. Create a VPC network for the internet.

    gcloud compute networks create NETWORK \
        --project=PROJECT_ID \
        --subnet-mode=custom \
        --mtu=MTU \
        --bgp-routing-mode=regional

    Replace the following:

    • NETWORK: the name for the VPC network.
    • PROJECT_ID: the ID of the project
    • MTU: the maximum transmission unit (MTU), which is the largest packet size of the network
  2. Create a subnet.

    gcloud compute networks subnets create SUBNET_NAME \
        --project=PROJECT_ID \
        --range=RANGE \
        --stack-type=IPV4_ONLY \
        --network=NETWORK \
        --region=REGION

    Replace the following:

    • SUBNET_NAME: the name for the subnet
    • PROJECT_ID: the ID of the project
    • RANGE: the IP space allocated to this subnet in CIDR format
    • NETWORK: the VPC network to which the subnet belongs
    • REGION: the region of the subnet
  3. Create two Cloud Routers for redundancy and route advertisements. Run the following command once for each router, specifying a different ROUTER_NAME each time:

    gcloud compute routers create ROUTER_NAME \
        --project=PROJECT_ID \
        --region=REGION \
        --network=NETWORK \
        --advertisement-mode=custom \
        --set-advertisement-ranges=0.0.0.0/0

    Replace the following:

    • ROUTER_NAME: the name of the router
    • PROJECT_ID: the ID of the project
    • REGION: the region of the router
    • NETWORK: the VPC network for this router
  4. Create four VLAN attachments, two for each Cloud Router.

    For instructions, see Create VLAN attachments.

  5. After the VLAN attachments are active, follow the steps in Access internet using redundant Compute Engine VMs, Cloud NAT, internal passthrough Network Load Balancer, and policy-based routing to configure the internet infrastructure. However, for this setup, don't configure the policy-based route for local traffic. Only create a policy-based route for the internet in a routing table of the VPC network.

Set up access to Google Cloud APIs and services

Bare Metal Solution doesn't come with access to Google Cloud services. You can choose how to implement access depending on various factors, including your business requirements and existing infrastructure.

You can access Google Cloud APIs and services privately from your Bare Metal Solution environment.

You set up private access to the Google Cloud APIs and services from a Bare Metal Solution environment as you would for an on-premises environment.

Follow the instructions for on-premises environments in Configuring Private Google Access for on-premises hosts.

The instructions guide you through the following high-level steps:

  1. Configuring routes for the Google API traffic.
  2. Configuring your Bare Metal Solution DNS to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.
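As a sketch of step 2, a BIND-style zone fragment that maps the googleapis.com domain to the restricted.googleapis.com virtual IP range. The 199.36.153.4/30 addresses are the documented VIPs for restricted Google API access; the assumption here is that you run your own resolver for the Bare Metal Solution environment:

```
; BIND-style zone sketch for googleapis.com (hedged example; adapt to
; your DNS server). 199.36.153.4/30 is the documented restricted VIP range.
*.googleapis.com.          CNAME restricted.googleapis.com.
restricted.googleapis.com. A     199.36.153.4
restricted.googleapis.com. A     199.36.153.5
restricted.googleapis.com. A     199.36.153.6
restricted.googleapis.com. A     199.36.153.7
```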

What's next

After you have set up your Bare Metal Solution environment, you can install your workloads.

If you plan to run Oracle databases on the servers in your Bare Metal Solution environment, you can use the open source Toolkit for Bare Metal Solution to install your Oracle software.