Installation requirements

Before you order Google Distributed Cloud Edge hardware, you must meet the Distributed Cloud Edge installation requirements described on this page.

Plan the hardware configuration

Before you can order hardware, your network administrator must work with Google to plan the hardware configuration for the Distributed Cloud Edge installation.

Based on your business requirements, your network administrator must determine and provide the following information:

  • Number of racks of each variant
  • Power supply type (AC or DC)

Distributed Cloud Edge ships in one of the following variants:

Configuration 1
  • Hardware: Six non-GPU servers with local SSD storage, two ToR switches, and a dedicated rack
  • Purpose: Ideal for general-purpose computing
  • Estimated cost: See Distributed Cloud Edge pricing
  • CPU: 6 x 96 vCPUs (576 vCPUs total)
  • GPU: None
  • RAM: 6 x 256 GB (1,536 GB total)
  • Storage: 6 x 4 TB SSD (24 TB total)
  • Power: AC or DC

Configuration 2
  • Hardware: Six GPU-enabled servers with local SSD storage, two ToR switches, and a dedicated rack
  • Purpose: Ideal for AI/ML or graphics-intensive workloads
  • Estimated cost: See Distributed Cloud Edge pricing
  • CPU: 6 x 96 vCPUs (576 vCPUs total)
  • GPU: 6 x dual NVIDIA Tesla T4 GPUs (12 GPUs total)
  • RAM: 6 x 256 GB (1,536 GB total)
  • Storage: 6 x 4 TB SSD (24 TB total)
  • Power: AC only

Purchase Premium Support

Distributed Cloud Edge requires Premium Support. If you're not currently a Premium Support customer, then you must purchase Premium Support to use Distributed Cloud Edge.

Provide Google Cloud information

When you order the hardware, you must provide the following information to Google, if applicable:

  • Your Google Cloud organization ID
  • Whether you want Google to provision your Distributed Cloud Edge machines as part of an existing Google Cloud project or a new Google Cloud project
  • The ID of the target Google Cloud project (if you're provisioning as part of an existing Google Cloud project)
  • The desired number of Distributed Cloud Edge zones

Delivery path and installation site

To verify that your delivery path and installation site can accommodate the Distributed Cloud Edge hardware, Google might ask you for photographs and drawings that accurately depict both, or might perform a pre-delivery survey of your site.

The delivery path must be free from obstructions and have a grade below 3%. If the installation site is not on the same floor as your loading dock or building entrance, then you must provide access to an elevator.

All doorways, hallways, and elevators must support the gross weight and dimensions of the crated Distributed Cloud Edge hardware.

You must provide Google service technicians access throughout the delivery path up to and including the installation site.

If your installation site is not a typical data center, you must provide dimensional drawings of the installation site before you order hardware to ensure that the rack can be safely installed and powered up.

Space needed

The Distributed Cloud Edge hardware rack comes in a crate with the following dimensions.

Dimension Value (imperial) Value (metric)
Height 87 inches 221 cm
Depth 60 inches 152 cm
Width 40 inches 102 cm

The Distributed Cloud Edge hardware rack has the following dimensions.

Dimension Value (imperial) Value (metric)
Height 80 inches 203 cm
Depth 48 inches 122 cm
Width 24 inches 61 cm

If your local jurisdiction or facility requires that you brace the Distributed Cloud Edge rack, you might need special seismic bracing hardware. The Distributed Cloud Edge rack ships anchored to its crate with brackets that you can re-use to attach the rack to your floor. However, these brackets might not meet your local bracing requirements.

Rack weight

The gross weight of the Distributed Cloud Edge rack is as follows.

Rack fill Gross weight
Typical 900 lbs (408 kg)
Maximum 1300 lbs (590 kg)

The delivery path, including any elevators, and the installation site must safely support this weight while in full compliance with local building codes.

Power supply

The Distributed Cloud Edge rack requires single-phase or three-phase alternating current power at 50Hz or 60Hz, or -48V direct current Telco-style positive-ground power. You must specify the desired power supply type when you order the hardware.

You must supply power to the installation site in accordance with your local building codes, including the following work:

  • Installing cabling conduits
  • Running the required cabling
  • Connecting the cabling to your electrical panel
  • Turning on the power

All electrical work must be performed by a certified electrician.

Line specifications

You must supply the following number of independent power lines to ensure high availability, based on the variant that you want to deploy.

Variant Line requirement
AC power Two (2) independent power lines with a dedicated ground connection
DC power Four (4) independent supply lines with dedicated returns and a dedicated ground connection

All power receptacles must be located at most 6 feet (1.8 meters) from the installation site.

AC power specifications

For AC power, all power supply lines must meet one of the following specifications.

Phase Voltage Amperage Connector
Single-phase 208V, 50/60Hz 30A NEMA L6-30P
Single-phase 230V, 50/60Hz 32A IEC 60309 2P+2E 6Hr
Three-phase Wye 120V/208V, 50/60Hz 30A NEMA L21-30P
Three-phase Delta 208V, 50/60Hz 60A IEC 60309 3P+G 9Hr
Three-phase Delta 208V, 50/60Hz 50A CS8365 50A 3P+G
Three-phase Wye 240V/415V, 50/60Hz 16A IEC 60309 3P+N+PE 6Hr
Three-phase Wye 230V/400V, 50/60Hz 32A IEC 60309 3P+N+PE 6Hr

DC power specifications

For DC power, all power supply lines must be positive-ground -48V Telco-style lines in the following configuration:

  • 4 supply lines feeding from two or more redundant DC supplies
  • 4 return lines feeding to the corresponding redundant DC supplies
  • 1 dedicated grounding bond

The lines must meet the following specifications.

Line type Voltage Amperage Connector
Supply -44V to -60V 125A Two-hole 3/8-inch-on-1-inch-centers compression lug
Return -44V to -60V 125A Two-hole 3/8-inch-on-1-inch-centers compression lug
Ground N/A Consult an electrician Single-hole 1/4-inch compression lug

Power draw

The power draw of a Distributed Cloud Edge hardware rack ranges between 3,000W and 5,500W based on the selected configuration, presence of GPUs, CPU load, and other factors. Peak power consumption can momentarily reach 5,900W at power-up.
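
If you need to translate the rack's electrical power draw into a cooling load for HVAC planning, the conversion is a straightforward multiplication: one watt of sustained draw dissipates approximately 3.412 BTU per hour. The following Python sketch illustrates the calculation; the 4,000 W sustained draw in the example is an illustrative value within the stated range, not a guaranteed figure.

    # Convert a sustained electrical draw (watts) into a cooling load (BTU/hr).
    # One watt of electrical power dissipates approximately 3.412 BTU per hour.
    BTU_PER_HOUR_PER_WATT = 3.412

    def heat_load_btu_per_hour(watts: float) -> float:
        """Approximate heat output, in BTU per hour, for a given power draw."""
        return watts * BTU_PER_HOUR_PER_WATT

    # An illustrative sustained draw of about 4,000 W corresponds to roughly
    # 13,650 BTU per hour, the heat output figure given in the Cooling section.
    print(round(heat_load_btu_per_hour(4000)))  # 13648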

Backup power

For either variant, each of the independent power lines must have an independent uninterruptible power supply (UPS) capable of powering the Distributed Cloud Edge hardware continuously for a minimum of 20 minutes (battery systems) or 15 seconds (inertial systems).

In addition to UPS backup, you must also provide emergency electrical generator backup of sufficient capacity to both charge the UPS units and power the Distributed Cloud Edge hardware for a minimum of four hours. The Distributed Cloud Edge hardware must be connected to the UPS units. The UPS units must then connect to the generator backup by using an automatic transfer switch (ATS) or similar system that does not require human intervention to facilitate an emergency transfer.

Battery UPS systems without generator backup, such as older DC plants, must have sufficient capacity to power Distributed Cloud Edge for a minimum of four hours.
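
As a rough way to sanity-check your UPS and generator sizing against these requirements, you can work from the power figures quoted on this page. The following Python sketch is a simplified estimate only, assuming the 5,900 W power-up peak and the 5,500 W upper end of the sustained draw range stated in the Power draw section; it does not replace an assessment by a certified electrician or facility engineer.

    # Rough backup-power sizing check based on the figures quoted on this page.
    PEAK_WATTS = 5900          # momentary peak at power-up
    SUSTAINED_WATTS = 5500     # upper end of the sustained draw range

    UPS_MINUTES_BATTERY = 20   # required battery UPS runtime
    GENERATOR_HOURS = 4        # required generator runtime

    # Minimum usable UPS energy, in watt-hours, to ride through 20 minutes at
    # the sustained draw (add margin for inverter losses and battery aging).
    ups_watt_hours = SUSTAINED_WATTS * (UPS_MINUTES_BATTERY / 60)
    print(f"Minimum UPS energy per line: ~{ups_watt_hours:.0f} Wh")

    # Each UPS must also be rated to deliver the momentary power-up peak.
    print(f"Minimum UPS power rating: {PEAK_WATTS} W")

    # Minimum generator energy to carry the rack for four hours; actual generator
    # capacity must also cover UPS recharge and any other connected loads.
    generator_watt_hours = SUSTAINED_WATTS * GENERATOR_HOURS
    print(f"Minimum generator energy for the rack alone: ~{generator_watt_hours:.0f} Wh")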

Cooling

The Distributed Cloud Edge hardware rack is air-cooled and requires a climate-controlled environment to operate. Your installation site must provide adequate cooling to keep the Distributed Cloud Edge hardware operational.

Your installation site must provide the following ambient environment.

Environmental factor Required range
Temperature Between 59°F (15°C) and 89°F (31°C)
Relative humidity Between 30% and 70%, non-condensing

The ambient temperature and relative humidity fluctuations must not be greater than the following:

  • Ambient temperature: +/- 9°F (5°C) per hour
  • Relative humidity: +/- 5% per hour

These guidelines apply to installations at altitudes below 10,000 feet (3,050 meters) above mean sea level. For higher altitudes, consult an HVAC professional and your Google representative. Extreme swings outside of these recommended ranges can result in a protective shutdown, permanent damage to Distributed Cloud Edge hardware, or both.
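
If you log ambient conditions at the installation site, you can verify the stability requirements programmatically. The following Python sketch checks consecutive hourly readings against the temperature and humidity rate limits listed above; the sample readings are made up for illustration.

    # Check hourly ambient readings against the stability limits above:
    # at most 5 degrees C (9 degrees F) and 5 percentage points RH change per hour.
    MAX_TEMP_DELTA_C = 5.0
    MAX_RH_DELTA_PCT = 5.0

    # (temperature in C, relative humidity in %) sampled once per hour; sample data.
    hourly_readings = [(22.0, 45.0), (23.5, 47.0), (29.0, 49.0), (28.0, 56.0)]

    def check_stability(readings):
        """Yield a message for every hour-to-hour change that exceeds the limits."""
        for (t_prev, rh_prev), (t_curr, rh_curr) in zip(readings, readings[1:]):
            if abs(t_curr - t_prev) > MAX_TEMP_DELTA_C:
                yield f"Temperature changed {t_curr - t_prev:+.1f} C in one hour"
            if abs(rh_curr - rh_prev) > MAX_RH_DELTA_PCT:
                yield f"Relative humidity changed {rh_curr - rh_prev:+.1f} points in one hour"

    for warning in check_stability(hourly_readings):
        print(warning)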

The rack produces up to 13,650 BTUs of heat per hour and uses forced air to remove the heat from the installed hardware. The front of the rack acts as a cold air intake, and the back of the rack acts as a hot air exhaust. You must provide 4 feet (1.2 meters) of open space at both the front and the back of the rack to allow for sufficient airflow. If possible, install the rack in front of a dedicated perforated tile or register.

Distributed Cloud Edge requires that air at your deployment site be continuously circulated, conditioned, and filtered by using permanently installed commercial or industrial-grade HVAC equipment. Failure to maintain the required environmental conditions could result in long-term damage to Distributed Cloud Edge hardware and a reduced reliability of your Distributed Cloud Edge deployment.

Your cooling infrastructure must meet the following guidelines:

  • All HVAC equipment must have backup power and the capability to automatically restart after power loss.
  • Air handlers, conditioning equipment, condensers, pumps, cooling towers, chillers, and other HVAC components must have appropriate redundancy.
  • You must regularly inspect and maintain your HVAC equipment to keep its operating performance consistent and within the required ranges.
  • You must not expose the Distributed Cloud Edge hardware to direct sunlight or any other type of infrared radiation because this alters the validated temperature profile of the hardware.
  • You must not expose the Distributed Cloud Edge hardware to unconditioned, unfiltered air. Even brief exposure can cause dust buildup and restrict airflow to critical components.
  • A fresh-air cooling system is acceptable if it is professionally engineered and installed. However, it must meet the thermal requirements listed previously.

Networking

The Distributed Cloud Edge hardware rack requires four LC single-mode fiber connections split between two redundant network devices on your local network. Only 100GBASE-LR4 and 10GBASE-LR links are supported. You must specify your network requirements, such as IP address ranges and firewall configuration, when you order Distributed Cloud Edge hardware. For optical transport circuits, enable fault propagation for optimal routing protocol convergence.

Before you order, your network administrator must work with Google to plan the network configuration for the Distributed Cloud Edge installation.

Figure 1 depicts a typical Distributed Cloud Edge configuration:

Figure 1. Distributed Cloud Edge components.

For more information about the components shown in this diagram, see Distributed Cloud Edge hardware.

Allocate address blocks

Distributed Cloud Edge requires that you allocate the following address blocks on your local network:

  • Peering link to your local network: Four public or private /31 CIDR blocks, along with a VLAN ID and two BGP ASNs that cover these four address blocks. One ASN is for your local routers that peer with the Distributed Cloud Edge ToR switches, and one ASN is for the Distributed Cloud Edge ToR switches.
  • ToR switch management subnetwork: At least one /30 CIDR block, either public or RFC 1918.
  • Distributed Cloud Edge machine management subnetwork: At least one /27 CIDR block, either public or RFC 1918.
  • Distributed Cloud Edge nodes subnetwork: At least one /27 CIDR block, either public or RFC 1918.

When you order Distributed Cloud Edge hardware, your network administrator must provide the preceding CIDR block allocation information. These values cannot be changed after Distributed Cloud Edge is deployed.
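
One way to plan these allocations before you place the order is to carve them out of a larger block that you reserve per rack. The following Python sketch uses the standard ipaddress module to do this; the 10.100.0.0/24 parent block and the ordering of the carved subnets are illustrative assumptions, not required values.

    import ipaddress

    # Hypothetical parent block reserved for one Distributed Cloud Edge rack.
    parent = ipaddress.ip_network("10.100.0.0/24")

    # Carve the two /27 blocks (nodes and machine management) first.
    blocks_27 = list(parent.subnets(new_prefix=27))
    nodes_subnet = blocks_27[0]            # 10.100.0.0/27
    machine_mgmt_subnet = blocks_27[1]     # 10.100.0.32/27

    # Take the /30 ToR switch management block and the four /31 peering links
    # from the remaining space.
    tor_mgmt_subnet = next(blocks_27[2].subnets(new_prefix=30))    # 10.100.0.64/30
    peering_links = list(blocks_27[3].subnets(new_prefix=31))[:4]  # four /31 blocks

    # Sanity check: no allocation may overlap another.
    allocations = [nodes_subnet, machine_mgmt_subnet, tor_mgmt_subnet, *peering_links]
    for i, a in enumerate(allocations):
        for b in allocations[i + 1:]:
            assert not a.overlaps(b), f"{a} overlaps {b}"

    print("Nodes:", nodes_subnet)
    print("Machine management:", machine_mgmt_subnet)
    print("ToR switch management:", tor_mgmt_subnet)
    print("Peering links:", ", ".join(str(n) for n in peering_links))

In a multi-rack installation, repeat this exercise with a distinct, non-overlapping parent block for each rack.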

Node, machine management, and ToR switch CIDR blocks must be routable subnetworks on your local network. They can be private RFC 1918-range subnetworks or public networks. You must configure the appropriate BGP sessions on your peering edge routers to accept routes for the Distributed Cloud Edge nodes subnetwork, the Distributed Cloud Edge machine management subnetwork, and the lower two /32 IP addresses of the ToR switch management subnetwork.

The CIDR blocks are allocated per Distributed Cloud Edge rack. When you create a Distributed Cloud Edge cluster, Distributed Cloud Edge automatically assigns nodes within that cluster to IP addresses within the specified node CIDR block based on their capabilities and availability.

In a multi-rack Distributed Cloud Edge installation, you must specify unique CIDR blocks for each Distributed Cloud Edge rack. Each rack is connected to your network separately.

If you plan to expand your Distributed Cloud Edge installation with additional machines, you must account for the additional IP addresses that this requires in your initial Distributed Cloud Edge order. You must also account for overhead IP addresses, such as gateway addresses and floating addresses used by VPN connections between your workloads and Google Cloud. Work with your Google Cloud sales representative to determine the optimum node CIDR block allocations based on your business requirements.

When your Distributed Cloud Edge installation is up and running, you also need to allocate network IP addresses for your Distributed Cloud Edge Pods and Services as described in Distributed Cloud Edge Pod and Service network address allocation.

Configure firewalls

Distributed Cloud Edge requires that you configure your firewall to allow the following types of network traffic:

  • Distributed Cloud Edge management and cluster control plane traffic
  • Distributed Cloud Edge workload traffic

Distributed Cloud Edge management and cluster control plane traffic

Distributed Cloud Edge requires outbound connections to Google over the internet for management and cluster control plane traffic. You must use a stateful firewall that tracks this outbound traffic and allows the corresponding return traffic back in to Distributed Cloud Edge. Open the following ports on your local network:

  • Domain Name System (DNS): TCP and UDP 53, from the ToR switch management, Distributed Cloud Edge machine management, and Distributed Cloud Edge nodes subnetworks
  • Network Time Protocol (NTP): UDP 123, from the ToR switch management, Distributed Cloud Edge machine management, and Distributed Cloud Edge nodes subnetworks
  • Terminal Access Controller Access Control System (TACACS) for switch authentication: TCP 3535, from the ToR switch management subnetwork
  • Management VPN: UDP 443, from the ToR switch management and Distributed Cloud Edge machine management subnetworks
  • Bootstrap and Management API: TCP 443, from the ToR switch management and Distributed Cloud Edge machine management subnetworks
  • Remote Kubernetes control plane: TCP 6443, from the Distributed Cloud Edge machine management and Distributed Cloud Edge nodes subnetworks
  • Kubernetes Konnectivity proxy: TCP 8132, 8133, and 8134, from the Distributed Cloud Edge machine management and Distributed Cloud Edge nodes subnetworks
  • Monitoring service: TCP 443, from the Distributed Cloud Edge machine management and Distributed Cloud Edge nodes subnetworks
  • Logging service: TCP 443, from the Distributed Cloud Edge machine management and Distributed Cloud Edge nodes subnetworks
  • Cloud VPN and Virtual Private Cloud data plane: UDP 500 and 4500 (IKE, ESP), from the Distributed Cloud Edge nodes subnetwork
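
If you script your firewall configuration, it can help to keep the preceding requirements in a machine-readable form and to generate rules or audit checklists from it. The following Python sketch only illustrates that idea; the subnetwork CIDR values are placeholders that you would replace with your actual allocations, and the output format is not tied to any particular firewall product.

    # The outbound openings required by Distributed Cloud Edge, expressed as data.
    # Placeholder CIDR values stand in for your actual subnetwork allocations.
    SUBNETS = {
        "tor_mgmt": "192.0.2.0/30",        # ToR switch management (placeholder)
        "machine_mgmt": "192.0.2.32/27",   # machine management (placeholder)
        "nodes": "192.0.2.64/27",          # nodes (placeholder)
    }

    # (function, originating subnets, protocol, ports)
    REQUIRED_OUTBOUND = [
        ("DNS", ["tor_mgmt", "machine_mgmt", "nodes"], "tcp+udp", [53]),
        ("NTP", ["tor_mgmt", "machine_mgmt", "nodes"], "udp", [123]),
        ("TACACS", ["tor_mgmt"], "tcp", [3535]),
        ("Management VPN", ["tor_mgmt", "machine_mgmt"], "udp", [443]),
        ("Bootstrap and Management API", ["tor_mgmt", "machine_mgmt"], "tcp", [443]),
        ("Remote Kubernetes control plane", ["machine_mgmt", "nodes"], "tcp", [6443]),
        ("Kubernetes Konnectivity proxy", ["machine_mgmt", "nodes"], "tcp", [8132, 8133, 8134]),
        ("Monitoring service", ["machine_mgmt", "nodes"], "tcp", [443]),
        ("Logging service", ["machine_mgmt", "nodes"], "tcp", [443]),
        ("Cloud VPN / VPC data plane", ["nodes"], "udp", [500, 4500]),
    ]

    # Print a human-readable checklist that a firewall administrator can review.
    for function, subnets, protocol, ports in REQUIRED_OUTBOUND:
        sources = ", ".join(SUBNETS[s] for s in subnets)
        ports_text = ", ".join(str(p) for p in ports)
        print(f"{function}: allow outbound {protocol} to ports {ports_text} from {sources}")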

Distributed Cloud Edge workload traffic

Your network administrator must also configure additional firewall rules to allow traffic to and from the workloads deployed on your Distributed Cloud Edge clusters.

If you deploy Distributed Cloud Edge behind a NAT gateway, and you have additional firewall rules configured on your WAN gateways that filter or block inbound UDP traffic, the Cloud VPN connectivity required by Distributed Cloud Edge might be affected. In such cases, you must allow inbound Cloud VPN UDP traffic from the Cloud VPN IP address ranges.

For example, you need to allow inbound UDP traffic from the source IP address ranges 35.242.0.0/17, 35.220.0.0/17, and 34.157.0.0/16 with the source port matching 500 or 4500 (IKE/ESP). If your firewall solution requires a more exact configuration, set the destination IP address range to match the IP address range of the Distributed Cloud Edge nodes subnetwork. If your firewall is upstream of your NAT gateway, set the destination IP address range to the NAT gateway's public IP address.
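
As a quick way to check whether a given source address falls inside these Cloud VPN ranges, for example while reviewing firewall logs or validating a rule set, you can use a short script. The following Python sketch is illustrative only; the sample addresses are placeholders.

    import ipaddress

    # Cloud VPN source ranges listed above; UDP 500 and 4500 carry IKE and ESP.
    CLOUD_VPN_RANGES = [
        ipaddress.ip_network("35.242.0.0/17"),
        ipaddress.ip_network("35.220.0.0/17"),
        ipaddress.ip_network("34.157.0.0/16"),
    ]
    ALLOWED_UDP_SOURCE_PORTS = {500, 4500}

    def is_expected_cloud_vpn_packet(src_ip: str, src_port: int) -> bool:
        """Return True if a UDP packet matches the expected Cloud VPN profile."""
        addr = ipaddress.ip_address(src_ip)
        in_range = any(addr in net for net in CLOUD_VPN_RANGES)
        return in_range and src_port in ALLOWED_UDP_SOURCE_PORTS

    # Example: a packet from a placeholder address inside a Cloud VPN range.
    print(is_expected_cloud_vpn_packet("35.242.10.20", 4500))  # True
    print(is_expected_cloud_vpn_packet("203.0.113.5", 4500))   # False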

On-site maintenance

Google remotely monitors the Distributed Cloud Edge hardware. If you encounter an issue, contact Support to file a ticket. If Google detects a hardware failure, Google schedules a visit to your installation site, and a Google-certified technician works with you to coordinate the visit and make the required repairs.

What's next