This topic provides an overview of the Hosted Private HSM solution.
To support moving your workloads to the cloud, Google hosts customer-owned Hardware Security Modules (HSMs), providing physical and network security, rack space, power, and network integration for a monthly fee.
Hosted private HSMs enable you to contract directly with Google for placement of your HSMs. HSMs are placed within specified colocation facilities and connect to Google Cloud.
The Hosted Private HSM solution is supported in colocation facilities with active peering fabrics. These facilities meet or exceed Google's standards for data center security and provide low-latency, highly available service.
This offering is limited to FIPS 140-2 Level 3 (or better) certified HSMs, and is not a generalized hosting or colocation service. The Hosted Private HSM solution is PCI-DSS and SOC compliant in all locations.
Separation of responsibilities
It's your responsibility to obtain and provision HSMs and ship them to the appropriate facilities. The HSMs used are your choice, but they must comply with the HSM equipment requirements.
Google pre-configures the racks, top-of-rack switches, and connectivity. The switches are from different vendors for each pair of racks. For the Hosted Private HSM solution, you have your own dedicated racks and switches. Google provides a racking service for your HSMs and works with you to validate the interconnect. Redundant power supplies are provided for each rack.
Accessing hosted private HSMs
You have logical management access to your HSMs and are responsible for their maintenance and management. You maintain full control of your HSMs.
Google does not have logical access to your HSMs, but provides and maintains the racks, switching, and interconnect. Google has no access to your data or keys on your HSM.
Google provides a remote hands service. With notice, you can schedule an escorted visit to the facility. You are responsible for your own compliance and audit requirements.
At the end of your contract or the HSM's end of life, the HSM is shipped back to you or, if it cannot be shipped, destroyed.
HSM equipment requirements
This section details the physical requirements for HSMs and associated cables for hosting HSMs in a Google facility.
The number of HSMs that can fit in a rack depends on the number of ports available on the current model of top-of-rack switch, the number of rack units taken up by the HSM model, and the power draw of the HSMs.
Power
- Dual AC power supplies (16 A max per power supply).
- 208 V line to line (for United States based locations).
- Rack PDU providing C13 or C19 receptacles/outlets.
Power Cables (to be provided by you)
- Rack PDU cable end should be C14 / C20 connector types.
- 2 x 6 feet / 2 meter (preferred length) power cables.
Network
- Network interface controller: Dual 1G copper NICs (if applicable).
Network Cables (to be provided by you)
- 2 x 6 feet / 2 meter (preferred length) CAT-5e or better patch cables.
- Rack depth: 42 inches.
- Rack unit spacing: Standard EIA-310 19" rack mount with square hole mounts. You can occupy up to 4 rack units per HSM.
- The HSMs cannot be equipped with cameras or wireless interfaces such as Bluetooth.
- The HSM must be FIPS 140-2 Level 3 (or better) certified.
- The HSM must be new equipment.
- The HSM must be fully remotely manageable.
- There are no requirements for weight or cooling.
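The rack-capacity constraints described above (switch ports, rack units, and power) can be sketched as a small calculation. All of the numbers below are assumed example values, not Google-published figures; the binding constraint is whichever limit is hit first.

```shell
# Illustrative only: estimate how many HSMs fit in one rack.
SWITCH_PORTS=24          # free top-of-rack switch ports (assumed value)
USABLE_RU=36             # usable rack units (assumed value)
POWER_BUDGET_W=8000      # rack power budget in watts (assumed value)

PORTS_PER_HSM=2          # dual NICs per HSM
RU_PER_HSM=4             # up to 4 rack units per HSM
WATTS_PER_HSM=500        # per-HSM power draw (assumed value)

by_ports=$((SWITCH_PORTS / PORTS_PER_HSM))
by_ru=$((USABLE_RU / RU_PER_HSM))
by_power=$((POWER_BUDGET_W / WATTS_PER_HSM))

# The rack holds the smallest of the three limits.
max=$by_ports
[ "$by_ru" -lt "$max" ] && max=$by_ru
[ "$by_power" -lt "$max" ] && max=$by_power
echo "$max"
```

With these example numbers, rack units are the binding constraint (9 HSMs), not switch ports (12) or power (16).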
For hosted HSMs to meet the requirements for a 99.99% SLA, you must:
- Deploy HSMs in a minimum of two Google Cloud regions.
- Deploy a minimum of four HSMs per region (at least two HSMs in each of at least two racks).
You provide Google with the MAC address for each HSM network interface and its assigned IP address. This information helps Google verify server-to-top-of-rack cabling and aids troubleshooting during the deployment process.
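One way to assemble this hand-off is a simple per-rack inventory file with one row per HSM network interface. The file name, hostnames, MAC addresses, and IP addresses below are hypothetical placeholders; the actual format is agreed with your account representative during onboarding.

```shell
# Hypothetical hand-off inventory: one row per HSM network interface.
# All hostnames, MACs, and IPs below are placeholder values.
cat > hsm-inventory.csv <<'EOF'
hostname,interface,mac_address,ip_address
hsm-r1-01,mgmt,00:10:8b:aa:00:01,192.168.10.11
hsm-r1-01,data,00:10:8b:aa:00:02,192.168.20.11
hsm-r2-01,mgmt,00:10:8b:bb:00:01,192.168.10.12
hsm-r2-01,data,00:10:8b:bb:00:02,192.168.20.12
EOF
wc -l < hsm-inventory.csv
```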
Network requirements will be discussed in more detail with your account representative during the onboarding process.
A pair of racks at a single location is covered by a 99.9% SLA.
A full deployment across two locations provides a 99.99% SLA.
Applications should be designed to take advantage of this redundancy model. An application should be able to fail over from zone 1 to zone 2 within a single location (that is, rack to rack).
Enabling the global routing feature allows HSMs at either location to reach Google Cloud resources in any region.
Note that a single Interconnect failure isn't an SLA violation.
This high-level diagram shows the required connectivity to achieve a 99.99% SLA on the service.
- Each region deployment contains a minimum of two racks for your use, and one switch per rack.
- The top-of-rack switches are provided by Google and have vendor diversity.
- Each top-of-rack switch has a 10G Partner Interconnect connection, with redundant VLAN attachments to redundant Cloud Routers.
- Each HSM should have a minimum of two 1 GE copper network interfaces, with redundant connections to both top-of-rack switches for both the management and data interfaces.
- You provide the IP allocations for the HSM networks.
- Top-of-rack switches advertise their locally attached subnets to the pair of Cloud Routers.
- You enable global dynamic routing in your Virtual Private Cloud (VPC) to allow access to the HSMs from any Google Cloud region where you've deployed resources. Global dynamic routing is also required to meet the requirements for 99.99% availability.
- BGP sessions between the top-of-rack switches and the Cloud Routers in your project exchange reachability information to route between Google Cloud project resources and the HSMs.
The following steps are completed by you to enable your HSMs to be hosted with Google. For each set of racks in a region:
Create a redundant pair of Cloud Routers per region using ASN 16550. See Creating Cloud Routers for more information.
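As a sketch, the router pair might be created with gcloud. The network, region, and router names below are placeholders; ASN 16550 is the ASN that Partner Interconnect requires on Cloud Routers.

```shell
# Sketch: create a redundant pair of Cloud Routers in one region.
# "my-vpc", "us-east4", and the router names are placeholder values.
gcloud compute routers create hsm-router-1 \
    --network=my-vpc \
    --region=us-east4 \
    --asn=16550

gcloud compute routers create hsm-router-2 \
    --network=my-vpc \
    --region=us-east4 \
    --asn=16550
```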
Create two redundant pairs of VLAN attachments with Partner Interconnect per region using the Cloud Routers from the previous step. Create the attachments with the pre-activation option enabled. There should be a total of four attachments per region. If the attachments were created without the pre-activation option enabled, you can activate the connections manually.
For more information on Partner Interconnect and pre-activation options, see Partner Interconnect overview.
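A sketch of one redundant pair of attachments follows. The attachment names, region, and routers are placeholders; `--admin-enabled` is the pre-activation flag, and the same pattern is repeated for the second pair to reach four attachments per region.

```shell
# Sketch: one redundant pair of Partner Interconnect VLAN attachments.
# Names, region, and router names are placeholder values; repeat for
# the second pair (four attachments per region in total).
gcloud compute interconnects attachments partner create hsm-attach-1a \
    --region=us-east4 \
    --router=hsm-router-1 \
    --edge-availability-domain=availability-domain-1 \
    --admin-enabled

gcloud compute interconnects attachments partner create hsm-attach-1b \
    --region=us-east4 \
    --router=hsm-router-2 \
    --edge-availability-domain=availability-domain-2 \
    --admin-enabled
```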
Enable global dynamic routing in the VPC.
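This step is a single routing-mode change on the VPC; "my-vpc" below is a placeholder network name.

```shell
# Sketch: switch the VPC to global dynamic routing so Cloud Router
# routes are usable from any region. "my-vpc" is a placeholder.
gcloud compute networks update my-vpc --bgp-routing-mode=global
```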
- To achieve 99.99% availability, use the steps in Establishing 99.99% Availability for Partner Interconnect.
- Deployments in a single region have 99.9% availability until the second region is available. For this case, see Establishing 99.9% Availability for Partner Interconnect.
Configure firewall rules as needed to allow traffic between your premises and project resources.
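A minimal sketch of such a rule follows. The network name, source ranges, and ports are placeholders; use the ranges you allocated for the HSM networks and whatever protocols and ports your HSM client software actually needs.

```shell
# Sketch: allow traffic from the HSM subnets to project resources.
# Network name, source ranges, and ports are placeholder values.
gcloud compute firewall-rules create allow-hsm-traffic \
    --network=my-vpc \
    --direction=INGRESS \
    --source-ranges=192.168.10.0/24,192.168.20.0/24 \
    --allow=tcp:443,tcp:22
```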
If you're interested in hosting your HSMs with Google, reach out to your account representative for additional assistance.