Google Cloud for OpenStack Users

This article is designed to equip people familiar with OpenStack with the key concepts to get started with Google Cloud, whether you’re considering migrating or implementing a hybrid deployment.

This article first compares sample 3-tier web application architectures on OpenStack and on Google Cloud, and then compares features of OpenStack and Google Cloud and provides quick reference tables that explain how OpenStack terms and concepts correspond to those in Google Cloud.

This article does not compare the SDKs, APIs, or command-line tools provided by OpenStack and Google Cloud.

For a complete list of Google Cloud products, see Products & Services.

Cloud computing services

Cloud computing services provide a set of baseline services, which include compute, storage, networking, access management, and often database services.

OpenStack’s baseline services include:

  • Compute: OpenStack Compute (Nova), with VM images provided by the OpenStack Image service (Glance)
  • Storage: OpenStack Block Storage (Cinder)
  • Networking: OpenStack Networking (Neutron)
  • Identity and access management: OpenStack Identity Service (Keystone)

Google Cloud's baseline services include:

  • Compute: Compute Engine
  • Storage: Persistent Disk and Cloud Storage
  • Networking: Cloud Virtual Network
  • Identity and access management: Cloud Identity and Access Management (IAM)

Pricing on Google Cloud

Most services on Google Cloud are offered on a pay-as-you-go pricing model that includes per-second or per-minute billing and automatic discounts as usage increases. Combined with flexible resource allocation, such as custom machine types, the Google Cloud pricing model helps you run your applications cost-effectively.

For quick price estimation tools and details, see the Google Cloud Pricing Calculator.

Why Google Cloud?

For the past 15 years, Google has been building one of the fastest, most powerful, and highest-quality cloud infrastructures on the planet. Internally, Google uses this infrastructure for several high-traffic and global-scale services, including Gmail, Maps, YouTube, and Search. Google Cloud puts this infrastructure at your service.

Comparing sample architectures

This section compares how you might build a 3-tier web application system on OpenStack and on Google Cloud.

  • The OpenStack configuration results in an active-backup configuration.
  • The Google Cloud configuration uses managed services to achieve an active-active configuration.

A typical 3-tier web application consists of the following components:

  • Load balancer
  • Web server
  • Application server
  • Database server

OpenStack: Active-backup configuration

The sample OpenStack configuration shown in figure 1 has the following characteristics:

  • You deploy resources across two regions as two separate failure domains for redundancy.
  • The network uses a single subnet for all tiers in each region, and all servers are virtual machine (VM) instances.
  • You define security groups for the four server roles and assign them to the appropriate instances.
  • A Cinder volume is attached to the database server as a data volume. To ensure redundancy across failure domains, the database in the active region is backed up to object storage and restored to the database server in the backup region when necessary; a CLI sketch of this follows figure 1. (This architecture does not use real-time database replication, because bandwidth between regions is limited.)
  • This architecture provides an active-backup configuration. When you fail over the service to the backup region, you restore the most recent backup to the database server in that region.

Figure 1: A 3-tier web application system on OpenStack
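
As a rough illustration, the following OpenStack CLI commands sketch how the data volume and backup pieces of this architecture might be provisioned. All names, the volume size, and the backup file are hypothetical.

```bash
# Create a Cinder data volume and attach it to the database server
# (names and size are hypothetical).
openstack volume create --size 100 db-data
openstack server add volume db-server db-data

# Back up a database dump to Swift object storage so that it can be
# restored in the backup region when needed.
openstack container create db-backups
openstack object create db-backups dump.sql
```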

Google Cloud: Active-active configuration

The sample Google Cloud configuration shown in figure 2 has the following characteristics:

  • You deploy resources in a single subnet that covers multiple zones in a single region for redundancy.
  • You deploy web servers and application servers as VM instances (a CLI sketch follows figure 2).
  • You define firewall rules using tags to specify packet destinations.
  • You configure access control for the database connection based on the source IP address as part of the Cloud SQL configuration. Cloud SQL is a Google Cloud managed service for MySQL databases that provides data replication between multiple zones. Scheduled data backups and the failover process are automated, and you can access a single Cloud SQL instance from multiple zones in the same region. Failover between zones is transparent to applications running on VM instances. (Cloud SQL has two generations: Cloud SQL First Generation and Cloud SQL Second Generation. They have different characteristics and default behaviors for replication and failover.)
  • Google Cloud Load Balancing is an HTTP(S) load-balancing service with which you can use a single, global IP address (VIP) to distribute client access to multiple regions and zones.
  • The architecture provides an active-active configuration across multiple zones, resulting in a redundant service across failure domains.

Figure 2: A 3-tier web application system on Google Cloud
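
As a minimal sketch of the multi-zone deployment described above, the following gcloud commands launch web servers in two zones of the same region; the zone names, instance names, and tag are hypothetical.

```bash
# Launch web servers in two zones of the same region for redundancy.
gcloud compute instances create web-1 --zone us-central1-b --tags web-server
gcloud compute instances create web-2 --zone us-central1-c --tags web-server
```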

Comparing features

This section compares details about the features used in the sample architectures.

Network architecture

This section compares how OpenStack and Google Cloud networks work within regions and zones, discusses projects and tenants, IP addresses and failover, and firewall rules.

Regions and zones

Here’s how OpenStack’s terms and concepts map to those in Google Cloud:

| OpenStack | Google Cloud | Notes |
| --- | --- | --- |
| Region | Region | In OpenStack, a region can span multiple data centers. In Google Cloud, a region corresponds to a data center campus that usually has multiple independent buildings. |
| Availability zone | Zone | In OpenStack, an availability zone is commonly used to identify a set of servers that have a common attribute. In Google Cloud, a zone corresponds to a cluster within a data center. |

In OpenStack, a region is defined as a single cluster that is managed by dedicated controller nodes. A region consists of availability zones. You use availability zones to define multiple groups of resources, such as compute nodes and storage enclosures.

In Google Cloud, a region is an independent geographic area that consists of zones. Locations within a region tend to have round-trip network latencies of under 5 ms at the 95th percentile. Like an OpenStack availability zone, a zone is a deployment area for Google Cloud resources within a region. A zone should be considered a single failure domain.

Google Cloud provides a dedicated internal network across all regions, so bandwidth and latency between different zones in the same region are comparable to bandwidth and latency within a single zone. You can deploy a tightly coupled cluster application across multiple zones without worrying about bandwidth and latency between zones.

In both platforms, deploying your application across multiple zones or regions can help protect against unexpected failures. Zones are considered independent failure domains in Google Cloud. In contrast, in OpenStack, regions (instead of availability zones) are considered independent failure domains, because availability zones in a single region share the same controller nodes. For a list of Google Cloud regions and zones, see Cloud Locations.
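
To see which failure domains are available to your project, you can list regions and zones with the gcloud command-line tool:

```bash
# List the regions and zones available to your project.
gcloud compute regions list
gcloud compute zones list
```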

Projects and tenants

Here’s how OpenStack’s terms and concepts map to those in Google Cloud:

| OpenStack | Google Cloud | Notes |
| --- | --- | --- |
| Tenant | Project | All resources in Google Cloud must belong to a project, and users can create their own new projects. In OpenStack, only administrators can create new tenants. |

You must set up a Google account to use Google Cloud services. Google Cloud groups your service usage by project. You can create multiple, wholly separate projects under the same account, and your users can create their own projects under their accounts. Resources within a single project can work together easily, for example by communicating through an internal network, subject to regions-and-zones rules. Resources in each project are isolated from those in other projects; you can only interconnect them through an external network connection.

This model allows you to create project spaces for separate divisions or groups within your company. This model can also be useful for testing purposes: after you're done with a project, you can delete the project, and all of the resources created by that project are deleted as well.

OpenStack uses the term tenant to refer to similar functionality for isolating resources for different groups. In OpenStack, only systems administrators can create new tenants.
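
For example, you might create a throwaway project for testing and delete it when you're done. A minimal sketch, using a hypothetical project ID (project IDs must be globally unique):

```bash
# Create a project for testing (the project ID is hypothetical).
gcloud projects create example-scratch-project

# Deleting the project also deletes the resources created in it.
gcloud projects delete example-scratch-project
```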

Network configurations

Here’s how OpenStack’s terms and concepts map to those in Google Cloud:

| OpenStack | Google Cloud | Notes |
| --- | --- | --- |
| Neutron | Cloud Virtual Network | Cloud Virtual Network is a managed network service. In Google Cloud, you can define multiple networks in your project, and each network is independent. |
| Virtual private network | Virtual private network | A single Google Cloud virtual private network spans all regions, connected through the internal network of Google Cloud. A Neutron virtual private network spans all the availability zones in a region. |

In a typical OpenStack Neutron deployment, each tenant's virtual network is contained in its own private network space. As figure 1 shows, you don't have to use multiple subnets, but you can configure multiple subnets if you need to. Figure 3 shows an OpenStack virtual network that consists of a single virtual router and three virtual switches, two of which connect to the external network through the virtual router. AZ-1 and AZ-2 are availability zones.

Figure 3: Example of an OpenStack network configuration

Google Cloud offers two types of network: legacy and subnet.

The legacy network provides a single private subnet that spans all regions around the globe, as shown in figure 4.

In a subnet network, shown in figure 5, each subnet corresponds to a virtual switch in an OpenStack Neutron network, and the subnets are connected to each other through Google Cloud's internal network. All subnets are routable to each other over private IP addresses, so you don't have to use global IP addresses or the internet for inter-region communication.

In Google Cloud, you can use a mix of legacy networks and subnet networks in a project. Each newly created project includes a predefined network named default, which in turn contains a subnet named default in each region.

Figure 4: Example of a Google Cloud legacy network configuration

Figure 5: Example of a Google Cloud subnet network configuration
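
A minimal sketch of creating a subnet network with one subnet, using hypothetical names and an arbitrary RFC 1918 range; note that the exact mode flag has varied across gcloud releases:

```bash
# Create a custom-mode network, then add a subnet in one region.
gcloud compute networks create my-network --subnet-mode custom
gcloud compute networks subnets create my-subnet \
    --network my-network --region us-central1 --range 10.1.0.0/16
```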

IP addresses

Here’s how OpenStack’s terms and concepts map to those in Google Cloud:

| OpenStack | Google Cloud |
| --- | --- |
| Instances have private internal IP addresses. A global IP address (a floating IP address) can be assigned if necessary. | Instances have private internal IP addresses. In addition, when an instance is created using the gcloud command-line tool, an ephemeral external IP address is automatically assigned. Static IP addresses can also be assigned if necessary. |
| Global IP addresses are required for communication between instances in different regions or under different virtual routers. | Subnets in all regions are routable to each other over private IP addresses, so you don't have to use global IP addresses or the internet for communication between instances. |

In Neutron, VM instances in a private network communicate through virtual switches and routers using private IP addresses assigned at launch. Global IP addresses are not assigned by default.

The virtual router handles access from VM instances to the external network, so that instances share the common global IP address assigned to the virtual router. To expose an instance to the external network and allow connections from external users, you assign a floating IP address to the instance from a pool of global IP addresses.

Because the virtual network is contained in a single region, instances in different regions must use floating IP addresses to communicate over the external network.

A single network can include multiple virtual routers. VM instances connected to different routers can't communicate directly with private IP addresses; however, they can communicate through the external network by using floating IP addresses.

In Google Cloud, all VM instances have an internal IP address and an external IP address at launch. You can detach an external IP address if necessary.

By default, an external IP address is ephemeral, meaning that it is tied to the life of the instance. When you shut down an instance and then start it again, it might be assigned a different external IP address. You can also request a permanent IP address, called a static IP address, which remains yours until you explicitly release it. You can attach static IP addresses to and detach them from VM instances as needed.
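
A sketch of reserving a static external IP address and swapping it onto an instance; the names, zone, and access-config name are hypothetical or version-dependent:

```bash
# Reserve a regional static external IP address.
gcloud compute addresses create web-ip --region us-central1

# Replace the instance's ephemeral external address with the reserved one.
# Use the literal address shown by `gcloud compute addresses describe web-ip`.
gcloud compute instances delete-access-config web-1 \
    --zone us-central1-b --access-config-name "external-nat"
gcloud compute instances add-access-config web-1 \
    --zone us-central1-b --address RESERVED_IP_ADDRESS
```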

As the sample 3-tier web application architectures illustrate, in OpenStack you cannot use the same global IP address in different regions, so on failover you must rely on an additional mechanism, such as Dynamic DNS, to let clients keep accessing the service through the same URL.

In Google Cloud, on the other hand, you can use a single, global IP address (VIP) provided by Cloud Load Balancing to distribute client access to multiple regions and zones. This provides failover between regions and zones, which is transparent to clients.


Firewall rules

Here’s how OpenStack’s terms and concepts map to those in Google Cloud:

| OpenStack | Google Cloud | Notes |
| --- | --- | --- |
| Enforces firewall rules through security groups. | Enforces firewall rules through rules and tags. | An OpenStack security group contains multiple ACLs, which you define by instance role and then assign to an instance. OpenStack security groups must be defined for each region. Google Cloud rules and tags can be defined once and used across all regions. |

In OpenStack, a single security group contains multiple access control lists (ACLs), which are independent of VM instances. You assign a security group to a VM instance to apply the ACLs to the instance. Usually you define security groups according to VM instance roles, such as web server or database server.

For example, for the sample architecture, you define the following security groups for each region:

| Security group | Source | Protocol |
| --- | --- | --- |
| load-balancer | any | HTTP/HTTPS |
| load-balancer | Management Subnet | SSH |
| web-server | load-balancer | HTTPS |
| web-server | Management Subnet | SSH |
| web-application | web-server | TCP 8080 |
| web-application | Management Subnet | SSH |
| database | web-application | MySQL |
| database | Management Subnet | SSH |

You can specify the packet source with a subnet range or the name of a security group. In the preceding table, Management Subnet stands for a subnet range from which system administrators sign in to the guest OS for maintenance purposes.

This architecture assumes that client SSL is terminated at the load balancer, which communicates to web servers by using HTTPS. Web servers communicate to application servers with TCP 8080. MySQL is used for the database server.

After defining these security groups, you assign them to each instance as follows:

| Instance | Security group |
| --- | --- |
| Load balancer | load-balancer |
| Web server | web-server |
| Application server | web-application |
| Database server | database |
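
The following OpenStack CLI commands sketch how one of these groups might be created and assigned; the names and the management subnet range are hypothetical.

```bash
# Create the web-server security group.
openstack security group create web-server

# Allow HTTPS from the load-balancer group and SSH from the
# (hypothetical) management subnet.
openstack security group rule create --protocol tcp --dst-port 443 \
    --remote-group load-balancer web-server
openstack security group rule create --protocol tcp --dst-port 22 \
    --remote-ip 192.0.2.0/24 web-server

# Assign the group to a web server instance.
openstack server add security group web-1 web-server
```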

In Google Cloud, using Cloud Virtual Network, you can define firewall rules that span all regions by using a combination of rules and tags. A tag is an arbitrary label associated with a VM instance, and you can assign multiple tags to a single VM instance. A rule is an ACL that allows packets to flow when they match the following conditions:

  • Source: IP range, subnet, tags
  • Destination: tags

For example, you first define a rule that allows packets with destination port TCP 80 from any IP address to reach instances with the web-server tag. You can then add the web-server tag to any VM instance to allow HTTP connections to it. You manage tags, which specify instance roles, and rules, which specify the ACLs that correspond to those roles, separately.
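
A sketch of this example as gcloud firewall rules; the rule names are hypothetical:

```bash
# Allow HTTP from any source to instances tagged web-server.
gcloud compute firewall-rules create allow-http \
    --network default --allow tcp:80 \
    --source-ranges 0.0.0.0/0 --target-tags web-server

# Allow TCP 8080 from web servers to application servers.
gcloud compute firewall-rules create allow-app \
    --network default --allow tcp:8080 \
    --source-tags web-server --target-tags web-application
```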

Figure 6 illustrates some predefined rules for the default network in a newly created project. For example, the source of one predefined rule is a subnet range that contains the underlying subnets in all regions; this range can differ from project to project. From top to bottom, the rules allow the following network connections:

  • Internet Control Message Protocol (ICMP) packets from any external source.
  • Any packets between all instances connected to the default network.
  • Remote Desktop Protocol (RDP) connection from any external source.
  • Secure Shell (SSH) connection from any external source.

Figure 6: Predefined firewall rules in Cloud Virtual Network for the default network

For more information, see Using Networks and Firewalls.

In Google Cloud firewall rules, you use tags to specify packet destinations, and subnet ranges or tags to specify packet sources. In this scenario, you define the following rules:

| Source | Destination | Protocol |
| --- | --- | --- |
| Load balancer subnet range | web-server | HTTP, HTTPS |
| web-server | web-application | TCP 8080 |
| Management Subnet | web-server, web-application | SSH |

In the preceding table, the load balancer subnet range is a range of IP addresses from which the load balancer and health-checking system access the web servers. Management Subnet stands for a subnet range from which systems administrators sign in to the guest OS for maintenance purposes. web-server and web-application are tags assigned to the web server and application server, respectively. Instead of assigning rules to instances as you would with security groups, you assign tags to instances according to their roles.

Access control for the database connection is configured based on the source IP address as part of the Cloud SQL configuration.


Storage

Here’s how OpenStack’s terms and concepts map to those in Google Cloud:

| OpenStack | Google Cloud | Notes |
| --- | --- | --- |
| Cinder volumes | Persistent disks | Instance-attached persistent storage. With Google Cloud persistent disks, data is automatically encrypted before it goes from the instance to persistent disk storage. |
| Ephemeral disks | N/A | Instance-attached ephemeral storage |
| OpenStack Swift | Cloud Storage | Object storage services |

Ephemeral disks and Cinder volumes

OpenStack provides two options for instance-attached disks: ephemeral disks and Cinder volumes.

Ephemeral disks were designed to be used as system disks that contain operating system files, and Cinder volumes were designed to store persistent application data. Because live migration is not available with ephemeral disks, however, people often use Cinder volumes for system disks, too.

When an ephemeral disk is used as a system disk, an OS template image is copied into the local storage of a compute node, and the local image is attached to the instance. When the instance is destroyed, the attached image is also destroyed.

Cinder volumes provide a persistent disk area that resides in external storage devices. In typical deployments, the block device is attached to the compute node using the iSCSI protocol, and attached to the instance as a virtual disk. When the instance is destroyed, the attached volume remains in the external device and can be reused from another instance.

Application software running on the instance can also access the object storage provided by OpenStack Swift.

ephemeral disk and Cinder volume

Figure 7: Comparison of an ephemeral disk and a Cinder volume in OpenStack

Persistent disks

Google Cloud provides persistent storage attached to an instance in the form of persistent disks, which are similar to Cinder volumes in OpenStack. You use a persistent disk as a system disk that contains operating system files and to store persistent application data. The data is automatically encrypted before it goes from the instance to persistent disk storage. You can extend a single persistent disk up to 64 TB, but the maximum total capacity of the attached disks is restricted based on the instance size. For more information, see Storage Options.
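
A minimal sketch of creating a persistent disk and attaching it to a running instance, with hypothetical names, size, and zone:

```bash
# Create a 200 GB persistent disk and attach it to an instance.
gcloud compute disks create data-disk --size 200GB --zone us-central1-b
gcloud compute instances attach-disk web-1 \
    --disk data-disk --zone us-central1-b
```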

When you need high-performance local storage, you can also use local solid-state drives (SSDs). Each local SSD is 375 GB, and you can attach up to eight local SSD devices per instance, for 3 TB of total local SSD storage. Live migration is available for instances with local SSDs attached, but because the data on local SSDs is copied during live migration, storage performance might temporarily decrease.

Application software running on the instance can access the object storage provided by Cloud Storage, a hosted service for storing and accessing large numbers of binary objects, or blobs, of varying sizes.

VM instances

Here’s how OpenStack’s terms and concepts map to those in Google Cloud:

| OpenStack | Google Cloud | Notes |
| --- | --- | --- |
| Instance type | Machine type | In OpenStack, general users cannot create custom instance types. In Google Cloud, any user can create custom machine types. |
| Metadata service | Metadata server | OpenStack provides information only about instances. Google Cloud provides information about projects, too. |

When you launch a new VM instance in OpenStack, you choose an instance type to specify the instance size, such as the number of vCPUs and the amount of memory. If you have an appropriate access right assigned by the system administrator, you can define additional instance types. General users are not allowed to add custom instance types.

In Google Cloud, instance sizes are defined as machine types. In addition to choosing one from the predefined machine types, a user can change the number of vCPUs and the amount of memory separately to create a custom machine type. For more information, see Machine Types.
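
For example, the following sketch creates an instance with a custom machine type of 2 vCPUs and 12 GB of memory; the instance name and zone are hypothetical:

```bash
# Create an instance with a custom machine type.
gcloud compute instances create custom-vm \
    --zone us-central1-b --custom-cpu 2 --custom-memory 12GB
```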


Metadata service

OpenStack provides a metadata service to retrieve VM instance information from the instance guest operating system (guest OS), such as the instance type, security groups, and assigned IP addresses. You can also add custom metadata in key-value form. OpenStack also provides a type of metadata called user-data: you can specify an executable text file as user-data when you launch a new instance, and the cloud-init agent running in the guest OS executes startup configuration tasks according to its contents, such as installing application packages. You use URLs under http://169.254.169.254/ to access metadata from the guest OS.

Google Cloud provides a metadata server for retrieving information about instances and projects from the instance guest OS. Project metadata is shared by all instances in the project. You can add custom metadata in key-value form for both instances and projects. Google Cloud also provides special metadata keys called startup-script and shutdown-script, which are executed when you start up or shut down the instance, respectively.

Unlike OpenStack user-data, startup-script is executed every time the instance restarts. Both startup-script and shutdown-script are handled by the agent included in the compute-image-packages package, which is preinstalled in the OS template images. For more information, see Storing and Retrieving Instance Metadata.

You use one of the following URLs to access metadata from the guest OS:

  • http://metadata.google.internal/computeMetadata/v1/
  • http://metadata/computeMetadata/v1/
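
For example, from inside the guest OS you can query the metadata server with curl (the Metadata-Flavor header is required), and you can supply a startup script when creating an instance; the instance name and script file are hypothetical:

```bash
# Query instance metadata from inside the guest OS.
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/hostname"

# Provide a startup script when creating an instance.
gcloud compute instances create web-1 \
    --zone us-central1-b --metadata-from-file startup-script=startup.sh
```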

Guest operating system agent

| OpenStack | Google Cloud | Notes |
| --- | --- | --- |
| cloud-init (configuration-tasks agent package) | compute-image-packages (configuration-tasks agent package) | The OpenStack agent handles only initial boot settings. The Google Cloud agent handles initial boot settings and dynamic configuration changes while the instance is running. |

The Google Cloud agent handles initial boot settings and dynamic configuration changes while the instance is running.

In OpenStack, the agent package called cloud-init is preinstalled in the standard guest OS images. It handles the initial configuration tasks at the first boot time, such as extending the root filesystem space, storing an SSH public key, and executing the script provided as user-data.

In Google Cloud, the agent package called compute-image-packages is preinstalled in the standard guest OS images. It handles the initial configuration tasks through startup-script at the first boot time, and also handles dynamic system configuration while the guest OS is running, such as adding new SSH public keys and changing network configurations for HTTP load balancing. You can generate and add a new SSH key through Google Cloud Console or the gcloud command-line tool after you launch an instance.

Cloud SDK is preinstalled in the standard guest OS images on Google Cloud. You can use the SDK to access Google Cloud services from the guest OS by using a service account that is granted certain privileges by default.
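
For example, connecting with gcloud generates an SSH key pair if you don't already have one and propagates the public key to the instance; the instance name and zone are hypothetical:

```bash
# Connect to an instance; gcloud creates and propagates an SSH key if needed.
gcloud compute ssh web-1 --zone us-central1-b
```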

Access control

Access control in Google Cloud is enforced per instance by IAM. For example, if you want your application running on an instance to store data in Google Cloud Storage, you can assign the read-write permission to the instance. With IAM, you don't have to prepare passwords or credential codes manually for applications running on the instance. For more information, see Creating and Enabling Service Accounts for Instances.
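
A sketch of granting Cloud Storage read-write access to applications on an instance by setting the instance's service account scopes at creation time; the instance name and zone are hypothetical:

```bash
# Launch an instance whose default service account can read and write
# Cloud Storage (storage-rw is a predefined scope alias).
gcloud compute instances create worker-1 \
    --zone us-central1-b --scopes storage-rw
```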

OpenStack Keystone provides access control for service APIs based on user account, but does not provide instance-based access control for application APIs, such as read-write permission on object storage or database. You can implement custom access control for application APIs if necessary.

Beyond IaaS: Platform as a service

This article compares the components and services offered by OpenStack and Google Cloud that are essential for IaaS. But to make the most of Google Cloud in terms of availability, scalability, and operational efficiency, you should combine managed services as building blocks for your entire system.

The full range of Google Cloud services extends well beyond the traditional concept of an IaaS platform, including managed services such as BigQuery for data analytics, Dataproc for data processing, and Stackdriver for monitoring, logging, and diagnostics.

What's Next

  • Learn more about integrating Dataproc with storage, computing, and monitoring services across Google Cloud to create a complete data-processing platform.
  • Learn about monitoring, logging, and diagnostics using Google Stackdriver.
  • Learn more about BigQuery, a fully managed data warehouse for analytics that provides a serverless platform, so you can focus on data analytics using SQL even with petabyte-scale datasets.
  • Learn more about Google Cloud Products & Services.