Migration to Google Cloud: Building your foundation

This document helps you create the basic cloud infrastructure for your workloads. It can also help you plan how this infrastructure supports your applications. This planning includes identity management, organization and project structure, and networking.

This document is part of a multi-part series about migrating to Google Cloud. If you're interested in an overview of the series, see Migration to Google Cloud: Choosing your migration path.

The following diagram illustrates the path of your migration journey.

Migration path with four phases.

This document is useful if you're planning a migration to Google Cloud from an on-premises environment, from a private hosting environment, or from another cloud provider, or if you're evaluating the opportunity to migrate and want to explore what it might look like. The Enterprise onboarding checklist and this document can help you understand the available products, services, and options as you build your foundation on Google Cloud.

When planning your migration to Google Cloud, you need to understand an array of topics and concepts related to cloud architecture. A poorly planned foundation can cause your business to face delays, confusion, and downtime, and can put the success of your cloud migration at risk. This guide provides an overview of Google Cloud foundation concepts and decision points.

Each section of this document poses questions that you need to ask and answer for your organization before building your foundation on Google Cloud. These questions are not exhaustive; they are meant to facilitate a conversation between your architecture teams and business leadership about what is right for your organization. Your plans for infrastructure, tooling, security, and account management are unique for your business and need deep consideration. When you finish this document and answer the questions for your organization, you're ready to begin the formal planning of your cloud infrastructure and services that support your migration to Google Cloud.

Enterprise considerations

Consider the following questions for your organization:

  • Which IT responsibilities might change between you and your infrastructure provider when you move to Google Cloud?
  • How can you support or meet your regulatory compliance needs—for example, HIPAA or GDPR—during and after your migration to Google Cloud?
  • How can you control where your data is stored and processed in accordance with your data residency requirements?

Shared responsibility model

The shared responsibilities between you and Google Cloud might be different than those you are used to, and you need to understand their implications for your business. The processes you previously implemented to provision, configure, and consume resources might change.

Review the Terms of Service and the Google security model for an overview of the contractual relationship between your organization and Google, and the implications of using a public cloud provider.

Compliance, security, and privacy

Many organizations have compliance requirements around industry and government standards, regulations, and certifications. Many enterprise workloads are subject to regulatory scrutiny, and can require attestations of compliance by you and your cloud provider. If your business is regulated under HIPAA or HITECH, make sure you understand your responsibilities and which Google Cloud services are regulated. For information about Google Cloud certifications and compliance standards, see the Compliance resource center. For more information about region-specific or sector-specific regulations, see Google Cloud and the General Data Protection Regulation (GDPR).

Trust and security are important to every organization. Google Cloud implements a shared security model for many services, such as the shared security model for Google Kubernetes Engine and the shared responsibility matrix for PCI DSS.

The Google Cloud trust principles can help you understand our commitment to protecting the privacy of your data and your customers' data. For more information about Google's design approach for security and privacy, read the Google infrastructure security design overview.

Data residency considerations

Geography can also be an important consideration for compliance. Make sure that you understand your data residency requirements and implement policies for deploying workloads into new regions to control where your data is stored and processed. Understand how to use resource location constraints to help ensure that your workloads can only be deployed in preapproved regions. You need to account for the regionality of different Google Cloud services when choosing the deployment target for your workloads. Make sure that you understand your regulatory compliance requirements and how to implement a governance strategy that helps you ensure compliance.

Resource hierarchy

Consider the following questions for your organization:

  • How do your existing business and organizational structures map to Google Cloud?
  • How often do you expect changes to your resource hierarchy?
  • How do project quotas impact your ability to create resources in the cloud?
  • How can you incorporate your existing cloud deployments with your migrated workloads?
  • What are best practices for managing multiple teams working simultaneously on multiple Google Cloud projects?

Your current business processes, lines of communication, and reporting structure should be reflected in the design of your Google Cloud resource hierarchy. The resource hierarchy provides the necessary structure to your cloud environment, determines how you are billed for resource consumption, and establishes a security model for granting roles and permissions. You need to understand how these facets are implemented in your business today, and plan how to migrate these processes to Google Cloud.

Understand Google Cloud resources

Resources are the fundamental components that make up all of Google Cloud services. The Organization resource is the apex of the Google Cloud resource hierarchy. All resources that belong to an organization are grouped under the organization node. This structure provides central visibility and control over every resource that belongs to an organization.

An Organization can contain one or more folders, and each folder can contain one or more projects. You can use folders to group related projects.

Cloud projects contain service resources such as Compute Engine virtual machines (VMs), Pub/Sub topics, Cloud Storage buckets, Cloud VPN endpoints, and other Google Cloud services. You can create resources by using the Google Cloud console, Cloud Shell, or the Cloud APIs. If you expect frequent changes to your environment, consider adopting an infrastructure as code (IaC) approach to streamline resource management.
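
To make the hierarchy concrete, the following sketch models an organization node with a folder and a project, and resolves a project's ancestry from the root down. The names are illustrative, and this is a plain data-structure model, not a call to the Resource Manager API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in a simplified resource hierarchy: organization, folder, or project."""
    kind: str
    name: str
    children: list = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

def ancestry(root: Node, target: str, path=()) -> tuple:
    """Return the chain of nodes from the root down to the named resource."""
    path = path + (f"{root.kind}/{root.name}",)
    if root.name == target:
        return path
    for child in root.children:
        found = ancestry(child, target, path)
        if found:
            return found
    return ()

# Hypothetical organization: one folder grouping one project.
org = Node("organization", "example.com")
team = org.add(Node("folder", "retail-team"))
team.add(Node("project", "retail-prod"))

print(ancestry(org, "retail-prod"))
# ('organization/example.com', 'folder/retail-team', 'project/retail-prod')
```

Because every resource sits under exactly one parent, walking this ancestry is also how billing, IAM policy inheritance, and organization policies find the settings that apply to a project.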

Manage your Cloud projects

Google Cloud enforces quotas on resource usage on a per-project basis. These quotas set a limit on how much of a particular Google Cloud resource your project can use. Quota management is critical for any migration to Google Cloud, so you should incorporate detailed quota planning into your migration strategy.

Also, because quotas are enforced per project, the shape of your resource hierarchy affects them: when you plan your Google Cloud foundation, having more projects can give you more quota headroom. For example, review the documentation for Virtual Private Cloud (VPC) resource quotas to better understand the quota implications of choosing a standard VPC or a Shared VPC.
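
Quota planning can be as simple as comparing each migration wave's planned resource usage against per-project limits. The quota figures in the following sketch are hypothetical; check your project's real limits on the Quotas page of the Google Cloud console.

```python
# Hypothetical per-project quota limits, for illustration only.
QUOTAS = {"CPUS": 24, "IN_USE_ADDRESSES": 8}

def quota_shortfalls(planned_usage: dict, quotas: dict = None) -> dict:
    """Return each metric where planned usage exceeds the project quota,
    mapped to the size of the shortfall."""
    quotas = quotas or QUOTAS
    return {metric: needed - quotas.get(metric, 0)
            for metric, needed in planned_usage.items()
            if needed > quotas.get(metric, 0)}

# A migration wave that needs 32 vCPUs won't fit under a 24-vCPU quota:
print(quota_shortfalls({"CPUS": 32, "IN_USE_ADDRESSES": 4}))
# {'CPUS': 8}
```

Running a check like this per project, before each wave, tells you whether to request quota increases in advance or split the workloads across more projects.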

For help with planning your Google Cloud resource hierarchy, see Decide a resource hierarchy for your Google Cloud landing zone. If you're already working in Google Cloud and have created independent projects as tests or proofs-of-concept, you can migrate existing Cloud projects into your organization.

Identity and Access Management

Consider the following questions for your organization:

  • Who will control, administer, and audit access to Google Cloud resources?
  • How will your existing security and access policies change when you move to Google Cloud?
  • How will you securely enable your users and apps to interact with Google Cloud services?

Identity and Access Management (IAM) lets you grant granular access to Google Cloud resources. Cloud Identity is a separate but related service that can help you migrate and manage your identities. At a high level, understanding how you want to manage access to your Google Cloud resources forms the basis for how you provision, configure, and maintain IAM.

Understand identities

Google Cloud uses identities for authentication and access management. To access any Google Cloud resources, a member of your organization must have an identity that Google Cloud can understand. Cloud Identity is an identity as a service (IDaaS) platform that lets you centrally manage users and groups who can access Google Cloud resources. By setting up your users in Cloud Identity, you can set up single sign-on (SSO) with thousands of third-party software as a service (SaaS) applications. The way you set up Cloud Identity depends on how you currently manage identities. There are three common ways to set up identities in Google Cloud:

  • Managing identities directly in Google Cloud.
  • Using your existing Google Workspace accounts.
  • Using an existing identity provider (IdP).

If you want to manage identities directly in Google Cloud, you can use Cloud Identity without any preexisting IdP. If you are a Google Workspace customer, you can use the same identity you use for Google Workspace apps to authenticate corporate users in a hybrid environment. If you already have an IdP within your organization, you can use your existing IdP with Google Cloud.

Migrate identities

If you already have an IdP in your current environment, identity federation lets you continue to manage identities in your current IdP while supporting SSO. Enterprise IT departments often rely on existing directory services to manage user accounts and to control access to applications and computing resources.

Federation works by synchronizing your identities from your current IdP to Google Cloud and delegating authentication to your existing IdP. With this configuration, users sign in only once; because they're already authenticated by your IdP, federated users can also access Google Cloud resources.

With Google Cloud Directory Sync, you can synchronize identities and groups from your Lightweight Directory Access Protocol (LDAP) server to Google Cloud. Google Cloud Directory Sync communicates with Google Cloud over secure connections and usually runs in your existing computing environment.

Understand access management

The model for managing access consists of four core concepts:

  • Principal: A Google user, a service account (for apps and services rather than people), a Google Group, or a Google Workspace or Cloud Identity account that can access a resource. Principals can perform only the actions that they're granted permissions for.
  • Role: A collection of permissions.
  • Permission: Determines what operations are allowed on a resource. When you grant a role to a principal, you grant all the permissions that the role contains.
  • IAM policy: Binds a set of principals to a role. When you want to define which principals have access to a resource, you create a policy and attach it to the resource.

Proper setup and effective management of principals, roles, and permissions forms the backbone of your security posture in Google Cloud. Access management helps protect you from internal misuse, and helps protect you from external attempts at unauthorized access to your resources.
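
To see how these four concepts fit together, the following sketch mirrors the binding structure that IAM policies use (a list of bindings, each tying one role to a set of members) and checks whether a principal holds a permission. The role-to-permission mapping here is illustrative, not the real role definitions.

```python
# A policy shaped like the JSON that IAM policies use: bindings of role -> members.
policy = {
    "bindings": [
        {"role": "roles/storage.objectViewer",
         "members": ["user:analyst@example.com"]},
        {"role": "roles/storage.admin",
         "members": ["group:storage-admins@example.com"]},
    ]
}

# Hypothetical mapping of roles to the permissions they contain.
ROLE_PERMISSIONS = {
    "roles/storage.objectViewer": {"storage.objects.get", "storage.objects.list"},
    "roles/storage.admin": {"storage.objects.get", "storage.objects.list",
                            "storage.objects.delete", "storage.buckets.create"},
}

def allowed(policy: dict, member: str, permission: str) -> bool:
    """True if any binding grants `member` a role that contains `permission`."""
    return any(
        member in binding["members"]
        and permission in ROLE_PERMISSIONS.get(binding["role"], set())
        for binding in policy["bindings"]
    )

assert allowed(policy, "user:analyst@example.com", "storage.objects.get")
assert not allowed(policy, "user:analyst@example.com", "storage.objects.delete")
```

Note that permissions are never granted directly to a principal: access always flows through a role in a binding, which is why auditing bindings is equivalent to auditing access.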

Understand application access

In addition to users and groups, there is another kind of identity known as a service account. A service account is an identity that your programs and services can use to authenticate and gain access to Google Cloud resources.

Service accounts are either user-managed or Google-managed. User-managed service accounts include service accounts that you explicitly create and manage by using IAM, and the Compute Engine default service account that is created automatically in Cloud projects that enable the Compute Engine API. Google-managed service accounts are created automatically and run internal Google processes on your behalf.

When using service accounts, it's important to understand application default credentials, and follow our recommended best practices for service accounts to avoid exposing your resources to undue risk. The most common risks involve privilege escalation or accidental deletion of a service account that a critical application relies on.

Follow best practices

Proper configuration of IAM can help you log and audit any actions performed by your users. Make sure that you understand roles and permissions, and grant roles according to the principle of least privilege. Consult IAM role recommendations periodically to identify over-granted permissions to IAM users.

When you design your IAM policies, avoid common pitfalls. Remember that a policy set on a child resource cannot restrict access granted on its parent. Check the policy granted on every resource and make sure that you understand the hierarchical inheritance. Know when to use basic, predefined, and custom roles, organization policy constraints, and how to test permissions. The IAM best practices can also help explain common implementation patterns.
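
The additive nature of inheritance can be sketched as follows: the effective bindings on a resource are the union of the bindings on the resource and all of its ancestors, so an empty child policy cannot revoke a folder-level grant. The names and roles are illustrative.

```python
# A simple ancestry chain, root first. Names are hypothetical.
hierarchy = ["organization/example.com", "folder/retail-team", "project/retail-prod"]

# Bindings set at each level, as (member, role) pairs.
policies = {
    "folder/retail-team": {("user:dev@example.com", "roles/viewer")},
    "project/retail-prod": set(),  # an empty child policy does NOT revoke the folder grant
}

def effective_bindings(resource: str) -> set:
    """Union of bindings on the resource and all of its ancestors."""
    idx = hierarchy.index(resource)
    out = set()
    for ancestor in hierarchy[: idx + 1]:
        out |= policies.get(ancestor, set())
    return out

print(effective_bindings("project/retail-prod"))
# {('user:dev@example.com', 'roles/viewer')}
```

This is why over-broad grants near the top of the hierarchy are so costly: every descendant project inherits them, and nothing set lower down can subtract them.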

Treat each component of your application as a separate trust boundary. If you have multiple services that require different permissions, create a separate service account for each of the services, and then grant only the required permissions to each service account. Enforce multi-factor authentication for your user accounts, and help ensure secure access from mobile devices. Google has codified its principle of least privilege and its concept of zero-trust security into a new model of enterprise security called BeyondCorp.

Billing

Consider the following questions for your organization:

  • How will your existing financial tracking methods change or adapt to Cloud Billing?
  • Where will you have your bill sent and what are your payment options?
  • How will the pay-as-you-go billing model change how you allocate capital and procure IT services?
  • How will application owners attribute their cloud spending to existing cost centers?

Assess your billing requirements

How you pay for Google Cloud resources that you consume is an important consideration for your business, and an important part of your relationship with Google Cloud. You can manage billing in the Google Cloud console with Cloud Billing alongside the rest of your cloud environment.

The concepts of resource hierarchy and billing are closely related, so it's critical that you and your business stakeholders understand them. Make sure that you thoroughly assess your current accounting and reporting requirements so that you can set up Cloud Billing accounts in a way that satisfies those requirements. If you have other Google Cloud accounts across your organization, you can migrate Cloud Billing accounts to your organization.

On-premises billing concerns can involve maintenance, power, and cooling of physical hardware and facilities. Cloud Billing has its own unique concerns, including the use of multiple accounts, budgets, labels, and reporting options.

Manage access to Cloud Billing

Cloud Billing accounts define who pays for a defined set of resources. Each billing account is connected to a payments profile, where a payment method is set up. When you create a Cloud Billing account, you decide who in your organization is the Cloud Billing admin. You also define who has access to the Cloud Billing account and which actions they're allowed to perform in it. For more information, see Cloud Billing access control.

To help control costs and enforce budget limits, learn how to create budgets and set alerts to be notified when your resource usage reaches certain thresholds.
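
Budget alerts fire when spend crosses threshold rules that you define as fractions of the budget. The following sketch shows that threshold logic using the common 50%, 90%, and 100% rules; the amounts are examples.

```python
def crossed_thresholds(spend: float, budget: float,
                       thresholds=(0.5, 0.9, 1.0)) -> list:
    """Return the alert thresholds (as fractions of the budget) that spend has reached."""
    return [t for t in thresholds if spend >= t * budget]

# $950 of spend against a $1,000 monthly budget trips the 50% and 90% alerts:
print(crossed_thresholds(950, 1000))
# [0.5, 0.9]
```

Note that a budget alert notifies you; by itself it does not stop resource consumption, so pair alerts with governance processes that act on them.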

Manage and analyze costs

Resources are grouped by Google Cloud project, and a Cloud Billing Account is linked to one or more projects to determine who pays for which resources. You can use labels for more fine-grained cost attribution such as charging usage back to individual teams or cost centers. Labels are useful when exporting your billing data to BigQuery for detailed analysis. You can also use advanced tooling, such as Orbitera, to help manage complex billing arrangements.

If you plan to use Cloud Billing labels to enhance your visibility into your organization's cloud spending, you should incorporate the labeling into your cloud governance strategy. These efforts help to ensure that every resource launched in Google Cloud has the appropriate labels attached.
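
As a sketch of label-based attribution, the following groups rows shaped loosely like a BigQuery billing export by one label key. The label names and costs are illustrative.

```python
from collections import defaultdict

# Rows loosely shaped like a billing export: each has a cost and the labels
# attached to the resource (keys and values here are hypothetical).
rows = [
    {"cost": 12.40, "labels": {"team": "checkout", "env": "prod"}},
    {"cost": 3.10,  "labels": {"team": "checkout", "env": "dev"}},
    {"cost": 7.75,  "labels": {"team": "search",   "env": "prod"}},
    {"cost": 0.80,  "labels": {}},  # unlabeled spend is hard to attribute
]

def cost_by_label(rows, key):
    """Sum cost per value of one label key; unlabeled rows fall under 'unattributed'."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["labels"].get(key, "unattributed")] += row["cost"]
    return {label: round(total, 2) for label, total in totals.items()}

print(cost_by_label(rows, "team"))
# {'checkout': 15.5, 'search': 7.75, 'unattributed': 0.8}
```

The "unattributed" bucket is the reason labeling belongs in your governance strategy: every unlabeled resource shows up as spend that no cost center owns.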

Connectivity and networking

Consider the following questions for your organization:

  • How will software-defined networking change how you manage connectivity between workloads?
  • How can you set up network firewall rules in Google Cloud?
  • How can you deploy your workloads globally?
  • What strategies are available to connect your existing on-premises environment with your new cloud environment?

Assess your networking requirements

Your network architecture determines how your Google Cloud resources are segmented, and how they communicate with your existing on-premises environment, the public internet, any third-party services, and other Google resources. This architecture depends on a combination of your current network architecture, compliance and regulatory requirements, scalability and disaster recovery considerations, performance requirements, and Google Cloud networking best practices. You need to understand how these Google Cloud networking concepts relate to your current infrastructure.

When planning your migration to Google Cloud, you need to decide which regions and zones your workloads are migrated to. Make sure you understand how these decisions relate to your business objectives and regulatory compliance requirements.

Understand VPC

Virtual Private Cloud (VPC) is a private network space within Google Cloud that you control, and is the foundational component of your network architecture. Configuring your VPC has important security and governance considerations. When configured properly, VPC can help strengthen your foundation and accelerate your migration.

VPC gives you the flexibility to scale across multiple zones and regions and configure how your workloads are connected and deployed. This multi-regional capability is crucial in planning your organization's disaster recovery efforts and planning for global availability of your products and services.

You can run a single VPC for your entire organization and share it among many Cloud projects or you can set up multiple VPCs. You can also take advantage of Shared VPCs to connect resources from multiple Cloud projects to a common VPC network, or use VPC peering to connect multiple VPCs across different projects or organizations.

Firewall management in Google Cloud might be different from what your networking teams are used to, so you need to understand how your existing firewall configuration translates to VPC firewall rules to help protect your workloads and minimize the attack surface of your cloud estate.

Understand regions and zones

Google Cloud hosts its infrastructure in many different locations so you can choose where to deploy your resources, and lets you scale your workloads to take advantage of Google's global network.

Regions are independent geographic areas that consist of zones. A zone is a single deployment area for Google Cloud resources within a region. Consider zones as a single failure domain within a region. Resources in Google Cloud can be global, multi-regional, regional, or zonal. For example, VPCs and their associated routes and firewall rules are global, which means that they aren't associated with any particular region or zone. There are many factors to consider when choosing how to deploy your resources; it's important to understand how geography affects your billing, network architecture, performance, disaster recovery, and regulatory compliance.

Understand connectivity options

Your workloads might need to connect to your on-premises environment and the public internet. Take the time to understand the hybrid connectivity options provided by Google Cloud for managing connectivity with your on-premises or multi-cloud environments. Each connectivity option has different attributes, and you should find the product that best meets your business requirements. Each approach has important security, performance, and compliance implications.

You can connect your VPC to your on-premises network using the following methods:

  • Cloud VPN lets you connect securely to your Google Cloud network over the internet through an IPsec VPN tunnel, and can be set up quickly.
  • Cloud Interconnect creates a dedicated, enterprise-grade, highly available, low-latency connection to your VPC with flexible bandwidth options. Establishing a Cloud Interconnect connection takes more time than a Cloud VPN connection, and usually requires working with a service provider for Partner Interconnect.
  • Direct Peering and Carrier Peering also provide dedicated, enterprise-grade connections, with options that might lower your bandwidth charges and decrease the setup time if you meet Google's requirements.

Depending on your business needs, you can use a combination of VPN, Cloud Interconnect, or peering. Many of the most common connectivity options are outlined in VPC best practices. Once you have chosen and implemented your connectivity options, the Cloud Router service can exchange routing information with your on-premises network using the Border Gateway Protocol (BGP).

You can use Cloud NAT to help provide your private VMs and Google Kubernetes Engine (GKE) clusters secure, controlled access to the public internet.

Plan IP addressing in the cloud

When planning your Google Cloud environment, you must understand the different IP addressing options available. Begin by exploring the default IP addressing options that you have when creating a VPC. You can use auto mode to automatically create a subnet with predefined private IP ranges in each region. Or, you can use a custom mode network, which lets you decide which subnets to create by using only the regions and IP ranges that you specify. If you build your VPC with auto mode, you can convert it to custom mode later, but you cannot convert from custom mode back to auto mode. Always plan production networks before you deploy them; we recommend using custom mode in production environments.
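
If you choose custom mode, subnet planning amounts to carving non-overlapping per-region ranges out of a larger block. The following sketch uses Python's ipaddress module; the CIDR block and region list are examples, not recommendations, and should not overlap with your on-premises ranges.

```python
import ipaddress

# Carve per-region subnets out of one RFC 1918 block for a custom mode VPC.
block = ipaddress.ip_network("10.128.0.0/16")
regions = ["us-central1", "europe-west1", "asia-east1"]

# Split the /16 into /20 subnets (4,094 usable addresses each) and assign
# one per region, keeping the remainder in reserve for future growth.
subnets = list(block.subnets(new_prefix=20))
plan = dict(zip(regions, subnets))

for region, subnet in plan.items():
    print(f"{region}: {subnet}")
# us-central1: 10.128.0.0/20
# europe-west1: 10.128.16.0/20
# asia-east1: 10.128.32.0/20

# Sanity check: no two planned subnets overlap.
assert not any(a.overlaps(b) for a in plan.values() for b in plan.values() if a is not b)
```

Working out the plan this way, before creating any subnets, avoids the painful renumbering that overlapping ranges force later, especially once VPC peering or hybrid connectivity is involved.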

VMs can have one primary internal IP address, one or more secondary IP addresses, and one external IP address. To communicate between instances on the same VPC network, you can use the internal IP address for the instance. To communicate with the public internet, you must use the instance's external IP address unless you have configured a proxy or Cloud NAT. Similarly, you must use the instance's external IP address to connect to instances outside of the same VPC network unless the networks are connected through VPC peering or Cloud VPN. Both external and internal IP addresses can be either ephemeral or static.

Many other Google Cloud services have rules for IP addresses that can impact how you design your Google Cloud foundation. For example, you can create GKE clusters using routes-based or VPC-native rules. GKE has guidance on optimizing IP address allocation when designing your pods. Be sure to explore the IP design details of each service that you plan to use within Google Cloud.

Understand DNS options

Cloud DNS can serve as your public domain name system (DNS) server. A separate but similar service, called internal DNS, is included with your VPC. Instead of manually migrating and configuring your own DNS servers, you can use the internal DNS service for your private network. For more information, learn how internal DNS is different from Cloud DNS. If you own a block of IP addresses and want to use those addresses in Google Cloud, learn how to migrate your IP addresses. For a more detailed understanding of how you can implement Cloud DNS, see Cloud DNS best practices.

Understand data transfer

On-premises networking is managed and priced in a fundamentally different way than cloud networking. When managing your own data center or colocation facility, installing routers, switches, and cabling requires a fixed, upfront capital expenditure. In the cloud, you are billed for data transfer rather than the fixed cost of installing hardware, plus the ongoing cost of maintenance. Plan and manage data transfer costs accurately in the cloud by understanding data transfer costs.

When planning for traffic management, there are three ways you are charged:

  • Ingress traffic: Network traffic that enters your Google Cloud environment from outside locations. These locations can be from the public internet, on-premises locations, or other cloud environments. Ingress is free for most services on Google Cloud. Some services that deal with internet-facing traffic management, such as Cloud Load Balancing, Cloud CDN, and Google Cloud Armor charge based on how much ingress traffic they handle.
  • Egress traffic: Network traffic that leaves your Google Cloud environment in any way. Egress charges apply to many Google Cloud services, including Compute Engine, Cloud Storage, Cloud SQL, and Cloud Interconnect.
  • Regional and zonal traffic: Network traffic that crosses regional or zonal boundaries in Google Cloud can also be subject to bandwidth charges. Like egress charges, cross-regional and cross-zonal traffic charges apply to many Google Cloud services, and they can influence how you design your apps for high availability and disaster recovery. For example, sending traffic to a database replica in another zone is subject to cross-zonal traffic charges.
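
A rough way to reason about these charges is to classify each flow by traffic type and multiply by a per-GiB rate. The rates in the following sketch are placeholders for illustration only; real pricing varies by service, destination, and volume, so consult the Google Cloud pricing pages for actual figures.

```python
# Placeholder per-GiB rates, for illustration only (not real Google Cloud pricing).
RATES = {"ingress": 0.00, "cross_zone": 0.01, "egress_internet": 0.12}

def transfer_cost(gib_by_type: dict) -> float:
    """Estimate a monthly data transfer bill from GiB moved per traffic type."""
    return round(sum(RATES[kind] * gib for kind, gib in gib_by_type.items()), 2)

# 500 GiB in (free), 2,000 GiB between zones (for example, to a database
# replica), and 100 GiB out to the internet:
print(transfer_cost({"ingress": 500, "cross_zone": 2000, "egress_internet": 100}))
# 32.0
```

Even with placeholder rates, a model like this makes the architectural trade-off visible: a chatty cross-zone replication design can cost more per month than the occasional internet egress it protects against.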

Automate and control your network setup

In Google Cloud, the physical network layer is virtualized, and you deploy and configure your network using software-defined networking (SDN). You can think of SDN as the networking subset of IaC. To ensure that your network is configured in a consistent and repeatable way, you need to understand how to automatically deploy and tear down your environments. You can use IaC tools, like Deployment Manager. Google Cloud also supports open source IaC tooling such as Terraform.

Security

Consider the following questions for your organization:

  • How do you map your existing security tools and processes to your new Google Cloud environment?
  • What management and process changes do you need to make to take advantage of cloud-based security services and adapt to a modern cloud security model?
  • How does Google maintain security and privacy of your private data in Google Cloud?
  • How can you protect your cloud environment from data exfiltration and other evolving security threats?

Understand the Google Cloud security model

The way you manage and maintain the security of your systems in Google Cloud, and the tools you use, are different than when managing an on-premises infrastructure. Your approach will change and evolve over time to adapt to new threats, new products, and improved security models. Zero-trust security is a new approach to enterprise security that was pioneered by Google, and is likely different from the security model that you're currently employing for your on-premises workloads.

Most companies use firewalls to help enforce perimeter security. However, this security model is problematic because when that perimeter is breached, an attacker has relatively easy access to a company's privileged intranet.

Your migration might involve adopting new tools, technologies, practices, and attitudes toward enterprise security that can improve your security posture and help protect your mission-critical workloads from attacks, and your data from manipulation and exfiltration. It's important to learn how to adapt your current practices and tooling for the zero-trust security environment. For more information, read the BeyondCorp whitepaper.

The responsibilities that you share with Google might be different than the responsibilities of your current provider, and understanding these changes is critical to ensuring the continued security and compliance of your workloads. Strong, verifiable security and regulatory compliance are often intertwined, and begin with strong management and oversight practices, consistent implementation of Google Cloud best practices, and active threat detection and monitoring.

Secure your data

Central to Google's security strategy are authentication, integrity, and encryption, for data that is both at rest and in transit. Encryption at rest helps protect your data from a system compromise or data exfiltration by encrypting data while stored. Encryption in transit helps protect your data if communications are intercepted while data moves between your on-premises environment and Google Cloud or between two Google Cloud services. Securely managing encryption keys is a vital aspect of your overall security and regulatory posture.

You can manage your encryption keys by using Cloud Key Management Service (Cloud KMS), which lets you keep your encryption keys in a central location. For example, knowing how to configure key rotation with Cloud KMS is not only a good security practice, but can be necessary to support compliance with PCI Data Security Standards. Make sure that you know when you need a hardware security module to protect your keys, how to manage secrets in Cloud KMS, and how to configure access to keys and keyrings in IAM.
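
A simple check behind any rotation policy is whether a key version has outlived its allowed rotation period. The following sketch uses a 90-day period as an example value only; Cloud KMS can rotate symmetric keys automatically on a schedule that you set, and your compliance requirements determine the right period.

```python
from datetime import datetime, timedelta, timezone

# Example rotation period; choose one that satisfies your compliance needs.
ROTATION_PERIOD = timedelta(days=90)

def rotation_overdue(last_rotation: datetime, now: datetime,
                     period: timedelta = ROTATION_PERIOD) -> bool:
    """True if the key version is older than the allowed rotation period."""
    return now - last_rotation > period

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
assert rotation_overdue(datetime(2024, 1, 1, tzinfo=timezone.utc), now)
assert not rotation_overdue(datetime(2024, 5, 1, tzinfo=timezone.utc), now)
```

In practice you would let Cloud KMS enforce the schedule automatically and use a check like this only for auditing keys that are rotated manually.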

For workloads that store or process sensitive customer data or internal business information, understand how to use Cloud Data Loss Prevention to help secure these workloads.

With VPC Service Controls, you can configure security perimeters around the resources of your Google Cloud services and control the movement of data across the perimeter boundary. VPC Service Controls operates independently of IAM, so your data can be protected even in cases where your IAM credentials are compromised.

Secure your apps

To run your applications securely in the cloud, you first need to assess the security needs of each of your applications. This assessment can help ensure proper configuration and management of your VPC, network routing policies, firewall rules, and IAM. In addition to understanding network-level policies, learn when to use and how to configure Identity-Aware Proxy as a central authorization layer for your applications. Other workloads might require OS-level protection from Shielded VMs for added security or to meet regulatory compliance requirements. Public-facing applications might require active DDoS protection by configuring Google Cloud Armor policies.

Manage risks

You and your security administrator should understand Security Command Center to help you track and monitor potential security risks across your entire organization. Security Command Center can help you remediate these risks before an attacker can exploit them. If there is a known risk in Google's core infrastructure, Google provides transparency into the process of reporting, communicating, and remediating such incidents. These policies and processes for managing incidents are the same that Google uses internally, and you can consider adopting these processes for your own organization to better secure your assets. A complete playbook for how Google handles incident response is in Chapter 9 of our Site Reliability Engineering book.

Monitoring and alerting

Consider the following questions for your organization:

  • How will your current monitoring solution interoperate with Google Cloud?
  • What options do you have for monitoring your services and Google Cloud-provided services?
  • How do you enable secure and transparent access to your services to comply with regulators and auditors?

Assess current monitoring and alerting solution

Your current enterprise IT environment likely employs one or more logging and monitoring solutions already. In Google Cloud, you can aggregate all logging, metrics, and events for your infrastructure into a metrics scope. It's important to understand your options for managing logging data that is generated from your infrastructure and workloads.

If you want to centralize the management of your logging data, you can export logs from Cloud Logging to your existing solution, or import logs from your current tools. Cloud Logging supports a rich and growing ecosystem of integrations to expand the operations, security, and compliance capabilities available to Google Cloud customers.
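As an illustrative sketch of this export path, a log sink can route matching log entries to a Pub/Sub topic that your existing SIEM or log-management tool subscribes to. The project, topic, sink name, and log filter below are assumptions.

```shell
# Create a Pub/Sub topic that your existing logging tool can subscribe to.
gcloud pubsub topics create log-export

# Route audit log entries to the topic with a log sink
# (project, topic, and filter are hypothetical).
gcloud logging sinks create audit-to-siem \
    pubsub.googleapis.com/projects/my-project/topics/log-export \
    --log-filter='logName:"cloudaudit.googleapis.com"'
```

The sink writes through its own service account; the `sinks create` command prints the account you then need to grant publish permission on the topic.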

Understand logging

Cloud Logging aggregates logging data from many Google Cloud services for your teams to perform analysis, audit user activity, monitor workloads, and respond to issues. Your incident response teams can use Cloud Logging as the command center for detecting, responding to, and remediating app bugs and security incidents. The information captured by Cloud Logging provides visibility into your Google Cloud environment by using data collected from a wide variety of sources, including IAM audit logs, VPC flow logs, and agent logs. These logs flow into Cloud Logging in raw form, and are enriched by implementing metrics, alerts, and filters. Your teams need to understand what and how to monitor, how to analyze the data that is collected, and how to create pertinent and actionable alerts.

In addition to real-time monitoring, you can perform custom analyses on large logging datasets by exporting your Cloud Logging data to BigQuery. For example, you can query your logs to understand how application performance might correlate with internal user actions on Google Cloud.
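Once a sink routes logs into BigQuery, you might query them with the `bq` tool, as in the following sketch; the dataset and table names are assumptions that depend on how your sink is configured.

```shell
# Count admin-activity audit log entries per invoking user over the last day.
# The dataset and table names are hypothetical.
bq query --use_legacy_sql=false '
SELECT
  protopayload_auditlog.authenticationInfo.principalEmail AS user,
  COUNT(*) AS actions
FROM `my-project.audit_logs.cloudaudit_googleapis_com_activity_*`
WHERE _TABLE_SUFFIX >=
  FORMAT_DATE("%Y%m%d", DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))
GROUP BY user
ORDER BY actions DESC'
```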

Cloud Logging gathers important and often confidential data about your workloads and infrastructure. As with all Google Cloud services, it's important to understand how to control access to Cloud Logging using IAM. To review actions taken by authorized users, you can use Cloud Audit Logs. When you engage with Google Cloud support to help diagnose an issue, our support staff might require access to your environment. You can audit actions taken by the support team by enabling Access Transparency.

The metrics scope is your single pane of glass from which you can configure monitoring and alerting for multiple Cloud projects. In hybrid and multi-cloud environments, you can use Cloud Monitoring to monitor your on-premises infrastructure and your Amazon Web Services (AWS) environment. Because your metrics scope aggregates metrics from many sources, a best practice is to set up your metrics scopes and related tooling (such as BigQuery) in a separate Cloud project. You can then effectively tailor access to this information and avoid affecting project quotas.

Understand metrics and alerts

The primary tool for understanding the state of your infrastructure and workloads is the set of metrics in Cloud Monitoring. Metrics are observations that are collected by Cloud Monitoring to help you understand how your apps and system services are performing. Google Cloud provides over 1000 metric types to help you monitor Google Cloud, AWS, and other platforms. Depending on your use cases and business requirements, you can also create custom metrics to monitor other aspects of your workloads and infrastructure. A full understanding of metrics is important to create useful alerts. Alerts are triggered by configuring alerting policies.
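As a hedged sketch of an alerting policy, the following creates a policy from a JSON definition that fires when VM CPU utilization stays above 80% for five minutes. The display names and threshold are illustrative, and the `policies create` command is in the alpha track of the gcloud CLI.

```shell
# Write an illustrative alerting-policy definition.
cat > cpu-policy.json <<'EOF'
{
  "displayName": "High CPU utilization",
  "combiner": "OR",
  "conditions": [{
    "displayName": "VM CPU above 80% for 5 minutes",
    "conditionThreshold": {
      "filter": "metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\"",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 0.8,
      "duration": "300s"
    }
  }]
}
EOF

# Create the policy from the file (alpha-track gcloud command).
gcloud alpha monitoring policies create --policy-from-file=cpu-policy.json
```

In practice you would also attach notification channels (email, PagerDuty, and so on) so the alert reaches your on-call team.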

Metrics also feed into the data displayed in dashboards and charts in Cloud Monitoring. Learn how to incorporate transparent service-level indicators into your monitoring to help diagnose problems. You can determine if a problem might be due to one of your business workloads, or if it's due to a degradation in one or more Google Cloud services. You can also use the Google Cloud Status Dashboard for a quick overview of the current operating status of the Google Cloud infrastructure.

Interacting with Google Cloud

Consider the following questions for your organization:

  • How can your users and apps interact with Google Cloud services?
  • How can you automate the provisioning of new resources, and how will this affect your existing business processes?

Compare methods

There are several ways to interact with Google Cloud services:

  • Cloud APIs provide programmatic interfaces for interacting with Google Cloud from your app code. The Cloud APIs form the foundation of every interaction with Google Cloud.
  • Google Cloud console is a graphical user interface that lets you interact with Google Cloud in your browser.
  • The Cloud Client Libraries are developer-friendly libraries that enable your applications to interact with the Cloud APIs in your language of choice.
  • The Google Cloud CLI is a tool that lets you make calls to the Cloud APIs from the command line, and is useful for scripting repeatable tasks.
  • Infrastructure as Code (IaC) tools like Deployment Manager and Terraform let you manage your infrastructure deployments in source code and standardize provisioning of resources by using the Cloud APIs.
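To illustrate the scripting use case, a repeatable task might be captured in a small shell script like the following sketch; the project ID and the `env=dev` label are assumptions.

```shell
#!/usr/bin/env bash
# Stop every running VM labeled env=dev in a project (names are hypothetical).
set -euo pipefail

PROJECT="my-project"   # hypothetical project ID

gcloud compute instances list \
    --project="$PROJECT" \
    --filter="labels.env=dev AND status=RUNNING" \
    --format="value(name,zone)" |
while read -r name zone; do
  echo "Stopping $name in $zone"
  gcloud compute instances stop "$name" --zone="$zone" --project="$PROJECT"
done
```

Using `--format="value(...)"` makes the output machine-readable, which is what distinguishes scripted CLI use from interactive console use.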

Different people and systems interact with Google Cloud differently based on their role and function. For example, a member of your finance team might interact with Cloud Billing logs in BigQuery using the Google Cloud console. An automated deployment process might provision resources by making calls directly to Cloud APIs. Understanding the proper use of each method is important for proper governance and security of your cloud environment.

Product-specific interaction

Some services, such as Cloud Storage and GKE, provide their own tools for fine-grained interaction with those services. For example, Cloud Storage provides a tool called gsutil. It's important to understand this tool to optimize the migration of your data and correctly manage the security of your Cloud Storage buckets. In GKE, you can deploy clusters using the Google Cloud console or the Google Cloud CLI, but cluster management is performed by using the kubectl command-line tool.
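For example, a typical sequence might use gsutil for bucket-level work and kubectl for cluster management after fetching credentials with the gcloud CLI. The bucket, cluster, and zone names below are illustrative.

```shell
# Copy data into a bucket and inspect its IAM policy with gsutil.
gsutil -m cp -r ./data gs://my-migration-bucket/
gsutil iam get gs://my-migration-bucket/

# Fetch credentials for a GKE cluster, then manage it with kubectl.
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
kubectl get nodes
```

The `-m` flag runs the copy in parallel, which can matter when migrating large datasets.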

Automate your interactions

Google Cloud also offers Cloud Build, a managed service to support building and deploying code. For services that can take advantage of IaC, Deployment Manager lets you manage the lifecycle of your Google Cloud services and products.

You can automate interactions with specific resources by using the Cloud APIs. Within Google Cloud, the Cloud APIs are enabled per-project, and your governance strategy should determine who is allowed to enable the Cloud APIs and under what conditions. Usage of these APIs can consume resources from Google Cloud services, and incur costs, so make sure you understand how enabling the Cloud APIs can affect your billing.
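As a sketch of per-project API management, the following commands list the services enabled in a project and enable two more; the project ID is an assumption.

```shell
# List the services currently enabled in a project.
gcloud services list --enabled --project=my-project

# Enable the Compute Engine and BigQuery APIs. Enabling an API can allow
# billable resource usage, so gate this behind your governance process.
gcloud services enable compute.googleapis.com bigquery.googleapis.com \
    --project=my-project
```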

Governance

Consider the following questions for your organization:

  • How can you help your users meet their compliance needs and keep them aligned with your business policies?
  • What strategies are available to maintain and organize your Google Cloud users and resources?

Effective governance is critical for helping to ensure reliability, security, and maintainability of your assets in Google Cloud. As in any system, entropy naturally increases over time and, left unchecked, can result in cloud sprawl and other maintainability challenges. Without effective governance, the accumulation of these challenges can impact your ability to achieve your business objectives and to reduce risk. Disciplined planning and enforcement of standards around naming conventions, labeling strategies, access controls, cost controls, and service levels are an important component of your cloud migration strategy. More broadly, the exercise of developing a governance strategy creates alignment among stakeholders and business leadership.

Support continuous compliance

To help support organization-wide compliance of your Google Cloud resources, consider establishing a consistent resource naming and grouping strategy. Google Cloud provides several methods for annotating and enforcing policies on resources:

  • Security marks let you classify resources to provide security insights from Security Command Center, and to enforce policies on groups of resources.
  • Labels can track your resource spend in Cloud Billing, and provide extra insights in Cloud Logging.
  • Network tags control network traffic to and from a VM by identifying which VMs are subject to firewall and network rules.
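To illustrate how labels and network tags are applied and audited, the following sketch creates a VM with a cost-tracking label and a firewall-related network tag, then lists any VMs missing the required label. The instance name, zone, label keys, and tag are assumptions.

```shell
# Create a VM with a cost-tracking label and a network tag
# that firewall rules can target (all names are hypothetical).
gcloud compute instances create web-1 \
    --zone=us-central1-a \
    --labels=cost-center=retail,env=prod \
    --tags=allow-https

# Audit: find VMs that are missing the required cost-center label.
gcloud compute instances list --filter="-labels.cost-center:*"
```

Running the audit query on a schedule is a simple starting point for the compliance checks described in the next paragraph.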

For example, you can use Cloud Functions to serve as a Compliance-as-Code solution. In a Cloud Function, you can use the Cloud APIs to verify names and labels on provisioned resources and ensure that your standards are being followed. In some circumstances, you can automatically de-provision resources that violate your established standards. It's important to establish these standards and conventions during your migration planning process, and decide on how to enforce them. You can also prevent accidental deletion of critical resources, and you can use Cloud Functions to automatically enforce this protection for resources that have a particular label. Make sure you understand how you can use IAM conditions to alter access based on specific attributes of a resource.

Enabling your migration workforce

Consider the following questions for your organization:

  • Do you have the knowledge, experience, and personnel to plan and execute your migration to the cloud?
  • How can you access strategic guidance and technical assistance during your cloud migration journey?
  • After you're up and running in the cloud, where can you get help?

Assess your capabilities

Migration to Google Cloud is an important strategic transformation for your business. Organizations have numerous options for choosing how to break up the work of any cloud migration. Some companies might choose to use only in-house resources, others might completely outsource the work, but most fall somewhere in the middle. Understanding your own capabilities, including the knowledge and expertise of your current organization, will lead you to the right balance of these approaches.

Training staff should be a high priority for every organization. Google Cloud certifications enable your existing teams to demonstrate and validate the skills needed to fully use Google Cloud technology in your business. Google also offers various training options, including in-depth courses, hands-on labs, and direct training engagements.

Work with cloud experts

Google offers professional consulting services to help you navigate this journey through each stage of your migration to Google Cloud.

When you choose a consultant, you work with the experts who helped build the Google Cloud infrastructure. They can educate your team on best practices and guiding principles for a successful implementation.

Depending on your project needs, you might want to consider engaging with a Google Cloud partner. Partners are vetted by Google in one or more specialties and employ Google Cloud certified professionals. The right partner can bring expertise and provide hands-on guidance to help accelerate your cloud migration.

Understand support options

After your business is running on Google Cloud, Google Cloud Support can help prevent issues from arising in the first place. If you encounter a problem, support can help find the root of the problem quickly and provide long-lasting solutions that prevent the issue from happening again. Understand the different support options available to you, and decide which one is right for your business.

What's next