This document helps you plan, design, and implement the process of migrating your workloads to Google Cloud Platform (GCP). Moving apps from one environment to another is a challenging task, even for experienced teams, so you need to plan and execute your migration carefully.
This document is useful if you're planning a migration to GCP from an on-premises environment, from a private hosting environment, or from another cloud provider, or if you're evaluating the opportunity to migrate and want to explore what it might look like.
Beginning the journey
When planning your migration to GCP, you start by defining the environments that are involved in the migration. Your starting point can be either an on-premises environment, a private hosting environment, or another public cloud environment.
An on-premises environment is an environment where you have full ownership and responsibility. You retain full control over every aspect of the environment, such as cooling, physical security, and hardware maintenance.
In a private hosting environment such as a colocation facility, you outsource part of the physical infrastructure and its management to an external party. This infrastructure is typically shared between customers. In a private hosting environment, you don't have to manage the physical security and safety services. Some hosting environments let you manage part of the physical hardware appliances, such as servers, racks, and network devices, while others manage those appliances for you. Typically, power and network cabling are provided as a service so you don't have to manage them. You maintain full control over hypervisors that virtualize physical resources, the virtualized infrastructure that you provision, and workloads that you run on such infrastructure.
A public cloud environment has the advantage that you don't have to manage the whole resource stack by yourself. You can focus on what aspect of the stack is most valuable to you. Like in a private hosting environment, you don't have to manage the underlying physical infrastructure. Additionally, you don't have to manage the resource virtualization hypervisor. You can build a virtualized infrastructure and can deploy your workloads in this new infrastructure. You can also buy fully managed services, where you care only about your workloads, forgetting about all the operational burden to manage runtime environments.
For each environment, this document evaluates the following aspects as well as who should provide and manage the relevant services:
| Resources | On-premises environment | Private hosting environment | Public cloud environment |
|---|---|---|---|
| Physical security and safety | You | Service provider | Service provider |
| Power and network cabling | You | Service provider | Service provider |
| Hardware (incl. maintenance) | You | Depends on service provider | Service provider |
| Virtualization platform | You | You | Service provider |
| App resources | You | You | You (optionally leveraging fully managed services) |
In this document, the starting point can be an on-premises environment, a colocation facility, or a public cloud environment. The target environment is GCP.
After you define your starting and target environments, you define the workload types and the related operational processes that are in scope for the migration. This document considers two types of workloads and operations: legacy and cloud-native.
Legacy workloads and operations are developed without any consideration for cloud environments. These workloads and operations can be difficult to modify and expensive to run and maintain because they usually don't support any type of scalability.
Cloud-native workloads and operations are natively scalable, portable, available, and secure. The workloads and operations can help increase developer productivity and agility, because developers can focus on the actual workloads, rather than spending effort to manage development and runtime environments, or dealing with manual and cumbersome deployment processes. GCP also has a shared responsibility model for security, which means GCP is in charge of security of the cloud, while you are in charge of security in the cloud.
Considering these environment and workload types, your starting situation is one of the following:
- On-premises or private hosting environment with legacy workloads and operations.
- On-premises or private hosting environment with cloud-native workloads and operations.
- Public cloud or private hosting environment with legacy workloads and operations.
- Public cloud or private hosting environment with cloud-native workloads and operations.
The migration process can change, depending on these starting points.
Migrating a workload from a legacy on-premises or private hosting environment to a cloud-native environment, such as a public cloud, can be challenging and risky. Successful migrations modify the workload as little as possible during the migration operations. Moving legacy on-premises apps to the cloud often requires multiple migration steps.
Types of migrations
There are three major types of migrations:
- Lift and shift
- Improve and move
- Rip and replace
In the following sections, each type of migration is defined with examples of when to use each type.
Lift and shift
In a lift and shift migration, you move workloads from a source environment to a target environment with minor or no modifications or refactoring. You apply only the minimum changes needed for the workloads to operate in the target environment.
A lift and shift migration is ideal when a workload can operate as is in the target environment, or when there is little or no business need for change. This migration is the type that requires the least amount of time because the amount of refactoring is kept to a minimum.
There might be technical issues that force a lift and shift migration. If you cannot refactor a workload to migrate and cannot decommission the workload, you must use a lift and shift migration. For example, it might be difficult or impossible to modify the source code of the workload, or the build process might not be straightforward, making it impossible to produce new artifacts after refactoring the source code.
Lift and shift migrations are the easiest to perform because there is no need for new expertise and your team can use the same set of tools and skills. These migrations also support off-the-shelf software. Because you migrate existing workloads with minimal refactoring, lift and shift migrations tend to be the quickest, compared to improve and move or rip and replace migrations.
On the other hand, the results of a lift and shift migration are non-cloud-native workloads running in the target environment. These workloads don't take full advantage of cloud platform features, such as horizontal scalability, fine-grained pricing, and highly managed services.
Improve and move
In an improve and move migration, you modernize the workload while migrating it. In this type of migration, you modify the workloads to take advantage of cloud-native capabilities, and not just to make them work in the new environment. You can improve each workload for performance, features, cost, or user experience.
If the current architecture or infrastructure of an app isn't supported in the target environment as it is, a certain amount of refactoring is necessary to overcome these limits.
Another reason to choose the improve and move approach is when a major update to the workload is necessary in addition to the updates you need to make to migrate.
Improve and move migrations let your app leverage features of a cloud platform, such as scalability and high availability. You can also architect the improvement to increase the portability of the app.
On the other hand, improve and move migrations take longer than lift and shift migrations, because the workloads must be refactored before they can migrate. You need to evaluate the extra time and effort as part of the life cycle of the app.
An improve and move migration also requires that you master new skills.
Rip and replace
In a rip and replace migration, you decommission an existing app and completely redesign and rewrite it as a cloud-native app.
If the current app isn't meeting your goals—for example, you don't want to maintain it, it's too costly to migrate using one of the previously mentioned approaches, or it's not supported on GCP—you can do a rip and replace migration.
Rip and replace migrations let your app take full advantage of GCP features, such as horizontal scalability, highly managed services, and high availability. Because you're rewriting the app from scratch, you also remove the technical debt of the existing, legacy version.
However, rip and replace migrations can take longer than lift and shift or improve and move migrations. Moreover, this type of migration isn't suitable for off-the-shelf apps because it requires rewriting the app. You need to evaluate the extra time and effort to redesign and rewrite the app as part of its lifecycle.
A rip and replace migration also requires new skills. You need to use new toolchains to provision and configure the new environment and to deploy the app in that environment.
Google Cloud Adoption Framework
Before starting your migration, you should evaluate the maturity of your organization in adopting cloud technologies. The Google Cloud Adoption Framework serves both as a map for determining where your business information technology capabilities are now, and as a guide to where you want to be.
You can use this framework to assess your organization's readiness for GCP and what you need to do to fill in the gaps and develop new competencies, as illustrated in the following diagram.
The framework assesses four themes:
- Learn. The quality and scale of your learning programs.
- Lead. The extent to which your IT departments are supported by a mandate from leadership to migrate to GCP.
- Scale. The extent to which you use cloud-native services, and how much operational automation you currently have in place.
- Secure. The capability to protect your current environment from unauthorized and inappropriate access.
For each theme, you should be in one of the following three phases, according to the framework:
- Tactical. There are no coherent plans covering all the individual workloads you have in place. You're mostly interested in a quick return on investments and little disruption to your IT organization.
- Strategic. There is a plan in place to develop individual workloads with an eye to future scaling needs. You're interested in the mid-term goal to streamline operations to be more efficient than they are today.
- Transformational. Cloud operations work smoothly, and you use data that you gather from those operations to improve your IT business. You're interested in the long-term goal of making the IT department one of the engines of innovation in your organization.
When you evaluate the four themes in terms of the three phases, you get the Cloud Maturity Scale. In each theme, you can see what happens when you move from adopting new technologies when needed, to working with them more strategically across the organization—which naturally means deeper, more comprehensive, and more consistent training for your teams.
The migration path
It's important to remember that a migration is a journey. You are at point A with your existing infrastructure and environments, and you want to reach point B. To get from A to B, you can choose any of the options previously described.
The following diagram illustrates the path of this journey.
There are four phases of your migration:
- Assess. In this phase, you perform a thorough assessment and discovery of your existing environment in order to understand your app and environment inventory, identify app dependencies and requirements, perform total cost of ownership calculations, and establish app performance benchmarks.
- Plan. In this phase, you create the basic cloud infrastructure for your workloads to live in and plan how you will move apps. This planning includes identity management, organization and project structure, networking, sorting your apps, and developing a prioritized migration strategy.
- Deploy. In this phase, you design, implement and execute a deployment process to move workloads to GCP. You might also have to refine your cloud infrastructure to deal with new needs.
- Optimize. In this phase, you begin to take full advantage of cloud-native technologies and capabilities to expand your business's potential in areas such as performance, scalability, disaster recovery, costs, and training, as well as opening the doors to machine learning and artificial intelligence integrations for your app.
Migration phase 1: assess
In the assessment phase, you gather information about the workloads you want to migrate and their current runtime environment.
A key to a successful migration is understanding what apps exist in your current environment, including databases, message brokers, data warehouses, and network appliances, and the dependencies for each of them. You need to list all of your machines, hardware specifications, operating systems, and licenses, and which apps and services run on each of them.
After you take your inventory, you can build your catalog matrix to help you organize your apps into categories based on their complexity and risk in moving to GCP.
The following table is an example catalog matrix.
| | Doesn't have dependencies or dependents | Has dependencies or dependents |
|---|---|---|
| Easy to move | | |
| Hard to move | | |
This catalog matrix example contains two dimensions of assessment criteria. Your apps might require more dimensions or additional considerations. Create your matrix to include all of the unique requirements of your environment.
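To make the sorting concrete, the following hypothetical Python sketch places an app inventory into such a matrix. The app names and the `complexity` and `has_dependencies` fields are illustrative assumptions, not part of any GCP API or tool.

```python
# Hypothetical sketch: sorting an app inventory into a catalog matrix.
# The apps and their attributes are illustrative placeholders.

inventory = [
    {"name": "static-website", "complexity": "low", "has_dependencies": False},
    {"name": "billing-api", "complexity": "high", "has_dependencies": True},
    {"name": "batch-reports", "complexity": "low", "has_dependencies": True},
]

def catalog_cell(app):
    """Return the (row, column) matrix cell an app belongs to."""
    row = "easy to move" if app["complexity"] == "low" else "hard to move"
    col = ("has dependencies or dependents" if app["has_dependencies"]
           else "doesn't have dependencies or dependents")
    return row, col

matrix = {}
for app in inventory:
    matrix.setdefault(catalog_cell(app), []).append(app["name"])

for cell, apps in sorted(matrix.items()):
    print(cell, apps)
```

In a real assessment you would add more dimensions (data volume, compliance, licensing) as extra fields and extend `catalog_cell` accordingly.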
Educate your organization about GCP
As part of the assess phase, your organization needs to start learning about GCP. You need to train and certify your software and network engineers on how the cloud works and what GCP products they can leverage as well as what kind of frameworks, APIs, and libraries they can use to deploy workloads on GCP.
Experiment and design proofs of concept
Another important part of the assessment phase is choosing a proof of concept (PoC) and implementing it, or experimenting with GCP products to validate use cases or any areas of uncertainty.
Consider the following use cases:
- Verifying that a zone can spin up 50,000 virtual CPU cores.
- Implementing firewall rules for a complex workload.
- Comparing the performance of your on-premises databases to Cloud SQL, Cloud Spanner, Cloud Firestore, or Cloud Bigtable.
- Experimenting with the availability of regional GKE clusters.
- Testing the internal and external network latency of your apps on GCP.
- Evaluating the speed and reliability of a Cloud Build deployment pipeline for containers on GKE.
- Comparing Cloud Dataflow to Spark on Cloud Dataproc.
- Transferring data to BigQuery and running business-critical queries to test correctness.
- Evaluating Stackdriver to replace other logging mechanisms.
For each experiment, you measure the business impact, such as one of the following:
- If you observe a 95% reduction of the launch time to spin up 50,000 virtual CPU cores on GCP compared to your current environment, this reduces your time to market by a certain factor. This reduction also impacts the setup time of your disaster recovery environment by decreasing the downtime of your critical lines of business.
- If you gain the ability to have a globally available and always-on disaster recovery plan, you can increase the reliability of your app.
- By using cloud-scaling technology, you can lower your total cost of services by scaling down when your resource needs are low, and scaling up on-demand.
Calculate total cost of ownership
Building a total cost of ownership model lets you compare your costs on GCP with the costs you have today. There are tools that can help you, such as the GCP price calculator, and you can also leverage some of our partner offerings. Don't forget the operational costs of running on-premises or in your own data center: power, cooling, maintenance, and other support services impact the total cost of ownership.
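As a rough illustration of such a model, the following sketch compares three-year totals. Every figure is a made-up placeholder; real estimates should come from the GCP price calculator and your own accounting data.

```python
# Hedged sketch of a simple multi-year TCO comparison.
# All cost figures below are invented placeholders.

def tco(yearly_costs: dict, years: int = 3) -> float:
    """Total cost of ownership over a number of years."""
    return sum(yearly_costs.values()) * years

on_prem = {
    "hardware_amortization": 120_000,
    "power_and_cooling": 30_000,
    "maintenance_and_support": 45_000,
    "datacenter_space": 25_000,
}

cloud = {
    "compute_and_storage": 95_000,
    "network_egress": 12_000,
    "support_plan": 18_000,
}

print(f"On-premises 3-year TCO: ${tco(on_prem):,.0f}")
print(f"Cloud 3-year TCO:       ${tco(cloud):,.0f}")
```

A real model would also account for one-time migration costs, discounts such as sustained-use pricing, and staff time, none of which this sketch includes.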
Choose which workloads to migrate first
In order to prepare for your migration, you identify apps with features that make them likely first-movers. You can pick just one, or include many apps in your first-mover list. These first-movers let your teams run and test apps in the cloud environment, where they can focus on the migration instead of on the complexity of the apps. Starting with a less complex app lowers your initial risk because later you can apply your team's new knowledge to harder-to-migrate apps.
Identifying a first-mover can be complex, but good candidates usually satisfy many of the following workload criteria:
- Not business critical, so the main line of business isn't impacted by the migration, because your teams don't yet have significant experience with cloud technologies.
- Not an edge case, so it's easy to apply the same pattern to other workloads that you want to migrate.
- Can be used to build a knowledge base.
- Supported by a team that is highly motivated and eager to run on GCP.
- Moved by a central team that will move other workloads. Moving the first workload gives that team experience that can prove useful in future workload migrations.
- Dependency-light, for example stateless, because such workloads can move without impacting other workloads and with minimal configuration changes.
- Requires minimal app changes or refactoring.
- Doesn't need large quantities of data moved.
- Doesn't have strict compliance requirements.
- Doesn't require third-party proprietary licenses because some providers don't license their products for the cloud or might require a change in license type.
- Not impacted by downtime caused by a cutover window. For example, you can export data from your current database and then import it to a database instance on GCP during a planned maintenance window. Synchronizing two database instances to achieve a zero downtime migration is more complicated.
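One informal way to apply these criteria is to score each candidate by how many it satisfies. The following sketch assumes a simplified subset of the criteria; the criterion names and the example apps are illustrative, not a prescribed methodology.

```python
# Illustrative sketch: ranking first-mover candidates by how many
# criteria they satisfy. Criteria names and apps are assumptions.

CRITERIA = [
    "not_business_critical",
    "not_an_edge_case",
    "minimal_refactoring",
    "little_data_to_move",
    "no_strict_compliance",
    "no_proprietary_licenses",
    "tolerates_cutover_downtime",
]

def first_mover_score(app: dict) -> int:
    """Count how many first-mover criteria an app satisfies."""
    return sum(1 for c in CRITERIA if app.get(c, False))

apps = {
    "internal-wiki": {c: True for c in CRITERIA},
    "payments-core": {"not_an_edge_case": True},
}

ranked = sorted(apps, key=lambda a: first_mover_score(apps[a]), reverse=True)
print(ranked)
```

In practice you might weight the criteria differently, for example treating compliance requirements as a hard exclusion rather than one point among many.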
Migration phase 2: plan
In this phase, you provision and configure the cloud infrastructure and services that will support your workloads on GCP. Building a foundation of critical configurations and services is an evolving process. When you establish your rules, governance, and settings, make sure you allow room for changes later. Avoid making decisions that lock you in to a way of doing things. If you need to change things later on, you want to have options to support those changes.
To plan for your migration, you need to do the following:
- Establish user and service identities.
- Design your resource organization.
- Define groups and roles for resource access.
- Design your network topology and establish connectivity.
Establish user and service identities
In GCP, you have several identity types to choose from:
- Google Accounts. An account that usually belongs to an individual user that interacts with GCP.
- Service accounts. An account that usually belongs to an app or a service, rather than to a user.
- Google groups. A named collection of Google accounts.
- G Suite domains. A virtual group of all the Google accounts that have been created in an organization's G Suite account.
- Cloud Identity domains. These domains are like G Suite domains, but they don't have access to G Suite applications.
For more information, read about each identity type.
For example, you can federate GCP with Active Directory to establish consistent authentication and authorization mechanisms in a hybrid environment.
Design resource organization
After establishing the identities you need for your app, you grant them permissions on resources of your apps. You can do this by assigning roles to each identity. A role is a collection of permissions. A permission is a collection of operations that are allowed on a resource.
To avoid repeating the same configuration steps, you can organize your resources in different types of structures. These structures are organized in a hierarchy:
- Organizations are the root of a resource hierarchy and represent a real organization, such as a company. An organization can contain folders and projects. An organization admin can grant permissions on all the resources contained in that organization.
- Folders are an additional layer of isolation between projects and can be seen as sub-organizations in the organization. A folder can contain other folders and projects. An admin can use the folder to delegate admin rights.
- Projects are the base-level organization entities and must be used to access other GCP resources. Every resource instance you deploy and use is contained in a project.
Because resources inherit permissions from the parent node, you can avoid repeating the same configuration steps for resources with the same parent. You can find more details about the Cloud Identity and Access Management (Cloud IAM) inheritance mechanism in the policy inheritance section of the Resource Manager documentation.
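The inheritance mechanism can be illustrated with a minimal model: a resource's effective bindings are the union of its own bindings and those of all its ancestors. This is a simplified conceptual sketch, not the Cloud IAM API; the resource and member names are placeholders.

```python
# Minimal model of Cloud IAM policy inheritance down the resource
# hierarchy. Effective policy = bindings on the resource plus bindings
# on every ancestor, up to the organization.

class Resource:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.bindings = {}  # role -> set of members

    def grant(self, role, member):
        self.bindings.setdefault(role, set()).add(member)

    def effective_bindings(self):
        """Walk up to the organization root, merging inherited bindings."""
        merged = {}
        node = self
        while node is not None:
            for role, members in node.bindings.items():
                merged.setdefault(role, set()).update(members)
            node = node.parent
        return merged

org = Resource("organizations/example")
folder = Resource("folders/prod", parent=org)
project = Resource("projects/my-app-prod", parent=folder)

org.grant("roles/viewer", "group:auditors@example.com")
project.grant("roles/editor", "user:dev@example.com")

print(project.effective_bindings())
```

Note how the viewer role granted at the organization level is visible on the project, while the editor role granted on the project never propagates upward.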
Organizations, folders, and projects are resources and support a set of operations like all other GCP resources. You can interact with these resources like you would any other GCP resource. For example, you can automate the creation of your hierarchy by using the Resource Manager API.

You can organize the resource hierarchy according to your needs. The root node of each hierarchy is always an organization. The following sections describe types of hierarchies that you can implement in your organization. Each hierarchy type is characterized by its implementation complexity and its flexibility.
Environment-oriented hierarchy
In an environment-oriented hierarchy, you have one organization that contains one folder per environment.
The following diagram shows an example of an environment-oriented hierarchy.
The multiple environments are development, quality assurance, and production. In each environment, there are multiple instances deployed of the same two apps, My app 1 and My app 2.
This hierarchy is simple to implement because it has only three levels, but it can pose challenges if you have to deploy services that are shared by multiple environments.
Function-oriented hierarchy
In a function-oriented hierarchy, you have one organization that contains one folder per business function, such as information technology and management. Each business function folder can contain multiple environment folders.
The following diagram shows an example of a function-oriented hierarchy.
In this hierarchy, the multiple business functions are apps, management, and information technology. You can deploy multiple instances of My app, plus shared services, such as Jira and website.
This option is more flexible compared to environment-oriented hierarchies because it gives you the same environment separation, plus it allows you to deploy shared services. On the other hand, a function-oriented hierarchy is more complex to manage than an environment-oriented one, and it doesn't separate access by business unit, such as retail or financial.
Granular access-oriented hierarchy
In a granular access-oriented hierarchy, you have one organization that contains one folder per business unit, such as retail or financial. Each business unit folder can contain one folder per business function. Each business function folder can contain one folder per environment.
The following diagram shows an example of a granular access-oriented hierarchy.
In this hierarchy, there are multiple business units, multiple business functions, and environments. You can deploy multiple instances of the My app 1 and My app 2 apps and a shared service, Net host.
This hierarchy is the most flexible and extensible option. On the other hand, you need to spend a greater effort to manage the structure, roles, and permissions. The network topology can also be significantly more complex because the number of projects is higher compared to the other options.
Define groups and roles for resource access
You need to set up the groups and roles to grant the necessary access to resources. In GCP, you can delegate admin access to resources in your organization. At minimum, you need the following roles:
- An organization admin, who defines Cloud IAM policies and the hierarchy of the organization and its resources.
- A network admin, who creates and configures networks, subnetworks, and network devices, such as Cloud Router, Cloud VPN, and Cloud Load Balancing. An additional responsibility is to maintain firewall rules in collaboration with the security admin.
- A security admin, who establishes policies and constraints for the organization and its resources, configures new Cloud IAM roles for projects, and maintains visibility on logs and resources.
- A billing admin, who configures billing accounts and monitors resource usage and spending across the whole organization.
Design network topology and establish connectivity
The last step of the plan phase is to set up the network topology and connectivity from your existing environment to GCP.
After creating your projects and establishing identities, you should create at least one Virtual Private Cloud (VPC) network. VPCs let you have a private global addressing space, spanning multiple regions. Inter-regional communication doesn't use the public internet. You can create VPCs to segregate parts of your apps, or have a shared VPC spanning multiple projects. After setting up VPCs, you should also configure network flow logging and firewall rules logging by using Stackdriver. For more information about VPCs and how to set them up, see Best practices and reference architectures for VPC design.
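For example, when planning a custom-mode VPC you might carve one subnet per region out of a private address block. The following sketch uses only the Python standard library; the regions and CIDR sizes are illustrative choices, not recommendations.

```python
# Hedged sketch: planning per-region subnets for a custom-mode VPC
# out of a private address block. Regions and sizes are assumptions.
import ipaddress

vpc_block = ipaddress.ip_network("10.128.0.0/16")
regions = ["us-central1", "europe-west1", "asia-east1"]

# One /20 per region leaves room to add more regions later.
subnets = dict(zip(regions, vpc_block.subnets(new_prefix=20)))

for region, subnet in subnets.items():
    print(f"{region}: {subnet} ({subnet.num_addresses} addresses)")
```

Planning the address layout up front matters because subnet ranges must not overlap with your on-premises networks if you later connect them over Cloud VPN or Cloud Interconnect.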
GCP offers many hybrid connectivity options to connect your existing environment to your GCP projects:
- Public internet
- Cloud VPN
- Peering
- Cloud Interconnect
Connecting through the public internet is a simple and inexpensive connection option because it's backed by a resilient infrastructure that uses Google's existing edge network. On the other hand, this infrastructure isn't private or dedicated. The security of this option depends on the apps that exchange data on each connection. For this reason, we don't recommend using this type of connection to send unencrypted traffic.
Cloud VPN extends your existing network to GCP by using an IPSec tunnel. Traffic is encrypted and travels between the two networks over the public internet. While Cloud VPN requires additional configuration and can impact the throughput of your connection, it is often the best choice if you don't encrypt traffic at the app level and if you need to access private GCP resources.
Peering lets you establish a connection to Google's network over a private channel. There are two peering types:
- Direct peering lets you establish a direct peering connection between your network and Google's edge network. If you don't need to access private resources on GCP and if you meet Google's peering requirements, this is a good option. It doesn't have any Service Level Agreements (SLA), but this option lets you cut your egress fees over public internet access of Cloud VPN.
- Carrier peering lets you connect to Google's network by using enterprise-grade network services managed by a service provider. Although Google doesn't offer any SLA on this connectivity option, it might be covered by a service provider's SLA. When evaluating the pricing of this option, you should consider both GCP egress fees and service provider fees.
Cloud Interconnect extends your existing network to Google's network through a highly available connection. It doesn't provide any encrypted channel by default, so if you want to use this option, we recommend that you encrypt sensitive traffic at the app level. You can choose between two Cloud Interconnect options:
- Cloud Interconnect – Dedicated gives you high bandwidth private connections with a minimum of 10 Gbps, but requires routing equipment in a colocation facility. In other words, you have to meet Google at one of the points of presence (PoPs). Google provides an end-to-end SLA for Dedicated Interconnect connections, and you're charged based on the dedicated bandwidth and the number of attachments.
- Cloud Interconnect – Partner lets you use dedicated high-bandwidth private connections managed by a service provider, without requiring you to configure routing equipment in a Google colocation facility. Google provides an SLA for the connection between Google and the service provider. The service provider might offer an SLA for the connection between you and them. Partner Interconnect is charged based on the connection capacity and the amount of egress traffic through an interconnect. Additionally, you might be charged by the service provider for their service.
Migration phase 3: deploy
After building a foundation for your GCP environment, you can begin to deploy your workloads. You can implement a deployment process and refine it during the migration. You might need to revisit the foundation of your environment as you progress with the migration. New needs can arise as you become more proficient with the new cloud environment, platforms, services, and tools.
When designing the deployment process for your workloads, you should take into account how much automation and flexibility you need. There are multiple deployment process types for you to choose from, ranging from a fully manual process to a streamlined, fully automated one.
Fully manual deployments
A fully manual provisioning, configuration, and deployment process lets you quickly experiment with the platform and the tools, but it's also error prone, often undocumented, and not repeatable. For these reasons, we recommend that you avoid a fully manual deployment unless you have no other option. For example, you can manually create resources, such as a Compute Engine instance, by using the GCP Console, and then manually run the commands to deploy your workload.
Configuration management tools
A configuration management (CM) tool lets you configure an environment in an automated, repeatable, and controlled way. You can use a CM tool to configure the environment and to deploy your workloads. While this is a better process compared to a fully manual deployment, it typically lacks the features to implement an elaborate deployment, like a deployment with no downtime or a blue-green deployment. Some CM tools let you implement your own deployment logic and can be used to mimic those missing features. However, using a CM tool as a deployment tool can add complexity to your deployment process, and can be more difficult to manage and maintain than a dedicated deployment toolchain. Designing, building, and maintaining a customized deployment solution can be a large additional burden for your operations team.
Container orchestration
If you have already invested in containerization, you can go a step further and use a service such as Google Kubernetes Engine (GKE) to orchestrate your workloads. By using Kubernetes to orchestrate your containers, you don't have to worry about the underlying infrastructure and the deployment logic.
Deployment automation
By implementing an automated artifact production and deployment process, such as a continuous integration and continuous deployment (CI/CD) pipeline, you can automate the creation and deployment of artifacts. You can fully automate this process, and you can even insert manual approval steps, if needed.
For an example implementation of this process, see Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine.
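As a rough sketch of what such a pipeline configuration might look like, the following hypothetical cloudbuild.yaml builds a container image, pushes it, and updates a GKE deployment. The image, deployment, zone, and cluster names are placeholders.

```yaml
# Hypothetical cloudbuild.yaml sketch: build, push, and roll out a
# container image. All names are placeholders for your own values.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/my-app',
         'my-app=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```

A manual approval step, when you need one, would typically sit between the push and the rollout, implemented in the pipeline tool rather than in this build file.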
Infrastructure as code
While you can automate the deployment process by implementing a CI/CD pipeline, you can adopt a similar process for your infrastructure. By defining your infrastructure as code, you can automatically provision all the necessary resources to run your workloads. With this type of process, you make your infrastructure more observable and repeatable. You could also apply a test-driven development approach to your infrastructure. On the other hand, you need to invest time and effort to implement an infrastructure as code process, so take this into account when planning your migration.
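For instance, a minimal infrastructure-as-code definition might describe a single Compute Engine instance in Terraform, as in the following hypothetical sketch. The project ID, resource names, and machine type are placeholders.

```hcl
# Hypothetical Terraform sketch: one Compute Engine instance defined
# as code. Replace the placeholder values with your own.
provider "google" {
  project = "my-project-id"
  region  = "us-central1"
}

resource "google_compute_instance" "app_server" {
  name         = "app-server-1"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Because the definition lives in version control, reviewing an infrastructure change becomes a code review, and recreating the environment is a single command rather than a runbook.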
Migration phase 4: optimize
After deploying your workloads, you can start optimizing your target environment. In this optimization phase, the following cross-area activities can help you optimize this environment:
- Build and train your team.
- Monitor everything.
- Automate everything.
- Codify everything.
- Use managed services instead of self-managed ones.
- Optimize for performance and scalability.
- Reduce costs.
Build and train your team
When you plan your migration, you can train your development and operations teams to take full advantage of the new cloud environment. Not only can those teams be more efficient with effective training, but they can also choose the best cloud-native tools and services for the job. Training opportunities help to retain technical talent and empower engineers to leverage all of the advantages of GCP.
During this phase, you can also review the business processes that govern those teams. If you find any inefficiency or unnecessary burden in those processes, you can refine and improve them alongside the training.
Monitor everything
Monitoring is key to ensuring that everything in your environment is working as expected, and to improving your environments, practices, and processes.
Before you expose your environment to production traffic, we recommend that you design and implement a monitoring system where you define metrics that are important to assess the correct operation of the environment and its components, including your workloads. For example, if you are deploying a containerized infrastructure, you can implement a white-box monitoring system with Prometheus and Stackdriver. Or, you can monitor your Cloud IoT Core devices with Stackdriver and Cloud Functions.
We also recommend that you set up an alerting system, such as Stackdriver alerting, that lets you be proactive, not just reactive. You need to set up alerts for critical errors and conditions, but you also need to set up warnings that give you time to correct a potentially disruptive situation before it affects your users.
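The warning-before-critical idea can be sketched in a few lines of ordinary Python. The metric, thresholds, and levels below are hypothetical, for illustration only; in practice you would encode them in Stackdriver alerting policies rather than application code:

```python
# Sketch: two-tier alerting thresholds, where a warning fires before a critical alert.
# The metric and threshold values are hypothetical, for illustration only.

WARNING_THRESHOLD = 0.80   # e.g., 80% error-budget consumption: time to investigate
CRITICAL_THRESHOLD = 0.95  # e.g., 95%: users are likely already affected

def classify(metric_value: float) -> str:
    """Return the alert level for a metric sample in the range 0.0 to 1.0."""
    if metric_value >= CRITICAL_THRESHOLD:
        return "CRITICAL"
    if metric_value >= WARNING_THRESHOLD:
        return "WARNING"
    return "OK"
```

The gap between the warning and critical thresholds is what gives operators time to correct a situation before it becomes disruptive.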
Because Stackdriver logs have a limited retention period, you can then export them for long-term storage. You can also run data analytics against the metrics extracted from those logs to gain insights into how your environment is performing and to start planning improvements.
Automate everything
Manual operations are error prone and time consuming. In most cases, you can automate critical activities such as deployments, secrets exchanges, and configuration updates. Automation leads to cost and time savings, and reduces risk. Teams also become more efficient, because they don't have to spend effort on repetitive tasks. Automating infrastructure with Cloud Composer and Automating Canary Analysis on Google Kubernetes Engine with Spinnaker are examples of automation on GCP.
Codify everything
When provisioning the target environment on GCP, you should aim to capture as many aspects as you can in code. By implementing processes such as infrastructure-as-code and policy-as-code, you can make your environment fully auditable and repeatable. You can also apply a test-driven development approach to aspects other than code, to have immediate feedback on the modifications you intend to apply to your environment.
Use managed services instead of self-managed ones
GCP has a portfolio of services and products that you can use without having to manage any underlying servers or infrastructure. In the optimization phase, you could either expand your workloads to use such services, or replace some of your existing workloads with these services.
A few examples of managed services are as follows:
- Using Cloud SQL for MySQL instead of managing your own MySQL cluster.
- Using Cloud AutoML to tag and classify images instead of deploying and maintaining your own machine learning models.
- Deploying your workloads on GKE instead of using your own self-managed Kubernetes cluster, or even migrating your VMs to containers and running them on GKE.
- Using App Engine for serverless web hosting.
Optimize for performance and scalability
One of the advantages of migrating to the cloud is on-demand access to resources. You can expand existing resources, add more when you need them, and remove unneeded resources, all in a scalable way.
You have more options to optimize performance compared to on-premises deployments:
- Horizontal scaling. You can elastically add or remove virtual machines, cluster nodes, and database instances. You can use services such as Compute Engine managed instance groups with autoscaling and the GKE cluster autoscaler.
- Vertical scaling. Adding more resources to your existing instances is easier in a cloud environment because you don't have to provision any additional physical infrastructure. For example, you can easily change the machine type of Compute Engine instances.
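As an example of declarative horizontal scaling, a workload running on GKE can scale automatically with a HorizontalPodAutoscaler. This is a sketch; the Deployment name, replica counts, and CPU target are hypothetical:

```yaml
# Sketch of horizontal scaling for a GKE workload (names and targets are hypothetical).
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical Deployment to scale
  minReplicas: 2              # never drop below two replicas
  maxReplicas: 10             # cap growth at ten replicas
  targetCPUUtilizationPercentage: 70   # add replicas when average CPU exceeds 70%
```

Combined with the GKE cluster autoscaler, this scales both the workload replicas and the underlying nodes as demand changes.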
Reduce costs
GCP offers a wide range of tools and pricing options to help you reduce your costs.
For example, if you provisioned Compute Engine instances, you can apply sizing recommendations for those instances.
To reduce your billing, you can analyze your billing reports to study your spending trends and determine which GCP products you are using most frequently. You can even export your billing data to BigQuery or to a file to analyze.
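As a sketch of analyzing a file export, the following pure-Python snippet totals cost per service from an exported billing CSV. The column names (`Service`, `Cost`) are assumptions for illustration; the actual schema depends on your export configuration:

```python
# Sketch: summarize an exported billing CSV by service.
# The column names are assumptions; adjust them to match your export schema.
import csv
import io
from collections import defaultdict

def cost_per_service(csv_text: str) -> dict:
    """Sum the 'Cost' column per 'Service' from billing data exported as CSV."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["Service"]] += float(row["Cost"])
    return dict(totals)

# Example with made-up data:
sample = """Service,Cost
Compute Engine,120.50
Compute Engine,80.25
BigQuery,42.00
"""
```

Exporting to BigQuery instead of a file lets you run the same kind of aggregation with SQL at much larger scale.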
To further reduce your costs, GCP offers features such as sustained use discounts, which apply automatic discounts to your Compute Engine billing. You can also purchase committed use contracts in exchange for discounted prices for Compute Engine instances. For BigQuery, you can also enroll in flat-rate pricing. GCP autoscaling features also help you reduce your billing by scaling down your resources when demand decreases. You can reduce monitoring and logging costs by optimizing your Stackdriver usage.
Finding help
GCP offers various options and resources to help you find the help and support you need to best leverage GCP services.
If you don't need dedicated support, you can use these self-service resources:
- Documentation. GCP provides documentation for each of its products and services, as well as for APIs. For more information about migrations, check out the GCP migration center.
- Tools. GCP provides several products and services to help you migrate. For example:
- Migrate for Compute Engine is a product for migrating physical servers and virtual machines from on-premises and cloud environments to GCP. Migrate for Compute Engine lets you migrate a virtual machine to GCP in a few minutes: data is copied in the background while the virtual machines remain fully operational.
- Online transfer lets you move data to Cloud Storage by using the gsutil command-line tool, the Cloud Storage JSON API, or the GCP Console.
- Storage Transfer Service lets you bring data to Cloud Storage from other cloud providers, online resources, or local data.
- Transfer Appliance is a hardware appliance you can use to migrate large volumes of data (from hundreds of terabytes up to 1 petabyte) to GCP without disrupting business operations.
- BigQuery Data Transfer Service automates data movement from software-as-a-service apps to BigQuery on a scheduled, managed basis.
- Whitepapers. These papers include reference architectures, case studies, best practices, and advanced tutorials.
- Media content. You can listen to the GCP podcast or watch any of the videos on the GCP YouTube channel. These resources discuss a wide range of topics from product explanations to development strategies.
- Online courses and hands-on labs. GCP has several courses on Coursera that include video content, reading materials, and hands-on labs. You can also take hands-on labs using Qwiklabs or participate in live online classes.
GCP has partnered with multiple companies so that you can use their products. Some of these offerings might be free to use, so check with the company and your GCP account manager.
These products include the following:
- Assessment and discovery phase: StratoZone, CloudPhysics, RISC Networks, and Cloudamize
- Provisioning and configuration phases: Terraform, Chef, Ansible, SaltStack, and Puppet
GCP partners not just with product and technology companies, but also with system integrators that can provide hands-on-keyboard assistance. In the partners list, you can find system integrators that specialize in cloud migrations.
GCP Professional Services
Our Professional Services team is here to help you get the most out of your investment in GCP.
Cloud Plan and Foundations: get help with your migration
Professional Services can help you plan your migration and deploy your workloads in production with our Cloud Plan and Foundations offering. These experts guide your team through each phase of migrating your workloads into production, from setting up the GCP foundation to optimizing the platform for your workload needs and deploying the workloads.
The objectives of Cloud Plan and Foundations are:
- Set up the GCP foundation.
- Create design documentation.
- Plan deployment and migration activities.
- Deploy workloads into production.
- Track issues and risks.
Professional Services guides your team through the following activities and deliverables:
- Conducting technical kickoff workshops.
- Building a technical design document.
- Creating a migration plan.
- Creating a program charter.
- Providing project management.
- Providing technical expertise.
Cloud Sprint: accelerate your migration to GCP
Cloud Sprint is an intensive hands-on workshop that accelerates your app migration to GCP. In this workshop, GCP Professional Services leads one of your teams through interactive discussions, whiteboarding sessions, and reviews of target apps to migrate to GCP. During the Cloud Sprint, Professional Services works alongside your team to give you first-hand experience with cloud solutions and the required deployment activities, and to help you understand your next steps for future GCP migrations.
Training: Develop your team's skills
GCP Professional Services can provide training in a variety of fields, based on your team's needs.
What's next
- Read about migrating a monolithic application to microservices on Google Kubernetes Engine.
- Learn about Anthos and Migrate for Anthos.
- Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.