Networking

Driving enterprise modernization with Google Cloud infrastructure

Organizations are adopting modern cloud architectures to deliver the best experience to their customers and to benefit from greater agility and faster time to market. Google Cloud Platform (GCP) is at the center of this shift, helping customers adopt hybrid and multi-cloud architectures and modernize their services. Today, we’re announcing important additions to our migration and networking portfolios to help you with your modernization journey:

Migrate to GCP from more clouds
Businesses migrate virtual machines from on-prem to Google Cloud all the time and, increasingly, they also want to move workloads between clouds. That’s why we’re announcing today that Migrate for Compute Engine is adding beta support for migrating virtual machines directly out of Microsoft Azure into Google Compute Engine (GCE). This complements Migrate for Compute Engine’s existing support for migrating VMs out of Amazon EC2. As a result, whether you’re migrating between clouds for better agility, to save money, or to increase security, you now have a way to lift and shift into Google Cloud—quickly, easily and cost-effectively.

Trax, which uses GCP to digitize brick-and-mortar retail stores, has significantly accelerated its migration and freed up developer time thanks to the ease of use and flexibility of Migrate for Compute Engine. 

“Migrate for Compute Engine allowed our DevOps team to successfully move dozens of servers within a few hours and without utilizing developers or doing any manual setup,” said Mark Serdze, director of cloud infrastructure at Trax. “Previous migration sprints were taking as long as three weeks, so getting sprints down to as little as three hours with Migrate for Compute Engine was a huge time and energy saver for us. And being able to use the same solution to move VMs from on-prem, or from other cloud providers, will be very beneficial as we continue down our migration path.”

Simplify transformation with enterprise-ready service mesh and modern load balancing 
As enterprises break monoliths apart and start modernizing services, they need solutions for consistent service and traffic management at scale. Organizations want to invest time and resources in building applications and innovating, not in the infrastructure and networking required to deploy and manage those services. Service mesh is rapidly growing in popularity because it solves these challenges by decoupling applications from application networking, and service development from operations. To ease service mesh deployment and management, we’re announcing two enterprise-ready solutions that make it easier to adopt microservices and modern load balancing: general availability of Traffic Director and beta availability of Layer 7 Internal Load Balancer (L7 ILB).

Traffic Director, now available in Anthos, is our fully managed, scalable, resilient service mesh control plane that provides configuration, policy and intelligence to Envoy or similar proxies in the data plane using open APIs, so customers are not locked in. Originally built at Lyft, Envoy is an open-source high-performance proxy that runs alongside the application to deliver common platform-agnostic networking capabilities, and together with Traffic Director, abstracts away application networking. Traffic Director delivers global resiliency, intelligent load balancing and advanced traffic control like traffic splitting, fault injection and mirroring to your services. You can bring your own Envoy builds or use certified Envoy builds from Tetrate.io.
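In practice, each Envoy proxy in the data plane learns its configuration dynamically from Traffic Director over the open xDS APIs. The bootstrap sketch below is purely illustrative (field layout varies by Envoy version, and the node ID shown is a hypothetical placeholder); consult the Traffic Director setup guides for a working bootstrap for your environment:

```yaml
# Illustrative Envoy bootstrap sketch: the proxy fetches its listeners,
# routes, clusters, and endpoints dynamically from Traffic Director over
# gRPC (the xDS APIs). Field names are indicative only.
node:
  # Identifies this proxy and the VPC network it belongs to (hypothetical).
  id: projects/my-project/networks/default/nodes/frontend-proxy
  cluster: cluster
dynamic_resources:
  ads_config:             # aggregated discovery: one stream for all resources
    api_type: GRPC
    grpc_services:
      - google_grpc:
          target_uri: trafficdirector.googleapis.com:443
          channel_credentials:
            ssl_credentials: {}
          call_credentials:
            - google_compute_engine: {}   # authenticate as the VM's service account
          stat_prefix: trafficdirector
  lds_config: { ads: {} }  # listeners come from the ADS stream
  cds_config: { ads: {} }  # clusters come from the ADS stream
```

Because the proxy is configured entirely through open xDS APIs, the same data plane can be pointed at a different conformant control plane, which is what keeps customers from being locked in.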

“Service mesh technologies are integral to the evolution from monolithic, closed architectures to cloud-native applications,” said Vishal Banthia, software engineer at Mercari, a leading online marketplace in Japan. “We are excited to see Traffic Director deliver fully managed service mesh capabilities by leveraging Google’s strengths in global infrastructure and multi-cloud service management.”

We're also taking the capabilities of Traffic Director a step further for customers who want to modernize existing applications. With L7 ILB, currently in beta, you can now bring powerful load balancing features to legacy environments. Powered by Traffic Director and Envoy, L7 ILB allows you to deliver rich traffic control to legacy services with minimal toil—and with the familiar experience of using a traditional load balancer. Deploying L7 ILB is also a great first step toward migrating legacy apps to service mesh. 
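Because L7 ILB presents itself as a traditional internal load balancer, setup follows the familiar Compute Engine pattern: health check, backend service, URL map, target proxy, and forwarding rule. The sketch below is a rough illustration only (resource names such as `legacy-bes` are hypothetical, and some flags may differ by gcloud version; check the current gcloud reference for internal HTTP(S) load balancing before running anything):

```shell
# Illustrative sketch only; resource names are hypothetical.
# 1. Health check for the legacy service.
gcloud compute health-checks create http legacy-hc --port=80

# 2. Regional backend service using the internal managed (L7 ILB) scheme.
gcloud compute backend-services create legacy-bes \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=legacy-hc \
    --region=us-central1

# 3. URL map and target proxy that route requests to the backend service.
gcloud compute url-maps create legacy-map \
    --default-service=legacy-bes --region=us-central1
gcloud compute target-http-proxies create legacy-proxy \
    --url-map=legacy-map --region=us-central1

# 4. Internal forwarding rule: the private VIP your clients call.
gcloud compute forwarding-rules create legacy-fr \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=default --subnet=default \
    --ports=80 \
    --region=us-central1 \
    --target-http-proxy=legacy-proxy
```

From the client's perspective nothing changes beyond the VIP, while Traffic Director and Envoy handle the advanced traffic management behind it.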

“L7 ILB makes it simple for enterprises to deploy modern load balancing,” said Matt Klein, creator of Envoy Proxy. “Under the hood, L7 ILB is powered by Traffic Director and Envoy, so you get advanced traffic management simply by placing L7 ILB in front of your legacy apps.”

Both L7 ILB and Traffic Director work out-of-the-box with virtual machines (Compute Engine) and containers (Google Kubernetes Engine or self-managed) so you can modernize services at your own pace.

Deliver resilient connectivity for hybrid environments 
Networking is the foundation of hybrid cloud, and fast, reliable connectivity is critical, whether through a high-performance option like Cloud Interconnect or through Cloud VPN for lower-bandwidth needs. For mission-critical requirements, High Availability VPN and 100 Gbps Dedicated Interconnect will soon be generally available, providing resilient connectivity with industry-leading SLAs for deploying and managing multi-cloud services.

We look forward to hearing how you use these new features. Please visit our website to learn more about our networking and migration solutions, including Migrate for Anthos.