
GCP infrastructure and operations: watch and learn

October 5, 2018
Google Cloud Content & Editorial

As cloud computing grows, there’s a lot to think about as you’re planning how to build, manage and grow your cloud infrastructure. To help you get started, we’re sharing the sessions that attendees at Next ‘18 in San Francisco rated highest on the topics of infrastructure and operations. (Those align generally with our compute, management and monitoring offerings.) Take a spin through these sessions to learn what’s new and what’s coming up.

1. Best Practices from Google SRE: How You Can Use Them with GKE + Istio

This session answers one of the big questions at Next ‘18: What can Istio do for me? The short version: as a service mesh, Istio defines a set of APIs that provide observability into distributed apps, including those running on Google Kubernetes Engine (GKE), and that secure and control traffic and requests between services. The session also walks through Google’s site reliability engineering (SRE) principles, which aim to improve app performance and user satisfaction through a set of practices that DevOps teams can implement.
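
To make the traffic-control piece concrete, here's a minimal sketch (not from the session itself) that uses the official Kubernetes Python client to apply an Istio VirtualService splitting traffic between two versions of a service. The "reviews" service name, namespace, subsets and weights are hypothetical, and it assumes a cluster with Istio installed, Istio's v1alpha3 networking API, and a matching DestinationRule that defines the v1/v2 subsets.

    # Sketch: apply an Istio VirtualService that splits traffic 90/10 between
    # two versions of a hypothetical "reviews" service.
    # Assumes: Istio installed in the cluster, the v1alpha3 networking API, and
    # an existing DestinationRule defining the "v1" and "v2" subsets.
    from kubernetes import client, config

    config.load_kube_config()  # use local kubeconfig credentials

    virtual_service = {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": "reviews", "namespace": "default"},
        "spec": {
            "hosts": ["reviews"],
            "http": [{
                "route": [
                    {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                    {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
                ],
            }],
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1alpha3",
        namespace="default",
        plural="virtualservices",
        body=virtual_service,
    )

Because the mesh sits between services, shifting the weights above is enough to move traffic gradually from one version to the other without touching application code.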

The takeaway: As cloud infrastructures become more distributed, service meshes provide foundational and easily scalable monitoring capabilities.

2. Improving Reliability with Error Budgets, Metrics, and Tracing in Stackdriver

This session starts with the idea that distributed microservices are taking over computing environments, and with them come many more moving pieces whose reliability has to be tracked. Using SRE principles that treat operations as a software engineering problem, you’ll learn how to put those principles into practice with Stackdriver: monitor the symptoms of underlying issues to get to the root cause, and match your metrics to what users actually care about.
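
To illustrate the error-budget idea the session builds on, here's a back-of-the-envelope sketch in Python. The 99.9% availability target and the request counts are made-up numbers for illustration, not figures from the talk.

    # Back-of-the-envelope error budget math (illustrative numbers only).
    # With a 99.9% availability SLO, the error budget is the 0.1% of requests
    # that are allowed to fail over the SLO window.
    slo_target = 0.999                # hypothetical availability objective
    monthly_requests = 50_000_000     # hypothetical traffic over a 30-day window

    error_budget = 1.0 - slo_target
    allowed_failures = monthly_requests * error_budget   # 50,000 failed requests

    observed_failures = 12_000        # hypothetical failures so far this window
    budget_remaining = allowed_failures - observed_failures

    print(f"Allowed failures this window: {allowed_failures:,.0f}")
    print(f"Budget remaining: {budget_remaining:,.0f} "
          f"({budget_remaining / allowed_failures:.0%} of the budget left)")

When the remaining budget runs low, SRE practice is to slow down releases and spend engineering effort on reliability instead of new features.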

The takeaway: Developers have a new area to learn about, namely operability. You’ll find tips on tackling it to reach the result that matters for the bottom line: bringing products to market faster.

3. Cloud Load Balancing Deep Dive and Best Practices

Whether you’re expanding your business across the globe or simply shopping and playing games online, you have likely already experienced Google's global load balancer. It underpins the services and workloads of leading enterprise, retail, gaming and media companies. Listen to this talk for the demos, stay for the great under-the-hood details about the types of load balancing that Google Cloud provides, and learn how you can optimize your cloud services for performance and cost. You’ll also learn about modern load balancing, including QUIC, HTTP/2, container-native load balancing with network endpoint groups, and load balancing for microservices.

The takeaway: It's just fun to see how a global-scale load balancer works. Plus, you’ll get an understanding of Google's cloud load balancing options and configurations, and the benefits they can bring to your services.

4. Google Kubernetes Engine—Forget About Node Provisioning at Last

In this session you’ll learn how to leave node provisioning behind, thanks to two GKE capabilities that autoscale Kubernetes workloads. Autoscaling those workloads used to happen on two independent levels: pod autoscaling and cluster autoscaling. This session describes the recently launched Vertical Pod Autoscaler and Node Auto Provisioning, both of which allow fully automatic configuration of GKE clusters. You’ll see how workload deployments get scheduled by the Kubernetes scheduler and how it’s possible to scale down to almost zero.
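
As a hedged illustration of what that automation looks like from the API side, here's a sketch that creates a VerticalPodAutoscaler object for a hypothetical "frontend" deployment using the Kubernetes Python client. It assumes the VPA custom resource is installed in the cluster, and the API version shown ("autoscaling.k8s.io/v1") depends on the VPA release your cluster runs.

    # Sketch: enable Vertical Pod Autoscaler recommendations and updates for a
    # hypothetical Deployment named "frontend".
    # Assumes the VPA custom resource definitions are installed; adjust the
    # apiVersion to match the VPA release running in your cluster.
    from kubernetes import client, config

    config.load_kube_config()

    vpa = {
        "apiVersion": "autoscaling.k8s.io/v1",
        "kind": "VerticalPodAutoscaler",
        "metadata": {"name": "frontend-vpa", "namespace": "default"},
        "spec": {
            "targetRef": {
                "apiVersion": "apps/v1",
                "kind": "Deployment",
                "name": "frontend",
            },
            # "Auto" lets the VPA apply its CPU/memory recommendations itself.
            "updatePolicy": {"updateMode": "Auto"},
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="autoscaling.k8s.io",
        version="v1",
        namespace="default",
        plural="verticalpodautoscalers",
        body=vpa,
    )

Node Auto Provisioning, by contrast, is switched on at the GKE cluster level (for example through the gcloud CLI or the Cloud Console) rather than through a Kubernetes object, and it adds or removes node pools to fit whatever the autoscaled pods request.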

The takeaway: The march toward automation continues. Automating what makes sense for your environment lets you focus on workloads, not infrastructure.

5. Best Practices Using Kubernetes, Spinnaker and Istio to Manage a Multi-cloud Environment

This session offers a look at an open cloud computing architecture, with the aim of demystifying the huge range of available cloud products. It covers four building blocks: service mesh, CI/CD, orchestration and containerization, using GCP products for illustration. You’ll see how to manage containers in Kubernetes, then explore the application deployment pipeline from staging to production, and learn how a service mesh like Istio, which separates apps from network functions, gives you visibility into those microservices.
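
To ground the orchestration building block, here's a minimal sketch of the kind of Deployment a pipeline stage might apply when promoting a container image, written with the Kubernetes Python client. The app name, image and "staging" namespace are hypothetical, and in practice a Spinnaker pipeline would drive this rollout rather than a hand-rolled script.

    # Sketch: the kind of Deployment a pipeline stage might apply when rolling
    # a container image out to a staging environment. Names, labels, namespace
    # and image are hypothetical; a real Spinnaker pipeline would manage this.
    from kubernetes import client, config

    config.load_kube_config()

    def make_deployment(image: str, replicas: int) -> client.V1Deployment:
        labels = {"app": "hello-web"}
        return client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name="hello-web", labels=labels),
            spec=client.V1DeploymentSpec(
                replicas=replicas,
                selector=client.V1LabelSelector(match_labels=labels),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels=labels),
                    spec=client.V1PodSpec(containers=[
                        client.V1Container(
                            name="hello-web",
                            image=image,
                            ports=[client.V1ContainerPort(container_port=8080)],
                        ),
                    ]),
                ),
            ),
        )

    # Roll out to staging first; promotion to production would reuse the same
    # object with a different namespace and replica count.
    client.AppsV1Api().create_namespaced_deployment(
        namespace="staging",
        body=make_deployment("gcr.io/my-project/hello-web:1.0.0", replicas=1),
    )

The same manifest-as-code approach is what lets a CI/CD tool promote identical workloads across clusters, and what a service mesh like Istio then observes once the pods are running.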

The takeaway: The future of cloud is open-source, and seeing a complete architecture in practice can bring ideas about how you can use cloud services to your advantage.

You can check out all the announcements from Next ‘18 San Francisco here, and see more video coverage here.
