12 data management sessions to add to your docket at Next '19
The Google Cloud data management team
Data is the backbone of many an enterprise, and when cloud is in the picture, it becomes especially important to store, manage, and use all that data effectively. At Next '19, you'll find plenty of sessions to help you understand how to manage your Google Cloud data and store it efficiently. For an excellent primer on Google Cloud Platform (GCP) data storage, sign up for this spotlight session covering the basics, with demos. Here are some other sessions to check out:
You can migrate your database to the cloud in different ways, from a lift-and-shift onto fully managed GCP services to a complete rebuild on cloud-native databases. This session will explain best practices for database migration and tools to make it easier.
There is a whole range of essential enterprise workloads you can move to the cloud, and in this session you’ll learn specifically about Accenture Managed Services for GCP, which makes it easy for you to run Oracle databases and software on GCP.
Get the details in this session on migrating on-premises Oracle databases to Cloud SQL for PostgreSQL. You'll get a look at all the basics, from assessing your source database and converting the schema to data replication and performance tuning.
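To make the schema-conversion step concrete, here is a minimal, hypothetical sketch of one small part of it: mapping common Oracle column types to PostgreSQL equivalents. The type map and function names are illustrative only; real migrations rely on dedicated conversion tooling, and the handful of mappings below covers just a few common cases.

```python
# Hypothetical sketch of one slice of schema conversion:
# translating a few common Oracle column types to PostgreSQL.
ORACLE_TO_POSTGRES = {
    "VARCHAR2": "varchar",
    "NVARCHAR2": "varchar",
    "NUMBER": "numeric",
    "DATE": "timestamp",   # Oracle DATE also carries a time component
    "CLOB": "text",
    "BLOB": "bytea",
}

def convert_column(name, ora_type, length=None):
    """Render one column of a CREATE TABLE statement in PostgreSQL syntax."""
    pg_type = ORACLE_TO_POSTGRES.get(ora_type.upper(), ora_type.lower())
    if pg_type == "varchar" and length:
        pg_type = f"varchar({length})"
    return f"{name.lower()} {pg_type}"

print(convert_column("EMP_NAME", "VARCHAR2", 100))  # emp_name varchar(100)
print(convert_column("HIRE_DATE", "DATE"))          # hire_date timestamp
```

Type mapping is only the mechanical part; assessment, data replication, and post-migration tuning are where most of the session's material applies.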
This migration story illustrates the real-world considerations Spotify weighed in deciding between Cassandra and Cloud Bigtable, how they migrated their workloads, and how they built an auto-scaler for Cloud Bigtable.
In this session, you’ll hear about the database performance tuning we’ve done recently to considerably improve Cloud SQL for PostgreSQL. We'll also highlight Cloud SQL’s use of Google's Regional Persistent Disk storage layer. You'll learn about PostgreSQL performance tuning and how to let Cloud SQL handle mundane, yet necessary, tasks.
Dive into Cloud Spanner with Google Engineering Fellow Andrew Fikes. You’ll learn about the evolution of Cloud Spanner and what that means for the next generation of databases, and get technical details about how Cloud Spanner ensures strong consistency.
Find out how to use Cloud Spanner to its full potential in this session, which will include best practices, optimization strategies and ways to improve performance and scalability. You’ll see live demos of how Cloud Spanner can speed up transactions and queries, and ways to monitor its performance.
High-performance computing (HPC) storage in the cloud is still an emerging area, largely because of concerns about complexity, price, and performance. This session will look at companies across multiple industries that run HPC storage in the cloud. You'll also see how HPC storage builds on GCP infrastructure like Compute Engine VMs and Persistent Disk.
See how one company, Segment, built its own Lambda architecture for customer data using Cloud Bigtable to handle fast random reads and BigQuery to process large analytics datasets. Segment's CTO will also describe the decision-making process around choosing these GCP products vs. competing options, and their current setup, with tens of terabytes stored across multiple systems and very low read latency.
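The core idea in a split like Segment's is routing each read to the store built for it: point lookups hit a low-latency serving store (Bigtable's role), while scans and aggregations hit an analytics store (BigQuery's role). The sketch below illustrates that routing with plain in-memory stand-ins; all keys, fields, and function names are made up for illustration.

```python
# Hypothetical sketch of the read-path split in a Lambda-style setup.
# Plain Python structures stand in for the real services.

serving_store = {"user:42": {"plan": "pro", "events": 1310}}   # fast random reads
analytics_rows = [
    {"user": "user:42", "event": "click"},
    {"user": "user:7", "event": "click"},
]                                                              # bulk analytics data

def get_profile(user_key):
    """Point read: answered by the low-latency serving store."""
    return serving_store.get(user_key)

def count_events(event_type):
    """Aggregate query: scans the analytics dataset."""
    return sum(1 for row in analytics_rows if row["event"] == event_type)
```

The design choice is the same one the session discusses: neither store is good at the other's job, so the application layer decides which system answers each query.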
Come take a look at how Cloud Bigtable's new multi-regional replication works over Google's SD-WAN. This new feature lets a single instance holding up to petabytes of data be served from up to four regions, within or across continents. Your users can access data globally with low latency, and you get a fast disaster-recovery option for essential data.
In-memory caching can help speed up application performance, but it brings challenges too. Take a closer look in this session to learn about cache sizing, API considerations and latency troubleshooting.
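One of the challenges the session names, cache sizing, comes down to bounding memory and choosing an eviction policy. Here is a minimal, illustrative LRU cache sketch (capacity and keys are made up); production caches such as Memcached or Redis handle this far more robustly.

```python
from collections import OrderedDict

# Illustrative sketch: a bounded LRU cache that evicts the
# least-recently-used entry once it exceeds capacity.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None            # cache miss
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the LRU entry
```

Even this toy version shows why sizing matters: too small a capacity and hot keys evict each other, driving miss rates (and backend load) up.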
This detailed look at Twitter’s complex Hadoop migration will cover their use of the Cloud Storage Connector and open-source tools. You’ll hear from Twitter engineers on how they planned and managed the migration to GCP and how they solved some of their unique data management challenges.