Google Cloud

7 must-see sessions on data analytics at Next ‘18

July 11, 2018
Saptarshi Mukherjee

Global Head of Product and Solutions Marketing, Data Analytics, Google Cloud

From understanding Wikipedia pageview data to determining patent coverage from both private and public patent datasets, big data has proved essential to solving numerous interesting, large-scale problems. This year at Next ‘18, we’re offering more than 60 sessions on topics ranging from data warehousing to data visualization to streaming data analytics. Leading enterprises and disruptive start-ups will share how they’re using Google Cloud data analytics solutions to achieve more with big data. If you’re short on time, here are seven we recommend checking out.


Spotlight: Rethinking Big Data Analytics with Google Cloud

Google Cloud Platform combines powerful serverless solutions for enterprise data warehousing, streaming analytics, managed Spark and Hadoop, modern business intelligence (BI), planet-scale data lakes, and AI. Sudhir Hasbe, our director of product management for data analytics, will take you through our vision and engineering strategy, helping you achieve more with complete big data analytics solutions rather than isolated products. Our customers will share how they are developing their own tailored big data implementations using Google Cloud. We will also share some key product launches that will make it easy for you to capture value from big data.


Better Together: Google Marketing Platform and Google Cloud

To meet rising consumer expectations, marketers need tools that work together and make it possible to better understand and reach their customers. Product Manager Mary Pishny and Director of Product Management Fausto Ibarra will take you through the new Google Marketing Platform, which can help you plan, buy, measure, and optimize digital media and customer experiences. Products covered include Analytics 360, BigQuery, Data Studio, and Ads Data Hub.


Migrating On-Premises Hadoop Infrastructure to Google Cloud Platform (with Cloud Dataproc)

Product Manager James Malone and Customer Engineer Rajeev Mahajan will show you how to move your on-premises Apache Hadoop system to Google Cloud Platform (GCP). The presenters will walk you through a migration process that not only moves your Hadoop work to GCP, but also streamlines your workflow using features unique to our Hadoop system that we’ve explicitly configured for cloud computing. This session will provide an overview of the migration process, with particular emphasis on moving from large, persistent clusters to an ephemeral model, in which resources are spun up only to run your workloads and powered down afterward. The session will also walk you through the process of moving your data to Cloud Storage, Cloud Dataproc, and other GCP products.


Data and Analytics Platform Overview and Customer Examples

Moloco Research Scientist Haden Lee, Solutions Architects Reza Rokni and Win Woo, and Group Product Manager William Vambenepe will walk you through the core products in Google Cloud’s data platform, using both theoretical and real-world examples based on Moloco’s platform usage. The session will cover big data analytics services like BigQuery, Cloud Dataflow, Pub/Sub, Dataproc, and Data Studio, and will use demos to show how these products apply to typical enterprise scenarios.


Data Warehousing Migrations: Lessons from Home Depot

Product Manager Tino Tereshko will host a discussion with Rick Ramaker, Technology Director, and Kevin Scholz, Enterprise Architect, both at The Home Depot, on their transition from an on-premises data warehouse to BigQuery. You’ll learn about The Home Depot’s technical and business challenges, their data architecture on Google Cloud, and how Rick and Kevin guided a large and complex business through change.


Predicting Community Engagement on Reddit using TensorFlow, GDELT and Cloud Dataflow

Product Manager Sergei Sokolenko and Reddit’s VP of Engineering Nick Caldwell will show you how to use feature engineering on Cloud Dataflow and BigQuery. By analyzing a cross-section of the world’s news from the GDELT Project using a TensorFlow estimator, they’ll demonstrate how you can predict both the subreddit destination and the popularity of newly published content. These predictions and insights have the potential to help content creators understand which topics are generating the most interest.


Going Beyond the Traditional EDW with BigQuery: Building the World's Largest Enterprise Data Warehouse

Lloyd Tabb, founder and CTO of Looker, can do things with BigQuery that astound its creators—even his co-presenter, Jordan Tigani, Google Engineering Director, who helped build BigQuery. Together, they will show how BigQuery's capabilities inherit from its unique underlying infrastructure, and demonstrate how the scale, data reach, and feature set make BigQuery more than just a data warehouse. They will demonstrate recent security, performance, and manageability improvements. They will also show tricks and tips for getting more out of BigQuery, and run queries that push it to its performance and scale limits.

To learn more about these and other sessions, and to register, visit the Next ‘18 website. See you in San Francisco!
