Unlocking data analytics and machine learning for more businesses
VP, Product Management, Cloud AI and Industry Solutions
Most enterprises already see the value of AI. In fact, more than 60 percent are adopting it right now. But what about the rest? What's stopping them?
Working with hundreds of enterprises, we've found that adopting AI comes down to simplicity and usefulness. Enterprises need tools that are simple and familiar, and they need to be able to apply those tools directly to their unique challenges.
Today, we’re making a number of updates to our data analytics and Cloud AI services aimed at making AI simpler and more useful, and putting it in the hands of as many businesses and developers as we can.
Here’s what’s new:
- BigQuery ML, now available in beta
- Support for training and online prediction through scikit-learn and XGBoost in Cloud ML Engine
- Kubeflow v0.2
- Cloud TPU v3 and Cloud TPU Pod, both available in alpha
- A new partnership with Iron Mountain
Bringing machine learning closer to your data with BigQuery ML
For many businesses, there are significant hurdles to building the analytics pipeline necessary for AI. Creating a team of in-house data scientists is impractical for many. Data analysts, typically trained in SQL, aren’t always familiar with the processes and programming languages used for machine learning. And workstreams that involve moving data out of an enterprise data warehouse can be time-consuming and costly.
To address these challenges, we’re announcing BigQuery ML. BigQuery ML puts the power of predictive analytics within reach of millions of users, even those without a data science background. By bringing ML to where customers already store their data, BigQuery ML helps them quickly create and deploy models, accelerating speed to market. They can run models at scale on large datasets, and they can do it all using simple SQL commands.
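To make that concrete, here is a minimal sketch of the workflow in BigQuery's SQL dialect. The dataset, table, and column names are hypothetical; the only requirement BigQuery ML imposes is that the training query expose a column named `label`:

```sql
-- Train a logistic regression model on data already in BigQuery.
-- `mydataset.customers` and its columns are illustrative.
CREATE MODEL `mydataset.churn_model`
OPTIONS (model_type = 'logistic_reg') AS
SELECT
  churned AS label,   -- BigQuery ML treats the `label` column as the target
  tenure_months,
  monthly_spend
FROM
  `mydataset.customers`;

-- Score new rows with the trained model, still in plain SQL.
SELECT *
FROM
  ML.PREDICT(MODEL `mydataset.churn_model`,
    (SELECT tenure_months, monthly_spend FROM `mydataset.new_customers`));
```

No data leaves the warehouse at any step: training, evaluation, and prediction all run as queries inside BigQuery.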
For an in-depth look at BigQuery ML, and what you can do with it, read our data analytics blog.
Bringing machine learning to more data scientists with our AI platform
To move from raw data to business insights, you need many things: massive compute resources, tools to build ML models, and the skills to train and optimize them. It can be daunting, to say the least, and data scientists have told us they want a complete solution that simplifies this process. To address this, we built our AI platform to provide an end-to-end stack: high-performance infrastructure, custom hardware optimized for machine learning, and fully managed services like Cloud ML Engine. Now we're making it even faster and easier with a number of enhancements.
Support for training and online prediction through scikit-learn and XGBoost in Cloud ML Engine
Whether in the cloud, on premises, or through a combination of both, businesses often need the freedom to train and deploy with different ML frameworks. Starting today, Cloud ML Engine supports both training and online prediction through scikit-learn and XGBoost. We’re also announcing the availability of Cloud Deep Learning VM Image, which offers pre-configured VM images so you can get started with your ML project using TensorFlow, scikit-learn, and PyTorch on Google Cloud.
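As a sketch of what this enables, the snippet below trains a scikit-learn model locally and exports it as the `model.joblib` file that Cloud ML Engine's online prediction service reads from a Cloud Storage bucket. The dataset and model choice are illustrative, not an official sample:

```python
# Train a scikit-learn classifier locally, then export it in the format
# Cloud ML Engine expects for online prediction (a model.joblib file
# uploaded to Cloud Storage).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import joblib

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Cloud ML Engine looks for this exact filename when serving the model;
# upload it to a bucket (the bucket name here is a placeholder), e.g.:
#   gsutil cp model.joblib gs://my-bucket/
joblib.dump(model, "model.joblib")
```

From there, creating a model version that points at the bucket gives you a hosted prediction endpoint, with no serving infrastructure to manage.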
Introducing Kubeflow v0.2
We remain committed to open source software and support many open source standards for data analytics and machine learning. Last year we announced Kubeflow to make it easier to run machine learning software stacks, like TensorFlow, scikit-learn, and others, on Kubernetes. Kubeflow v0.2 is now available, with an improved user interface for navigating among components and several enhancements to monitoring and reporting. You can learn more and get started here.
Advancing our machine learning stack from the cloud to the edge
Our entire AI platform is built on our high-performance infrastructure that spans from our global networks to our Cloud TPUs, custom ASICs designed for machine learning workloads. Each TPU delivers up to 180 teraflops of floating-point performance and includes a custom high-speed network that allows TPUs to work together in “TPU pods.” Today, we’re announcing the alpha release of Cloud TPU Pod, providing up to 11.5 petaflops to accelerate the training of a single large machine learning model.
“Cloud TPU Pods have transformed our approach to visual shopping by delivering a 10X speedup over our previous infrastructure,” says Larry Colagiovanni, VP of New Product Development at eBay. “We used to spend months training a single image recognition model, whereas now we can train much more accurate models in a few days on Cloud TPU Pods. We’ve also been able to take advantage of the additional memory the TPU Pods have, allowing us to process many more images at a time. This rapid turnaround time enables us to iterate faster and deliver improved experiences for both eBay customers and sellers.”
We’ve also expanded the support and availability of our existing TPU offerings. Our second-generation Cloud TPUs are now available to all our customers, and our third-generation TPUs, announced at this year’s I/O, are now available in alpha. Support for Cloud TPUs in Kubernetes Engine is also now in beta. We hope this makes compute-intensive machine learning faster and more useful.
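With the Kubernetes Engine support, a workload can request TPU cores declaratively. The sketch below is illustrative only: the container image is hypothetical, and the annotation and resource names follow the beta documentation's conventions, which may change:

```yaml
# Illustrative GKE Job requesting a Cloud TPU v2 device (beta).
apiVersion: batch/v1
kind: Job
metadata:
  name: tpu-training-job
spec:
  template:
    metadata:
      annotations:
        # Tells GKE which TensorFlow version the provisioned TPU should run.
        tf-version.cloud-tpus.google.com: "1.9"
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: gcr.io/my-project/my-trainer   # hypothetical training image
        resources:
          limits:
            # Request the 8 cores of a single Cloud TPU v2 device.
            cloud-tpus.google.com/v2: 8
```

Kubernetes Engine provisions the TPU for the lifetime of the pod, so TPU capacity can be scheduled alongside the rest of a containerized training pipeline.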
And as we extend our ML stack, we recognize the need to run faster inferences at the edge. To serve this need, we’re introducing Edge TPU, a custom ASIC offered as a part of our Cloud IoT Edge solution. You can learn more about it in our IoT blog post.
Making AI more accessible to developers
There are still many more developers in the world than data scientists, and our goal is to make it possible to adopt AI regardless of deep machine learning expertise. We offer everything from the pre-trained models in our machine learning APIs to AutoML, which lets you create your own custom models. For developers, these building blocks deliver on the best of both worlds: ease of use and high model quality.
Since launching AutoML this February, we’ve seen customers use this technology to do things previously not possible. As an example, Urban Outfitters is using AutoML Vision to enhance their customers’ shopping experiences. “To create and maintain a comprehensive set of product attributes, our team has been using AutoML Vision to automate the product attribution process by recognizing nuanced product characteristics, like patterns and neckline styles,” says Alan Rosenwinkel, a data scientist at the brand's parent company URBN. “This is critical to providing our customers with relevant product recommendations, accurate search results, and helpful product filters as it's time-consuming and arduous to manually create product attributes. We look forward to continuing to work with Google Cloud AI to innovate on behalf of our customers."
Partnering with Iron Mountain
A key challenge many businesses face is extracting insights from what’s known as “dark data”—information locked inside stored documents, for example. To address this need, we’re partnering with Iron Mountain to build industry-specific solutions using our machine learning tools, helping customers solve business problems with these new insights from their documents. We’ve already started working on solutions for mortgage documents, energy customers, media and entertainment assets, and more, leveraging our research and expertise in Optical Character Recognition (OCR), entity extraction, and Natural Language Processing. We’re working closely with Iron Mountain to understand what their customers need and where our technology can help. For more information, read Iron Mountain’s press release.
We remain committed to bringing the benefits of AI to as many businesses as possible, and we hope these updates put AI in the hands of many more data scientists, analysts and developers. For more information, and to learn about Cloud AI, visit our website.