Announcing updates to AutoML Vision Edge, AutoML Video, and Video Intelligence API
Whether businesses are using machine learning to perform predictive maintenance or create better retail shopping experiences, ML has the power to unlock value across a myriad of use cases. We’re constantly inspired by all the ways our customers use Google Cloud AI for image and video understanding—everything from eBay's use of image search to improve their shopping experience, to AES leveraging AutoML Vision to accelerate a greener energy future and help make their employees safer. Today, we’re introducing a number of enhancements to our Vision AI portfolio to help even more customers take advantage of AI.
AutoML Vision Edge now detects objects
Performing machine learning on edge devices like connected sensors and cameras can help businesses do everything from detecting anomalies faster to predicting maintenance needs more efficiently. But optimizing machine learning models to run at the edge can be challenging, because these devices often grapple with latency constraints and unreliable connectivity. In April, we announced AutoML Vision Edge to help businesses train, build and deploy ML models at the edge, starting with image classification. Today, AutoML Vision Edge can perform object detection as well as image classification, all directly on your edge device. Object detection is critical for use cases such as identifying pieces of an outfit in a shopping app, detecting defects on a fast-moving conveyor belt, or assessing inventory on a retail shelf. AutoML Vision Edge models are optimized for a small memory footprint and low latency while delivering high accuracy. AutoML Vision Edge supports a variety of hardware devices that use NVIDIA GPUs, ARM, or other chipsets, as well as Android and iOS operating systems.
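To make the idea concrete, here is a minimal sketch of the post-processing an on-device detection model typically needs: keep boxes above a confidence threshold, then apply non-maximum suppression (NMS) so overlapping detections of the same object collapse into one. The thresholds and the (x_min, y_min, x_max, y_max) box format are illustrative assumptions, not AutoML Vision Edge specifics.

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix_min, iy_min = max(a[0], b[0]), max(a[1], b[1])
    ix_max, iy_max = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    """Return indices of boxes kept after score thresholding and greedy NMS."""
    # Consider only sufficiently confident boxes, best scores first.
    candidates = [i for i, s in enumerate(scores) if s >= score_thresh]
    candidates.sort(key=lambda i: scores[i], reverse=True)
    kept = []
    for i in candidates:
        # Keep a box only if it doesn't heavily overlap an already-kept one.
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return kept

# Two near-duplicate boxes plus one separate object: NMS keeps indices 0 and 2.
boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(filter_detections(boxes, scores))  # → [0, 2]
```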
TRYON, an AI-enabled start-up specializing in designing and producing augmented reality software for jewelry e-commerce and retail stores, is using AutoML Vision Edge’s object detection capabilities to power an augmented reality shopping experience.
"At TRYON, we use augmented reality (AR) to create an experience where customers can try on jewelry before they make a purchase,” says Andrii Tsok, Co-founder, CTO at TRYON. “Customers can try on rings, bracelets and watches anytime and anywhere with their smartphones, so they can get a better idea of what the jewelry would look like. To deliver this service to customers and retailers, we need to create a custom AI model that works on the customer's phone. We evaluated AutoML Vision Edge Object detection and were so impressed with the accuracy and the speed that we decided to include the object detection model in our first beta release. By integrating AutoML Vision Edge Object detection into our platform we expect to double our productivity by reducing the amount of resources and time for managing internal infrastructure."
AutoML Video now tracks objects and more
In April, we launched AutoML Video Intelligence to make it easier for businesses to train custom models that identify video content according to their own defined labels. Today, we’ve brought object tracking to AutoML Video, enabling it to follow the movement of multiple objects across frames. This is an important component of a broad range of applications such as traffic management, sports analytics, robotic navigation, and more.
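The core of tracking an object between frames is associating each detection in the current frame with one from the previous frame. A common, simple approach is greedy matching by bounding-box overlap; the sketch below is a generic illustration of that idea under assumed box formats and thresholds, not AutoML Video's internal algorithm.

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix_min, iy_min = max(a[0], b[0]), max(a[1], b[1])
    ix_max, iy_max = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_tracks(prev_boxes, curr_boxes, iou_thresh=0.3):
    """Greedily associate current-frame boxes with previous-frame boxes by IoU.

    Returns {current_index: previous_index}; any unmatched current box
    would start a new track.
    """
    # Consider every (previous, current) pair, best overlap first.
    pairs = sorted(
        ((iou(p, c), pi, ci)
         for pi, p in enumerate(prev_boxes)
         for ci, c in enumerate(curr_boxes)),
        reverse=True)
    assigned_prev, matches = set(), {}
    for overlap, pi, ci in pairs:
        if overlap < iou_thresh:
            break  # remaining pairs overlap even less
        if pi not in assigned_prev and ci not in matches:
            assigned_prev.add(pi)
            matches[ci] = pi
    return matches

# Two objects that each moved by roughly one pixel between frames.
prev = [(0, 0, 10, 10), (50, 50, 60, 60)]
curr = [(49, 50, 60, 61), (1, 0, 11, 10)]
print(match_tracks(prev, curr))  # → {0: 1, 1: 0}
```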
Video Intelligence API can now recognize logos
The Video Intelligence API offers pre-trained machine learning models that automatically recognize a vast number of objects, scenes, and actions in stored and streaming video. Now, the Video Intelligence API can also detect, track and recognize logos of popular businesses and organizations. With the ability to recognize over 100,000 logos, the Video Intelligence Logo Recognition feature is ideal for brand safety, ad placement, and sports sponsorship use cases.
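A typical consumer of logo recognition results wants a per-logo summary, such as the most confident appearance of each brand in a video. The real API returns protobuf annotations through the google-cloud-videointelligence client library; the sketch below assumes a simplified dict shape (logo name plus per-track confidence) purely to illustrate the post-processing.

```python
def summarize_logos(annotations, min_confidence=0.8):
    """Map each recognized logo to its highest track confidence.

    `annotations` is an assumed, simplified shape: a list of dicts with an
    "entity" name and a list of "tracks", each carrying a "confidence".
    Logos whose best track falls below `min_confidence` are dropped.
    """
    best = {}
    for ann in annotations:
        name = ann["entity"]
        for track in ann["tracks"]:
            conf = track["confidence"]
            if conf >= min_confidence and conf > best.get(name, 0.0):
                best[name] = conf
    return best

# Hypothetical results: "Acme" appears confidently twice, "Globex" weakly once.
annotations = [
    {"entity": "Acme", "tracks": [{"confidence": 0.95}, {"confidence": 0.7}]},
    {"entity": "Globex", "tracks": [{"confidence": 0.5}]},
]
print(summarize_logos(annotations))  # → {'Acme': 0.95}
```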
These new improvements are available today. To learn more about our image products, visit our Vision website, and to learn more about our Video products, visit our Video Intelligence website. We’re excited to offer this new functionality and can’t wait to see how you will use it to infuse AI into your applications.