Edge TPU (Early Access)

Google’s purpose-built ASIC designed to run inference at the edge.

AI at the edge

AI is pervasive today, from consumer to enterprise applications. With the explosive growth of connected devices, combined with demands for privacy and confidentiality, low latency, and limited bandwidth, AI models trained in the cloud increasingly need to run at the edge. Edge TPU is Google's purpose-built ASIC designed to run AI at the edge. It delivers high performance in a small physical and power footprint, enabling the deployment of high-accuracy AI at the edge.

End-to-end AI infrastructure

Edge TPU complements Cloud TPU and Google Cloud services to provide an end-to-end, cloud-to-edge, hardware + software infrastructure for facilitating the deployment of customers' AI-based solutions.

High performance in a small physical and power footprint

Thanks to its performance, small footprint, and low power, Edge TPU enables the broad deployment of high-quality AI at the edge.

Co-design of AI hardware, software and algorithms

Edge TPU isn't just a hardware solution: it combines custom hardware, open software, and state-of-the-art AI algorithms to provide high-quality, easy-to-deploy AI solutions for the edge.

A broad range of applications

Edge TPU can be used for a growing number of industrial use cases, such as predictive maintenance, anomaly detection, machine vision, robotics, and voice recognition. It can be deployed in manufacturing, on-premises facilities, healthcare, retail, smart spaces, transportation, and more.

An open, end-to-end infrastructure for deploying AI solutions

Edge TPU enables the deployment of high-quality ML inference at the edge. It augments Google’s Cloud TPU and Cloud IoT to provide an end-to-end (cloud-to-edge, hardware + software) infrastructure to facilitate the deployment of customers' AI-based solutions. In addition to its open-source TensorFlow Lite programming environment, Edge TPU will initially be deployed with several Google AI models, combining Google's expertise in both AI and hardware.

Edge TPU complements CPUs, GPUs, FPGAs, and other ASIC solutions for running AI at the edge, which will be supported by Cloud IoT Edge.

                         Edge (devices/nodes,         Google Cloud
                         gateways, servers)
Tasks                    ML inference                 ML training and inference
Software, services       Cloud IoT Edge, Linux OS     Cloud ML Engine, Kubernetes Engine,
                                                      Compute Engine, Cloud IoT Core
ML frameworks            TensorFlow Lite, NN API      TensorFlow, scikit-learn,
                                                      XGBoost, Keras
Hardware accelerators    Edge TPU, GPU, CPU           Cloud TPU, GPU, and CPU

Edge TPU features

This ASIC is the first step in a roadmap that leverages Google's AI expertise to track the rapid evolution of AI and reflect it in hardware.

Type: Inference accelerator
Performance example: Edge TPU enables users to concurrently execute multiple state-of-the-art AI models per frame, on high-resolution video, at 30 frames per second, in a power-efficient manner.
Numerics: Int8, Int16
I/O interface: PCIe, USB
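The Int8 numerics above refer to integer quantization: models are trained in floating point, then their weights and activations are mapped to 8-bit integers before running on an integer accelerator. Below is a minimal sketch of the standard affine quantization scheme used by TensorFlow Lite (the exact parameters an Edge TPU model uses come from the conversion tooling; the scale and zero-point values here are made up for illustration).

```python
# Illustrative sketch of 8-bit affine quantization, as used when
# preparing TensorFlow Lite models for integer-only accelerators.
# The scale/zero_point values below are illustrative, not Edge TPU specifics.

def quantize(x, scale, zero_point):
    """Map a real value to an int8 code: q = round(x / scale) + zero_point."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Recover an approximate real value from its int8 code."""
    return (q - zero_point) * scale

scale = 0.25       # size of one quantization step (illustrative)
zero_point = 0     # int8 code that represents 0.0 (symmetric case)

q = quantize(3.1, scale, zero_point)   # 3.1 / 0.25 = 12.4 -> code 12
x = dequantize(q, scale, zero_point)   # 12 * 0.25 = 3.0
print(q, x)                            # -> 12 3.0

# Values outside the representable range saturate at the int8 limits:
print(quantize(100.0, scale, zero_point))  # -> 127
```

The round trip loses at most half a quantization step (scale / 2), which is why small, well-chosen scales keep quantized models close to their floating-point accuracy.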

Edge-based ML inference is vital to delivering reliable, live, low-latency, and cost-effective smart city IoT. Cloud IoT Edge and Edge TPU unlock these capabilities in new ways for the next generation of Smart Parking systems.

John Heard, Chief Technology Officer, Smart Parking Limited

Get started

Build with Edge TPU using the development board, which includes an Edge TPU SoM and a carrier board. Available with our Cloud IoT Edge Alpha program.

LEARN ABOUT CLOUD IOT EDGE

Apply for early access to the Cloud IoT Edge Alpha and Edge TPU Early Access development board.

Cloud IoT Edge

Products listed on this page are in alpha or early access. For more information on our product launch stages, see here.