Built on Google’s Infrastructure
Access some of the same hardware that Google uses to develop high-performance machine learning products. GPUs give you the power you need to process massive datasets. The hardware is passed through directly to the virtual machine to provide bare-metal performance.
- **Several GPU types available:** NVIDIA Tesla K80, P100, P4, T4, and V100 GPUs are available today, depending on your compute or visualization needs.
- **Bare-metal performance:** GPUs are offered in passthrough mode, directly attached to the virtual machine to provide maximum performance.
- **All the benefits of Google Cloud:** Run GPU workloads on Google Cloud Platform, where you have access to industry-leading storage, networking, and data analytics technologies.
- **Virtual workstations in the cloud:** Run graphics-intensive applications, including 3D visualization and rendering, with NVIDIA GRID Virtual Workstations, supported on P4, P100, and T4 GPUs.
- **Attach GPUs to any machine type:** Optimally balance the processor, memory, high-performance disk, and GPU power for your individual workload.
- **Flexible GPU counts per instance:** Attach up to 8 GPU dies to your instance to get the power you need for your applications.
- **GPU application frameworks:** Whether your applications require OpenCL, CUDA, Vulkan, or OpenGL, Compute Engine provides the hardware you need to accelerate your workloads.
- **Per-second billing:** Get the same per-second billing for GPUs that you do for the rest of Google Cloud Platform's resources. Pay only for what you need, while you are using it.
- **Preemptible GPUs:** For batch processing jobs, customers can save up to 70% off on-demand prices by using GPUs with preemptible instances. Combined with preemptible GPU instances, managed instance groups can be used to create a large pool of affordable GPU capacity that runs as long as capacity is available.
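As a sketch of how the features above fit together, the following `gcloud` invocation creates a preemptible VM with a single T4 attached. The instance name, machine type, zone, and image family here are illustrative assumptions, not recommendations; GPU availability varies by zone, and GPU instances must use a `TERMINATE` maintenance policy because they cannot be live-migrated.

```shell
# Illustrative sketch: create a preemptible instance with one NVIDIA T4.
# The machine type and zone are arbitrary choices for the example;
# pick the balance of vCPUs, memory, and GPUs your workload needs.
gcloud compute instances create my-gpu-instance \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --preemptible \
    --maintenance-policy=TERMINATE \
    --image-family=common-cu110 \
    --image-project=deeplearning-platform-release
```

Raising `count` (up to the per-machine-type limit of 8) attaches more GPU dies to the same instance, and dropping `--preemptible` creates an on-demand instance at standard per-second rates.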
“For certain tasks, [NVIDIA] GPUs are a cost-effective and high-performance alternative to traditional CPUs. They work great with Shazam’s core music recognition workload, in which we match snippets of user-recorded audio fingerprints against our catalog of over 40 million songs. We do that by taking the audio signatures of each and every song, compiling them into a custom database format, and loading them into GPU memory. Whenever a user Shazams a song, our algorithm uses GPUs to search that database until it finds a match. This happens successfully over 20 million times per day.” — Ben Belchak, Head of Site Reliability Engineering, Shazam