This overview introduces some of the commonly used Google Cloud services. For the full list of services, see the Products and services page.
This overview covers the following types of services:
- Computing and hosting
- Storage
- Databases
- Networking
- Big data
- Machine learning
Computing and hosting services
Google Cloud gives you options for computing and hosting. You can choose to do the following:
- Work in a serverless environment.
- Use a managed application platform.
- Use container technologies to gain greater flexibility.
- Build your own cloud-based infrastructure to have the most control and flexibility.
You can imagine a spectrum where, at one end, you have most of the responsibilities for resource management and, at the other end, Google has most of those responsibilities.
Cloud Functions, Google Cloud's functions as a service (FaaS) offering, provides a serverless execution environment for building and connecting cloud services. With Cloud Functions, you write simple, single-purpose functions that are attached to events raised by your cloud infrastructure and services. Your Cloud Function runs when a watched event is raised. Your code executes in a fully managed environment; you don't need to provision any infrastructure or worry about managing any servers.
Cloud Functions are a good choice for use cases that include the following:
- Data processing and ETL operations, for scenarios such as video transcoding and IoT streaming data.
- Webhooks to respond to HTTP triggers.
- Lightweight APIs that compose loosely coupled logic into applications.
- Mobile backend functions.
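As a minimal sketch of the programming model: an HTTP-triggered Cloud Function is a Python function that receives a request and returns a response. Cloud Functions passes in a Flask request object; the function and file names below are illustrative.

```python
# main.py - a minimal HTTP-triggered Cloud Function (illustrative sketch).
# Cloud Functions invokes the function with a Flask request object;
# returning a string is treated as the body of a 200 response.

def hello_http(request):
    """Respond to an HTTP request with a simple greeting."""
    # The request may carry a JSON body such as {"name": "Ada"};
    # fall back to a default when no usable body is present.
    name = "World"
    if request is not None:
        body = request.get_json(silent=True)
        if body and "name" in body:
            name = body["name"]
    return f"Hello, {name}!"
```

A function like this would typically be deployed with the `gcloud functions deploy` command, with an HTTP trigger; see the Cloud Functions documentation for the exact flags for your runtime.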
App Engine is Google Cloud's platform as a service (PaaS). With App Engine, Google handles most of the management of the resources for you. For example, if your application requires more computing resources because traffic to your website increases, Google automatically scales the system to provide those resources. If the system software needs a security update, that's handled for you, too.
When you build your app on App Engine, you can:
- Build your app on top of the App Engine flexible environment runtimes in the languages that App Engine flexible supports, including Python 2.7 and 3.6, Java 8, Go 1.8, Node.js, PHP 5.6 and 7, .NET, and Ruby. Or, use custom runtimes to run an alternative implementation of a supported language, or any other language.
- Let Google manage app hosting, scaling, monitoring, and infrastructure for you.
- Use the App Engine SDK to develop and test on your local machine in an environment that simulates App Engine on Google Cloud.
- Use the storage technologies that App Engine is designed to support in the standard and flexible environments.
  - In the standard environment, you can also choose from a variety of third-party databases, such as Redis, MongoDB, Cassandra, and Hadoop.
  - In the flexible environment, you can use any third-party database supported by your language, provided the database is accessible from the App Engine instance.
  - In either environment, these third-party databases can be hosted on Compute Engine, hosted on another cloud provider, hosted on-premises, or managed by a third-party vendor.
- Use built-in, managed services for activities such as email and user management.
- Use Web Security Scanner to identify security vulnerabilities as a complement to your existing secure design and development processes.
- Deploy your app by using the App Engine launcher GUI application on macOS or Microsoft Windows, or by using the command line.
- For the standard environment, run your app from the Central US or Western Europe regions.
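For example, a minimal `app.yaml` configuration for a flexible environment app might look like the following sketch; the scaling bounds shown are illustrative, and the exact keys for your runtime are documented in the App Engine reference.

```yaml
# app.yaml - minimal App Engine flexible environment configuration (sketch).
runtime: python     # one of the supported runtimes
env: flex           # selects the flexible environment

# Let App Engine scale instances automatically between these bounds.
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 5
```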
For a complete list and description of App Engine features, see the App Engine documentation.
With container-based computing, you can focus on your application code, instead of on deployments and integration into hosting environments. Google Kubernetes Engine (GKE), Google Cloud's containers as a service (CaaS) offering, is built on the open source Kubernetes system, which gives you the flexibility of on-premises or hybrid clouds, in addition to Google Cloud's public cloud infrastructure.
When you build with GKE, you can:
- Create and manage groups of Compute Engine instances running Kubernetes, called clusters. GKE uses Compute Engine instances as nodes in a cluster. Each node runs the Docker runtime, a Kubernetes node agent that monitors the health of the node, and a simple network proxy.
- Declare the requirements for your Docker containers by creating a simple JSON configuration file.
- Use Container Registry for secure, private storage of Docker images. You can push images to your registry, and then pull them to any Compute Engine instance or to your own hardware by using an HTTP endpoint.
- Create single- and multi-container pods. Each pod represents a logical host that can contain one or more containers. Containers in a pod work together by sharing resources, such as networking resources. Together, a set of pods might comprise an entire application, a microservice, or one layer in a multi-tier application.
- Create and manage replication controllers, which manage the creation and deletion of pod replicas based on a template. Replication controllers help to ensure that your application has the resources it needs to run reliably and scale appropriately.
- Create and manage services. Services create an abstraction layer that decouples frontend clients from the pods that provide backend functions, so clients can work without concern about which pods are being created and deleted at any given moment.
- Create an external network load balancer.
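To make the pod and service concepts above concrete, here is a sketch of a single-container pod and a service that exposes it through an external network load balancer. Kubernetes accepts equivalent JSON; the names, labels, and image path are placeholders.

```yaml
# pod.yaml - a single-container pod (illustrative sketch).
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: gcr.io/my-project/hello-app:1.0   # image pulled from Container Registry
    ports:
    - containerPort: 8080
---
# A service that decouples clients from the pods behind it, selecting
# pods by label and exposing them through an external load balancer.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: LoadBalancer     # provisions an external network load balancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
```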
Google Cloud's unmanaged compute service is Compute Engine. You can think of Compute Engine as providing an infrastructure as a service (IaaS), because the system provides a robust computing infrastructure, but you must choose and configure the platform components that you want to use. With Compute Engine, it's your responsibility to configure, administer, and monitor the systems. Google will ensure that resources are available, reliable, and ready for you to use, but it's up to you to provision and manage them. The advantage here is that you have complete control of the systems and unlimited flexibility.
When you build on Compute Engine, you can do the following:
- Use virtual machines (VMs), called instances, to build your application, much as you would if you had your own hardware infrastructure. You can choose from a variety of instance types to customize your configuration to meet your needs and your budget.
- Choose which global regions and zones to deploy your resources in, giving you control over where your data is stored and used.
- Choose which operating systems, development stacks, languages, frameworks, services, and other software technologies you prefer.
- Create instances from public or private images.
- Use Google Cloud storage technologies or any third-party technologies you prefer.
- Use Google Cloud Marketplace to quickly deploy pre-configured software packages. For example, you can deploy a LAMP or MEAN stack with just a few clicks.
- Create instance groups to more easily manage multiple instances together.
- Use autoscaling with an instance group to automatically add and remove capacity.
- Attach and detach disks as needed.
- Use SSH to connect directly to your instances.
Combining computing and hosting options
You don't have to stick with just one type of computing service. For example, you can combine App Engine and Compute Engine to take advantage of the features and benefits of each. For an example of using both App Engine and Compute Engine, see Reliable task scheduling on Compute Engine.
For a detailed look at options for serving websites, see Serving websites.
Storage services
Whatever your application, you'll probably need to store some media files, backups, or other file-like objects. Google Cloud provides a variety of storage services, including:
- Consistent, scalable, large-capacity data storage in Cloud Storage, which offers several storage classes:
  - Standard Storage provides maximum availability.
  - Nearline Storage provides low-cost archival storage, ideal for data accessed less than once a month.
  - Coldline Storage provides even lower-cost archival storage, ideal for data accessed less than once a quarter.
  - Archive Storage provides the lowest-cost archival storage for backup and disaster recovery, ideal for data that you intend to access less than once a year.
- Persistent disks on Compute Engine, for use as primary storage for your instances. Compute Engine offers both hard-disk-based persistent disks, called standard persistent disks, and solid-state persistent disks (SSD).
- Fully managed NFS file servers in Filestore. You can use Filestore instances to store data from applications running on Compute Engine VM instances or GKE clusters.
To understand the full range and benefits of storage services on Google Cloud, learn more about our storage options.
Database services
Google Cloud provides a variety of SQL and NoSQL database services:
- A SQL database in Cloud SQL, which provides either MySQL or PostgreSQL databases.
- A fully managed, mission-critical relational database service in Cloud Spanner that offers transactional consistency at global scale, schemas, SQL querying, and automatic, synchronous replication for high availability.
You can also set up your preferred database technology on Compute Engine by using persistent disks. For example, you can set up MongoDB for NoSQL document storage.
To find out about the differences between our database services, read more about Google Cloud databases.
Networking services
While App Engine manages networking for you, and GKE uses the Kubernetes model, Compute Engine provides a set of networking services. These services help you load-balance traffic across resources, create DNS records, and connect your existing network to Google's network.
Networks, firewalls, and routes
Virtual Private Cloud (VPC) provides a set of networking services that your VM instances use. Each instance can be attached to only one network. Every project has a default VPC network; you can create additional networks in your project, but networks cannot be shared between projects.
Firewall rules govern traffic coming into instances on a network. The default network has a default set of firewall rules, and you can create custom rules, too.
A route lets you implement more advanced networking functions in your instances, such as creating VPNs. A route specifies how packets leaving an instance should be directed. For example, a route might specify that packets destined for a particular network range should be handled by a gateway virtual machine instance that you configure and operate.
Load balancing
If your website or application is running on Compute Engine, the time might come when you're ready to distribute the workload across multiple instances. Server-side load balancing features provide you with the following options:
- Network load balancing lets you distribute traffic among server instances in the same region, based on incoming IP protocol data such as address, port, and protocol. Network load balancing is a great solution if, for example, you want to meet the demands of increasing traffic to your website.
- HTTP(S) load balancing lets you distribute traffic across regions, so that requests are routed to the closest region or, in the event of a failure or over-capacity, to a healthy instance in the next closest region. You can also use HTTP(S) load balancing to distribute traffic based on content type. For example, you might set up your servers to deliver static content, such as images and CSS, from one server, and dynamic content, such as PHP pages, from another. The load balancer can direct each request to the server that provides each content type.
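Content-based routing of this kind is expressed in a URL map. The following is a sketch of one, assuming two hypothetical backend services named `web-backend` and `static-backend`; requests for `/static/*` go to the static backend, and everything else to the web backend.

```yaml
# url-map.yaml - sketch of content-based HTTP(S) load balancing.
# Backend service names are placeholders.
name: content-map
defaultService: global/backendServices/web-backend
hostRules:
- hosts:
  - '*'
  pathMatcher: path-matcher
pathMatchers:
- name: path-matcher
  defaultService: global/backendServices/web-backend
  pathRules:
  - paths:
    - /static/*
    service: global/backendServices/static-backend
```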
Cloud DNS
You can publish and maintain Domain Name System (DNS) records by using the same infrastructure that Google uses. You can use the Google Cloud Console, the command line, or a REST API to work with managed zones and DNS records.
Advanced connectivity
If you have an existing network that you want to connect to Google Cloud resources, Google Cloud offers the following options for advanced connectivity:
- Cloud Interconnect lets you connect your existing network to your VPC network through a highly available, low-latency, enterprise-grade connection. You can use Dedicated Interconnect to connect directly to Google, or Partner Interconnect to connect to Google through a supported service provider.
- Cloud VPN lets you connect your existing network to your VPC network through an IPsec connection. You can also use VPN to connect two Cloud VPN gateways to each other.
- Direct Peering lets you exchange internet traffic between your business network and Google at one of Google's broad-reaching edge network locations. See Google's peering site for more information about edge locations.
- Carrier Peering lets you connect your infrastructure to Google's network edge through highly available, lower-latency connections by using service providers. You can also extend your private network into your private Virtual Private Cloud network over Carrier Peering links by using a VPN tunnel between the networks.
Big data services
Big data services enable you to process and query big data in the cloud to get fast answers to complicated questions.
BigQuery provides data analysis services. With BigQuery, you can:
- Create custom schemas that organize your data into datasets and tables.
- Load data from a variety of sources, including streaming data.
- Use SQL-like commands to query massive datasets very quickly. BigQuery is designed and optimized for speed.
- Use the web UI, command-line interface, or API.
- Load, query, export, and copy data by using jobs.
- Manage data and protect it by using permissions.
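For instance, a query over one of the BigQuery public datasets might look like the following; `bigquery-public-data.samples.shakespeare` is one of the public sample tables.

```sql
-- Count how often each word appears across Shakespeare's works,
-- using one of the BigQuery public sample tables.
SELECT
  word,
  SUM(word_count) AS total
FROM
  `bigquery-public-data.samples.shakespeare`
GROUP BY
  word
ORDER BY
  total DESC
LIMIT 10;
```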
To try out BigQuery, use the web UI quickstart to query a public dataset.
Batch and streaming data processing
Dataflow provides a managed service and set of SDKs that you can use to perform batch and streaming data processing tasks. Dataflow works well for high-volume computation, especially when the processing tasks can clearly and easily be divided into parallel workloads. Dataflow is also great for extract-transform-load (ETL) tasks, which are useful for moving data between different storage media, transforming data into a more desirable format, or loading data onto a new storage system.
Pub/Sub is an asynchronous messaging service. Your application can send messages as JSON data structures to a publishing unit called a topic. Because Pub/Sub topics are a global resource, other applications in projects that you own can subscribe to the topic to receive the messages in HTTP request or response bodies. To familiarize yourself with Pub/Sub, see the Pub/Sub quickstart.
Pub/Sub's usefulness isn't confined to big data. You can use Pub/Sub in many circumstances where you need an asynchronous messaging service. For an example that uses Pub/Sub to coordinate App Engine and Compute Engine, see Reliable task scheduling on Compute Engine.
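Over the Pub/Sub REST API, each message carries its payload as a base64-encoded `data` field. The following is a sketch, in plain Python, of building the JSON body for a publish request; the payload contents are illustrative.

```python
import base64
import json

def build_publish_body(payload: dict) -> dict:
    """Build the JSON body for a Pub/Sub topics.publish REST call.

    Pub/Sub requires the message data to be base64-encoded; here the
    payload itself is serialized as JSON, as described above.
    """
    data = json.dumps(payload).encode("utf-8")
    return {
        "messages": [
            {"data": base64.b64encode(data).decode("ascii")}
        ]
    }

# Example: the body for publishing a single task-created event.
body = build_publish_body({"event": "task-created", "id": 42})
```

A subscriber receiving this message would base64-decode the `data` field to recover the original JSON structure.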
Machine learning services
AI Platform offers a variety of powerful machine learning (ML) services. You can choose to use APIs that provide pre-trained models optimized for specific applications, or build and train your own large-scale, sophisticated models using a managed TensorFlow framework.
Machine learning APIs
Google Cloud offers a variety of APIs that enable you to take advantage of Google's ML without creating and training your own models.
- Video Intelligence API lets you use video analysis technology that provides label detection, explicit content detection, shot-change detection, and regionalization features.
- Speech-to-Text lets you convert audio to text, recognizing over 110 languages and variants, to support your global user base. You can transcribe the text of users dictating to an application's microphone, enable command-and-control through voice, or transcribe audio files, among other use cases.
- Cloud Vision lets you easily integrate vision detection features, including image labeling, face and landmark detection, optical character recognition (OCR), and tagging of explicit content.
- Cloud Natural Language API lets you add sentiment analysis, entity analysis, entity-sentiment analysis, content classification, and syntax analysis.
- Cloud Translation lets you quickly translate source text into any of over a hundred supported languages. Language detection helps in cases where the source language is not known.
- Dialogflow lets you build conversational interfaces for websites, mobile applications, popular messaging platforms, and IoT devices. You can use it to build interfaces, such as chatbots, that are capable of natural and rich interactions with humans.
AI Platform
AI Platform combines the managed infrastructure of Google Cloud with the power and flexibility of TensorFlow. You can use it to train machine learning models at scale by running TensorFlow training applications on Google Cloud, and to host those trained models so that you can get predictions about new data. AI Platform manages the computing resources that your training job needs, so you can focus on your model rather than on hardware configuration or resource management.
To try out AI Platform, see Getting started: training and prediction with Keras.