This article discusses how to host a website on Google Cloud. Google Cloud provides a robust, flexible, reliable, and scalable platform for serving websites. Google built Google Cloud by using the same infrastructure that Google uses to serve content from sites such as Google.com, YouTube, and Gmail. You can host your website's content by using the type and design of infrastructure that best suits your needs.
You might find this article useful if you are:
- Knowledgeable about how to create a website and have deployed and run some web-hosting infrastructure before.
- Evaluating whether and how to migrate your site to Google Cloud.
If you want to build a simple website, consider using Google Sites, a structured wiki- and web page–creation tool. For more information, visit Sites help.
Choosing an option
If you're new to using Google Cloud, it's a reasonable approach to start by using the kind of technology you're already familiar with. For example, if you currently use hardware servers or virtual machines (VMs) to host your site, perhaps with another cloud provider or on your own hardware, Compute Engine provides a familiar paradigm for you. If you already use a platform-as-a-service (PaaS) offering, such as Heroku or Engine Yard, App Engine might be the best place to start. If you prefer serverless computing, Cloud Run probably is a good option for you.
After you become more familiar with Google Cloud, you can explore the richness of products and services that Google Cloud provides. For example, if you started by using Compute Engine, you might augment your site's capabilities by using Google Kubernetes Engine (GKE) or migrate some or all of the functionality to App Engine and Cloud Run.
The following table summarizes your hosting options on Google Cloud:
Option | Product | Data storage | Load balancing | Scalability | Logging and monitoring |
---|---|---|---|---|---|
Static website | Cloud Storage, Firebase Hosting | Cloud Storage bucket | HTTP(S) (optional) | Automatically | |
Virtual machines | Compute Engine | Cloud SQL, Cloud Storage, Firestore, and Bigtable, or another external storage provider; hard-disk-based (standard) and solid-state (SSD) persistent disks | HTTP(S), TCP Proxy, SSL Proxy, IPv6 termination, Network, Cross-region, Internal | Automatically with managed instance groups | Cloud Logging, Cloud Monitoring |
Containers | GKE | Similar to Compute Engine, but interacts with persistent disks differently | Network, HTTP(S) | Cluster autoscaler | Cloud Logging, Cloud Monitoring |
Managed platform | App Engine | Google Cloud services such as Cloud SQL, Firestore, Cloud Storage, and accessible third-party databases | HTTP(S), managed by Google | Managed by Google | Cloud Logging, Cloud Monitoring |
Serverless | Cloud Run | Google Cloud services such as Cloud SQL, Firestore, Cloud Storage, and accessible third-party databases | HTTP(S), managed by Google | Managed by Google | Cloud Logging, Cloud Monitoring |
This article can help you to understand the main technologies that you can use for web hosting on Google Cloud and give you a glimpse of how the technologies work. The article provides links to complete documentation, tutorials, and solutions articles that can help you build a deeper understanding, when you're ready.
Understanding costs
Because there are so many variables and each implementation is different, it's beyond the scope of this article to provide specific advice about costs. To understand Google's principles about how pricing works on Google Cloud, see the pricing page. To understand pricing for individual services, see the product pricing section. You can also use the pricing calculator to estimate what your Google Cloud usage might look like. You can provide details about the services you want to use and then see a pricing estimate.
Setting up domain name services
Usually, you will want to register a domain name for your site. You can use a public domain name registrar to register a unique name for your site. If you want complete control of your own domain name system (DNS), you can use Cloud DNS to serve as your DNS provider. The Cloud DNS documentation includes a quickstart to get you going.
If you have an existing DNS provider that you want to use, you generally need to create a couple of records with that provider. For a domain name such as `example.com`, you create an `A` record with your DNS provider. For the `www.example.com` subdomain, you create a `CNAME` record for `www` that points to the `example.com` domain. The `A` record maps a hostname to an IP address. The `CNAME` record creates an alias for the `A` record.
If your domain name registrar is also your DNS provider, that's probably all you need to do. If you use separate providers for registration and DNS, make sure that your domain name registrar has the correct name servers associated with your domain.
After you make your DNS changes, the record updates can take some time to propagate, depending on the time-to-live (TTL) values in your zone. If this is a new hostname, the changes go into effect quickly because DNS resolvers have no previously cached values and can contact the DNS provider to get the information needed to route requests.
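After the records are in place, you can spot-check how they resolve from a given machine. The following is a minimal sketch that uses only Python's standard library; replace `example.com` with your own domain, and keep in mind that the result reflects whatever resolver and cached TTLs that machine is using.

```python
import socket

# Resolve the apex domain (A record). Replace example.com with your own domain.
apex_ip = socket.gethostbyname("example.com")
print("example.com ->", apex_ip)

# Resolve the www subdomain (the CNAME that points at the apex domain).
canonical, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print("www.example.com ->", canonical, aliases, addresses)
```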
Hosting a static website
The simplest way to serve website content over HTTP(S) is to host static web pages. Static web pages are served unchanged, as they were written, usually by using HTML. Using a static website is a good option if your site's pages rarely change after they have been published, such as blog posts or pages that are part of a small-business website. You can do a lot with static web pages, but if you need your site to have robust interactions with users through server-side code, you should consider the other options discussed in this article.
Hosting a static website with Cloud Storage
To host a static site in Cloud Storage, you need to create a Cloud Storage bucket, upload the content, and test your new site. You can serve your data directly from `storage.googleapis.com`, or you can verify that you own your domain and use your domain name.
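Those steps can also be scripted with the Cloud Storage client library. The following is a minimal sketch that assumes the Python `google-cloud-storage` package, Application Default Credentials, and a bucket named for a domain you have already verified; the bucket and file names are placeholders.

```python
from google.cloud import storage

client = storage.Client()

# The bucket name matches the verified domain that the site is served from.
bucket = client.create_bucket("www.example.com")

# Upload the site's entry point and a custom error page.
bucket.blob("index.html").upload_from_filename("site/index.html", content_type="text/html")
bucket.blob("404.html").upload_from_filename("site/404.html", content_type="text/html")

# Tell Cloud Storage which objects to serve for directory and missing-page requests.
bucket.configure_website(main_page_suffix="index.html", not_found_page="404.html")
bucket.patch()
```

Making the objects publicly readable, for example by granting the `allUsers` principal the Storage Object Viewer role on the bucket, is a separate step.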
You can create your static web pages however you choose. For example, you could hand-author pages by using HTML and CSS, or you can use a static-site generator, such as Jekyll, Ghost, or Hugo, to create the content. With a static-site generator, you author pages in Markdown and supply templates, and the generator produces the static HTML for you. Site generators generally provide a local web server that you can use to preview your content.
After your static site is working, you can update the static pages by using any process you like. That process can be as straightforward as hand-copying an updated page to the bucket. You might choose a more automated approach, such as storing your content on GitHub and then using a webhook to run a script that updates the bucket. An even more advanced system might use a continuous-integration/continuous-delivery (CI/CD) tool, such as Jenkins, to update the content in the bucket. Jenkins has a Cloud Storage plugin that provides a Google Cloud Storage Uploader post-build step to publish build artifacts to Cloud Storage.
If you have a web app that needs to serve static content or user-uploaded static media, using Cloud Storage can be a cost-effective and efficient way to host and serve this content, while reducing the number of dynamic requests to your web app.
Additionally, Cloud Storage can directly accept user-submitted content. This feature lets users upload large media files directly and securely without proxying through your servers.
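A common way to accept uploads directly is a signed URL: your app generates a short-lived URL that authorizes a single upload to a specific object, and the user's browser sends the file straight to Cloud Storage. Here's a minimal sketch that assumes the Python `google-cloud-storage` package and credentials that can sign URLs (for example, a service account key); the bucket and object names are placeholders.

```python
from datetime import timedelta

from google.cloud import storage

def make_upload_url(bucket_name: str, object_name: str, content_type: str) -> str:
    """Return a short-lived V4 signed URL that allows one direct PUT upload."""
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),
        method="PUT",
        content_type=content_type,  # the uploader must send this same Content-Type header
    )

# Hand this URL to the browser, which PUTs the file directly to Cloud Storage.
print(make_upload_url("example-user-uploads", "uploads/photo.jpg", "image/jpeg"))
```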
To get the best performance from your static website, see Best practices for Cloud Storage.
For more information, see the following pages:
- Hosting a static website
- J is for Jenkins (blog post)
- Band Aid 30 on Google Cloud (blog post)
- Cloud Storage documentation
Hosting a static website with Firebase Hosting
Firebase Hosting provides fast and secure static hosting for your web app. With Firebase Hosting, you can deploy web apps and static content to a global content-delivery network (CDN) by using a single command.
Here are some benefits you get when you use Firebase Hosting:
- Zero-configuration SSL is built into Firebase Hosting, which provisions SSL certificates on custom domains for free.
- All of your content is served over HTTPS.
- Your content is delivered to your users from CDN edges around the world.
- Using the Firebase CLI, you can get your app up and running in seconds. Use command-line tools to add deployment targets into your build process.
- You get release management features, such as atomic deployment of new assets, full versioning, and one-click rollbacks.
- Hosting offers a configuration useful for single-page apps and other sites that are more app-like.
- Hosting is built to be used seamlessly with other Firebase features.
Using virtual machines with Compute Engine
For infrastructure as a service (IaaS) use cases, Google Cloud provides Compute Engine. Compute Engine provides a robust computing infrastructure, but you must choose and configure the platform components that you want to use. With Compute Engine, it's your responsibility to configure, administer, and monitor the systems. Google ensures that resources are available, reliable, and ready for you to use, but it's up to you to provision and manage them. The advantage here is that you have complete control of the systems and unlimited flexibility.
Use Compute Engine to design and deploy nearly any website-hosting system you want. You can use VMs, called instances, to build your app, much like you would if you had your own hardware infrastructure. Compute Engine offers a variety of machine types to customize your configuration to meet your needs and your budget. You can choose which operating systems, development stacks, languages, frameworks, services, and other software technologies you prefer.
Setting up automatically with Google Cloud Marketplace
The easiest way to deploy a complete web-hosting stack is by using Google Cloud Marketplace. With just a few clicks, you can deploy any of over 100 fully realized solutions with Google Click to Deploy or Bitnami.
For example, you can set up a LAMP stack or WordPress with Cloud Marketplace. The system deploys a complete, working software stack in just a few minutes on a single instance. Before you deploy, Cloud Marketplace shows you cost estimates for running the site, gives you clear information about which versions of the software components it installs for you, and lets you customize your configuration by changing component instance names, choosing the machine type, and choosing a disk size. After you deploy, you have complete control over the Compute Engine instances, their configurations, and the software.
Setting up manually
You can also create your infrastructure on Compute Engine manually, either building your configuration from scratch or building on a Google Cloud Marketplace deployment. For example, you might want to use a version of a software component not offered by Cloud Marketplace, or perhaps you prefer to install and configure everything on your own.
Providing a complete framework and best practices for setting up a website is beyond the scope of this article. But from a high-level view, the technical side of setting up a web-hosting infrastructure on Compute Engine requires that you:
- Understand the requirements. If you're building a new website, make sure you understand the components you need, such as instances, storage needs, and networking infrastructure. If you're migrating your app from an existing solution, you probably already understand these requirements, but you need to think through how your existing setup maps to Google Cloud services.
- Plan the design. Think through your architecture and write down your design. Be as explicit as you can.
- Create the components. The components that you might usually think of as physical assets, such as computers and network switches, are provided through services in Compute Engine. For example, if you want a computer, you have to create a Compute Engine instance. If you want a persistent hard disk drive, you create that, too. Infrastructure-as-code tools, such as Terraform, make this an easy and repeatable process; see the sketch after this list.
- Configure and customize. After you have the components you want, you need to configure them, install and configure software, and write and deploy any customization code that you require. You can replicate the configuration by running shell scripts, which helps to speed future deployments. Terraform helps here, too, by providing declarative, flexible configuration templates for automatic deployment of resources. You can also take advantage of IT automation tools such as Puppet and Chef.
- Deploy the assets. Presumably, you have web pages and images.
- Test. Verify that everything works as you expect.
- Deploy to production. Open up your site for the world to see and use.
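To make the component-creation step concrete, here's a minimal sketch of creating a single web-server instance through the Compute Engine API. It assumes the Python `google-cloud-compute` client library and Application Default Credentials; the project ID, zone, machine type, and image family are placeholder choices, not recommendations.

```python
from google.cloud import compute_v1

def create_web_server(project_id: str, zone: str, name: str) -> compute_v1.Instance:
    """Create one Compute Engine instance with a Debian boot disk on the default network."""
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=20,
        ),
    )
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-small",
        disks=[boot_disk],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    client = compute_v1.InstancesClient()
    operation = client.insert(project=project_id, zone=zone, instance_resource=instance)
    operation.result()  # Block until the create operation finishes.
    return client.get(project=project_id, zone=zone, instance=name)

server = create_web_server("my-project", "us-central1-a", "web-server-1")
print(server.status)
```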
Storing data with Compute Engine
Most websites need some kind of storage. You might need storage for a variety of reasons, such as saving files that your users upload, and of course the assets that your site uses.
Google Cloud provides a variety of managed storage services, including:
- A SQL database in Cloud SQL, which is a fully managed relational database service for MySQL, PostgreSQL, and SQL Server.
- Two options for NoSQL data storage: Firestore and Bigtable.
- Memorystore, which is a fully managed in-memory data store service for Redis and Memcached.
- Consistent, scalable, large-capacity object storage in Cloud Storage. Cloud Storage comes in several classes:
  - Standard provides maximum availability.
  - Nearline provides a low-cost choice ideal for data accessed less than once a month.
  - Coldline provides a low-cost choice ideal for data accessed less than once a quarter.
  - Archive provides the lowest-cost choice for archiving, backup, and disaster recovery.
- Persistent disks on Compute Engine for use as primary storage for your instances. Compute Engine offers both hard-disk-based persistent disks, called standard persistent disks, and solid-state persistent disks (SSD). You can also choose to set up your preferred storage technology on Compute Engine by using persistent disks. For example, you can set up PostgreSQL as your SQL database or MongoDB as your NoSQL storage.
To understand the full range and benefits of storage services on Google Cloud, see Choosing a storage option.
Load balancing with Compute Engine
For any website that operates at scale, using load-balancing technologies to distribute the workload among servers is often a requirement. You have a variety of options when architecting your load-balanced web servers on Compute Engine, including:
- HTTP(S) load balancing. Explains the fundamentals of using Cloud Load Balancing.
- Content-based load balancing. Demonstrates how to distribute traffic to different instances based on the incoming URL.
- Cross-region load balancing. Demonstrates configuring VM instances in different regions and using HTTP or HTTPS load balancing to distribute traffic across the regions.
- TCP Proxy load balancing. Demonstrates setting up global TCP Proxy load balancing for a service that exists in multiple regions.
- SSL Proxy load balancing. Demonstrates setting up global SSL Proxy load balancing for a service that exists in multiple regions.
- IPv6 termination for HTTP(S), SSL Proxy, and TCP Proxy load balancing. Explains IPv6 termination and the options for configuring load balancers to handle IPv6 requests.
- Network load balancing. Shows a basic scenario that sets up a layer 3 load balancing configuration to distribute HTTP traffic across healthy instances.
- Cross-region load balancing using Microsoft IIS backends. Shows how to use the Compute Engine load balancer to distribute traffic to Microsoft Internet Information Services (IIS) servers.
- Setting up internal load balancing. You can set up a load balancer that distributes network traffic on a private network that isn't exposed to the internet. Internal load balancing is useful not only for intranet apps where all traffic remains on a private network, but also for complex web apps where a frontend sends requests to backend servers by using a private network.
Load balancing deployment is flexible, and you can use Compute Engine with your existing solutions. For example, HTTP(S) load balancing using Nginx is one possible solution that you could use in place of the Compute Engine load balancer.
Content distribution with Compute Engine
Because response time is a fundamental metric for any website, using a CDN to lower latency and increase performance is often a requirement, especially for a site with global web traffic.
Cloud CDN uses Google's globally distributed edge points of presence to deliver content from cache locations closest to users. Cloud CDN works with HTTP(S) load balancing. To serve content out of Compute Engine, Cloud Storage, or both from a single IP address, enable Cloud CDN for an HTTP(S) load balancer.
Autoscaling with Compute Engine
You can set up your architecture to add and remove servers as demand varies. This approach can help to ensure that your site performs well under peak load, while keeping costs under control during more-typical demand periods. Compute Engine provides an autoscaler that you can use for this purpose.
Autoscaling is a feature of managed instance groups. A managed instance group is a pool of homogeneous virtual machine instances, created from a common instance template. An autoscaler adds or removes instances in a managed instance group. Although Compute Engine has both managed and unmanaged instance groups, you can only use managed instance groups with an autoscaler. For more information, see autoscaling on Compute Engine.
For an in-depth look at what it takes to build a scalable and resilient web-app solution, see Building scalable and resilient web apps.
Logging and monitoring with Compute Engine
Google Cloud includes features that you can use to keep tabs on what's happening with your website.
Cloud Logging collects and stores logs from apps and services on Google Cloud. You can view or export logs and integrate third-party logs by using a logging agent.
Cloud Monitoring provides dashboards and alerts for your site. You configure Monitoring with the Google Cloud console. You can review performance metrics for cloud services, virtual machines, and common open source servers such as MongoDB, Apache, Nginx, and Elasticsearch. You can use the Cloud Monitoring API to retrieve monitoring data and create custom metrics.
Cloud Monitoring also provides uptime checks, which send requests to your websites to see if they respond. You can monitor a website's availability by deploying an alerting policy that creates an incident if the uptime check fails.
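You can also create uptime checks programmatically. The following is a minimal sketch that assumes the Python `google-cloud-monitoring` client library; the project ID and hostname are placeholders, and attaching an alerting policy to the check is a separate step.

```python
from google.cloud import monitoring_v3

def create_uptime_check(project_id: str, host: str) -> monitoring_v3.UptimeCheckConfig:
    """Create an HTTPS uptime check that probes the site's home page every 5 minutes."""
    config = monitoring_v3.UptimeCheckConfig(
        display_name=f"Uptime check for {host}",
        monitored_resource={"type": "uptime_url", "labels": {"host": host}},
        http_check={"path": "/", "port": 443, "use_ssl": True},
        timeout={"seconds": 10},
        period={"seconds": 300},
    )
    client = monitoring_v3.UptimeCheckServiceClient()
    return client.create_uptime_check_config(
        request={"parent": f"projects/{project_id}", "uptime_check_config": config}
    )

check = create_uptime_check("my-project", "www.example.com")
print(check.name)
```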
Managing DevOps with Compute Engine
For information about managing DevOps with Compute Engine, see Distributed load testing using Kubernetes.
Using containers with GKE
You might already be using containers, such as Docker containers. For web hosting, containers offer several advantages, including:
- Componentization. You can use containers to separate the various components of your web app. For example, suppose your site runs a web server and a database. You can run these components in separate containers, modifying and updating one component without affecting the other. As your app's design becomes more complex, containers are a good fit for a service-oriented architecture, including microservices. This kind of design supports scalability, among other goals.
- Portability. A container has everything it needs to run—your app and its dependencies are bundled together. You can run your containers on a variety of platforms, without worrying about the underlying system details.
- Rapid deployment. When it's time to deploy, your system is built from a set of definitions and images, so the parts can be deployed quickly, reliably, and automatically. Containers are typically small and deploy much more quickly compared to, for example, virtual machines.
Container computing on Google Cloud offers even more advantages for web hosting, including:
- Orchestration. GKE is a managed service built on Kubernetes, the open source container-orchestration system introduced by Google. With GKE, your code runs in containers that are part of a cluster that is composed of Compute Engine instances. Instead of administering individual containers or creating and shutting down each container manually, you can automatically manage the cluster through GKE, which uses the configuration you define.
- Image registration. Artifact Registry provides private storage for Docker images on Google Cloud. You can access the registry through an HTTPS endpoint, so you can pull images from any machine, whether it's a Compute Engine instance or your own hardware. The registry service hosts your custom images in Cloud Storage under your Google Cloud project. By default, this approach ensures that your custom images are accessible only to principals that have access to your project.
- Mobility. This means that you have the flexibility to move and combine workloads with other cloud providers, or mix cloud computing workloads with on-premises implementations to create a hybrid solution.
Storing data with GKE
Because GKE runs on Google Cloud and uses Compute Engine instances as nodes, your storage options have a lot in common with storage on Compute Engine. You can access Cloud SQL, Cloud Storage, Firestore, and Bigtable through their APIs, or you can use another external storage provider if you choose. However, GKE does interact with Compute Engine persistent disks in a different way than a normal Compute Engine instance would.
A Compute Engine instance includes an attached disk. When you use Compute Engine, as long as the instance exists, the disk volume remains with the instance. You can even detach the disk and use it with a different instance. But in a container, on-disk files are ephemeral. When a container restarts, such as after a crash, the on-disk files are lost. Kubernetes solves this issue by using the volume and StorageClass abstractions. One type of storage class is `GCE PD`. This means that you can use Compute Engine persistent disks with containers to keep your data files from being deleted when you use GKE.
To understand the features and benefits of a volume, you should first understand a bit about pods. You can think of a pod as an app-specific logical host for one or more containers. A pod runs on a node instance. When containers are members of a pod, they can share several resources, including a set of shared storage volumes. These volumes enable data to survive container restarts and to be shared among the containers within the pod. Of course, you can use a single container and volume in a pod, too, but the pod is a required abstraction to logically connect these resources to each other.
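As an illustration of the volume abstraction, the following is a minimal sketch that requests persistent storage through a PersistentVolumeClaim by using the official Kubernetes Python client. It assumes a GKE cluster whose default StorageClass provisions Compute Engine persistent disks; the claim name, namespace, and size are placeholders.

```python
from kubernetes import client, config

# Assumes kubectl credentials for your GKE cluster are already configured locally.
config.load_kube_config()
core = client.CoreV1Api()

# On GKE, the cluster's default StorageClass dynamically provisions a Compute Engine
# persistent disk to back this claim; a pod can then mount the claim as a volume.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "web-content"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_manifest)
```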
For an example, see the tutorial Using persistent disks with WordPress and MySQL.
Load balancing with GKE
Many large web-hosting architectures need to have multiple servers running that can share the traffic demands. Because you can create and manage multiple containers, nodes, and pods with GKE, it's a natural fit for a load-balanced web-hosting system.
Using network load balancing
The easiest way to create a load balancer in GKE is to use Compute Engine's network load balancing. Network load balancing can balance the load of your systems based on incoming internet protocol data, such as the address, port, and protocol type. Network load balancing uses forwarding rules. These rules point to target pools that list which instances are available to be used for load balancing.
With network load balancing, you can load balance additional TCP/UDP-based protocols such as SMTP traffic, and your app can directly inspect the packets.
You can deploy network load balancing by adding the `type: LoadBalancer` field to your Service configuration file.
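You would typically do this with kubectl and a YAML manifest; as an equivalent sketch, the following uses the official Kubernetes Python client to create a Service of type `LoadBalancer`, which GKE backs with a Compute Engine network load balancer. The Service name, selector label, and ports are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl credentials for the GKE cluster
core = client.CoreV1Api()

# Equivalent to a Service manifest with `type: LoadBalancer`: GKE provisions a network
# load balancer whose external IP forwards traffic to the pods matching the selector.
service_manifest = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-frontend"},
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
core.create_namespaced_service(namespace="default", body=service_manifest)
```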
Using HTTP(S) load balancing
If you need more advanced load-balancing features, such as HTTPS load balancing, content-based load balancing, or cross-region load balancing, you can integrate your GKE service with Compute Engine's HTTP/HTTPS load balancing feature. Kubernetes provides the Ingress resource that encapsulates a collection of rules for routing external traffic to Kubernetes endpoints. In GKE, an Ingress resource handles provisioning and configuring the Compute Engine HTTP/HTTPS load balancer.
For more information about using HTTP/HTTPS load balancing in GKE, see Setting up HTTP load balancing with Ingress.
Scaling with GKE
For automatic resizing of clusters, you can use the Cluster Autoscaler. This feature periodically checks whether there are any pods that are waiting for a node with free resources but aren't being scheduled. If such pods exist, then the autoscaler resizes the node pool if resizing would allow the waiting pods to be scheduled.
Cluster Autoscaler also monitors the usage of all nodes. If a node isn't needed for an extended period of time, and all of its pods can be scheduled elsewhere, then the node is deleted.
For more information about the Cluster Autoscaler, its limitations, and best practices, see the Cluster Autoscaler documentation.
Logging and monitoring with GKE
Like on Compute Engine, Logging and Monitoring provide your logging and monitoring services. Logging collects and stores logs from apps and services. You can view or export logs and integrate third-party logs by using a logging agent.
Monitoring provides dashboards and alerts for your site. You configure Monitoring with the Google Cloud console. You can review performance metrics for cloud services, virtual machines, and common open source servers such as MongoDB, Apache, Nginx, and Elasticsearch. You can use the Monitoring API to retrieve monitoring data and create custom metrics.
Managing DevOps with GKE
When you use GKE, you're already getting many of the benefits most people think of when they think of DevOps, especially ease of packaging, deployment, and management. For your CI/CD workflow needs, you can take advantage of tools that are built for the cloud, such as Cloud Build and Cloud Deploy, or popular tools such as Jenkins.
Building on a managed platform with App Engine
On Google Cloud, the managed platform as a service (PaaS) is called App Engine. When you build your website on App Engine, you get to focus on coding up your features and let Google worry about managing the supporting infrastructure. App Engine provides a wide range of features that make scalability, load balancing, logging, monitoring, and security much easier than if you had to build and manage them yourself. App Engine lets you code in a variety of programming languages, and it can use a variety of other Google Cloud services.
App Engine provides the standard environment, which lets you run apps in a secure, sandboxed environment. The App Engine standard environment distributes requests across multiple servers, and scales servers to meet traffic demands. Your app runs in its own secure, reliable environment that's independent of the hardware, operating system, or physical location of the server.
To give you more options, App Engine offers the flexible environment. When you use the flexible environment, your app runs on configurable Compute Engine instances, but App Engine manages the hosting environment for you. This means that you can use additional runtimes, including custom runtimes, for more programming language choices. You can also take advantage of some of the flexibility that Compute Engine offers, such as choosing from a variety of CPU and memory options.
Programming languages
The App Engine standard environment provides default runtimes, and you write source code in specific versions of the supported programming languages.
With the flexible environment, you write source code in a version of any of the supported programming languages. You can customize these runtimes or provide your own runtime with a custom Docker image or Dockerfile.
If the programming language you use is a primary concern, you need to decide whether the runtimes provided by the App Engine standard environment meet your requirements. If they don't, you should consider using the flexible environment.
To determine which environment best meets your app's needs, see Choosing an App Engine environment.
Getting started tutorials by language
The following tutorials can help you get started using the App Engine standard environment:
- Hello World in Python
- Hello World in Java
- Hello World in PHP
- Hello World in Ruby
- Hello World in Go
- Hello World in Node.js
The following tutorials can help you get started using the flexible environment:
- Getting started with Python
- Getting started with Java
- Getting started with PHP
- Getting started with Go
- Getting started with Node.js
- Getting started with Ruby
- Getting started with .NET
Storing data with App Engine
App Engine gives you options for storing your data:
Name | Structure | Consistency |
---|---|---|
Firestore | Schemaless | Strongly consistent. |
Cloud SQL | Relational | Strongly consistent. |
Cloud Storage | Files and their associated metadata | Strongly consistent except when performing list operations that get a list of buckets or objects. |
You can also use several third-party databases with the standard environment.
For more details about storage in App Engine, see Choosing a storage option, and then select your preferred programming language.
When you use the flexible environment, you can use all of the same storage options as you can with the standard environment, and a wider range of third-party databases as well. For more information about third-party databases in the flexible environment, see Using third-party databases.
Load balancing and autoscaling with App Engine
By default, App Engine automatically routes incoming requests to appropriate backend instances and does load balancing for you. However, if you want to take advantage of Google Cloud’s fully featured enterprise-grade HTTP(S) load balancing capabilities, you can use serverless network endpoint groups.
For scaling, App Engine can automatically create and shut down instances as traffic fluctuates, or you can specify a number of instances to run regardless of the amount of traffic.
Logging and monitoring with App Engine
In App Engine, requests are logged automatically, and you can view these logs in the Google Cloud console. App Engine also works with standard, language-specific libraries that provide logging functionality and forwards the log entries to the logs in the Google Cloud console. For example, in Python you can use the standard Python logging module, and in Java you can integrate the Logback appender or `java.util.logging` with Cloud Logging. This approach enables the full features of Cloud Logging and requires only a few lines of Google Cloud-specific code.
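As a minimal Python sketch of that integration, assuming the `google-cloud-logging` client library: attach the Cloud Logging handler to the standard logging module, and subsequent log calls are forwarded to Cloud Logging.

```python
import logging

import google.cloud.logging

# Attach the Cloud Logging handler to Python's root logger. After this call,
# standard logging statements are forwarded to Cloud Logging.
client = google.cloud.logging.Client()
client.setup_logging()

logging.info("Handled request for the home page")
logging.warning("Slow response from the product catalog backend")
```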
Cloud Monitoring provides features for monitoring your App Engine apps. Through the Google Cloud console, you can monitor incidents, uptime checks, and other details.
Building on a serverless platform with Cloud Run
Google Cloud's serverless platform lets you write code your way without worrying about the underlying infrastructure. You can build full-stack serverless applications with Google Cloud’s storage, databases, machine learning, and more.
You can also deploy your containerized websites to Cloud Run, in addition to using GKE. Cloud Run is a fully managed serverless platform that lets you run highly scalable containerized applications on Google Cloud. You pay only for the time that your code runs.
Using containers with Cloud Run, you can take advantage of mature technologies such as Nginx, Express.js, and Django to build your websites, access your SQL database on Cloud SQL, and render dynamic HTML pages.
The Cloud Run documentation includes a quickstart to get you going.
Storing data with Cloud Run
Cloud Run containers are ephemeral and you need to understand their quotas and limits for your use cases. Files can be temporarily stored for processing in a container instance, but this storage comes out of the available memory for the service as described in the runtime contract.
For persistent storage, similar to App Engine, you can choose Google Cloud services such as Cloud Storage, Firestore, or Cloud SQL. Alternatively, you can use a third-party storage solution.
Load balancing and autoscaling with Cloud Run
By default, when you build on Cloud Run, it automatically routes incoming requests to appropriate backend containers and does load balancing for you. However, if you want to take advantage of Google Cloud's fully featured, enterprise-grade HTTP(S) load balancing capabilities, you can use serverless network endpoint groups.
With HTTP(S) load balancing, you can enable Cloud CDN or serve traffic from multiple regions. In addition, you can use middleware such as API Gateway to enhance your service.
For Cloud Run, Google Cloud manages container instance autoscaling for you. Each revision is automatically scaled to the number of container instances needed to handle all incoming requests. When a revision doesn't receive any traffic, by default it's scaled to zero container instances. However, you can change this default by using the minimum instances setting to keep an instance idle or warm.
Logging and monitoring with Cloud Run
Cloud Run has two types of logs, which are automatically sent to Cloud Logging:
- Request logs: logs of requests sent to Cloud Run services. These logs are created automatically.
- Container logs: logs emitted from the container instances, typically from your own code, written to supported locations as described in Writing container logs.
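Anything your container writes to stdout or stderr becomes a container log entry. If you emit one JSON object per line, Cloud Logging parses fields such as severity and message into structured entries. The following is a minimal sketch in Python, with no client library required; the field names beyond `severity` and `message` are your own.

```python
import json
import sys

def log(severity: str, message: str, **fields) -> None:
    """Write one JSON object per line to stdout; Cloud Run forwards it to Cloud Logging."""
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout, flush=True)

log("INFO", "Request handled", path="/", status=200)
log("ERROR", "Upstream call failed", upstream="inventory-service")
```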
You can view logs for your service in a couple of ways:
- Use the Cloud Run page in the Google Cloud console.
- Use Cloud Logging Logs Explorer in the Google Cloud console.
Both of these viewing methods examine the same logs stored in Cloud Logging, but the Logs Explorer provides more details and more filtering capabilities.
Cloud Monitoring provides Cloud Run performance monitoring, metrics, and uptime checks, along with alerts to send notifications when certain metric thresholds are exceeded. Google Cloud Observability pricing applies, which means there is no charge for metrics on the fully managed version of Cloud Run. Note that you can also use Cloud Monitoring custom metrics.
Cloud Run is integrated with Cloud Monitoring with no setup or configuration required. This means that metrics of your Cloud Run services are automatically captured when they are running.
Building content management systems
Hosting a website means managing your website assets. Cloud Storage provides a global repository for these assets. One common architecture deploys static content to Cloud Storage and then syncs to Compute Engine to render dynamic pages. Cloud Storage works with many third-party content management systems, such as WordPress, Drupal, and Joomla. Cloud Storage also offers an Amazon S3 compatible API, so any system that works with Amazon S3 can work with Cloud Storage.
What's next
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.