This page lists known issues for Cloud Run and Cloud Run on GKE.
You can also check for existing issues or open new issues in the public issue trackers.
The following are known Cloud Run issues:
GCP services not yet supported
The following table lists services that are not yet supported by the fully managed version of Cloud Run. Note that Cloud Run on GKE can use any service that Google Kubernetes Engine can use.
| Service | Notes |
|---|---|
| Virtual Private Cloud | Cloud Run cannot connect to VPC networks. |
| Cloud Memorystore | Cloud Run cannot connect to VPC networks. |
| Cloud Filestore (NAS) | Note that Cloud Filestore is not Cloud Firestore, which is supported. |
| Cloud Load Balancing | |
| VPC Service Controls | |
| Cloud Asset Inventory | |
High request latency when invoking from some regions
We have noticed abnormally high request latencies when invoking Cloud Run services from the following locations: Mumbai, Seoul, and São Paulo.
gRPC and WebSocket support
Inbound WebSocket and gRPC connections are not currently supported for Cloud Run. Outbound connections using these protocols are supported.
Delay with labels on Networking SKUs for new revisions
After deploying a new revision, labels can take up to a few hours to be associated with networking billing SKUs.
Reserved "_ah/" URLs
It is not possible to use request paths that begin with "_ah/".
Cloud Run on GKE
The following are known Cloud Run on GKE issues:
Upgrading to 0.6.0-gke.1
When upgrading a cluster to a version that includes Cloud Run on GKE
0.6.0-gke.1, all previously deployed services will return a
"no healthy upstream" error until a new revision is deployed.
Redeploying each service resolves the error state. This issue will be addressed in a future update.
Control plane unavailable during zonal cluster maintenance
When using a zonal cluster with Cloud Run on GKE, access to the control plane is unavailable during cluster maintenance.
During this period, Cloud Run on GKE may not work as expected. Services deployed in that cluster:
- Are not shown in the Cloud Console or via the gcloud SDK
- Cannot be deleted or updated
- Will not automatically scale instances, but existing instances will continue to serve new requests
To avoid these issues, use a regional cluster, which ensures a highly available control plane.
Default memory limit is not enforced through command line
When deploying using the command line, unless the
--memory argument is used, the deployed service will not have a memory limit.
This allows the service to consume as much memory as is available on the node
where the pod is running, which may have unexpected side effects.
When deploying through the UI, the default value of 256M is used unless the
value is overridden.
You can work around this by defining a default memory limit for the namespace where you deploy services with Cloud Run on GKE. For more information, see Configuring default memory limits in the Kubernetes documentation.
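As an illustrative sketch of that workaround (the namespace, resource name, and values below are assumptions, not product defaults), a Kubernetes LimitRange gives every container in the namespace a memory limit when none is set explicitly:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-memory-limit     # illustrative name
  namespace: my-cloud-run-ns     # the namespace where you deploy services
spec:
  limits:
  - type: Container
    default:
      memory: 512Mi              # limit applied when a container sets none
    defaultRequest:
      memory: 256Mi              # request applied when a container sets none
```

Once applied with kubectl apply -f, the defaults affect containers created in that namespace afterwards; existing pods are not changed.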
Default CPU limit is not enabled
When deploying using the command line or Console, the amount of CPU a service can use is not defined. This allows the service to consume all available CPU on the node where it is running, which may have unexpected side effects.
You can work around this by defining a default CPU limit for the namespace where you deploy services with Cloud Run on GKE. For more information, see Configuring default CPU limits in the Kubernetes documentation.
Note: By default, services deployed with Cloud Run on GKE request
400m CPU, which is used to schedule instances of a service on the cluster nodes.
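As with the memory workaround, a hedged sketch of a LimitRange that supplies a default CPU limit for the namespace (the resource name, namespace, and limit value are illustrative; the 400m request mirrors the default request noted above):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-limit        # illustrative name
  namespace: my-cloud-run-ns     # the namespace where you deploy services
spec:
  limits:
  - type: Container
    default:
      cpu: "1"                   # limit applied when a container sets none
    defaultRequest:
      cpu: 400m                  # mirrors the default Cloud Run on GKE request
```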
Istio 1.0 limitations
Cloud Run on GKE uses Istio 1.0 for networking, which limits the number of services and revisions that can exist in a cluster. For more information on these limitations, see Istio 1.0 performance and scalability.
Cloud Run on GKE should not be used to deploy more than 150 services or 300 active revisions in the same cluster.
Contents of read/write paths in the container are always empty
If your container creates files or folders in
/var/log/nginx, those files or folders will not be visible when you run that
container in Cloud Run on GKE, because an empty read/write volume is mounted on
/var/log, which hides the contents of the underlying container image.
If your service needs to write to a subdirectory of
/var/log, the service
must ensure that the folder exists at runtime before writing into the folder. It
cannot assume that the folder exists from the container image.
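A minimal sketch of an entrypoint script that follows this rule (the /var/log/nginx path comes from the example above; the log file name and message are assumptions for illustration):

```shell
#!/bin/sh
# Recreate the log folder at runtime: the copy baked into the container
# image is hidden by the empty read/write volume mounted on /var/log.
mkdir -p /var/log/nginx
# Now it is safe to write into the folder.
echo "starting service" >> /var/log/nginx/startup.log
```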