This page lists known issues for Cloud Run (fully managed) and Cloud Run for Anthos deployed on GKE.
You can also check for existing issues or open new issues in the public issue trackers.
Cloud Run (fully managed)
The following are known Cloud Run (fully managed) issues:
GCP services not yet supported
The following table lists services that are not yet supported by Cloud Run (fully managed). Note that Cloud Run for Anthos deployed on GKE can use any service that Google Kubernetes Engine can use.
|Service|Details|
|---|---|
|Virtual Private Cloud|Cloud Run (fully managed) cannot connect to VPC networks.|
|Cloud Memorystore|Cloud Run (fully managed) cannot connect to VPC networks.|
|Cloud Filestore (NAS)|Note that Cloud Filestore is not Cloud Firestore, which is supported.|
|Cloud Load Balancing||
|VPC Service Controls||
|Cloud Asset Inventory||
gRPC and WebSocket support
Inbound WebSocket and gRPC connections are not currently supported for Cloud Run (fully managed). Outbound WebSocket and gRPC connections are supported.
Delay with labels on Networking SKUs for new revisions
After deploying a new revision, labels can take up to a few hours to be associated with networking billing SKUs.
Reserved "_ah/" URLs
It is not possible to use request paths that begin with "_ah/".
No ability to restrict available regions
It is not yet possible to set an organization policy with a resource locations constraint on Cloud Run (fully managed) services. This means you cannot restrict which Cloud Run (fully managed) regions are available to your organization.
Cloud Run for Anthos deployed on GKE
The following are known Cloud Run for Anthos deployed on GKE issues:
GKE Versions 1.14.3 and 1.14.6 don't operate correctly
The following versions of Cloud Run on GKE contain an issue that may cause Cloud Run services to stop working:
- 1.14.3-gke.11 (Cloud Run version 0.6.1)
- 1.14.6-gke.1 (Cloud Run version 0.8.0)
We strongly recommend that you do not create or upgrade any Cloud Run on GKE clusters to GKE version 1.14.X until a fixed version is available, as they will fail to operate correctly.
Please select a cluster version from the recommended versions page.
Gateway restarts in version 0.6.1-gke.1
In the Cloud Run for Anthos deployed on GKE release 0.6.1-gke.1, a bug causes the cluster-local gateway to restart periodically because the CPU allocated to the gateway is set too low. To resolve this issue, edit the HPA:
kubectl edit hpa cluster-local-gateway -n istio-system
Then set minReplicas to match maxReplicas; lower values should also work.
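If you prefer a non-interactive workaround over kubectl edit, the same change can be sketched with kubectl patch. This assumes the cluster-local-gateway HPA exists in istio-system, as described above:

```shell
# Read the HPA's current maxReplicas, then set minReplicas to match it.
MAX=$(kubectl get hpa cluster-local-gateway -n istio-system \
  -o jsonpath='{.spec.maxReplicas}')
kubectl patch hpa cluster-local-gateway -n istio-system \
  --type merge -p "{\"spec\":{\"minReplicas\":${MAX}}}"
```

This keeps the gateway replica count pinned so the autoscaler cannot scale it down to the under-provisioned configuration.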
Upgrading to 0.6.0-gke.1
When upgrading a cluster to a version that includes Cloud Run for Anthos deployed on GKE 0.6.0-gke.1, all previously deployed services will return the error "no healthy upstream" until a new revision is deployed.
Redeploying a service will resolve the error state. This issue will be addressed in a future update.
Control plane unavailable during zonal cluster maintenance
When using a zonal cluster with Cloud Run for Anthos deployed on GKE, access to the control plane is unavailable during cluster maintenance. During this period, Cloud Run for Anthos deployed on GKE may not work as expected. Services deployed in that cluster:
- Are not shown in the Cloud Console or via gcloud SDK
- Cannot be deleted or updated
- Will not automatically scale instances, but existing instances will continue to serve new requests
To avoid these issues, you can use a regional cluster, which ensures a highly available control plane.
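Creating a regional cluster might look like the following sketch. The cluster name, region, and machine type are placeholders, and the exact add-on flags required for Cloud Run on GKE may differ for your GKE version:

```shell
# Sketch: create a regional (not zonal) cluster so the control plane
# remains available during maintenance. Adjust names and flags as needed.
gcloud container clusters create my-cluster \
  --region us-central1 \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,Istio,CloudRun \
  --machine-type n1-standard-4 \
  --enable-stackdriver-kubernetes
```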
Default memory limit is not enforced through command line
When deploying using the command line, unless the
--memory argument is used, the deployed service will not have a memory limit. This allows the service to consume as much memory as is available on the node where the pod is running, which may have unexpected side effects.
When deploying through the UI, the default value of
256M is used unless the value is overridden.
You can work around this by defining a default memory limit for the namespace where you deploy services with Cloud Run on GKE. For more information, see Configuring default memory limits in the Kubernetes documentation.
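A namespace-level default can be set with a LimitRange, as sketched below. The namespace ("default") and the 256Mi value are examples; pick values that match your workload:

```shell
# Sketch: give every container in the namespace a default memory limit,
# so command-line deployments no longer run unbounded.
kubectl apply -n default -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 256Mi
    type: Container
EOF
```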
Default CPU limit is not enabled
When deploying using the command line or Console, the amount of CPU a service can use is not defined. This allows the service to consume all available CPU in the node where it is running, which may have unexpected side effects.
You can work around this by defining a default CPU limit for the namespace where you deploy services with Cloud Run on GKE. For more information, see Configuring default CPU limits in the Kubernetes documentation.
Note: By default, services deployed with Cloud Run for Anthos deployed on GKE request
400m CPU, which is used to schedule instances of a service on the cluster nodes.
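As with memory, a default CPU limit can be applied per namespace with a LimitRange. The namespace and the 1-CPU limit below are example values; note this caps usage only and does not change the 400m request mentioned above:

```shell
# Sketch: set a default CPU limit for every container in the namespace.
kubectl apply -n default -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: "1"
    type: Container
EOF
```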
Istio 1.0 limitations
Cloud Run for Anthos deployed on GKE uses Istio 1.0 for networking, which limits the number of services and revisions that can exist in a cluster. For more information on these limitations, see Istio 1.0 performance and scalability.
Cloud Run for Anthos deployed on GKE should not be used to deploy more than 150 services or 300 active revisions in the same cluster.
Contents of read/write paths in the container are always empty
If you have a container that creates files or folders in
/var/log/nginx, when you run that container in
Cloud Run for Anthos deployed on GKE, those files or folders will not be visible because an
empty read/write volume is mounted on
/var/log, which hides the contents
of the underlying container image.
If your service needs to write to a subdirectory of
/var/log, the service
must ensure that the folder exists at runtime before writing into the folder. It
cannot assume that the folder exists from the container image.
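One way to satisfy this is to create the folder in the container's entrypoint, as in the sketch below. LOG_DIR mirrors the /var/log/nginx example above and is overridable for illustration:

```shell
#!/bin/sh
# Sketch: create the log subdirectory at runtime before writing to it,
# instead of assuming it survives from the container image.
LOG_DIR="${LOG_DIR:-/var/log/nginx}"
mkdir -p "$LOG_DIR"
echo "service started" >> "$LOG_DIR/startup.log"
```

After this, the service (for example, an nginx process exec'd at the end of the entrypoint) can write into the directory normally.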