This page lists known issues for Cloud Run (fully managed) and Cloud Run for Anthos on Google Cloud.
You can also check for existing issues or open new issues in the public issue trackers.
Cloud Run (fully managed)
The following are known Cloud Run (fully managed) issues:
Google Cloud services not yet supported
The following table lists services that are not yet supported by Cloud Run (fully managed). Note that Cloud Run for Anthos on Google Cloud can use any service that Google Kubernetes Engine can use.
| Service | Limitation |
| --- | --- |
| Virtual Private Cloud | Cloud Run (fully managed) cannot connect to a VPC network. |
| Memorystore | Cloud Run (fully managed) cannot connect to a VPC network. |
| Filestore (NAS) | Not yet supported. Note that Filestore is distinct from Firestore, which is supported. |
| Cloud Load Balancing | Cloud Run (fully managed) does not work with Cloud Load Balancing. |
| Google Cloud Armor | Cloud Run (fully managed) does not work with Cloud Load Balancing, which Google Cloud Armor requires. |
| Cloud CDN | Cloud Run (fully managed) does not work with Cloud Load Balancing, which Cloud CDN requires. |
| Identity-Aware Proxy | Cloud Run (fully managed) does not work with Cloud Load Balancing, which Identity-Aware Proxy requires. |
| VPC Service Controls | Cloud Run (fully managed) cannot be deployed into a VPC network. |
| Cloud Asset Inventory | Not yet supported. |
High request latency with custom domains when invoking from some regions
Requests to Cloud Run (fully managed) services using custom domains can have an abnormally high latency from some locations.
gRPC and WebSockets support
Cloud Run (fully managed) currently does not support HTTP streaming. Therefore, inbound requests with protocols like WebSockets and gRPC (streaming RPCs) are not supported.
As an exception, Cloud Run (fully managed) services support unary gRPC for inbound requests.
For outbound requests, both WebSockets and gRPC are supported on Cloud Run (fully managed).
Delay with labels on Networking SKUs for new revisions
After deploying a new revision, labels can take up to a few hours to be associated with networking billing SKUs.
Cloud Run-specific networking SKU IDs are not final.
After general availability, Cloud Run-specific networking SKUs may be replaced by a common GCP networking SKU. Pricing will remain the same.
Reserved "_ah/" URLs
It is not possible to use request paths that begin with "_ah/"; these paths are reserved.
No ability to restrict available regions
It is not yet possible to set an organization policy with a resource locations constraint on Cloud Run (fully managed) services. This means you cannot restrict which Cloud Run (fully managed) regions are available to your organization.
Cloud Run for Anthos on Google Cloud
The following are known Cloud Run for Anthos on Google Cloud issues:
Services stuck in RevisionMissing due to a missing webhook configuration
Creation of a new service or a new service revision may become stuck in the "RevisionMissing" state due to a missing webhook configuration. You can confirm this with the following command:
kubectl get mutatingwebhookconfiguration webhook.serving.knative.dev
If the configuration is missing, the command returns an error like the following:
mutatingwebhookconfigurations.admissionregistration.k8s.io "webhook.serving.knative.dev" not found
Until this is fixed in an upcoming version, you can do the following to fix this issue:
Restart the webhook Pod to recreate the missing webhook configuration, and watch until it is recreated:
kubectl delete pod -n knative-serving -lapp=webhook
kubectl get mutatingwebhookconfiguration --watch
Restart the controllers:
kubectl delete pod -n gke-system -listio=pilot
kubectl delete pod -n knative-serving -lapp=controller
Deploy a new revision for each service that is stuck in the RevisionMissing state:
gcloud run services update SERVICE --update-labels client.knative.dev/nonce=""
replacing SERVICE with the name of the service.
Repeat the above steps as needed if you experience the same issue when you deploy new revisions of the service.
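The recovery steps above can be sketched as a single script. This is a sketch only: it assumes kubectl and gcloud are already configured for the affected cluster, and the function name is illustrative.

```shell
#!/usr/bin/env bash
# Sketch of the RevisionMissing recovery steps above.
# Assumes kubectl and gcloud are configured for the affected cluster.

recover_revision_missing() {
  local service="$1"

  # Step 1: restart the webhook Pod so the missing webhook configuration
  # is recreated; the watch blocks until you press Ctrl+C once
  # webhook.serving.knative.dev reappears.
  kubectl delete pod -n knative-serving -lapp=webhook
  kubectl get mutatingwebhookconfiguration --watch

  # Step 2: restart the controllers.
  kubectl delete pod -n gke-system -listio=pilot
  kubectl delete pod -n knative-serving -lapp=controller

  # Step 3: deploy a new revision of the affected service.
  gcloud run services update "$service" \
    --update-labels client.knative.dev/nonce=""
}

# Usage: recover_revision_missing SERVICE
```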
Upgrading from version 1.14.6-gke.2
When upgrading a cluster from version 1.14.6-gke.2, you must take additional actions to bring Knative Serving into a healthy state.
For example, you may experience these symptoms:
- Errors attempting to reach Knative Services
- Errors reconciling VirtualServices in the Knative controller logs
To address these issues, run these commands:
kubectl delete pods -n knative-serving -lapp=controller
kubectl delete pods -n gke-system -listio=pilot
These commands restart the Knative controller and
istio-pilot pods. After
restarting these processes, you should observe that the deployed Knative
Services become ready.
Some GKE versions don't operate correctly
The following versions of Cloud Run on GKE contain an issue that may cause Cloud Run services to stop working:
- 1.15.3-gke.1 (Cloud Run version 0.8.0)
- 1.14.3-gke.11 (Cloud Run version 0.6.1)
- 1.14.6-gke.1 (Cloud Run version 0.8.0)
- 1.14.6-gke.2 (Cloud Run version 0.8.1)
- 1.12.10-gke.5 (Cloud Run version 0.8.0)
- 1.12.10-gke.10 (Cloud Run version 0.8.1)
We strongly recommend that you do not create new Cloud Run on GKE clusters with, or upgrade existing clusters to, any of the GKE versions listed above until a fixed version is available, as they will fail to operate correctly.
Instead, select a cluster version from the recommended versions page.
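If you script cluster creation or upgrades, you can guard against the affected versions with a check like the following sketch; the helper name is illustrative, and the commented gcloud invocation uses CLUSTER and ZONE placeholders.

```shell
# Returns success (0) when the given GKE version is one of the
# affected versions listed above.
is_affected_gke_version() {
  case "$1" in
    1.15.3-gke.1|1.14.3-gke.11|1.14.6-gke.1|1.14.6-gke.2|1.12.10-gke.5|1.12.10-gke.10)
      return 0 ;;
    *)
      return 1 ;;
  esac
}

# Example: check the master version of an existing cluster.
# (Replace CLUSTER and ZONE with your values.)
# version="$(gcloud container clusters describe CLUSTER --zone ZONE \
#   --format='value(currentMasterVersion)')"
# is_affected_gke_version "$version" && echo "affected: $version"
```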
Gateway restarts in version 0.6.1-gke.1
In the Cloud Run for Anthos on Google Cloud release 0.6.1-gke.1, a bug causes the cluster-local gateway to restart periodically because the CPU allocated to the gateway is set too low. To solve this issue, edit the HPA:
kubectl edit hpa cluster-local-gateway -n istio-system
Then set minReplicas to match maxReplicas; lower values may also work.
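A non-interactive sketch of the same fix, assuming the goal is simply to copy the HPA's maxReplicas into minReplicas (the function name is illustrative):

```shell
# Set minReplicas on the cluster-local gateway HPA to match maxReplicas.
fix_cluster_local_gateway_hpa() {
  local max
  # Read the current maxReplicas from the HPA.
  max="$(kubectl get hpa cluster-local-gateway -n istio-system \
    -o jsonpath='{.spec.maxReplicas}')"
  # Patch minReplicas to the same value.
  kubectl patch hpa cluster-local-gateway -n istio-system \
    --patch "{\"spec\":{\"minReplicas\":${max}}}"
}
```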
Upgrading to 0.6.0-gke.1
When upgrading a cluster to a version that includes Cloud Run for Anthos on Google Cloud 0.6.0-gke.1, all previously deployed services will return the error "no healthy upstream" until a new revision is deployed.
Redeploying a service will resolve the error state. This issue will be addressed in a future update.
Control plane unavailable during zonal cluster maintenance
When using a zonal cluster with Cloud Run for Anthos on Google Cloud, access to the control plane is unavailable during cluster maintenance.
During this period, Cloud Run for Anthos on Google Cloud may not work as expected. Services deployed in that cluster:
- Are not shown in the Cloud Console or by the gcloud command-line tool
- Cannot be deleted or updated
- Will not automatically scale instances, but existing instances will continue to serve new requests
To avoid these issues, you can use a regional cluster, which ensures a highly available control plane.
Default memory limit is not enforced through command line
When deploying using the command line, the deployed service will not have a memory limit unless the --memory argument is used. This allows the service to consume as much memory as is available on the node where the pod is running, which may have unexpected side effects.
When deploying through the UI, a default value of 256M is used unless the value is overridden.
You can work around this by defining a default memory limit for the namespace where you deploy services with Cloud Run on GKE. For more information, see Configuring default memory limits in the Kubernetes documentation.
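As a sketch of that workaround, the following LimitRange gives every container in a namespace a default memory limit of 256Mi, matching the UI default described above; `my-namespace` is a placeholder for the namespace you deploy to.

```shell
# Apply a default memory limit of 256Mi to containers in my-namespace.
# (my-namespace is a placeholder; use the namespace you deploy to.)
kubectl apply -n my-namespace -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-memory-limit
spec:
  limits:
  - default:
      memory: 256Mi
    type: Container
EOF
```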
Default CPU limit is not enabled
When deploying using the command line or Console, the amount of CPU a service can use is not defined. This allows the service to consume all available CPU in the node where it is running, which may have unexpected side effects.
You can work around this by defining a default CPU limit for the namespace where you deploy services with Cloud Run on GKE. For more information, see Configuring default CPU limits in the Kubernetes documentation.
Note: By default, services deployed with Cloud Run for Anthos on Google Cloud request
400m CPU, which is used to schedule instances of a service on the cluster nodes.
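The CPU workaround can be sketched the same way as the memory one; the 1-CPU default limit and the `my-namespace` namespace below are illustrative choices, not values from this page.

```shell
# Apply a default CPU limit (here 1 CPU, an illustrative value) to
# containers in my-namespace.
kubectl apply -n my-namespace -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-limit
spec:
  limits:
  - default:
      cpu: "1"
    type: Container
EOF
```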
Istio 1.0 limitations
Cloud Run for Anthos on Google Cloud uses Istio 1.0 for networking, which limits the number of services and revisions that can exist in a cluster. For more information on these limitations, see Istio 1.0 performance and scalability.
Cloud Run for Anthos on Google Cloud should not be used to deploy more than 150 services or 300 active revisions in the same cluster.
Contents of read/write paths in the container are always empty
If your container creates files or folders in /var/log/nginx, those files or folders will not be visible when you run that container in Cloud Run for Anthos on Google Cloud, because an empty read/write volume is mounted on /var/log, hiding the contents of the underlying container image.
If your service needs to write to a subdirectory of
/var/log, the service
must ensure that the folder exists at runtime before writing into the folder. It
cannot assume that the folder exists from the container image.
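For example, a minimal entrypoint helper can create the folder at runtime before the server starts; the /var/log/nginx path and the nginx command below are illustrative.

```shell
#!/bin/sh
# Ensure a writable log directory exists at runtime; anything created
# under /var/log at build time is hidden by the empty volume mount.
ensure_log_dir() {
  mkdir -p "$1"
}

# In the container entrypoint, create the folder, then start the server:
# ensure_log_dir /var/log/nginx
# exec nginx -g 'daemon off;'
```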