Troubleshoot Cloud Functions
This document shows you some of the common problems you might run into and how to deal with them.
Deployment
The deployment phase is a frequent source of problems. Many of the issues you might encounter during deployment are related to roles and permissions. Others have to do with incorrect configuration.
User with Viewer role cannot deploy a function
A user who has been assigned the Project Viewer or Cloud Functions Viewer role has read-only access to functions and function details. These roles do not allow deployment of new functions.
The error message
Cloud console
You need permissions for this action. Required permission(s): cloudfunctions.functions.create
Cloud SDK
ERROR: (gcloud.functions.deploy) PERMISSION_DENIED: Permission
'cloudfunctions.functions.sourceCodeSet' denied on resource
'projects/<PROJECT_ID>/locations/<LOCATION>' (or resource may not exist)
The solution
Assign the user a role that has the appropriate access.
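For example, a sketch granting the Cloud Functions Developer role, which includes deploy permissions (PROJECT_ID and USER_EMAIL are placeholders):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=user:USER_EMAIL \
  --role=roles/cloudfunctions.developer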
User with Project Viewer or Cloud Functions Developer role cannot deploy a function
To deploy a function, a user who has been assigned the Project Viewer, the Cloud Functions Developer, or the Cloud Functions Admin role must be assigned an additional role.
The error message
Cloud console
User does not have the iam.serviceAccounts.actAs permission on
<PROJECT_ID>@appspot.gserviceaccount.com required to create function.
You can fix this by running
'gcloud iam service-accounts add-iam-policy-binding <PROJECT_ID>@appspot.gserviceaccount.com --member=user: --role=roles/iam.serviceAccountUser'
Cloud SDK
ERROR: (gcloud.functions.deploy) ResponseError: status=[403], code=[Forbidden],
message=[Missing necessary permission iam.serviceAccounts.actAs for <USER>
on the service account <PROJECT_ID>@appspot.gserviceaccount.com. Ensure that
service account <PROJECT_ID>@appspot.gserviceaccount.com is a member of the
project <PROJECT_ID>, and then grant <USER> the role 'roles/iam.serviceAccountUser'.
You can do that by running
'gcloud iam service-accounts add-iam-policy-binding <PROJECT_ID>@appspot.gserviceaccount.com --member=<USER> --role=roles/iam.serviceAccountUser'
In case the member is a service account please use the prefix 'serviceAccount:' instead of 'user:'.]
The solution
Assign the user an additional role, the Service Account User IAM role (roles/iam.serviceAccountUser), scoped to the Cloud Functions runtime service account.
Deployment service account missing the Service Agent role when deploying functions
The Cloud Functions service uses the Cloud Functions Service Agent service account (service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com) when performing administrative actions on your project. By default this account is assigned the Cloud Functions cloudfunctions.serviceAgent role. This role is required for Cloud Pub/Sub, IAM, Cloud Storage and Firebase integrations. If you have changed the role for this service account, deployment fails.
The error message
Cloud console
Missing necessary permission resourcemanager.projects.getIamPolicy for
serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com on project <PROJECT_ID>.
Please grant serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com
the roles/cloudfunctions.serviceAgent role. You can do that by running
'gcloud projects add-iam-policy-binding <PROJECT_ID> --member=serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com --role=roles/cloudfunctions.serviceAgent'
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=7,
message=Missing necessary permission resourcemanager.projects.getIamPolicy
for serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com
on project <PROJECT_ID>. Please grant
serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com
the roles/cloudfunctions.serviceAgent role. You can do that by running
'gcloud projects add-iam-policy-binding <PROJECT_ID> --member=serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com --role=roles/cloudfunctions.serviceAgent'
The solution
Reset this service account to the default role.
Deployment service account missing Pub/Sub permissions when deploying an event-driven function
The Cloud Functions service uses the Cloud Functions Service Agent service account (service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com) when performing administrative actions. By default this account is assigned the Cloud Functions cloudfunctions.serviceAgent role. To deploy event-driven functions, the Cloud Functions service must access Cloud Pub/Sub to configure topics and subscriptions. If the role assigned to the service account is changed and the appropriate permissions are not otherwise granted, the Cloud Functions service cannot access Cloud Pub/Sub and the deployment fails.
The error message
Cloud console
Failed to configure trigger PubSub projects/<PROJECT_ID>/topics/<FUNCTION_NAME>
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=13,
message=Failed to configure trigger PubSub projects/<PROJECT_ID>/topics/<FUNCTION_NAME>
The solution
You can:
Reset this service account to the default role.
or
Grant the pubsub.subscriptions.* and pubsub.topics.* permissions to your service account manually (see the sketch below).
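For the manual option, one possible sketch is a custom role bound to the service agent account. Custom roles cannot contain wildcards, so the permission list below is an illustrative subset, and the role ID gcfPubsubAccess is hypothetical:
gcloud iam roles create gcfPubsubAccess \
  --project=PROJECT_ID \
  --title="GCF Pub/Sub access" \
  --permissions=pubsub.topics.create,pubsub.topics.get,pubsub.topics.delete,pubsub.subscriptions.create,pubsub.subscriptions.get,pubsub.subscriptions.delete

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:service-PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
  --role=projects/PROJECT_ID/roles/gcfPubsubAccess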
Default runtime service account does not exist
When a user-managed runtime service account is not specified, 1st gen functions default to using the App Engine default service account as their runtime service account. If this default account has been deleted and no user-managed account is specified, deployments fail.
The error message
Cloud console
Default service account '<PROJECT_ID>@appspot.gserviceaccount.com' doesn't exist. Please recreate this account or specify a different account. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
Cloud SDK
ERROR: (gcloud.functions.deploy) ResponseError: status=[400], code=[Ok], message=[Default service account '<PROJECT_ID>@appspot.gserviceaccount.com' doesn't exist. Please recreate this account or specify a different account. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.]
The solution
Specify a user-managed runtime service account when deploying your 1st gen functions (see the sketch below).
or
Recreate the default service account <PROJECT_ID>@appspot.gserviceaccount.com for your project.
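A sketch of the first option; the --service-account flag selects a user-managed runtime service account (the runtime and names shown are placeholders):
gcloud functions deploy FUNCTION_NAME \
  --runtime=python312 \
  --trigger-http \
  --service-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com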
User missing permissions for runtime service account while deploying a function
In environments where multiple functions are accessing different resources, it is a common practice to use per-function identities, with named runtime service accounts rather than the default runtime service account (PROJECT_ID@appspot.gserviceaccount.com).
However, to use a non-default runtime service account, the deployer must have the iam.serviceAccounts.actAs permission on that non-default account.
A user who creates a non-default runtime service account is automatically
granted this permission, but other deployers must have this permission granted
by a user with the correct permissions.
The error message
Cloud SDK
ERROR: (gcloud.functions.deploy) ResponseError: status=[400], code=[Bad Request],
message=[Invalid function service account requested: <SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com]
The solution
Assign the user the roles/iam.serviceAccountUser role on the non-default <SERVICE_ACCOUNT_NAME> runtime service account. This role includes the iam.serviceAccounts.actAs permission.
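For example, a sketch granting the role scoped to the service account itself rather than the whole project (names are placeholders):
gcloud iam service-accounts add-iam-policy-binding \
  SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --member=user:DEPLOYER_EMAIL \
  --role=roles/iam.serviceAccountUser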
Cloud Functions Service Agent service account missing project bucket permissions while deploying a function
Cloud Functions can only be triggered by events from Cloud Storage buckets in the same Google Cloud Platform project. In addition, the Cloud Functions Service Agent service account (service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com) needs a cloudfunctions.serviceAgent role on your project.
The error message
Cloud console
Deployment failure: Insufficient permissions to (re)configure a trigger
(permission denied for bucket <BUCKET_ID>). Please, give owner permissions
to the editor role of the bucket and try again.
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=7, message=Insufficient
permissions to (re)configure a trigger (permission denied for bucket <BUCKET_ID>).
Please, give owner permissions to the editor role of the bucket and try again.
The solution
You can:
Reset this service account to the default role.
or
Grant the runtime service account the cloudfunctions.serviceAgent role.
or
Grant the runtime service account the storage.buckets.{get, update} and the resourcemanager.projects.get permissions.
User with Project Editor role cannot make a function public
To ensure that unauthorized developers cannot modify authentication settings
for function invocations, the user or service that is deploying the function
must have the cloudfunctions.functions.setIamPolicy permission.
The error message
Cloud SDK
ERROR: (gcloud.functions.add-iam-policy-binding) ResponseError: status=[403], code=[Forbidden], message=[Permission 'cloudfunctions.functions.setIamPolicy' denied on resource 'projects/<PROJECT_ID>/locations/<LOCATION>/functions/<FUNCTION_NAME> (or resource may not exist).]
The solution
You can:
Assign the deployer either the Project Owner or the Cloud Functions Admin role, both of which contain the cloudfunctions.functions.setIamPolicy permission (an example follows below).
or
Grant the permission manually by creating a custom role.
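Once the deployer holds the permission, making a 1st gen function public looks like this (FUNCTION_NAME and REGION are placeholders):
gcloud functions add-iam-policy-binding FUNCTION_NAME \
  --region=REGION \
  --member=allUsers \
  --role=roles/cloudfunctions.invoker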
Function deployment fails due to Cloud Build not supporting VPC-SC
Cloud Functions uses Cloud Build to build your source code into a runnable container. In order to use Cloud Functions with VPC Service Controls, you must configure an access level for the Cloud Build service account in your service perimeter.
The error message
Cloud console
One of the following:
Error in the build environment
OR
Unable to build your function due to VPC Service Controls. The Cloud Build
service account associated with this function needs an appropriate access
level on the service perimeter. Please grant access to the Cloud Build
service account: '{PROJECT_NUMBER}@cloudbuild.gserviceaccount.com' by
following the instructions at
https://cloud.google.com/functions/docs/securing/using-vpc-service-controls#grant-build-access
Cloud SDK
One of the following:
ERROR: (gcloud.functions.deploy) OperationError: code=13, message=Error in
the build environment
OR
Unable to build your function due to VPC Service Controls. The Cloud Build
service account associated with this function needs an appropriate access
level on the service perimeter. Please grant access to the Cloud Build
service account: '{PROJECT_NUMBER}@cloudbuild.gserviceaccount.com' by
following the instructions at
https://cloud.google.com/functions/docs/securing/using-vpc-service-controls#grant-build-access
The solution
If your project's Audited Resources logs mention "Request is prohibited by organization's policy" in the VPC Service Controls section and have a Cloud Storage label, you need to grant the Cloud Build Service Account access to the VPC Service Controls perimeter.
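A sketch with Access Context Manager, assuming a basic access level whose spec file lists the Cloud Build service account; the level name, file name, and POLICY_ID are illustrative, and you would then add this level to the service perimeter as described in the linked instructions:
# cloudbuild-members.yaml (illustrative contents):
# - members:
#   - serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com
gcloud access-context-manager levels create cloudbuild_sa \
  --title="Cloud Build SA" \
  --basic-level-spec=cloudbuild-members.yaml \
  --policy=POLICY_ID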
Function deployment fails due to incorrectly specified entry point
Cloud Functions deployment can fail if the entry point to your code, that is, the exported function name, is not specified correctly.
The error message
Cloud console
Deployment failure: Function failed on loading user code. Error message:
Error: please examine your function logs to see the error cause:
https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function
failed on loading user code. Error message: Please examine your function
logs to see the error cause:
https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs
The solution
Your source code must contain an entry point function that has been correctly specified in your deployment, either via Cloud console or Cloud SDK.
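For example, in Python the --entry-point value passed at deploy time must match the name of the function exported in main.py (all names below, including the runtime, are placeholders):
# main.py
def hello_http(request):
    return 'Hello World!'

Deployed with:
gcloud functions deploy my-function \
  --runtime=python312 \
  --trigger-http \
  --entry-point=hello_http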
Function deployment fails when using Resource Location Constraint organization policy
If your organization uses a Resource Location Constraint policy, you may see this error in your logs. It indicates that the deployment pipeline failed to create a multi-regional storage bucket.
The error message
In Cloud Build logs:
Token exchange failed for project '<PROJECT_ID>'.
Org Policy Violated: '<REGION>' violates constraint 'constraints/gcp.resourceLocations'
In Cloud Storage logs:
<REGION>.artifacts.<PROJECT_ID>.appspot.com storage bucket could not be created.
The solution
If you are using constraints/gcp.resourceLocations in your organization policy constraints, you should specify the appropriate multi-region location. For example, if you are deploying in any of the us regions, you should use us-locations.
However, if you require more fine-grained control and want to restrict function deployment to a single region (not multiple regions), create the multi-region bucket first:
- Allow the whole multi-region (a gcloud sketch follows this section)
- Deploy a test function
- After the deployment has succeeded, change the organization policy back to allow only the specific region.
The multi-region storage bucket stays available for that region, so that
subsequent deployments can succeed. If you later decide to allowlist
a region outside of the one where the multi-region storage bucket was created,
you must repeat the process.
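For the first step, a sketch of allowing the us multi-region value group via the Resource Manager CLI (ORGANIZATION_ID is a placeholder; adjust the value group to your regions):
gcloud resource-manager org-policies allow gcp.resourceLocations \
  in:us-locations --organization=ORGANIZATION_ID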
Function deployment fails while executing function's global scope
This error indicates that there was a problem with your code. The deployment pipeline finished deploying the function, but failed at the last step: sending a health check to the function. This health check is meant to execute a function's global scope, which could be throwing an exception, crashing, or timing out. The global scope is where you commonly load in libraries and initialize clients.
The error message
In Cloud Logging logs:
"Function failed on loading user code. This is likely
due to a bug in the user code."
The solution
For a more detailed error message, look into your function's build logs, as well as your function's runtime logs. If it is unclear why your function failed to execute its global scope, consider temporarily moving the code into the request invocation, using lazy initialization of the global variables. This allows you to add extra log statements around your client libraries, which could be timing out on their instantiation (especially if they are calling other services), or crashing/throwing exceptions altogether. Additionally, you can try increasing the function timeout.
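As an illustration of lazy initialization, a minimal Python sketch; the Cloud Storage client stands in for any client that is slow to construct, and the handler name is illustrative:
from google.cloud import storage

client = None  # Not constructed at import time, keeping the global scope cheap.

def handler(request):
    global client
    if client is None:
        # Initialization errors now surface in a request's runtime logs
        # instead of failing the deployment health check.
        client = storage.Client()
    return 'OK'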
Build
When you deploy your function's source code to Cloud Functions, that source is stored in a Cloud Storage bucket. Cloud Build then automatically builds your code into a container image and pushes that image to Container Registry. Cloud Functions accesses this image when it needs to run the container to execute your function.
Build failed due to missing Container Registry Images
Cloud Functions (1st gen) uses Container Registry to manage images of the functions. Container Registry uses Cloud Storage to store the layers of the images in buckets named STORAGE-REGION.artifacts.PROJECT-ID.appspot.com. Using Object Lifecycle Management on these buckets breaks the deployment of the functions, as the deployments depend on these images being present.
The error message
Cloud console
Build failed: Build error details not available. Please check the logs at
<CLOUD_CONSOLE_LINK>
CLOUD_CONSOLE_LINK contains an error like the following:
failed to get OS from config file for image 'us.gcr.io/<PROJECT_ID>/gcf/us-central1/<UUID>/worker:latest'
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=13, message=Build
failed: Build error details not available. Please check the logs at
<CLOUD_CONSOLE_LINK>
CLOUD_CONSOLE_LINK contains an error like the following:
failed to get OS from config file for image 'us.gcr.io/<PROJECT_ID>/gcf/us-central1/<UUID>/worker:latest'
The solution
- Disable Lifecycle Management on the buckets required by Container Registry.
- Delete all the images of affected functions. You can access the build logs to find the image paths, and a reference script can be used to bulk delete the images. Note that this does not affect the functions that are currently deployed.
- Redeploy the functions.
Serving
The serving phase can also be a source of errors.
Serving permission error due to the function requiring authentication
HTTP functions without Allow unauthenticated invocations enabled restrict access to end users and service accounts that don't have appropriate permissions. This error message indicates that the caller does not have permission to invoke the function.
The error message
HTTP Error Response code: 403 Forbidden
HTTP Error Response body: Error: Forbidden Your client does not have permission
to get URL /<FUNCTION_NAME>
from this server.
The solution
You can:
Redeploy your function to allow unauthenticated invocations if this is supported by your organization. This can be useful for testing purposes.
Invoke your HTTP function using authentication credentials in your request header. For example, you can get an identity token via gcloud as follows:
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  https://REGION-PROJECT_ID.cloudfunctions.net/FUNCTION_NAME
For Cloud Functions (1st gen), allow public (unauthenticated) access to all users for the specific function.
For Cloud Functions (2nd gen) you can do either of the following:
Assign the user the Cloud Run Invoker Cloud IAM role for the specific function.
From the Google Cloud console:
Click the linked name of the function to which you want to grant access.
Click the Powered By Cloud Run link in the top right corner of the Function details overview page.
Click Trigger and select Allow unauthenticated invocations.
Click Save.
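For the Cloud Run Invoker option above, a sketch using gcloud; the underlying Cloud Run service shares the function's name (values are placeholders):
gcloud run services add-iam-policy-binding FUNCTION_NAME \
  --region=REGION \
  --member=user:USER_EMAIL \
  --role=roles/run.invoker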
Serving permission error due to "allow internal traffic only" configuration
Ingress settings restrict whether an HTTP function can be invoked by resources outside of your Google Cloud project or VPC Service Controls service perimeter. When the "allow internal traffic only" setting for ingress networking is configured, this error message indicates that only requests from VPC networks in the same project or VPC Service Controls perimeter are allowed.
The error message
HTTP Error Response code: 403 Forbidden
HTTP Error Response body: Error 403 (Forbidden) 403. That's an error. Access is forbidden. That's all we know.
The solution
You can:
Ensure that the request is coming from your Google Cloud project or VPC Service Controls service perimeter.
or
Change the ingress settings to allow all traffic for the function.
Function invocation lacks valid authentication credentials
Invoking a Cloud Functions function that has been set up with restricted access requires an ID token. Access tokens or refresh tokens do not work.
The error message
HTTP Error Response code: 401 Unauthorized
HTTP Error Response body: Your client does not have permission to the requested URL
The solution
Make sure that your requests include an Authorization: Bearer ID_TOKEN header, and that the token is an ID token, not an access or refresh token. If you are generating this token manually with a service account's private key, you must exchange the self-signed JWT for a Google-signed ID token, following this guide.
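A minimal Python sketch using the google-auth library; the function URL serves as the token audience, and the URL itself is a placeholder:
import urllib.request

import google.auth.transport.requests
import google.oauth2.id_token

def call_function(url):
    auth_request = google.auth.transport.requests.Request()
    # fetch_id_token returns a Google-signed ID token for the audience.
    token = google.oauth2.id_token.fetch_id_token(auth_request, url)
    request = urllib.request.Request(
        url, headers={'Authorization': f'Bearer {token}'})
    with urllib.request.urlopen(request) as response:
        return response.read()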
Attempt to invoke function using curl redirects to Google login page
If you attempt to invoke a function that does not exist, Cloud Functions responds with an HTTP/2 302 redirect which takes you to the Google account login page. This is incorrect. It should respond with an HTTP/2 404 error response code. The problem is being addressed.
The solution
Make sure you specify the name of your function correctly. You can always check using gcloud functions call, which returns the correct 404 error for a missing function.
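For example (FUNCTION_NAME and REGION are placeholders):
gcloud functions call FUNCTION_NAME --region=REGION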
Application crashes and function execution fails
This error indicates that the process running your function has died, usually because the runtime crashed due to an issue in the function code. It may also happen when a deadlock or some other condition in your function's code causes the runtime to become unresponsive to incoming requests.
The error message
In Cloud Logging logs: "Infrastructure cannot communicate with function. There was likely a crash or deadlock in the user-provided code."
The solution
Different runtimes can crash under different scenarios. To find the root cause, output detailed debug level logs, check your application logic, and test for edge cases.
The Cloud Functions Python37 runtime currently has a known limitation on the rate that it can handle logging. If log statements from a Python37 runtime instance are written at a sufficiently high rate, it can produce this error. Python runtime versions >= 3.8 do not have this limitation. We encourage users to migrate to a higher version of the Python runtime to avoid this issue.
If you are still uncertain about the cause of the error, check out our support page.
Function stops mid-execution, or continues running after your code finishes
Some Cloud Functions runtimes allow users to run asynchronous tasks. If your function creates such tasks, it must also explicitly wait for these tasks to complete. Failure to do so may cause your function to stop executing at the wrong time.
The error behavior
Your function exhibits one of the following behaviors:
- Your function terminates while asynchronous tasks are still running, but before the specified timeout period has elapsed.
- Your function does not stop running when these tasks finish, and continues to run until the timeout period has elapsed.
The solution
If your function terminates early, you should make sure all your function's asynchronous tasks have been completed before doing any of the following:
- returning a value
- resolving or rejecting a returned Promise object (Node.js functions only)
- throwing uncaught exceptions and/or errors
- sending an HTTP response
- calling a callback function
If your function fails to terminate once all asynchronous tasks have completed, you should verify that your function is correctly signaling Cloud Functions once it has completed. In particular, make sure that you perform one of the operations listed above as soon as your function has finished its asynchronous tasks.
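A Python sketch of waiting for background tasks before returning; ThreadPoolExecutor stands in for whatever asynchronous mechanism your runtime uses, and do_work is a hypothetical task:
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)

def do_work(i):
    return i * i  # Placeholder for real background work.

def handler(request):
    futures = [executor.submit(do_work, i) for i in range(3)]
    # Block until every task completes so the function neither terminates
    # early nor leaves work running after the response is sent.
    results = [future.result() for future in futures]
    return str(results)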
JavaScript heap out of memory
For Node.js 12+ functions with memory limits greater than 2GiB, users need to configure NODE_OPTIONS to have max_old_space_size so that the JavaScript heap limit is equivalent to the function's memory limit.
The error message
Cloud console
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
The solution
Deploy your Node.js 12+ function with NODE_OPTIONS configured to have max_old_space_size set to your function's memory limit. For example:
gcloud functions deploy envVarMemory \
--runtime nodejs20 \
--set-env-vars NODE_OPTIONS="--max_old_space_size=8192" \
--memory 8Gi \
--trigger-http
Function terminated
You may see one of the following error messages when the process running your code exits, either due to a runtime error or a deliberate exit. There is also a small chance that a rare infrastructure error occurred.
The error messages
Function invocation was interrupted. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting information can be found in Logging.
Request rejected. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting information can be found in Logging.
Function cannot be initialized. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting information can be found in Logging.
The solution
For a background (Pub/Sub triggered) function, when an executionID is associated with the request that ended up in error, try enabling retry on failure (a deploy sketch follows below). This allows the retrying of function execution when a retriable exception is raised. For more information on how to use this option safely, including mitigations for avoiding infinite retry loops and managing retriable/fatal errors differently, see Best Practices.
Background activity (anything that happens after your function has terminated) can cause issues, so check your code. Cloud Functions does not guarantee any actions other than those that run during the execution period of the function, so even if an activity runs in the background, it might be terminated by the cleanup process.
In cases where there is a sudden traffic spike, try spreading the workload over a longer period of time. Also, test your functions locally using the Functions Framework before you deploy to Cloud Functions, to ensure that the error is not due to missing or conflicting dependencies.
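A sketch of the retry-on-failure deployment for a 1st gen Pub/Sub-triggered function (the runtime and names are placeholders):
gcloud functions deploy FUNCTION_NAME \
  --runtime=python312 \
  --trigger-topic=TOPIC_NAME \
  --retry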
Runtime error when accessing resources protected by VPC-SC
By default, Cloud Functions uses public IP addresses to make outbound requests
to other services. If your functions are not inside a VPC Service Controls
perimeter, this might cause them to receive HTTP 403
responses when attempting
to access Google Cloud services protected by VPC-SC, due to service perimeter
denials.
The error message
In Audited Resources logs, an entry like the following:
"protoPayload": { "@type": "type.googleapis.com/google.cloud.audit.AuditLog", "status": { "code": 7, "details": [ { "@type": "type.googleapis.com/google.rpc.PreconditionFailure", "violations": [ { "type": "VPC_SERVICE_CONTROLS", ... "authenticationInfo": { "principalEmail": "CLOUD_FUNCTION_RUNTIME_SERVICE_ACCOUNT", ... "metadata": { "violationReason": "NO_MATCHING_ACCESS_LEVEL", "securityPolicyInfo": { "organizationId": "ORGANIZATION_ID", "servicePerimeterName": "accessPolicies/NUMBER/servicePerimeters/SERVICE_PERIMETER_NAME" ...
The solution
Add Cloud Functions in your Google Cloud project as a protected resource in the service perimeter and deploy VPC-SC compliant functions. See Using VPC Service Controls for more information.
Alternatively, if your Cloud Functions project cannot be added to the service perimeter, see Using VPC Service Controls with functions outside a perimeter.
Scalability
Scaling issues related to Cloud Functions infrastructure can arise in several circumstances.
Cloud Logging errors related to pending queue request aborts
The following conditions can be associated with scaling failures.
- A huge sudden increase in traffic.
- A long cold start time.
- A long request processing time.
- High function error rate.
- Reaching the maximum instance limit and hence the system cannot scale any further.
- Transient factors attributed to the Cloud Functions service.
In each case Cloud Functions might not scale up fast enough to manage the traffic.
The error message
The request was aborted because there was no available instance:
- severity=WARNING (Response code: 429) Cloud Functions cannot scale due to the max-instances limit you set during configuration.
- severity=ERROR (Response code: 500) Cloud Functions intrinsically cannot manage the rate of traffic.
The solution
- For HTTP trigger-based functions, have the client implement exponential backoff and retries for requests that must not be dropped (a sketch follows this list). If you are triggering Cloud Functions from Workflows, you can use the try/retry syntax to achieve this.
- For background / event-driven functions, Cloud Functions supports at-least-once delivery. Even without explicitly enabling retry, the event is automatically re-delivered and the function execution will be retried. See Retrying Event-Driven Functions for more information.
- When the root cause of the issue is a period of heightened transient errors attributed solely to Cloud Functions or if you need assistance with your issue, please contact support.
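A Python sketch of client-side exponential backoff; the requests library, the status codes treated as retriable, and the attempt limit are illustrative choices:
import time

import requests

def call_with_backoff(url, max_attempts=5):
    for attempt in range(max_attempts):
        response = requests.get(url)
        # Retry only the "no available instance" style failures.
        if response.status_code not in (429, 500, 503):
            return response
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    return response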
Logging
Setting up logging to help you track down problems can cause problems of its own.
Logs entries have no, or incorrect, log severity levels
Cloud Functions includes simple runtime logging by default. Logs written to stdout or stderr appear automatically in the Google Cloud console. But these log entries, by default, contain only simple string messages.
The error message
No or incorrect severity levels in logs.
The solution
To include log severities, you must send a structured log entry instead.
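A minimal Python sketch: Cloud Logging treats a JSON line written to stdout as a structured entry and maps its severity field to the log severity:
import json

def log(message, severity='NOTICE'):
    # Each JSON line becomes one structured log entry.
    print(json.dumps({'message': message, 'severity': severity}))

log('function started', severity='INFO')
log('something went wrong', severity='ERROR')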
Handle or log exceptions differently in the event of a crash
You may want to customize how you manage and log crash information.
The solution
Wrap your function in a try block to customize how you handle exceptions and log stack traces.
Example
import logging
import traceback

def try_catch_log(wrapped_func):
    def wrapper(*args, **kwargs):
        try:
            response = wrapped_func(*args, **kwargs)
        except Exception:
            # Replace new lines with spaces so the whole traceback is logged
            # as one entry instead of one entry (and one error) per line.
            error_message = traceback.format_exc().replace('\n', ' ')
            logging.error(error_message)
            return 'Error'
        return response
    return wrapper

# Example Hello World function. Passing a 'name' argument triggers a
# deliberate TypeError so the decorator's logging can be observed.
@try_catch_log
def python_hello_world(request):
    request_args = request.args
    if request_args and 'name' in request_args:
        1 + 's'  # Deliberate TypeError
    return 'Hello World!'
Logs too large in Node.js 10+, Python 3.8, Go 1.13, and Java 11
The max size for a regular log entry in these runtimes is 105 KiB.
The solution
Make sure you send log entries smaller than this limit.
Cloud Function is returning errors, but logs are missing
Cloud Function logs are streamed to a default bucket that is created and enabled when a project is created. If the default bucket is disabled or if Cloud Function logs are in the exclusion filter, the logs won't appear in Log Explorer.
The solution
Make sure the default logs bucket is enabled and that Cloud Functions logs are not excluded by a filter.
Cloud Functions logs are not appearing in Log Explorer
Some Cloud Logging client libraries use an asynchronous process to write log entries. If a function crashes, or otherwise terminates, it is possible that some log entries have not been written yet and may appear later. It is also possible that some logs will be lost and cannot be seen in Log Explorer.
The solution
Use the client library interface to flush buffered log entries before exiting the function, or use the library to write log entries synchronously. You can also synchronously write logs directly to stdout or stderr.
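A sketch using the google-cloud-logging client library; setup_logging() is the library's standard integration, while the handler flush loop is a conservative measure that may be a no-op for some transports (writing synchronously to stdout remains the simplest guarantee):
import logging

import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()  # Attach a Cloud Logging handler to the root logger.

def handler(request):
    logging.info('work done')
    # Flush buffered entries before returning so they are not lost if the
    # instance is torn down immediately after the response.
    for h in logging.getLogger().handlers:
        h.flush()
    return 'OK'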
Cloud Functions logs are not appearing via Log Router Sink
Log entries are routed to their various destinations using Log Router Sinks.
Included in the settings are Exclusion filters, which define entries that can simply be discarded.
The solution
Make sure no exclusion filter is set for resource.type="cloud_functions"
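For example, you can inspect the _Default sink's configuration, including any exclusion filters, with:
gcloud logging sinks describe _Default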
Database connections
There are a number of issues that can arise when connecting to a database, many associated with exceeding connection limits or timing out. If you see a Cloud SQL warning in your logs, for example, "context deadline exceeded", you might need to adjust your connection configuration. See the Cloud SQL docs for additional details.
Networking
Network connectivity
If all outbound requests from a Cloud Function fail even after you configure egress settings, you can run Connectivity Tests to identify any underlying network connectivity issues. For more information, see Create and run Connectivity Tests.
Serverless VPC Access connector is not ready or does not exist
If a Serverless VPC Access connector fails, it might not be using a /28 subnet mask dedicated to the connector as required.
The error message
VPC connector projects/xxxxx/locations/REGION/connectors/xxxx
is not ready yet or does not exist.
The solution
List your subnets to check whether your connector uses a /28 subnet mask.
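For example, the connector's IP range (which must be a /28) appears in its description (CONNECTOR_NAME and REGION are placeholders):
gcloud compute networks vpc-access connectors describe CONNECTOR_NAME \
  --region=REGION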
If it does not, recreate or create a new connector to use a /28 subnet. Note the following considerations:
If you recreate the connector, you do not need to redeploy other functions. You might experience a network interruption as the connector is recreated.
If you create a new alternate connector, redeploy your functions to use the new connector and then delete the original connector. This method avoids network interruption.