[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2024-12-21 UTC。"],[],[],null,["# Issues and limitations\n\nThis page describes some of the issues and limitations that you might encounter\nwhen using Cloud Tasks.\n\nExecution order\n---------------\n\nWith the exception of tasks scheduled to run in the future, task queues are\ncompletely platform-independent about execution order. There are no guarantees\nor best effort attempts made to execute tasks in any particular order.\nSpecifically: there are no guarantees that old tasks will execute unless a queue\nis completely emptied. A number of common cases exist where newer tasks are\nexecuted sooner than older tasks, and the patterns surrounding this can change\nwithout notice.\n\nDuplicate execution\n-------------------\n\nCloud Tasks aims for a strict \"execute exactly once\" semantic.\nHowever, in situations where a design trade-off must be made between guaranteed\nexecution and duplicate execution, the service errs on the side of guaranteed\nexecution. As such, a non-zero number of duplicate executions do occur.\nDevelopers should take steps to ensure that duplicate execution is not a\ncatastrophic event. In production, more than 99.999% of tasks are executed only\nonce.\n\nResource limitations\n--------------------\n\nThe most common source of backlogs in immediate processing queues is exhausting\nresources on the target instances. If a user is attempting to execute 100 tasks\nper second on frontend instances that can only process 10 requests per second,\na backlog will build. This typically manifests in one of two ways, either of\nwhich can generally be resolved by increasing the number of instances processing\nrequests.\n\n### Backoff errors and enforced rates\n\nServers that are being overloaded can start to return backoff errors: HTTP `503`\n(for App Engine targets), or HTTP `429` or `5xx` (for external targets).\nCloud Tasks reacts to these errors by slowing down execution until\nerrors stop. This system throttling prevents the worker from overloading. Note\nthat user-specified settings are not changed.\n\nSystem throttling happens in the following circumstances:\n\n- Cloud Tasks backs off on all errors. Normally the backoff\n specified in\n [`rate limits`](/tasks/docs/reference/rpc/google.cloud.tasks.v2#google.cloud.tasks.v2.Queue.FIELDS.google.cloud.tasks.v2.RateLimits.google.cloud.tasks.v2.Queue.rate_limits)\n is used. But if the worker returns HTTP `429 Too Many Requests`,\n `503 Service Unavailable`, or the rate of errors is high, Cloud Tasks\n uses a higher backoff rate. The retry specified in the `Retry-After` HTTP\n response header is considered.\n\n- To prevent traffic spikes and to smooth sudden increases in traffic,\n dispatches ramp up slowly when the queue is newly created or idle, and if large\n numbers of tasks suddenly become available to dispatch (due to spikes in create\n task rates, the queue being unpaused, or many tasks that are scheduled at the\n same time).\n\n### Latency spikes and max concurrent\n\nOverloaded servers can also respond with large increases in latency.\nIn this situation, requests remain open for longer. 
Resource limitations
--------------------

The most common source of backlogs in immediate processing queues is exhausting
resources on the target instances. If a user is attempting to execute 100 tasks
per second on frontend instances that can only process 10 requests per second,
a backlog will build. This typically manifests in one of two ways, either of
which can generally be resolved by increasing the number of instances processing
requests.

### Backoff errors and enforced rates

Servers that are being overloaded can start to return backoff errors: HTTP `503`
(for App Engine targets), or HTTP `429` or `5xx` (for external targets).
Cloud Tasks reacts to these errors by slowing down execution until the
errors stop. This system throttling prevents the workers from being overloaded.
Note that user-specified settings are not changed.

System throttling happens in the following circumstances:

- Cloud Tasks backs off on all errors. Normally the backoff
  specified in
  [`rate limits`](/tasks/docs/reference/rpc/google.cloud.tasks.v2#google.cloud.tasks.v2.Queue.FIELDS.google.cloud.tasks.v2.RateLimits.google.cloud.tasks.v2.Queue.rate_limits)
  is used. But if the worker returns HTTP `429 Too Many Requests` or
  `503 Service Unavailable`, or if the rate of errors is high, Cloud Tasks
  uses a higher backoff rate. The retry delay specified in the `Retry-After`
  HTTP response header is also taken into account.

- To prevent traffic spikes and to smooth sudden increases in traffic,
  dispatches ramp up slowly when the queue is newly created or idle, or when
  large numbers of tasks suddenly become available to dispatch (due to spikes
  in task creation rates, the queue being unpaused, or many tasks being
  scheduled for the same time).

### Latency spikes and max concurrent

Overloaded servers can also respond with large increases in latency. In this
situation, requests remain open for longer. Because queues run with a maximum
number of concurrent tasks, this can result in queues being unable to execute
tasks at the expected rate. Increasing the
[`max_concurrent_dispatches`](/tasks/docs/reference/rpc/google.cloud.tasks.v2#ratelimits)
value for the affected queues can help in situations where it has been set too
low and is introducing an artificial rate limit. But increasing
`max_concurrent_dispatches` is unlikely to relieve any underlying resource
pressure.

### Ramp-up issues with long running tasks

Cloud Tasks queues ramp up their output in part based on the number of
previously dispatched tasks that completed successfully. If the task handler
takes a considerable period of time (on the order of minutes) to complete a
task and return a success response, there can be a lag in the queue's ramp-up
rate.

Viewing more than 5,000 tasks
-----------------------------

If you have more than 5,000 tasks, some tasks are not visible in the
Google Cloud console. Use the [gcloud CLI](/sdk/gcloud/reference/tasks/list)
to view all tasks, or page through them with the client libraries, as shown in
the sketch at the end of this page.

Recreating a queue with the same name
-------------------------------------

If you [delete a queue](/tasks/docs/deleting-appengine-queues-and-tasks#delete-queues)
from the Google Cloud console, you must wait 3 days before recreating it with
the same name. This waiting period prevents unexpected behavior in tasks that
are executing at the time of deletion or waiting to be executed. It also avoids
internal process failures during the delete and recreate cycle.
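As a sketch of the client-library alternative mentioned under "Viewing more
than 5,000 tasks" above, the Python example below iterates over every task in
a queue. `list_tasks` returns a pager, so iterating transparently fetches
successive pages rather than stopping at the console's display limit. The
project, location, and queue IDs are placeholders.

```python
# Listing every task in a queue with the Cloud Tasks Python client.
# The project, location, and queue IDs below are placeholders.
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")

# Iterating the pager fetches page after page, so all tasks are
# returned even when the queue holds more than 5,000 of them.
for task in client.list_tasks(parent=parent):
    print(task.name)
```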