Configure execution caching

When Vertex AI Pipelines runs a pipeline, it checks whether an
execution already exists in Vertex ML Metadata with the interface
(cache key) of each pipeline step.
A step's interface is defined as the combination of the following:
1. The pipeline step's inputs. These include the input parameter
   values (if any) and the input artifact IDs (if any).
2. The pipeline step's output definition. This includes the output
   parameter definitions (names, if any) and the output artifact
   definitions (names, if any).
3. The component's specification. This includes the image, commands,
   arguments, and environment variables being used, as well as the
   order of the commands and arguments.
Additionally, only pipelines with the same pipeline name share the
cache.
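Conceptually, the cache key can be pictured as a hash over the three interface components above plus the pipeline name. The following sketch is illustrative only; the function `cache_key` and every field name in it are hypothetical, and Vertex AI Pipelines computes its actual key internally:

```python
import hashlib
import json

def cache_key(pipeline_name, inputs, output_defs, component_spec):
    """Hypothetical cache key: a hash over everything that defines a
    step's interface, mirroring the factors listed above."""
    payload = json.dumps(
        {
            "pipeline_name": pipeline_name,    # only same-named pipelines share cache
            "inputs": inputs,                  # parameter values and input artifact IDs
            "output_defs": output_defs,        # output parameter/artifact names
            "component_spec": component_spec,  # image, commands, args, env vars
        },
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

key_a = cache_key(
    "my-pipeline",
    {"learning_rate": 0.01, "dataset_artifact_id": "artifact-123"},
    {"parameters": ["accuracy"], "artifacts": ["model"]},
    {"image": "gcr.io/my-project/trainer:v1",
     "command": ["python", "train.py"],
     "args": ["--epochs", "10"],
     "env": {"SEED": "42"}},
)

# Any change to the interface (here, an input parameter value) yields a
# different key, so the step would not hit the cache.
key_b = cache_key(
    "my-pipeline",
    {"learning_rate": 0.02, "dataset_artifact_id": "artifact-123"},
    {"parameters": ["accuracy"], "artifacts": ["model"]},
    {"image": "gcr.io/my-project/trainer:v1",
     "command": ["python", "train.py"],
     "args": ["--epochs", "10"],
     "env": {"SEED": "42"}},
)
```

Because the component specification is part of the key, even rebuilding the container image under a new tag invalidates the cache for that step.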
If there is a matching execution in Vertex ML Metadata, the outputs of
that execution are reused and the step is skipped. This helps reduce
costs by skipping computations that were completed in a previous
pipeline run.
You can turn off execution caching at the task level by setting the following:
eval_task.set_caching_options(False)
You can also turn off execution caching for an entire pipeline job. When
you run a pipeline using PipelineJob(), use the enable_caching argument
to specify that this pipeline run does not use caching. None of the
steps in the pipeline job will use caching.
Learn more about creating pipeline runs.
Use the following sample to turn off caching:
pl = PipelineJob(
    display_name="My first pipeline",

    # Whether or not to enable caching
    # True = enable the current run to use caching results from previous runs
    # False = disable the current run's use of caching results from previous runs
    # None = defer to cache option for each pipeline component in the pipeline definition
    enable_caching=False,

    # Local or Cloud Storage path to a compiled pipeline definition
    template_path="pipeline.yaml",

    # Dictionary containing input parameters for your pipeline
    parameter_values=parameter_values,

    # Cloud Storage path to act as the pipeline root
    pipeline_root=pipeline_root,
)
The following limitations apply to this feature:
The cached result doesn't have a time to live (TTL) and can be reused
as long as the entry is not deleted from Vertex ML Metadata. If the
entry is deleted from Vertex ML Metadata, the task reruns to
regenerate the result.
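This no-TTL behavior can be sketched with a toy in-memory store. MetadataCache and its methods are hypothetical names invented for illustration, not a real Vertex API; the point is only that entries never expire on their own and that deleting one forces a recompute:

```python
class MetadataCache:
    """Toy stand-in for the execution cache backed by Vertex ML Metadata."""

    def __init__(self):
        self._executions = {}  # cache key -> stored step outputs

    def run_step(self, cache_key, compute):
        """Return (outputs, cache_hit). Entries never expire: any stored
        result is reused, no matter how old it is."""
        if cache_key in self._executions:
            return self._executions[cache_key], True
        outputs = compute()
        self._executions[cache_key] = outputs
        return outputs, False

    def delete(self, cache_key):
        """Deleting the metadata entry forces the next run to recompute."""
        self._executions.pop(cache_key, None)

cache = MetadataCache()

# First run computes and stores the result.
outputs, hit = cache.run_step("step-key", lambda: {"accuracy": 0.9})

# Second run with the same key is a cache hit, regardless of elapsed time.
outputs, hit = cache.run_step("step-key", lambda: {"accuracy": 0.9})

# After the entry is deleted, the next run recomputes the result.
cache.delete("step-key")
outputs, hit = cache.run_step("step-key", lambda: {"accuracy": 0.9})
```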
Important: Pipeline components should be built to be deterministic. A
given set of inputs should always produce the same output. Depending
on their interface, non-deterministic pipeline components can be
unexpectedly skipped due to execution caching.

Last updated 2025-09-04 UTC.