If you provide images of different sizes or word embeddings of different lengths when using the RunInference API, the following error might occur:
File "/beam/sdks/python/apache_beam/ml/inference/pytorch_inference.py", line 232, in run_inference batched_tensors = torch.stack(key_to_tensor_list[key]) RuntimeError: stack expects each tensor to be equal size, but got [12] at entry 0 and [10] at entry 1 [while running 'PyTorchRunInference/ParDo(_RunInferenceDoFn)']
This error occurs because the RunInference API can't batch tensor elements of different sizes. For workarounds, see "Unable to batch tensor elements" in the Apache Beam documentation.
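One documented workaround is to stop RunInference from batching elements at all by capping the batch size at one. The following is a minimal sketch, assuming the PyTorch tensor model handler; the state_dict path and the toy model class are placeholders for your own model and weights:

```python
import apache_beam as beam
import torch
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor


class ToyModel(torch.nn.Module):
    # Placeholder model; substitute your own model class.
    def forward(self, batch):
        return batch


model_handler = PytorchModelHandlerTensor(
    state_dict_path="gs://your-bucket/state_dict.pth",  # placeholder path
    model_class=ToyModel,
    model_params={},
    # Batches of one element are never stacked, so tensors of
    # different sizes no longer trigger the RuntimeError.
    max_batch_size=1,
)

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | beam.Create([torch.ones(12), torch.ones(10)])  # unequal lengths
        | RunInference(model_handler)
    )
```

Padding or resizing elements to a uniform size before the RunInference transform is the other approach described on the Apache Beam page referenced above.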
Avoid out-of-memory errors with large models
When you load a medium or large ML model, your machine might run out of memory.
Dataflow provides tools to help you avoid out-of-memory (OOM) errors when loading ML models. Use the following table to determine the appropriate approach for your scenario.
Scenario | Solution
---------|---------
The models are small enough to fit in memory. | Use the RunInference transform without any additional configuration. The RunInference transform shares models across threads. If your machine can fit one model per CPU core, the pipeline can use the default configuration.
Multiple differently trained models perform the same task. | Use per-model keys. For more information, see "Run ML inference with multiple differently-trained models".
One model is loaded into memory, and all processes share it. | Use the large_model parameter. For more information, see "Run ML inference with multiple differently-trained models". If you build a custom model handler, override the share_model_across_processes parameter instead of using large_model.
You need to control the exact number of models loaded onto your machine. | Use the model_copies parameter to set exactly how many models are loaded. If you build a custom model handler, override the model_copies parameter.
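For example, when a single large model should be loaded once per worker and shared across all processes, you can set large_model on the model handler. A minimal sketch, again assuming the PyTorch tensor handler, a placeholder state_dict path, and the toy model class from the earlier sketch:

```python
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor

# Sketch only: the path and model class are placeholders for your own model.
shared_model_handler = PytorchModelHandlerTensor(
    state_dict_path="gs://your-bucket/large_model_state_dict.pth",
    model_class=ToyModel,
    model_params={},
    # Load one copy of the model per worker and share it across processes
    # instead of loading one copy in every process.
    large_model=True,
    # model_copies=4,  # alternatively, pin the exact number of models loaded
)
```

If you need an exact count of loaded models rather than a single shared copy, use the model_copies parameter instead, as described in the table above.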
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-08-18 (世界標準時間)。"],[[["\u003cp\u003eDataflow ML facilitates both prediction and inference pipelines, as well as data preparation for training ML models.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML supports both batch and streaming data pipelines, utilizing the \u003ccode\u003eRunInference\u003c/code\u003e API (from Apache Beam 2.40.0) and \u003ccode\u003eMLTransform\u003c/code\u003e API (from Apache Beam 2.53.0).\u003c/p\u003e\n"],["\u003cp\u003eThe system is compatible with model handlers for popular frameworks like PyTorch, scikit-learn, TensorFlow, ONNX, and TensorRT, with options for custom handlers for other frameworks.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML enables the use of multiple inference models within a single pipeline via the \u003ccode\u003eRunInference\u003c/code\u003e transform and supports the use of GPUs for pipelines that need them.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML also provides troubleshooting guidance for common issues, including tensor size mismatch errors and out-of-memory errors when dealing with large models.\u003c/p\u003e\n"]]],[],null,["You can use Dataflow ML's scale data processing abilities for\n[prediction and inference pipelines](#prediction) and for\n[data preparation for training](#data-prep).\n\n**Figure 1.** The complete Dataflow ML workflow.\n\nRequirements and limitations \n\n- Dataflow ML supports batch and streaming pipelines.\n- The `RunInference` API is supported in Apache Beam 2.40.0 and later versions.\n- The `MLTransform` API is supported in Apache Beam 2.53.0 and later versions.\n- Model handlers are available for PyTorch, scikit-learn, TensorFlow, ONNX, and TensorRT. For unsupported frameworks, you can use a custom model handler.\n\nData preparation for training\n\n- Use the `MLTransform` feature to prepare your data for training ML models. For\n more information, see\n [Preprocess data with `MLTransform`](/dataflow/docs/machine-learning/ml-preprocess-data).\n\n- Use Dataflow with ML-OPS frameworks, such as\n [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/v1/introduction/)\n (KFP) or [TensorFlow Extended](https://www.tensorflow.org/tfx) (TFX).\n To learn more, see [Dataflow ML in ML workflows](/dataflow/docs/machine-learning/ml-data).\n\nPrediction and inference pipelines\n\nDataflow ML combines the power of Dataflow with\nApache Beam's\n[`RunInference` API](https://beam.apache.org/documentation/ml/about-ml/).\nWith the `RunInference` API, you define the model's characteristics and properties\nand pass that configuration to the `RunInference` transform. This feature\nallows users to run the model within their\nDataflow pipelines without needing to know\nthe model's implementation details. You can choose the framework that best\nsuits your data, such as TensorFlow and PyTorch.\n\nRun multiple models in a pipeline\n\nUse the `RunInference` transform to add multiple inference models to\nyour Dataflow pipeline. 
For more information, including code details,\nsee [Multi-model pipelines](https://beam.apache.org/documentation/ml/about-ml/#multi-model-pipelines)\nin the Apache Beam documentation.\n\nBuild a cross-language pipeline\n\nTo use RunInference with a Java pipeline,\n[create a cross-language Python transform](https://beam.apache.org/documentation/programming-guide/#1312-creating-cross-language-python-transforms). The pipeline calls the\ntransform, which does the preprocessing, postprocessing, and inference.\n\nFor detailed instructions and a sample pipeline, see\n[Using RunInference from the Java SDK](https://beam.apache.org/documentation/ml/multi-language-inference/).\n\nUse GPUs with Dataflow\n\nFor batch or streaming pipelines that require the use of accelerators, you can\nrun Dataflow pipelines on NVIDIA GPU devices. For more information, see\n[Run a Dataflow pipeline with GPUs](/dataflow/docs/gpu/use-gpus).\n\nTroubleshoot Dataflow ML\n\nThis section provides troubleshooting strategies and links that you might find\nhelpful when using Dataflow ML.\n\nStack expects each tensor to be equal size\n\nIf you provide images of different sizes or word embeddings of different lengths\nwhen using the `RunInference` API, the following error might occur: \n\n File \"/beam/sdks/python/apache_beam/ml/inference/pytorch_inference.py\", line 232, in run_inference batched_tensors = torch.stack(key_to_tensor_list[key]) RuntimeError: stack expects each tensor to be equal size, but got [12] at entry 0 and [10] at entry 1 [while running 'PyTorchRunInference/ParDo(_RunInferenceDoFn)']\n\nThis error occurs because the `RunInference` API can't batch tensor elements of\ndifferent sizes. For workarounds, see\n[Unable to batch tensor elements](https://beam.apache.org/documentation/ml/about-ml/#unable-to-batch-tensor-elements)\nin the Apache Beam documentation.\n\nAvoid out-of-memory errors with large models\n\nWhen you load a medium or large ML model, your machine might run out of memory.\nDataflow provides tools to help avoid out-of-memory (OOM) errors\nwhen loading ML models. Use the following table to determine the appropriate\napproach for your scenario.\n\n| Scenario | Solution |\n|----------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| The models are small enough to fit in memory. | Use the `RunInference` transform without any additional configurations. The `RunInference` transform shares the models across threads. If you can fit one model per CPU core on your machine, then your pipeline can use the default configuration. |\n| Multiple differently-trained models are performing the same task. | Use per-model keys. For more information, see [Run ML inference with multiple differently-trained models](/dataflow/docs/notebooks/per_key_models). |\n| One model is loaded into memory, and all processes share this model. | Use the `large_model` parameter. For more information, see [Run ML inference with multiple differently-trained models](/dataflow/docs/notebooks/per_key_models). 
If you're building a custom model handler, instead of using the `large_model` parameter, override the [`share_model_across_processes`](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.ModelHandler.share_model_across_processes) parameter. |\n| You need to configure the exact number of models loaded onto your machine. | To control exactly how many models are loaded, use the [`model_copies`](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.ModelHandler.model_copies) parameter. If you're building a custom model handler, override the `model_copies` parameter. |\n\nFor more information about memory management with Dataflow, see\n[Troubleshoot Dataflow out of memory errors](/dataflow/docs/guides/troubleshoot-oom).\n\nWhat's next\n\n- Explore the [Dataflow ML notebooks](https://github.com/apache/beam/tree/master/examples/notebooks/beam-ml) in GitHub.\n- Get in-depth information about using ML with Apache Beam in Apache Beam's [AI/ML pipelines](https://beam.apache.org/documentation/ml/overview/) documentation.\n- Learn more about the [`RunInference` API](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.html#apache_beam.ml.inference.RunInference).\n- Learn about the [metrics](https://beam.apache.org/documentation/ml/runinference-metrics/) that you can use to monitor your `RunInference` transform."]]