This section provides troubleshooting strategies and links that you might find helpful when using Dataflow ML.
Stack expects each tensor to be equal size
If you provide images of different sizes or word embeddings of different lengths when using the RunInference API, the following error might occur:
File "/beam/sdks/python/apache_beam/ml/inference/pytorch_inference.py", line 232, in run_inference batched_tensors = torch.stack(key_to_tensor_list[key]) RuntimeError: stack expects each tensor to be equal size, but got [12] at entry 0 and [10] at entry 1 [while running 'PyTorchRunInference/ParDo(_RunInferenceDoFn)']
This error occurs because the RunInference API can't batch tensor elements of different sizes. For workarounds, see Unable to batch tensor elements in the Apache Beam documentation.
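One workaround is to keep elements of different sizes out of the same batch. The following is a minimal sketch, assuming a PyTorch model saved as a state dict; the Cloud Storage path and the SumModel class are hypothetical placeholders. It subclasses PytorchModelHandlerTensor and overrides batch_elements_kwargs so that each batch contains a single element, which means torch.stack never has to combine tensors of different sizes.

```python
import torch
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor


class SumModel(torch.nn.Module):
    """Toy model that accepts a tensor of any length (hypothetical placeholder)."""

    def forward(self, x):
        return x.sum()


class SingleElementBatchHandler(PytorchModelHandlerTensor):
    """Limits batches to one element so stacking never mixes tensor sizes."""

    def batch_elements_kwargs(self):
        return {'min_batch_size': 1, 'max_batch_size': 1}


model_handler = SingleElementBatchHandler(
    state_dict_path='gs://your-bucket/sum_model.pth',  # hypothetical path
    model_class=SumModel,
    model_params={})

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        # Elements of different lengths, like the [12] and [10] tensors in the error.
        | 'CreateExamples' >> beam.Create([torch.randn(12), torch.randn(10)])
        | 'RunInference' >> RunInference(model_handler)
        | 'Print' >> beam.Map(print))
```

Batching one element at a time trades throughput for correctness; another common option is to resize or pad your inputs to a common shape before inference.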
Avoid out-of-memory errors with large models
When you load a medium or large ML model, your machine might run out of memory. Dataflow provides tools to help avoid out-of-memory (OOM) errors when loading ML models. Use the following table to determine the appropriate approach for your scenario.
Scenario: Your model is small enough to fit in memory.
Solution: Use the RunInference transform without any additional configuration. The RunInference transform shares models across threads. If your machine can fit one model per CPU core, your pipeline can use the default configuration, as shown in the sketch below.

For more information about memory management with Dataflow, see Troubleshoot Dataflow out of memory errors.
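As an illustration of the default-configuration row above, the following minimal sketch runs a small scikit-learn model with RunInference and no memory-related settings; the Cloud Storage path is a hypothetical placeholder. Because the model fits comfortably in memory, the default behavior, where each worker process loads one copy of the model and shares it across that process's inference threads, is sufficient.

```python
import numpy as np
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# A small pickled scikit-learn model at a hypothetical path; no extra memory
# configuration is needed because one copy fits per CPU core.
model_handler = SklearnModelHandlerNumpy(
    model_uri='gs://your-bucket/small_model.pkl')

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | 'CreateExamples' >> beam.Create([np.array([1.0, 2.0]),
                                           np.array([3.0, 4.0])])
        | 'RunInference' >> RunInference(model_handler)
        | 'Print' >> beam.Map(print))
```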
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[[["\u003cp\u003eDataflow ML facilitates both prediction and inference pipelines, as well as data preparation for training ML models.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML supports both batch and streaming data pipelines, utilizing the \u003ccode\u003eRunInference\u003c/code\u003e API (from Apache Beam 2.40.0) and \u003ccode\u003eMLTransform\u003c/code\u003e API (from Apache Beam 2.53.0).\u003c/p\u003e\n"],["\u003cp\u003eThe system is compatible with model handlers for popular frameworks like PyTorch, scikit-learn, TensorFlow, ONNX, and TensorRT, with options for custom handlers for other frameworks.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML enables the use of multiple inference models within a single pipeline via the \u003ccode\u003eRunInference\u003c/code\u003e transform and supports the use of GPUs for pipelines that need them.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML also provides troubleshooting guidance for common issues, including tensor size mismatch errors and out-of-memory errors when dealing with large models.\u003c/p\u003e\n"]]],[],null,["# About Dataflow ML\n\nYou can use Dataflow ML's scale data processing abilities for\n[prediction and inference pipelines](#prediction) and for\n[data preparation for training](#data-prep).\n\n**Figure 1.** The complete Dataflow ML workflow.\n\nRequirements and limitations\n----------------------------\n\n- Dataflow ML supports batch and streaming pipelines.\n- The `RunInference` API is supported in Apache Beam 2.40.0 and later versions.\n- The `MLTransform` API is supported in Apache Beam 2.53.0 and later versions.\n- Model handlers are available for PyTorch, scikit-learn, TensorFlow, ONNX, and TensorRT. For unsupported frameworks, you can use a custom model handler.\n\nData preparation for training\n-----------------------------\n\n- Use the `MLTransform` feature to prepare your data for training ML models. For\n more information, see\n [Preprocess data with `MLTransform`](/dataflow/docs/machine-learning/ml-preprocess-data).\n\n- Use Dataflow with ML-OPS frameworks, such as\n [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/v1/introduction/)\n (KFP) or [TensorFlow Extended](https://www.tensorflow.org/tfx) (TFX).\n To learn more, see [Dataflow ML in ML workflows](/dataflow/docs/machine-learning/ml-data).\n\nPrediction and inference pipelines\n----------------------------------\n\nDataflow ML combines the power of Dataflow with\nApache Beam's\n[`RunInference` API](https://beam.apache.org/documentation/ml/about-ml/).\nWith the `RunInference` API, you define the model's characteristics and properties\nand pass that configuration to the `RunInference` transform. This feature\nallows users to run the model within their\nDataflow pipelines without needing to know\nthe model's implementation details. You can choose the framework that best\nsuits your data, such as TensorFlow and PyTorch.\n\nRun multiple models in a pipeline\n---------------------------------\n\nUse the `RunInference` transform to add multiple inference models to\nyour Dataflow pipeline. 
For more information, including code details,\nsee [Multi-model pipelines](https://beam.apache.org/documentation/ml/about-ml/#multi-model-pipelines)\nin the Apache Beam documentation.\n\nBuild a cross-language pipeline\n-------------------------------\n\nTo use RunInference with a Java pipeline,\n[create a cross-language Python transform](https://beam.apache.org/documentation/programming-guide/#1312-creating-cross-language-python-transforms). The pipeline calls the\ntransform, which does the preprocessing, postprocessing, and inference.\n\nFor detailed instructions and a sample pipeline, see\n[Using RunInference from the Java SDK](https://beam.apache.org/documentation/ml/multi-language-inference/).\n\nUse GPUs with Dataflow\n----------------------\n\nFor batch or streaming pipelines that require the use of accelerators, you can\nrun Dataflow pipelines on NVIDIA GPU devices. For more information, see\n[Run a Dataflow pipeline with GPUs](/dataflow/docs/gpu/use-gpus).\n\nTroubleshoot Dataflow ML\n------------------------\n\nThis section provides troubleshooting strategies and links that you might find\nhelpful when using Dataflow ML.\n\n### Stack expects each tensor to be equal size\n\nIf you provide images of different sizes or word embeddings of different lengths\nwhen using the `RunInference` API, the following error might occur: \n\n File \"/beam/sdks/python/apache_beam/ml/inference/pytorch_inference.py\", line 232, in run_inference batched_tensors = torch.stack(key_to_tensor_list[key]) RuntimeError: stack expects each tensor to be equal size, but got [12] at entry 0 and [10] at entry 1 [while running 'PyTorchRunInference/ParDo(_RunInferenceDoFn)']\n\nThis error occurs because the `RunInference` API can't batch tensor elements of\ndifferent sizes. For workarounds, see\n[Unable to batch tensor elements](https://beam.apache.org/documentation/ml/about-ml/#unable-to-batch-tensor-elements)\nin the Apache Beam documentation.\n\n### Avoid out-of-memory errors with large models\n\nWhen you load a medium or large ML model, your machine might run out of memory.\nDataflow provides tools to help avoid out-of-memory (OOM) errors\nwhen loading ML models. Use the following table to determine the appropriate\napproach for your scenario.\n\nFor more information about memory management with Dataflow, see\n[Troubleshoot Dataflow out of memory errors](/dataflow/docs/guides/troubleshoot-oom).\n\nWhat's next\n-----------\n\n- Explore the [Dataflow ML notebooks](https://github.com/apache/beam/tree/master/examples/notebooks/beam-ml) in GitHub.\n- Get in-depth information about using ML with Apache Beam in Apache Beam's [AI/ML pipelines](https://beam.apache.org/documentation/ml/overview/) documentation.\n- Learn more about the [`RunInference` API](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.html#apache_beam.ml.inference.RunInference).\n- Learn about the [metrics](https://beam.apache.org/documentation/ml/runinference-metrics/) that you can use to monitor your `RunInference` transform."]]