# MLOps on Vertex AI

Last updated 2025-06-16 UTC.

This section describes Vertex AI services that help you implement *machine learning operations (MLOps)* with your machine learning (ML) workflow.

After your models are deployed, they must keep up with changing data from the environment to perform optimally and stay relevant. MLOps is a set of practices that improves the stability and reliability of your ML systems.

Vertex AI MLOps tools help you collaborate across AI teams and improve your models through predictive model monitoring, alerting, diagnosis, and actionable explanations. All the tools are modular, so you can integrate them into your existing systems as needed.

For more information about MLOps, see [Continuous delivery and automation pipelines in machine learning](/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning) and the [Practitioners Guide to MLOps](https://services.google.com/fh/files/misc/practitioners_guide_to_mlops_whitepaper.pdf).

- **Orchestrate workflows**: Manually training and serving your models can be time-consuming and error-prone, especially if you need to repeat the processes many times.

  - [Vertex AI Pipelines](/vertex-ai/docs/pipelines/introduction) helps you automate, monitor, and govern your ML workflows.

- **Track the metadata used in your ML system**: In data science, it's important to track the parameters, artifacts, and metrics used in your ML workflow, especially when you repeat the workflow multiple times.

  - [Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction) lets
you record the metadata, parameters, and artifacts that are used in your ML system. You can then query that metadata to help analyze, debug, and audit the performance of your ML system or the artifacts that it produces.

- **Identify the best model for a use case**: When you try new training algorithms, you need to know which trained model performs the best.

  - [Vertex AI Experiments](/vertex-ai/docs/experiments/intro-vertex-ai-experiments) lets you track and analyze different model architectures, hyperparameters, and training environments to identify the best model for your use case.

  - [Vertex AI TensorBoard](/vertex-ai/docs/experiments/tensorboard-introduction) helps you track, visualize, and compare ML experiments to measure how well your models perform.

- **Manage model versions**: Adding models to a central repository helps you keep track of model versions.

  - [Vertex AI Model Registry](/vertex-ai/docs/model-registry/introduction) provides an overview of your models so you can better organize, track, and train new versions. From Model Registry, you can evaluate models, deploy models to an endpoint, create batch inferences, and view details about specific models and model versions.

- **Manage features**: When you reuse ML features across multiple teams, you need a quick and efficient way to share and serve the features.

  - [Vertex AI Feature Store](/vertex-ai/docs/featurestore/latest/overview) provides a centralized repository for organizing, storing, and serving ML features. Using a central feature store enables an organization to reuse ML features at scale and increase the velocity of developing and deploying new ML applications.

- **Monitor model quality**: A model deployed in production performs best on inference input data that is similar to the training data.
When the input data deviates from the data used to train the model, the model's performance can deteriorate, even if the model itself hasn't changed.

  - [Vertex AI Model Monitoring](/vertex-ai/docs/model-monitoring/overview) monitors models for training-serving skew and inference drift and sends you alerts when the incoming inference data skews too far from the training baseline. You can use the alerts and feature distributions to evaluate whether you need to retrain your model.

- **Scale AI and Python applications**: [Ray](https://docs.ray.io/en/latest/ray-overview/index.html) is an open-source framework for scaling AI and Python applications. Ray provides the infrastructure to perform distributed computing and parallel processing for your ML workflow.

  - [Ray on Vertex AI](/vertex-ai/docs/open-source/ray-on-vertex-ai/overview) is designed so you can use the same open-source Ray code to write programs and develop applications on Vertex AI with minimal changes. You can then use Vertex AI's integrations with other Google Cloud services, such as [Vertex AI Inference](/vertex-ai/pricing#prediction-prices) and [BigQuery](/bigquery/docs/introduction), as part of your ML workflow.

What's next
-----------

- [Vertex AI interfaces](/vertex-ai/docs/start/introduction-interfaces)
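To make the skew check described under "Monitor model quality" concrete, the sketch below computes one common skew statistic, the population stability index (PSI), between a training baseline and incoming serving data, and raises an alert flag when the shift exceeds a threshold. This is an illustration of the underlying idea only, not the Vertex AI Model Monitoring API; the feature values, bin edges, and the 0.2 alert threshold are all hypothetical.

```python
import math

def distribution(values, edges):
    """Histogram `values` into bins split at `edges`; return per-bin proportions."""
    counts = [0] * (len(edges) + 1)
    for v in values:
        i = sum(v > e for e in edges)  # number of edges the value exceeds = bin index
        counts[i] += 1
    # Floor empty bins at a small value so the log ratio below stays finite.
    return [max(c / len(values), 1e-4) for c in counts]

def psi(train, serving, edges):
    """Population stability index between training and serving feature values."""
    p = distribution(train, edges)
    q = distribution(serving, edges)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Hypothetical numeric feature: training baseline vs. shifted serving traffic.
baseline = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65]
serving = [45, 50, 55, 60, 65, 70, 75, 80, 85, 90]
edges = [30, 50, 70]  # bin boundaries chosen for illustration

score = psi(baseline, serving, edges)
ALERT_THRESHOLD = 0.2  # a common rule of thumb for a significant shift
print(f"PSI = {score:.2f}; alert = {score > ALERT_THRESHOLD}")
```

In a managed setup the service computes such statistics per feature on a schedule and notifies you; the decision of whether to retrain on alert remains yours.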
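The experiment-tracking idea described under "Identify the best model for a use case" reduces to recording each run's hyperparameters and metrics, then querying for the best run. The toy sketch below shows that pattern in plain Python; it is not the Vertex AI SDK, and the run names, parameters, and metric values are invented for illustration.

```python
# In-memory stand-in for an experiment tracker (hypothetical, not the Vertex AI SDK).
runs = []

def log_run(name, params, metrics):
    """Record one training run's hyperparameters and evaluation metrics."""
    runs.append({"name": name, "params": params, "metrics": metrics})

log_run("run-1", {"lr": 0.1, "layers": 2}, {"val_accuracy": 0.87})
log_run("run-2", {"lr": 0.01, "layers": 4}, {"val_accuracy": 0.91})
log_run("run-3", {"lr": 0.01, "layers": 2}, {"val_accuracy": 0.89})

# Compare runs on the metric that matters for the use case.
best = max(runs, key=lambda r: r["metrics"]["val_accuracy"])
print(best["name"], best["params"])  # prints: run-2 {'lr': 0.01, 'layers': 4}
```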