This document in the Architecture Framework: AI and ML perspective provides an overview of principles and recommendations to help you optimize the cost of your AI and ML systems throughout the ML lifecycle. By adopting a proactive and informed approach to cost management, your organization can realize the full potential of AI and ML systems while maintaining financial discipline. The recommendations in this document align with the cost optimization pillar of the Architecture Framework.
AI and ML systems can help you to unlock valuable insights and predictive capabilities from data. For example, you can reduce friction in internal processes, improve user experiences, and gain deeper customer insights. For AI and ML workloads, the cloud offers vast resources and quick time-to-value without large up-front investments. To maximize business value and to align spending with your business goals, you need to understand the cost drivers, proactively optimize costs, set up spending controls, and adopt FinOps practices.
Define and measure costs and returns
To effectively manage your AI and ML costs in Google Cloud, you must define and measure the expenses for cloud resources and the business value of your AI and ML initiatives. Google Cloud provides comprehensive tools for billing and cost management to help you to track expenses granularly. Business value metrics that you can measure include customer satisfaction, revenue, and operational costs. By establishing concrete metrics for both costs and business value, you can make informed decisions about resource allocation and optimization.
Consider the following recommendations:
- Establish clear business objectives and key performance indicators (KPIs) for your AI and ML projects.
- Use the billing information provided by Google Cloud to implement cost monitoring and reporting processes that can help you to attribute costs to specific AI and ML activities, as shown in the sketch after this list.
- Establish dashboards, alerting, and reporting systems to track costs and returns against KPIs.
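For example, if you export your Cloud Billing data to BigQuery and apply labels to your AI and ML resources, you can attribute spend to specific initiatives. The following is a minimal sketch, assuming a hypothetical billing export table name and a user-defined label key such as `ml-use-case`; adjust both to match your environment.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# Hypothetical billing export table; replace with your own export table name.
BILLING_TABLE = "my_project.billing.gcp_billing_export_v1_XXXXXX"

# Sum the last 30 days of cost per value of the user-defined "ml-use-case"
# label, so that spend can be attributed to specific AI and ML activities.
query = f"""
SELECT
  label.value AS ml_use_case,
  SUM(cost) AS total_cost
FROM `{BILLING_TABLE}`, UNNEST(labels) AS label
WHERE label.key = 'ml-use-case'
  AND usage_start_time >= TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY))
GROUP BY ml_use_case
ORDER BY total_cost DESC
"""

for row in client.query(query).result():
    print(f"{row.ml_use_case}: ${row.total_cost:,.2f}")
```

You can feed the results of a query like this into the dashboards and alerting systems that track costs against your KPIs.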
Optimize resource allocation
To achieve cost efficiency for your AI and ML workloads in Google Cloud, you must optimize resource allocation. By carefully aligning resource allocation with the needs of your workloads, you can avoid unnecessary expenses and ensure that your AI and ML systems have the resources that they need to perform optimally.
Consider the following recommendations:
- Use autoscaling to dynamically adjust resources for training and inference, as shown in the sketch after this list.
- Start with small models and data. Save costs by testing hypotheses at a smaller scale when possible.
- Discover your compute needs through experimentation. Rightsize the resources that are used for training and serving based on your ML requirements.
- Adopt MLOps practices to reduce duplication, manual processes, and inefficient resource allocation.
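As one illustration of autoscaling and rightsizing for serving, the following sketch deploys a previously uploaded model to a Vertex AI endpoint with a modest machine type and a bounded replica range, so that capacity follows traffic instead of being provisioned for peak load. The project, region, model resource name, machine type, and replica counts are placeholder assumptions; choose values based on your own experimentation.

```python
from google.cloud import aiplatform  # pip install google-cloud-aiplatform

# Placeholder project, region, and model resource name.
aiplatform.init(project="my-project", location="us-central1")
model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/1234567890"
)

# Deploy with a small machine type and an autoscaling replica range so that
# serving capacity scales with traffic rather than being sized for peak load.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=3,
)
print(endpoint.resource_name)
```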
Enforce data management and governance practices
Effective data management and governance practices play a critical role in cost optimization. Well-organized data helps your organization to avoid needless duplication, reduces the effort required to obtain high-quality data, and encourages teams to reuse datasets. By proactively managing data, you can reduce storage costs, enhance data quality, and ensure that your ML models are trained and operate on the most relevant and valuable data.
Consider the following recommendations:
- Establish and adopt a well-defined data governance framework.
- Apply labels and relevant metadata to datasets at the point of data ingestion, as shown in the sketch after this list.
- Ensure that datasets are discoverable and accessible across the organization.
- Make your datasets and features reusable throughout the ML lifecycle wherever possible.
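As an example of attaching labels and metadata when data is ingested, the following sketch creates a BigQuery dataset with labels that identify the owning team and use case, which makes the data easier to discover and its storage cost easier to attribute. The project, dataset name, and label values are hypothetical.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Create the dataset that will receive ingested training data, and label it
# so that it is discoverable and its storage cost can be attributed.
dataset = bigquery.Dataset("my-project.customer_churn_training")
dataset.description = "Curated features for the customer churn model."
dataset.labels = {
    "team": "growth-ml",     # hypothetical owning team
    "ml-use-case": "churn",  # ties storage cost back to the initiative
    "environment": "dev",
}
dataset = client.create_dataset(dataset, exists_ok=True)
print(f"Created {dataset.full_dataset_id} with labels {dataset.labels}")
```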
Automate and streamline with MLOps
A primary benefit of adopting MLOps practices is a reduction in costs, both in technology spending and in the time that teams spend on manual activities. Automation helps you to avoid duplicating ML activities and improves the productivity of data scientists and ML engineers.
Consider the following recommendations:
- Increase the level of automation and standardization in your data collection and processing technologies to reduce development effort and time.
- Develop automated training pipelines to reduce the need for manual intervention and to increase engineer productivity. Implement mechanisms for the pipelines to reuse existing assets like prepared datasets and trained models, as in the sketch after this list.
- Use the model evaluation and tuning services in Google Cloud to increase model performance with fewer iterations. This enables your AI and ML teams to achieve more objectives in less time.
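To illustrate how an automated pipeline can reuse existing assets, the following sketch defines a minimal two-step Kubeflow Pipelines (KFP) pipeline and runs it on Vertex AI Pipelines with execution caching enabled, so that steps whose inputs have not changed reuse earlier results instead of recomputing them. The component logic, bucket paths, project, and region are placeholder assumptions, not a definitive implementation.

```python
from kfp import compiler, dsl
from google.cloud import aiplatform  # pip install kfp google-cloud-aiplatform


@dsl.component
def prepare_data(source_table: str) -> str:
    # Placeholder: a real component would build and validate a training dataset.
    return f"gs://my-bucket/prepared/{source_table}"


@dsl.component
def train_model(dataset_uri: str) -> str:
    # Placeholder: a real component would launch a training job on the dataset.
    return f"gs://my-bucket/models/model-from-{dataset_uri.split('/')[-1]}"


@dsl.pipeline(name="cost-aware-training-pipeline")
def training_pipeline(source_table: str = "customer_churn_training"):
    prepared = prepare_data(source_table=source_table)
    train_model(dataset_uri=prepared.output)


compiler.Compiler().compile(training_pipeline, "pipeline.json")

aiplatform.init(project="my-project", location="us-central1")  # placeholders
job = aiplatform.PipelineJob(
    display_name="cost-aware-training",
    template_path="pipeline.json",
    enable_caching=True,  # unchanged steps reuse cached results instead of re-running
)
job.run()
```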
Use managed services and pre-trained or existing models
There are many ways to achieve business goals with AI and ML. Adopt an incremental approach to model selection and development so that you avoid the excessive costs of starting from scratch every time. To control costs, start with a simple approach: use ML frameworks, managed services, and pre-trained models.
Consider the following recommendations:
- Enable exploratory and quick ML experiments by using notebook environments.
- Use existing and pre-trained models as a starting point to accelerate your model selection and development process.
- Use managed services to train or serve your models. Both AutoML and managed custom model training services can help to reduce the cost of model training, and managed services can also help to reduce the cost of your model-serving infrastructure. See the sketch after this list for one example.
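As one example of starting with a managed service, the following sketch trains a tabular classification model with Vertex AI AutoML and caps spend by setting an explicit training budget in milli node hours. The dataset resource name, target column, and budget value are hypothetical; treat this as a starting point rather than a definitive recipe.

```python
from google.cloud import aiplatform  # pip install google-cloud-aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# Placeholder managed tabular dataset created from the labeled training data.
dataset = aiplatform.TabularDataset(
    "projects/my-project/locations/us-central1/datasets/1234567890"
)

# AutoML handles feature engineering, architecture search, and tuning; the
# budget_milli_node_hours argument caps how much compute the search may use.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-automl-baseline",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="churned",          # hypothetical label column
    budget_milli_node_hours=1000,     # 1 node hour as the maximum training budget
    disable_early_stopping=False,     # stop early if the model stops improving
)
print(model.resource_name)
```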
Foster a culture of cost awareness and continuous optimization
Cultivate a collaborative environment that encourages communication and regular reviews. This approach helps teams to identify and implement cost-saving opportunities throughout the ML lifecycle.
Consider the following recommendations:
- Adopt FinOps principles across your ML lifecycle.
- Ensure that all costs and business benefits of AI and ML projects have assigned owners with clear accountability.
Contributors
Authors:
- Isaac Lo | AI Business Development Manager
- Filipe Gracio, PhD | Customer Engineer
Other contributors:
- Kumar Dhanagopal | Cross-Product Solution Developer
- Marwan Al Shawi | Partner Customer Engineer
- Nicolas Pintaux | Customer Engineer, Application Modernization Specialist