Cloud Wisdom Weekly: 4 ways AI/ML boosts innovation and reduces costs
“Cloud Wisdom Weekly: for tech companies and startups” is a new blog series we’re running this fall to answer common questions our tech and startup customers ask us about how to build apps faster, smarter, and cheaper. In this installment, we explore how to leverage artificial intelligence (AI) and machine learning (ML) for faster innovation and efficient operational growth.
Whether they’re trying to extract insights from data, create faster and more efficient workflows via intelligent automation, or build innovative customer experiences, leaders at today’s tech companies and startups know that proficiency in AI and ML is more important than ever.
AI and ML technologies are often expensive and time-consuming to develop, and the demand for AI and ML experts still largely outpaces the existing talent pool. These factors put pressure on tech companies and startups to allocate resources carefully when considering bringing AI/ML into their business strategy. In this article, we’ll explore four tips to help tech companies and startups accelerate innovation and reduce costs with AI and ML.
4 tips to accelerate innovation and reduce costs with AI and ML
Many of today’s most innovative companies are creating services or products that couldn’t exist without AI—but that doesn’t mean they’re building their AI and ML infrastructure and pipelines from scratch. Even for startups whose businesses don’t directly revolve around AI, injecting AI into operational processes can help manage costs as the company grows. By relying on a cloud provider for AI services, organizations can unlock opportunities to energize development, automate processes, and reduce costs.
1. Leverage pre-trained ML APIs to jumpstart product development
Tech companies and startups want their technical talent focused on proprietary projects that will make a difference to the business. This often involves the development of new applications for an AI technology, but not necessarily the development of the AI technology itself. In such scenarios, pre-trained APIs help organizations quickly and cost-effectively establish a foundation on which higher-value, more differentiated work can be layered.
For example, many companies building conversational AI into their products and services leverage Google Cloud APIs such as Speech-to-Text and Natural Language. With these APIs, developers can easily integrate capabilities like transcription, sentiment analysis, content classification, profanity filtering, speaker diarization, and more. These powerful technologies help organizations focus on creating products rather than having to build the base technologies.
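To make this concrete, a Speech-to-Text recognition request that enables capabilities like profanity filtering and speaker diarization is largely a matter of configuration rather than model building. Below is a minimal sketch of such a request as a plain REST-style body (field names follow the Speech-to-Text v1 REST API; the Cloud Storage URI is a hypothetical placeholder):

```python
import json

def build_recognize_request(audio_uri: str, max_speakers: int = 2) -> dict:
    """Build a Speech-to-Text v1 `recognize` request body as a plain dict."""
    return {
        "config": {
            "languageCode": "en-US",
            "profanityFilter": True,            # mask profanity in the transcript
            "diarizationConfig": {              # label which speaker said what
                "enableSpeakerDiarization": True,
                "minSpeakerCount": 2,
                "maxSpeakerCount": max_speakers,
            },
        },
        # Hypothetical Cloud Storage path to the audio file
        "audio": {"uri": audio_uri},
    }

body = build_recognize_request("gs://example-bucket/support-call.wav")
print(json.dumps(body, indent=2))
```

The heavy lifting — acoustic modeling, language modeling, speaker separation — happens inside the API; the developer only describes the desired output.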
See this article for examples of why tech companies and startups have chosen Google Cloud’s Speech APIs for use cases that range from deriving customer insights to giving robots empathetic personalities.
2. Use managed services to scale ML development and accelerate deployment of models to production
Pre-trained models are extremely useful, but in many cases, tech companies and startups need to create custom models to either derive insights from their own data or to apply new use cases to public data. Regardless of whether they’re building data-driven products or generating forecasting models from customer data, companies need ways to accelerate the building and deployment of models into their production environments.
A data scientist typically starts a new ML project in a notebook, experimenting with data stored on the local machine. Moving these efforts into a production environment requires additional tooling and resources, including more complicated infrastructure management. This is one reason many organizations struggle to bring models into production and burn through time and resources without moving the revenue needle.
Managed cloud platforms can help organizations transition from projects to automated experimentation at scale or the routine deployment and retraining of production models. Strong platforms offer flexible frameworks, fewer lines of code required for model training, unified environments across tools and datasets, and user-friendly infrastructure management and deployment pipelines.
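The gap between notebook experimentation and production becomes clearer when the workflow is broken into explicit stages; a managed platform automates the orchestration, retraining triggers, and infrastructure behind each one. Here is a minimal pure-Python sketch of those stages (the trivial model, quality threshold, and stage names are illustrative, not any platform’s API):

```python
from typing import Callable

# Each stage is a plain function; a managed platform would run these as
# orchestrated, monitored pipeline steps instead of ad-hoc notebook cells.

def train(data: list) -> Callable[[float], float]:
    """Fit a trivial one-parameter model y ~ w * x by least squares."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data) or 1.0
    w = num / den
    return lambda x: w * x

def evaluate(model: Callable[[float], float], holdout: list) -> float:
    """Mean absolute error on held-out data."""
    return sum(abs(model(x) - y) for x, y in holdout) / len(holdout)

def deploy_gate(error: float, threshold: float = 0.5) -> bool:
    """Only promote the model to production if it beats the quality bar."""
    return error <= threshold

training_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
model = train(training_data)
error = evaluate(model, [(4.0, 8.0)])
print("deploy" if deploy_gate(error) else "retrain")
```

In production, each of these steps also needs versioning, logging, scaling, and scheduled re-runs — which is exactly the tooling a managed ML platform supplies.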
At Google Cloud, we’ve seen customers with these needs embrace Vertex AI, our platform for accelerating ML development, in increasing numbers since it launched last year. Accelerating time to production by up to 80% compared to competing approaches, Vertex AI provides advanced end-to-end MLOps capabilities so that data scientists, ML engineers, and developers can all contribute to ML acceleration. It includes low-code features, like AutoML, that make it possible to train high-performing models without deep ML expertise.
Over the first half of 2022, the number of customers utilizing Vertex AI Workbench increased by 25x. It’s exciting to see the impact and value customers are gaining with Vertex AI Workbench, including helping companies speed up large model training jobs by 10x and helping data science teams improve modeling precision from the 70–80% range to 98%.
If you are new to Vertex AI, check out this video series to learn how to take models from prototype to production. For deeper dives, see
- this article about Vertex AI’s role in an ambitious project to measure climate change with AI;
- this blog about how Vertex AI and BigQuery work together to make data analysis easier and more powerful (BigQuery’s built-in ML lets you create no-code predictions using just SQL queries);
- and this blog about Example-based explanations, one of our most recent updates to make model iteration more intuitive and efficient.
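To make the BigQuery ML workflow mentioned above concrete: a model is created and trained with a single SQL statement, no separate training infrastructure required. A sketch follows, composed as a Python string for readability (the dataset, table, and column names are hypothetical; `logistic_reg` is one of BigQuery ML’s built-in model types):

```python
def create_model_sql(dataset: str, model_name: str, source_table: str) -> str:
    """Compose a BigQuery ML CREATE MODEL statement (names are placeholders)."""
    return f"""
CREATE OR REPLACE MODEL `{dataset}.{model_name}`
OPTIONS (
  model_type = 'logistic_reg',        -- built-in logistic regression
  input_label_cols = ['churned']      -- hypothetical label column
) AS
SELECT * FROM `{dataset}.{source_table}`
""".strip()

sql = create_model_sql("analytics", "churn_model", "customer_features")
print(sql)
```

Once trained, predictions come from an ordinary `SELECT` over `ML.PREDICT`, so analysts who know SQL can build and use models without touching an ML framework.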
3. Harness the cloud to match hardware to use cases while minimizing costs and management overhead
ML infrastructure is generally expensive to build, and depending on the use case, specific hardware requirements and software integrations can make projects costly and complicated at scale. To solve for this, many tech companies and startups look to cloud services for compute and storage needs, attracted by the ability to pay only for resources they use while scaling up and down according to changing business needs.
At Google Cloud, customers share that they need the ability to optimize around a variety of infrastructure approaches for diverse ML workloads. Some use Central Processing Units (CPUs) for flexible prototyping. Others leverage our support for NVIDIA Graphics Processing Units (GPUs) for image-oriented projects and larger models, especially those with custom TensorFlow operations that must run partially on CPUs. Some choose to run on the same custom ML processors that power Google applications—Tensor Processing Units (TPUs). And many use different combinations of all of the preceding.
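The hardware choices described above can be summarized as a simple decision sketch. The rules below are an illustrative heuristic distilled from this paragraph, not an official sizing guide:

```python
def pick_hardware(workload: dict) -> str:
    """Illustrative heuristic matching an ML workload to compute, per the
    rough patterns above: CPUs for flexible prototyping, GPUs for
    image-oriented projects and larger models (including custom TensorFlow
    ops), TPUs for models built to scale on them."""
    if workload.get("prototype"):
        return "CPU"            # flexible and cheapest to iterate on
    if workload.get("image_heavy") or workload.get("custom_tf_ops"):
        return "GPU"            # e.g. NVIDIA GPUs for vision / large models
    if workload.get("tpu_optimized"):
        return "TPU"            # custom ML processors that power Google apps
    return "CPU"                # default to general-purpose compute

print(pick_hardware({"image_heavy": True}))  # GPU
```

In practice a single organization mixes all three, and the pay-for-what-you-use cloud model means the choice can be revisited per workload rather than locked in at purchase time.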
Beyond matching use cases to the right hardware and benefiting from the scale and operational simplicity of a managed service, tech companies and startups should explore configuration features that help further control costs. For example, Google Cloud features like time-sharing and multi-instance capabilities for GPUs — as well as features like Vertex AI Training Reduction Server — are built to optimize GPU costs and usage.
Vertex AI Workbench also integrates with the NVIDIA NGC catalog for deploying frameworks, software development kits and Jupyter Notebooks with a single click—another feature that, like Reduction Server, speaks to the ways organizations can make AI more efficient and less costly via managed services.
4. Implement AI for operations
Beyond using pre-trained APIs and custom ML models to build and deliver products, tech companies and startups can improve operational efficiency, especially as they scale, by leveraging AI solutions built for specific business and operational needs, like contract processing or customer service.
Google Cloud’s Document AI products, for instance, apply ML to documents for use cases ranging from contract lifecycle management to mortgage processing. For businesses whose customer support needs are growing, there’s Contact Center AI, which helps organizations build intelligent virtual agents, facilitate handoffs as appropriate between virtual agents and human agents, and generate insights from call center interactions. By leveraging AI to help manage operational processes, startups and tech companies can allocate more resources to innovation and growth.
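At its core, the virtual-agent-to-human handoff that Contact Center AI facilitates is an escalation decision. A minimal sketch of that logic (the confidence threshold and intent names here are hypothetical, not Contact Center AI’s API):

```python
def route_turn(intent: str, confidence: float, threshold: float = 0.6) -> str:
    """Decide whether a virtual agent handles a conversation turn or hands
    off to a human. Threshold and intent names are illustrative only."""
    if intent == "escalate" or confidence < threshold:
        return "human_agent"     # explicit request or low confidence: hand off
    return "virtual_agent"       # confident intent match: let the bot answer

print(route_turn("billing_question", 0.92))  # virtual_agent
print(route_turn("billing_question", 0.35))  # human_agent
```

Keeping routine, high-confidence turns with the virtual agent is what frees human agents (and budget) for the conversations that genuinely need them.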
Next steps toward an intelligent future
The tips in this article can help any tech company or startup find ways to save money and boost efficiency with AI and ML. You can learn more about these topics by registering for Google Cloud Next, kicking off October 11, where you’ll hear Google Cloud’s latest AI news, discussions, and perspectives. In the meantime, you can dive into our Vertex AI quickstarts and BigQuery ML tutorials. And for the latest on our work with tech companies and startups, be sure to visit our Startups page.