AI & Machine Learning

Ask OCTO: What to know about the data-AI virtuous cycle

September 25, 2024
Will Grannis

VP and CTO, Google Cloud

Google Cloud Office of the CTO experts share insights on preparing your data for AI and what implementation challenges to look out for.

In our Ask OCTO column, experts from Google Cloud's Office of the CTO answer your questions about the business and IT challenges facing you and your organization now. Think of this series as Google Cloud’s version of an advice column — except the relationships we're looking to improve are the ones in your tech stack.

Please submit questions for future columns here.

The motto of Google Cloud’s Office of the CTO is “collaborative, practical magic.” The team is made up of Google Cloud technical experts and former CTOs of leading organizations, all of whom work in the service of helping our largest and most strategic customers tackle their biggest challenges.

In this edition, we're tackling a pair of questions: first on the AI lifecycle, and then on what it takes to implement and scale AI solutions.

What's the behind-the-scenes process of the AI lifecycle in terms of how data is collected, processed, evaluated, and used to train AI models?

Scott Penberthy, Director, Applied AI, Office of the CTO

“Clean data” refers to data that has been extracted from operational systems and processes and prepared for analysis.

For unstructured data, such as images, audio, videos, or text, you will need to build AI pipelines that transform data into vector embeddings — numerical representations that capture semantic relationships and similarities.

This transformation depends on the business problem at hand, as it “embeds” the signal you need in a much more compact form. AI models, often variational autoencoders (VAEs), help with this. For example, we recently announced a VAE we developed with Ginkgo to encode protein sequences as vectors.
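
To ground this in code, here is a minimal sketch of one embedding step using the Vertex AI Python SDK's text-embedding interface. The project ID, location, and model name are placeholders you would replace with your own, and a real pipeline would batch far more records.

```python
import vertexai
from vertexai.language_models import TextEmbeddingModel

# Placeholders: swap in your own project, region, and embedding model.
vertexai.init(project="your-project-id", location="us-central1")
model = TextEmbeddingModel.from_pretrained("text-embedding-004")

texts = [
    "Customer reports the checkout page times out on mobile.",
    "Shipment arrived two days late and the box was damaged.",
]

# Each result carries a list of floats you can store in a vector database.
for text, embedding in zip(texts, model.get_embeddings(texts)):
    print(text[:40], "->", len(embedding.values), "dimensions")
```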

Semi-structured data will also need to be collected and stored in a cloud data warehouse or database. There are many standards for this, such as FHIR in healthcare. This “metadata” is what AI likes to analyze.
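
As a hedged illustration of that collection step, the sketch below loads a few JSON records into BigQuery with the google-cloud-bigquery client. The project, dataset, table, and field names are placeholders, and the records are a simplified stand-in for a real standard like FHIR.

```python
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")  # placeholder project

# Simplified, FHIR-inspired metadata records (not actual FHIR resources).
rows = [
    {"patient_id": "p-001", "resource_type": "Observation", "code": "body-temp", "value": 98.6},
    {"patient_id": "p-002", "resource_type": "Observation", "code": "body-temp", "value": 101.2},
]

job_config = bigquery.LoadJobConfig(
    autodetect=True,  # let BigQuery infer a schema from the JSON
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
job = client.load_table_from_json(rows, "your_dataset.observations", job_config=job_config)
job.result()  # block until the load job completes
```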

Next, you’ll use an AI platform like Vertex AI to prompt or train existing models with your clean data to generate outputs, fine-tune or create new models to improve results, and eventually build more complex workflows that combine multiple models. AI output will contain mistakes, but it can help show areas where you need to gather and clean even more data. You’ll simply go back and repeat the whole cycle.
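
As a rough sketch of that prompting step, assuming the Vertex AI Python SDK and a Gemini model name that may differ in your environment:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: your own project, region, and whichever model your platform offers.
vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

ticket_text = "App crashes when I tap 'export to PDF' on Android 14."

response = model.generate_content(
    "Summarize this support ticket in one sentence and tag its product area:\n" + ticket_text
)
print(response.text)

# When prompting alone isn't enough, the same platform lets you fine-tune a model
# on your cleaned examples and plug the result into the same generate_content call.
```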

Everyone’s data is a dumpster fire, so relax. You’re in good company.

You can spend millions and years digging through and cleaning up data, but it’s best to resist the urge. Instead, focus first on what you’re trying to do and the problems that are slowing your business down, costing you money, and getting in the way of winning market share.

With this in hand, you can then explore the data you already have around these related workflows or processes to highlight the data you’ll need to clean first. Welcome to the fun of AI!


Sarah Gerweck, Technical Director, Office of the CTO

AI and machine learning have historically offered a bridge between the data we have and the data we wish we had. Modern AI offers us new ways to move among all our information — structured, unstructured, and semi-structured — and turn it into intelligible signals.

A key trap to avoid is thinking that the AI data cycle is a one-time process. Like any software you develop and deploy, you will have an ongoing operational lifecycle. Make sure you’re tracking the lineage of your data — which model or process produced which data and when. New models will bring increased performance and new capabilities, so always have a plan to test, compare performance, and roll out new versions. In addition, you should think about choosing a platform that has the kind of operability and integrations you’ll need as your use of AI grows.
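
One lightweight way to start on lineage tracking is sketched below, with an illustrative schema rather than any particular product's API; the field names and the stdout "store" are assumptions you would replace with your own metadata system.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Minimal lineage metadata: which model produced which output, from what input, and when."""
    output_id: str
    model_name: str
    model_version: str
    prompt_sha256: str
    created_at: str

def record_lineage(output_id: str, model_name: str, model_version: str, prompt: str) -> LineageRecord:
    record = LineageRecord(
        output_id=output_id,
        model_name=model_name,
        model_version=model_version,
        prompt_sha256=hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    # In practice, write this to your warehouse or metadata store instead of stdout.
    print(json.dumps(asdict(record)))
    return record

record_lineage("summary-0042", "gemini-1.5-flash", "2024-09-24", "Summarize this support ticket...")
```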

The great news is that AI operations involve many of the same things your organization already knows how to do! You likely already have things like data pipelines, access controls, change management, testing and auditing. Find the right partners and thought leaders to help you bring along these new capabilities as you improve all those existing frameworks.

Orna Berry, Technical Director, Office of the CTO

Gaining large language model insights and recommendations requires data warehouses that combine and use clean, responsible data. Following the removal or correction of errors and inconsistencies, your data should be accurate, consistent, and relevant. This helps improve the performance and reliability of your AI models.

It is important to mention that when data comes in multiple modalities, gen AI models benefit directly from the quality and multimodal richness of the data they are trained on, which helps improve their correctness and semantic similarity and reduce their likelihood of hallucination.

In practice, APIs can be used to understand different types of inputs (e.g., text, images, video or code) and generate almost any output using gen AI models. You can also accelerate the use of data to train or fine-tune your models by embracing AI platforms that allow you to train, build, and monitor AI models in one place while providing access to proprietary data for training and fine-tuning.
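
As a hedged example of a multimodal call, again assuming the Vertex AI Python SDK; the bucket path and model name are placeholders.

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-project-id", location="us-central1")  # placeholders
model = GenerativeModel("gemini-1.5-pro")  # placeholder model name

# Reference an image stored in Cloud Storage (placeholder URI).
receipt = Part.from_uri("gs://your-bucket/receipts/0001.jpg", mime_type="image/jpeg")

response = model.generate_content(
    [receipt, "Extract the merchant name and total amount as JSON."]
)
print(response.text)
```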

What are the biggest real-world obstacles companies face when trying to implement and scale AI solutions?

Jeff Sternberg, Technical Director, Office of the CTO

Generative AI can increase implementation complexity since these models are inherently “creative” and therefore non-deterministic by default. Non-determinism means that each generation can produce slightly different results, such as different wording of a sentence or different pixels in a generated image. In highly regulated industries, such as financial services, this is particularly challenging as AI models must be explainable (both internally and to regulators), and organizations must be able to prove their outputs are correct. Nobody wants “hallucination” in a banking or payments transaction.

AI practitioners in these industries can mitigate the risks of non-determinism with strategies like grounding, which instructs models to base their outputs on authoritative reference information, such as real-time data from enterprise systems or policy documents. This context can be provided directly in the prompt itself or through techniques like retrieval augmented generation (RAG). You can also adjust model parameters, such as temperature, to steer the model toward more factual, less varied answers.
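
A minimal sketch of that idea, assuming the Vertex AI Python SDK: the retriever here is a stub standing in for your own document search or RAG layer, and the model name and policy text are placeholders.

```python
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")  # placeholders

def retrieve_policy_snippets(query: str) -> list[str]:
    # Stub: in a real system this would query your vector store or search index.
    return ["Retail customers may wire up to $25,000 per business day."]

snippets = retrieve_policy_snippets("wire transfer limits")
prompt = (
    "Answer using ONLY the reference material below. "
    "If the answer is not there, say you don't know.\n\n"
    "Reference material:\n" + "\n".join(snippets) + "\n\n"
    "Question: What is the daily wire transfer limit for retail customers?"
)

model = GenerativeModel("gemini-1.5-pro")  # placeholder model name
response = model.generate_content(
    prompt,
    generation_config=GenerationConfig(temperature=0.0),  # low temperature reduces variation
)
print(response.text)
```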

Importantly, no matter which technical approaches are used to gain confidence in model behavior, a robust testing and validation system should be developed alongside the AI system itself. Evaluation is critical during development and should continue after production deployment, so that stakeholders and the AI product team can verify that the system is performing as expected over time. Logging is key — and don’t forget that AI tools can summarize logs and perform outlier detection to spot issues like model drift. It’s imperative to bring everyone in the organization together to learn the techniques being leveraged for model controls and governance and set up a positive feedback loop between teams.
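
A toy version of such an evaluation harness might look like the following; the golden set and pass criterion are illustrative, and `generate_fn` is whatever callable wraps your deployed model.

```python
import statistics
import time

# Illustrative golden set: prompts paired with a fact the answer must contain.
GOLDEN_SET = [
    {"prompt": "What is our refund window?", "must_contain": "30 days"},
    {"prompt": "Which plan includes priority support?", "must_contain": "Enterprise"},
]

def evaluate(generate_fn) -> float:
    """Run the golden set through a model callable, logging accuracy and latency."""
    latencies, passed = [], 0
    for case in GOLDEN_SET:
        start = time.perf_counter()
        answer = generate_fn(case["prompt"])
        latencies.append(time.perf_counter() - start)
        passed += case["must_contain"].lower() in answer.lower()
    accuracy = passed / len(GOLDEN_SET)
    print(f"accuracy={accuracy:.2f} median_latency={statistics.median(latencies):.3f}s")
    return accuracy

# Example with a stand-in model so the sketch runs on its own:
evaluate(lambda prompt: "Refunds are accepted within 30 days on the Enterprise plan.")
```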


Olaf Schnapauff, Distinguished Technical Director, Office of the CTO

Today, executive boards want and need to show progress in AI, whether or not their companies are in the IT business. The first AI projects are the hardest to execute, especially as many boards and teams lack experience in setting a company up to succeed with a new technology. Supervisory boards I have worked with have repeatedly said they need strong business cases before funding initial AI project proposals.

AI should be considered an investment, one that should not necessarily be managed through traditional business processes at first. Traditional budgeting, staffing, and execution processes might need to be skipped in favor of moving fast enough in this highly competitive and dynamic field. In long-established business models, the terms of data ownership and usage rights might also need to be updated, especially in older contracts where no clear agreements exist.

Even as your first steps in AI move forward, it’s important to cultivate a leadership environment and culture that can carry you to the tipping point, from which AI capabilities and opportunities spread quickly and become broadly available. As a leadership team, you should be seen not only declaring the new role of AI but also using and embracing it. For example, having the board join a company AI hackathon, hands on keyboard, can send a strong message. Building and maintaining partnerships with AI-savvy companies can also help create the right environment for AI initiatives to thrive.

John Abel, Technical Director, Office of the CTO

We see a number of challenges working with customers to implement and scale AI. In particular, addressing risks, ethics, and responsible AI is a cornerstone of success. Too many times, I hear that people want only 100% accuracy, yet they have no idea of their current error level or what level of error would be acceptable when using AI. Using an AI platform that scales and provides guardrails can help empower all users, including business users, to embrace and use AI safely and securely without curtailing innovation. It’s also critical to have a clear communication strategy to help manage any culture, beliefs, and biases that may influence the outcomes of AI use in your organization.

In addition, it’s extremely important to secure executive sponsorship to achieve success with AI. Without board or leadership commitment, these initiatives will be incredibly difficult to scale. AI should not be treated as just another technology project but as fundamental to the business and its growth.

Chuck Sugnet, Staff Software Engineer, Office of the CTO

Are you going for moonshots or roofshots? AI is moving incredibly fast, and being clear about the right risk-to-reward ratio for your business is an important first step toward success. Moonshots have the potential for outsized gains, but they also require large up-front investments and the ability to wait years, even decades, to see results. By comparison, roofshots are much lower risk and lower cost, but they can still have great compounding effects on your business.

For example, it is exciting to see a demo where using AI to design and write software could potentially 10X the productivity of your software team. In the shorter term, though, the real question is whether your business can bring together all your code bases, data sources, testing and monitoring workflows, and best practices in a way that helps your existing developers simply double their productivity, velocity, and stability. And if so, can you provide a strong foundation of code, infrastructure, and data integration to continuously support AI as it improves?

The other thing to consider is how you will measure success. Implementing AI is like hiring and onboarding a lot of new employees all at once. The way success is measured needs to be scalable and repeatable as it’s very possible that you’ll have to track the performance of hundreds, even thousands, of new AI agents every quarter as they continue to improve. You'll need to automate these processes and have good tests in place to measure important metrics around latency, security, quality, and stability.
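
As one hedged sketch of what automated tracking could look like at that scale, the thresholds, agent names, and metrics below are illustrative assumptions, not measurements from any real system.

```python
from dataclasses import dataclass

@dataclass
class AgentReport:
    agent_id: str
    quality: float        # fraction of test cases passed this quarter
    p95_latency_s: float  # 95th-percentile response latency in seconds

# Illustrative bars your team would agree on in advance.
MIN_QUALITY = 0.90
MAX_P95_LATENCY_S = 2.0

def flag_regressions(reports: list[AgentReport]) -> list[str]:
    """Return IDs of agents falling below the agreed quality or latency bar."""
    return [
        r.agent_id
        for r in reports
        if r.quality < MIN_QUALITY or r.p95_latency_s > MAX_P95_LATENCY_S
    ]

reports = [
    AgentReport("billing-agent", quality=0.94, p95_latency_s=1.3),
    AgentReport("refunds-agent", quality=0.81, p95_latency_s=2.6),
]
print(flag_regressions(reports))  # ['refunds-agent']
```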
