The Prompt: Probability, data, and the gen AI mindset
Phil Moyer
Global VP, AI & Business Solutions, Google Cloud
Business leaders are buzzing about generative AI. To help you keep up with this fast-moving, transformative topic, “The Prompt” brings you our ongoing observations from our work with customers and partners, as well as the newest AI happenings at Google. In this edition, Phil Moyer, global vice president for AI & Business Solutions at Google Cloud, examines three mindset shifts that leaders should explore.
I consider generative AI the most approachable and flexible technology that computer science has ever built. But that doesn’t mean it’s obvious how a business should pursue custom generative apps.
If asked what makes gen AI different and exciting, most enterprise leaders point out how the technology responds with human-like fluency to users’ simple natural language instructions, replacing complex software interfaces with intuitive interactions and chats. This distinction is remarkable because it’s not just a visionary idea but actually coming true before our eyes — mostly.
Given improvements in model quality and generative AI techniques, their outputs can seem remarkably human-like. Even so, it's important to remember that the foundation models underpinning gen AI are no more aware of their own existence than a pencil is of erasers. Their ostensibly human-like behaviors are rooted firmly in statistical reasoning: they can predict what a person might want, but they cannot think in human terms.
Therefore, to fully harness the capabilities of generative AI, leaders need an awareness of what models appear to do as well as what they actually do. In this post, we’ll delve into why generative AI is different from past software paradigms, and three mindset shifts it requires from executives.
Why gen AI is different
Because gen AI behaves and leverages data differently than more familiar types of software, leaders should understand some of the contours that define possible or advisable use cases.
The distinction between generative AI and other technologies can get into the weeds of data science, if not also the weeds of philosophy. The deepest discussions center around complicated topics like gradient descent, activation functions, “mixture of experts” techniques, “world models,” prompt chains, scaling laws, and more. There’s ample opportunity to get lost in technical details, but for enterprise leaders, a key distinction is this: Whereas traditional software is programmed, generative AI is trained, tuned, reinforced, and prompted.
It’s the difference between a machine recognizing “If zip code = 94105, the city = San Francisco” and a machine inferring possible answers to “I’ll be visiting San Francisco for a week and staying on the Embarcadero with my three kids, who are all under 10 and one of whom needs a nap every afternoon. What would be some fun, educational activities for us to do?” The former is a static 1:1 relationship between two pieces of information — an if-then dynamic — but the latter is a complex blend of contexts and data with a distribution of possible answers — more like a what-if dynamic.
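The contrast can be sketched in a few lines of Python. The lookup table is real code; the model call is a hypothetical stand-in (`generate` is not an actual API) included only to show the shape of the difference:

```python
# Deterministic: a fixed mapping always returns the same answer.
ZIP_TO_CITY = {"94105": "San Francisco", "10001": "New York"}

def lookup_city(zip_code: str) -> str:
    # An if-then relationship: one input, one guaranteed output.
    return ZIP_TO_CITY.get(zip_code, "unknown")

# Probabilistic: a generative model weighs the whole context and returns
# one of many plausible answers. `generate` is a hypothetical model call.
prompt = (
    "I'll be visiting San Francisco for a week with my three kids, "
    "all under 10. What are some fun, educational activities?"
)
# response = generate(prompt)  # output varies run to run

print(lookup_city("94105"))  # San Francisco
```

The deterministic function will answer the same way forever; the prompt, by design, will not.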
Mindset one: Generative models are probability engines
Let’s dig into this difference.
Most of the functionality in today’s apps is deterministic in nature, based on rules that return certain outputs. When an app needs to look up product prices or validate customer information, it uses functions to call a database and return a precise, repeatable answer.
In contrast, the foundation models powering generative AI are probabilistic, best thought of as probability engines that humans can tune toward desired outcomes. They use the patterns they learn during training and tuning to calculate the most probable output to a given prompt, such as the most likely answer to a question or an accurate caption for an image. Since they’re not deterministically confined by rows and columns in a database, foundation models are extremely powerful, able to take on many complex, novel, and open-ended tasks that traditional software can’t match. However, they’re also not appropriate for everything.
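To make "probability engine" concrete, here is a toy sketch of how a model chooses its next word: it assigns probabilities to candidate tokens and either picks the most likely one or samples from the distribution. The probabilities below are invented for illustration, not taken from any real model:

```python
import random

# Invented probabilities a model might assign to the next word
# after the prompt "The capital of France is".
next_token_probs = {"Paris": 0.92, "Lyon": 0.04, "beautiful": 0.03, "a": 0.01}

def most_likely(probs: dict) -> str:
    # Greedy decoding: always pick the highest-probability token.
    return max(probs, key=probs.get)

def sample(probs: dict, temperature: float = 1.0) -> str:
    # Sampling introduces controlled randomness ("temperature"),
    # which is why the same prompt can yield different outputs.
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

print(most_likely(next_token_probs))  # Paris
```

Greedy decoding gives repeatable answers; sampling gives varied ones. Neither involves looking anything up — the "answer" is whichever continuation the learned distribution favors.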
Suppose a customer interacts with a generative retail app, for example. On one hand, a model’s probabilistic flexibility could create deeply personalized and differentiated experiences. If the model has been tuned with the retailer’s proprietary data — such as product commentaries, reviews, and documentation — it could let customers discover, compare, choose, and accessorize products via a fluid, cohesive conversation instead of website navigation. But on the other hand, what about interactions like, say, looking up current inventory? There is no way this deterministic, constantly-updated information can come from the model itself. That approach would just encourage the model to blithely spit out would-be inventory values, piecing together probabilities based on patterns it saw during training — and very, very rarely inferring the true numbers.
Connecting to real-time or constantly-refreshed data is not possible through the probabilistic nature of the model itself. For this, a generative app requires interactions between the model and other systems.
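A minimal sketch of that interaction, using a hypothetical in-memory `inventory_db` and a crude keyword router (production apps would use the model's own function-calling or tool-use features to decide when to call out to a system of record):

```python
# Hypothetical inventory database; in production this would be a live
# service or SQL query — never the model's training data.
inventory_db = {"SKU-123": 42, "SKU-456": 0}

def check_inventory(sku: str) -> str:
    # Deterministic path: the answer must come from the system of record.
    count = inventory_db.get(sku)
    if count is None:
        return f"Unknown product {sku}."
    return f"{count} units of {sku} in stock."

def handle(user_message: str) -> str:
    # Crude intent routing for illustration only.
    if "in stock" in user_message.lower():
        for sku in inventory_db:
            if sku in user_message:
                return check_inventory(sku)
    # return generate(user_message)  # probabilistic path (hypothetical)
    return "(model-generated answer)"

print(handle("Is SKU-123 in stock?"))  # 42 units of SKU-123 in stock.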
Mindset two: Define how data shapes the probability engines
When it comes to combining generative models with various data sources and deterministic functions, most of the specific work will fall to developers, data scientists, product managers, and similar experts. But driven by an understanding of how probabilistic and deterministic processes unlock value in different ways, leaders should set the tone for these efforts, identifying which data sources their technical talent can leverage for new apps and experiences.
For example, think how many employees spend time scrutinizing spreadsheets or compiling information from different sources — not because it’s their job, per se, but because they’re tracking down information to do their job. In many cases, it would be much easier if employees could simply ask an AI assistant to assemble the information they need, letting them spend more time acting and less time preparing to act.
The specific potential use cases that fit this scenario are numerous: call center agents finding information for a customer, analysts completing research for reports, blog editors trying to ensure brand tone and style guidelines are implemented, and many more. But achieving any of them requires leaders who see the intersections between data and generative AI — and act on them.
Organizations have spent years collecting data, at times complaining there was too much to manage. With generative AI, that’s no longer the case. It’s effectively possible for employees to talk to all that data, so to speak, to ask questions of it, pose theories, and quickly get back actionable responses. But these new workflows require new ways of thinking.
As we’ve established, most generative apps need to be designed to mix deterministic and probabilistic outputs as the context dictates, all while keeping the user experience seamless and personalized. This blend requires not only technical skill but also a clear executive vision for how data should be activated with generative AI. The best foundation models, even when fresh off the shelf, do a good job with many of the prompts thrown at them. But the more niche the topic, the less reliable a model’s probabilities tend to be. This can lead to inaccurate or less useful outputs — unless data, and how it is leveraged, has been carefully considered.
Such challenges raise some of the earliest questions of data leadership. Does a foundation model need to be fine-tuned with data specific to the use case, and if so, with what data? Or is an optimal app less about refining a model and more about extending it to various deterministic systems for transactions and real-time information? When is the experience defined by connecting to a deterministic process, when is the experience defined by refinements to the probabilistic model itself, and when is it defined by some mixture of both concepts?
These questions illustrate why an overall data strategy, both across the organization and within each discrete project, is so important. Executives don’t need to weigh in on every single decision across the organization, but project by project, these questions accrue to whether leaders have defined a clear strategy for data and AI overall.
Suppose a travel company wants to launch an AI assistant to help customers more easily and quickly plan vacations within their budget. Users will describe what they are looking for, such as specific locations or general ideas like “beach” or “lots of museums.” The assistant will identify potential vacation plans and deals, offer in-depth chats about different locales, and if instructed, seamlessly book flights or hotels, create tour itineraries, and make various reservations.
There are a fair number of moving pieces in this theoretical service, including not only a foundation model generating probabilistic answers to questions, but also various deterministic systems for transactions and retrieval of booking information and other real-time information. Various technologies may come into play, such as training or fine-tuning the model with specific data or leveraging embeddings or connectors to extend the model to other systems. But all the technical details aside, the app’s potential value starts by identifying how generative AI can unlock the value of data in new ways. For example, does the app use generative AI to simply package information that other services can replicate, such as booking and destination data from various airline and hotel APIs, or does it activate unique data available only to the travel company, like years of travel reviews with specific details and observations?
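Extending a model with proprietary data often works through embeddings: each review is converted to a vector, and the reviews most similar to the user's question are retrieved and added to the prompt. The sketch below uses hand-written three-dimensional toy vectors; a real system would generate embeddings with a model, and they would have hundreds of dimensions:

```python
import math

def cosine(a: list, b: list) -> float:
    # Similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings standing in for model-generated ones.
reviews = {
    "Great beaches, kids loved the tide pools": [0.9, 0.1, 0.2],
    "Museum district is walkable and cheap": [0.1, 0.9, 0.3],
}

def retrieve(query_vec: list) -> str:
    # Pick the review most relevant to the user's question, then include
    # it in the prompt so the model answers from the company's own data.
    return max(reviews, key=lambda r: cosine(reviews[r], query_vec))

beach_query = [0.8, 0.2, 0.1]  # pretend embedding of "beach trip with kids"
print(retrieve(beach_query))
```

This is how unique data, like years of travel reviews, reaches the model at answer time without retraining it: retrieval is deterministic, and only the final phrasing is probabilistic.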
Leaders should be the architects of these data-activation efforts, identifying which latent data sources can be unlocked with generative AI, how generative apps can get the right information to the right people more effectively, and how new value can be activated.
Mindset three: Owning the opportunity means owning the responsibility
Generative apps use data differently and behave differently than most earlier kinds of apps, so it follows that they involve new responsibilities. We’ve touched lightly on some of these ideas, like guardrails against undesirable model behavior or various methods to improve model accuracy. Security and privacy considerations are also paramount.
But the responsibility is much bigger than any single consideration. Transformative technologies have the potential to make lots of things more efficient — but the real value lies in whether they make them better.
That’s where leaders have the opportunity to shine, defining how new technologies can shape interactions with information, improve how employees do their jobs, and elevate how customers interact with companies. Every executive should be part of an effort to define responsible AI principles — an effort that can only effectively be led from the top.
Build a vision and strategy for generative AI
In many ways, it’s easier than ever for developers without machine learning or data science experience to build sophisticated AI apps — but it’s not a seamless transition, and not one enterprise leaders should expect to make immediately at scale.
Compared to the transition from keyboards and desktops to touchscreens and mobility, the transition to generative AI has the potential to be even more complicated and wide-reaching. Understanding what’s changed — and the mindsets those changes require — is essential to navigating the new environment.
Opening image created with Midjourney, running on Google Cloud, using the prompt "a probability distribution transforms into a certain outcome."