The experimentation advantage: How leaders turn uncertainty into opportunity

Monisha Deshpande
Global Director, Value Creation, Google Cloud
Organizations can overcome the bureaucratic friction slowing their AI adoption by embracing a culture of structured experimentation.
Why are 90% of employees already using AI at work while only 13% of organizations call themselves 'early adopters'? The answer reveals an important barrier to AI value and the path around it.
Individuals are experimenting rapidly with AI tools, discovering what works through direct experience. Meanwhile, organizations are slow to adopt. This gap isn't about technology; AI capabilities are advancing rapidly, with models that are more accurate, efficient, and accessible than ever. The real constraint is something more fundamental: how organizations naturally protect what's already working.
The real constraint on AI value
Here's what I see in the market: the technology is incredibly useful, and the ROI of well-implemented AI is compelling. But doing AI well requires changing established processes and operating differently, which is not purely a technical challenge. Research shows that 70% of AI implementation challenges stem from people and process issues; only 10% are about algorithms.
This is only natural. Organizations favor the proven approaches that built their success. Established processes exist because they deliver consistent results. Risk management systems exist because they prevent costly mistakes. The instinct to protect what works is rational.
But AI is an entirely new technology. You often can't fully prove how it will work in your specific context until you try it.
Typical actions like stakeholder approvals, risk assessments, and rounds of analysis delay progress. While each is reasonable on its own, they cumulatively slow adoption, learning, and financial returns.
AI leaders are following a different path: structured experimentation that reduces this friction by design.
How experimentation reduces friction
Innovative companies realized long ago that rapid experimentation beats exhaustive planning when technology moves fast and outcomes are uncertain. This approach emerged in fast-moving tech environments because waiting for certainty meant arriving too late.
AI now brings this same dynamic to every industry. The strategic question is how to build experimentation capability within enterprise constraints.
Companies doing this well have recognized that small, structured experiments face far less friction than large transformation initiatives. Bottom-up AI adoption happens precisely because individual experiments bypass the approval gates, risk reviews, and stakeholder negotiations that slow major process changes.
This is experimentation as a friction-reduction strategy. Each test generates learning without requiring wholesale change upfront. It’s about running a controlled test to see what might work better.
What matters most is the permission structure. When leaders value learning from controlled experiments, even those that don't scale, teams start moving. They test, learn, and build confidence through direct experience, without waiting for perfect certainty.
Experimentation also addresses individual uncertainty about AI. When people participate in tests, they discover firsthand how AI can help them with tedious tasks like data analysis and summarization, freeing them to focus on judgment, creativity, and meaningful problem-solving. This personal discovery builds comfort and enthusiasm, transforming AI from a risk to a valuable tool. Teams with high experimentation rates develop workforces that actively look for AI applications because they've experienced the benefits personally.
Google Cloud's testing environments support this approach, allowing safe, small starts that scale what proves valuable. However, the technology is secondary to the cultural shift: making it acceptable, even celebrated, to learn through doing.
Why the learning gap widens
Companies that build this capability gain a compounding advantage. The gap between AI leaders and laggards grew 60% over just three years, and it is accelerating.
Teams get better at designing meaningful experiments, developing intuition about what to test, and building confidence in trying new approaches. They also get better at knowing what to do next, feeding successful experiments into a process that determines the best approach to scale, whether that means expanding to new teams, adapting to adjacent workflows, or refining before broader rollout.
Those with high experimentation capability make better choices based on data from their own operations. They understand not just whether AI works, but how it works with their specific workflows, teams, and constraints.
Your competitors are actively experimenting, learning what works, and building the organizational capacity to adopt innovations more easily.
The path forward
You don't need to overhaul everything; you just need to start learning.
Organizations capturing AI value work with friction, not against it. Small experiments face less resistance, and structured tests with clear parameters satisfy risk management while enabling learning. Rapid iteration builds confidence.
Pick one workflow, run a 30-day experiment, then do it again with a different question. The technology is ready. The value is there. The constraint is organizational, not technical. Experimentation is how you get past it.



