Unlocking AI's potential in healthcare: Balancing innovation and responsibility

Aashima Gupta
Global Director, Healthcare Strategy & Solutions, Google Cloud
AI can significantly advance healthcare by improving care and operations. Successful implementation requires a balanced and responsible approach, addressing ethical considerations and strategic planning.
Artificial intelligence (AI) is a powerful technology with the potential to improve medical systems and help doctors and nurses provide better care. Beyond aiding in medical diagnoses, AI can transform the behind-the-scenes machinery of healthcare—streamlining nurse handovers, calibrating discharge plans, and untangling appointment scheduling—freeing caregivers to invest their valuable time where it matters most: with patients.
AI can help providers conduct on-demand end-to-end observational studies across millions of patients, an impossible task for any individual doctor. Based on billions of data signals across patients and provider locations, AI can help predict a patient’s post-surgery recovery time, or emergency room staffing needs. AI’s biggest potential, however, may lie in emerging use cases that we can only dream about in these early years.
When applying AI to healthcare, we need to be bold, but we also need to be cautious. As with any new technology, unrealistic expectations and undisciplined approaches can derail investments. How do healthcare leaders determine the most effective ways to use AI—for individual productivity, process automation, medical diagnosis support, care plan creation and management, and treatment breakthroughs—while avoiding potential pitfalls?
AI’s promises and pitfalls
In February 2025, I had the honor of participating in a discussion at the ViVE conference in Nashville with distinguished panelists Dr. Nigam Shah, Professor of Medicine at Stanford University, and Dr. John Brownstein, SVP and Chief Innovation Officer at Boston Children's Hospital. During our session, several topics inspired particularly lively discussion on how healthcare organizations can determine their path forward with AI.
AI hype vs. reality
We debated whether AI in healthcare represents a revolutionary breakthrough or an overhyped risk. On the one hand, we all agreed on its incredible potential. To a certain extent, excitement—even hype—is necessary to unlock resource investments. On the other hand, there’s a concern that AI advocates who overpromise and underdeliver leave healthcare leaders without a clear picture of what AI technologies can actually do and where best to apply them.
For many enterprises, AI is a new muscle. Google’s ethos is to be bold in our vision and responsible in our approach. Healthcare organizations seeking to benefit from the reality while escaping the hype need to define a technology roadmap tied to strategic and business priorities, along with clear guardrails for experimentation. Balancing short-term wins with long-term innovation strategies and investing in AI literacy across leadership teams will help set realistic expectations.
As John Brownstein pointed out, “There's a finite amount of resources.” AI hype will generate opportunities to do more, he contends, but we can’t abandon promising innovations simply because they don’t have the “AI label” on them.
Ethics, safety, and trust in AI
Our panel agreed that over-reliance on AI without human oversight may introduce unintended risks in patient care. On the other hand, too many "humans in the loop" along AI workflows risk burdening caregivers. Regulation is necessary for a technology as important and far-reaching as AI. Given AI's ever-evolving nature and the need to encourage ethical innovation, regulatory frameworks must adapt. Overly rigid rules could create roadblocks that stifle progress, while lax governance could result in biased or harmful outcomes.
Nigam Shah summarized it best: “Whatever you do within an organization, make sure it is fair, usable, reliable—and trust will follow.”
Google first established AI principles in 2018. Since then, we’ve evaluated every project, product, and business decision against them. All healthcare organizations need clear guidelines for human oversight of AI-assisted clinical decision making, as well as ongoing validation protocols for AI models as they continuously evolve. These frameworks will facilitate safety, trust, and fairness.
Investment priorities and ROI of AI
Overall, we agreed that while AI has the potential to generate significant ROI in healthcare, a strategic approach that balances quick wins with long-term benefits is crucial. Low-hanging fruit, such as projects that automate repetitive, mundane operational tasks, can save millions of dollars. Organizations can then invest these savings in longer-term projects to help improve patient care. It's essential to create safe spaces for experimentation and to align investments with strategic and business priorities.
AI isn't just a technology shift—it's a strategic transformation. Organizations must move beyond isolated AI experiments and develop a clear strategy that aligns AI adoption with clinical goals, operational priorities, and patient outcomes. In my role at Google, I work with customers who are building platforms to manage and govern AI models effectively. This helps them avoid the chaos of different teams building different models in different ways, which leads to disconnected, disparate solutions. Instead, teams can build solutions on top of a solid foundation with centralized coordination, management, and permission-aware access controls.
Without a balanced approach, Shah cautioned, we risk falling into the Turing Trap: limiting AI to tasks we already know how to do and failing to imagine work that humans can't do, such as looking through millions of records and producing an answer no individual clinician could have reached.
AI’s bright future in healthcare
AI is neither a silver bullet nor a looming threat, but a powerful tool that can drive significant advancements in healthcare when wielded responsibly. Organizations must balance its transformative potential with careful consideration of ethical, regulatory, and practical challenges. Our discussion at ViVE 2025 emphasized the need for trust, transparency, and a balanced approach to regulation to ensure the responsible and equitable use of AI in healthcare.
As we move forward with strategic investments and a clear technology roadmap, we need to remain both bold and responsible. Building the right foundations will unleash AI's full potential to revolutionize healthcare. The future of AI in healthcare is bright, but it requires thoughtful navigation of complex challenges so we can seize the opportunities it presents.