Building AI that benefits humanity: A conversation with Google DeepMind’s leaders
Chau Mai
Global Executive Marketing Manager, Google Cloud
Google DeepMind prioritizes building safe and ethical AI, emphasizing inclusivity and collaboration to advance human potential and improve lives.
AI isn't sci-fi anymore — it's reshaping our world in ways we're just starting to understand. But as AI proliferates across industries, fueled by the generative AI boom, leaders face a critical challenge: how to build AI that's safe, ethical, and trustworthy, all while scaling at speed.
“When we look at the adoption of the technology — how it is getting into the user's hands and being used in different products — I think that expansion of applications is picking up exponentially,” Koray Kavukcuoglu, Google DeepMind’s chief technology officer, said on a recent episode of the “AI That Means Business” podcast, hosted by Custom Content from WSJ. “It is very fast. Sometimes, maybe it's too fast.”
Google DeepMind, Google’s AI research lab, is dedicated to building AI that benefits humanity, spearheading groundbreaking AI products and advancements that aim to improve the lives of billions by advancing science, transforming work, and serving diverse communities. The innovative team, which includes scientists, engineers, and ethicists, is behind some of the most significant developments underpinning modern AI, including the Transformer architecture — one of the foundational technical cornerstones powering gen AI.
The episode, featuring Kavukcuoglu, along with Lila Ibrahim, Google DeepMind’s chief operating officer, offers an inside glimpse into the current state of AI, its ethical considerations, and its potential to reshape our world. Here are three key priorities the team is keeping top of mind as they work.
1. Safety and responsibility first
According to Kavukcuoglu, we are at a pivotal moment, where AI is transitioning from research labs to real-world applications at an exponential rate. For instance, Alphabet company and Google Cloud partner Isomorphic Labs is harnessing AI models to accelerate the discovery of new medicines. Gen AI is helping global healthcare leader Bayer fast-track drug and medical device development. Ginkgo Bioworks is pioneering large language models for biological engineering and biosecurity. The latest advancements in gen AI have unlocked novel ways to organize, understand, and interact with data, enabling users to communicate with AI in natural language and accelerating adoption across industries.
At the same time, organizations have to be more aware than ever of the possible issues, limitations, or unintended consequences of developing AI. Kavukcuoglu draws a compelling analogy between AI development and the design of airplanes or bridges. Safety, he emphasizes, must be an inherent part of the design process, not an afterthought. This principle is central to Google DeepMind's philosophy as they pioneer cutting-edge AI technologies.
“When we design airplanes, when we design bridges, we don't think about safety afterwards. We design them to be safe from the beginning,” Kavukcuoglu said. “We are in the early days [of AI], but this is the main principle that we are trying to keep in mind: that safety is at the core of model development.”
While responsible AI has long been a critical topic, the broader scope and increased sophistication of gen AI create more scenarios for potential misapplication, misuse, and unexpected consequences. For example, multimodal models like Gemini, Google's largest and most capable AI model to date, can seamlessly process, synthesize, and analyze information from multiple modalities — text, code, audio, image, and video. This versatility enables more contextual awareness, increased accuracy, and a wider range of enterprise applications, but it also puts AI in the hands of more people, whether that's developers, data scientists, analysts, business users, or customers.
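For developers, that multimodal capability is exposed directly through the Gemini API. The following is a minimal sketch, assuming the google-generativeai Python client, an API key stored in the GOOGLE_API_KEY environment variable, and a hypothetical local image file named chart.png:

import os
import google.generativeai as genai
from PIL import Image

# Authenticate with an API key from the environment (assumed to be set).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Gemini models accept mixed text and image parts in a single request.
model = genai.GenerativeModel("gemini-1.5-flash")
image = Image.open("chart.png")  # hypothetical placeholder file

response = model.generate_content(
    ["Summarize the trend shown in this chart in two sentences.", image]
)
print(response.text)

The same pattern extends to audio and video, which the client handles through file uploads, so a single request can draw on several modalities at once.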
Thus, while the possibilities of AI are vast and inspiring, Ibrahim emphasizes the importance for all organizations — not just Google DeepMind — to place responsible AI practices at the core of their efforts.
“We're really working towards how we build AI responsibly to benefit humanity. It's really how we bring this then into the external world,” Ibrahim said. “So, when we release the technology and even beforehand, how do we think about the impact that it might have on society and how do we make sure that we're doing this in the most responsible way?”
2. Inclusivity required
As part of its commitment to responsible AI, Google DeepMind recognizes that a broader range of voices will lead to more ethical and beneficial AI systems.
“Unlike computers or the Internet, where it required a lot of hardware, infrastructure, [and] policies, now a model is released, and it’s available worldwide. There are no borders,” Ibrahim said. “So, we need to be thinking about how we are ensuring that communities are ready to take this technology forward.”
To address these challenges, Ibrahim says it’s important to weigh both near-term and long-term implications, accounting for opportunities as well as potential risks like bias and misuse. This approach is reflected in Google DeepMind’s work on AlphaFold, its groundbreaking AI system for predicting the 3D structure of proteins, a breakthrough with the potential to transform fields like drug discovery, disease understanding, and climate change mitigation. From an early stage, the team consulted outside experts in bioethics and a wide range of stakeholders to assess the impact of releasing the system.
To gain more diverse perspectives, Google DeepMind actively engages with different communities, including educators, artists, and people with disabilities, to understand their needs and ensure that AI is developed and deployed in an inclusive way. In addition, the team is working to build a robust, inclusive talent pipeline that can help shape how Google develops and applies AI.
“AI really deserves and requires exceptional care — both in how it's built and how it's used. If we truly want to benefit humanity, we need to have a more diverse view of what's happening, of how AI might land,” Ibrahim said.
3. AI as a collaborator
In the coming years, Google DeepMind believes these new advancements are paving the path to achieving artificial general intelligence (AGI) — AI systems that can match human intelligence across a range of tasks. While this goal remains conceptual, Kavukcuoglu says a future where AI can serve as a collaborator, augment human intelligence and creativity, help solve complex problems, and ultimately improve our lives is much closer than we think.
“We have reached a point where the basics of our technology are proven. We know how to develop algorithms that can learn from data. We know how to develop agents that can experiment with a given environment and learn how to interact with that environment. We are developing technologies that understand different modalities and generate different synthetic data in those different modalities. Now, it’s about combining these together,” he said.
Already, the Google DeepMind team is laying the groundwork for AGI, focusing on building general-purpose technology that can help many people accomplish a wide variety of tasks.
“We are not looking to build intelligence that is specific to a particular area. What we want is general technology that can do a lot of different things, that can be a multiplier to human intelligence and human creativity,” Kavukcuoglu said. “In Gemini, there has been a huge investment from the beginning on being natively multimodal. We have started to see really exciting applications being able to understand text, images, videos, and audio all at the same time, and making sense of it seamlessly.”
Overall, Kavukcuoglu and Ibrahim offer a compelling vision of AI as more than a technology: a partner in progress, helping us unlock human potential and build a better world. The key will be engaging in open and thoughtful discussions about its implications and our responsibilities. Only then can we harness its power for good and ensure a future where AI truly benefits all of humanity.
Google DeepMind is at the heart of the world’s AI innovation. Listen to the full podcast episode to learn more from Google DeepMind leaders Koray Kavukcuoglu and Lila Ibrahim about what the future holds for AI and their commitment to its safety and responsibility, both in how it's built and how it's used.