Security & Identity

Cloud CISO Perspectives: Building better cyber defenses with AI

February 28, 2024
Phil Venables

VP, TI Security & CISO, Google Cloud

Charley Snyder

Head of Security Policy, Google


Welcome to the second Cloud CISO Perspectives for February 2024. Today, Charley Snyder, a Google expert on the intersection of security and public policy, talks about our announcements at February’s Munich Security Conference — and why the conference plays a vital role in public policy conversations.

As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.

--Phil Venables, VP, TI Security & CISO, Google Cloud

How we plan to build better cyber defenses with AI

By Charley Snyder, Head of Security Policy, Google

Artificial intelligence (AI) has taken the world by storm, and no technology since the invention of the internet has as much potential to upend our digital world. While much of the debate over the past year has focused on how AI can introduce new risks, we also think disruption can lead to positive change. After all, there’s no shortage of long-standing problems in security — and our work with AI shows the technology has tremendous potential to help.


At the Munich Security Conference earlier this month, we launched the AI Cyber Defense Initiative. At this initiative’s heart is a belief, backed by new commitments and investments, that AI has the potential to strengthen rather than weaken the world’s cyber defenses — if we build the right foundations today. The initiative includes a policy and technology agenda detailed in our new whitepaper, Secure, Empower, Advance: How AI Can Reverse the Defenders’ Dilemma.

As part of our initiative, we made five AI and security-focused announcements:

  • We are awarding millions of dollars in new research grants to leading academic institutions.
  • We have opened the source code of one of our models to help organizations detect threats.
  • We launched a new cohort of 17 startups in our Google for Startups program that are applying AI to cybersecurity challenges.
  • We expanded our $15 million Google.org Cybersecurity Seminars program to cover all of Europe.
  • We announced our commitment to invest billions of dollars to bring secure, AI-ready infrastructure to organizations of all sizes in Europe.

Moving from AI risk to AI impact

Each year, the Munich Security Conference brings together world leaders to discuss pressing geopolitical challenges. This year, topics included renewed conflict in the Middle East, an ongoing war in Europe, and a pivotal moment for democracy, with billions of people voting in national elections in 2024. At the conference, the role of technology, and particularly AI, was top of mind across all these issues. Leaders are concerned about the destabilizing effect of AI amidst an already unstable geopolitical climate.


Over the past year, most of my conversations on AI with policymakers, operational leaders, and technologists have focused on the risks of the technology. Governments have taken important steps together with companies and other civil society stakeholders to address and mitigate these risks, through initiatives like the U.S. AI Executive Order and industry-led initiatives like our Secure AI Framework (SAIF), which is a vehicle to collaborate on best practices for securing AI systems.

There is still more work to do to ensure AI technology is safe and secure, but if we focus solely on the risks to avoid, we will never achieve its transformative potential. It seems like every day at Google we find new ways to apply AI to improve security tooling and help defenders, which is why we are calling for a broader discussion across government and industry to drive the technology in a positive direction.

Helping defenders lock in the AI advantage

One of the most important questions about AI is whether it will help attackers or defenders more. Attackers, of course, are every bit as focused on exploiting the potential of AI as defenders are committed to using it to protect data, systems, infrastructure, and human life. We believe a few factors cut in favor of defenders:

  • The security community has a (fragile) data advantage over attackers. Defenders have often had too much data without effective mechanisms to convert it into positive security outcomes. With AI, we can build on the community’s progress over the past decade in sharing threat information and telemetry, using that data to train better models and, in turn, using those models to build better defenses (a toy sketch of this data-to-model loop follows this list).
  • AI has huge potential to make the global terrain of cyberspace more defensible by improving secure software development and deployment. In the future, we think AI-powered development tools can guide engineers towards code and configuration that is secure by design, and verify formal properties of a system to ensure that it does not create new safety, privacy, or compliance risks.
  • Generative AI is particularly well-suited to reducing (and possibly eliminating) toilsome tasks and to helping train defenders through natural language instruction. In aggregate, AI will help upskill defenders and even uplevel organizations that don’t have any defenders at all, helping to clean up the web.
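
Here is a toy sketch, not from the whitepaper, of the data-to-model loop described in the first bullet above: pooled, community-shared threat telemetry trains a detector that any single defender can then apply to its own new events. It assumes Python with scikit-learn; the feature names, values, and labels are invented for illustration.

```python
# Toy illustration of turning shared threat telemetry into a detection
# model. Features and labels are invented for this sketch.
from sklearn.ensemble import RandomForestClassifier

# Each row is telemetry for one event:
# [bytes_out_kb, distinct_ports, failed_logins, is_new_binary]
shared_telemetry = [
    [120, 2, 0, 0],      # routine traffic
    [98000, 40, 12, 1],  # known-bad: exfiltration pattern
    [300, 1, 1, 0],      # routine traffic
    [74000, 35, 9, 1],   # known-bad: exfiltration pattern
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = malicious (community-triaged)

# Train on the pooled community data...
model = RandomForestClassifier(random_state=0).fit(shared_telemetry, labels)

# ...then score new, unlabeled events from a defender that could never
# have assembled this training set alone.
new_event = [[81000, 38, 10, 1]]
print(model.predict(new_event))  # [1] -> flag for investigation
```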


Across all of these examples, organizations that use cloud services will particularly benefit. The immense size and scale of data and training needed to build effective AI models naturally favor the cloud. While similar gains may be possible on-premises, they are unlikely: models are far easier to train, and to rapidly improve, with the data available in the cloud.

What's next for AI and security

As AI evolves, the asymmetries in cyber defense and offense will continue to shift, and it will be an ongoing challenge for organizations and governments to adapt their cybersecurity strategies accordingly. We need a bold research and policy agenda to ensure that these asymmetries tip in favor of defenders, one that can unlock the science and create structural conditions to provide maximum leverage for defenders — and also limit the potential for malicious use.

To ensure that AI develops in ways that help defenders, we make three sets of recommendations:

  • Secure AI from the ground up by prioritizing secure-by-design practices and developing guardrails on autonomous cyber defenses. Much of the policy attention to date has been on the long-tail safety risks presented by AI models, yet models make up only one part of technology products. Attackers choose the path of least resistance to accomplish their end goals, and we believe this will seldom require attacks on the model itself.

We cannot ignore the more “mundane” security risks that can be introduced throughout the lifecycle of AI systems (from pre-training through deployment and runtime) and at all layers of the technology stack (including hardware, operating systems, protocols, and APIs). Otherwise, AI security technology will itself become a vector of vulnerability.

  • Empower defenders over attackers by promoting a balanced regulatory approach to AI usage and adoption to avoid a future where attackers can innovate but defenders cannot. AI governance choices made today can shift the terrain in cyberspace in unintended ways.

Regulations that prohibit training models on publicly available data, for example, will only prevent rule-abiding companies from benefiting from those datasets; attackers will not be similarly constrained. We need to work together to limit harm from AI while ensuring defenders can adopt beneficial uses at scale — particularly for organizations in high-risk fields such as critical infrastructure and the public sector.

  • Advance research cooperation to generate scientific breakthroughs that enable new paradigms for security and software development. The research community must play a central role here: testing and evaluating new security technologies, assessing and prioritizing risks, and introducing innovations that can help eliminate entire classes of threats. While existing publications tend to focus on demonstrating attacks on or using AI, we should prioritize research into building defenses against or with AI.

You can learn more about how Google is boldly and responsibly building AI tools at our AI and Security site.

In case you missed it

Here are the latest updates, products, services, and resources from our security teams so far this month:

  • Here are 5 gen AI security terms busy business leaders should know: Want to better understand ongoing AI developments — and what you can do about them? Learn these 5 terms at the intersection of AI, security, and risk. Read more.
  • How generative AI is altering the enterprise security landscape: Symantec security veteran Adam Bromwich discusses gen AI’s impacts on cybersecurity with Jeff Reed, Google Cloud’s vice president of Cloud Security. Read more.
  • A year in the cybersecurity trenches with Mandiant Managed Defense: Looking back at 2023, the Mandiant Managed Defense team highlights key observations from its cybersecurity engagements. Read more.
  • Deutsche Börse’s cloud transformation: 10 key considerations for a security runbook: As part of its cloud transformation, Deutsche Börse developed a long-term security plan. To help other financial services organizations, they’re sharing what they did. Read more.
  • How data sovereignty can help create economic opportunity in Germany: GovMarket is a new digital marketplace that can help drive innovation in procurement for public institutions in Germany. Here’s how. Read more.
  • Want your cloud to be more secure? Stop using service account keys: There are no simple solutions to securing cloud credentials, but one way to get started is to stop using service account keys (see the keyless sketch after this list). Here’s how. Read more.
  • Wrangle your alerts with open source Falco and the gcpaudit plugin: You can use the open-source runtime security platform Falco with Google Kubernetes Engine to monitor cluster and container workload security. Here’s how. Read more.
  • Nom for security: A proactive security review of Nomulus: As part of our goal to keep Google and Alphabet products secure and our users safe, our Information Security Engineering team proactively reviews products outside of the product design and launch lifecycle. Our latest review focused on the open-source project Nomulus. Read more.
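
On the service account keys item above, here is a minimal sketch of the keyless pattern it points toward, assuming the google-auth and google-cloud-storage Python packages and an environment that already carries ambient credentials (an attached service account, GKE workload identity, or a developer’s gcloud auth application-default login); the bucket-listing call is illustrative only.

```python
# Minimal sketch: authenticate to Google Cloud without a downloaded
# service account key, via Application Default Credentials (ADC).
import google.auth
from google.cloud import storage

# google.auth.default() resolves the ambient identity of wherever this
# runs (VM, GKE pod, Cloud Run service, or a developer workstation);
# no key file is created, stored, or rotated.
credentials, project_id = google.auth.default()

# Client libraries accept ADC-derived credentials directly,
# so application code stays key-free.
client = storage.Client(project=project_id, credentials=credentials)
for bucket in client.list_buckets():
    print(bucket.name)
```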

News from Mandiant

  • Unveiling our CTI program maturity assessment: As part of Google Cloud's continuing commitment to improving the overall state of cybersecurity, Mandiant is publicly releasing a web-based Intelligence Capability Discovery to help commercial and governmental organizations evaluate the maturity of their cyber threat intelligence (CTI) programs. Read more.
  • Riding dragons: capa harnesses Ghidra: As part of the Google Summer of Code project, the latest release of our popular reverse engineering tool, capa v7, features integration with Ghidra, bringing capa’s detection capabilities directly to Ghidra’s user interface. Read more.
  • Defending against ScreenConnect vulnerabilities: Mandiant released a remediation and hardening guide that defenders can use to help protect their organizations from recently announced ConnectWise ScreenConnect vulnerabilities. Read more.

Now hear this: Google Cloud Security and Mandiant podcasts

  • The techno-legal perspective on the cloud: Victoria Geronimo, security architect at Google Cloud, joins Cloud Security podcast hosts Anton Chuvakin and Tim Peacock to discuss the intersection of compliance, security, and cloud. Listen here.
  • Seeing secure cloud migration more clearly: Migrating to the cloud is a big deal. How can organizations best maintain and improve their security posture during a cloud migration? And how can migration be used to reduce risk? Merritt Baer, field CTO at Lacework, joins Anton and Tim to revisit the complex challenges and important victories in cloud migration. Listen here.
  • The North Korean IT workers: Mandiant Principal Analyst Michael Barnhart joins host Luke McNamara to discuss Mandiant’s research into the threat posed by the Democratic People’s Republic of Korea’s (DPRK) use of IT workers to gain access to enterprises. Listen here.

To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.
