Cloud CISO Perspectives: 3 promising AI use cases for cybersecurity

June 18, 2024
Phil Venables

VP, TI Security & CISO, Google Cloud


Welcome to the first Cloud CISO Perspectives for June 2024. In this update, I’m reviewing three of the more promising use cases for AI in cybersecurity.

As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.

--Phil Venables, VP, TI Security & CISO, Google Cloud

3 promising AI use cases for cybersecurity

By Phil Venables, VP, TI Security & CISO, Google Cloud

The potential for artificial intelligence to dramatically change cybersecurity and benefit defenders in the long run is something I’ve long believed in. At Google Cloud, we believe that AI can help defenders move faster by automating tasks that, until now, have required laborious toil from security experts.

While we have a ways to go to reach that goal of full automation, there are already several promising use cases for AI in cybersecurity that are delivering assistive capabilities. Three worth highlighting right now are malware analysis, summarization and natural-language search for security operations teams, and using AI to speed up the patching process.


AI for malware analysis

Malware is one of the oldest threats we face, and attackers have been able to spawn malware variants with alarming speed. As the number of variants has increased, so have the workloads for defenders, particularly malware analysts. This is where automation can help.

We recently tested the ability of our own Gemini 1.5 Pro to analyze malware. We used a simple prompt, provided code for the model to analyze, and asked it to tell us whether the file was malicious. We also asked it to generate a list of activities and indicators of compromise.

We had Gemini 1.5 Pro look at multiple known malware files, testing decompiled and disassembled code. It was notably accurate each time, and generated summarized reports in human-readable language.

Thanks to its 1 million token context window, Gemini 1.5 Pro was able to process malware code in a single pass, routinely in 30 to 40 seconds. Other foundation models we tested couldn't do this, and their analyses were often less accurate as a result. One of the samples we tested Gemini 1.5 Pro on was the entire decompiled code of the WannaCry malware file. The model analyzed it in a single pass, taking 34 seconds to deliver its analysis and identify the kill switch.
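
For readers who want to experiment with something similar, here is a minimal sketch of the approach using the publicly available google-generativeai Python SDK. The prompt wording and the sample file name are illustrative assumptions, not the exact prompt or sample from our tests.

```python
# Minimal sketch, not the exact prompt or harness used in our tests:
# ask Gemini 1.5 Pro whether a decompiled file is malicious and to list
# its activities and indicators of compromise (IOCs).
# Requires: pip install google-generativeai, plus an API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# The 1M-token context window lets an entire decompiled file fit in one pass.
with open("sample_decompiled.c") as f:  # hypothetical sample file
    decompiled_code = f.read()

prompt = (
    "You are a malware analyst. Analyze the following decompiled code. "
    "State whether the file is malicious, list the activities it performs "
    "and any indicators of compromise, then summarize your findings.\n\n"
    + decompiled_code
)

response = model.generate_content(prompt)
print(response.text)  # human-readable report: verdict, activities, IOCs
```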

“Gemini 1.5 Pro was even able to make an accurate determination of code that — at the time — was receiving zero detections on VirusTotal,” said Google and Mandiant experts in their report of the experiment. We recently announced that Gemini 1.5 Pro will support a 2 million token context window, which can continue to transform malware analysis at scale as we work to deliver better outcomes for defenders.

Using AI to boost SecOps teams

Security operations teams currently require an enormous input of manual labor. There are ways we can use AI to reduce that burden, help train new team members faster, and use summarization to accelerate process-intensive tasks such as analyzing threat intelligence and sifting through the noise of case investigations.

We also need models that understand the nuances of security practice. Our security-specialized AI API, SecLM, combines multiple models, business logic, retrieval, and grounding into a cohesive system tuned for security-specific tasks. It benefits from the latest advances in AI from Google DeepMind, as well as Google’s industry-leading threat intelligence and security data.

One of the more valuable ways that AI can boost SecOps is by helping onboard new team members. AI enables natural-language queries that can reliably generate searches, so analysts don't have to memorize proprietary SecOps platform query languages.
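
As a hedged illustration of how that can work, the sketch below uses the same public SDK to translate an analyst's plain-English question into a platform search query. The query-language subset in the prompt is a hypothetical stand-in, not the actual syntax of any SecOps product.

```python
# Sketch: translating a natural-language question into a SecOps search query
# so new analysts don't have to memorize a proprietary query language.
# The field names and operators below are hypothetical, for illustration only.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

QUERY_LANGUAGE_GUIDE = """
Fields: metadata.event_type, principal.user.userid, target.ip, metadata.time
Operators: =, !=, AND, OR
Example: metadata.event_type = "USER_LOGIN" AND principal.user.userid = "jdoe"
"""

question = "Show me failed logins for admin accounts in the last 24 hours"

prompt = (
    "Translate the analyst's question into a search query, using only the "
    "syntax described below. Return only the query.\n"
    f"{QUERY_LANGUAGE_GUIDE}\nQuestion: {question}\nQuery:"
)

print(model.generate_content(prompt).text)
```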

Customers like Pfizer and Fiserv are using natural language queries with Gemini in Security Operations to help new team members onboard faster, enable analysts to find answers more quickly, and improve the overall efficiency of their security operations programs.

Similarly, AI-generated summaries can help save time consolidating threat research and explaining complex information in clear, concise, natural language. The director of information security at a leading multinational professional services organization told Google Cloud that AI summaries provided by Gemini in Threat Intelligence can help draft an overview of the threat actor, including details about relevant and associated entities, and which regions they're targeting.

“The information flows really smoothly and helps us gather the intelligence we need in a fraction of the time,” the customer said.

AI can also help generate summaries of investigations. As security operations center teams manage increasing volumes of data, it becomes vital for them to detect, validate, and respond to events even more quickly than before. Natural-language searches and investigation summaries can go a long way toward helping teams find high-risk needles in haystacks of alerts, and then take action.
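
Here is a minimal sketch of that summarization step, again with the public SDK; the alert records are fabricated examples, not real case data.

```python
# Sketch: condensing a pile of case alerts into a short SOC handoff summary.
# The alert records are fabricated examples for illustration.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

alerts = [
    {"time": "2024-06-18T03:12Z", "rule": "impossible_travel", "user": "jdoe"},
    {"time": "2024-06-18T03:15Z", "rule": "mass_file_download", "user": "jdoe"},
    {"time": "2024-06-18T03:20Z", "rule": "new_admin_grant", "user": "jdoe"},
]

prompt = (
    "Summarize this case for a SOC handoff in three sentences: what appears "
    "to have happened, how confident we should be, and the recommended next "
    "step.\n" + json.dumps(alerts, indent=2)
)

print(model.generate_content(prompt).text)
```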

Using AI to scale security solutions

In January, Google’s Machine Learning for Security team released a fuzzing framework as a free, open-source tool to help researchers and developers improve fuzzing’s ability to find vulnerabilities. The team instructed AI foundation models to write project-specific code that could improve fuzzing coverage, with the goal of finding more vulnerabilities. This approach was implemented in OSS-Fuzz, our free service that runs fuzzers for open-source projects and privately alerts developers when vulnerabilities are detected.
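
To make the idea concrete, here is a sketch of the kind of project-specific fuzz target a model might be asked to generate, written for atheris, the coverage-guided Python fuzzing engine OSS-Fuzz uses for Python projects. The mypkg.parse_config entry point is a hypothetical stand-in for real project code.

```python
# Sketch of an LLM-generated, project-specific fuzz target for a Python
# project, using atheris (the Python fuzzing engine OSS-Fuzz runs).
# `mypkg.parse_config` is a hypothetical stand-in for real project code.
import sys

import atheris

with atheris.instrument_imports():
    from mypkg import parse_config  # hypothetical project entry point


def TestOneInput(data: bytes) -> None:
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(4096)
    try:
        parse_config(text)
    except ValueError:
        pass  # expected parse errors are fine; crashes and hangs are bugs


atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```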

The experiment was a success: The AI-generated fuzz targets expanded coverage across more than 300 OSS-Fuzz projects, and the effort found new vulnerabilities in two projects that had already been fuzzed for years.

“Without the completely LLM-generated code, these two vulnerabilities could have remained undiscovered and unfixed indefinitely,” the team wrote.

The team also used AI to help patch vulnerabilities. They built an automated pipeline in which foundation models analyze software for vulnerabilities, generate candidate patches, and test those fixes before the best candidates are selected for human review.
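
A sketch of that pipeline's control flow is below. Every helper name, the diff format, and the test command are hypothetical stand-ins; the point is the shape of the loop, not the team's actual tooling.

```python
# Sketch of an analyze -> patch -> test -> triage loop like the one described
# above. Helper names, the diff format, and the test command are hypothetical;
# this shows control flow only, not the team's actual pipeline.
import subprocess

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")


def generate_patch(source: str, finding: str) -> str:
    """Ask the model for a unified diff that fixes the reported issue."""
    prompt = (
        f"Vulnerability report:\n{finding}\n\nSource file:\n{source}\n\n"
        "Write a minimal unified diff that fixes the vulnerability."
    )
    return model.generate_content(prompt).text


def passes_tests(patch: str) -> bool:
    """Apply a candidate patch and run the test suite (ideally in a
    scratch checkout, so candidates are evaluated in isolation)."""
    applied = subprocess.run(["git", "apply"], input=patch.encode())
    if applied.returncode != 0:
        return False
    return subprocess.run(["pytest", "-q"]).returncode == 0


def triage(source: str, findings: list[str]) -> list[str]:
    """Return only the candidate patches that pass tests, for human review."""
    return [p for p in (generate_patch(source, f) for f in findings)
            if passes_tests(p)]
```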

Clearly, there is growing and important potential for using AI to help track down and fix vulnerabilities. Well-crafted AI tools can also transform security by delivering a sequence of stacked benefits: small improvements that, taken together, can help productivity skyrocket.

To ensure that AI foundation models are developed responsibly, we believe they will need to be governed by our Secure AI Framework, or a similar risk-management foundation, to maximize impact and mitigate risk.

To learn more, you can contact us at Ask Office of the CISO and come meet us at our security leader events. You can also hear more about our product vision for AI-powered security in our upcoming Security Talks event on June 26.

In case you missed it

Here are the latest updates, products, services, and resources from our security teams so far this month:

  • Join the latest Google Cloud Security Talks on the intersection of AI and cybersecurity: We’re bringing together experts from Google Cloud and the broader Google security community on June 26 to share insights, best practices, and actionable strategies to strengthen your security posture. Read more.
  • Lightning-fast decision-making: How AI can boost OODA loop impact on cybersecurity: Long used in boardrooms, the OODA loop can help leaders make better, faster decisions. Make OODA loops even more effective with an AI boost. Here’s how. Read more.
  • Google named a Leader in the Cybersecurity Incident Response Services Forrester Wave report: Google was named a Leader in The Forrester Wave: Cybersecurity Incident Response Services, Q2 2024. Read more.
  • Move from always-on privileges to on-demand access with Privileged Access Manager: To help mitigate the risks associated with excessive privileges and misuses of elevated access, we are excited to announce Google Cloud’s built-in Privileged Access Manager. Read more.
  • How you can build a FedRAMP High-compliant network with Assured Workloads: Securely deploying a network architecture that aligns with FedRAMP High? We’ve outlined several best practices. Here’s how you can use them. Read more.
  • Using BigQuery Encrypt and Decrypt with Sensitive Data Protection: BigQuery now integrates with Sensitive Data Protection with native SQL functions that allow interoperable deterministic encryption and decryption. Read more.
  • How European customers benefit today from the power of choice with Google Sovereign Cloud: Google Sovereign Cloud’s collaboration with customers, local sovereign partners, governments, and regulators has grown. Read on to learn how. Read more.

Threat Intelligence news

  • Threat actors target Snowflake customer instances for data theft and extortion: Mandiant has identified a financially motivated threat campaign targeting Snowflake customer database instances with the intent of data theft and extortion. Read more.
  • Insights on cyber threats targeting users and enterprises in Brazil: As Brazil’s economic and geopolitical role in global affairs continues to rise, threat actors from an array of motivations will further seek opportunities to exploit the digital infrastructure that Brazilians rely upon across all aspects of society. Read more.
  • Phishing for gold: Cyber threats facing the 2024 Paris Olympics: Mandiant assesses with high confidence that the Paris Olympics faces an elevated risk of cyber threat activity, including cyber espionage, disruptive and destructive operations, financially motivated activity, hacktivism, and information operations. Read more.
  • Ransomware rebounds: Extortion threats surge in 2023: Mandiant observed an increase in ransomware activity in 2023 compared to 2022, based on a significant rise in posts on data leak sites and a moderate increase in Mandiant-led ransomware investigations. Read more.

Now hear this: Google Cloud Security and Mandiant podcasts

  • Google on Google Cloud: How Google secures its own cloud use: Google uses its own cloud, but we’re in an obviously unique position. How is Google like other cloud customers, and how is it different? How do those factors affect our cloud security approach? Seth Vargo, principal software engineer responsible for Google's use of the public cloud, gets cloud introspective with hosts Anton Chuvakin and Tim Peacock. Listen here.
  • From public sector to Threat Horizons, meet Crystal Lister: Our technical program manager Crystal Lister joins Anton and Tim to discuss her career path, from public sector to Google Cloud Security, to spearheading our Threat Horizons reports. Listen here.
  • Defender’s Advantage: Lessons learned from responding to cloud compromises: Mandiant consultants Will Silverstone and Omar ElAhdan discuss their research into cloud compromise trends over 2023 with host Luke McNamara. Listen here.

To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.
