Security & Identity

How AI can improve digital security

March 24, 2023
Phil Venables

VP/CISO, Google Cloud

Royal Hansen

VP, Engineering for Privacy, Safety, and Security

AI is having a transformative moment and causing profound shifts in what’s possible with technology. It has the power to unlock the potential of communities, companies, and countries around the world, bringing meaningful and positive change that could improve billions of people’s lives. As these technologies advance, they also have the potential to vastly improve how we identify, address, and reduce security risks.

We’re at a key moment in our AI journey

Breakthroughs in generative AI are fundamentally changing how people interact with technology. At Google Cloud, we’re committed to helping developers and organizations stay on top of these developments. That’s why we recently announced new generative AI capabilities for our Google Cloud AI portfolio and committed to launching a range of products that responsibly infuse generative AI into our offerings.

AI principles sit at the core of this work. We were among the first to introduce and advance responsible AI practices, and these principles serve as an ongoing commitment to our customers worldwide who rely on our products to build and grow their businesses safely. Please read our blog on our AI principles in practice for more information.

One of the benefits of our experience using AI to solve real-world problems is that we have become better at helping secure new technologies as they become mainstream. At the same time, we’re leveraging recent AI advances to provide unique, up-to-date, and actionable threat intelligence, improving visibility across attack surfaces and infrastructure. We know that improving cybersecurity is no longer a human-scale problem, and we’re excited about continuing to work together to prepare for what’s to come. 

Our work is rooted in a basic principle: AI can have a major impact for good on the security ecosystem, but only if we’re being bold and responsible about how we deploy it. We look at this investment like a digital immune system — when we learn and adapt from previous risks to our digital health, our systems become better equipped to protect against, anticipate, and predict future attacks. To maximize the benefits of AI technologies and minimize risks, we take a three-pronged approach to secure, scale, and evolve.


1. Secure: Helping organizations deploy secure AI systems

We’re helping organizations deploy secure AI systems. We approach AI systems the same way we approach other security challenges: we bake in industry-leading security features (often invisible to users) and secure-by-default protections to keep our users safe. This includes technical controls, contractual protections, and third-party verifications or attestations.

In addition, we have standardized platforms and tools for machine learning that integrate with Google’s data protection, access control, and change management tools. Vertex AI, our machine learning platform for training and deploying ML models and AI applications, lets customers train models with minimal code and expertise across a broad range of modeling problems, which helps eliminate common mistakes, minimize misconfigurations, and reduce the attack surface. Vertex AI supplements our robust data governance platforms that control data gathering and classification, and we’re committed to the same data responsibilities for machine learning data that we have for conventional data processing.
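As a rough illustration, here is a minimal sketch of deploying an already-trained model to a managed endpoint with the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, region, bucket path, display name, and serving container tag are placeholders, and a real deployment would layer on the data governance and access controls described above.

```python
# A minimal sketch of deploying a trained model on Vertex AI with the
# google-cloud-aiplatform SDK. Project, region, paths, and the container
# tag are placeholders for illustration only.
from google.cloud import aiplatform

# Initialize the SDK against a specific project and region (placeholders).
aiplatform.init(project="my-project", location="us-central1")

# Upload the trained model artifact; Vertex AI provides the serving
# infrastructure, and access is governed by the project's IAM policies.
model = aiplatform.Model.upload(
    display_name="fraud-classifier",
    artifact_uri="gs://my-bucket/models/fraud-classifier/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploy to a managed endpoint; autoscaling bounds limit cost and blast radius.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=2,
)

# Online prediction goes through the managed endpoint, not the raw artifact.
prediction = endpoint.predict(instances=[[0.1, 0.2, 0.3, 0.4]])
print(prediction.predictions)
```

Because the endpoint is a managed resource, the project’s usual IAM, audit logging, and change management controls apply to it without extra wiring in the code itself.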


2. Scale: Leveraging the power of AI to achieve better security outcomes

We’re continuing to launch cutting-edge, AI-powered products and services to help organizations achieve better security outcomes at scale. Historically, the security community has taken a reactive approach to threats. While these efforts are important, they’re unsustainable. In today’s dynamic threat environment, organizations struggle to keep up with the pace and scope of attacks, often leaving defenders feeling outmatched.

While AI technologies don’t offer a one-stop solution for all security problems, we’ve seen a few early use cases emerge for how AI is helping to level the security playing field (a brief sketch of the first follows the list):

Detecting anomalous and malicious behavior; 

automating security recommendations; and

scaling the productivity of security specialists. 
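As a sketch of the first use case, the snippet below shows how an unsupervised model can flag logins that deviate from historical behavior. It uses scikit-learn’s IsolationForest on synthetic, invented features; it is a toy example, not how any Google product implements detection.

```python
# A hypothetical sketch of anomaly detection over login telemetry using an
# unsupervised IsolationForest; feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Features per login event: [hour_of_day, failed_attempts, bytes_transferred].
normal = np.column_stack([
    rng.normal(13, 3, 1000),      # logins cluster around business hours
    rng.poisson(0.2, 1000),       # occasional failed attempts
    rng.normal(5e4, 1e4, 1000),   # typical data volume
])
suspicious = np.array([[3.0, 12.0, 9e5]])  # 3 a.m., many failures, bulk export

# Fit on historical behavior; flag events that don't fit the learned profile.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 indicates an anomaly, 1 means inlier
```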

Here are just a few examples of how we currently use AI in our products to help relieve humans of the security burden of incredibly dynamic systems:

Gmail’s AI-powered spam-filtering capabilities block nearly 10 million spam emails every minute. This keeps 99.9% of phishing attempts and malware from reaching your inbox. 

Google’s Safe Browsing, an industry-leading service, uses AI classifiers running directly in the Chrome web browser to warn users about unsafe websites.

IAM recommender uses AI technologies to analyze usage patterns and recommend more secure IAM policies tailored to an organization’s environment. Once implemented, these recommendations can make cloud deployments more secure and cost-effective while maintaining performance (a sketch of reading them programmatically follows this list).

Chronicle Security Operations and Mandiant Automated Defense use integrated reasoning and machine learning to identify critical alerts, suppress false positives, and generate a security event score to help reduce alert fatigue.

Breach Analytics for Chronicle uses machine learning to calculate a Mandiant IC-Score, a data science-based “maliciousness” scoring algorithm that filters out benign indicators and helps teams focus on relevant, high-priority IOCs. These IOCs are then matched to security data stored in Chronicle to find incidents in need of further investigation. 

reCAPTCHA Enterprise and Web Risk use unsupervised learning models to detect clusters of hijacked and fake accounts, helping analysts speed up investigations, protect accounts, and minimize risk.

Cloud Armor Adaptive Protection uses machine learning to automatically detect threats at Layer 7, which contributed to detecting and blocking one of the largest DDoS attacks ever reported.  
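Picking up the IAM recommender item above, policy recommendations can also be read programmatically. The sketch below uses the google-cloud-recommender client to list IAM policy recommendations for a project; the project ID is a placeholder, and acting on a recommendation in practice warrants human review.

```python
# A minimal sketch of listing IAM policy recommendations through the
# Recommender API (google-cloud-recommender). The project ID is a placeholder.
from google.cloud import recommender_v1

client = recommender_v1.RecommenderClient()

# IAM policy recommendations are exposed under this well-known recommender ID.
parent = (
    "projects/my-project/locations/global/"
    "recommenders/google.iam.policy.Recommender"
)

for rec in client.list_recommendations(parent=parent):
    # Each recommendation describes a suggested policy change, such as
    # revoking a role binding that usage data shows is never exercised.
    print(rec.name)
    print(rec.description)
```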

Together, these capabilities help organizations take Google's AI and apply it to security challenges everywhere they operate (learn more at https://cloud.google.com/security/products/security-operations).


3. Evolve: Adopting a future-state mindset to stay ahead of threats

We’re also constantly evolving to stay ahead of threats. AI technologies present novel security risks, and we’re working to understand those risks to better protect AI deployments from potential attacks. We operate from the basic assumption that attackers will seek out these technologies and attempt to use them to circumvent defenses, and we’re building toward that future state. This includes advancing progress on important topics on the horizon, such as post-quantum cryptography and detecting attempts to evade voice verification with synthetic speech; staying on top of research into adversarial attacks on machine learning and AI systems; and partnering with customers to develop best practices, tools, and threat models that address typical AI interactions and risks.
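To illustrate the kind of adversarial machine learning research mentioned above, the toy sketch below applies the well-known fast gradient sign method (FGSM) to a small logistic classifier in plain NumPy. The weights and inputs are invented for the example; it shows why small, targeted perturbations are a real evasion risk, and is not a depiction of any production system.

```python
# A toy illustration of an adversarial evasion attack: the fast gradient sign
# method (FGSM) against a fixed logistic classifier. All numbers are invented.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" linear model: p(malicious) = sigmoid(w . x + b).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.2, -0.4, 0.9])   # originally classified malicious (p ~ 0.84)
p = sigmoid(w @ x + b)

# FGSM: nudge each feature against the sign of the logit's gradient (which is
# just w for a linear model) to push the score across the decision boundary.
eps = 0.7
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(w @ x_adv + b)

print(f"before: p={p:.2f}  after: p={p_adv:.2f}")  # ~0.84 -> ~0.24, evaded
```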

As far back as 2011, for example, we began using machine learning to detect potential attackers on our internal networks. Today, with investments in AI, we can detect our own Red Team attempting to attack our internal systems, and we continue to collaborate with research teams to perform red team attacks on, and with, the latest AI developments.

A continuous cycle

Together, our approach reinforces a continuous cycle where frontline intelligence meets AI-powered cloud innovation. We’ll continue to revisit and explore this topic as we work with developers, organizations, and the broader security community to advance bold and responsible AI capabilities.
