Security & Identity

New Mandiant AI security report: Boost fundamentals with AI to counter adversaries

March 13, 2026
Jurgen Kutscher

Vice President, Mandiant Consulting, Google Cloud


When threat actors started using AI, it was primarily as a productivity multiplier to draft phishing emails and troubleshoot code. That’s beginning to change: By the end of 2025, they were integrating large language models directly into malware for on-demand code generation.

This shift requires defenders to move beyond static security controls. As adversaries scale their operations with adaptive malware, the margin for error in environments has narrowed considerably. That alone should encourage security and business leaders to adopt behavioral analytics, API-level monitoring, and AI-boosted defenses.

We developed the new Mandiant AI risk and resilience report to provide organizations with guidance on navigating this evolution. By combining the frontline experience of Mandiant with the deep adversarial research of Google Threat Intelligence Group (GTIG), this report offers a unique perspective. In the report we cover three areas:

  • Adversarial use of AI: Threat actors are moving beyond simple productivity gains to deploying AI-orchestrated espionage and adaptive malware that dynamically obfuscates code to evade detection.
  • Securing AI systems: Mandiant conducted numerous AI system assessments, AI threat modeling exercises, and detection workshops globally. We see organizations encountering similar security challenges in their AI pipelines as they do in their overall infrastructure security.
  • AI-powered defense: We see AI's rapid transition into practical application in cyber defense and security operations, with specific use cases, such as threat hunting, gaining significant traction.

Recent Mandiant red team engagements and AI system assessments highlight how traditional application security flaws can have amplified consequences in AI workflows. For example, a log-based server-side request forgery (SSRF) attack gained unauthorized access to cloud resources simply by asking a chatbot to summarize an application log.
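To make that failure mode concrete, here is a minimal illustrative sketch (all names and log contents are hypothetical, not taken from the report): if an AI summarization tool is allowed to dereference URLs it finds in a log, a single attacker-planted log line can point it at an internal endpoint such as a cloud metadata service. The mitigation shown is a basic egress check before any fetch.

```python
# Hypothetical sketch: a log-summarization tool as an SSRF vector.
# An attacker writes a poisoned entry into a log the chatbot later summarizes;
# a naive agent that "fetches URLs for context" would hit the internal endpoint.
import re

APP_LOG = """\
2026-03-01T10:02:11Z INFO  user login ok user=alice
2026-03-01T10:02:14Z ERROR upstream fetch failed url=http://169.254.169.254/computeMetadata/v1/token
"""

# Link-local and private prefixes the tool should never reach out to.
BLOCKED_PREFIXES = ("169.254.", "127.", "10.", "192.168.")

def extract_urls(log_text: str) -> list[str]:
    """URLs a summarizer might be tempted to dereference for 'context'."""
    return re.findall(r"https?://\S+", log_text)

def is_safe_to_fetch(url: str) -> bool:
    """Minimal egress check: refuse link-local and private address ranges."""
    host = re.sub(r"^https?://", "", url).split("/")[0].split(":")[0]
    return not host.startswith(BLOCKED_PREFIXES)

for url in extract_urls(APP_LOG):
    if not is_safe_to_fetch(url):
        print(f"blocked SSRF attempt: {url}")
```

The point is not the allowlist itself (a production control would resolve DNS and check the full private ranges) but that the vulnerable step is ordinary input handling, not anything AI-specific.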

In recent assessments, Mandiant found organizations often lack application registries, software bills of materials (SBOMs) for AI, and standard security scanners capable of viewing AI-specific artifacts. However, many enterprises are making strides in empowering defenders with AI-enabled solutions, including Gemini.

At Google Cloud, we strongly recommend that organizations build robust governance for AI usage, defining clear policies and approval workflows to adopt these tools securely. Comprehensive AI governance acts as a maturity multiplier, according to recent research from the Cloud Security Alliance and Google Cloud: organizations with clear guidelines are nearly twice as likely (46%) to adopt advanced agentic AI, and to innovate with confidence and speed.

Alongside strong governance, businesses should embrace regular AI red teaming to stress-test systems and expose vulnerabilities before they can be exploited. The root causes of the most pressing challenges are rarely novel AI-specific attacks, but rather basic implementation flaws and missing IT hygiene.

During client engagements, Mandiant has seen how AI is transforming how security teams operate. Security operations center (SOC) teams are using AI for retrospective incident analysis, reviewing months of closed tickets to identify subtle commonalities and architectural issues that human analysts might miss.

When implemented securely, AI can become a powerful force multiplier for security operations. The shift from AI chatbots to interconnected systems will help organizations evolve toward agentic defense, where AI agents dynamically reason through tasks, automate complex workflows, and free human analysts to prioritize strategic challenges and investigations.

AI also acts as a helpful translation layer, allowing analysts to use natural language prompts to instantly generate complex endpoint queries, effectively lowering the barrier to entry for threat hunting and empowering more junior analysts. During high-pressure incident response, AI-driven tools can ingest raw telemetry to instantly construct a coherent, chronological narrative of an attack.
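The "translation layer" idea can be sketched in a few lines. This is an illustrative stand-in, not any real product's API: where a production system would call an LLM to generate the query, a small keyword-to-template mapping keeps the example self-contained, and the query syntax is a generic, hypothetical EDR dialect.

```python
# Hypothetical sketch of natural-language-to-endpoint-query translation.
# A real implementation would prompt an LLM; a keyword mapping stands in here.

QUERY_TEMPLATES = {
    # Hunt for encoded PowerShell, a common living-off-the-land technique.
    "powershell": 'process.name = "powershell.exe" AND process.cmdline CONTAINS "-enc"',
    # Hunt for registry run-key persistence.
    "persistence": 'registry.path CONTAINS "\\CurrentVersion\\Run"',
}

def translate(prompt: str) -> str:
    """Map an analyst's natural-language request to an endpoint query."""
    for keyword, query in QUERY_TEMPLATES.items():
        if keyword in prompt.lower():
            return query
    raise ValueError(f"no template matches prompt: {prompt!r}")

print(translate("Find encoded PowerShell executions on our endpoints"))
```

The value of the pattern is exactly what the paragraph describes: the analyst states intent in plain language, and the generated query encodes the syntax and field names a junior hunter may not yet know.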

We believe that navigating the year ahead requires a proactive stance. AI presents an unprecedented opportunity to elevate your organization's cyber defense and operational capabilities. We can help your teams establish foundational governance for AI adoption, conduct regular AI threat modeling and red teaming, and fully embrace AI-powered defense. Learn how Mandiant Consulting can help your organization today.
