Cloud CISO Perspectives: Our 2026 Cybersecurity Forecast report

Francis deSouza
Chief Operating Officer, Google Cloud
Get original CISO insights in your inbox
The latest on security from Google Cloud's Office of the CISO, twice a month.
SubscribeWelcome to the first Cloud CISO Perspectives for December 2025. Today, Francis deSouza, COO and president, Security Products, Google Cloud, shares our Cybersecurity Forecast report for the coming year, with additional insights from our Office of the CISO colleagues.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
Forecasting 2026: The year AI rewrites the security playbook
By Francis deSouza, COO, Google Cloud


We are at a unique point in time where we’re facing a generational refactoring of the entire technology stack, including the threat landscape. 2025 was a watershed year in cybersecurity, where AI moved to the forefront of every company’s agenda, changing the game for both security offense and defense.
While threats continue to intensify — with attackers using AI for sophisticated phishing and deepfakes — defenders also have been gaining ground. This year’s evolutions will continue to drive change in the coming year, and our annual Cybersecurity Forecast report for 2026 explores how today’s lessons will impact tomorrow’s cybersecurity across four key areas: artificial intelligence, cybercrime, nation-state threats, and regulatory obligations.
The report is a collaborative effort across our security teams, including the Google Threat Intelligence Group, Mandiant Consulting, Google Security Operations, Google Cloud’s Office of the CISO, and VirusTotal. Drawing from their work, I expect we’re going to see two significant shifts in enterprise security next year.
Organizations haven’t spent enough time preparing their workforces to use AI securely. It is essential that companies build a learning culture around security that includes true AI fluency.
1. The rise of agentic security automation
AI and agents will redefine how organizations secure their environment, turning the security operations center from a monitoring hub into an engine for automated action. This is critical because the window of opportunity has decreased; bad actors operate in hours, not weeks.
As data volumes explode, AI agents can give defenders a speed advantage we haven’t had in years. By stepping in to detect anomalies, automate data analysis, and initiate response workflows, your security teams can focus on the complex decisions that require human judgment. This shift won't just improve speed — it will drive similar gains in proactively strengthening your entire security posture.
2. Building AI fluency as a defense
We will likely see a wave of AI-driven attacks targeting employees, largely because the weak link in security remains the user. Organizations haven’t spent enough time preparing their workforces to use AI securely. It is essential that companies build a learning culture around security that includes true AI fluency.
Every organization should deploy something like our Model Armor to protect their AI models. Implementing a validation layer at the gateway level ensures that guardrails are active controls rather than just theoretical guidelines.
However, technology is only half the equation. We also need a security-conscious workforce. If we don't help our employees build these skills, teams simply won't be equipped to identify the new wave of threats or understand how best to defend against them.
This means looking past standard training, and investing in efforts, like agentic security operations center (SOC) workshops and internal cyber war games efforts, to help educate their employees on what the threat landscape looks like in an AI world.
Read on for the key points from the Cybersecurity Forecast report, bolstered with new insights from our Office of the CISO.
AI advantages
Widespread adoption of AI agents will create new security challenges, requiring organizations to develop new methodologies and tools to effectively map their new AI ecosystems. A key part of this will be the evolution of identity and access management (IAM) to treat AI agents as distinct digital actors with their own managed identities.
AI adoption will transform security analysts’ roles, shifting them from drowning in alerts to directing AI agents in an agentic SOC. This will allow analysts to focus on strategic validation and high-level analysis, as AI handles data correlation, incident summaries, and threat intelligence drafting.
The heightened capability of agentic AI to take actions and execute tasks autonomously elevates the importance of cybersecurity basics. Organizations will need to create discrete boundary definitions for the authorization, authentication, and monitoring of each agent.
Taylor Lehmann, director, health care and life sciences
A year from now, we're going to have an awesome security opportunity to secure a new persona in our organizations: Knowledge workers who produce truly useful, mission-critical applications and software using ideas and words — but not necessarily well-written, vetted, and tested code.
We're going to need better and more fine-grained paths to help these new “idea-native developers" who use powerful AI tools and agents to build, test, submit, manage and blast secure code into secure production as safely and as fast as they can. In 2026 and 2027, we're going to see how big this opportunity is. We should prepare to align our organizations, operations, and technology (OOT) to take advantage of it.
A corollary to this comes from our DORA reports: Just as AI has amplified productivity and begun optimizing work, it amplifies organizational dysfunctions — especially those that lead to inefficiently and ineffectively secured data.
Marina Kaganovich, executive trust lead
The heightened capability of agentic AI to take actions and execute tasks autonomously elevates the importance of cybersecurity basics. Organizations will need to create discrete boundary definitions for the authorization, authentication, and monitoring of each agent.
Beyond technical controls, organizational defense will depend on fostering an AI-literate workforce through training and awareness, as staff shift from performing tasks to architecting and overseeing agents. To be successful, organizations will require a fundamental shift in risk-informed culture.
Bill Reid, security advisor
Aggressive adoption of agentic AI will drive a renewed interest in threat modeling practices. Security teams will be asked to deeply understand what teams are trying to build, and will need to think about the data flows, the trust boundaries, and the guardrails needed.
Agentic AI will also demand that the supply chain be considered within that threat model, beyond the software bill of materials (SBOM), to look at how those services will control autonomous actions. It will also force a renewed look at identity and entitlements, as agents are asked to act on behalf of or as an extension of employees in the enterprise.
What may have been acceptable wide scopes covered by detective controls may no longer be sufficient, given the speed of action that comes with automation and the chaining of models together in goal seeking behavior.
Vesselin Tzvetkov, senior cybersecurity advisor
As Francis noted, agentic security operations are set to become the standard for modern SOCs, dramatically enhancing the speed and capabilities of security organizations. The agentic SOC in 2026 will feature multiple small, dedicated agents for tasks like summarization, alert grouping, similarity detection, and predictive remediation.
This shift will transform modern SOC roles and processes, moving away from tiered models in favor of CI/CD-like automation. AI capabilities and relevant know-how are essential for security personnel.
As AI drives new AI threat hunting capabilities to gain insight from data lakes in previously underexplored areas, such as OT protocols for manufacturing and industry-specific protocols like SS7 for telecommunications, the overall SOC coverage and overall industry security will improve.
Vinod D’Souza, director, manufacturing and industry
In 2026, agentic AI will help the manufacturing and industrial sector cross the critical threshold from static automation to true autonomy. Machines will self-correct and self-optimize with a speed and precision that exceeds human capacity.
The engine powering this transformation is the strategic integration of cloud-native SCADA and AI-native architectures. Security leaders should redefine their mandate from protecting a perimeter to enabling a trusted ecosystem anchored in cyber-physical identity.
Every sensor, service, autonomous agent, and digital twin should be treated as a verified entity. By rooting security strategies in data-centered Zero Trust, organizations stop treating security as a gatekeeper and transform it into the architectural foundation. More than just securing infrastructure, the goal is to secure the decision-making integrity of autonomous systems.
AI threats
We anticipate threat actors will move decisively from using AI as an exception to using it as the norm. They will use AI to enhance the speed, scope, and effectiveness of their operations, streamlining and scaling attacks.
A critical and growing threat is prompt injection, an attack that manipulates AI to bypass its security protocols and follow an attacker's hidden command. Expect a significant rise in targeted attacks on enterprise AI systems.
Threat actors will accelerate the use of highly manipulative AI-enabled social engineering. This includes vishing (voice phishing) with AI-driven voice cloning to create hyperrealistic impersonations of executives or IT staff, making attacks harder to detect and defend against.
The increasing complexity of hybrid and multicloud architectures, coupled with the rapid, ungoverned introduction of AI agents, will accelerate the crisis in IAM failures, cementing them as the primary initial access vector for significant enterprise compromise.
Anton Chuvakin, security advisor
We’ve been hearing about the sizzle of AI for some time, but now we need the steak to be served. While there’s still a place for exciting, hypothetical use cases, we need tangible AI benefits backed by solid security data of value and benefits obtained and proven.
Whether your company adopts agents or not, your employees will use them for work. Shadow agents raise new and interesting risks, especially when your employees connect their personal agents to corporate systems. Organizations will have to invest to mitigate the risks of shadow agents — merely blocking them simply won’t work (they will sneak back in immediately).
David Stone, director, financial services
As highlighted in the Google Threat Intelligence Group report on adversarial use of AI, attackers will use gen AI to exploit bad hygiene, employ deepfake capabilities to erode trust in processes, and discover zero-day vulnerabilities. Cyber defenders will likewise have to adopt gen AI capabilities to find and fix cyber hygiene, patch code at scale, and scrutinize critical business processes to get signals to find and stop exploitation of humans in the process.
Security will continue to grow in importance in the boardroom as the key focus on resilience, business enablement, and business continuity — especially as AI-driven attacks evolve.
Jorge Blanco, director, Iberia and Latin America
The increasing complexity of hybrid and multicloud architectures, coupled with the rapid, ungoverned introduction of AI agents, will accelerate the crisis in IAM failures, cementing them as the primary initial access vector for significant enterprise compromise.
The proliferation of sophisticated, autonomous agents — often deployed by employees without corporate approval (the shadow agent risk) — will create invisible, uncontrolled pipelines for sensitive data, leading to data leaks and compliance violations. The defense against this requires the evolution of IAM to agentic identity management, treating AI agents as distinct digital actors with their own managed identities.
Organizations that fail to adopt this dynamic, granular control — focusing on least privilege, just-in-time access, and robust delegation — will be unable to minimize the potential for privilege creep and unauthorized actions by these new digital actors. The need for practical guidance on securing multicloud environments, including streamlined IAM configuration, will be acutely felt as security teams grapple with this evolving threat landscape.
Sri Gourisetti, senior cybersecurity advisor
The increased adversarial use of AI for the development of malware modules may likely result in "malware bloat" — a high volume of AI-generated malicious code that is non-functional or poorly optimized, creating significant noise for amateur adversaries and defenders.
Functional malware will become more modular and mature, designed to be compatible and interact with factory floor and OT environments as the manufacturing and industrial sector moves beyond initial exploration of generative AI toward the structural deployment of agentic AI in IT, OT, and manufacturing workflows.
Widya Junus, strategy operations
Over 70% of cloud breaches stem from compromised identities, according to a recent Cloud Threat Horizons report, and we expect that trend to accelerate as threat actors exploit AI. The security focus should shift from human-centered authentication to automated governance of non-human identities using Cloud Infrastructure Entitlement Management (CIEM) and Workload Identity Federation (WIF).
Accordingly, as AI-assisted attacks lower the barrier for entry and cloud-native ransomware specifically targets APIs to encrypt workloads, organizations will increasingly rely on tamper-proof backups (such as Backup Vault) and AI-driven automated recovery workflows to ensure business continuity — rather than relying solely on perimeter defenses to stop every attack.
Cybercrime
The combination of ransomware, data theft, and multifaceted extortion will remain the most financially disruptive category of cybercrime. The volume of activity is escalating, with focus on targeting third-party providers and exploiting zero-day vulnerabilities for high-volume data exfiltration.
As the financial sector increasingly adopts cryptocurrencies, threat actors are expected to migrate core components of their operations onto public blockchains for unprecedented resilience against traditional takedown efforts.
As security controls mature in guest operating systems, adversaries are pivoting to the underlying virtualization infrastructure, which is becoming a critical blind spot. A single compromise here can grant control over the entire digital estate and render hundreds of systems inoperable in a matter of hours.
Next year, we’ll see the first sustained, automated campaigns where threat actors use agentic AI to autonomously discover and exploit vulnerabilities faster than human defenders can patch exploited vulnerabilities.
David Homovich, advocacy lead
In 2026, we expect to see more boards pressuring CISOs to translate security exposure and investment into financial terms, focusing on metrics like potential dollar losses and the actual return on security investment. Crucially, operational resilience — the organization’s ability to quickly recover from an AI-fueled attack — is a non-negotiable board expectation.
CISOs take note: Boards are asking us about business resilience and the impact of advanced, machine-speed attacks — like adversarial AI and securing autonomous identities such as AI agents. Have your dollar figures ready, because this is the new language of defense for boards.
Crystal Lister, security advisor
Next year, we’ll see the first sustained, automated campaigns where threat actors use agentic AI to autonomously discover and exploit vulnerabilities faster than human defenders can patch exploited vulnerabilities.
2025 showed us that adversaries are no longer leveraging artificial intelligence just for productivity gains, they are deploying novel AI-enabled malware in active operations. The ShadowV2 botnet was likely a test run for autonomous C2 infrastructure.
Furthermore, the November 2025 revelations about Chinese state-sponsored actors using Anthropic’s Claude to automate espionage code-writing demonstrates that barriers to entry for sophisticated attacks have collapsed. Our security value proposition should shift from detection to AI-speed preemption.
The global stage: Threat actors
Cyber operations in Russia are expected to undergo a strategic shift, prioritizing long-term global strategic goals and the development of advanced cyber capabilities over just tactical support for the conflict in Ukraine.
The volume of China-nexus cyber operations is expected to continue surpassing that of other nations. They will prioritize stealthy operations, aggressively targeting edge devices and exploiting zero-day vulnerabilities.
Driven by regional conflicts and the goal of regime stability, Iranian cyber activity will remain resilient, multifaceted, and semi-deniable, deliberately blurring the lines between espionage, disruption, and hacktivism.
North Korea will continue to conduct financial operations to generate revenue for the regime, cyber espionage against perceived adversaries, and seek to expand IT worker operations.
Sovereign cloud will become a drumbeat across most of Europe, as EU member states seek to decrease their reliance on American tech companies.
Bob Mechler, director, Telco, Media, Entertainment and Gaming
The telecom cybersecurity landscape in 2026 will be dominated by the escalation of AI-driven attacks and persistent geopolitical instability. We may witness the first major AI-driven cybersecurity breach, as adversaries use AI to automate exploit development and craft sophisticated attacks that outpace traditional defenses.
This technological escalation coincides with a baseline of state-backed and politically-motivated cyber-threat activity, where critical infrastructure is targeted as part of broader geopolitical conflicts. Recent state-sponsored campaigns, such as Salt Typhoon, highlight how adversaries are already penetrating telecommunications networks to establish long-term access, posing a systemic threat to national security.
Toby Scales, security advisor
Sovereign cloud will become a drumbeat across most of Europe, as EU member states seek to decrease their reliance on American tech companies.
At the same time, the AI capability gap will continue to widen and both enterprises and governments will chase agreements with frontier model providers. Regulatory bodies may seek to enforce “locally hosted fine-tuned models” as a way to protect state secrets, but will face predictable opposition from frontier model developers.
Meeting regulatory obligations
Governance has taken on new importance in the AI era. Key areas of focus are expanding to include data integrity to prevent poisoning attacks, model security to defend against evasion and theft, and governance fundamentals to ensure transparency and accountability.
CISOs and governance, risk, and compliance teams should work together to build an AI resilience architecture, establish continuous AI health monitoring, integrate AI into business continuity and incident response, and embed AI resilience into security governance.
Bhavana Bhinder, security, privacy, and compliance advisor
In 2026, we will see the validated AI operating model become the industry standard for healthcare and life sciences (HCLS), with a shift from pilot projects to organizations seeking full-scale production deployments that are compliant and audit-ready by design. The logical evolution for HCLS will move towards agentic evaluation, where autonomous agents act as real-time auditors.
Instead of periodic reviews, these agents will continuously validate that generative AI outputs (such as clinical study reports) remain factually grounded and conform to regulatory standards. Organizations using governed, quality-scored data necessary to trust advanced models like Gemini across the drug lifecycle, clinical settings, and quality management will depend on AI workflows that natively support industry- and domain-specific regulations.
Odun Fadahunsi, senior security risk and compliance advisor
As regulators and sectoral bodies in finance, healthcare and critical infrastructure define AI-specific resilience obligations, CISOs must treat AI resilience as a primary pillar of security, not a separate or optional discipline. AI systems are poised to become so deeply embedded in identity, fraud detection, customer operations, cloud automation, and decisioning workflows that AI availability and reliability will directly determine an organization’s operational resilience.
Unlike traditional systems, AI can fail in silent, emergent, or probabilistic ways — drifting over time, degrading under adversarial prompt, and behaving unpredictably after upstream changes in data or model weights. These failure modes will create security blindspots, enabling attackers to exploit model weaknesses that bypass traditional controls.
CISOs and governance, risk, and compliance teams should work together to build an AI resilience architecture, establish continuous AI health monitoring, integrate AI into business continuity and incident response, and embed AI resilience into security governance.
For more leadership guidance from Google Cloud experts, please see our CISO Insights hub.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
- Responding to React2Shell (CVE-2025-55182): Follow these recommendations to minimize remote code execution risks in React and Next.js from the React2Shell (CVE-2025-55182) vulnerability. Read more.
- How Google Does It: Securing production services, servers, and workloads: Here are the three core pillars that define how we protect production workloads at Google-scale. Read more.
- How Google Does It: Using Binary Authorization to boost supply chain security: “Don’t trust, verify,” guides how we secure our entire software supply chain. Here’s how we use Binary Authorization to ensure that every component meets our security best practices and standards. Read more.
- New data on ROI of AI in security: Our new ROI of AI in security report showcases how organizations are getting value from AI in cybersecurity, and finds a significant, practical shift is underway. Read more.
- Using MCP with Web3: How to secure blockchain-interacting agents: In the Web3 world, who hosts AI agents, and who holds the private key to operations, are pressing questions. Here’s how to get started with the two most likely agent models. Read more.
- Expanding the Google Unified Security Recommended program: We are excited to announce Palo Alto Networks as the latest addition to the Google Unified Security Recommended program, joining previously announced partners CrowdStike, Fortinet and Wiz. Read more.
- Why PQC is Google's path forward (and not QKD): After closely evaluating Quantum Key Distribution (QKD), here’s why we chose post-quantum cryptography (PQC) as the more scalable solution for our needs. Read more.
- Architecting security for agentic capabilities in Chrome: Following the recent launch of Gemini in Chrome and the preview of agentic capabilities, here’s our approach and some new innovations to improve the safety of agentic browsing. Read more.
- Android Quick Share support for AirDrop: As part of our efforts to continue to make cross-platform communication easier, we've made Quick Share interoperable with AirDrop, allowing for two-way file sharing between Android and iOS devices, starting with the Pixel 10 Family. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
- Intellexa’s prolific zero-day exploits continue: Despite extensive scrutiny and public reporting, commercial surveillance vendors such as Intellexa continue to operate unimpeded. Known for its “Predator” spyware, new GTIG analysis shows that Intellexa is evading restrictions and thriving. Read more.
- APT24's pivot to multi-vector attacks: GTIG is tracking a long-running and adaptive cyber espionage campaign by APT24, a People's Republic of China (PRC)-nexus threat actor that has been deploying BADAUDIO over the past three years. Here’s our analysis of the malware, and how defenders can detect and mitigate this persistent threat. Read more.
- Get going with Time Travel Debugging using a .NET process hollowing case study: Unlike traditional live debugging, this technique captures a deterministic, shareable record of a program's execution. Here’s how to start incorporating TTD into your analysis. Read more.
- Analysis of UNC1549 targeting the aerospace and defense ecosystem: Following last year’s post on suspected Iran-nexus espionage activity targeting the aerospace, aviation, and defense industries in the Middle East, we discuss additional tactics, techniques, and procedures (TTPs) observed in incidents Mandiant has responded to. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
- The truth about autonomous AI hacking: Heather Adkins, Google’s Security Engineering vice-president, separates the hype from the hazards of autonomous AI hacking, with hosts Anton Chuvakin and Tim Peacock. Listen here.
- Escaping 1990s vulnerability management: Caleb Hoch, consulting manager for security transformations, Mandiant, discusses with Anton and Tim how vulnerability management has evolved beyond basic scanning and reporting, and the biggest gaps between modern practices and what organizations are actually doing. Listen here.
- The art and craft of cloud bug hunting: Bug bounty professionals Sivanesh Ashok and Sreeram KL, have won the Most Valuable Hacker award from the Google Cloud VRP team. They chat about all things buggy with Anton and Tim, including how to write excellent bug bounty reports. Listen here.
- Behind the Binary: The art of deconstructing problems: Host Josh Stroschein is joined by Nino Isakovic, a long-time low-level security expert, for a thought-provoking conversation that spans the foundational and the cutting-edge — including his discovery of the ScatterBrain obfuscating compiler. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.

