Cloud CISO Perspectives: 5 top CISO priorities in 2026

Taylor Lehmann
Director, Office of the CISO
Get original CISO insights in your inbox
The latest on security from Google Cloud's Office of the CISO, twice a month.
SubscribeWelcome to the second Cloud CISO Perspectives for January 2026. Today, Taylor Lehmann, director of healthcare and life sciences, Office of the CISO, offers his insights after decades of experience on what CISOs should prioritize this year.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
5 top priorities for CISOs this year
By Taylor Lehmann, director, Office of the CISO


Life as a CISO means managing the constant influx of requests across the organization, yet these responsibilities have never been more meaningful. We have the privilege of leading at a critical turning point, when AI is reshaping the fabric of business. As the industry undergoes vast changes, we also must succeed in our role to guide our security teams and our business leaders on what the organization actually does, analyzing the threats to that service, and countering those threats.
It’s important that we right-size our security mindset from a high level, and also we should follow through with key security goals. While it may seem that there’s too much to do — every crisis, everywhere, all at once — returning to cybersecurity fundamentals can help us stay focused.
To help security leaders proactively tackle today’s shifting concerns, here are what we see as our top 5 CISO priorities for this year.
Identities are the central piece of digital evidence that ties everything together. Organizations need to know who's using AI models, what the model's identity is, what the code driving the interaction’s identity is, what the user's identity is, and be able to differentiate between those things — especially with AI agents.
1. Align strategics for compliance and resilience: Setting your cybersecurity goals to meet regulatory compliance requirements as the most important and only outcome is a good strategy for helping the bad guys win. Compliance is essential and non-negotiable, but in most industries compliance-driven efforts are almost always focused on addressing historical threats — not emerging activities, not threatening behavior we’re seeing in the wild, and not the very real consequences that follow a successful cyberattack.
Organizations should seek alignment between what they’re doing to comply with international laws and regulations as part of a larger operational resilience strategy and efforts to build resilient systems. Systems that are designed to stay up, running, and secure in the face of known and unknown threats demonstrate compliance as a natural by-product.
Given the technical and regulatory progress in AI, we know operational resilience is going to require alignment and coordination of these efforts to keep up with the speed at which these systems are evolving — and how they’re being used in our own companies.
2. Securing the AI supply chain: Determining why a model is right (or wrong) is going to be one of the key challenges of the AI era, and especially when that error occurs because of interference by a threat actor. CISOs should be prioritizing how they get and maintain continuous visibility into and the security of their entire AI supply chain.
Nothing short of end-to-end view and provenance of all AI system components and origins — models, data sources, applications and infrastructure services — will do. Google’s overall approach to supply-chain security is to support Supply-chain Levels for Software Artifacts (SLSA), supplemented by software bills of materials (SBOM), and we carry that forward to AI.
Securing the AI supply chain is vital to the success of any AI initiative, as we discussed in the previous Cloud CISO Perspectives, and we’ll be talking more about it throughout the year as we continue to work on the Secure AI Framework (SAIF).
3. Master identity: Identity management has always been important, but it is absolutely critical to have a robust plan for identities in the agentic world we now live in. Managing human and non-human identities and their access privileges is essential to mitigating new categories of risks that appear when we use non-deterministic systems to perform actions in the real world.
While incidents are inevitable, the blast radius can be controlled when we have crisp, well-defined abilities to identify and log the behavior of agentic actors. The goal is to control their access and see what they did in case we have to go back.
Identities are the central piece of digital evidence that ties everything together. Organizations need to know who's using AI models, what the model's identity is, what the code driving the interaction’s identity is, what the user's identity is, and be able to differentiate between those things — especially with AI agents.
Some industries, such as financial services, have so far done a good job of focusing on identity management, but others are lagging. For example, the healthcare industry is struggling to understand how far and by how much can agentic AI help in the treatment, diagnosis, and delivery of care. Some developers seem satisfied that the use of AI and agents will always be constrained because humans must be in the loop to make consequential medical decisions, but we’re seeing evidence that the market is heading in a very different direction.
Whether it’s machine identities, model identities, data identities, application identities, agent identities, or people identities, we all need to get really, really strong on identity to build tomorrow’s resilient systems today.
4. Defend (and fix, rebuild, and deploy) at machine speed: Weaponized AI systems can attack at lightning speed, so defense must also accelerate. To survive modern threats, organizations should automate their ability to detect, respond, and apply preventions to systems in seconds or milliseconds, not hours.
Strategically, we encourage Google Cloud customers to evaluate their architecture, systems, and applications, and measure how quickly they can deploy fixes, how they might operate in likely situations where services are degraded through high-likelihood attack types, and how quickly they could rebuild and re-deploy a system from scratch if they had to. Similarly, we advise our customers to set baselines for how quickly they can generate and deploy new detections and mitigations from threat intelligence.
Shrinking dwell times tell us that organizations should focus on continuously driving down how long it takes them to complete these defensive activities, from hours to minutes, or minutes to seconds. We recommend prioritizing any approach that can help deploy corrections and detect and respond to issues in the fastest way possible, including new ephemeral architectures, automated deployment tools, and vulnerability management tools with increasingly greater scope and pipelines to capture and cull system audit logs efficiently.
5. Uplevel AI governance through context, advanced testing and evaluation: Probably the most frequently-requested topic our customers ask about is how to effectively govern the use of AI systems, agents, and cloud. AI governance is crucial because it provides a holistic approach to shoring up AI defenses. Proper AI governance requires a combination of technical skill, understanding how AI works and is built, as well as the regulatory and business context to determine what is an important risk — and then doing something about it.
In 2026, we see our customer conversations bringing more business context to governance, with questions like, “How should we govern agentic AI systems involved in the dosing of [prescription] for [diagnosis] when the patient is in the emergency room?”
Driving more business context into increasingly-deep AI governance activities will unlock greater, more sophisticated, and valuable uses of AI systems and agents. Organizations are seeing the need to bring more advanced testing and evaluation activities to the table, and activities such as AI red teaming are becoming more common because they provide crucial feedback assessments.
How to get started
While organizations that have years of experience building Secure by Design into their culture are more likely to be able to jump on these priorities faster, the best place to start is where CISOs know that they have a deficit that aligns with these priorities.
For more guidance, please check out our CISO Insights Hub.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
- The truths about AI hacking that every CISO needs to know (Q&A): How will AI boost threat actors? And what can CISOs do about it? Google’s Heather Adkins explores how securing the enterprise is about to change. Read more.
- 5 critical gaps in incident response planning — and how to fix them: To help organizations build smarter, more resilient incident responses, here are our top 5 recommendations for boosting IR readiness. Read more.
- How Google Does It: Collecting and analyzing cloud forensics: Here’s how Google’s Incident Management and Digital Forensics team gathers and analyzes digital evidence. Read more.
- Auto-ISAC, Google partner to boost automotive sector cybersecurity: Google Cloud is proud to join Auto-ISAC as an Innovator Partner to significantly deepen our commitment to the automotive and transportation sectors. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
- Diverse threat actors exploiting critical WinRAR vulnerability: The Google Threat Intelligence Group (GTIG) has identified widespread, active exploitation of the critical WinRAR vulnerability CVE-2025-8088, discovered and patched in July 2025. Read more.
- Closing the door on Net-NTLMv1: Net-NTLMv1 has been known to be insecure for more than two decades, with cryptanalysis dating back to 1999. Despite that, Mandiant consultants continue to see it used in active environments. That’s why we’re publicly releasing a comprehensive dataset of Net-NTLMv1 rainbow tables, to encourage organizations to urgently migrate away from this outdated protocol. Read more.
- Auditing Salesforce Aura for data exposure: Mandiant released AuraInspector, a new open-source tool designed to help defenders identify and audit access control misconfigurations in the Salesforce Aura framework. Read more.
- How threat actors are exploiting React2Shell: Shortly after CVE-2025-55182 was disclosed, Google Threat Intelligence Group (GTIG) began observing widespread exploitation across many threat clusters, from opportunistic cybercrime actors to suspected espionage groups. Here’s what GTIG has observed so far. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
- Why your agents need better permissions than your admins: Vishwas Manral, CEO, Precize.ai joins hosts Anton Chuvakin and Tim Peacock to explain how agentic security is different from gen AI model security, and what steps security professionals need to take to keep the combination of agents and identity and access management (IAM) from becoming a security trainwreck. Listen here.
- Why Google DeepMind built a security-focused LLM: Elie Burstein, distinguished scientist, Google DeepMind, discusses with Anton and Tim the who, what, why, where, and how of Sec-Gemini, Google’s gen AI model purpose-built for cybersecurity. Listen here.
- Cyber-Savvy Board: Resiliency is a year-long discussion: PwC’s Joe Nocera makes a critical distinction between "compliant" boards that focus on annual audits and checking boxes, and "resilient" boards that emphasize a year-long strategic discussion. Listen here.
- Defender’s Advantage: How Android Combats Mobile Scams: Host Luke McNamara is joined by Eugene Liderman, senior director, Android's Security and Privacy Group, to discuss the evolving world of mobile-targeting scams — and what’s being done about them. Listen here.
- Behind the Binary: Web3, music, and the future of hacking: Host Josh Stroschein is joined by Dhillon Kannabhiran, who shares the gritty origin story of Hack in the Box (HITB), the use of AI to boost creativity, the challenges of securing Web3, and the intersection of math and music. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.


