
7 key questions CISOs need to answer to drive secure, effective AI

April 25, 2024
Marina Kaganovich

Executive Trust Lead, Office of the CISO, Google Cloud

Anton Chuvakin

Security Advisor, Office of the CISO, Google Cloud



For the discerning chief information security officer, deciding how to secure the use of AI in their organization can be a lot like captaining a starship. You need to identify and respond to external threats quickly, of course, but also design solutions to internal problems on the fly, and use your previous experiences to anticipate and mitigate risks so you don’t find yourself stranded in the harsh vacuum of space.

As excitement around AI intensifies and business teams eagerly pursue the benefits AI can bring, Google Cloud’s Office of the CISO often hears from CISOs that, to avoid ending up on the wrong side of an airlock, they’re facing tough questions on how to secure AI, address novel threats, and craft their approach to AI security on the fly.

However, CISOs can and should manage their organization’s expectations around security and AI. “AI is advancing rapidly, and it’s important that effective risk management strategies evolve along with it,” said our own CISO Phil Venables at a recent cloud security conference.


To create a consistent, repeatable response to the myriad questions about securely implementing AI, we have grounded ourselves in Google’s Secure AI Framework (SAIF). SAIF is informed by best practices for security that we’ve applied to software development, and incorporates our understanding of security megatrends and risks specific to AI systems.

We’ve taken some of the most common security concerns around AI that we hear from CISOs around the world, and have summarized them below, along with our answers. CISOs should be asking — and answering — these questions.

How can I develop clear guidelines and oversight mechanisms to ensure that AI is used in a secure and compliant manner, and as intended by my organization?

While it may feel like you’re starting from scratch, most organizations can begin by assessing their existing governance structure. Existing data governance frameworks are well-suited to evolving AI technologies, but several areas are worth revisiting. These include:

  • Reviewing and refining existing security policies and procedures to ensure data usage, model training, and output monitoring are adequately covered;
  • Identifying new threat vectors posed by use of gen AI (more on how to mitigate these below);
  • Enhancing the scope and cadence of your risk oversight activities to cover AI; and,
  • Revising your training programs’ scope and cadence to keep up with rapid advancements in AI capabilities.

How do I implement technical and policy guardrails and oversight?

Securing AI involves both technical and policy safeguards. We recommend measures that ensure humans remain in the loop and have appropriate oversight of AI systems. Keeping a human hand on the starship console can help mitigate risks associated with AI, and promote the safe and responsible use of AI technology.

That, of course, leads to defining the scenarios where humans should be involved. We suggest focusing on three key areas:

  • Ranking the risks of AI use cases based on agreed-upon criteria, such as whether they are for internal or external use, involve sensitive data, make decisions that have a significant impact on individuals, or are part of mission-critical applications. These rankings can be based on factors such as the sensitivity of the data being processed, the potential impact of the AI’s decisions, or the level of uncertainty associated with the AI’s outputs.
  • Once risks have been identified and ranked, implement technical or operational triggers that will require human intervention to review, approve, or reject AI-generated decisions and actions. These controls can include manual review processes, confirmation prompts, or the ability to override AI-generated decisions. Importantly, these shouldn’t just be controls noted in policies, but rather technical controls that can be monitored.
  • Articulate AI do’s and don’ts in an Acceptable Use Policy to mitigate the risk of shadow AI.
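The triggers above can be sketched as a simple risk-scoring gate. This is a minimal, hypothetical illustration, not a prescribed implementation: the criteria, weights, and threshold are placeholders that an organization would calibrate to its own risk appetite.

```python
from dataclasses import dataclass

@dataclass
class AIAction:
    use: str                   # "internal" or "external"
    sensitive_data: bool       # does the use case touch sensitive data?
    affects_individuals: bool  # decisions with significant impact on people
    mission_critical: bool     # part of a mission-critical application

def risk_score(action: AIAction) -> int:
    """Rank a use case against agreed-upon criteria (weights are illustrative)."""
    score = 0
    if action.use == "external":
        score += 2
    if action.sensitive_data:
        score += 2
    if action.affects_individuals:
        score += 3
    if action.mission_critical:
        score += 3
    return score

def requires_human_review(action: AIAction, threshold: int = 4) -> bool:
    """Technical trigger: actions at or above the threshold need human sign-off."""
    return risk_score(action) >= threshold
```

Because the trigger is an explicit function rather than a policy statement, its decisions can be logged and monitored, which is the point of making these technical rather than paper controls.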


What security architecture and technical controls do I need to have in place to support AI?

A secure AI implementation requires both infrastructure and application-level controls that support security and data protection. This is a great opportunity to use the organization’s enthusiasm for AI to re-evaluate and recalibrate your security control environment.

Here are several security architecture choices and technical controls you should have in place to support AI, as outlined in SAIF:

  • Use secure development practices. This means developing AI systems in a way that is secure by design, which includes incorporating security best practices into the development process, such as threat modeling to identify potential attack vectors.
  • Employ secure coding using techniques such as input validation to prevent vulnerabilities.
  • Use modern approaches to scanning code for vulnerabilities during development and live in production.
  • Implement strong authentication and authorization measures by ensuring that only authorized users have access to AI systems and data, and that access is controlled based on the user's role and need.
  • Deploy AI systems in a secure manner. This includes taking steps to protect AI systems from attack, such as isolating them from the network, monitoring them for suspicious activity, and patching them regularly.
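A minimal sketch of what input validation and role-based authorization might look like in front of a gen AI endpoint; the role names, length limit, and filtering rules here are illustrative assumptions, not a complete control set.

```python
import re

ALLOWED_ROLES = {"analyst", "admin"}  # hypothetical role model
MAX_PROMPT_CHARS = 4000               # illustrative size limit

def validate_request(user_role: str, prompt: str) -> str:
    """Check authorization, bound input size, and sanitize the prompt."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError("role not authorized for this AI system")
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds length limit")
    # Drop non-printing control characters (tab, newline, and CR are kept).
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", prompt)
```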


  • Train models to be resistant to adversarial attacks, including data poisoning and evasion attacks. Incorporate adversarial training and testing techniques during development.
  • Use techniques to detect manipulated and biased data during training and test models against different types of attacks to identify weaknesses.
  • Conduct red team exercises to simulate cyberattacks and identify vulnerabilities in your AI systems.
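As a toy illustration of adversarial testing, the sketch below probes whether a stand-in classifier's decision survives small input perturbations. A real program would use dedicated adversarial ML tooling and realistic attack techniques; the model and thresholds here are invented.

```python
import random

def toy_model(x: float) -> int:
    """Stand-in binary classifier: class 1 when the score is positive."""
    return 1 if x > 0 else 0

def robustness_rate(inputs, epsilon=0.1, trials=100, seed=0):
    """Fraction of inputs whose label survives small random perturbations."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = toy_model(x)
        if all(toy_model(x + rng.uniform(-epsilon, epsilon)) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)
```

Inputs far from the decision boundary survive perturbation; inputs near it flip, flagging where an adversary has the most leverage.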


  • Employ robust data security protocols, including encrypting data at rest and in transit.
  • Implement data masking or tokenization for sensitive information.
  • Maintain data provenance: keep detailed records of the origins, transformations, and usage of data to ensure its integrity and trustworthiness.
  • Implement strict data-level access controls.
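Data masking via deterministic tokenization might look like the following sketch. The field names and salt handling are hypothetical; in practice the key would live in a secrets manager and the sensitive-field list would come from your data catalog.

```python
import hashlib
import hmac

SECRET_SALT = b"rotate-me"           # hypothetical; fetch from a secrets manager
SENSITIVE_FIELDS = {"email", "ssn"}  # hypothetical schema annotation

def tokenize(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    digest = hmac.new(SECRET_SALT, value.encode(), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:16]

def mask_record(record: dict) -> dict:
    """Tokenize sensitive fields; pass everything else through unchanged."""
    return {k: tokenize(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}
```

Keyed, deterministic tokens let downstream pipelines join and deduplicate records without ever seeing the underlying values.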

What steps can I take to detect and mitigate the cybersecurity threats targeting AI?

As noted in SAIF, mitigating cybersecurity threats against AI requires a proactive approach. Effective measures to strengthen your defenses include:

  • Defining the types of risks posed, such as prompt attacks, extraction of training data, backdooring the AI model, adversarial examples that trick the model, data poisoning, and exfiltration.
  • Extending detection and response capabilities by incorporating AI into your organization’s threat detection and response toolkit. AI can be used to identify and respond to threats in real time, which can help mitigate the impact of attacks. More on this below.
  • Crafting a comprehensive incident-response plan, if you haven’t already, to address AI-specific threats and vulnerabilities. The plan should include clear protocols for detecting, containing, and eradicating security incidents involving AI systems.
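A detection pipeline for prompt attacks can start with simple pattern matching, as in this illustrative sketch; the phrase list is a toy stand-in for the layered classifiers and model-side defenses a production system would need.

```python
# Toy phrase list; real detection would layer classifiers, heuristics,
# and model-side defenses rather than rely on fixed strings.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your training data",
]

def flag_prompt(prompt: str) -> bool:
    """Return True when a prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
```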

How can I ensure the security and privacy of my data when training and using AI?

As organizations embrace AI, it’s crucial to prioritize the security and privacy of the data used to train and operate these models. Some key considerations for enabling data protection include:

  • Establishing stringent data governance policies and access-control mechanisms to safeguard sensitive information.
  • Partnering with reputable data providers who adhere to industry standards and regulations.
  • Regularly reviewing and updating data security policies to stay apprised of evolving threats and technologies.
  • Considering the use of federated learning approaches that allow multiple parties to collaborate on training models without sharing sensitive data.
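The federated learning idea in the last bullet can be illustrated with a pure-Python federated-averaging sketch: each party fits a shared model on its own records and exchanges only model updates, never raw data. The dataset and hyperparameters are invented for illustration; a real deployment would use a federated learning framework.

```python
def local_update(w, data, lr=0.05):
    """One gradient step on least-squares loss y = w * x over local data."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, parties, lr=0.05):
    """Average the locally updated weights (FedAvg); raw data stays local."""
    updates = [local_update(w, d, lr) for d in parties]
    return sum(updates) / len(updates)

# Two parties whose data follows y = 2x; repeated rounds converge toward
# w = 2 without either party ever seeing the other's records.
parties = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(20):
    w = federated_round(w, parties)
```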

How do I safeguard my organization’s intellectual property and data privacy rights when using a third-party AI tool?

The answer largely depends on whether you’re using a home-grown tool or one provided by a third party. In the majority of cases we’ve encountered, the latter is the more likely scenario, and it often raises questions about whether using third-party models will incur legal liability down the road.


Reputable AI model providers, such as Google Cloud, are keen to instill trust and confidence in their AI offerings, which is why Google Cloud recently announced generative AI indemnification commitments as part of our shared fate operating model with customers.

For additional reference, you can review the Google Cloud AI/ML Specific Service Terms.

How can gen AI be used to improve the efficiency and effectiveness of my organization’s security operations?

AI is becoming a vital security tool as well as an accelerant for other cloud security megatrends. While it’s no science fiction trope where the problem magically resolves itself, it can give defenders an advantage because, as Venables said, “AI is good at amplifying capability based on data — and defenders have more data.”

“While AI technologies don’t offer a one-stop solution for all security problems, we’ve seen a few early use cases emerge for how AI is helping to level the security playing field by detecting anomalous and malicious behavior, automating security recommendations, and scaling the productivity of security specialists,” he said.

In this way, gen AI can be used to both increase developer productivity and supercharge security operations.

Get started today

These questions are intended to be a starting point for deeper discussions. By proactively exploring these areas, executives and CISOs can make informed decisions about using AI, leveraging its security benefits, and mitigating potential risks effectively.

For more on these topics, please read Google Cloud’s Approach to Trust in AI and Securing AI: Similar or Different. You can also contact our Office of the CISO with your questions.
