Oops! 5 serious gen AI security mistakes to avoid
Marina Kaganovich
Executive Trust Lead, Office of the CISO, Google Cloud
Anton Chuvakin
Security Advisor, Office of the CISO, Google Cloud
Generative AI is at its best when it’s built securely and responsibly. With broad AI adoption on the rise, and gen AI rapidly moving into production environments, the potential for security missteps increases as well.
Based on Office of the CISO discussions with customers, we’ve observed five key AI security mistakes to watch for: weak governance, poor data quality, overprovisioned access, neglected inherited vulnerabilities, and the assumption that risks apply only to public-facing AI.
While we’re excited to see rapid adoption of gen AI, we also want to see it implemented safely, securely, and responsibly. For instance, as gen AI adoption skyrockets, organizations have been oversharing sensitive information in their gen AI inputs.
“55% of generative AI inputs contained sensitive and personally identifiable information,” according to research from Menlo Labs Threat Research published earlier this year.
Pitfalls are inevitable as gen AI becomes more widespread. In highlighting the most common of these mistakes, we hope to help you avoid them as you implement gen AI in your organizations.
Mistake 1: Weak AI governance
The mistake
The primary mistake we see is a failure to implement robust gen AI governance. Some organizations have taken an open approach to gen AI, even going so far as to use consumer gen AI applications for business purposes, which poses risks to data security and privacy. Others impose excessively restrictive governance, leading to shadow AI as individuals find work-arounds to use gen AI outside of what they see as stifling rules.
In either case, we frequently see an oversimplified approach to governance that’s focused on form over substance.
Why it's a mistake
Without proper AI governance, organizations may inadvertently perpetuate inconsistent practices and potential security gaps, which can result in a higher likelihood of data breaches or misuse. Weak AI governance can hinder transparency and accountability, making it difficult to track how AI models are used, the data they are accessing, and the decisions they inform. This lack of visibility can create risks for the business and its customers, as their data may be mishandled or used in ways they did not consent to.
Additionally, weak governance can lead to a loss of confidence in the overall governance process. Examples of this include excessive delays in reaching a decision on whether a gen AI use case can proceed, or too freely granting rule-bending exceptions instead of enforcing controls that guard against risk.
The solution
Establish a robust gen AI governance structure. Good governance can help ensure that experts from across the organization are empowered to review gen AI initiatives in a holistic, programmatic, and repeatable format, one that can influence and improve the entire process from start to finish.
The good news is that many organizations are looking to bolster their AI governance structures. Nearly three-quarters of organizations “are planning to create teams dedicated to governing the secure use of AI, emphasizing the serious consideration companies are giving to AI integration,” according to a Cloud Security Alliance survey of IT and security professionals.
Additionally, the SANS 2024 AI survey found that nearly 70% of respondents said that the cybersecurity team has “a role in enabling and governing the use of AI across the enterprise.”
As part of our best-practices recommendations on designing gen AI governance structures, we advise organizations to craft an iterative, flexible, and clearly communicated approach informed by stakeholder feedback.
Why it's better for customers
This approach can help position the governance body as an enabler — rather than a blocker. In addition to serving as a decision-making body, the governance process can also serve as an input mechanism for creative ideas, such as helping to innovate use-case proposals.
Good AI governance fosters trust not just internally, but also between organizations and their customers. By implementing clear policies, ensuring transparency, and prioritizing accountability, organizations demonstrate their commitment to using AI responsibly and securely. This helps build customer confidence in the organization's AI-powered products and services.
Additionally, robust AI governance can lead to improved customer experiences, such as by providing transparency into how systems are developed and deployed responsibly. Effective governance also ensures that AI systems are tested rigorously, minimizing the risk of errors and malfunctions.
Mistake 2: Bad data
The mistake
Organizations often underestimate the effort needed to collect, clean, and label data, and the iterative steps required throughout. Training AI models requires extensive, high-quality data. If that data is inaccurate or incomplete, the AI output will also be flawed.
Why it's a mistake
Poor data governance practices can severely hinder AI development and lead to a range of negative consequences. When data sources, quality, and usage aren’t documented, the resulting lack of transparency erodes trust in AI outputs and makes it harder to identify and correct errors.
Likewise, when data lineage and transformations are not properly tracked, it becomes challenging to identify the root cause of problems in AI models, hindering debugging and improvement efforts.
Lastly, absent clear guidelines on data usage, there's a risk of data being used for purposes beyond its intended scope, potentially harming individuals and exposing the organization to legal and reputational risk.
The solution
Implement robust data governance practices. Since AI models typically require high-quality data that has been appropriately sourced, cleansed, and normalized, the AI governance program should closely align with the organization’s existing data governance program. In addition, business leaders should implement strong security measures to protect data confidentiality and integrity, including using encryption, deploying access controls, and applying anonymization techniques.
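To make the anonymization point concrete, here is a minimal sketch of redacting obvious PII before data is used for training or prompts. The patterns and function names are illustrative assumptions rather than any specific product’s API; a production pipeline would typically rely on a managed sensitive-data inspection service instead of hand-rolled regular expressions.

```python
import re

# Minimal, illustrative PII redaction applied before data reaches training sets or prompts.
# These patterns are assumptions for the example and are not exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}"),
}

def redact(text: str) -> str:
    """Replace matched PII with typed placeholders so models never see the raw values."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Ticket from jane.doe@example.com, call back at (555) 123-4567."))
# -> Ticket from [EMAIL], call back at [PHONE].
```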
Why it's better for customers
Good data governance can help ensure data quality, leading to more accurate and reliable AI. Robust data security measures can protect customer information from breaches and misuse. Data governance can help track data lineage and data transformations, making AI more transparent and explainable.
With good data governance, your customers can have confidence that their sensitive data is handled responsibly, in compliance with applicable rules and regulations, and is treated as a valuable asset. This leads to better outcomes informed by transparency and accountability measures.
Mistake 3: Excessive, overprovisioned access
The mistake
A frequent oversight in gen AI security is granting internal chatbots and other large language model (LLM)-based technology excessive access to corporate data.
Why it's a mistake
Granting excessive access to AI systems can expose customers’ sensitive information, including personal details, financial data, or any other information the AI can reach, to unauthorized access or breaches. AI models that process sensitive customer data without proper security measures can lead to data leaks, identity theft, and model poisoning.
When it comes to company-wide use of gen AI tools in a corporate environment, data leakage is the top concern of 50% of the IT and security leaders who participated in a Dark Reading survey this year.
If the AI's data-access permissions aren't properly restricted, data exposure could lead to identity theft, financial loss, and other forms of harm for customers. This applies equally to public-facing and internally-facing gen AI projects. For example, a gen AI chatbot intended for internal use by IT support may share sensitive vulnerability management data that human users shouldn’t be able to see.
Overprovisioned access can also enable the AI to access and use data in ways that customers haven't consented to or are unaware of. This can erode trust in the organization’s use of AI, and lead to concerns from internal stakeholders, customers, and if applicable, regulators.
The solution
Adopt strict, role-based access controls and apply the principle of least privilege. AI systems should only have access to the data and resources necessary to perform their specific tasks. Regularly reviewing and updating these access controls is crucial to prevent unintended data exposure.
Holistically evaluating and tailoring role-based access rights can help ensure that security is built in, rather than bolted on later through compensating controls. The need for such retrofitted controls often arises when gen AI is added to existing workflows and development pipelines without an adequate assessment of security control coverage.
In addition, access rights granted through APIs should be reviewed as well, since those reviews aren’t always as visible, consistent, or thorough as reviews of rights granted to humans. This applies to control APIs as well as to structured and unstructured data.
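As one way to picture least privilege in practice, here is a minimal sketch of an internal chatbot whose retrieval layer filters documents against the requesting user’s roles before anything reaches the model. The names (Document, retrieve_for_user, answer) and the example corpus are hypothetical, not a specific product API.

```python
from dataclasses import dataclass, field

# Illustrative least-privilege retrieval: documents carry the roles allowed to read
# them, and filtering happens on the requesting user's roles *before* any text is
# sent to the model. All names here are assumptions for the sketch.

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set[str] = field(default_factory=set)

def retrieve_for_user(query: str, user_roles: set[str], corpus: list[Document]) -> list[Document]:
    """Return only matching documents the requesting user is already entitled to see."""
    hits = [d for d in corpus if query.lower() in d.text.lower()]
    return [d for d in hits if d.allowed_roles & user_roles]

def answer(query: str, user_roles: set[str], corpus: list[Document]) -> str:
    context = retrieve_for_user(query, user_roles, corpus)
    if not context:
        return "No documents you have access to match this request."
    # Stand-in for the actual model call; only pre-filtered text would ever be sent.
    return f"Answering '{query}' using {len(context)} authorized document(s)."

corpus = [
    Document("vuln-report-q3", "Open CVE backlog for payment systems", {"security-team"}),
    Document("it-faq", "How to reset your VPN password", {"all-employees"}),
]
print(answer("password", {"all-employees"}, corpus))  # sees the FAQ only
print(answer("CVE", {"all-employees"}, corpus))       # vulnerability data stays hidden
```

The design point is that authorization is enforced in the retrieval layer, so the chatbot can never surface data the requesting user couldn’t already access directly.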
Why it's better for customers
Limiting AI access to only necessary data can reduce the risk of data breaches and unauthorized exposure, and better safeguard customers’ personal and financial well-being.
Implementing strict access controls demonstrates a commitment to responsible AI use. Customers are more likely to trust organizations that prioritize data security and transparency, fostering stronger and more positive relationships.
Mistake 4: Neglecting inherited vulnerabilities
The mistake
AI models trained and operated on poorly-secured platforms can increase the risks of data leakage and potential model tampering.
In addition, AI models often interact with other systems in production, and each interaction can introduce security risks. Continuous testing is necessary to understand the interconnectedness between AI models and other applications and systems, and how these interactions might be exploited.
Consider, for example, a company that deploys multiple AI models that interact with one another and other systems. A vulnerability in one model could allow attackers to compromise the entire system through its interconnectedness.
Organizations often assume they have full control over third-party or fine-tuned AI models. However, these models can inherit vulnerabilities from their foundation models, and lead to unexpected security risks.
Why it's a mistake
When organizations use third-party or fine-tuned AI models without careful scrutiny, they may unknowingly expose themselves and their customers to potential security risks. These models can be complex and opaque to scrutiny, and may have inherited vulnerabilities from their foundation models that are hard to detect.
Cyberattacks that exploit inherited vulnerabilities can lead to unauthorized access to sensitive customer data, manipulation of AI-generated outputs, and even complete compromise of the AI system. They can result in severe consequences for customers, including privacy breaches, financial loss, and erosion of trust in the organization using the AI.
Version control compounds the problem. Without rigorous tracking of model versions and well-documented updates, security patches applied to one model version may not carry over during updates, resulting in inconsistent security across different versions.
The solution
Conduct thorough due diligence on third-party and fine-tuned AI models before incorporating them into your systems to mitigate the risks associated with inherited vulnerabilities. This due diligence involves evaluating the security posture of the foundation models, understanding their potential vulnerabilities, and implementing appropriate risk-mitigation strategies.
Additionally, organizations should prioritize transparency and demand clear documentation of the model's lineage — and any known security risks. More than one-third of respondents to the Cloud Security Alliance survey cited a “lack of transparency” as their biggest concern regarding AI in security.
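One lightweight way to keep lineage and patch status visible during that review is a model inventory record along these lines. This is a sketch under assumed field and function names (ModelRecord, needs_review, and the example entries are hypothetical), not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative model inventory entry: track which foundation model a fine-tune
# inherits from, known issues, and when it was last reviewed, so inherited
# vulnerabilities and stale patching are visible during due diligence.

@dataclass
class ModelRecord:
    name: str
    version: str
    foundation_model: str       # upstream model this one is fine-tuned from
    foundation_version: str
    known_issues: list[str] = field(default_factory=list)   # advisories, eval findings
    last_security_review: Optional[date] = None

def needs_review(record: ModelRecord, max_age_days: int = 90) -> bool:
    """Flag models with open issues or a missing/stale security review."""
    if record.known_issues or record.last_security_review is None:
        return True
    return (date.today() - record.last_security_review).days > max_age_days

support_bot = ModelRecord(
    name="support-bot", version="1.4.2",
    foundation_model="example-foundation-llm", foundation_version="2.1",
    known_issues=["prompt-injection finding from red-team exercise"],
    last_security_review=date(2024, 5, 1),
)
print(needs_review(support_bot))  # True: the open finding forces another review
```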
Why it's better for customers
By addressing inherited vulnerabilities, organizations can significantly reduce the risk of security incidents that could compromise customer data. This helps to protect customers' privacy and sensitive information, preventing potential harm such as identity theft, financial loss, and reputational damage.
When organizations demonstrate a commitment to security and transparency in their use of AI models, it can foster trust and confidence among their customers.
Mistake 5: Assuming risks only apply to public-facing AI
The mistake
Organizations often believe that security risks associated with generative AI are limited to public-facing applications. However, internal AI tools, such as chatbots or code generation systems, can also introduce risks.
Why it's a mistake
Organizations often prioritize security for public-facing applications because of their direct exposure to external threats and potential for reputational damage. This prioritization can create a false sense of security for internal AI tools, which are presumed to be less vulnerable due to their limited accessibility.
The perception that internal AI tools are used only by trusted employees reinforces that false sense of security and encourages a more relaxed approach. It’s important to resist that tendency: it overlooks insider threats, accidental data exposure, and vulnerabilities in the internal network, all of which can be exploited to access the sensitive information these AI tools process.
For instance, a seemingly harmless internal chatbot with unrestricted access to employee records could inadvertently reveal confidential details such as performance reviews or compensation if queried cleverly, or worse, if compromised.
The solution
Apply consistent security measures to both public-facing and internal AI tools. This involves implementing robust access controls, data encryption, and regular security assessments for all AI implementations, regardless of their intended audience.
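A minimal sketch of what consistent controls can look like in code: one security wrapper that applies the same authentication and audit checks to every AI endpoint, whether it serves employees or the public. The functions and endpoint names here are stand-ins for an organization’s real controls, not a particular framework.

```python
from functools import wraps

def require_auth(user: str) -> bool:
    # Placeholder authentication check; a real deployment would verify identity properly.
    return bool(user)

def audit_log(endpoint: str, user: str, prompt: str) -> None:
    # Placeholder audit trail; real systems would write to centralized, tamper-evident logs.
    print(f"[audit] endpoint={endpoint} user={user} prompt_len={len(prompt)}")

def secured(endpoint_name: str):
    """Apply the same authentication and audit policy to any AI endpoint."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user: str, prompt: str):
            if not require_auth(user):
                raise PermissionError("authentication required")
            audit_log(endpoint_name, user, prompt)
            return handler(user, prompt)
        return wrapper
    return decorator

@secured("public-support-chat")
def public_chat(user: str, prompt: str) -> str:
    return f"(public model response to: {prompt})"

@secured("internal-it-chat")  # same controls; no relaxed internal exception
def internal_chat(user: str, prompt: str) -> str:
    return f"(internal model response to: {prompt})"

print(internal_chat("employee-123", "reset my VPN token"))
```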
Why it's better for customers
Eliminating the misconception that security risks only apply to public-facing AI is crucial for protecting customer data. When organizations prioritize security solely for public-facing applications, they inadvertently expose customer information used by internal AI tools to potential breaches.
This also helps keep customer data more secure from insider threats, those that seek to exploit company and customer data for profit.
For more Google Cloud guidance, please see our CISO Insights hub and our new report on how we approach trust in artificial intelligence.