How to craft an Acceptable Use Policy for gen AI (and look smart doing it)
Marina Kaganovich
Executive Trust Lead, Office of the CISO, Google Cloud
Anton Chuvakin
Security Advisor, Office of the CISO, Google Cloud
Similar to a construction project, the use of generative AI in the workplace should adhere to robust guardrails. Just as an office tower or apartment complex must follow building codes to be safe, secure, and dependable, organizations that want to use AI in a safe, secure, and dependable way should devise their own “building code” for gen AI through an internal Acceptable Use Policy (AUP).
The way that you shape and evolve your AUP for the gen AI era can help establish a shared understanding in your organization about the values and principles that govern gen AI, which can be increasingly important as widespread adoption and everyday use become more common.
At Google Cloud’s Office of the CISO, we believe that an AUP can be an important, multifaceted guide in shaping your organization’s governance structure and its relationship to gen AI because it ties into other organizational governance pillars, including broad-scale awareness campaigns, training, and ongoing monitoring for compliance.
There are three key reasons why a well-defined AUP is crucial:
- Risk mitigation: The AUP can help mitigate risk in support of the enterprise’s overall risk management program by articulating do’s and don’ts when it comes to the use of enterprise data and resources.
- Strategic organizational alignment: The AUP can help promote the use of gen AI in ways that align with your organization's values and commitments, and empower employees to use emerging technologies appropriately.
- Promoting compliance: The AUP should align your organization’s strategy with its risk appetite, and is integral to defining which types of use meet regulatory compliance requirements.
What should you include in your AUP?
Begin by clearly articulating the purpose of your AUP relative to your organization’s views of acceptable and prohibited gen AI use cases. It’s important to align the use of gen AI to your organization’s overall goals and values, as well as the broader regional and industry requirements that may apply.
When you specify the scope of the AUP, indicate which applications, users, and activities it covers. Be sure to clearly articulate accountability: who owns and maintains the policy, who will enforce it, and how the AUP interacts with other policies, especially those pertaining to data governance, information security and data loss prevention, and third-party risk management.
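As a concrete illustration, the scope and accountability elements described above can be captured in a structured, versionable form that is easy to review alongside the policy text. The sketch below is a minimal example of that idea; the field names and values are illustrative assumptions, not a required schema.

```python
# Hypothetical sketch: capturing AUP scope and accountability as structured,
# versionable data. All field names and values are illustrative assumptions.

AUP_METADATA = {
    "policy": "Generative AI Acceptable Use Policy",
    "version": "1.2",
    "owner": "office-of-the-ciso",          # who owns and maintains the policy
    "enforced_by": ["it-security", "hr"],   # who enforces it
    "scope": {
        "applications": ["approved gen AI tools"],
        "users": ["employees", "contractors"],
        "activities": ["prompting", "fine-tuning", "output review"],
    },
    "related_policies": [                   # policies the AUP must stay consistent with
        "data-governance",
        "information-security-and-dlp",
        "third-party-risk-management",
    ],
}
```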
Communication with stakeholders is vital when designing your AUP. Partner with key stakeholders across your organization to align the guidance in the AUP with the owners of other related policies to reduce the potential for duplication, inconsistencies, and confusion. This is also a good opportunity to surface potential gaps in coverage that may need to be addressed.
How the AUP can promote secure and responsible AI
The AUP should indicate which tools are approved for use and how to manage exception requests for tools that are not. The risk of shadow AI in your organization grows with the proliferation of AI-enabled consumer tools, even when their use is well-intentioned. For example, using a consumer AI tool in a healthcare enterprise setting may present a HIPAA risk.
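To make the approved-tools list actionable, some organizations express it as policy-as-code so that a proxy or access broker can check requests against it automatically. The following is a minimal sketch of that idea; the tool names, fields, and exception workflow are hypothetical, not a prescribed format.

```python
# Hypothetical sketch: encoding an AUP's approved-tools list as policy-as-code.
# Tool names, fields, and the exception workflow are illustrative assumptions.

APPROVED_AI_TOOLS = {
    # tool identifier: conditions under which use is approved
    "enterprise-genai-suite": {"data_classes": {"public", "internal"}, "owner": "it-security"},
    "code-assist-enterprise": {"data_classes": {"public", "internal"}, "owner": "eng-platform"},
}

def is_tool_approved(tool: str, data_class: str) -> bool:
    """Return True if the tool is on the allowlist for this data classification."""
    policy = APPROVED_AI_TOOLS.get(tool)
    return policy is not None and data_class in policy["data_classes"]

def request_exception(tool: str, justification: str) -> str:
    """Route unapproved-tool requests into a review queue rather than silently
    blocking them, which reduces the incentive for shadow AI."""
    # In practice this would open a ticket with the AUP owner for review.
    return f"Exception request filed for '{tool}': {justification}"
```

Routing denied requests into an exception queue, rather than silently blocking them, keeps demand for new tools visible to the AUP owner instead of driving it underground.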
Along with approved tools, the AUP can be used to encourage the secure and responsible handling of data used to train and operate gen AI models. The AUP can provide guidance on what types of data can be input into gen AI products, including the organization’s rationale for approving some types of data for such use, but not others.
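One way to operationalize such guidance is a pre-submission check that screens prompts against the AUP's data rules before they reach a gen AI tool. The sketch below assumes hypothetical classification labels and uses toy detection patterns; a production deployment would call an enterprise DLP service rather than hand-rolled regexes.

```python
# Hypothetical sketch: a pre-submission gate applying the AUP's data rules.
# Classification labels and patterns are illustrative assumptions only.

import re

# Data classes the (assumed) AUP permits as gen AI input.
PERMITTED_INPUT_CLASSES = {"public", "internal"}

# Toy patterns standing in for real DLP detectors.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US SSN-like pattern
    re.compile(r"\bpatient\b", re.IGNORECASE),  # crude PHI signal
]

def may_submit(text: str, data_class: str) -> bool:
    """Allow submission only if the data class is permitted and no
    sensitive pattern is detected in the prompt text."""
    if data_class not in PERMITTED_INPUT_CLASSES:
        return False
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)

print(may_submit("Summarize our public launch blog post", "public"))    # True
print(may_submit("Draft a note about patient 123-45-6789", "internal")) # False
```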
The AUP also presents an opportunity to frame the use of gen AI with real-life examples, and to articulate why certain uses are deemed appropriate while others are not. Key here is the ability to explain how guardrails, such as human oversight and grounding techniques, can shift a use case from blocked to approved.
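To illustrate, the sketch below models that review logic: a higher-risk use case moves from blocked to approved once guardrails such as human review and grounding are attached. The risk tiers and guardrail names are assumptions for illustration, not a prescribed taxonomy.

```python
# Hypothetical sketch: guardrails shifting a use case from blocked to approved.
# Risk tiers and guardrail names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    risk: str                           # "low", "medium", or "high"
    guardrails: set[str] = field(default_factory=set)

def review(use_case: UseCase) -> str:
    if use_case.risk == "low":
        return "approved"
    if use_case.risk == "medium" and "human_review" in use_case.guardrails:
        return "approved"
    if use_case.risk == "high" and {"human_review", "grounding"} <= use_case.guardrails:
        return "approved with monitoring"
    return "blocked pending additional guardrails"

# Example: a high-risk use case is blocked until oversight and grounding are added.
summaries = UseCase("customer-facing summaries", risk="high")
print(review(summaries))                        # blocked pending additional guardrails
summaries.guardrails |= {"human_review", "grounding"}
print(review(summaries))                        # approved with monitoring
```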
Include practical examples
At Google Cloud’s Office of the CISO, we’ve found that real-world examples can help convince business leaders and boards of directors that your organization needs to be vigilant to avoid gen AI misuse. Including these types of examples in the AUP can help highlight the need for clear lines of accountability and oversight for gen AI development, deployment, and monitoring, and help explain why organizations should be investing in enterprise-grade gen AI tools.
However, overly broad or vague restrictions and prohibitions can hinder the beneficial applications of gen AI for appropriate use cases, and can lead to shadow AI. For instance, banning all forms of AI may even increase risk if staff turn to personal devices for their AI needs.
Similarly, broad statements that seek to prevent employees from putting sensitive information into AI tools can backfire without clearly defining what kinds of data are sensitive. We recommend being specific when possible, so that employees are not forced to make judgment calls that may not align with your organization’s risk appetite and values.
Bonus tips for enhancing your AUP
A comprehensive and well-crafted AUP can be an essential tool for organizations embracing AI technologies. By following these best practices and taking a proactive stance on responsible AI governance, you can unlock the potential of gen AI while minimizing risks and building a foundation of trust.
An AUP is a living document: reassess it as part of your regular policy review processes, adapt it to your organization’s goals and priorities, and draft it in a manner that’s flexible rather than overly prescriptive, so it can keep pace as technology and regulations evolve.
Since keeping up with emerging technologies can be challenging, we suggest a multi-pronged approach informed by formal training programs, town halls, and team meetings. Ideally, you can tie these channels into a feedback loop with the overall AI governance program to highlight areas of concern, provide visibility into commonly raised topics that may benefit from more communication, and identify controls that may need to be implemented or enhanced.
Investing in an innovative and collaborative culture where staff engage in open dialogue about the use of gen AI can help educate and build awareness of gen AI’s capabilities and appropriate use. For additional information on this topic, refer to our recent publication on AI governance best practices and the FS-ISAC’s Framework for an Acceptable Use Policy for External Generative AI.