Integrating Model Armor with Google Agentspace lets you screen user prompts to and responses from your AI agents, mitigating risks such as prompt injection, harmful content, and sensitive data leakage. After the integration is set up, it applies to all user interactions.
Before you begin
Create a Model Armor template and ensure that it is in the same Google Cloud project as Google Agentspace. The location of the template and the Google Agentspace instance must match. For more information about supported locations, see Model Armor locations and Google Agentspace locations.
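A template's screening behavior is driven by its filter configuration. The sketch below shows what such a configuration might look like as a REST request body; the field names (`filterConfig`, `sdpSettings`, `piAndJailbreakFilterSettings`) and enum values are assumptions for illustration, not taken from this page.

```python
# Hypothetical sketch of a Model Armor template body (REST JSON shape).
# Field and enum names below are assumptions for illustration; consult the
# Model Armor API reference for the authoritative schema.

template = {
    "filterConfig": {
        # Screen for sensitive data (PII) via Sensitive Data Protection.
        "sdpSettings": {
            "basicConfig": {"filterEnforcement": "ENABLED"},
        },
        # Screen for prompt-injection and jailbreak attempts.
        "piAndJailbreakFilterSettings": {
            "filterEnforcement": "ENABLED",
            "confidenceLevel": "MEDIUM_AND_ABOVE",
        },
    },
}

print(template["filterConfig"]["sdpSettings"]["basicConfig"]["filterEnforcement"])
```

Remember that the template must live in the same project and location as the Google Agentspace instance that references it.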
Required roles
Before you integrate Model Armor with Google Agentspace, ensure that you have the required role.
To get the permissions that you need to create and manage Model Armor templates, ask your administrator to grant you the Model Armor Admin (roles/modelarmor.admin) IAM role on Model Armor templates.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Enable the integration
To enable the integration, your security administrator creates policies in Model Armor and your Google Agentspace administrator applies those policies to the Google Agentspace instance.
How it works
After the integration is configured, Google Agentspace routes user inputs and assistant outputs through the Model Armor API for screening with the selected templates. Google Agentspace authenticates to Model Armor using its service agent. Model Armor responds based on the filter configuration defined in the template, and Google Agentspace acts on that response by either blocking or allowing the request or response.
For example, Google Agentspace might detect a request containing personally identifiable information (PII) and route it to Model Armor for screening. If the Model Armor template is configured to block PII, Model Armor tells Google Agentspace to block the request.
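The allow-or-block decision described above can be sketched as a small function acting on a screening verdict. The response shape used here (`sanitizationResult.filterMatchState` with `MATCH_FOUND` / `NO_MATCH_FOUND`) is an assumption about the Model Armor response format, shown for illustration only.

```python
# Minimal sketch of acting on a Model Armor screening verdict.
# The response field names below are assumptions for illustration.

def should_block(sanitize_response: dict) -> bool:
    """Return True if any configured filter matched, meaning the content should be blocked."""
    result = sanitize_response.get("sanitizationResult", {})
    return result.get("filterMatchState") == "MATCH_FOUND"

# A request where a filter (for example, the PII filter) matched: block it.
blocked = should_block({"sanitizationResult": {"filterMatchState": "MATCH_FOUND"}})
# A clean request: allow it.
allowed = not should_block({"sanitizationResult": {"filterMatchState": "NO_MATCH_FOUND"}})
print(blocked, allowed)  # True True
```

In the integrated setup, Google Agentspace performs this check for you on every user interaction; the sketch only makes the blocking rule explicit.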
Logging
Model Armor generates platform logs for sanitization requests and their responses in Cloud Logging. You need the Private Logs Viewer (roles/logging.privateLogViewer) IAM role to view the Model Armor audit logs. For more information about the auto-generated audit logs, see Model Armor audit logging.
To log template operations, set the templateMetadata.logSanitizeOperations field to true. For more information, see Configure logging in templates.
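Expressed as a template body fragment, the setting looks like the sketch below. Only templateMetadata.logSanitizeOperations comes from the text above; the surrounding structure is an assumption for illustration.

```python
# Sketch: enabling sanitize-operation logging on a Model Armor template.
# Only templateMetadata.logSanitizeOperations is documented above; the
# surrounding body shape is assumed for illustration.

template = {
    "templateMetadata": {
        "logSanitizeOperations": True,  # emit a log entry per sanitize operation
    },
}

print(template["templateMetadata"]["logSanitizeOperations"])  # True
```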