Last updated 2025-09-04 UTC.

# Configure feature-based visualization settings

Vertex Explainable AI provides built-in visualization capabilities for your image data.
You can configure visualizations for custom-trained image models.

When you request an explanation on an image classification model, you get the
predicted class along with an image overlay showing which pixels or regions
contributed to the prediction.

The following images show visualizations of a husky image. The left
visualization uses the integrated gradients method and highlights areas of
positive attribution. The right visualization uses the XRAI method with a color
gradient indicating areas of lesser (blue) and greater (yellow) influence in
making a positive prediction.
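To make the integrated gradients idea concrete, here is a minimal sketch of how per-feature attributions are computed: gradients of the model's output are averaged along a straight path from a baseline to the input, then scaled by the input's distance from the baseline. The `toy_model` function and numerical gradients are illustrative stand-ins, not part of Vertex AI; in production, the service computes this against your deployed model.

```python
def toy_model(x):
    # Hypothetical stand-in for a class score; a real deployment uses
    # your image classifier's output for the predicted class.
    return sum(v * v for v in x)

def numerical_grad(f, x, eps=1e-5):
    # Central-difference gradient of f at x, one feature at a time.
    grads = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

def integrated_gradients(f, x, baseline=None, steps=50):
    # Average the gradients at points along the straight path from the
    # baseline to the input, then scale by (input - baseline).
    if baseline is None:
        baseline = [0.0] * len(x)
    total = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint rule along the path
        point = [b + alpha * (xi - b) for b, xi in zip(baseline, x)]
        g = numerical_grad(f, point)
        total = [t + gi for t, gi in zip(total, g)]
    return [(xi - bi) * t / steps for xi, bi, t in zip(x, baseline, total)]

attributions = integrated_gradients(toy_model, [1.0, 2.0])
# Completeness property: the attributions sum to f(x) - f(baseline).
```

Each attribution value here plays the role of one pixel's contribution; the visualization settings described below control how those per-pixel values are rendered as an overlay.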
The type of data you're working with can influence whether you use an
integrated gradients or XRAI approach to visualizing your explanations.

- XRAI tends to work better with natural images and provides a better
  high-level summary of insights, such as showing that positive attribution is
  related to the shape of a dog's face.
- Integrated gradients (IG) tends to provide details at the pixel level and is
  useful for uncovering more granular attributions.

Learn more about the attribution methods on the Vertex Explainable AI
[Overview page](/vertex-ai/docs/explainable-ai/overview).

Get started
-----------

Configure visualization when you [create a `Model` resource that supports
Vertex Explainable AI](/vertex-ai/docs/explainable-ai/configuring-explanations), or when you [override
the `Model`'s
`ExplanationSpec`](/vertex-ai/docs/explainable-ai/configuring-explanations#override).

To configure visualization for your model, populate the `visualization` field of
the [`InputMetadata` message](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#inputmetadata) corresponding to the feature
that you want to visualize. In this configuration message, you can include
options such as the type of overlay used, which attributions are highlighted,
color, and more. All settings are optional.

Visualization options
---------------------

The default and recommended settings depend on the attribution method
(integrated gradients or XRAI). The following list describes configuration
options and how you might use them. For a full list of options, see the
[API reference for the `Visualization` message](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#visualization).

- `type`: The type of visualization used: `OUTLINES` or `PIXELS`. The `type`
  field defaults to `OUTLINES`, which shows regions of attribution. To show
  per-pixel attribution, set the field to `PIXELS`.

- `polarity`: The directionality of the highlighted attributions.
  `positive` is set by default, which highlights areas with the highest positive
  attributions: the pixels that were most influential in the model's positive
  prediction. Setting polarity to `negative` highlights areas that lead the
  model to not predict the positive class. A negative polarity can be useful
  for debugging your model by identifying false negative regions. You can also
  set polarity to `both`, which shows both positive and negative attributions.

- `clip_percent_upperbound`: Excludes attributions above the specified
  percentile from the highlighted areas. Using the two clip parameters together
  can be useful for filtering out noise and making it easier to see areas of
  strong attribution.

- `clip_percent_lowerbound`: Excludes attributions below the specified
  percentile from the highlighted areas.

- `color_map`: The color scheme used for the highlighted areas. The default is
  `pink_green` for integrated gradients, which shows positive attributions in
  green and negative attributions in pink. For XRAI visualizations, the color
  map is a gradient.
  The XRAI default is `viridis`, which highlights the most influential regions
  in yellow and the least influential in blue.

  For a full list of possible values, see the [API reference for the
  `Visualization` message](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#visualization).

- `overlay_type`: How the original image is displayed in the visualization.
  Adjusting the overlay can help increase visual clarity if the original image
  makes it difficult to view the visualization.

  For a full list of possible values, see the [API reference for the
  `Visualization` message](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#visualization).

Example configurations
----------------------

The following sample `Visualization` configurations can serve as a starting
point, along with images that show a range of settings applied.

### Integrated gradients

For integrated gradients, you may need to adjust the clip values if the
attribution areas are too noisy.

    visualization: {
      "type": "OUTLINES",
      "polarity": "positive",
      "clip_percent_lowerbound": 70,
      "clip_percent_upperbound": 99.9,
      "color_map": "pink_green",
      "overlay_type": "grayscale"
    }

The following visualizations use both the `OUTLINES` and `PIXELS` types. The
columns labeled "Highly predictive only," "Moderately predictive," and "Almost
all" are examples of clipping at different levels that can help focus your
visualization.

### XRAI

For XRAI visualizations, we recommend starting with no clip values because the
overlay uses a gradient to show areas of high and low attribution.

    visualization: {
      "type": "PIXELS",
      "polarity": "positive",
      "clip_percent_lowerbound": 0,
      "clip_percent_upperbound": 100,
      "color_map": "viridis",
      "overlay_type": "grayscale"
    }

The following image is an XRAI visualization that uses the default `viridis`
color map and a range of overlay types.
The areas in yellow indicate the most influential regions that contributed
positively to the prediction.

What's next
-----------

- For details about other Vertex Explainable AI configuration options, read
  [Configuring explanations for custom-trained models](/vertex-ai/docs/explainable-ai/configuring-explanations).
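As a recap of the options above, the sample configurations can be assembled and sanity-checked as plain dictionaries before you attach them to a request. This is a hedged sketch: the `build_visualization` helper and its per-method defaults mapping are illustrative, not part of the Vertex AI SDK; only the field names and allowed values come from the documentation above.

```python
# Allowed values documented for the Visualization message (illustrative
# subset; see the API reference for the full list).
ALLOWED = {
    "type": {"OUTLINES", "PIXELS"},
    "polarity": {"positive", "negative", "both"},
}

# Hypothetical per-method defaults mirroring the recommendations above.
DEFAULTS = {
    "integrated-gradients": {"type": "OUTLINES", "color_map": "pink_green"},
    "xrai": {"type": "PIXELS", "color_map": "viridis"},
}

def build_visualization(method, **overrides):
    """Assemble a visualization config dict and check its values."""
    config = {"polarity": "positive", **DEFAULTS[method], **overrides}
    for key in ("type", "polarity"):
        if config[key] not in ALLOWED[key]:
            raise ValueError(f"invalid {key}: {config[key]!r}")
    lo = config.get("clip_percent_lowerbound", 0)
    hi = config.get("clip_percent_upperbound", 100)
    if not 0 <= lo <= hi <= 100:
        raise ValueError("clip percentiles must satisfy 0 <= lower <= upper <= 100")
    return config

# The integrated gradients sample configuration from above.
ig_config = build_visualization(
    "integrated-gradients",
    clip_percent_lowerbound=70,
    clip_percent_upperbound=99.9,
    overlay_type="grayscale",
)
```

Collecting the settings this way catches typos like a lowercase `outlines` or inverted clip bounds locally, before the configuration is sent as part of the `InputMetadata` message.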