[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Improve feature-based explanations\n\nWhen you are working with custom-trained models, you can configure specific\nparameters to improve your explanations. This guide describes how to inspect the\nexplanations that you get from Vertex Explainable AI for error, and it describes how to\nadjust your Vertex Explainable AI configuration to mitigate error.\n\nIf you want to use Vertex Explainable AI with an AutoML tabular model,\nthen you don't need to perform any configuration; Vertex AI\nautomatically configures the model for Vertex Explainable AI. Skip this document and read\n[Getting explanations](/vertex-ai/docs/explainable-ai/getting-explanations).\n\nThe [Vertex Explainable AI feature attribution\nmethods](/vertex-ai/docs/explainable-ai/overview#compare-methods) are all based on variants of\nShapley values. Because\nShapley values are computationally very expensive, Vertex Explainable AI provides\napproximations instead of the exact values.\n\nYou can reduce the approximation error and get closer to the exact values by\nchanging the following inputs:\n\n- Increasing the number of integral steps or number of paths.\n- Changing the input baseline(s) you select.\n- Adding more input baselines. With the integrated gradients and XRAI methods, using additional baselines increases latency. Using additional baselines with the sampled Shapley method does not increase latency.\n\nInspect explanations for error\n------------------------------\n\nAfter you have [requested and received explanations from\nVertex Explainable AI](/vertex-ai/docs/explainable-ai/getting-explanations), you can check the explanations\nfor approximation error. If the explanations have high approximation error, then\nthe explanations might not be reliable. This section describes several ways to\ncheck for error.\n\n### Check the `approximationError` field\n\nFor each\n[`Attribution`](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations#attribution),\nVertex Explainable AI returns approximation error in the `approximationError` field. If\nyour approximation error exceeds 0.05, consider adjusting your Vertex Explainable AI\nconfiguration.\n\nFor the integrated gradients technique, we calculate the approximation error by\ncomparing the sum of the feature attributions to the difference between the predicted\nvalues for the input score and the baseline score. For the integrated gradients technique,\nthe feature attribution is an approximation of the integral of gradient values\nbetween the baseline and the input. 
Adjust your configuration
-------------------------

The following sections describe ways to adjust your Vertex Explainable AI
configuration to reduce error. To make any of the following changes, you must
[configure a new `Model` resource with an updated
`ExplanationSpec`](/vertex-ai/docs/explainable-ai/configuring-explanations) or
[override the `ExplanationSpec` of your existing
`Model`](/vertex-ai/docs/explainable-ai/configuring-explanations#override) by
redeploying it to an `Endpoint` resource or by getting new batch inferences.

### Increase steps or paths

To reduce approximation error, you can increase either of the following
values, as shown in the sketch after this list:

- the number of paths for sampled Shapley
  ([`SampledShapleyAttribution.pathCount`](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#sampledshapleyattribution))
- the number of integral steps for integrated gradients
  ([`IntegratedGradientsAttribution.stepCount`](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#integratedgradientsattribution))
  or XRAI
  ([`XraiAttribution.stepCount`](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#xraiattribution))
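As a sketch of where these fields live, the following Python dictionaries show
`ExplanationParameters` values with a raised path count and a raised step
count. The counts shown are illustrative placeholders, not tuned
recommendations; higher values reduce approximation error but increase
latency.

    # Minimal sketches of ExplanationParameters; the counts are illustrative.

    # For sampled Shapley, raise the number of feature permutations (paths):
    explanation_parameters = {
        "sampled_shapley_attribution": {
            "path_count": 25
        }
    }

    # For integrated gradients, raise the number of integral steps
    # (use "xrai_attribution" instead for XRAI):
    explanation_parameters = {
        "integrated_gradients_attribution": {
            "step_count": 50
        }
    }

Pass the dictionary wherever you supply your `ExplanationSpec`, as described
in [Configuring explanations for custom-trained
models](/vertex-ai/docs/explainable-ai/configuring-explanations).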
### Adjust baselines

Input baselines represent a feature that provides no additional information.
Baselines for tabular models can be median, minimum, maximum, or random values
in relation to your training data. Similarly, for image models, your baselines
can be a black image, a white image, a gray image, or an image with random
pixel values.

When you configure Vertex Explainable AI, you can optionally specify the
[`input_baselines` field](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#inputmetadata).
Otherwise, Vertex AI chooses input baselines for you. If you are encountering
the problems described in previous sections of this guide, then you might want
to adjust the `input_baselines` for each input of your `Model`.

In general:

- Start with one baseline representing median values.
- Change this baseline to one representing random values.
- Try two baselines, representing the minimum and maximum values.
- Add another baseline representing random values.

#### Example for tabular data

The following Python code creates an [`ExplanationMetadata`
message](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#explanationmetadata)
for a hypothetical TensorFlow model trained on tabular data.

Notice that `input_baselines` is a list where you can specify multiple
baselines. This example sets just one baseline. The baseline is a list of
median values for the training data (`train_data`, a pandas DataFrame, in this
example).

    explanation_metadata = {
        "inputs": {
            "FEATURE_NAME": {
                "input_tensor_name": "INPUT_TENSOR_NAME",
                # One baseline: the median of each feature in the training data.
                "input_baselines": [train_data.median().values.tolist()],
                "encoding": "bag_of_features",
                "index_feature_mapping": train_data.columns.tolist()
            }
        },
        "outputs": {
            "OUTPUT_NAME": {
                "output_tensor_name": "OUTPUT_TENSOR_NAME"
            }
        }
    }

See [Configuring explanations for custom-trained
models](/vertex-ai/docs/explainable-ai/configuring-explanations) for more
context on how to use this `ExplanationMetadata`.

To set two baselines representing minimum and maximum values, set
`input_baselines` as follows:
`[train_data.min().values.tolist(), train_data.max().values.tolist()]`

#### Example for image data

The following Python code creates an [`ExplanationMetadata`
message](/vertex-ai/docs/reference/rest/v1/ExplanationSpec#explanationmetadata)
for a hypothetical TensorFlow model trained on image data.

Notice that `input_baselines` is a list where you can specify multiple
baselines. This example sets just one baseline. The baseline is a list of
random values. Using random values for an image baseline is a good approach if
the images in your training dataset contain a lot of black and white.

Otherwise, set `input_baselines` to `[0, 1]` to represent black and white
images.

    import numpy as np

    # One baseline: a 192x192 RGB image of uniformly random pixel values.
    random_baseline = np.random.rand(192, 192, 3)

    explanation_metadata = {
        "inputs": {
            "FEATURE_NAME": {
                "input_tensor_name": "INPUT_TENSOR_NAME",
                "modality": "image",
                "input_baselines": [random_baseline.tolist()]
            }
        },
        "outputs": {
            "OUTPUT_NAME": {
                "output_tensor_name": "OUTPUT_TENSOR_NAME"
            }
        }
    }

What's next
-----------

- Follow the guide to [configuring
  explanations](/vertex-ai/docs/explainable-ai/configuring-explanations) to
  implement any configuration changes described on this page.