# Vertex AI v1beta1 API - Class ImageSegmentationPredictionResult (1.0.0-beta47)

    public sealed class ImageSegmentationPredictionResult : IMessage<ImageSegmentationPredictionResult>, IEquatable<ImageSegmentationPredictionResult>, IDeepCloneable<ImageSegmentationPredictionResult>, IBufferMessage, IMessage

Reference documentation and code samples for the Vertex AI v1beta1 API class ImageSegmentationPredictionResult.

Prediction output format for Image Segmentation.
Properties
----------

### CategoryMask

    public string CategoryMask { get; set; }

A PNG image where each pixel in the mask represents the category to which
the corresponding pixel in the original image was predicted to belong. The
size of this image is the same as the original image. The mapping between
each AnnotationSpec and its color can be found in the model's metadata. The
model chooses the most likely category for each pixel; if no category
reaches the confidence threshold, the pixel is marked as background.
### ConfidenceMask

    public string ConfidenceMask { get; set; }

A one-channel image encoded as an 8-bit lossless PNG. The size of the image
is the same as the original image. For a given pixel, a darker color means
less confidence in the correctness of the category assigned to the
corresponding pixel in the CategoryMask. Black means no confidence and
white means complete confidence.
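Both mask properties are exposed as strings. In JSON prediction responses, binary payloads such as these PNGs are typically base64-encoded; assuming that encoding, decoding a mask to a file might be sketched as follows (the `SaveMask` helper and the file names are illustrative, not part of the API):

```csharp
using System;
using System.IO;

static class MaskDecoding
{
    // Hypothetical helper: assumes the mask string holds base64-encoded
    // PNG bytes, which is how binary fields appear in JSON responses.
    public static void SaveMask(string maskValue, string path)
    {
        byte[] pngBytes = Convert.FromBase64String(maskValue);
        File.WriteAllBytes(path, pngBytes);
    }
}

// Usage (result would come from a prediction response):
//   MaskDecoding.SaveMask(result.CategoryMask, "category_mask.png");
//   MaskDecoding.SaveMask(result.ConfidenceMask, "confidence_mask.png");
```

Once written to disk, the category mask can be inspected with any image viewer; each distinct color corresponds to one AnnotationSpec per the model's metadata.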
Inheritance
-----------

[object](https://learn.microsoft.com/dotnet/api/system.object) > ImageSegmentationPredictionResult

Implements
----------

[IMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IMessage-1.html)<ImageSegmentationPredictionResult>, [IEquatable](https://learn.microsoft.com/dotnet/api/system.iequatable-1)<ImageSegmentationPredictionResult>, [IDeepCloneable](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IDeepCloneable-1.html)<ImageSegmentationPredictionResult>, [IBufferMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IBufferMessage.html), [IMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IMessage.html)

Inherited Members
-----------------

[object.GetHashCode()](https://learn.microsoft.com/dotnet/api/system.object.gethashcode), [object.GetType()](https://learn.microsoft.com/dotnet/api/system.object.gettype), [object.ToString()](https://learn.microsoft.com/dotnet/api/system.object.tostring)

Namespace
---------

[Google.Cloud.AIPlatform.V1Beta1.Schema.Predict.Prediction](/dotnet/docs/reference/Google.Cloud.AIPlatform.V1Beta1/latest/Google.Cloud.AIPlatform.V1Beta1.Schema.Predict.Prediction)

Assembly
--------

Google.Cloud.AIPlatform.V1Beta1.dll

Constructors
------------

### ImageSegmentationPredictionResult()

    public ImageSegmentationPredictionResult()

### ImageSegmentationPredictionResult(ImageSegmentationPredictionResult)

    public ImageSegmentationPredictionResult(ImageSegmentationPredictionResult other)

Last updated 2025-09-04 UTC.
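As a quick illustration of the two constructor overloads, an instance can be created with the default constructor, populated through its documented string properties, and deep-copied with the copying overload (the placeholder values are illustrative; real responses carry encoded PNG data):

```csharp
using Google.Cloud.AIPlatform.V1Beta1.Schema.Predict.Prediction;

class ConstructorExample
{
    static void Main()
    {
        // Default constructor, then populate the documented properties.
        var result = new ImageSegmentationPredictionResult
        {
            CategoryMask = "placeholder-category-mask",     // illustrative value
            ConfidenceMask = "placeholder-confidence-mask"  // illustrative value
        };

        // Copying constructor: produces an independent copy of the message.
        var copy = new ImageSegmentationPredictionResult(result);
    }
}
```

Because the class implements IEquatable<ImageSegmentationPredictionResult>, a copy made this way compares equal to the original by value.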