public final class BatchPredictInputConfig extends GeneratedMessageV3 implements BatchPredictInputConfigOrBuilder
Input configuration for BatchPredict Action.
The format of the input depends on the ML problem of the model used for
prediction. Unless specified otherwise, the input source is expected to be
gcs_source.
The formats are represented in EBNF with commas being literal and with
non-terminal symbols defined near the end of this comment. The formats
are:
AutoML Vision
Classification
One or more CSV files where each line is a single column:
GCS_FILE_PATH
The Google Cloud Storage location of an image of up to
30MB in size. Supported extensions: .JPEG, .GIF, .PNG.
This path is treated as the ID in the batch predict output.
Sample rows:
gs://folder/image1.jpeg
gs://folder/image2.gif
gs://folder/image3.png
Object Detection
One or more CSV files where each line is a single column:
GCS_FILE_PATH
The Google Cloud Storage location of an image of up to
30MB in size. Supported extensions: .JPEG, .GIF, .PNG.
This path is treated as the ID in the batch predict output.
Sample rows:
gs://folder/image1.jpeg
gs://folder/image2.gif
gs://folder/image3.png
AutoML Video Intelligence
Classification
One or more CSV files where each line is in the following format:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END
GCS_FILE_PATH
is the Google Cloud Storage location of a video up to 50GB in
size and up to 3h in duration.
Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
TIME_SEGMENT_START
and TIME_SEGMENT_END
must be within the
length of the video, and the end time must be after the start time.
Sample rows:
gs://folder/video1.mp4,10,40
gs://folder/video1.mp4,20,60
gs://folder/vid2.mov,0,inf
Object Tracking
One or more CSV files where each line is in the following format:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END
GCS_FILE_PATH
is the Google Cloud Storage location of a video up to 50GB in
size and up to 3h in duration.
Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
TIME_SEGMENT_START
and TIME_SEGMENT_END
must be within the
length of the video, and the end time must be after the start time.
Sample rows:
gs://folder/video1.mp4,10,40
gs://folder/video1.mp4,20,60
gs://folder/vid2.mov,0,inf
AutoML Natural Language
Classification
One or more CSV files where each line is a single column:
GCS_FILE_PATH
GCS_FILE_PATH
is the Google Cloud Storage location of a text file.
Supported file extensions: .TXT, .PDF, .TIF, .TIFF
Text files can be no larger than 10MB in size.
Sample rows:
gs://folder/text1.txt
gs://folder/text2.pdf
gs://folder/text3.tif
Sentiment Analysis
One or more CSV files where each line is a single column:
GCS_FILE_PATH
GCS_FILE_PATH
is the Google Cloud Storage location of a text file.
Supported file extensions: .TXT, .PDF, .TIF, .TIFF
Text files can be no larger than 128kB in size.
Sample rows:
gs://folder/text1.txt
gs://folder/text2.pdf
gs://folder/text3.tif
Entity Extraction
One or more JSONL (JSON Lines) files that either provide inline text or
documents. You can only use one format, either inline text or documents,
for a single call to [AutoMl.BatchPredict].
Each JSONL file contains, per line, a proto that
wraps a temporary user-assigned TextSnippet ID (string up to 2000
characters long) called "id", a TextSnippet proto (in
JSON representation) and zero or more TextFeature protos. Any given
text snippet content must have 30,000 characters or less, and also
be UTF-8 NFC encoded (ASCII already is). The IDs provided should be
unique.
Each document JSONL file contains, per line, a proto that wraps a Document
proto with input_config
set. Each document cannot exceed 2MB in size.
Supported document extensions: .PDF, .TIF, .TIFF
Each JSONL file must not exceed 100MB in size, and no more than 20
JSONL files may be passed.
Sample inline JSONL file (Shown with artificial line
breaks. Actual line breaks are denoted by "\n".):
{
  "id": "my_first_id",
  "text_snippet": { "content": "dog car cat"},
  "text_features": [
    {
      "text_segment": {"start_offset": 4, "end_offset": 6},
      "structural_type": PARAGRAPH,
      "bounding_poly": {
        "normalized_vertices": [
          {"x": 0.1, "y": 0.1},
          {"x": 0.1, "y": 0.3},
          {"x": 0.3, "y": 0.3},
          {"x": 0.3, "y": 0.1},
        ]
      },
    }
  ],
}\n
{
  "id": "2",
  "text_snippet": {
    "content": "Extended sample content",
    "mime_type": "text/plain"
  }
}
Sample document JSONL file (Shown with artificial line
breaks. Actual line breaks are denoted by "\n".):
{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ]
      }
    }
  }
}\n
{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document2.tif" ]
      }
    }
  }
}
AutoML Tables
See Preparing your training data for more information.
You can use either
gcs_source
or
bigquery_source.
For gcs_source:
CSV file(s), each by itself 10GB or smaller and with a total size of
100GB or smaller, where the first file must have a header containing
column names. If the first row of a subsequent file is the same as
the header, then it is also treated as a header. All other rows
contain values for the corresponding columns.
The column names must contain the model's
input_feature_column_specs'
display_name-s
(order doesn't matter). The columns corresponding to the model's
input feature column specs must contain values compatible with the
column spec's data types. Prediction on all the rows, i.e. the CSV
lines, will be attempted.
Sample rows from a CSV file:
"First Name","Last Name","Dob","Addresses"
"John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]"
"Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]"
For bigquery_source:
The URI of a BigQuery table. The user data size of the BigQuery
table must be 100GB or smaller.
The column names must contain the model's
input_feature_column_specs'
display_name-s
(order doesn't matter). The columns corresponding to the model's
input feature column specs must contain values compatible with the
column spec's data types. Prediction on all the rows of the table
will be attempted.
Input field definitions:
GCS_FILE_PATH
: The path to a file on Google Cloud Storage. For example,
"gs://folder/video.avi".
TIME_SEGMENT_START
: (TIME_OFFSET)
Expresses a beginning, inclusive, of a time segment
within an example that has a time dimension
(e.g. video).
TIME_SEGMENT_END
: (TIME_OFFSET)
Expresses an end, exclusive, of a time segment within
an example that has a time dimension (e.g. video).
TIME_OFFSET
: A number of seconds as measured from the start of an
example (e.g. video). Fractions are allowed, up to a
microsecond precision. "inf" is allowed, and it means the end
of the example.
Errors:
If any of the provided CSV files can't be parsed or if more than a certain
percentage of CSV rows cannot be processed, then the operation fails and
prediction does not happen. Regardless of overall success or failure, the
per-row failures, up to a certain count cap, will be listed in
Operation.metadata.partial_failures.
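Putting the description above into code: a minimal sketch of building this message for a Cloud Storage CSV input. The bucket, file name, and class name BuildInputConfigExample are illustrative, not taken from this reference.
import com.google.cloud.automl.v1.BatchPredictInputConfig;
import com.google.cloud.automl.v1.GcsSource;

public class BuildInputConfigExample {
  public static void main(String[] args) {
    // Illustrative CSV of inputs, one GCS_FILE_PATH per line as described above.
    GcsSource gcsSource =
        GcsSource.newBuilder()
            .addInputUris("gs://folder/batch_predict_inputs.csv")
            .build();

    // gcs_source is the required (and, per this message, only) input source.
    BatchPredictInputConfig inputConfig =
        BatchPredictInputConfig.newBuilder().setGcsSource(gcsSource).build();

    System.out.println(inputConfig.hasGcsSource()); // prints "true"
  }
}
The resulting message is what a batch predict call would carry as its input configuration; only gcs_source is accepted here, matching the Required field documented below.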
Protobuf type google.cloud.automl.v1.BatchPredictInputConfig
Inheritance
Object > AbstractMessageLite<MessageType,BuilderType> > AbstractMessage > GeneratedMessageV3 > BatchPredictInputConfig
Implements
BatchPredictInputConfigOrBuilder
Static Fields
GCS_SOURCE_FIELD_NUMBER
public static final int GCS_SOURCE_FIELD_NUMBER
Field Value
Type | Description |
int |
Static Methods
getDefaultInstance()
public static BatchPredictInputConfig getDefaultInstance()
Returns
Type | Description |
BatchPredictInputConfig |
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Returns
Type | Description |
Descriptor |
newBuilder()
public static BatchPredictInputConfig.Builder newBuilder()
Returns
Type | Description |
BatchPredictInputConfig.Builder |
newBuilder(BatchPredictInputConfig prototype)
public static BatchPredictInputConfig.Builder newBuilder(BatchPredictInputConfig prototype)
Parameter
Name | Description |
prototype | BatchPredictInputConfig |
Returns
Type | Description |
BatchPredictInputConfig.Builder |
parseDelimitedFrom(InputStream input)
public static BatchPredictInputConfig parseDelimitedFrom(InputStream input)
Parameter
Name | Description |
input | InputStream |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
IOException |
parseDelimitedFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
public static BatchPredictInputConfig parseDelimitedFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Name | Description |
input | InputStream |
extensionRegistry | ExtensionRegistryLite |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
IOException |
parseFrom(byte[] data)
public static BatchPredictInputConfig parseFrom(byte[] data)
Parameter
Name | Description |
data | byte[] |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
InvalidProtocolBufferException |
parseFrom(byte[] data, ExtensionRegistryLite extensionRegistry)
public static BatchPredictInputConfig parseFrom(byte[] data, ExtensionRegistryLite extensionRegistry)
Parameters
Name | Description |
data | byte[] |
extensionRegistry | ExtensionRegistryLite |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
InvalidProtocolBufferException |
parseFrom(ByteString data)
public static BatchPredictInputConfig parseFrom(ByteString data)
Parameter
Name | Description |
data | ByteString |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
InvalidProtocolBufferException |
parseFrom(ByteString data, ExtensionRegistryLite extensionRegistry)
public static BatchPredictInputConfig parseFrom(ByteString data, ExtensionRegistryLite extensionRegistry)
Parameters
Name | Description |
data | ByteString |
extensionRegistry | ExtensionRegistryLite |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
InvalidProtocolBufferException |
parseFrom(CodedInputStream input)
public static BatchPredictInputConfig parseFrom(CodedInputStream input)
Parameter
Name | Description |
input | CodedInputStream |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
IOException |
parseFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public static BatchPredictInputConfig parseFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Name | Description |
input | CodedInputStream |
extensionRegistry | ExtensionRegistryLite |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
IOException |
parseFrom(InputStream input)
public static BatchPredictInputConfig parseFrom(InputStream input)
Parameter
Name | Description |
input | InputStream |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
IOException |
parseFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
public static BatchPredictInputConfig parseFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Name | Description |
input | InputStream |
extensionRegistry | ExtensionRegistryLite |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
IOException |
parseFrom(ByteBuffer data)
public static BatchPredictInputConfig parseFrom(ByteBuffer data)
Parameter
Name | Description |
data | ByteBuffer |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
InvalidProtocolBufferException |
parseFrom(ByteBuffer data, ExtensionRegistryLite extensionRegistry)
public static BatchPredictInputConfig parseFrom(ByteBuffer data, ExtensionRegistryLite extensionRegistry)
Parameters
Name | Description |
data | ByteBuffer |
extensionRegistry | ExtensionRegistryLite |
Returns
Type | Description |
BatchPredictInputConfig |
Exceptions
Type | Description |
InvalidProtocolBufferException |
parser()
public static Parser<BatchPredictInputConfig> parser()
Returns
Type | Description |
Parser<BatchPredictInputConfig> |
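As a hedged illustration of the static parse entry points listed above, the sketch below round-trips a message through toByteArray() (inherited from AbstractMessageLite) and parseFrom(byte[]). The URI and class name are illustrative only.
import com.google.cloud.automl.v1.BatchPredictInputConfig;
import com.google.cloud.automl.v1.GcsSource;
import com.google.protobuf.InvalidProtocolBufferException;

public class ParseRoundTripExample {
  public static void main(String[] args) throws InvalidProtocolBufferException {
    BatchPredictInputConfig original =
        BatchPredictInputConfig.newBuilder()
            .setGcsSource(GcsSource.newBuilder().addInputUris("gs://folder/inputs.csv"))
            .build();

    // Serialize to wire format, then hand the bytes to the static parseFrom overload.
    byte[] bytes = original.toByteArray();
    BatchPredictInputConfig parsed = BatchPredictInputConfig.parseFrom(bytes);

    System.out.println(parsed.equals(original)); // prints "true": protobuf equality is by value
  }
}
The stream-based overloads (parseFrom(InputStream), parseDelimitedFrom, and the ExtensionRegistryLite variants) follow the same pattern, differing only in where the bytes come from.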
Methods
equals(Object obj)
public boolean equals(Object obj)
Parameter
Name | Description |
obj | Object |
Returns
Type | Description |
boolean |
getDefaultInstanceForType()
public BatchPredictInputConfig getDefaultInstanceForType()
Returns
Type | Description |
BatchPredictInputConfig |
getGcsSource()
public GcsSource getGcsSource()
Required. The Google Cloud Storage location for the input content.
.google.cloud.automl.v1.GcsSource gcs_source = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description |
GcsSource | The gcsSource. |
getGcsSourceOrBuilder()
public GcsSourceOrBuilder getGcsSourceOrBuilder()
Required. The Google Cloud Storage location for the input content.
.google.cloud.automl.v1.GcsSource gcs_source = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description |
GcsSourceOrBuilder |
getParserForType()
public Parser<BatchPredictInputConfig> getParserForType()
Returns
Type | Description |
Parser<BatchPredictInputConfig> |
getSerializedSize()
public int getSerializedSize()
Returns
Type | Description |
int |
getSourceCase()
public BatchPredictInputConfig.SourceCase getSourceCase()
Returns
Type | Description |
BatchPredictInputConfig.SourceCase |
getUnknownFields()
public final UnknownFieldSet getUnknownFields()
Returns
Type | Description |
UnknownFieldSet |
hasGcsSource()
public boolean hasGcsSource()
Required. The Google Cloud Storage location for the input content.
.google.cloud.automl.v1.GcsSource gcs_source = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description |
boolean | Whether the gcsSource field is set. |
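A small consumer-side sketch of the accessors above: since gcs_source sits in the source oneof, callers typically check hasGcsSource() or getSourceCase() before reading it. getInputUrisList() is assumed here from GcsSource's repeated input_uris field; the class name is illustrative.
import com.google.cloud.automl.v1.BatchPredictInputConfig;

public final class InputConfigInspector {
  private InputConfigInspector() {}

  /** Prints the configured Cloud Storage URIs, if any, from the given input config. */
  static void printInputUris(BatchPredictInputConfig config) {
    // SourceCase reports which member of the "source" oneof is populated.
    System.out.println("source case: " + config.getSourceCase());

    if (config.hasGcsSource()) {
      // getInputUrisList() is generated from GcsSource's repeated input_uris field (assumed).
      for (String uri : config.getGcsSource().getInputUrisList()) {
        System.out.println("input uri: " + uri);
      }
    }
  }
}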
hashCode()
public int hashCode()
Returns
Type | Description |
int |
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
Type | Description |
FieldAccessorTable |
isInitialized()
public final boolean isInitialized()
Returns
Type | Description |
boolean |
newBuilderForType()
public BatchPredictInputConfig.Builder newBuilderForType()
Returns
Type | Description |
BatchPredictInputConfig.Builder |
newBuilderForType(GeneratedMessageV3.BuilderParent parent)
protected BatchPredictInputConfig.Builder newBuilderForType(GeneratedMessageV3.BuilderParent parent)
Parameter
Name | Description |
parent | BuilderParent |
Returns
Type | Description |
BatchPredictInputConfig.Builder |
newInstance(GeneratedMessageV3.UnusedPrivateParameter unused)
protected Object newInstance(GeneratedMessageV3.UnusedPrivateParameter unused)
Parameter
Name | Description |
unused | UnusedPrivateParameter |
Returns
Type | Description |
Object |
toBuilder()
public BatchPredictInputConfig.Builder toBuilder()
Returns
Type | Description |
BatchPredictInputConfig.Builder |
writeTo(CodedOutputStream output)
public void writeTo(CodedOutputStream output)
Parameter
Name | Description |
output | CodedOutputStream |
Exceptions
Type | Description |
IOException |
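writeTo(CodedOutputStream) is the low-level serialization hook. As a sketch, wrapping any OutputStream with CodedOutputStream.newInstance and remembering to flush produces the same bytes that toByteArray() would; the message contents and class name are illustrative.
import com.google.cloud.automl.v1.BatchPredictInputConfig;
import com.google.cloud.automl.v1.GcsSource;
import com.google.protobuf.CodedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class WriteToExample {
  public static void main(String[] args) throws IOException {
    BatchPredictInputConfig config =
        BatchPredictInputConfig.newBuilder()
            .setGcsSource(GcsSource.newBuilder().addInputUris("gs://folder/inputs.csv"))
            .build();

    // Wrap an in-memory stream and serialize explicitly via writeTo.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    CodedOutputStream coded = CodedOutputStream.newInstance(out);
    config.writeTo(coded);
    coded.flush(); // CodedOutputStream buffers internally, so flush before reading "out"

    System.out.println(out.size() == config.getSerializedSize()); // prints "true"
  }
}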