Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::PredictionService::Client.
Client for the PredictionService service.
A service for online predictions and explanations.
Inherits
- Object
Methods
.configure
def self.configure() { |config| ... } -> Client::Configuration
Configure the PredictionService Client class.
See Configuration for a description of the configuration fields.
- (config) — Configure the Client class.
- config (Client::Configuration)
```ruby
# Modify the configuration for all PredictionService clients
::Google::Cloud::AIPlatform::V1::PredictionService::Client.configure do |config|
  config.timeout = 10.0
end
```
#configure
def configure() { |config| ... } -> Client::Configuration
Configure the PredictionService Client instance.
The configuration is set to the derived mode, meaning that values can be changed, but structural changes (adding new fields, etc.) are not allowed. Structural changes should be made on Client.configure.
See Configuration for a description of the configuration fields.
- (config) — Configure the Client instance.
- config (Client::Configuration)
#direct_predict
def direct_predict(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::DirectPredictResponse
def direct_predict(endpoint: nil, inputs: nil, parameters: nil) -> ::Google::Cloud::AIPlatform::V1::DirectPredictResponse
Perform a unary online prediction request to a gRPC model server for Vertex first-party products and frameworks.
def direct_predict(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::DirectPredictResponse
Pass arguments to `direct_predict` via a request object, either of type DirectPredictRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::DirectPredictRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def direct_predict(endpoint: nil, inputs: nil, parameters: nil) -> ::Google::Cloud::AIPlatform::V1::DirectPredictResponse
Pass arguments to `direct_predict` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- endpoint (::String) — Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
- inputs (::Array<::Google::Cloud::AIPlatform::V1::Tensor, ::Hash>) — The prediction input.
- parameters (::Google::Cloud::AIPlatform::V1::Tensor, ::Hash) — The parameters that govern the prediction.
- (response, operation) — Access the result along with the RPC operation
- response (::Google::Cloud::AIPlatform::V1::DirectPredictResponse)
- operation (::GRPC::ActiveCall::Operation)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::DirectPredictRequest.new

# Call the direct_predict method.
result = client.direct_predict request

# The returned object is of type Google::Cloud::AIPlatform::V1::DirectPredictResponse.
p result
```
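The keyword-argument overload can be used in the same way. The sketch below uses placeholder resource names (`my-project`, `us-central1`, the endpoint ID) that must be replaced with real values, and assumes valid Google Cloud credentials in the environment; the tensor content is illustrative only.

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Hypothetical endpoint name; replace with a real resource before running.
endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"

# Inputs are Tensor messages; float_val carries flat float data here.
inputs = [Google::Cloud::AIPlatform::V1::Tensor.new(dtype: :FLOAT, float_val: [1.0, 2.0])]

result = client.direct_predict endpoint: endpoint, inputs: inputs
p result
```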
#direct_raw_predict
def direct_raw_predict(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::DirectRawPredictResponse
def direct_raw_predict(endpoint: nil, method_name: nil, input: nil) -> ::Google::Cloud::AIPlatform::V1::DirectRawPredictResponse
Perform a unary online prediction request to a gRPC model server for custom containers.
def direct_raw_predict(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::DirectRawPredictResponse
Pass arguments to `direct_raw_predict` via a request object, either of type DirectRawPredictRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::DirectRawPredictRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def direct_raw_predict(endpoint: nil, method_name: nil, input: nil) -> ::Google::Cloud::AIPlatform::V1::DirectRawPredictResponse
Pass arguments to `direct_raw_predict` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- endpoint (::String) — Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
- method_name (::String) — Fully qualified name of the API method being invoked to perform predictions. Format: `/namespace.Service/Method/` Example: `/tensorflow.serving.PredictionService/Predict`
- input (::String) — The prediction input.
- (response, operation) — Access the result along with the RPC operation
- response (::Google::Cloud::AIPlatform::V1::DirectRawPredictResponse)
- operation (::GRPC::ActiveCall::Operation)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::DirectRawPredictRequest.new

# Call the direct_raw_predict method.
result = client.direct_raw_predict request

# The returned object is of type Google::Cloud::AIPlatform::V1::DirectRawPredictResponse.
p result
```
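The keyword-argument overload makes the `method_name` format above concrete. This is a sketch with placeholder resource names and payload; it assumes a custom-container model server exposing the named gRPC method and valid credentials.

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Hypothetical endpoint name; replace with a real resource before running.
endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"

result = client.direct_raw_predict(
  endpoint: endpoint,
  # Fully qualified gRPC method on the model server, per the format above.
  method_name: "/tensorflow.serving.PredictionService/Predict",
  # Raw serialized request bytes understood by that method (placeholder here).
  input: "serialized request bytes"
)
p result
```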
#explain
def explain(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::ExplainResponse
def explain(endpoint: nil, instances: nil, parameters: nil, explanation_spec_override: nil, deployed_model_id: nil) -> ::Google::Cloud::AIPlatform::V1::ExplainResponse
Perform an online explanation.
If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated.
def explain(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::ExplainResponse
Pass arguments to `explain` via a request object, either of type ExplainRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::ExplainRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def explain(endpoint: nil, instances: nil, parameters: nil, explanation_spec_override: nil, deployed_model_id: nil) -> ::Google::Cloud::AIPlatform::V1::ExplainResponse
Pass arguments to `explain` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- endpoint (::String) — Required. The name of the Endpoint requested to serve the explanation. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
- instances (::Array<::Google::Protobuf::Value, ::Hash>) — Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the explanation call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] instance_schema_uri.
- parameters (::Google::Protobuf::Value, ::Hash) — The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] parameters_schema_uri.
- explanation_spec_override (::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride, ::Hash) — If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as:
- Explaining top-5 predictions results as opposed to top-1;
- Increasing path count or step count of the attribution methods to reduce approximate errors;
- Using different baselines for explaining the prediction results.
- deployed_model_id (::String) — If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
- (response, operation) — Access the result along with the RPC operation
- response (::Google::Cloud::AIPlatform::V1::ExplainResponse)
- operation (::GRPC::ActiveCall::Operation)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::ExplainRequest.new

# Call the explain method.
result = client.explain request

# The returned object is of type Google::Cloud::AIPlatform::V1::ExplainResponse.
p result
```
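The `deployed_model_id` keyword pins the request to one DeployedModel instead of following Endpoint.traffic_split. A sketch with placeholder resource IDs and instance data, assuming credentials and a DeployedModel with explanation_spec populated:

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Hypothetical names; replace with real resources before running.
endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"

# Instances are ::Google::Protobuf::Value objects (or equivalent Hashes).
instances = [
  Google::Protobuf::Value.new(
    struct_value: Google::Protobuf::Struct.from_hash("feature" => 1.0)
  )
]

# Route the request to one specific DeployedModel, bypassing traffic_split.
result = client.explain endpoint: endpoint,
                        instances: instances,
                        deployed_model_id: "9876543210"
p result
```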
#generate_content
def generate_content(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::GenerateContentResponse
def generate_content(model: nil, contents: nil, system_instruction: nil, tools: nil, tool_config: nil, safety_settings: nil, generation_config: nil) -> ::Google::Cloud::AIPlatform::V1::GenerateContentResponse
Generate content with multimodal inputs.
def generate_content(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::GenerateContentResponse
Pass arguments to `generate_content` via a request object, either of type GenerateContentRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::GenerateContentRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def generate_content(model: nil, contents: nil, system_instruction: nil, tools: nil, tool_config: nil, safety_settings: nil, generation_config: nil) -> ::Google::Cloud::AIPlatform::V1::GenerateContentResponse
Pass arguments to `generate_content` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- model (::String) — Required. The name of the publisher model requested to serve the prediction. Format: `projects/{project}/locations/{location}/publishers/*/models/*`
- contents (::Array<::Google::Cloud::AIPlatform::V1::Content, ::Hash>) — Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
- system_instruction (::Google::Cloud::AIPlatform::V1::Content, ::Hash) — Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph.
- tools (::Array<::Google::Cloud::AIPlatform::V1::Tool, ::Hash>) — Optional. A list of Tools the model may use to generate the next response. A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.
- tool_config (::Google::Cloud::AIPlatform::V1::ToolConfig, ::Hash) — Optional. Tool config. This config is shared for all tools provided in the request.
- safety_settings (::Array<::Google::Cloud::AIPlatform::V1::SafetySetting, ::Hash>) — Optional. Per request settings for blocking unsafe content. Enforced on GenerateContentResponse.candidates.
- generation_config (::Google::Cloud::AIPlatform::V1::GenerationConfig, ::Hash) — Optional. Generation config.
- (response, operation) — Access the result along with the RPC operation
- response (::Google::Cloud::AIPlatform::V1::GenerateContentResponse)
- operation (::GRPC::ActiveCall::Operation)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::GenerateContentRequest.new

# Call the generate_content method.
result = client.generate_content request

# The returned object is of type Google::Cloud::AIPlatform::V1::GenerateContentResponse.
p result
```
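With the keyword-argument overload, the required `model` and `contents` fields can be set directly. The publisher model name below is a placeholder (substitute a real project, location, and model); the call assumes valid credentials.

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Hypothetical publisher model name; replace with a real one before running.
model = "projects/my-project/locations/us-central1/publishers/google/models/gemini-1.0-pro"

# A single-turn conversation: one user Content with one text Part.
contents = [
  Google::Cloud::AIPlatform::V1::Content.new(
    role: "user",
    parts: [Google::Cloud::AIPlatform::V1::Part.new(text: "Say hello.")]
  )
]

result = client.generate_content model: model, contents: contents
p result
```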
#iam_policy_client
def iam_policy_client() -> Google::Iam::V1::IAMPolicy::Client
Get the associated client for the IAMPolicy mix-in.
- (Google::Iam::V1::IAMPolicy::Client)
#initialize
def initialize() { |config| ... } -> Client
Create a new PredictionService client object.
- (config) — Configure the PredictionService client.
- config (Client::Configuration)
- (Client) — a new instance of Client
```ruby
# Create a client using the default configuration
client = ::Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a client using a custom configuration
client = ::Google::Cloud::AIPlatform::V1::PredictionService::Client.new do |config|
  config.timeout = 10.0
end
```
#location_client
def location_client() -> Google::Cloud::Location::Locations::Client
Get the associated client for the Locations mix-in.
- (Google::Cloud::Location::Locations::Client)
#predict
def predict(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::PredictResponse
def predict(endpoint: nil, instances: nil, parameters: nil) -> ::Google::Cloud::AIPlatform::V1::PredictResponse
Perform an online prediction.
def predict(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::PredictResponse
Pass arguments to `predict` via a request object, either of type PredictRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::PredictRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def predict(endpoint: nil, instances: nil, parameters: nil) -> ::Google::Cloud::AIPlatform::V1::PredictResponse
Pass arguments to `predict` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- endpoint (::String) — Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
- instances (::Array<::Google::Protobuf::Value, ::Hash>) — Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the prediction call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] instance_schema_uri.
- parameters (::Google::Protobuf::Value, ::Hash) — The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] parameters_schema_uri.
- (response, operation) — Access the result along with the RPC operation
- response (::Google::Cloud::AIPlatform::V1::PredictResponse)
- operation (::GRPC::ActiveCall::Operation)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::PredictRequest.new

# Call the predict method.
result = client.predict request

# The returned object is of type Google::Cloud::AIPlatform::V1::PredictResponse.
p result
```
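The same call with the keyword-argument overload, building the `instances` list from `::Google::Protobuf::Value` objects. Resource names and feature data are placeholders; running it requires a deployed model and valid credentials.

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Hypothetical endpoint name; replace with a real resource before running.
endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"

# Each instance is a Protobuf Value; here a struct built from a Hash.
instances = [
  Google::Protobuf::Value.new(
    struct_value: Google::Protobuf::Struct.from_hash("feature" => 1.0)
  )
]

result = client.predict endpoint: endpoint, instances: instances
p result
```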
#raw_predict
def raw_predict(request, options = nil) -> ::Google::Api::HttpBody
def raw_predict(endpoint: nil, http_body: nil) -> ::Google::Api::HttpBody
Perform an online prediction with an arbitrary HTTP payload.
The response includes the following HTTP headers:
- `X-Vertex-AI-Endpoint-Id`: ID of the Endpoint that served this prediction.
- `X-Vertex-AI-Deployed-Model-Id`: ID of the Endpoint's DeployedModel that served this prediction.
def raw_predict(request, options = nil) -> ::Google::Api::HttpBody
Pass arguments to `raw_predict` via a request object, either of type RawPredictRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::RawPredictRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def raw_predict(endpoint: nil, http_body: nil) -> ::Google::Api::HttpBody
Pass arguments to `raw_predict` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- endpoint (::String) — Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
- http_body (::Google::Api::HttpBody, ::Hash) — The prediction input. Supports HTTP headers and arbitrary data payload. A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model. You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method.
- (response, operation) — Access the result along with the RPC operation
- response (::Google::Api::HttpBody)
- operation (::GRPC::ActiveCall::Operation)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::RawPredictRequest.new

# Call the raw_predict method.
result = client.raw_predict request

# The returned object is of type Google::Api::HttpBody.
p result
```
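A keyword-argument sketch showing an arbitrary JSON payload wrapped in `Google::Api::HttpBody`. The endpoint name and payload shape are placeholders; what the server accepts depends entirely on the deployed model's container.

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Hypothetical endpoint name; replace with a real resource before running.
endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"

# Arbitrary payload; the model server interprets content_type and data.
body = Google::Api::HttpBody.new(
  content_type: "application/json",
  data: '{"instances": [{"feature": 1.0}]}'
)

result = client.raw_predict endpoint: endpoint, http_body: body
p result
```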
#server_streaming_predict
def server_streaming_predict(request, options = nil) -> ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>
def server_streaming_predict(endpoint: nil, inputs: nil, parameters: nil) -> ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>
Perform a server-side streaming online prediction request for Vertex LLM streaming.
def server_streaming_predict(request, options = nil) -> ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>
Pass arguments to `server_streaming_predict` via a request object, either of type StreamingPredictRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::StreamingPredictRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def server_streaming_predict(endpoint: nil, inputs: nil, parameters: nil) -> ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>
Pass arguments to `server_streaming_predict` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- endpoint (::String) — Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
- inputs (::Array<::Google::Cloud::AIPlatform::V1::Tensor, ::Hash>) — The prediction input.
- parameters (::Google::Cloud::AIPlatform::V1::Tensor, ::Hash) — The parameters that govern the prediction.
- (response, operation) — Access the result along with the RPC operation
- response (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>)
- operation (::GRPC::ActiveCall::Operation)
- (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::StreamingPredictRequest.new

# Call the server_streaming_predict method to start streaming.
output = client.server_streaming_predict request

# The returned object is a streamed enumerable yielding elements of type
# ::Google::Cloud::AIPlatform::V1::StreamingPredictResponse
output.each do |current_response|
  p current_response
end
```
#stream_direct_predict
def stream_direct_predict(request, options = nil) { |response, operation| ... } -> ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamDirectPredictResponse>
Perform a streaming online prediction request to a gRPC model server for Vertex first-party products and frameworks.
- request (::Gapic::StreamInput, ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamDirectPredictRequest, ::Hash>) — An enumerable of StreamDirectPredictRequest instances.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
- (response, operation) — Access the result along with the RPC operation
- response (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamDirectPredictResponse>)
- operation (::GRPC::ActiveCall::Operation)
- (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamDirectPredictResponse>)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create an input stream.
input = Gapic::StreamInput.new

# Call the stream_direct_predict method to start streaming.
output = client.stream_direct_predict input

# Send requests on the stream. For each request object, set fields by
# passing keyword arguments. Be sure to close the stream when done.
input << Google::Cloud::AIPlatform::V1::StreamDirectPredictRequest.new
input << Google::Cloud::AIPlatform::V1::StreamDirectPredictRequest.new
input.close

# The returned object is a streamed enumerable yielding elements of type
# ::Google::Cloud::AIPlatform::V1::StreamDirectPredictResponse
output.each do |current_response|
  p current_response
end
```
#stream_direct_raw_predict
def stream_direct_raw_predict(request, options = nil) { |response, operation| ... } -> ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamDirectRawPredictResponse>
Perform a streaming online prediction request to a gRPC model server for custom containers.
- request (::Gapic::StreamInput, ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamDirectRawPredictRequest, ::Hash>) — An enumerable of StreamDirectRawPredictRequest instances.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
- (response, operation) — Access the result along with the RPC operation
- response (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamDirectRawPredictResponse>)
- operation (::GRPC::ActiveCall::Operation)
- (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamDirectRawPredictResponse>)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create an input stream.
input = Gapic::StreamInput.new

# Call the stream_direct_raw_predict method to start streaming.
output = client.stream_direct_raw_predict input

# Send requests on the stream. For each request object, set fields by
# passing keyword arguments. Be sure to close the stream when done.
input << Google::Cloud::AIPlatform::V1::StreamDirectRawPredictRequest.new
input << Google::Cloud::AIPlatform::V1::StreamDirectRawPredictRequest.new
input.close

# The returned object is a streamed enumerable yielding elements of type
# ::Google::Cloud::AIPlatform::V1::StreamDirectRawPredictResponse
output.each do |current_response|
  p current_response
end
```
#stream_generate_content
def stream_generate_content(request, options = nil) -> ::Enumerable<::Google::Cloud::AIPlatform::V1::GenerateContentResponse>
def stream_generate_content(model: nil, contents: nil, system_instruction: nil, tools: nil, tool_config: nil, safety_settings: nil, generation_config: nil) -> ::Enumerable<::Google::Cloud::AIPlatform::V1::GenerateContentResponse>
Generate content with multimodal inputs with streaming support.
def stream_generate_content(request, options = nil) -> ::Enumerable<::Google::Cloud::AIPlatform::V1::GenerateContentResponse>
Pass arguments to `stream_generate_content` via a request object, either of type GenerateContentRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::GenerateContentRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def stream_generate_content(model: nil, contents: nil, system_instruction: nil, tools: nil, tool_config: nil, safety_settings: nil, generation_config: nil) -> ::Enumerable<::Google::Cloud::AIPlatform::V1::GenerateContentResponse>
Pass arguments to `stream_generate_content` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- model (::String) — Required. The name of the publisher model requested to serve the prediction. Format: `projects/{project}/locations/{location}/publishers/*/models/*`
- contents (::Array<::Google::Cloud::AIPlatform::V1::Content, ::Hash>) — Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
- system_instruction (::Google::Cloud::AIPlatform::V1::Content, ::Hash) — Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph.
- tools (::Array<::Google::Cloud::AIPlatform::V1::Tool, ::Hash>) — Optional. A list of Tools the model may use to generate the next response. A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.
- tool_config (::Google::Cloud::AIPlatform::V1::ToolConfig, ::Hash) — Optional. Tool config. This config is shared for all tools provided in the request.
- safety_settings (::Array<::Google::Cloud::AIPlatform::V1::SafetySetting, ::Hash>) — Optional. Per request settings for blocking unsafe content. Enforced on GenerateContentResponse.candidates.
- generation_config (::Google::Cloud::AIPlatform::V1::GenerationConfig, ::Hash) — Optional. Generation config.
- (response, operation) — Access the result along with the RPC operation
- response (::Enumerable<::Google::Cloud::AIPlatform::V1::GenerateContentResponse>)
- operation (::GRPC::ActiveCall::Operation)
- (::Enumerable<::Google::Cloud::AIPlatform::V1::GenerateContentResponse>)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::GenerateContentRequest.new

# Call the stream_generate_content method to start streaming.
output = client.stream_generate_content request

# The returned object is a streamed enumerable yielding elements of type
# ::Google::Cloud::AIPlatform::V1::GenerateContentResponse
output.each do |current_response|
  p current_response
end
```
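The keyword-argument overload also returns the streamed enumerable, so responses can be consumed as they arrive. Model name and prompt are placeholders; the call assumes valid credentials.

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Hypothetical publisher model name; replace with a real one before running.
model = "projects/my-project/locations/us-central1/publishers/google/models/gemini-1.0-pro"

contents = [
  Google::Cloud::AIPlatform::V1::Content.new(
    role: "user",
    parts: [Google::Cloud::AIPlatform::V1::Part.new(text: "Tell me a short story.")]
  )
]

# Each yielded chunk is a partial GenerateContentResponse.
client.stream_generate_content(model: model, contents: contents).each do |chunk|
  p chunk
end
```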
#stream_raw_predict
def stream_raw_predict(request, options = nil) -> ::Enumerable<::Google::Api::HttpBody>
def stream_raw_predict(endpoint: nil, http_body: nil) -> ::Enumerable<::Google::Api::HttpBody>
Perform a streaming online prediction with an arbitrary HTTP payload.
def stream_raw_predict(request, options = nil) -> ::Enumerable<::Google::Api::HttpBody>
Pass arguments to `stream_raw_predict` via a request object, either of type StreamRawPredictRequest or an equivalent Hash.
- request (::Google::Cloud::AIPlatform::V1::StreamRawPredictRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def stream_raw_predict(endpoint: nil, http_body: nil) -> ::Enumerable<::Google::Api::HttpBody>
Pass arguments to `stream_raw_predict` via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
- endpoint (::String) — Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
- http_body (::Google::Api::HttpBody, ::Hash) — The prediction input. Supports HTTP headers and arbitrary data payload.
- (response, operation) — Access the result along with the RPC operation
- response (::Enumerable<::Google::Api::HttpBody>)
- operation (::GRPC::ActiveCall::Operation)
- (::Enumerable<::Google::Api::HttpBody>)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::StreamRawPredictRequest.new

# Call the stream_raw_predict method to start streaming.
output = client.stream_raw_predict request

# The returned object is a streamed enumerable yielding elements of type
# ::Google::Api::HttpBody
output.each do |current_response|
  p current_response
end
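The same call can also be made with keyword arguments instead of a request object. A minimal sketch, assuming default credentials are configured; the project, location, and endpoint IDs below are placeholders, and the JSON payload is illustrative only:

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Placeholder resource name; substitute your own project, location, and endpoint IDs.
endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"

# The prediction input is an arbitrary payload wrapped in a Google::Api::HttpBody.
http_body = Google::Api::HttpBody.new(
  content_type: "application/json",
  data: "{\"instances\": []}"
)

output = client.stream_raw_predict endpoint: endpoint, http_body: http_body
output.each do |chunk|
  p chunk
end
```

Because this call contacts a live Vertex AI endpoint, it requires valid credentials and a deployed model; it is not runnable as-is.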
#streaming_predict
def streaming_predict(request, options = nil) { |response, operation| ... } -> ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>
Perform a streaming online prediction request for Vertex first-party products and frameworks.
- request (::Gapic::StreamInput, ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictRequest, ::Hash>) — An enumerable of StreamingPredictRequest instances.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
- (response, operation) — Access the result along with the RPC operation
- response (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>)
- operation (::GRPC::ActiveCall::Operation)
- (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingPredictResponse>)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create an input stream.
input = Gapic::StreamInput.new

# Call the streaming_predict method to start streaming.
output = client.streaming_predict input

# Send requests on the stream. For each request object, set fields by
# passing keyword arguments. Be sure to close the stream when done.
input << Google::Cloud::AIPlatform::V1::StreamingPredictRequest.new
input << Google::Cloud::AIPlatform::V1::StreamingPredictRequest.new
input.close

# The returned object is a streamed enumerable yielding elements of type
# ::Google::Cloud::AIPlatform::V1::StreamingPredictResponse
output.each do |current_response|
  p current_response
end
#streaming_raw_predict
def streaming_raw_predict(request, options = nil) { |response, operation| ... } -> ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingRawPredictResponse>
Perform a streaming online prediction request through gRPC.
- request (::Gapic::StreamInput, ::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingRawPredictRequest, ::Hash>) — An enumerable of StreamingRawPredictRequest instances.
- options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
- (response, operation) — Access the result along with the RPC operation
- response (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingRawPredictResponse>)
- operation (::GRPC::ActiveCall::Operation)
- (::Enumerable<::Google::Cloud::AIPlatform::V1::StreamingRawPredictResponse>)
- (::Google::Cloud::Error) — if the RPC is aborted.
Basic example
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Create an input stream.
input = Gapic::StreamInput.new

# Call the streaming_raw_predict method to start streaming.
output = client.streaming_raw_predict input

# Send requests on the stream. For each request object, set fields by
# passing keyword arguments. Be sure to close the stream when done.
input << Google::Cloud::AIPlatform::V1::StreamingRawPredictRequest.new
input << Google::Cloud::AIPlatform::V1::StreamingRawPredictRequest.new
input.close

# The returned object is a streamed enumerable yielding elements of type
# ::Google::Cloud::AIPlatform::V1::StreamingRawPredictResponse
output.each do |current_response|
  p current_response
end
#universe_domain
def universe_domain() -> String
The effective universe domain for this client.
- (String)
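Basic example, sketched under the assumption that default configuration is in effect (no universe domain override in the client configuration or environment):

```ruby
require "google/cloud/ai_platform/v1"

client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

# Returns the universe domain the client is configured against,
# e.g. "googleapis.com" unless overridden via configuration.
puts client.universe_domain
```

Like the other examples on this page, this requires valid application credentials to construct the client.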