API documentation for the aiplatform package.
Classes
Artifact
Metadata Artifact resource for Vertex AI
AutoMLForecastingTrainingJob
Class to train AutoML forecasting models.
AutoMLImageTrainingJob
Constructs an AutoML Image Training Job.
AutoMLTabularTrainingJob
Constructs an AutoML Tabular Training Job.
Example usage:
job = training_jobs.AutoMLTabularTrainingJob(
    display_name="my_display_name",
    optimization_prediction_type="classification",
    optimization_objective="minimize-log-loss",
    column_specs={"column_1": "auto", "column_2": "numeric"},
    labels={'key': 'value'},
)
AutoMLTextTrainingJob
Constructs an AutoML Text Training Job.
AutoMLVideoTrainingJob
Constructs an AutoML Video Training Job.
BatchPredictionJob
Retrieves a BatchPredictionJob resource and instantiates its representation.
CustomContainerTrainingJob
Class to launch a Custom Training Job in Vertex AI using a Container.
CustomJob
Vertex AI Custom Job.
CustomPythonPackageTrainingJob
Class to launch a Custom Training Job in Vertex AI using a Python Package.
Takes a training implementation as a python package and executes that package in Cloud Vertex AI Training.
CustomTrainingJob
Class to launch a Custom Training Job in Vertex AI using a script.
Takes a training implementation as a python script and executes that script in Cloud Vertex AI Training.
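For example, a minimal sketch of the typical flow (the script path, container image URIs, machine type, and model name below are illustrative placeholders; it assumes aiplatform.init was called with a staging_bucket):
job = aiplatform.CustomTrainingJob(
    display_name="my-custom-training-job",
    script_path="task.py",  # local training script to upload and run
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
    requirements=["pandas"],
    model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
)
model = job.run(
    model_display_name="my-model",
    replica_count=1,
    machine_type="n1-standard-4",
)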
Endpoint
Retrieves an endpoint resource.
EntityType
Public managed EntityType resource for Vertex AI.
Execution
Metadata Execution resource for Vertex AI
Experiment
Represents a Vertex AI Experiment resource.
ExperimentRun
A Vertex AI Experiment run.
Feature
Managed feature resource for Vertex AI.
Featurestore
Managed featurestore resource for Vertex AI.
HyperparameterTuningJob
Vertex AI Hyperparameter Tuning Job.
ImageDataset
A managed image dataset resource for Vertex AI.
Use this class to work with a managed image dataset. To create a managed image dataset, you need a datasource file in CSV format and a schema file in YAML format. A schema is optional for a custom model. You put the CSV file and the schema into Cloud Storage buckets.
Use image data for the following objectives:
- Single-label classification. For more information, see Prepare image training data for single-label classification.
- Multi-label classification. For more information, see Prepare image training data for multi-label classification.
- Object detection. For more information, see Prepare image training data for object detection.
The following code shows you how to create an image dataset by importing data from a CSV datasource file and a YAML schema file. The schema file you use depends on whether your image dataset is used for single-label classification, multi-label classification, or object detection.
my_dataset = aiplatform.ImageDataset.create(
display_name="my-image-dataset",
gcs_source=['gs://path/to/my/image-dataset.csv'],
    import_schema_uri='gs://path/to/my/schema.yaml'
)
MatchingEngineIndex
Matching Engine index resource for Vertex AI.
MatchingEngineIndexEndpoint
Matching Engine index endpoint resource for Vertex AI.
Model
Retrieves the model resource and instantiates its representation.
ModelDeploymentMonitoringJob
Vertex AI Model Deployment Monitoring Job.
This class should be used in conjunction with the Endpoint class in order to configure model monitoring for deployed models.
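A rough sketch of that pattern follows (the endpoint resource name, feature thresholds, and email address are placeholders, and the model_monitoring helper configs shown are assumptions to verify against the current SDK):
endpoint = aiplatform.Endpoint("projects/123/locations/us-central1/endpoints/456")
monitoring_job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="my-monitoring-job",
    endpoint=endpoint,
    logging_sampling_strategy=aiplatform.model_monitoring.RandomSampleConfig(sample_rate=0.5),
    schedule_config=aiplatform.model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    objective_configs=aiplatform.model_monitoring.ObjectiveConfig(
        drift_detection_config=aiplatform.model_monitoring.DriftDetectionConfig(
            drift_thresholds={"feature_a": 0.3}
        )
    ),
    alert_config=aiplatform.model_monitoring.EmailAlertConfig(user_emails=["user@example.com"]),
)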
ModelEvaluation
Retrieves the ModelEvaluation resource and instantiates its representation.
PipelineJob
Retrieves a PipelineJob resource and instantiates its representation.
PipelineJobSchedule
Retrieves a PipelineJobSchedule resource and instantiates its representation.
PrivateEndpoint
Represents a Vertex AI PrivateEndpoint resource.
SequenceToSequencePlusForecastingTrainingJob
Class to train Sequence to Sequence (Seq2Seq) forecasting models.
TabularDataset
A managed tabular dataset resource for Vertex AI.
Use this class to work with tabular datasets. You can use a CSV file, BigQuery, or a pandas DataFrame to create a tabular dataset. For more information about paging through BigQuery data, see Read data with BigQuery API using pagination. For more information about tabular data, see Tabular data.
The following code shows you how to create and import a tabular dataset with a CSV file.
my_dataset = aiplatform.TabularDataset.create(
display_name="my-dataset", gcs_source=['gs://path/to/my/dataset.csv'])
The following code shows you how to create a dataset and then import data into it in two distinct steps (the example uses a text dataset):
my_dataset = aiplatform.TextDataset.create(
    display_name="my-dataset")
my_dataset.import_data(
    gcs_source=['gs://path/to/my/dataset.csv'],
    import_schema_uri=aiplatform.schema.dataset.ioformat.text.multi_label_classification
)
If you create a tabular dataset with a pandas DataFrame, you need to use a BigQuery table to stage the data for Vertex AI:
my_dataset = aiplatform.TabularDataset.create_from_dataframe(
df_source=my_pandas_dataframe,
staging_path=f"bq://{bq_dataset_id}.table-unique"
)
TemporalFusionTransformerForecastingTrainingJob
Class to train Temporal Fusion Transformer (TFT) forecasting models.
Tensorboard
Managed tensorboard resource for Vertex AI.
TensorboardExperiment
Managed TensorBoard experiment resource for Vertex AI.
TensorboardRun
Managed TensorBoard run resource for Vertex AI.
TensorboardTimeSeries
Managed TensorBoard time series resource for Vertex AI.
TextDataset
Managed text dataset resource for Vertex AI.
TimeSeriesDataset
Managed time series dataset resource for Vertex AI.
TimeSeriesDenseEncoderForecastingTrainingJob
Class to train Time series Dense Encoder (TiDE) forecasting models.
VideoDataset
Managed video dataset resource for Vertex AI.
Functions
autolog
autolog(disable=False)
Enables autologging of parameters and metrics to Vertex Experiments.
After calling aiplatform.autolog(), any metrics and parameters from model training calls with supported ML frameworks will be automatically logged to Vertex Experiments.
Using autologging requires setting an experiment and experiment_tensorboard.
Parameter
| Name | Description |
| --- | --- |
| disable | Optional. Whether to disable autologging. Defaults to False. If set to True, this resets the MLFlow tracking URI to its previous state before autologging was called and removes logging filters. |
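Example usage (a sketch; the project, TensorBoard resource name, and training data are placeholders, and it assumes scikit-learn is among the supported frameworks in your environment):
from sklearn.linear_model import LinearRegression

aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="my-experiment",
    experiment_tensorboard="projects/123/locations/us-central1/tensorboards/456",
)
aiplatform.autolog()

model = LinearRegression()
model.fit(X, y)  # parameters and metrics from this call are logged automatically

aiplatform.autolog(disable=True)  # turn autologging off again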
end_run
end_run(
state: google.cloud.aiplatform_v1.types.execution.Execution.State = State.COMPLETE,
)
Ends the current experiment run.
aiplatform.start_run('my-run')
...
aiplatform.end_run()
end_upload_tb_log
end_upload_tb_log()
Ends the current TensorBoard uploader.
aiplatform.start_upload_tb_log(...)
...
aiplatform.end_upload_tb_log()
get_experiment_df
get_experiment_df(
experiment: typing.Optional[str] = None, *, include_time_series: bool = True
) -> pd.DataFrame
Returns a Pandas DataFrame of the parameters and metrics associated with one experiment.
Example:
aiplatform.init(experiment='exp-1')
aiplatform.start_run(run='run-1')
aiplatform.log_params({'learning_rate': 0.1})
aiplatform.log_metrics({'accuracy': 0.9})
aiplatform.start_run(run='run-2')
aiplatform.log_params({'learning_rate': 0.2})
aiplatform.log_metrics({'accuracy': 0.95})
aiplatform.get_experiment_df()
Will result in the following DataFrame:
experiment_name | run_name | param.learning_rate | metric.accuracy
exp-1 | run-1 | 0.1 | 0.9
exp-1 | run-2 | 0.2 | 0.95
Parameters
| Name | Description |
| --- | --- |
| experiment | Name of the Experiment to filter results. If not set, returns results of the current active experiment. |
| include_time_series | Optional. Whether or not to include time series metrics in the DataFrame. Default is True. Setting it to False will significantly improve execution time and reduce quota-contributing calls. Recommended when time series metrics are not needed or the number of runs in the Experiment is large. For time series metrics, consider querying a specific run using get_time_series_data_frame. |
get_experiment_model
get_experiment_model(
artifact_id: str,
*,
metadata_store_id: str = "default",
project: typing.Optional[str] = None,
location: typing.Optional[str] = None,
credentials: typing.Optional[google.auth.credentials.Credentials] = None
) -> google.cloud.aiplatform.metadata.schema.google.artifact_schema.ExperimentModel
Retrieves an existing ExperimentModel artifact given an artifact id.
Parameters
| Name | Description |
| --- | --- |
| artifact_id | Required. An artifact id of the ExperimentModel artifact. |
| metadata_store_id | Optional. MetadataStore to retrieve the Artifact from. If not set, metadata_store_id is set to "default". If artifact_id is a fully-qualified resource name, its metadata_store_id overrides this one. |
| project | Optional. Project to retrieve the artifact from. If not set, project set in aiplatform.init will be used. |
| location | Optional. Location to retrieve the Artifact from. If not set, location set in aiplatform.init will be used. |
| credentials | Optional. Custom credentials to use to retrieve this Artifact. Overrides credentials set in aiplatform.init. |
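Example usage (a sketch; the artifact id is a placeholder and assumes the artifact was previously saved with save_model or log_model; load_model is the ExperimentModel helper used here to rebuild the framework object):
experiment_model = aiplatform.get_experiment_model("my-sklearn-model")
model = experiment_model.load_model()  # returns the original framework model object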
get_pipeline_df
get_pipeline_df(pipeline: str) -> pd.DataFrame
Returns a Pandas DataFrame of the parameters and metrics associated with one pipeline.
Parameter
| Name | Description |
| --- | --- |
| pipeline | Name of the Pipeline to filter results. |
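Example usage (a sketch; "my-pipeline" is a placeholder pipeline name):
df = aiplatform.get_pipeline_df(pipeline="my-pipeline")
print(df.head())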
init
init(
*,
project: typing.Optional[str] = None,
location: typing.Optional[str] = None,
experiment: typing.Optional[str] = None,
experiment_description: typing.Optional[str] = None,
experiment_tensorboard: typing.Optional[
typing.Union[
str,
google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard,
bool,
]
] = None,
staging_bucket: typing.Optional[str] = None,
credentials: typing.Optional[google.auth.credentials.Credentials] = None,
encryption_spec_key_name: typing.Optional[str] = None,
network: typing.Optional[str] = None,
service_account: typing.Optional[str] = None,
api_endpoint: typing.Optional[str] = None,
api_transport: typing.Optional[str] = None,
request_metadata: typing.Optional[typing.Sequence[typing.Tuple[str, str]]] = None
)
Updates common initialization parameters with provided options.
Parameters
| Name | Description |
| --- | --- |
| project | The default project to use when making API calls. |
| location | The default location to use when making API calls. If not set, defaults to us-central1. |
| experiment | Optional. The experiment name. |
| experiment_description | Optional. The description of the experiment. |
| experiment_tensorboard | Optional. The Vertex AI TensorBoard instance, Tensorboard resource name, or Tensorboard resource ID to use as a backing Tensorboard for the provided experiment. Example tensorboard resource name format: "projects/123/locations/us-central1/tensorboards/456" If |
| staging_bucket | The default staging bucket to use to stage artifacts when making API calls. In the form gs://... |
| credentials | The default custom credentials to use when making API calls. If not provided, credentials will be ascertained from the environment. |
| encryption_spec_key_name | Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: |
| network | Optional. The full name of the Compute Engine network to which jobs and resources should be peered. E.g. "projects/12345/global/networks/myVPC". Private services access must already be configured for the network. If specified, all eligible jobs and resources created will be peered with this VPC. |
| service_account | Optional. The service account used to launch jobs and deploy models. Jobs that use service_account: BatchPredictionJob, CustomJob, PipelineJob, HyperparameterTuningJob, CustomTrainingJob, CustomPythonPackageTrainingJob, CustomContainerTrainingJob, ModelEvaluationJob. |
| api_endpoint | Optional. The desired API endpoint, e.g., us-central1-aiplatform.googleapis.com |
| api_transport | Optional. The transport method, which is either 'grpc' or 'rest'. NOTE: "rest" transport functionality is currently in a beta state (preview). |
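Example usage (the project, bucket, and experiment names are placeholders):
aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
    experiment="my-experiment",
    experiment_description="example experiment",
)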
log
log(
*,
pipeline_job: typing.Optional[
google.cloud.aiplatform.pipeline_jobs.PipelineJob
] = None
)
Log Vertex AI Resources to the current experiment run.
aiplatform.start_run('my-run')
my_job = aiplatform.PipelineJob(...)
my_job.submit()
aiplatform.log(pipeline_job=my_job)
Parameter
| Name | Description |
| --- | --- |
| pipeline_job | Optional. Vertex PipelineJob to associate to this Experiment Run. |
log_classification_metrics
log_classification_metrics(
*,
labels: typing.Optional[typing.List[str]] = None,
matrix: typing.Optional[typing.List[typing.List[int]]] = None,
fpr: typing.Optional[typing.List[float]] = None,
tpr: typing.Optional[typing.List[float]] = None,
threshold: typing.Optional[typing.List[float]] = None,
display_name: typing.Optional[str] = None
) -> (
google.cloud.aiplatform.metadata.schema.google.artifact_schema.ClassificationMetrics
)
Create an artifact for classification metrics and log it to the ExperimentRun. Currently supports confusion matrix and ROC curve.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
classification_metrics = my_run.log_classification_metrics(
display_name='my-classification-metrics',
labels=['cat', 'dog'],
matrix=[[9, 1], [1, 9]],
fpr=[0.1, 0.5, 0.9],
tpr=[0.1, 0.7, 0.9],
threshold=[0.9, 0.5, 0.1],
)
Parameters
| Name | Description |
| --- | --- |
| labels | Optional. List of label names for the confusion matrix. Must be set if 'matrix' is set. |
| matrix | Optional. Values for the confusion matrix. Must be set if 'labels' is set. |
| fpr | Optional. List of false positive rates for the ROC curve. Must be set if 'tpr' or 'threshold' is set. |
| tpr | Optional. List of true positive rates for the ROC curve. Must be set if 'fpr' or 'threshold' is set. |
| threshold | Optional. List of thresholds for the ROC curve. Must be set if 'fpr' or 'tpr' is set. |
| display_name | Optional. The user-defined name for the classification metric artifact. |
log_metrics
log_metrics(metrics: typing.Dict[str, typing.Union[float, int, str]])
Log single or multiple Metrics with specified key and value pairs.
Metrics with the same key will be overwritten.
aiplatform.start_run('my-run', experiment='my-experiment')
aiplatform.log_metrics({'accuracy': 0.9, 'recall': 0.8})
Parameter
| Name | Description |
| --- | --- |
| metrics | Required. Metrics key/value pairs. |
log_model
log_model(
model: typing.Union[sklearn.base.BaseEstimator, xgb.Booster, tf.Module],
artifact_id: typing.Optional[str] = None,
*,
uri: typing.Optional[str] = None,
input_example: typing.Union[list, dict, pd.DataFrame, np.ndarray] = None,
display_name: typing.Optional[str] = None,
metadata_store_id: typing.Optional[str] = "default",
project: typing.Optional[str] = None,
location: typing.Optional[str] = None,
credentials: typing.Optional[google.auth.credentials.Credentials] = None
) -> google.cloud.aiplatform.metadata.schema.google.artifact_schema.ExperimentModel
Saves an ML model into an MLMD artifact and logs it to this ExperimentRun.
Supported model frameworks: sklearn, xgboost, tensorflow.
Example usage:
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(X, y)
aiplatform.init(
    project="my-project",
    location="my-location",
    staging_bucket="gs://my-bucket",
    experiment="my-exp"
)
with aiplatform.start_run("my-run"):
    aiplatform.log_model(model, "my-sklearn-model")
Parameters
| Name | Description |
| --- | --- |
| model | Required. A machine learning model. |
| artifact_id | Optional. The resource id of the artifact. This id must be globally unique in a metadataStore. It may be up to 63 characters, and valid characters are |
| uri | Optional. A gcs directory to save the model file. If not provided, |
| input_example | Optional. An example of a valid model input. Will be stored as a yaml file in the gcs uri. Accepts list, dict, pd.DataFrame, and np.ndarray. The value inside a list must be a scalar or list. The value inside a dict must be a scalar, list, or np.ndarray. |
| display_name | Optional. The display name of the artifact. |
| metadata_store_id | Optional. The <metadata_store_id> portion of the resource name with the format: projects/123/locations/us-central1/metadataStores/<metadata_store_id>/artifacts/<resource_id> If not provided, the MetadataStore's ID will be set to "default". |
| project | Optional. Project used to create this Artifact. Overrides project set in aiplatform.init. |
| location | Optional. Location used to create this Artifact. Overrides location set in aiplatform.init. |
| credentials | Optional. Custom credentials used to create this Artifact. Overrides credentials set in aiplatform.init. |
log_params
log_params(params: typing.Dict[str, typing.Union[float, int, str]])
Log single or multiple parameters with specified key and value pairs.
Parameters with the same key will be overwritten.
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})
Parameter
| Name | Description |
| --- | --- |
| params | Required. Parameter key/value pairs. |
log_time_series_metrics
log_time_series_metrics(
metrics: typing.Dict[str, float],
step: typing.Optional[int] = None,
wall_time: typing.Optional[google.protobuf.timestamp_pb2.Timestamp] = None,
)
Logs time series metrics to this Experiment Run.
Requires that the experiment or experiment run has a backing Vertex TensorBoard resource.
my_tensorboard = aiplatform.Tensorboard(...)
aiplatform.init(experiment='my-experiment', experiment_tensorboard=my_tensorboard)
aiplatform.start_run('my-run')

# increments steps as logged
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss})

# explicitly log steps
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss}, step=i)
Parameters
| Name | Description |
| --- | --- |
| metrics | Required. Dictionary where keys are metric names and values are metric values. |
| step | Optional. Step index of this data point within the run. If not provided, the latest step amongst all time series metrics already logged will be used. |
| wall_time | Optional. Wall clock timestamp when this data point is generated by the end user. If not provided, this will be generated based on the value from time.time(). |
save_model
save_model(
model: typing.Union[sklearn.base.BaseEstimator, xgb.Booster, tf.Module],
artifact_id: typing.Optional[str] = None,
*,
uri: typing.Optional[str] = None,
input_example: typing.Union[list, dict, pd.DataFrame, np.ndarray] = None,
tf_save_model_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None,
display_name: typing.Optional[str] = None,
metadata_store_id: typing.Optional[str] = "default",
project: typing.Optional[str] = None,
location: typing.Optional[str] = None,
credentials: typing.Optional[google.auth.credentials.Credentials] = None
) -> google.cloud.aiplatform.metadata.schema.google.artifact_schema.ExperimentModel
Saves an ML model into an MLMD artifact.
Supported model frameworks: sklearn, xgboost, tensorflow.
Example usage:
from sklearn.linear_model import LinearRegression

aiplatform.init(project="my-project", location="my-location", staging_bucket="gs://my-bucket")
model = LinearRegression()
model.fit(X, y)
aiplatform.save_model(model, "my-sklearn-model")
Parameters
| Name | Description |
| --- | --- |
| model | Required. A machine learning model. |
| artifact_id | Optional. The resource id of the artifact. This id must be globally unique in a metadataStore. It may be up to 63 characters, and valid characters are |
| uri | Optional. A gcs directory to save the model file. If not provided, |
| input_example | Optional. An example of a valid model input. Will be stored as a yaml file in the gcs uri. Accepts list, dict, pd.DataFrame, and np.ndarray. The value inside a list must be a scalar or list. The value inside a dict must be a scalar, list, or np.ndarray. |
| tf_save_model_kwargs | Optional. A dict of kwargs to pass to the model's save method. If saving a tf module, this will be passed to the "tf.saved_model.save" method. If saving a keras model, this will be passed to the "tf.keras.Model.save" method. |
| display_name | Optional. The display name of the artifact. |
| metadata_store_id | Optional. The <metadata_store_id> portion of the resource name with the format: projects/123/locations/us-central1/metadataStores/<metadata_store_id>/artifacts/<resource_id> If not provided, the MetadataStore's ID will be set to "default". |
| project | Optional. Project used to create this Artifact. Overrides project set in aiplatform.init. |
| location | Optional. Location used to create this Artifact. Overrides location set in aiplatform.init. |
| credentials | Optional. Custom credentials used to create this Artifact. Overrides credentials set in aiplatform.init. |
start_execution
start_execution(
*,
schema_title: typing.Optional[str] = None,
display_name: typing.Optional[str] = None,
resource_id: typing.Optional[str] = None,
metadata: typing.Optional[typing.Dict[str, typing.Any]] = None,
schema_version: typing.Optional[str] = None,
description: typing.Optional[str] = None,
resume: bool = False,
project: typing.Optional[str] = None,
location: typing.Optional[str] = None,
credentials: typing.Optional[google.auth.credentials.Credentials] = None
) -> google.cloud.aiplatform.metadata.execution.Execution
Creates and starts a new Metadata Execution, or resumes a previously created Execution.
To start a new execution:
with aiplatform.start_execution(schema_title='system.ContainerExecution', display_name='trainer') as exc:
    exc.assign_input_artifacts([my_artifact])
    model = aiplatform.Artifact.create(uri='gs://my-uri', schema_title='system.Model')
    exc.assign_output_artifacts([model])
To continue a previously created execution:
with aiplatform.start_execution(resource_id='my-exc', resume=True) as exc:
    ...
Parameters
| Name | Description |
| --- | --- |
| schema_title | Optional. schema_title identifies the schema title used by the Execution. Required if starting a new Execution. |
| resource_id | Optional. The <resource_id> portion of the Execution name, with the format projects/123/locations/us-central1/metadataStores/<metadata_store_id>/executions/<resource_id>. This is globally unique in a metadataStore. |
| display_name | Optional. The user-defined name of the Execution. |
| schema_version | Optional. schema_version specifies the version used by the Execution. If not set, defaults to the latest version. |
| metadata | Optional. Contains the metadata information that will be stored in the Execution. |
| description | Optional. Describes the purpose of the Execution to be created. |
| metadata_store_id | Optional. The <metadata_store_id> portion of the resource name with the format: projects/123/locations/us-central1/metadataStores/<metadata_store_id>/artifacts/<resource_id> If not provided, the MetadataStore's ID will be set to "default". |
| project | Optional. Project used to create this Execution. Overrides project set in aiplatform.init. |
| location | Optional. Location used to create this Execution. Overrides location set in aiplatform.init. |
| credentials | Optional. Custom credentials used to create this Execution. Overrides credentials set in aiplatform.init. |
start_run
start_run(
run: str,
*,
tensorboard: typing.Optional[
typing.Union[
google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str
]
] = None,
resume=False
) -> google.cloud.aiplatform.metadata.experiment_run_resource.ExperimentRun
Starts a run in the current session.
aiplatform.init(experiment='my-experiment')
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate':0.1})
Use as a context manager. The run will be ended on context exit:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run') as my_run:
    my_run.log_params({'learning_rate': 0.1})
Resume a previously started run:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run', resume=True) as my_run:
    my_run.log_params({'learning_rate': 0.1})
Parameters
| Name | Description |
| --- | --- |
| run | Required. Name of the run to assign the current session to. |
| resume | Whether to resume this run. If False, a new run will be created. |
start_upload_tb_log
start_upload_tb_log(
tensorboard_experiment_name: str,
logdir: str,
tensorboard_id: typing.Optional[str] = None,
project: typing.Optional[str] = None,
location: typing.Optional[str] = None,
experiment_display_name: typing.Optional[str] = None,
run_name_prefix: typing.Optional[str] = None,
description: typing.Optional[str] = None,
allowed_plugins: typing.Optional[typing.FrozenSet[str]] = None,
)
Continues to listen for new data in the logdir and uploads when it appears.
Note that after calling start_upload_tb_log() your thread will be kept alive even if an exception is thrown. To ensure the thread gets shut down, put any code after start_upload_tb_log() and before end_upload_tb_log() in a try statement, and call end_upload_tb_log() in finally.
Sample usage:
aiplatform.init(location='us-central1', project='my-project')
aiplatform.start_upload_tb_log(
    tensorboard_id='123',
    tensorboard_experiment_name='my-experiment',
    logdir='my-logdir',
)
try:
    ...  # your code here
finally:
    aiplatform.end_upload_tb_log()
Parameters
| Name | Description |
| --- | --- |
| tensorboard_experiment_name | Required. Name of this tensorboard experiment. Unique to the given projects/{project}/locations/{location}/tensorboards/{tensorboard_id}. |
| logdir | Required. Path of the log directory to upload. |
| tensorboard_id | Optional. TensorBoard ID. If not set, tensorboard_id in aiplatform.init will be used. |
| project | Optional. Project the TensorBoard is in. If not set, project set in aiplatform.init will be used. |
| location | Optional. Location the TensorBoard is in. If not set, location set in aiplatform.init will be used. |
| experiment_display_name | Optional. The display name of the experiment. |
| run_name_prefix | Optional. If present, all runs created by this invocation will have their name prefixed by this value. |
| description | Optional. String description to assign to the experiment. |
| allowed_plugins | Optional. List of additional allowed plugin names. |
upload_tb_log
upload_tb_log(
tensorboard_experiment_name: str,
logdir: str,
tensorboard_id: typing.Optional[str] = None,
project: typing.Optional[str] = None,
location: typing.Optional[str] = None,
experiment_display_name: typing.Optional[str] = None,
run_name_prefix: typing.Optional[str] = None,
description: typing.Optional[str] = None,
verbosity: typing.Optional[int] = 1,
allowed_plugins: typing.Optional[typing.FrozenSet[str]] = None,
)
Uploads only the existing data in the logdir and then returns immediately.
Sample usage:
aiplatform.init(location='us-central1', project='my-project')
aiplatform.upload_tb_log(
    tensorboard_id='123',
    tensorboard_experiment_name='my-experiment',
    logdir='my-logdir',
)
Parameters
| Name | Description |
| --- | --- |
| tensorboard_experiment_name | Required. Name of this tensorboard experiment. Unique to the given projects/{project}/locations/{location}/tensorboards/{tensorboard_id}. |
| logdir | Required. The location of the TensorBoard logs, residing either in the local file system or Cloud Storage. |
| tensorboard_id | Optional. TensorBoard ID. If not set, tensorboard_id in aiplatform.init will be used. |
| project | Optional. Project the TensorBoard is in. If not set, project set in aiplatform.init will be used. |
| location | Optional. Location the TensorBoard is in. If not set, location set in aiplatform.init will be used. |
| experiment_display_name | Optional. The display name of the experiment. |
| run_name_prefix | Optional. If present, all runs created by this invocation will have their name prefixed by this value. |
| description | Optional. String description to assign to the experiment. |
| verbosity | Optional. Level of verbosity, an integer. Supported values: 0 - no upload statistics are printed; 1 - print upload statistics while uploading data (default). |
| allowed_plugins | Optional. List of additional allowed plugin names. |