This code sample shows how to use the multimodal embeddings model to generate embeddings for text and image inputs.
Explore further
For detailed documentation that includes this code sample, see the following:
- Get multimodal embeddings
- Method: projects.locations.endpoints.predict
- Method: projects.locations.publishers.models.predict
- Multimodal embeddings API
Code sample
Go
Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
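The following is a minimal sketch of such a request using the Vertex AI Go client library, assuming the multimodalembedding@001 publisher model; the project ID, location, text, and gs:// image URI are placeholders you must replace:

```go
// Minimal sketch: request text and image embeddings from the
// multimodalembedding@001 model via the Vertex AI prediction API.
// Project, location, and image URI below are placeholder values.
package main

import (
	"context"
	"fmt"
	"log"

	aiplatform "cloud.google.com/go/aiplatform/apiv1beta1"
	"cloud.google.com/go/aiplatform/apiv1beta1/aiplatformpb"
	"google.golang.org/api/option"
	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	ctx := context.Background()
	project := "your-project-id" // placeholder
	location := "us-central1"

	// The prediction client must target the regional API endpoint.
	apiEndpoint := fmt.Sprintf("%s-aiplatform.googleapis.com:443", location)
	client, err := aiplatform.NewPredictionClient(ctx, option.WithEndpoint(apiEndpoint))
	if err != nil {
		log.Fatalf("failed to construct API client: %v", err)
	}
	defer client.Close()

	endpoint := fmt.Sprintf(
		"projects/%s/locations/%s/publishers/google/models/multimodalembedding@001",
		project, location)

	// A single instance can carry both an image (by Cloud Storage URI)
	// and a text string; the model embeds each into the same vector space.
	instance, err := structpb.NewValue(map[string]any{
		"image": map[string]any{
			"gcsUri": "gs://your-bucket/your-image.png", // placeholder
		},
		"text": "a photo of a landmark",
	})
	if err != nil {
		log.Fatalf("failed to construct instance: %v", err)
	}

	resp, err := client.Predict(ctx, &aiplatformpb.PredictRequest{
		Endpoint:  endpoint,
		Instances: []*structpb.Value{instance},
	})
	if err != nil {
		log.Fatalf("predict request failed: %v", err)
	}

	// Each prediction holds "textEmbedding" and "imageEmbedding" lists of floats.
	for _, prediction := range resp.Predictions {
		fields := prediction.GetStructValue().Fields
		fmt.Println("text embedding length:",
			len(fields["textEmbedding"].GetListValue().Values))
		fmt.Println("image embedding length:",
			len(fields["imageEmbedding"].GetListValue().Values))
	}
}
```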
Python
Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
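The following is a minimal sketch using the Vertex AI SDK for Python; the project ID, location, prompt text, and gs:// image URI are placeholders you must replace:

```python
# Minimal sketch: embed an image and a text string with the
# multimodalembedding@001 model. Project, location, and image URI
# below are placeholder values.
import vertexai
from vertexai.vision_models import Image, MultiModalEmbeddingModel

vertexai.init(project="your-project-id", location="us-central1")

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("gs://your-bucket/your-image.png")

# One call can embed both inputs; the resulting text and image
# embeddings share a vector space, so they can be compared directly.
embeddings = model.get_embeddings(
    image=image,
    contextual_text="a photo of a landmark",
)
print(f"Image embedding length: {len(embeddings.image_embedding)}")
print(f"Text embedding length: {len(embeddings.text_embedding)}")
```

Because the two embeddings live in the same space, a cosine similarity between them (or against embeddings stored in a vector database) is the usual next step for cross-modal search.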
What's next
To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.