LlamaIndex on Vertex AI for RAG API

Retrieval-augmented generation (RAG) gives large language models (LLMs) access to external knowledge sources, such as documents and databases. By using RAG, LLMs can generate more accurate and informative responses based on the data that the external knowledge sources contain.

For more information about how RAG works, see LlamaIndex on Vertex AI for RAG API. For information about the supported generation models, see Supported generation models. For information about the supported embedding models, see Supported embedding models.

Example syntax

Syntax to create a RAG corpus.

curl

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora \
  -d '{
  "display_name" : "...",
  "description": "...",
  "rag_embedding_model_config": {
    "vertex_prediction_endpoint": {
      "endpoint": "..."
    }
  }
}'

Python

corpus = rag.create_corpus(display_name=..., description=...)
print(corpus)

Parameters list

See examples for implementation details.

Corpus management

For information about a RAG corpus, see Index management.

Create RagCorpus

Parameters

display_name

Optional: string

The display name of the RagCorpus.

description

Optional: string

The description of the RagCorpus.

rag_embedding_model_config.vertex_prediction_endpoint.endpoint

Optional: string

The embedding model to use for the RagCorpus.

List RagCorpora

Parameters

page_size

Optional: int

The standard list page size.

page_token

Optional: string

The standard list page token. Typically obtained from [ListRagCorporaResponse.next_page_token][] of the previous [VertexRagDataService.ListRagCorpora][] call.

Get RagCorpus

Parameters

rag_corpus_id

string

The ID of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus_id}

Delete RagCorpus

Parameters

rag_corpus_id

string

The ID of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus_id}

File management

For information about a RAG file, see File management.

Upload RagFile

Parameters

rag_corpus_id

string

The ID of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus_id}

display_name

Optional: string

The display name of the RagFile.

description

Optional: string

The description of the RagFile.

Import RagFile

Parameters

rag_corpus_id

string

The ID of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus_id}

gcs_source.uris

list

A list of Cloud Storage URIs that point to the files to import.

google_drive_source.resource_ids.resource_id

Optional: string

The ID of the Google Drive resource.

google_drive_source.resource_ids.resource_type

Optional: string

The type of the Google Drive resource.

rag_file_chunking_config.chunk_size

Optional: int

Number of tokens each chunk should have.

rag_file_chunking_config.chunk_overlap

Optional: int

Number of tokens overlap between two chunks.

max_embedding_requests_per_min

Optional: int

Number that represents a limit to restrict the rate at which LlamaIndex calls the embedding model during the indexing process. Default limit is 1000. For more information about rate limits, see LlamaIndex on Vertex AI quotas.

List RagFiles

Parameters

rag_corpus_id

string

The ID of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus_id}

page_size

Optional: int

The standard list page size.

page_token

Optional: string

The standard list page token. Typically obtained from [ListRagFilesResponse.next_page_token][] of the previous [VertexRagDataService.ListRagFiles][] call.

Get RagFile

Parameters

rag_file_id

string

The ID of the RagFile resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus_id}/ragFiles/{rag_file_id}

Delete RagFile

Parameters

rag_file_id

string

The ID of the RagFile resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus_id}/ragFiles/{rag_file_id}

Retrieval and prediction

Retrieval

Parameters

similarity_top_k

Controls the maximum number of contexts that are retrieved.

vector_distance_threshold

Only contexts with a distance smaller than the threshold are considered.
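
As a quick reference, the following sketch shows where these two parameters appear in a retrieveContexts request; the placeholders match the REST example later on this page:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:retrieveContexts" \
-d '{
  "vertex_rag_store": {
    "rag_resources": { "rag_corpus": "RAG_CORPUS_RESOURCE" },
    "vector_distance_threshold": VECTOR_DISTANCE_THRESHOLD
  },
  "query": {
    "text": "TEXT",
    "similarity_top_k": SIMILARITY_TOP_K
  }
}'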

Prediction

Parameters

model_id

string

The LLM used for content generation.

rag_corpora

string

The name of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}

text

string (list)

The text sent to the LLM for content generation. Only a single text entry is supported.

vector_distance_threshold

Optional: double

Only contexts with a vector distance smaller than the threshold are returned.

similarity_top_k

Optional: int

The number of top contexts to retrieve.

Examples

The following examples demonstrate corpus management, file management, and retrieval and prediction.

Create a RAG corpus

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • CORPUS_DISPLAY_NAME: The display name of the RagCorpus.
  • CORPUS_DESCRIPTION: The description of the RagCorpus.
  • RAG_EMBEDDING_MODEL_CONFIG_ENDPOINT: The endpoint of the embedding model used by the RagCorpus.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora

Request JSON body:

{
  "display_name" : "CORPUS_DISPLAY_NAME",
  "description": "CORPUS_DESCRIPTION",
  "rag_embedding_model_config": {
    "vertex_prediction_endpoint": {
      "endpoint": "RAG_EMBEDDING_MODEL_CONFIG_ENDPOINT"
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora" | Select-Object -Expand Content
You should receive a successful status code (2xx).

The following example demonstrates how to create a RAG corpus by using the REST API.

    // Either your first party publisher model or fine-tuned endpoint
    // Example: projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/textembedding-gecko@003
    // or
    // Example: projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/12345
    ENDPOINT_NAME=${RAG_EMBEDDING_MODEL_CONFIG_ENDPOINT}

    // Corpus display name
    // Such as "my_test_corpus"
    CORPUS_DISPLAY_NAME=YOUR_CORPUS_DISPLAY_NAME

    // CreateRagCorpus
    // Input: ENDPOINT, PROJECT_ID, CORPUS_DISPLAY_NAME
    // Output: CreateRagCorpusOperationMetadata
    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora \
    -d '{
          "display_name" : '\""${CORPUS_DISPLAY_NAME}"\"',
          "rag_embedding_model_config" : {
                  "vertex_prediction_endpoint": {
                        "endpoint": '\""${ENDPOINT_NAME}"\"'
                  }
          }
      }'

    // Poll the operation status.
    // The last component of the RagCorpus "name" field is the server-generated
    // rag_corpus_id, for example 7454583283205013504 in:
    // projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora/7454583283205013504.
    OPERATION_ID=OPERATION_ID
    poll_op_wait ${OPERATION_ID}
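
The poll_op_wait helper isn't defined in this reference. As a minimal sketch, assuming OPERATION_NAME holds the full "name" field returned in the CreateRagCorpus response, you can poll the long-running operation directly:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${ENDPOINT}/v1beta1/${OPERATION_NAME}"

The operation is complete when the response includes "done": true.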

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# display_name = "test_corpus"
# description = "Corpus Description"

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

corpus = rag.create_corpus(display_name=display_name, description=description)
print(corpus)

List RAG corpora

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • PAGE_SIZE: The standard list page size. You may adjust the number of RagCorpora to return per page by updating the page_size parameter.
  • PAGE_TOKEN: The standard list page token. Obtained typically using ListRagCorporaResponse.next_page_token of the previous VertexRagDataService.ListRagCorpora call.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora?page_size=PAGE_SIZE&page_token=PAGE_TOKEN

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora?page_size=PAGE_SIZE&page_token=PAGE_TOKEN"

PowerShell

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora?page_size=PAGE_SIZE&page_token=PAGE_TOKEN" | Select-Object -Expand Content
You should receive a successful status code (2xx) and a list of RagCorpora under the given PROJECT_ID.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

corpora = rag.list_corpora()
print(corpora)

Get a RAG corpus

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RagCorpus resource.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID"

PowerShell

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID" | Select-Object -Expand Content
A successful response returns the RagCorpus resource.

The following example uses the get and list commands to demonstrate how a RagCorpus uses the rag_embedding_model_config field, which points to the embedding model that you chose.

// Server-generated rag_corpus_id in CreateRagCorpus
RAG_CORPUS_ID=RAG_CORPUS_ID

// GetRagCorpus
// Input: ENDPOINT, PROJECT_ID, RAG_CORPUS_ID
// Output: RagCorpus
curl -X GET \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora/${RAG_CORPUS_ID}

// ListRagCorpora
curl -sS -X GET \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora"

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# corpus_name = "projects/{project_id}/locations/us-central1/ragCorpora/{rag_corpus_id}"

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

corpus = rag.get_corpus(name=corpus_name)
print(corpus)

Delete a RAG corpus

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RagCorpus resource.

HTTP method and URL:

DELETE https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID

To send your request, choose one of these options:

curl

Execute the following command:

curl -X DELETE \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID"

PowerShell

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method DELETE `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID" | Select-Object -Expand Content
A successful response returns the DeleteOperationMetadata.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# corpus_name = "projects/{project_id}/locations/us-central1/ragCorpora/{rag_corpus_id}"

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

rag.delete_corpus(name=corpus_name)
print(f"Corpus {corpus_name} deleted.")

Upload a RAG file

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RagCorpus resource.
  • INPUT_FILE: The path of a local file.
  • FILE_DISPLAY_NAME: The display name of the RagFile.
  • RAG_FILE_DESCRIPTION: The description of the RagFile.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:upload

Request JSON body:

{
 "rag_file": {
  "display_name": "FILE_DISPLAY_NAME",
  "description": "RAG_FILE_DESCRIPTION"
 }
}

To send your request, choose one of these options:

curl

Save the request body in a file named INPUT_FILE, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @INPUT_FILE \
"https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:upload"

PowerShell

Save the request body in a file named INPUT_FILE, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile INPUT_FILE `
-Uri "https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:upload" | Select-Object -Expand Content
A successful response returns the RagFile resource. The last component of the RagFile.name field is the server-generated rag_file_id.
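
For example, a minimal shell sketch (using a hypothetical name value) that extracts the ID:

# Hypothetical RagFile name copied from the upload response.
RAG_FILE_NAME="projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/1234567890"
# The rag_file_id is the last path component of the name.
RAG_FILE_ID="${RAG_FILE_NAME##*/}"
echo "${RAG_FILE_ID}"  # prints 1234567890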

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# corpus_name = "projects/{project_id}/locations/us-central1/ragCorpora/{rag_corpus_id}"
# path = "path/to/local/file.txt"
# display_name = "file_display_name"
# description = "file description"

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

rag_file = rag.upload_file(
    corpus_name=corpus_name,
    path=path,
    display_name=display_name,
    description=description,
)
print(rag_file)

Import RAG files

Files and folders can be imported from Google Drive or Cloud Storage.

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RagCorpus resource.
  • GCS_URIS: A list of Cloud Storage locations. Example: gs://my-bucket1, gs://my-bucket2.
  • DRIVE_RESOURCE_ID: The ID of the Google Drive resource, which is the last path component of the resource URL. Examples:
    • For https://drive.google.com/file/d/ABCDE, the resource ID is ABCDE.
    • For https://drive.google.com/corp/drive/u/0/folders/ABCDEFG, the resource ID is ABCDEFG.
  • DRIVE_RESOURCE_TYPE: Type of the Drive resource. Options:
    • RESOURCE_TYPE_FILE - File
    • RESOURCE_TYPE_FOLDER - Folder
  • CHUNK_SIZE: Optional: Number of tokens each chunk should have.
  • CHUNK_OVERLAP: Optional: Number of tokens overlap between chunks.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import

Request JSON body:

{
  "import_rag_files_config": {
    "gcs_source": {
      "uris": GCS_URIS
    },
    "google_drive_source": {
      "resource_ids": {
        "resource_id": "DRIVE_RESOURCE_ID",
        "resource_type": "DRIVE_RESOURCE_TYPE"
      }
    },
    "rag_file_chunking_config": {
      "chunk_size": CHUNK_SIZE,
      "chunk_overlap": CHUNK_OVERLAP
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import" | Select-Object -Expand Content
A successful response returns the ImportRagFilesOperationMetadata resource.

The following sample demonstrates how to import a file from Cloud Storage. Use the max_embedding_requests_per_min control field to limit the rate at which LlamaIndex calls the embedding model during the ImportRagFiles indexing process. The field has a default value of 1000 calls per minute.

// Cloud Storage bucket/file location.
// Such as "gs://rag-e2e-test/"
GCS_URIS=YOUR_GCS_LOCATION

// Enter the QPM rate to limit RAG's access to your embedding model
// Example: 1000
EMBEDDING_MODEL_QPM_RATE=MAX_EMBEDDING_REQUESTS_PER_MIN_LIMIT

// ImportRagFiles
// Import a single Cloud Storage file or all files in a Cloud Storage bucket.
// Input: ENDPOINT, PROJECT_ID, RAG_CORPUS_ID, GCS_URIS
// Output: ImportRagFilesOperationMetadata
// Use ListRagFiles to find the server-generated rag_file_id.
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora/${RAG_CORPUS_ID}/ragFiles:import \
-d '{
  "import_rag_files_config": {
    "gcs_source": {
      "uris": '\""${GCS_URIS}"\"'
    },
    "rag_file_chunking_config": {
      "chunk_size": 512
    },
    "max_embedding_requests_per_min": '"${EMBEDDING_MODEL_QPM_RATE}"'
  }
}'

// Poll the operation status.
// The response contains the number of files imported.
OPERATION_ID=OPERATION_ID
poll_op_wait ${OPERATION_ID}

The following sample demonstrates how to import a file from Drive. Use the max_embedding_requests_per_min control field to limit the rate at which LlamaIndex calls the embedding model during the ImportRagFiles indexing process. The field has a default value of 1000 calls per minute.

// Google Drive folder location.
FOLDER_RESOURCE_ID=YOUR_GOOGLE_DRIVE_FOLDER_RESOURCE_ID

// Enter the QPM rate to limit RAG's access to your embedding model
// Example: 1000
EMBEDDING_MODEL_QPM_RATE=MAX_EMBEDDING_REQUESTS_PER_MIN_LIMIT

// ImportRagFiles
// Import all files in a Google Drive folder.
// Input: ENDPOINT, PROJECT_ID, RAG_CORPUS_ID, FOLDER_RESOURCE_ID
// Output: ImportRagFilesOperationMetadata
// Use ListRagFiles to find the server-generated rag_file_id.
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora/${RAG_CORPUS_ID}/ragFiles:import \
-d '{
  "import_rag_files_config": {
    "google_drive_source": {
      "resource_ids": {
        "resource_id": '\""${FOLDER_RESOURCE_ID}"\"',
        "resource_type": "RESOURCE_TYPE_FOLDER"
      }
    },
    "max_embedding_requests_per_min": '"${EMBEDDING_MODEL_QPM_RATE}"'
  }
}'

// Poll the operation status.
// The response contains the number of files imported.
OPERATION_ID=OPERATION_ID
poll_op_wait ${OPERATION_ID}

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# corpus_name = "projects/{project_id}/locations/us-central1/ragCorpora/{rag_corpus_id}"
# paths = ["https://drive.google.com/file/123", "gs://my_bucket/my_files_dir"]  # Supports Google Cloud Storage and Google Drive Links

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

response = rag.import_files(
    corpus_name=corpus_name,
    paths=paths,
    chunk_size=512,  # Optional
    chunk_overlap=100,  # Optional
)
print(f"Imported {response.imported_rag_files_count} files.")

Get a RAG file

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RagCorpus resource.
  • RAG_FILE_ID: The ID of the RagFile resource.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID"

PowerShell

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID" | Select-Object -Expand Content
A successful response returns the RagFile resource.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# file_name = "projects/{project_id}/locations/us-central1/ragCorpora/{rag_corpus_id}/ragFiles/{rag_file_id}"

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

rag_file = rag.get_file(name=file_name)
print(rag_file)

List RAG files

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RagCorpus resource.
  • PAGE_SIZE: The standard list page size. You may adjust the number of RagFiles to return per page by updating the page_size parameter.
  • PAGE_TOKEN: The standard list page token. Obtained typically using ListRagFilesResponse.next_page_token of the previous VertexRagDataService.ListRagFiles call.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles?page_size=PAGE_SIZE&page_token=PAGE_TOKEN

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles?page_size=PAGE_SIZE&page_token=PAGE_TOKEN"

PowerShell

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles?page_size=PAGE_SIZE&page_token=PAGE_TOKEN" | Select-Object -Expand Content
You should receive a successful status code (2xx) along with a list of RagFiles under the given RAG_CORPUS_ID.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# corpus_name = "projects/{project_id}/locations/us-central1/ragCorpora/{rag_corpus_id}"

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

files = rag.list_files(corpus_name=corpus_name)
for file in files:
    print(file)

Delete a RAG file

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RagCorpus resource.
  • RAG_FILE_ID: The ID of the RagFile resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}/ragFiles/{rag_file_id}.

HTTP method and URL:

DELETE https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID

To send your request, choose one of these options:

curl

Execute the following command:

curl -X DELETE \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID"

PowerShell

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method DELETE `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID" | Select-Object -Expand Content
A successful response returns the DeleteOperationMetadata resource.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# file_name = "projects/{project_id}/locations/us-central1/ragCorpora/{rag_corpus_id}/ragFiles/{rag_file_id}"

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

rag.delete_file(name=file_name)
print(f"File {file_name} deleted.")

Retrieval query

When a user asks a question or provides a prompt, the retrieval component in RAG searches through its knowledge base to find information that is relevant to the query.

REST

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request.
  • PROJECT_ID: Your project ID.
  • RAG_CORPUS_RESOURCE: The name of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}.
  • VECTOR_DISTANCE_THRESHOLD: Only contexts with a vector distance smaller than the threshold are returned.
  • TEXT: The query text to get relevant contexts.
  • SIMILARITY_TOP_K: The number of top contexts to retrieve.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:retrieveContexts

Request JSON body:

{
  "vertex_rag_store": {
    "rag_resources": {
      "rag_corpus": "RAG_CORPUS_RESOURCE"
    },
    "vector_distance_threshold": VECTOR_DISTANCE_THRESHOLD
  },
  "query": {
    "text": "TEXT",
    "similarity_top_k": SIMILARITY_TOP_K
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:retrieveContexts"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:retrieveContexts" | Select-Object -Expand Content
You should receive a successful status code (2xx) and a list of related RagFiles.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# rag_corpus_id = "9183965540115283968" # Only one corpus is supported at this time

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

response = rag.retrieval_query(
    rag_resources=[
        rag.RagResource(
            rag_corpus=rag_corpus_id,
            # Supply IDs from `rag.list_files()`.
            # rag_file_ids=["rag-file-1", "rag-file-2", ...],
        )
    ],
    text="What is RAG and why it is helpful?",
    similarity_top_k=10,  # Optional
    vector_distance_threshold=0.5,  # Optional
)
print(response)

Prediction

A prediction request specifies the LLM method that generates content grounded in the retrieved contexts.

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • MODEL_ID: LLM model for content generation. Example: gemini-1.5-pro-001
  • GENERATION_METHOD: LLM method for content generation. Options: generateContent, streamGenerateContent
  • INPUT_PROMPT: The text sent to the LLM for content generation. Use a prompt that is relevant to the uploaded RagFiles.
  • RAG_CORPUS_RESOURCE: The name of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}.
  • SIMILARITY_TOP_K: Optional: The number of top contexts to retrieve.
  • VECTOR_DISTANCE_THRESHOLD: Optional: Only contexts with a vector distance smaller than the threshold are returned.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATION_METHOD

Request JSON body:

{
 "contents": {
  "role": "user",
  "parts": {
    "text": "INPUT_PROMPT"
  }
 },
 "tools": {
  "retrieval": {
   "disable_attribution": false,
   "vertex_rag_store": {
    "rag_resources": {
      "rag_corpus": "RAG_CORPUS_RESOURCE",
    },
    "similarity_top_k": SIMILARITY_TOP_K,
    "vector_distance_threshold": VECTOR_DISTANCE_THRESHOLD
   }
  }
 }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATION_METHOD"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATION_METHOD" | Select-Object -Expand Content
A successful response returns the generated content with citations.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.


from vertexai.preview import rag
from vertexai.preview.generative_models import GenerativeModel, Tool
import vertexai

# TODO(developer): Update and un-comment below lines
# project_id = "PROJECT_ID"
# rag_corpus_id = "9183965540115283968" # Only one corpus is supported at this time

# Initialize Vertex AI API once per session
vertexai.init(project=project_id, location="us-central1")

rag_retrieval_tool = Tool.from_retrieval(
    retrieval=rag.Retrieval(
        source=rag.VertexRagStore(
            rag_resources=[
                rag.RagResource(
                    rag_corpus=rag_corpus_id,  # Currently only 1 corpus is allowed.
                    # Supply IDs from `rag.list_files()`.
                    # rag_file_ids=["rag-file-1", "rag-file-2", ...],
                )
            ],
            similarity_top_k=3,  # Optional
            vector_distance_threshold=0.5,  # Optional
        ),
    )
)

rag_model = GenerativeModel(
    model_name="gemini-1.5-flash-001", tools=[rag_retrieval_tool]
)
response = rag_model.generate_content("Why is the sky blue?")
print(response.text)

What's Next