Import files from Google Cloud Storage or Google Drive into a RagCorpus.
Endpoint
POST https://aiplatform.googleapis.com/v1beta1/{parent}/ragFiles:import
Path parameters
parent
string
Required. The name of the RagCorpus resource into which to import files. Format: projects/{project}/locations/{location}/ragCorpora/{ragCorpus}
Request body
The request body contains data with the following structure:
importRagFilesConfig
object (ImportRagFilesConfig)
Required. The config for the RagFiles to be synced and imported into the RagCorpus. VertexRagDataService.ImportRagFiles.
Response body
If successful, the response body contains an instance of Operation.
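For illustration, a minimal request sketch. The project ID (my-project), location (us-central1), corpus ID (1234), and bucket path are hypothetical placeholders, and the uris, chunkSize, and chunkOverlap field names are assumed from the GcsSource and RagFileChunkingConfig message definitions, which are not reproduced on this page:

POST https://aiplatform.googleapis.com/v1beta1/projects/my-project/locations/us-central1/ragCorpora/1234/ragFiles:import

{
  "importRagFilesConfig": {
    // Import everything under a (hypothetical) Cloud Storage directory.
    "gcsSource": {
      "uris": ["gs://my-bucket/my_directory"]
    },
    // Chunking parameters; field names assumed from RagFileChunkingConfig.
    "ragFileChunkingConfig": {
      "chunkSize": 512,
      "chunkOverlap": 100
    },
    // Stay at or below the project's embedding-model quota.
    "maxEmbeddingRequestsPerMin": 1000
  }
}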
ImportRagFilesConfig
Config for importing RagFiles.
ragFileChunkingConfig
object (RagFileChunkingConfig)
Specifies the size and overlap of chunks after importing RagFiles.
ragFileTransformationConfig
object (RagFileTransformationConfig)
Specifies the transformation config for RagFiles.
ragFileParsingConfig
object (RagFileParsingConfig)
Optional. Specifies the parsing config for RagFiles. RAG will use the default parser if this field is not set.
maxEmbeddingRequestsPerMin
integer
Optional. The max number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and is not shared across other import jobs. Consult the Quotas page for your project to set an appropriate value. If unspecified, a default value of 1,000 QPM is used.
import_source
Union type
import_source can be only one of the following:
gcsSource
object (GcsSource)
Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats:
- gs://bucketName/my_directory/objectName/my_file.txt
- gs://bucketName/my_directory
googleDriveSource
object (GoogleDriveSource)
Google Drive location. Supports importing individual files as well as Google Drive folders.
slackSource
object (SlackSource)
Slack channels with their corresponding access tokens.
jiraSource
object (JiraSource)
Jira queries with their corresponding authentication.
partial_failure_sink
Union type
partial_failure_sink can be only one of the following:
partialFailureGcsSink
(deprecated)
object (GcsDestination)
The Cloud Storage path to write partial failures to. Deprecated. Prefer to use importResultGcsSink.
partialFailureBigquerySink
(deprecated)
object (BigQueryDestination)
The BigQuery destination to write partial failures to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table. Deprecated. Prefer to use import_result_bq_sink.
JSON representation
{
  "ragFileChunkingConfig": { object (RagFileChunkingConfig) },
  "ragFileTransformationConfig": { object (RagFileTransformationConfig) },
  "ragFileParsingConfig": { object (RagFileParsingConfig) },
  "maxEmbeddingRequestsPerMin": integer,

  // import_source
  "gcsSource": { object (GcsSource) },
  "googleDriveSource": { object (GoogleDriveSource) },
  "slackSource": { object (SlackSource) },
  "jiraSource": { object (JiraSource) },

  // partial_failure_sink
  "partialFailureGcsSink": { object (GcsDestination) },
  "partialFailureBigquerySink": { object (BigQueryDestination) }
}
BigQueryDestination
The BigQuery location for the output content.
outputUri
string
Required. BigQuery URI to a project or table, up to 2000 characters long.
When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist.
Accepted forms:
- BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
JSON representation
{
  "outputUri": string
}
RagFileParsingConfig
Specifies the parsing config for RagFiles.
useAdvancedPdfParsing
(deprecated)
boolean
Whether to use advanced PDF parsing.
parser
Union type
parser can be only one of the following:
advancedParser
object (AdvancedParser)
The Advanced Parser to use for RagFiles.
layoutParser
object (LayoutParser)
The Layout Parser to use for RagFiles.
llmParser
object (LlmParser)
The LLM Parser to use for RagFiles.
JSON representation
{
  "useAdvancedPdfParsing": boolean,

  // parser
  "advancedParser": { object (AdvancedParser) },
  "layoutParser": { object (LayoutParser) },
  "llmParser": { object (LlmParser) }
}
AdvancedParser
Specifies the advanced parsing for RagFiles.
useAdvancedPdfParsing
boolean
Whether to use advanced PDF parsing.
JSON representation
{
  "useAdvancedPdfParsing": boolean
}
LayoutParser
Document AI Layout Parser config.
processorName
string
The full resource name of a Document AI processor or processor version. The processor must have type LAYOUT_PARSER_PROCESSOR. If specified, the additionalConfig.parse_as_scanned_pdf field must be false. Format:
* projects/{projectId}/locations/{location}/processors/{processorId}
* projects/{projectId}/locations/{location}/processors/{processorId}/processorVersions/{processor_version_id}
maxParsingRequestsPerMin
integer
The maximum number of requests the job is allowed to make to the Document AI processor per minute. Consult https://cloud.google.com/document-ai/quotas and the Quota page for your project to set an appropriate value. If unspecified, a default value of 120 QPM is used.
JSON representation
{
  "processorName": string,
  "maxParsingRequestsPerMin": integer
}
LlmParser
Specifies the LLM parsing for RagFiles.
modelName
string
The name of an LLM model used for parsing. Format: gemini-1.5-pro-002
maxParsingRequestsPerMin
integer
The maximum number of requests the job is allowed to make to the LLM model per minute. Consult https://cloud.google.com/vertex-ai/generative-ai/docs/quotas and your document size to set an appropriate value. If unspecified, a default value of 5,000 QPM is used.
customParsingPrompt
string
The prompt to use for parsing. If not specified, a default prompt will be used.
JSON representation
{
  "modelName": string,
  "maxParsingRequestsPerMin": integer,
  "customParsingPrompt": string
}