REST Resource: routines

Resource: Routine

A user-defined function or a stored procedure.

JSON representation
{
  "etag": string,
  "routineReference": {
    object (RoutineReference)
  },
  "routineType": enum (RoutineType),
  "creationTime": string,
  "lastModifiedTime": string,
  "language": enum (Language),
  "arguments": [
    {
      object (Argument)
    }
  ],
  "returnType": {
    object (StandardSqlDataType)
  },
  "returnTableType": {
    object (StandardSqlTableType)
  },
  "importedLibraries": [
    string
  ],
  "definitionBody": string,
  "description": string,
  "determinismLevel": enum (DeterminismLevel),
  "remoteFunctionOptions": {
    object (RemoteFunctionOptions)
  },
  "sparkOptions": {
    object (SparkOptions)
  }
}
Fields
etag

string

Output only. A hash of this resource.

routineReference

object (RoutineReference)

Required. Reference describing the ID of this routine.

routineType

enum (RoutineType)

Required. The type of routine.

creationTime

string (int64 format)

Output only. The time when this routine was created, in milliseconds since the epoch.

lastModifiedTime

string (int64 format)

Output only. The time when this routine was last modified, in milliseconds since the epoch.

language

enum (Language)

Optional. Defaults to "SQL" if the remoteFunctionOptions field is absent; otherwise, this field is not set.

arguments[]

object (Argument)

Optional. The arguments of this routine, if any.

returnType

object (StandardSqlDataType)

Optional if language = "SQL"; required otherwise. Cannot be set if routineType = "TABLE_VALUED_FUNCTION".

If absent, the return type is inferred from definitionBody at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified return type at query time.

For example, for the functions created with the following statements:

  • CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);

  • CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));

  • CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));

The returnType is {typeKind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time).

Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y);

Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
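
As a hedged illustration of how this surfaces through the API (a sketch assuming a recent google-cloud-bigquery Python client; the project and dataset IDs are hypothetical), a routine without an explicit returnType reports no return type until query time:

from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical IDs for the example functions above.
increment = client.get_routine("my-project.my_dataset.Increment")
decrement = client.get_routine("my-project.my_dataset.Decrement")

print(increment.return_type)  # None: inferred from definitionBody at query time
print(decrement.return_type)  # StandardSqlDataType with type_kind FLOAT64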

returnTableType

object (StandardSqlTableType)

Optional. Can be set only if routineType = "TABLE_VALUED_FUNCTION".

If absent, the return table type is inferred from definitionBody at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.

importedLibraries[]

string

Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.

definitionBody

string

Required. The body of the routine.

For functions, this is the expression in the AS clause.

If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:

CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))

The definitionBody is concat(x, "\n", y) (the \n is not replaced with a line break).

If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:

CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'

The definitionBody is

return "\n";\n

Note that both \n sequences are replaced with line breaks.
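
To make the escaping concrete, here is a minimal sketch that creates the JavaScript function above (assuming the google-cloud-bigquery Python client; the routine ID is hypothetical). Python evaluates \n in the string literal just as the SQL parser does, so the stored definitionBody contains real line breaks:

from google.cloud import bigquery

client = bigquery.Client()

routine = bigquery.Routine(
    "my-project.my_dataset.f",  # hypothetical routine ID
    type_="SCALAR_FUNCTION",
    language="JAVASCRIPT",
    body='return "\n";\n',  # both \n evaluate to real line breaks
    return_type=bigquery.StandardSqlDataType(
        type_kind=bigquery.StandardSqlTypeNames.STRING
    ),
)
routine = client.create_routine(routine)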

description

string

Optional. The description of the routine, if defined.

determinismLevel

enum (DeterminismLevel)

Optional. The determinism level of the JavaScript UDF, if defined.

remoteFunctionOptions

object (RemoteFunctionOptions)

Optional. Options for a remote user-defined function.

sparkOptions

object (SparkOptions)

Optional. Options for a user-defined Spark routine.
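
Putting the fields above together, the following is a minimal sketch that creates the Add function from the returnType example (assuming a recent google-cloud-bigquery Python client; the project and dataset IDs are hypothetical). The inline comments show which resource field each parameter maps to:

from google.cloud import bigquery

client = bigquery.Client()

float64 = bigquery.StandardSqlDataType(
    type_kind=bigquery.StandardSqlTypeNames.FLOAT64
)

routine = bigquery.Routine(
    "my-project.my_dataset.Add",  # routineReference
    type_="SCALAR_FUNCTION",      # routineType
    language="SQL",               # language
    body="x + y",                 # definitionBody
    return_type=float64,          # returnType
    arguments=[                   # arguments[]
        bigquery.RoutineArgument(name="x", data_type=float64),
        bigquery.RoutineArgument(name="y", data_type=float64),
    ],
)
routine = client.create_routine(routine)  # calls routines.insert
print(routine.etag, routine.created)      # output-only fields set by the server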

RoutineReference

The ID path of a routine.

JSON representation
{
  "projectId": string,
  "datasetId": string,
  "routineId": string
}
Fields
projectId

string

Required. The ID of the project containing this routine.

datasetId

string

Required. The ID of the dataset containing this routine.

routineId

string

Required. The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
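
Client libraries commonly flatten this triple into a single dotted string. A brief sketch, assuming the google-cloud-bigquery Python client and hypothetical IDs:

from google.cloud import bigquery

# Parses "projectId.datasetId.routineId" into its three components.
ref = bigquery.RoutineReference.from_string("my-project.my_dataset.my_routine")
print(ref.project, ref.dataset_id, ref.routine_id)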

RoutineType

The fine-grained type of the routine.

Enums
ROUTINE_TYPE_UNSPECIFIED
SCALAR_FUNCTION Non-builtin permanent scalar function.
PROCEDURE Stored procedure.
TABLE_VALUED_FUNCTION Non-builtin permanent TVF.

Language

The language of the routine.

Enums
LANGUAGE_UNSPECIFIED
SQL SQL language.
JAVASCRIPT JavaScript language.
PYTHON Python language.

Argument

Input/output argument of a function or a stored procedure.

JSON representation
{
  "name": string,
  "argumentKind": enum (ArgumentKind),
  "mode": enum (Mode),
  "dataType": {
    object (StandardSqlDataType)
  }
}
Fields
name

string

Optional. The name of this argument. Can be absent for the function return argument.

argumentKind

enum (ArgumentKind)

Optional. Defaults to FIXED_TYPE.

mode

enum (Mode)

Optional. Specifies whether the argument is input or output. Can be set for procedures only.

dataType

object (StandardSqlDataType)

Required unless argumentKind = ANY_TYPE.

ArgumentKind

Enums
ARGUMENT_KIND_UNSPECIFIED
FIXED_TYPE The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
ANY_TYPE The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE

Mode

The input/output mode of the argument.

Enums
MODE_UNSPECIFIED
IN The argument is input-only.
OUT The argument is output-only.
INOUT The argument is both an input and an output.
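
To illustrate argumentKind and mode together, here is a sketch assuming the google-cloud-bigquery Python client (the argument names are hypothetical):

from google.cloud import bigquery

# An OUT argument; mode can be set for procedures only.
out_arg = bigquery.RoutineArgument(
    name="total",
    mode="OUT",
    data_type=bigquery.StandardSqlDataType(
        type_kind=bigquery.StandardSqlTypeNames.INT64
    ),
)

# An ANY_TYPE argument; dataType is omitted because argumentKind = ANY_TYPE.
any_arg = bigquery.RoutineArgument(name="value", kind="ANY_TYPE")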

StandardSqlTableType

A table type.

JSON representation
{
  "columns": [
    {
      object (StandardSqlField)
    }
  ]
}
Fields
columns[]

object (StandardSqlField)

The columns in this table type.

DeterminismLevel

JavaScript UDF determinism levels.

If all JavaScript UDFs are DETERMINISTIC, the query result is potentially cacheable (see below). If any JavaScript UDF is NOT_DETERMINISTIC, the query result is not cacheable.

Even if a JavaScript UDF is deterministic, many other factors can prevent usage of cached query results. Example factors include, but are not limited to: DDL/DML statements, non-deterministic SQL function calls, and updates to referenced tables, views, UDFs, or imported JavaScript libraries.

SQL UDFs cannot have determinism specified. Their determinism is automatically determined.

Enums
DETERMINISM_LEVEL_UNSPECIFIED The determinism of the UDF is unspecified.
DETERMINISTIC The UDF is deterministic, meaning that two function calls with the same inputs always produce the same result, even across two query runs.
NOT_DETERMINISTIC The UDF is not deterministic.
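
For example, a JavaScript UDF can be marked DETERMINISTIC at creation time so that queries referencing it remain eligible for cached results. A hedged sketch with the google-cloud-bigquery Python client (IDs are hypothetical):

from google.cloud import bigquery

client = bigquery.Client()

float64 = bigquery.StandardSqlDataType(
    type_kind=bigquery.StandardSqlTypeNames.FLOAT64
)

routine = bigquery.Routine(
    "my-project.my_dataset.add_one",
    type_="SCALAR_FUNCTION",
    language="JAVASCRIPT",
    body="return x + 1;",
    arguments=[bigquery.RoutineArgument(name="x", data_type=float64)],
    return_type=float64,
    determinism_level=bigquery.DeterminismLevel.DETERMINISTIC,
)
routine = client.create_routine(routine)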

RemoteFunctionOptions

Options for a remote user-defined function.

JSON representation
{
  "endpoint": string,
  "connection": string,
  "userDefinedContext": {
    string: string,
    ...
  },
  "maxBatchingRows": string
}
Fields
endpoint

string

Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add

connection

string

Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"

userDefinedContext

map (key: string, value: string)

User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

maxBatchingRows

string (int64 format)

Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
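
A sketch of attaching these options, assuming a recent google-cloud-bigquery Python client release that exposes RemoteFunctionOptions (the endpoint, connection, and routine IDs are hypothetical):

from google.cloud import bigquery

client = bigquery.Client()

int64 = bigquery.StandardSqlDataType(
    type_kind=bigquery.StandardSqlTypeNames.INT64
)

routine = bigquery.Routine(
    "my-project.my_dataset.remote_add",
    type_="SCALAR_FUNCTION",
    # language is left unset: it is not set when remoteFunctionOptions is present.
    arguments=[
        bigquery.RoutineArgument(name="x", data_type=int64),
        bigquery.RoutineArgument(name="y", data_type=int64),
    ],
    return_type=int64,
    remote_function_options=bigquery.RemoteFunctionOptions(
        endpoint="https://us-east1-my_gcf_project.cloudfunctions.net/remote_add",
        connection="projects/my-project/locations/us-east1/connections/my-conn",
        user_defined_context={"mode": "add"},  # total key/value bytes < 8KB
        max_batching_rows=50,  # absent or 0 lets BigQuery pick the batch size
    ),
)
routine = client.create_routine(routine)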

SparkOptions

Options for a user-defined Spark routine.

JSON representation
{
  "connection": string,
  "runtimeVersion": string,
  "containerImage": string,
  "properties": {
    string: string,
    ...
  },
  "mainFileUri": string,
  "pyFileUris": [
    string
  ],
  "jarUris": [
    string
  ],
  "fileUris": [
    string
  ],
  "archiveUris": [
    string
  ]
}
Fields
connection

string

Fully qualified name of the user-provided Spark connection object. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"

runtimeVersion

string

Runtime version. If not specified, the default runtime version is used.

containerImage

string

Custom container image for the runtime environment.

properties

map (key: string, value: string)

Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

mainFileUri

string

The main file/jar URI of the Spark application. Exactly one of the definitionBody field and the mainFileUri field must be set for Python.

pyFileUris[]

string

Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information, see Apache Spark.

jarUris[]

string

JARs to include on the driver and executor CLASSPATH. For more information, see Apache Spark.

fileUris[]

string

Files to be placed in the working directory of each executor. For more information, see Apache Spark.

archiveUris[]

string

Archive files to be extracted into the working directory of each executor. For more information, see Apache Spark.
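
Because client-library coverage of Spark options varies, this sketch calls routines.insert on the REST surface directly (assuming google-auth; the dataset, routine, and connection IDs are hypothetical). Per the mainFileUri note above, exactly one of definitionBody and sparkOptions.mainFileUri is set for Python:

import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/bigquery"]
)
session = AuthorizedSession(credentials)

routine = {
    "routineReference": {
        "projectId": project,
        "datasetId": "my_dataset",
        "routineId": "my_spark_proc",
    },
    "routineType": "PROCEDURE",
    "language": "PYTHON",
    "definitionBody": "print('Hello from Spark')",
    "sparkOptions": {
        # Omitting runtimeVersion selects the default runtime version.
        "connection": f"projects/{project}/locations/us/connections/my-spark-conn",
    },
}

resp = session.post(
    f"https://bigquery.googleapis.com/bigquery/v2/projects/{project}"
    "/datasets/my_dataset/routines",
    json=routine,
)
resp.raise_for_status()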

Methods

delete

Deletes the routine specified by routineId from the dataset.

get

Gets the specified routine resource by routine ID.

insert

Creates a new routine in the dataset.

list

Lists all routines in the specified dataset.

update

Updates information in an existing routine.
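
For orientation, these methods map onto the google-cloud-bigquery Python client roughly as follows (a hedged sketch; the routine and dataset IDs are hypothetical):

from google.cloud import bigquery

client = bigquery.Client()
routine_id = "my-project.my_dataset.my_routine"

routine = client.get_routine(routine_id)  # get

for r in client.list_routines("my-project.my_dataset"):  # list
    print(r.reference)

routine.body = "x + y + 1"
# update replaces the whole resource, so list every field to preserve.
routine = client.update_routine(
    routine, ["body", "arguments", "language", "type_", "return_type"]
)

client.delete_routine(routine_id)  # delete
# insert corresponds to client.create_routine(routine), shown earlier.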