Wrangle supports a set of variables, which can be used to programmatically reference aspects of the dataset or its source data. These metadata references allow you to create individual transformations of much greater scope and flexibility.
Tip: Some transformation steps cause metadata about the original data source to be lost. Where possible, apply these references early in your recipe. Additional information is available below.
Tip: You can use the $sourcerownumber reference to create a primary key that identifies the source row for any row in your file-based datasets.
This reference retrieves the fully qualified path for a row of data sourced from a file. As you work with a dataset in the application, it can be helpful to know the file from which each row of data originated. Using the $filepath reference, you can generate columns of data early in your recipe to retain this useful information.
The following transforms might make file path information invalid or otherwise unavailable:
NOTE: This reference returns null values for values from relational database sources.
NOTE: This reference returns the file path. It does not include the scheme or authority information from the URI, so protocol identifiers such as http:// are not available in the output.
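To illustrate which part of a URI the reference corresponds to, the following sketch uses Python's standard `urlparse` on a hypothetical source URI; $filepath output resembles only the path component, without the scheme or authority:

```python
from urllib.parse import urlparse

# Hypothetical source URI for illustration only.
uri = "http://example.com/landing/2023/sales.csv"
parts = urlparse(uri)

print(parts.scheme)  # "http" -- not included in $filepath output
print(parts.path)    # "/landing/2023/sales.csv" -- what $filepath resembles
```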
Supported File Formats
Base file formats:

| Format | Supported | Notes |
| --- | --- | --- |
| Excel files | Yes | Full path to the source location of the Excel file. |
| Compressed files (Gzip, Bzip2, etc.) | Limited | Support for single-file archives only. Full path is returned only if the archive contains a single file. |
| Folders | Yes | Full path to the file is returned. You can modify the output column to return the folder path only. |
Additional file formats:
Other source types:
- The $filepath reference produces a null value for any samples collected before the feature was enabled, since the information was not available. To see that lineage information, you must switch to the initial sample or collect a new sample.
- After this feature is enabled, non-initial samples collected in the future are slightly smaller, due to the space consumed by the filepath lineage information that is tracked as part of the sample. You may see a change in the number of rows in your sample.
Example 1 - Generate filename column
The following example generates a column containing the filepath information for each row in the dataset:
derive type: single value: $filepath as: 'src_filepath'
You can use the following additional steps to extract the filename from the above column:
derive type: single value: RIGHTFIND(src_filepath, '\/', false, 0) as: 'rightfind_src_filepath'
derive type: single value: SUBSTRING(src_filepath, rightfind_src_filepath + 1, LEN(src_filepath)) as: 'filename'
drop col: rightfind_src_filepath action: Drop
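The recipe logic above can be sketched in plain Python: find the position of the last path separator (the RIGHTFIND step), then take everything after it (the SUBSTRING step). The path value here is a hypothetical example of what the src_filepath column might contain:

```python
# Hypothetical src_filepath value for illustration.
src_filepath = "/landing/2023/sales.csv"

rightfind_src_filepath = src_filepath.rfind("/")      # index of the last "/"
filename = src_filepath[rightfind_src_filepath + 1:]  # everything after it

print(filename)  # "sales.csv"
```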
Example 2 - Source row number across dataset with parameters
When you import a dataset with parameters, the $sourcerownumber value returns a continuously incrementing row number across all files in the dataset, effectively creating a primary key. Using the following example, you can create a new column to capture the source row number within each individual file.
Consider example data spread across three files that are imported using a single dataset with parameters. Across the combined files, row lineage is hard to determine.
Gather the filepath and source row number information into two new columns:
derive type: single value: $filepath as: 'filepath'
derive type: single value: $sourcerownumber as: 'source_row_number'
Create a new column called start_of_file_offset, which contains the offset value of the row from the first row in the file. In the first statement, mark the value of $sourcerownumber for the first row of each file, leaving the other rows for the file empty:
derive type: multiple value: IF(PREV($filepath, 1) == $filepath, NULL(), $sourcerownumber) order: $sourcerownumber as: 'start_of_file_offset'
Create a new column that contains the values from the previous column, with the empty rows filled in with the last non-empty value, which is the $sourcerownumber for the first row of the current file:
derive type: multiple value: FILL(start_of_file_offset, -1, 0) order: $sourcerownumber as: 'filled_start_file_offset'
Create a new column computing the file-based source row number as the difference between the raw source row number and the start of file offset value computed in the previous step. This generated value is the source row number for a row within its own file:
derive type: single value: ($sourcerownumber - filled_start_file_offset) + 1 as: 'source_row_number_per_file'
Delete the columns used for the intermediate calculations:
drop col: filled_start_file_offset,start_of_file_offset action: Drop
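The sequence of steps above can be sketched in plain Python, assuming rows are ordered by $sourcerownumber. Each row is a (filepath, sourcerownumber) pair; the filenames and row numbers are hypothetical:

```python
# Hypothetical rows, ordered by $sourcerownumber, spanning three files.
rows = [
    ("f1.csv", 1), ("f1.csv", 2), ("f1.csv", 3),
    ("f2.csv", 4), ("f2.csv", 5),
    ("f3.csv", 6), ("f3.csv", 7),
]

per_file = []
filled_start_file_offset = None
prev_filepath = None
for filepath, sourcerownumber in rows:
    # Mark the first row of each file (the PREV($filepath, 1) comparison) ...
    if filepath != prev_filepath:
        filled_start_file_offset = sourcerownumber  # ... and fill it down (FILL)
    # Difference from the start-of-file offset, plus one.
    per_file.append((sourcerownumber - filled_start_file_offset) + 1)
    prev_filepath = filepath

print(per_file)  # [1, 2, 3, 1, 2, 1, 2]
```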
The $sourcerownumber variable is a reference to the row number at which the current row appeared in the original data source.
Tip: If the source row information is still available, you can hover over the left side of a row in the data grid to see the source row number in the original source data.
- The following transforms might make original row information invalid or otherwise unavailable. In these cases, the reference returns null values:
When working with datasets sourced from Avro or Parquet files, the $sourcerownumber reference is not supported.
The following example generates a new column containing the source row number for each row in the dataset, if available:
derive type: single value: $sourcerownumber as: 'src_rownumber'
If you have already used the $filepath reference, as in the previous example, you can combine these two columns to create a unique key for the source of each row:
derive type: single value: MERGE([src_filepath,src_rownumber],'-') as: 'src_key'
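The MERGE step simply joins the two column values with a "-" delimiter. A minimal sketch, with hypothetical column values:

```python
# Hypothetical values from the filepath and row-number columns.
filepath_value = "/landing/2023/sales.csv"
rownumber_value = 42

# MERGE([...],'-') joins the values with the "-" delimiter.
src_key = f"{filepath_value}-{rownumber_value}"
print(src_key)  # "/landing/2023/sales.csv-42"
```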
The $col variable is a reference to the column that is currently being evaluated. This variable references the state of the current dataset, rather than the original source.
NOTE: This reference works only for the edit with formula transformation (set).
In the following example, all columns in the dataset that are of String type are converted to uppercase:
set col: * value: IF(ISMISMATCHED($col, ['String']), $col, UPPER($col))
In the above, the wildcard applies the edit to each column. Each column value is tested to see whether it is mismatched with the String data type. If mismatched, the value in the column ($col) is written back unchanged. Otherwise, the value is converted to uppercase (UPPER($col)).
The $col variable is useful for multi-column transformations.
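The wildcard edit can be sketched in plain Python, modeling the dataset as a dict of columns. Non-string values stand in for mismatched values and are left as-is; everything else is uppercased, mirroring IF(ISMISMATCHED($col, ['String']), $col, UPPER($col)):

```python
# Hypothetical dataset as a dict of columns; 17 stands in for a value
# that is mismatched against the String type.
dataset = {
    "city":  ["austin", "boston"],
    "notes": ["ok", 17],
}

for col, values in dataset.items():  # the col: * wildcard visits each column
    # Uppercase matched (string) values; pass mismatched values through.
    dataset[col] = [v.upper() if isinstance(v, str) else v for v in values]

print(dataset)  # {'city': ['AUSTIN', 'BOSTON'], 'notes': ['OK', 17]}
```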