
Data mapping overview

This page provides a general overview of data mapping and the different ways to perform data mapping in Application Integration.

Enterprise data may reside in many different sources and formats, which makes it difficult to combine into a unified data model or data pipeline. Data mapping is the process of extracting and standardizing data from multiple sources to establish a relationship between those sources and the related target data fields in the destination. Some examples of using data mapping in an integration include:

  • Extracting fields from a complex data structure such as a JSON payload.
  • Mapping source data fields to the target schema.
  • Transforming data by applying transform functions.
  • Generating output values and storing or using them as integration variables.
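For instance, extracting fields from a JSON structure and mapping them to a target schema can be expressed in Jsonnet, the template language used by the Data Transformer Script task described later on this page. The following is a minimal sketch; the order payload and field names are hypothetical:

```jsonnet
// Hypothetical source payload; in a real integration this would come
// from an integration variable.
local order = {
  customer: { name: 'Ada', country: 'nz' },
  items: [{ sku: 'A-1', qty: 2 }, { sku: 'B-7', qty: 1 }],
};

{
  // Extract nested fields into a flat target schema.
  customer_name: order.customer.name,
  // Transform: standardize the country code to upper case.
  country: std.asciiUpper(order.customer.country),
  // Generate an output value: total quantity across items.
  total_qty: std.foldl(function(acc, item) acc + item.qty, order.items, 0),
}
```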

Application Integration lets you perform data mapping using the following tasks:

Data Transformer Script task

The Data Transformer Script task is a template-engine-based data mapping feature in Application Integration. It uses Google's Jsonnet configuration language to create and edit Jsonnet templates that define the mapping relationships between the specified source and target integration variables in your integration.

Using the Data Transformer Script editor and the supported Data Transformer functions, you can write custom data mapping logic, perform variable assignments, and add or modify integration variables.
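For example, a template might read a source integration variable, reshape it, and emit new output values. The sketch below uses plain Jsonnet with `std.extVar`; the variable names are hypothetical, and the exact variable-access helpers available in your templates are described in the Data Transformer functions reference:

```jsonnet
// `user_json` is a hypothetical string input variable holding JSON.
local user = std.parseJson(std.extVar('user_json'));

{
  // Each top-level field of this object becomes an output value.
  full_name: user.first_name + ' ' + user.last_name,
  // Nested standard-library calls: split on '@' and keep the domain.
  email_domain: std.splitLimit(user.email, '@', 1)[1],
}
```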

The following image shows the sample layout of the Data Transformer Script editor:

image showing data-transformer script editor

For information about how to add and configure the Data Transformer Script task, see Data Transformer Script task.

Data Mapping task

The Data Mapping task is a no-code/low-code feature in Application Integration that provides a visual mapping canvas (the Data Mapping editor) to perform data assignments and mappings in your integrations. You can also use the supported mapping functions to further transform your data into meaningful variables and formats, and make the results accessible to the other tasks or triggers in your integration.

With the Data Mapping task, you can:

  • Use the Data Mapping editor to visualize and define variable mapping for single or nested variables.
  • Transform variables from one data type to another. The Data Mapping task lets you apply multiple mapping functions (including nested functions) to transform the variable data.

For information about how to add and configure the Data Mapping task, see Data Mapping task.

Data Mapping editor and layout

The Data Mapping editor provides a visual canvas containing the following integration elements:

  • Variables pane: Displays the different types of variables that are available to the integration:
    • Inputs. Input variables of the integration.
    • Outputs. Output variables of the integration.
    • Local Variables. Variables that exist within the scope of the integration.

    If no variables are listed, click Add + to configure a new variable.

    Click (Expand) to expand each variable and view the available subfields of that variable. To search for any variable or its subfield from the available variable list, click (Search variables).

    For more information about variables in Application Integration, see Using variables in Application Integration.

  • Input column: Displays input mapping rows containing the source of the data mapping input. The source can be a literal value, a base function, or an input variable, optionally with mapping functions applied. Click Variable or Value in an input mapping row to add a source.
  • Output column: Displays the output mapping rows containing the related target variables for the respective input mapping rows. Target variables can be used for mapping in subsequent input rows. To assign an output variable, you can either create a new variable or drag an existing output variable from the Variables pane.

The following image shows the sample layout of the Data Mapping editor:

image showing data mapping editor

Mapping functions

The Data Mapping task provides various predefined mapping functions to transform and standardize the mapping data in your integration. A mapping function can have one or more input parameters, where each parameter can itself be a literal value, a variable, or a base function with mapping functions applied. You can apply multiple mapping functions to a single input source, forming a mapping transform expression.

The resulting data type of an input source is determined by the return type of the transform expression defined in the respective data mapping input row. The Data Mapping editor displays a validation error under the respective input row if the return type of the transform expression doesn't match the data type of the corresponding output mapping target variable.
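For example, assigning an integer expression to a string target variable fails validation until the return type is converted. The Data Mapping editor exposes its own conversion mapping functions for this; the Jsonnet standard-library equivalents sketched below are for illustration only:

```jsonnet
local count = 42;

{
  // Assigning `count` directly to a string target would be a type
  // mismatch; converting the return type makes the mapping valid.
  count_as_string: std.toString(count),
  // The reverse direction: parse a string into an integer target.
  parsed: std.parseInt('17'),
}
```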

Transform expression

A transform expression is a combination of several mapping functions that are chained together in series or in a nested structure. Using the Data Mapping editor, you can easily insert, modify, or remove a function or a function parameter in a defined transform expression. If the defined transform expression is invalid, the Data Mapping editor displays a validation error next to the function or function parameter that is causing the error. To view the complete error message, hold the pointer over the validation error icon.
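In Jsonnet terms, chaining in series and nesting look like the following. The function choices here are illustrative standard-library calls, not a specific set of the editor's mapping functions:

```jsonnet
local raw = '  hello world  ';

{
  // In-series chain: trim the string, then upper-case the result.
  chained: std.asciiUpper(std.stripChars(raw, ' ')),
  // Nested: a function parameter that is itself a function result.
  joined: std.join('-', std.split(std.stripChars(raw, ' '), ' ')),
}
```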

The following image shows a sample mapping with validation errors in the Data Mapping editor:

image showing data mapping validation error

For more information about how to configure a mapping in a Data Mapping task, see Add a mapping.

For information about the supported pre-defined mapping functions in Application Integration, see Supported data types and mapping functions.

Mapping order

Mappings specified in the Data Mapping editor are run in sequence from top to bottom. For example, in the preceding image, Num1 is mapped to Num1ToInt in the first row, making Num1ToInt available for mapping in the subsequent rows.
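Conceptually, earlier rows define values that later rows can reference, as in this Jsonnet sketch (variable names follow the Num1 example above; the string input value is hypothetical):

```jsonnet
local Num1 = '42';  // hypothetical string input variable

// Row 1: Num1 is converted and mapped to Num1ToInt.
local Num1ToInt = std.parseInt(Num1);

{
  Num1ToInt: Num1ToInt,
  // A subsequent row can reference Num1ToInt because it was mapped above.
  Num1Doubled: Num1ToInt * 2,
}
```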

What's next