
This Snap takes an expression, evaluates it, and writes the result to the provided target path. If an expression fails to evaluate, then the handling of the error can be selected in the "Views" tab.

Expression functions are documented in the Expression Language Overview.

Structural Transformations

The following structural transformations from the Structure Snap are supported in the Mapper Snap:

  • Move - A move is equivalent to a mapping without pass-through: the source value is read from the input data and placed into the output data, and because pass-through is off, the rest of the input data is not copied to the output. Note that the source value is treated as an expression in the Mapper, whereas it was a JSONPath in the Structure Snap; a jsonPath() function has been added to the expression language that can execute a JSONPath against a given value. If pass-through is enabled, you will need to delete the old value yourself.
  • Delete - Write a JSONPath in the source column and leave the target column blank.
  • Update - All of the cases for update can be handled by writing the appropriate JSONPath.
    • Update value - target path = $last_name
    • Update map - target = $address.first_name
    • Update list - target = $names[(value.length)]
      • The '(value.length)' will evaluate to the current length of the array, so the new value will be placed at the end.
    • Update list of maps - target = $customers[*].first_name
      • This translates into "write the value into the 'first_name' field in all elements of the 'customers' array".
    • Update list of lists - target = $lists_of_lists[*][(value.length)]
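The update semantics above can be illustrated with a rough Python sketch. This is not SnapLogic's implementation — the Snap evaluates expressions and JSONPaths internally — it only shows what each update case does to a document:

```python
# Illustrative only: a document as a plain Python dict, with each
# "Update" case from the list above applied by hand.
doc = {
    "last_name": "Smith",
    "address": {"first_name": "old"},
    "names": ["a", "b"],
    "customers": [{"first_name": "x"}, {"first_name": "y"}],
}

# Update value: target path = $last_name
doc["last_name"] = "Jones"

# Update map: target = $address.first_name
doc["address"]["first_name"] = "John"

# Update list: target = $names[(value.length)] appends at the current end
doc["names"].append("c")

# Update list of maps: target = $customers[*].first_name writes into
# every element of the array
for customer in doc["customers"]:
    customer["first_name"] = "John"
```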

Note that the Mapper does not make a copy of any arrays or objects written to the Target Path for performance reasons. Therefore, if you write the same array or object to more than one target path and are going to modify the object, you will need to make the copy yourself. For example, given the array "$myarray" and the following mappings:

$myarray -> $MyArray
$myarray -> $OtherArray

Any future changes made to either "$MyArray" or "$OtherArray" will be reflected in both, since they refer to the same underlying array. In that case, you should make a copy of the array, like so:

$myarray -> $MyArray
[].concat($myarray) -> $OtherArray


The same is true for objects, except you can make a copy using the ".extend()" method, like so:

$myobject -> $MyObject
{}.extend($myobject) -> $OtherObject
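The sharing behavior described above has a direct analogue in Python, which may make the caveat easier to see. The copies below play the role of [].concat($myarray) and {}.extend($myobject); the variable names are illustrative only:

```python
# Without a copy, two targets share one underlying list: a mutation
# through one name is visible through the other.
source = [1, 2, 3]
MyArray = source
OtherArray = source        # same object as MyArray
MyArray.append(4)          # visible through OtherArray as well

# With a copy (the analogue of [].concat($myarray)), the targets are
# independent.
source2 = [1, 2, 3]
MyArray2 = source2
OtherArray2 = list(source2)      # shallow copy
MyArray2.append(4)               # OtherArray2 stays [1, 2, 3]

# Objects behave the same way; dict(obj) copies, like {}.extend($myobject)
myobject = {"a": 1}
MyObject = myobject
OtherObject = dict(myobject)     # independent copy
MyObject["b"] = 2                # OtherObject is unaffected
```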



Support and limitations:



Accounts are not used with this Snap.


Input: This Snap has at most one document input view. With no input view specified, it generates a downstream flow of one document.
Output: This Snap has exactly one document output view.

This Snap has at most one document error view and produces zero or more documents in the view. If the Snap fails during the operation, an error document is sent to the error view containing the fields error, reason, original, resolution, and stacktrace:

{
  error: "$['SFDCID__c\"name'] is undefined",
  reason: "$['SFDCID__c\"name'] was not found in the containing object.",
  original: { ... },
  resolution: "Please check expression syntax and data types.",
  stacktrace: "com.snaplogic.snap.api.SnapDataException: ..."
}

In Spark mode:

  • the errors will be routed to error view documents if the error policy is defined as CONTINUE;
  • the execution will stop on first error if the error policy is defined as FAIL;
  • the execution will simply ignore the error if the error policy is defined as IGNORE.





Required. The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Null-safe access


Enabled: Writes null to the target when the source path does not exist. For example, $person.phonenumbers.pop() -> $lastphonenumber would normally fail if $person.phonenumbers does not exist in the source data; with this setting enabled, null is written to $lastphonenumber instead.
Disabled: If the source path does not exist, the Snap fails, ignores the record entirely, or writes the record to the error view, depending on the setting of the error view property.
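The two behaviors can be sketched with a small Python helper. The resolve function below is hypothetical (not part of SnapLogic); it only models how a missing path yields null when null-safe access is enabled and an error otherwise:

```python
# Illustrative model of Null-safe access: walk a dotted path through
# nested dicts, returning None (null) or raising, depending on the flag.
def resolve(data, path, null_safe):
    cur = data
    for part in path.split("."):
        if not isinstance(cur, dict) or part not in cur:
            if null_safe:
                return None          # Enabled: write null instead
            raise KeyError(part)     # Disabled: treat as an error
        cur = cur[part]
    return cur

person = {"name": "Ann"}             # no "phonenumbers" on this record
```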


Pass through

Required. This setting determines if data should be passed through or not. If not selected, then only the data transformation results that are defined in the mapping section will appear in the output document and the input data will be discarded. If selected, then all of the original input data will be passed into the output document together with the data transformation results.

This setting interacts with Mapping Root. If Mapping Root is set to $ and Pass through is not selected, anything not mapped in the table does not pass through. However, if Mapping Root is set to $customer and Pass through is not selected, the setting applies only to items within the Mapping Root level: anything above the Mapping Root level passes through, while items at the Mapping Root level that are not mapped in the table do not.
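A rough Python sketch of this interaction, using a hypothetical map_fields helper rather than the real Snap, may help. With Pass through off and a Mapping Root below the document root, only fields inside the root are affected; fields above it survive either way:

```python
# Illustrative only: mapped results always appear in the output;
# unmapped fields of the mapped level appear only with pass-through on.
def map_fields(data, mappings, pass_through):
    out = dict(data) if pass_through else {}
    for source, target in mappings.items():
        out[target] = data[source]
    return out

record = {"customer": {"name": "Acme", "id": 1}, "meta": "note"}

# Mapping Root = $customer, Pass through off: inside the root, only the
# mapped field survives ("id" is dropped); $meta sits above the root
# and passes through regardless.
inner = map_fields(record["customer"], {"name": "company"}, pass_through=False)
result = dict(record, customer=inner)
```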

Default: Not selected


Mapping Root

Required. This setting specifies the sub-section of the input data to be mapped.

Default: $


Transformations: Mapping table

Required. The expression to evaluate and the target path to write its result to. Note that source paths consumed by an expression are removed from the output at the end of the run.



Expression                                       | Target Path

$first.concat(" ", $last)   | $full 

Incoming fields from previous Snaps that are not expressly defined in the Mapping Table are passed through the Mapper Snap to the next Snap. However, if a field name defined in the Target Path is the same as a field name that would otherwise pass through, the mapping table wins and overrides the pass-through value.
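The override rule can be sketched as follows; map_with_passthrough is a hypothetical stand-in for the Snap, written only to show that a Target path colliding with a pass-through field wins:

```python
# Illustrative only: pass-through fields are copied first, then mapped
# results are written on top, so a colliding target overrides.
def map_with_passthrough(data, mappings):
    out = dict(data)              # pass-through copy first
    for expr, target in mappings:
        out[target] = expr(data)  # mapped result overrides on collision
    return out

row = {"full": "stale", "first": "John", "last": "Smith"}
result = map_with_passthrough(
    row, [(lambda d: d["first"] + " " + d["last"], "full")]
)
# "full" from the mapping table replaces the pass-through "full"
```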

See Expression Language Overview for more information on the expression language and Expression Language Usage for usage guidelines.



Mapping Table

The mapping table makes it easier to:

  • determine which fields in a schema are mapped or unmapped.
  • create and manage a large mapping table through drag-and-drop.
  • search for specific fields 

Using the Mapping Table

To let the SnapLogic Platform make a best effort at mapping, click SmartLink. See SmartLink for more information.


 In cases where an XML schema is exposed in the Target schema column of the Mapper Snap, map the attribute fields before the element fields of the enclosing element. This ensures that the JSON produced by the Mapper Snap is valid for conversion back to XML.

If a Copy Snap is placed directly after a Mapper Snap, schema information will not be visible in the target schema of the Mapping Table.

Manually Mapping Fields Between Schemas

To manually map fields:

  1. Click the plus sign (+) to add a row, then drag a field into the empty field. Doing so will change the input to an expression.
  2. Drag the appropriate output schema field into the target path for the corresponding input schema field. If a value already exists, the dragged field replaces it.

If you wish to merge two input fields into one output, select and drag one field out, then drag another input field into the same Expression field; the two will be combined.



 If you want to insert a newline between entries, use the unicode value of \u000A, as in:
 $element + "\u000A" + $element 

To directly map a field from one schema to the other, click and drag the field from either schema over to the other schema; the mapping is created automatically.

To add multiple fields at a time:

  • add a new row to the end of the table, select the check boxes for your input fields, and then drop them onto the empty row at the end of the table. The system adds or replaces the first item onto the drop target (leaves the opposing values unchanged), and then adds N - 1 new rows to the table and puts the values of the other selected nodes into the new rows.
  • select your input fields and drag them to the Expression column header.

Note: Non-expression targets  (= toggle off) are treated as literal values and will not affect the mapped/unmapped field list.


To delete a field from the target schema, add a mapping row, specify the input field in the Expression column, then leave the Target path blank.


As of the Fall 2015 release, rows within the table can be rearranged by mousing over a row until the grabber appears on the left.


Searching a Schema within Mapper

To search for a specific field in either schema, enter the term in the appropriate search field.


Because searches are based on full key names, not just the node title, searches may also select a node's ancestry. For example, if you have a tree such as:

+ Parent
    - Child
        - Grandchild

searching for "Parent" will return all three items since the key names are $Parent, $Parent.Child, and $Parent.Child.Grandchild.

Note: File path wildcards * and ? are supported.
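The full-key-name matching described above can be modeled with Python's fnmatch module, which provides the same * and ? wildcard semantics. The search helper is illustrative, not SnapLogic's implementation:

```python
# Illustrative only: match the search term as a substring of each
# node's full key path, with * and ? wildcards via fnmatch.
from fnmatch import fnmatch

keys = ["$Parent", "$Parent.Child", "$Parent.Child.Grandchild", "$Other"]

def search(keys, term):
    return [k for k in keys if fnmatch(k, "*" + term + "*")]

# Searching "Parent" matches all three $Parent... keys because the full
# key names, not just the node titles, are compared.
```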


Viewing Mapped and Unmapped Fields

To filter whether to see all, mapped, or unmapped fields, use the drop-down next to the search field.



Additionally, once a field is mapped, it is bolded and its color changes when you are viewing All.


Click a mapped field in the schema to highlight the corresponding row in the mapping table and the target field.





Note: If an array of data is coming into the Mapper (Data) Snap, when you drag them into the table, a JSONPath expression is created to handle it.


Data Preview

By clicking the arrow under the mapping table, you can get a preview of what input and target schemas look like.



Other Usability Functions

  • You can collapse the schema views using the gray arrows in the header row to give you larger Expression and Target path columns.
  • Click the down arrow within an expression field to access the expression editor, functions and properties, and the upstream schema. Click the bubble in the Target path field to access the downstream schema suggestions.
  • You can rearrange the rows within the mapping table by highlighting the left side, clicking and dragging.



Example Data Output

Successful Mapping

If your source data looks like:

  "first_name": "John",
  "last_name": "Smith",
  "phone_num": "123-456-7890"


And your mapping looks like:

  • Expression: $first_name.concat(" ", $last_name)
  • Target path: $full_name 

Your outgoing data will look like:

  "full_name: "John Smith",
  "phone_num": "123-456-7890"


Unsuccessful Mapping

If your source data looks like: 

  "first_name": "John",
  "last_name": "Smith",
  "phone_num": "123-456-7890"

And your mapping looks like:

  • Expression: $middle_name.concat(" ", $last_name)
  • Target path: $full_name 

An error will be thrown.

Understanding Mapping Root

Documents in a pipeline can be hierarchical, meaning an object can contain other objects or arrays, which themselves can contain objects or arrays.  For example, the following JSON document is hierarchical since the root object contains an object in the "child" field:

    "name": "Acme",
    "child": { "field1": 1, "field2": 2 } }


Mapping simple hierarchical documents that only contain other objects is straightforward since you can directly map one field to another. However, performing a mapping for documents that contain arrays of objects is more complicated since the objects in the array need to be mapped separately from the parent object. The mapping needs to be separate because there is no unambiguous way to describe the array mapping using the expression language and JSONPaths. To address the need to map arrays, the Mapping Root property has been added to the Mapper Snap.

The Mapping Root property is a JSONPath that limits the scope of a mapping to the parts of the document that match the given path. For example, a Mapping Root like $.my_array[*] will tell the Mapper to iterate over the objects in the array and transform each object based on the mapping. The other parts of the document that do not match the Mapping Root will be passed through untouched in the output. By default, the root is set to $, which is the root of the document. 

Since array mappings need to be done separately, you will need to add additional Mapper Snaps for each array mapping that needs to be done. The additional Mapper Snaps should be chained together such that the top levels of the hierarchy are mapped before descending down to the lower levels. The reason for this ordering is that the Mapper UI will pare down the Input and Target schema views to only show the fields that are in the objects of the array.

Therefore, the outer structures of the document need to agree between the source and target or else the schema views will not be useful. 

As a more complete example, we'll build a pipeline that maps the following source document to a target document.


Source document:

    "name": "Acme",
    "employee": [ { "first_name": "Bob", "last_name": "Smith", "age": 32 }, { "first_name": "Joe", "last_name": "Doe", "age": 44 } ] }

Target document:

    "company_name": "Acme",
    "workers": [ { "name": "Bob Smith", "age": 32 }, { "name": "Joe Doe", "age": 44 } ] }

The source document is hierarchical since it contains an array of objects, so we'll need two separate Mapper Snaps: one to map the parent fields and another to map the elements in the "employee" array.  The first Mapper's configuration is pretty simple since it is just changing names:
  Source        | Target
  $name         | $company_name
  $employee     | $workers


The second Mapper will be connected to the output of the first so that it can work on the lower levels of the document hierarchy. The "Mapping Root" for this Snap will need to be changed so that only the objects in the "workers" array will be affected by the mapping transformations. After setting the root, note that the Input Schema changes to only show the fields in the array objects. If there was a target schema available, that would also be narrowed down to show the "name" and "age" fields.
  Mapping Root: $workers[*]
  Source                           | Target
  $first_name + " " + $last_name   | $name
  $age                             | $age
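The two-Mapper pipeline above can be traced end to end in plain Python. This sketch is only a model of the semantics: stage 1 renames the top-level fields, and stage 2 plays the role of a Mapper whose Mapping Root is $workers[*], transforming each array element while everything else passes through:

```python
source = {
    "name": "Acme",
    "employee": [
        {"first_name": "Bob", "last_name": "Smith", "age": 32},
        {"first_name": "Joe", "last_name": "Doe", "age": 44},
    ],
}

# First Mapper: $name -> $company_name, $employee -> $workers
stage1 = {"company_name": source["name"], "workers": source["employee"]}

# Second Mapper: Mapping Root = $workers[*]; each object in the array
# is mapped separately, the rest of the document is untouched.
target = dict(stage1)
target["workers"] = [
    {"name": w["first_name"] + " " + w["last_name"], "age": w["age"]}
    for w in stage1["workers"]
]
```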

See it in Action

The SnapLogic Data Mapper


SnapLogic Best Practices: Data Transformations and Mappings



Related Information


Snap History


  • Snap-aware error handling policy enabled for Spark mode. This ensures the error handling specified on the Snap is used.


  • You can now expand/collapse all nodes of a schema tree by holding the Shift key while clicking on the plus (+) sign.
  • Schemas with less than 1000 entries will now auto-expand when searching/filtering. 


  • Resolved an issue with the Mapper Snap that occurred while evaluating an expression and reporting its error. 

