Unify and manage your data

Data Loader at a glance

Learn about preparing and uploading your data using the Reltio Data Loader.

Reltio Entity Resolution Reltio Multidomain MDM Reltio Customer 360

Important: Some Data Loader functionalities are cloud platform specific. We indicate which ones are specific throughout the documentation.

Ready to get going?

Are you a data steward looking to upload several sources of your data? Want to consolidate, clean, and unify them into a single golden record?

The Data Loader, Reltio’s intuitive application, enables you to:

  • Load your data from the most common locations and in the most common formats. For more information, see topic Supported data sources and file types.
  • Visualize your data, define mappings, and perform basic transformations before loading the data.
  • Define data load options, such as using life cycle actions (LCA), or full or partial updates.

What's next?

Learn about the Data Loader interface and how to get around in topic Get started with Data Loader.

There you'll also learn about other things you need to know before you begin:

Threshold errors

During pre-processing of a data load job, the Data Loader can automatically fail or cancel the job. This prevents jobs from running for a long time while contributing nothing or generating large error files. A data load job may also time out if it has more than 10,000 rows.

The Data Loader uses a defined rate of permitted errors, known as the error threshold. If your job encounters bad data at or above that rate, the Data Loader cancels or fails the job. When such a cancellation or failure occurs, the Job Status page prompts you to review the error file. Jobs that reach the error threshold have a status of STOPPED and display an error message such as The job was stopped as error threshold limit 15% was reached. Actual is 16.8% (306 of 1819 profiles).

When would I set the errorThreshold?

Apply the error threshold when there are more than 10,000 records to process. You define the error threshold at the Job Definition level. Set the value as appropriate or leave it at the default of 15%.

Here’s an example where the error threshold is set to 25%:

    {
        "additionalAttributes": {
            "alwaysCreateDCR": false,
            "acceptFileType": ".csv",
            "errorThreshold": 25
        }
    }

If a job has some errors but doesn't reach the predefined error threshold, it completes with the status COMPLETED_WITH_ERRORS.
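To make the two outcomes concrete, here is a minimal sketch of the threshold arithmetic described above. The function name and statuses-as-strings are illustrative assumptions, not part of the Reltio API; the counts come from the example error message.

```python
def resolve_job_status(error_rows: int, total_rows: int,
                       threshold_pct: float = 15.0) -> str:
    """Illustrative sketch only -- not Reltio's actual implementation.

    Compares the job's error rate against the error threshold and
    returns the resulting job status as described in the docs.
    """
    if total_rows == 0 or error_rows == 0:
        return "COMPLETED"
    error_rate = 100.0 * error_rows / total_rows
    if error_rate >= threshold_pct:
        # Threshold reached: the job is stopped/failed.
        return "STOPPED"
    # Errors occurred, but below the threshold: the job completes.
    return "COMPLETED_WITH_ERRORS"

# From the example message: 306 of 1819 profiles is about 16.8%,
# which is at or above the default 15% threshold.
print(resolve_job_status(306, 1819))  # STOPPED
print(resolve_job_status(50, 1819))   # COMPLETED_WITH_ERRORS
```

With the default 15% threshold, 306 errors out of 1819 profiles (about 16.8%) stops the job, while a smaller error count lets it complete with errors.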