Importing data from a file

Updated on April 6, 2022

To add data to your data model more quickly, import data from a .csv file.

If your operator ID has Allow rule checkout enabled, import performance might be impacted. Use an operator ID that does not have Allow rule checkout enabled, or disable this option for your operator ID. For more information, see Defining security information for an operator. To view a list of operators that have Allow rule checkout enabled, click View Operators with 'Allow rule checkout' enabled at the bottom of the Migration settings landing page.
Note: You cannot import data from a .csv file if you are using an Apache Cassandra database.
  1. Access the data object.
    1. In the navigation pane of App Studio, click Data.
    2. Click a data object.
  2. Upload the file.
    1. Click the Records tab, and then click Import.
    2. In the Purpose list, select Add only.
    3. Click Choose File.
    4. Navigate to your .csv file, and then click Open.
    5. Click Next.
  3. Map the columns in your file to the fields in the data object.
    Tip: To make mapping faster, select a template from the Template list to define the mapping between the fields in your data object and the fields in the .csv file, and then skip the rest of this step.
    1. In the Target field column, select the fields in your data object that correspond to the .csv fields shown in the Source field column, or click Select to display a dialog box for searching and filtering fields.
      Note: You can select top-level and embedded properties as targets for import.

      If your source contains fields that do not exist in the template, for example a "Salutation" field that the template does not include, check whether the data object has a field that you can map it to. If there is no suitable field, add it to the data object.

      The column headers in your .csv file must be unique because they correspond to field names in your application. For example, to import a salutation value from your .csv file into a property called "Salutation", ensure that the column header for that column is "Salutation". For a sketch of a pre-import header check, see the example after this procedure.

    2. Optional: If you are importing fields from an external system, you can apply business logic, such as lookups, decision trees, and decision tables, to translate the external data into fields that Pega Platform understands. In the Mapping options column, click the Mapping options icon to select the type of business logic to use for translation, and then click Submit. For more information, see Applying business logic when importing data.
    3. Optional: Enter a default value. For new records, the default value is used if the source field is blank. For existing records, the default value is used if both the source and target fields are blank. If you use a lookup, decision tree, or decision table, the source value is the value obtained from the lookup, tree, or table, not the value in the .csv file.
    4. Optional: Set defaults for fields that do not have matching source columns in the .csv file.
      1. Click View custom defaults.
      2. Click Add default value.
      3. In the Target field column, enter the target field or click Select to choose the target field from a list of fields.
      4. In the Default value column, enter the default value. For new records, the default value is used if it is provided. For existing records, the default value is used if the target value is blank.
      5. Click Next if you have finished mapping fields, or click Back to mapping to continue mapping fields.
  4. Click Next.
  5. Optional: If your end-user portal includes a dashboard gadget that displays in-progress and recently completed data imports, in the Name for this data import field, enter a short description of the import.
  6. To save your import settings for reuse when mapping fields, in the Additional import options section, click Save import settings as a template, and then enter the template name.
  7. To control validation overhead, in the Skip validation section, configure how much validation to use on imported data by selecting one of the following validation options.
    • Select Skip validation step entirely if you want the system to perform no validation on the imported data.
    • Select Skip running validate rules if you want the system to perform basic validation. Basic validation checks that the property type of the imported data matches the property type of the field in the data type.
    • Clear Skip running validate rules if you want the system to perform advanced validation. Advanced validation performs basic validation, and then runs the default validate rule of the data type on each imported record. Running the validate rule makes the import take longer; however, it gives you more control when you need additional validation.
  8. If you selected Skip running validate rules in step 7, click Start validation.
    Result: If an error is found, the record is removed from the valid records and is written to a row in the error .csv file. Fix the flawed rows and restart the import process. If you choose to continue the import without fixing the errors, the flawed rows are not imported.
  9. Click Continue import.
    You can close the dialog box for the data import process. The process runs asynchronously and is visible on your worklist.
  10. If you selected Skip validation step entirely in step 7, click Import.
  11. Click Finish.
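
Because the import maps each .csv column header to a target field name (step 3), it can help to confirm the headers before you upload the file. The following is a minimal sketch of such a check in Python; it is not part of Pega Platform, and the file name and field names in it are placeholder assumptions for illustration only.

    # Pre-import check (illustrative only, not a Pega Platform API): verify that
    # the column headers in a .csv file are unique and match the target fields
    # that you plan to map to. "customers.csv" and the field names are examples.
    import csv

    EXPECTED_FIELDS = {"Salutation", "FirstName", "LastName"}  # hypothetical target fields

    def check_headers(path):
        # Read only the header row of the .csv file.
        with open(path, newline="", encoding="utf-8") as f:
            headers = next(csv.reader(f))
        # Headers must be unique and should have a matching target field.
        duplicates = {h for h in headers if headers.count(h) > 1}
        unmapped = [h for h in headers if h not in EXPECTED_FIELDS]
        if duplicates:
            print("Duplicate column headers:", sorted(duplicates))
        if unmapped:
            print("Headers with no matching target field:", unmapped)
        return not duplicates and not unmapped

    if __name__ == "__main__":
        if check_headers("customers.csv"):
            print("Headers look ready for import.")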
