You can import a .csv file to add or update data for a data type.
When you import data into a data type, a .csv file is generated if any records have
errors. The .csv file contains the error details for the record on each row of the
file. You can fix the errors and reimport the data. You can change the location to
which the .csv file is written by using Dynamic System Settings. For example, on a
multi-node system, you can set the destination to a shared location so that a user
can access the file from any node in the system.
Note: This option is not available for Apache Cassandra databases.
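As a rough illustration of this error file, the following minimal Python sketch writes failed rows to a .csv with an extra error column at a configurable destination. The setting name, path, and file layout here are assumptions for the example, not the platform's actual implementation.

```python
import csv
import os

# Hypothetical stand-in for a Dynamic System Setting: on a multi-node
# system this would point at a shared location reachable from every node.
ERROR_FILE_DIR = os.environ.get("IMPORT_ERROR_DIR", "/shared/import-errors")

def write_error_file(failed_rows, headers, name="import-errors.csv"):
    """Write failed rows to a .csv, appending an error-details column."""
    path = os.path.join(ERROR_FILE_DIR, name)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headers + ["Error details"])  # extra error column
        for row, error in failed_rows:
            writer.writerow(row + [error])
    return path
```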
-
Click the Data icon in the navigation panel to display the Data Explorer.
To view case types for data import, select Data in the Explorer panel,
and then select Show case types from the dropdown menu at the top right
of the explorer.
-
Click the data type for which you want to import data to display it in the Data
Designer.
-
Click the Records tab.
-
Click Import.
-
Upload a .csv file for the data import.
-
Map the fields in your data type to the fields in the .csv file. One way to
represent such a mapping is shown in the sketch after the template step below.
- Optional:
If you are using a dashboard gadget in your end user portal that displays in-progress
and recently completed data imports, enter a short description of the import.
-
Select Save import settings as a template if you want to save
the mapping between the fields in your data type and the fields in the .csv file as a
template for future imports for the data type. Enter a name for the template in the
Name for this data import field.
Note: If you selected a template when you mapped the fields, the template name is
displayed in the Name for this data import field. If you enter the name
of an existing template, that template is overwritten with the settings from the
current import.
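Conceptually, a field mapping pairs each field in the data type with a column header in the .csv file, and a template is just that mapping saved under a name. The sketch below is illustrative only; the field names and JSON layout are assumptions, not the platform's template format.

```python
import json

# Hypothetical mapping: data type field -> .csv column header.
field_mapping = {
    "CustomerID": "cust_id",
    "FullName": "name",
    "SignupDate": "signup_date",
}

def save_template(name, mapping, path="import-templates.json"):
    """Persist the mapping under a template name. Reusing an existing
    name overwrites that template, as described in the note above."""
    try:
        with open(path) as f:
            templates = json.load(f)
    except FileNotFoundError:
        templates = {}
    templates[name] = mapping
    with open(path, "w") as f:
        json.dump(templates, f, indent=2)

save_template("Customer import", field_mapping)
```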
-
Select Skip running validate rules if you do not want the system
to validate the data by using the data type's validate rules.
Note: The system always performs basic type checking (string, integer, date, and so on)
on imported fields. If you clear this check box, the data import process also runs the
default validate rule of the data type on each imported record, which makes the import
take longer. Hence, it is a best practice to clear this check box only when you need
that additional validation.
-
Click Start validation to begin validating the data. No data is
imported during this step. A dialog box shows the validation progress,
the start time, the running time, and the numbers of valid and invalid rows. If
there are errors, the record number and error details are displayed.
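As a rough illustration of this validation pass, the sketch below applies basic type checks to each row and tallies valid and invalid rows without importing anything. The declared field types and date format are assumptions for the example.

```python
from datetime import datetime

# Hypothetical declared types for the mapped fields.
FIELD_TYPES = {"cust_id": "integer", "name": "text", "signup_date": "date"}

def check_row(row):
    """Return a list of error details for one row (empty if valid)."""
    errors = []
    for field, ftype in FIELD_TYPES.items():
        value = row.get(field, "")
        try:
            if ftype == "integer":
                int(value)
            elif ftype == "date":
                datetime.strptime(value, "%Y-%m-%d")
        except ValueError:
            errors.append(f"{field}: '{value}' is not a valid {ftype}")
    return errors

def validate(rows):
    """Count valid and invalid rows and collect per-row error details."""
    valid = invalid = 0
    failures = []
    for number, row in enumerate(rows, start=1):
        errors = check_row(row)
        if errors:
            invalid += 1
            failures.append((number, errors))
        else:
            valid += 1
    return valid, invalid, failures
```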
- Optional:
If your data import process has errors, click Download to
download a .csv file that lists them. The .csv file has an additional column that
contains the error details for the record on each row of the file. You can fix the
errors and import the data again for your data type.
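If you process the downloaded error file programmatically, reading it back might look like the following sketch, which assumes the error details sit in the last column; the column name here is a guess, not the actual header.

```python
import csv

def read_errors(path):
    """Yield (row_data, error_details) pairs from a downloaded error file,
    assuming the error details occupy the final column of each row."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            error = row.pop("Error details", None)  # assumed column name
            if error:
                yield row, error

# Example usage:
# for row, error in read_errors("import-errors.csv"):
#     print(error, "->", row)
```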
-
Click Continue import to import the data, or click
Cancel if there are errors in the data that you want to fix
before importing. If you continue even though there are errors, only the rows without
errors are imported. A dialog box lists the time taken for the import, the number of
records created and updated, and the errors in the import.
You can close the dialog box for the data import process. The process runs
asynchronously and is visible on your worklist.
To cancel the import, click Stop import. The import stops when
the current batch finishes processing.
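The stop behavior is easiest to picture as batch processing: rows are imported a batch at a time, and a stop request is honored only between batches, so the batch in flight always finishes. A minimal sketch follows; the batch size and function names are illustrative.

```python
BATCH_SIZE = 500  # illustrative batch size

def import_in_batches(rows, import_batch, stop_requested):
    """Import rows batch by batch; check the stop flag between batches,
    so the current batch always finishes before the import stops."""
    imported = 0
    for start in range(0, len(rows), BATCH_SIZE):
        if stop_requested():
            break  # stop only at a batch boundary
        batch = rows[start:start + BATCH_SIZE]
        import_batch(batch)  # e.g. insert or update the records
        imported += len(batch)
    return imported
```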
-
Click Finish to return to the Data Designer and see the new and
updated data for your data type.
CAUTION:
It is possible for a record to pass the validation step and still
have an error that is detected only during the actual import. This can occur when the
schema for an externally mapped table has restrictions that are not reflected in the
rules, for example, when the length specified for a text column does not match the
length in the property rule. The records without errors are still imported, but
performance might be affected, and the error messages might be confusing because they
are generated by the database. To prevent this situation, make the property definitions
match the table's columns as closely as possible, or use a validate rule that enforces
the same restrictions as the schema.
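For example, if an externally mapped column is VARCHAR(40) but the property rule allows 64 characters, a 50-character value passes validation and then fails in the database. A validate-style check that mirrors the column constraints, as the caution suggests, might look like this sketch; the field names and limits are hypothetical.

```python
# Hypothetical column limits taken from the external table's schema.
COLUMN_MAX_LENGTHS = {"name": 40, "email": 80}

def check_lengths(row):
    """Enforce the same length restrictions as the database schema, so
    oversized values fail validation instead of failing in the database."""
    errors = []
    for field, max_len in COLUMN_MAX_LENGTHS.items():
        value = row.get(field, "")
        if len(value) > max_len:
            errors.append(f"{field}: {len(value)} characters exceeds "
                          f"column limit of {max_len}")
    return errors
```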