
This content has been archived and is no longer being updated.

Links may not function; however, this content may be relevant to outdated versions of the product.

Monitoring the progress of your case archival process

Updated on April 5, 2022

The pr_metadata table holds the pyArchiveStatus value and other information for every record processed by the three archival jobs: pyPegaArchiver, pyPegaIndexer, and pyPegaPurger.

View and monitor the progress of the case archival process by using Query Runner to run a SQL statement on the pr_metadata table. For more information about Query Runner, see Running SQL queries on Pega Cloud.
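For example, a query like the following summarizes how many records are in each archival state. This is a sketch: the pr_metadata table and pyArchiveStatus column come from this article, but whether the table needs a schema qualifier in Query Runner depends on your environment.

```sql
-- Count pr_metadata records by archival status.
-- Depending on your environment, the table may need a schema
-- qualifier (for example, data.pr_metadata).
SELECT pyArchiveStatus,
       COUNT(*) AS record_count
FROM   pr_metadata
GROUP  BY pyArchiveStatus
ORDER  BY record_count DESC;
```

Rerunning this query while the jobs are active shows records moving from one status to the next as each job completes its step.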

Monitoring your archival process using the pr_metadata table
For a description of each value of pyArchiveStatus, and when the status occurs during the archival process, see the list below.

After the Crawler step of the pyPegaArchiver job:
  • Archive-Ready: The record is pending the Copier step of the pyPegaArchiver job.

After the pyPegaArchiver job:
  • Archived: The record is copied to Pega Cloud File Storage.
  • Archive-External: The record is an external attachment, and its reference is copied to Pega Cloud File Storage.
  • A status indicating that the record is shared between cases and may not be eligible for the archival process.
  • A status indicating that the archival process failed for the record.

After the pyPegaIndexer job:
  • A status indicating that the record has been indexed into Elasticsearch and is pending the purge process.
  • A status indicating that indexing the record into Elasticsearch failed.

After the pyPegaPurger job:
  • No status remains: the pyPegaPurger job deletes all entries in the pr_metadata table after it succeeds, so an empty pr_metadata table indicates that the records have been purged from the Pega database.
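Because the pyPegaPurger job deletes all entries in the pr_metadata table after it succeeds, an empty table indicates that a full archive-index-purge cycle completed. A quick check (a sketch; as above, a schema qualifier may be needed in your environment):

```sql
-- A result of 0 means the pyPegaPurger job completed and
-- cleared the pr_metadata table.
SELECT COUNT(*) AS remaining_entries
FROM   pr_metadata;
```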

Additionally, you can validate that your case archival process succeeded by running a report, both before and after the archiving process, for cases that are resolved per their archival policy. The difference between the two reports shows the cases that Pega Platform purged from the primary database and copied to Pega Cloud File Storage.
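As a rough SQL equivalent of that before-and-after comparison, the following query counts resolved cases. The pc_work table name is an assumption: whether your case types map to the default work table or to dedicated class tables depends on your application, so verify the table name before relying on the counts.

```sql
-- Hypothetical before/after check: run once before archiving and
-- once after, then compare the counts. Assumes cases live in the
-- default pc_work table; adjust for your case types' tables.
SELECT COUNT(*) AS resolved_cases
FROM   pc_work
WHERE  pyStatusWork LIKE 'Resolved-%';
```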

You can check whether Pega Platform archived a specific case by searching for archived work items in the Case Management Portal.

For more information, see Reviewing archived case data.

Using logs to troubleshoot your archiving and purge process

If your archival process did not run as expected, you can obtain the log files for troubleshooting.

  1. Enable the following loggers:
    • datastoreexecutor.crawler.CaseCrawler
    • datastoreexecutor.Archiver.CaseArchivalStrategy
    • datastoreexecutor.databroker.IndexManagerService
    • com.pega.platform.datastoreexecutor.purger.internal.PurgerImpl

    For more information about viewing logs, see Viewing logs.

  2. Access your log files.
    1. Use Kibana to access your log files.

Job schedulers run on any available node. By using Kibana, you can access the logs of your archiving and purge jobs regardless of which node they ran on.

      For more information, see the article Accessing your server logs using Kibana.

    2. Optional: If you cannot use Kibana, use activities to run archive and purge on your local node:

Instead of running the pyPegaArchiver, pyPegaIndexer, and pyPegaPurger jobs, run the following activities:

      Note: Run the activities in the following order.
      • pzPerformArchive
      • pzPerformIndex
      • pzPerformPurge

      For more information about using activities, see Creating an activity.

The PegaRULES log is created on the node on which you run the activity.
