
Exporting and importing simulation data automatically with Deployment Manager 4.6.x

Updated on September 10, 2021

Create and run data migration pipelines in Deployment Manager to automatically export simulation data from a production environment into a simulation environment in which you can test the data. You can also use Deployment Manager to monitor and obtain information about your simulations, for example, by running diagnostics to ensure that your environment configurations are correct and by viewing reports that display key performance indicators (KPIs).

See the following topics for more information:

Creating a pipeline

  1. In the navigation pane, click Pipelines > Data migration pipelines.
  2. Click New.
  3. On the Environment Details page, if you are using Deployment Manager on-premises, configure environment details.

    This information is automatically populated if you are using Deployment Manager in Pega Cloud Services environments, but you can change it.
     
    1. In the Environment fields, enter the URLs of the production and simulation environments.
    2. If you are using your own authentication profiles, in the Auth profile lists, select the authentication profiles that you want the orchestration server to use to communicate with the production and simulation environments.
    3. Click Next.
  4. On the Application details page, specify the application information for which you are creating the pipeline.
    1. In the Application list, select the name of the application.
    2. In the Version list, select the application version.
    3. In the Access group list, select the access group for which you want to run pipeline tasks. This access group must be present on the production and simulation environments and have at least the sysadmin4 role.
    4. In the Name of the pipeline field, enter the pipeline name.
    5. Click Next.

The Pipeline page displays the stages and tasks in the pipeline, which you cannot delete.

  5. Click Finish.
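
For reference, the details that the wizard collects amount to a small configuration record. The following Python sketch models those fields for illustration only; the class, field names, and sample values are assumptions, not part of Deployment Manager:

    from dataclasses import dataclass

    @dataclass
    class DataMigrationPipeline:
        """Illustrative model of the details the pipeline wizard collects."""
        name: str                      # Name of the pipeline
        production_url: str            # URL of the production environment
        simulation_url: str            # URL of the simulation environment
        production_auth_profile: str   # Auth profile for the production environment
        simulation_auth_profile: str   # Auth profile for the simulation environment
        application: str               # Application name
        version: str                   # Application version
        access_group: str              # Must exist on both environments with at least the sysadmin4 role

    # Hypothetical example values:
    pipeline = DataMigrationPipeline(
        name="SimDataSync",
        production_url="https://prod.example.com/prweb",
        simulation_url="https://sim.example.com/prweb",
        production_auth_profile="DMProdAuth",
        simulation_auth_profile="DMSimAuth",
        application="MyApp",
        version="01.01.01",
        access_group="MyApp:Administrators",
    )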

Modifying a pipeline

  1. If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the name of the pipeline.
  2. Click Actions > Settings.
  3. Modify environment details by clicking Environment Details.
  4. In the Environment fields, enter the URLs of the production and simulation environments.
  5. To change the application information for the pipeline, click Application details.
    1. In the Version list, select the application version.
    2. In the Access group list, select the access group for which you want to run pipeline tasks. This access group must be present on the production and simulation environments and have at least the sysadmin4 role.
  6. Click Save.

Scheduling a pipeline to run automatically by using a job scheduler rule

You can schedule a data migration pipeline to run during a specified period of time by creating and running a job scheduler. The job scheduler runs a Deployment Manager activity (pzScheduleDataSyncPipeline) on the specified pipeline, based on your configuration, such as weekly or monthly.

For more information about job scheduler rules, see Job Scheduler rules.
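
Conceptually, the job scheduler is a timer that repeatedly invokes the pzScheduleDataSyncPipeline activity with your pipeline and application as parameters. The following Python sketch illustrates that behavior under stated assumptions: the REST endpoint, payload shape, and credentials are hypothetical stand-ins, not the Deployment Manager API:

    import time
    import requests  # third-party package: pip install requests

    ORCHESTRATOR = "https://orchestrator.example.com/prweb"  # hypothetical orchestration server URL
    WEEK_SECONDS = 7 * 24 * 60 * 60

    def trigger_pipeline(pipeline_name: str, application_name: str) -> None:
        """Mimic the effect of pzScheduleDataSyncPipeline: ask the orchestration
        server to start the named data migration pipeline. The endpoint below is
        an assumption for illustration, not a documented Deployment Manager URL."""
        response = requests.post(
            f"{ORCHESTRATOR}/api/datasync/start",     # assumed endpoint
            json={
                "PipelineName": pipeline_name,         # same parameter names as the activity
                "ApplicationName": application_name,
            },
            auth=("deploy.admin", "password"),         # placeholder credentials
            timeout=30,
        )
        response.raise_for_status()

    # A "Weekly" schedule amounts to running the trigger once every seven days:
    while True:
        trigger_pipeline("SimDataSync", "MyApp")
        time.sleep(WEEK_SECONDS)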

  1. On the orchestration server, in the navigation panel of Dev Studio, click Records > SysAdmin > Job Scheduler, and then click Create.
  2. On the Create Job Scheduler rule form, enter the label of the scheduler and select the ruleset into which to save the job scheduler.
  3. Click Create and open.
  4. On the Edit Job Scheduler rule form, on the Definition tab, in the Runs on list, configure the job scheduler to run on all nodes or on a single node:
    • To run the job scheduler on all nodes in a cluster, click All associated nodes.
    • To run the job scheduler on only one node in a cluster, click Any one associated node.
  5. In the Schedule list, select how often you want to start the job scheduler, and then specify the options for it.
  6. Select the context for the activity resolution.
    • If you want to resolve the pzScheduleDataSyncPipeline activity in the context of Deployment Manager, go to step 7.
    • If you want to resolve the activity in the context that is specified in the System Runtime Context, go to step 8.
  7. To resolve the pzScheduleDataSyncPipeline activity in the context of Deployment Manager:
    1. In the Context list, select Specify access group.
    2. In the Access group field, press the Down arrow key and select the access group that can access Deployment Manager.
    3. Go to step 9.
  8. To resolve the activity in the context that is specified in the System Runtime Context:
    1. In the Context list, select Use System Runtime Context.
    2. Update the BATCH requestor type with the access group that can access Deployment Manager. In the header of Dev Studio, click Configure > System > General.
    3. On the System:General page, on the Requestors tab, click the BATCH requestor type.
    4. On the Edit Requestor Type rule form, on the Definition tab, in the Access Group Name field, press the Down arrow key and select the access group that can access Deployment Manager.
    5. Click Save.
  9. On the Job Scheduler rule form, in the Class field, press the Down arrow key and select Pega-Pipeline-DataSync.
  10. In the Activity field, press the Down arrow key and select pzScheduleDataSyncPipeline.
  11. Click the Parameters link that appears below the Activity field.
  12. In the Activity Parameters dialog box, in the Parameter value field for the PipelineName parameter, enter the name of the data migration pipeline that the job scheduler runs.
  13. In the Parameter value field for the ApplicationName parameter, enter the name of the application for which the data migration pipeline runs.
  14. Click Submit.
  15. Save the Job Scheduler rule form.

When the job scheduler rule starts, it runs the pipeline in Deployment Manager in the background based on your schedule.

Running a data migration manually

If you do not run a data migration pipeline based on a job scheduler, you can run it manually in Deployment Manager.

  1. Do one of the following actions:
    • If the pipeline for which you want to run a data migration is open, click Start data migration.
    • If the pipeline is not open, click Pipelines > Data migration pipelines, click the name of the pipeline, and then click Start data migration.
  2. In the Start data migration dialog box, click Yes.

Pausing a data migration

When you pause a data migration, the pipeline completes the current task and stops the data migration.

  1. Optional: If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the name of the pipeline.
  2. Click Pause.

Stopping a data migration

  1. Optional: If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the name of the pipeline.
  2. Click the More icon, and then click Abort.

Stopping or resuming a data migration that has errors

If a data migration has errors, the pipeline stops processing it, and you can either resume or stop running the pipeline.

  1. Optional: If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the name of the pipeline.
  2. Click the More icon, and then do one of the following:
    • To resume running the pipeline from the failed task, click Start data migration pipeline.
    • To stop running the pipeline, click Abort.

Diagnosing a pipeline

You can diagnose your pipeline to verify its configuration. For example, you can verify that the orchestration system can connect to the production and simulation environments.

  1. If the pipeline is not open, in the navigation pane, click Pipelines, and then click the name of the pipeline.
  2. Click Actions > Diagnose pipeline.
  3. In the Diagnostics window, review the errors, if any.
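
You can perform a similar reachability check outside Deployment Manager. The following Python sketch pings both environments; the URLs are placeholders, and the path assumes the standard Pega REST ping service, which can vary by version:

    import requests  # third-party package: pip install requests

    ENVIRONMENTS = {
        "production": "https://prod.example.com/prweb",  # placeholder URLs
        "simulation": "https://sim.example.com/prweb",
    }

    for name, base_url in ENVIRONMENTS.items():
        try:
            # Recent Pega versions expose a ping service at this path;
            # adjust it if your deployment differs.
            r = requests.get(f"{base_url}/PRRestService/monitor/pingservice/ping", timeout=10)
            status = "reachable" if r.ok else f"HTTP {r.status_code}"
        except requests.RequestException as exc:
            status = f"unreachable ({exc})"
        print(f"{name}: {status}")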

Viewing data migration logs

View the logs for a data migration to see the completion status of operations, for example, when a data migration moves to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change the logging level of your deployment from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool.

  1. If the pipeline is not open, in the navigation pane, click Pipelines, and then click the name of the pipeline.
  2. Perform one of the following actions:
    • To view the log for the current data migration, click the More icon, and then click View logs.
    • To view the log for a previous data migration, expand the Deployment History pane and click Logs for the appropriate deployment.

Viewing a report for a specific data migration

  1. Optional: If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the name of the pipeline.
  2. Perform one of the following actions:
    • To view the report for the current deployment, click the More icon, and then click View report.
    • To view the report for a previous deployment, expand the Deployment History pane and click Reports for the appropriate deployment.

Viewing reports for all data migrations

Reports provide a variety of information about all the data migrations in your pipeline. You can view the following key performance indicators (KPIs):

  • Data migration success – Percentage of successfully completed data migrations
  • Data migration frequency – Frequency of new data migrations
  • Data migration speed – Average time taken to complete data migrations
  • Start frequency – Frequency at which new data migrations are triggered
  • Failure rate – Average number of failures per data migration
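
To make these definitions concrete, the following Python sketch computes the success, speed, and failure-rate KPIs from a few hypothetical migration records; the data and field layout are invented for illustration:

    from datetime import timedelta

    # Hypothetical history: (succeeded, duration, failure_count) per data migration
    migrations = [
        (True,  timedelta(minutes=42), 0),
        (False, timedelta(minutes=55), 2),
        (True,  timedelta(minutes=38), 1),
    ]

    total = len(migrations)

    # Data migration success: percentage of migrations that completed successfully
    success_rate = 100 * sum(1 for ok, _, _ in migrations if ok) / total

    # Data migration speed: average time taken to complete a migration
    avg_duration = sum((d for _, d, _ in migrations), timedelta()) / total

    # Failure rate: average number of failures per data migration
    failure_rate = sum(f for _, _, f in migrations) / total

    print(f"success: {success_rate:.0f}%, average duration: {avg_duration}, "
          f"failures per migration: {failure_rate:.1f}")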

To view reports, do the following tasks:

  1. Do one of the following actions:
    • If the pipeline is open, click Actions > View report.
    • If a pipeline is not open, in the navigation pane, click Reports. Next, in the Pipeline field, press the Down arrow key and select the name of the pipeline for which to view the report.
  2. Optional: In the list that appears in the top right of the Reports page, select whether you want to view reports for all deployments, the last 20 deployments, or the last 50 deployments.

Deleting a pipeline

When you delete a pipeline, its associated application packages are not deleted from the pipeline repositories.

  1. In the navigation pane, click Pipelines.
  2. Click the Delete icon for the pipeline that you want to delete.
  3. Click Submit.
