
Published Release Notes

This documentation is for non-current versions of Pega Platform.

Integrate your application with external Kafka clusters

Valid from Pega Version 8.4

Configure your application to use an external Kafka cluster for managing real-time data. With an external Stream service provider, you can perform maintenance tasks such as upgrades or hotfix deployments faster, because the external nodes do not require restarting.

For more information, see Externally managed Stream service.
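
As background only, the following sketch uses the standard Apache Kafka Java client to verify connectivity to an external cluster. The broker addresses are placeholders, and in Pega Platform the external Stream service is configured through platform settings rather than application code; the sketch simply illustrates the kind of external brokers that the Stream service can now point to.

```java
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

// Conceptual check of connectivity to an external Kafka cluster.
// The broker list below is a placeholder; in Pega Platform the external
// Stream service is configured through platform settings, not code like this.
public class ExternalKafkaCheck {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        // External brokers live outside the Pega cluster, so Pega nodes can be
        // restarted (for example, during an upgrade or hotfix) without touching them.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka-1.example.com:9092,kafka-2.example.com:9092");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "10000");

        try (AdminClient admin = AdminClient.create(props)) {
            // List the topics the cluster exposes to confirm the connection works.
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}
```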

Updated architecture of the data flow service

Valid from Pega Version 8.4

Benefit from improvements to data flow architecture that increase the stability of data flow runs and minimize the need to manually restart data flow jobs. Real-time data flows now use improved node rebalancing for better handling of failed or restarted nodes. If the topology changes, batch data flows no longer attempt to pause and resume the run. As a result, there are fewer interactions with the database and between the nodes, which increases the resilience of the Data Flow service.

If you are upgrading from a previous version of Pega Platform™, see Changes to the architecture of the Data Flow service for an overview of the changes to the Data Flow service compared to previous versions.

Changes to the architecture of the Data Flow service

Valid from Pega Version 8.4

In Pega Platform™ 8.4, the architecture of batch and real-time data flows uses improved node handling to increase the stability of data flow runs. As a result, there are fewer interactions with the database and between the nodes, which increases the resilience of the Data Flow service.

If you upgrade from a previous version of Pega Platform, see the following list for an overview of the changes in the behavior of the Data Flow service compared to previous versions:

Responsiveness

Nodes no longer communicate with and trigger each other; instead, they run periodic tasks. As such, triggering a new run does not cause the service nodes to start the run immediately. Instead, the run starts a few seconds later. The same applies to user actions such as stopping, starting, and updating the run. The system also processes topology changes as periodic tasks, so it might take a few minutes for new nodes to join runs, or for partitions to redistribute when a node leaves a run.
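
As a conceptual illustration only (not Pega internals), the sketch below shows the polling model this implies: a node discovers new runs, user actions, and topology changes on a fixed schedule, so changes take effect on the next tick rather than immediately. The class and method names are hypothetical.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Conceptual illustration only: a node that discovers new or changed runs by
// polling on a schedule instead of reacting to messages from other nodes.
// The poll interval and method names are illustrative, not Pega internals.
public class PeriodicRunPoller {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // A run triggered between ticks is picked up a few seconds later,
        // on the next scheduled pass, rather than immediately.
        scheduler.scheduleAtFixedRate(this::pollForWork, 0, 5, TimeUnit.SECONDS);
    }

    private void pollForWork() {
        // Placeholder: check for new runs, user actions (start, stop, update),
        // and topology changes, then adjust this node's partition assignments.
    }
}
```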

Updates to lifecycle actions

To make lifecycle actions more intuitive, the Stop action consolidates both the Stop and Pause actions. The Start action consolidates both the Resume and Start actions.

You can resume or restart stopped and failed runs with the Start and Restart actions. The Start action is available only for resumable runs and continues the run from where it stopped. The Restart action processes the run from the beginning. Completed runs can only be restarted. If a run completes with failures, you can restart it from the beginning, or process only the errors by using the Reprocess failures action.
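
The following sketch summarizes the consolidated action model described above as a simple status-to-action mapping. It is a conceptual illustration that uses the statuses and actions named in the text; the types themselves are hypothetical, not platform code.

```java
import java.util.EnumSet;
import java.util.Set;

// Conceptual sketch of the consolidated lifecycle actions described above.
// The statuses and rules mirror the text; the types themselves are illustrative.
public class RunLifecycle {
    enum Status { IN_PROGRESS, STOPPED, FAILED, COMPLETED, COMPLETED_WITH_FAILURES }
    enum Action { START, STOP, RESTART, REPROCESS_FAILURES }

    static Set<Action> availableActions(Status status, boolean resumable) {
        switch (status) {
            case IN_PROGRESS:
                return EnumSet.of(Action.STOP);                     // Stop also covers the old Pause
            case STOPPED:
            case FAILED:
                return resumable
                        ? EnumSet.of(Action.START, Action.RESTART)  // Start resumes where the run stopped
                        : EnumSet.of(Action.RESTART);               // Restart processes from the beginning
            case COMPLETED:
                return EnumSet.of(Action.RESTART);                  // completed runs can only be restarted
            case COMPLETED_WITH_FAILURES:
                return EnumSet.of(Action.RESTART, Action.REPROCESS_FAILURES);
            default:
                return EnumSet.noneOf(Action.class);
        }
    }
}
```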

Starting a run

New data flow runs have the Initializing status, and start automatically. You no longer need to manually start a new run, so the New status is now removed.

If there are no nodes available to process a run, the run gets the Queued status and waits for an available node.

Triggering pre- and post-activities

The system now triggers pre-activities on a random service node, rather than on the node that triggered the run.

The system triggers post-activities only for runs that complete, fail, or complete with failures. If you manually stop a run with the Stop action, the post-activity does not trigger. However, restarting the run with the Restart action triggers the post-activity first, and then the pre-activity.

You can no longer choose to run pre- and post-activities on all nodes.

Selecting a node fail policy

For resumable runs, you can no longer select a node fail policy. If a node fails, the partitions assigned to that node automatically continue the run on different nodes.

For non-resumable runs, you can choose to restart the partitions assigned to the failed node on different nodes, or to fail the partitions assigned to the failed node.

No service nodes and active runs

If the last data flow node for an in-progress run fails, the run remains in the In Progress state, even if no processing takes place. This behavior results from the fact that data flow architecture now prevents unrelated nodes from affecting runs.

Support for predictive models in PMML version 4.4

Valid from Pega Version 8.5

Pega Platform™ now supports the import of predictive models in Predictive Model Markup Language (PMML) version 4.4. With this feature, you can import PMML models that use the anomaly detection algorithm.

For a list of all supported PMML models, see Supported models for import.

 

Limits on active data flow runs

Valid from Pega Version 8.5

You can now configure a maximum number of concurrent active data flow runs for a node type. Set limits to ensure that you do not run out of system resources and that you have a reasonable processing throughput. If a limit is reached, the system queues subsequent runs and starts them, oldest first, as active runs stop or finish.

For more information, see Limit the number of active runs in data flow services (8.5).
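
Conceptually, the limit behaves like a bounded pool of run slots, as in the sketch below. This is an illustrative model only, not the Data Flow service's implementation: runs beyond the configured maximum wait in a queue and start, oldest first, as active runs stop or finish.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative model of the limit described above: runs beyond the configured
// maximum are queued and started, oldest first, as active runs stop or finish.
// This is a conceptual sketch, not the Data Flow service's implementation.
public class RunLimiter {
    private final int maxActiveRuns;
    private final Queue<String> queuedRuns = new ArrayDeque<>();
    private int activeRuns;

    public RunLimiter(int maxActiveRuns) {
        this.maxActiveRuns = maxActiveRuns;
    }

    public synchronized void submit(String runId) {
        if (activeRuns < maxActiveRuns) {
            activeRuns++;
            System.out.println("Starting run " + runId);
        } else {
            queuedRuns.add(runId);                     // run waits with the Queued status
            System.out.println("Queuing run " + runId);
        }
    }

    public synchronized void onRunFinished() {
        activeRuns--;
        String next = queuedRuns.poll();               // oldest queued run starts first
        if (next != null) {
            activeRuns++;
            System.out.println("Starting queued run " + next);
        }
    }
}
```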

Upgrade impact

If you have many data flow runs active at the same time, you might notice that some of the runs are queued and waiting to be executed.

What steps are required to update the application to be compatible with this change?

You do not have to take any action. After the active runs stop or finish, the queued runs start automatically. The default limits are intended to protect your system resources, and you should not see a negative impact on the processing of data flows. However, if you want to allow more data flow runs to be active at the same time, you can change the limits. For more information, see Limiting active data flow runs.

Support for Apache HBase 2.1 and Hadoop 3.0

Valid from Pega Version 8.5

Support for these versions extends Pega Platform™ compatibility with HBase and Hadoop releases to ensure that your database implementations integrate seamlessly with Pega Platform.

Pega Platform now supports:

  • Apache HBase 2.1 for the HBase data set
  • Apache Hadoop Distributed File System (HDFS) 3.0 for the HDFS data set

For more information, see Enhance your data sets with Apache HBase 2.1 and Hadoop 3.0 (8.5).
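
For context, the snippet below shows a minimal read against an HBase 2.1 cluster using the standard Apache HBase Java client. The table, column family, and ZooKeeper quorum are placeholders; in Pega Platform, the equivalent access is configured through an HBase data set rather than written by hand.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Minimal read against an HBase 2.1 cluster using the standard Java client.
// Table, column family, and ZooKeeper quorum are placeholders; in Pega Platform
// the equivalent access is configured through an HBase data set.
public class HBaseReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zookeeper.example.com");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("customers"))) {
            Result result = table.get(new Get(Bytes.toBytes("customer-001")));
            byte[] value = result.getValue(Bytes.toBytes("profile"), Bytes.toBytes("email"));
            System.out.println(value == null ? "not found" : Bytes.toString(value));
        }
    }
}
```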

Enhancing your revision management process with Deployment Manager pipelines

Valid from Pega Version 8.5

Pega Platform 8.5 offers improved synergy between revision management and the automated deployment process provided by Pega's Deployment Manager 4.8 pipelines. Use Deployment Manager 4.8 to increase the efficiency of business-as-usual application changes and automate the deployment of revision packages.

For more information, see Managing the business-as-usual changes.

Support for Cloud AutoML topic detection models

Valid from Pega Version 8.5

In Prediction Studio, you can now connect to topic detection models that you create in Cloud AutoML, Google's cloud-based machine learning service. You can then use the models to categorize and route messages from your customers.

For more information, see Broaden your selection of topic detection models by connecting to third-party services (8.5).

Control group configuration for predictions

Valid from Pega Version 8.5

You can now configure a control group for your predictions in Prediction Studio. Based on the control group, Prediction Studio calculates a lift score for each prediction that you can later use to monitor the success rate of your predictions.

For more information, see Customizing predictions.
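
As background, lift is commonly defined as the relative improvement of the prediction-driven group over the control group. The example below uses that generic definition for illustration; it is an assumption about the concept, not a statement of the exact formula that Prediction Studio applies.

```java
// Standard lift calculation for illustration: how much better the prediction-driven
// group performs compared with the control group. This is the generic definition,
// not necessarily the exact formula that Prediction Studio applies.
public class LiftExample {
    static double lift(double testResponseRate, double controlResponseRate) {
        return (testResponseRate - controlResponseRate) / controlResponseRate;
    }

    public static void main(String[] args) {
        // Example: 6% acceptance with the prediction vs. 4% in the control group
        // gives a lift of 0.5, that is, a 50% improvement over the control group.
        System.out.printf("Lift: %.0f%%%n", lift(0.06, 0.04) * 100);
    }
}
```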

Response timeout configuration for predictions

Valid from Pega Version 8.5

You can now set a response timeout for your predictions in Prediction Studio. By setting a response timeout, you control how Prediction Studio registers customer responses that later serve as feedback data for your predictions.

For more information, see Customizing predictions.
