
Published Release Notes

This documentation is for non-current versions of Pega Platform. For current release notes, go here.

Use repositories as sources for File data sets

Valid from Pega Version 8.1

You can configure remote repositories, such as Amazon S3 or JFrog Artifactory, or a local repository, as data sources for File data sets. By referencing an external repository from a File data set, you enable a parallel load from multiple CSV or JSON files, which removes the need for a relational database for transferring data to Pega Platform™ in the cloud.

For more information, see Creating a File data set record for files on repositories and Configuring a remote repository as a source for a File data set.
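
The parallel-load idea can be pictured outside of Pega with a short sketch. The following Python example reads several CSV files from an Amazon S3 bucket concurrently; the bucket name, the prefix, and the boto3-based approach are illustrative assumptions, not a description of how File data sets are implemented.

```python
# Conceptual sketch only: a parallel load of CSV files from an S3 "repository",
# similar in spirit to what a repository-backed File data set enables.
import csv
import io
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-bucket"   # placeholder bucket name
PREFIX = "exports/customers/"    # placeholder folder of CSV files

def load_csv(key):
    """Download one CSV object and parse its rows into dictionaries."""
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(body)))

# List the CSV files under the prefix.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
keys = [obj["Key"] for obj in listing.get("Contents", []) if obj["Key"].endswith(".csv")]

# Read the files in parallel and flatten the rows into one record list.
with ThreadPoolExecutor(max_workers=8) as pool:
    records = [row for rows in pool.map(load_csv, keys) for row in rows]

print(f"Loaded {len(records)} records from {len(keys)} files")
```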

Define a taxonomy by using the Prediction Studio interface

Valid from Pega Version 8.1

Create a topic hierarchy and define keywords for each topic in Prediction Studio faster and more intuitively than by editing a CSV file. If you have already defined a taxonomy in a CSV file, you can import that file and modify existing topics and keywords by using the Prediction Studio interface.

For more information, see Creating keyword-based topics for discovering keywords and Tutorial: Configuring a topic detection model for discovering keywords.
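
If you start from a CSV file, the general shape of a keyword-based taxonomy can be sketched as follows. The column names and sample rows are hypothetical and are not the exact import format that Prediction Studio expects; see the topics linked above for the supported format.

```python
# Illustrative sketch only: a topic hierarchy with keywords attached to each
# topic, parsed from a hypothetical CSV layout.
import csv
import io
from collections import defaultdict

sample_csv = """Parent topic,Topic,Keywords
,Billing,invoice;payment;charge
Billing,Refunds,refund;chargeback
,Shipping,delivery;tracking;courier
"""

taxonomy = defaultdict(dict)
for row in csv.DictReader(io.StringIO(sample_csv)):
    parent = row["Parent topic"].strip()
    taxonomy[parent][row["Topic"]] = row["Keywords"].split(";")

for parent, topics in taxonomy.items():
    print(parent or "(top level)")
    for topic, keywords in topics.items():
        print(f"  {topic}: {', '.join(keywords)}")
```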

Improved performance of decision strategies

Valid from Pega Version 8.1

Strategy rule performance has been improved through the implementation of a new engine. You can perform single and batch test runs to analyze strategy performance, locate and prevent potential issues, and optimize strategy components. Test runs now support data sets and data flows with multiple key properties. The redesigned Test run panel improves the display of information and highlights the most immediately relevant details.

For more information, see Configuring single case runs and Configuring batch case runs.

Extract summaries from the analyzed text

Valid from Pega Version 8.1

You can now configure a Text Analyzer rule to extract information-rich blocks of text from the analyzed content and combine them into a comprehensive and coherent summary. By summarizing large documents, such as emails, you can facilitate making business decisions without having to read an entire document. In Text Analyzer rules, you can combine summarization with other types of text analysis, such as topic or entity detection, to extract the full context from a message.

For more information, see Configuring text extraction analysis and Tutorial: Extracting email context with Text Analyzer rules.
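
As a rough intuition for extractive summarization, the sketch below scores sentences by word frequency and keeps the highest-scoring ones in their original order. This is a generic illustration, not the algorithm that the Text Analyzer rule uses.

```python
# Generic extractive-summarization sketch: pick information-rich sentences
# and join them into a short summary.
import re
from collections import Counter

def summarize(text, max_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        # Total word frequency, normalized by length so long sentences
        # do not automatically win.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Keep the selected sentences in their original order for readability.
    return " ".join(s for s in sentences if s in top)

email = (
    "The quarterly report is attached. Revenue grew five percent. "
    "Please review the revenue figures before Friday. "
    "Lunch options for the offsite are still being discussed."
)
print(summarize(email))
```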

Additional adaptive model predictors based on Interaction History

Valid from Pega Version 8.1

Customer interactions are now automatically used in adaptive models to predict future customer decisions. For example, a phone purchase registered in Interaction History allows an adaptive model to predict that a customer is more likely to accept supplementary coverage for a new device. Such interactions, collected in a predefined Interaction History summary, are applied as an additional set of predictors in an adaptive model.

The aggregated Interaction History summary predictors are enabled by default for every adaptive model configuration.

For more information, see Enabling Interaction History predictors for existing adaptive models.
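
Conceptually, summary predictors are aggregates computed over past interactions. The sketch below shows one hypothetical aggregation that counts outcomes per customer and channel; the column names and grouping are assumptions for illustration and do not describe the predefined Interaction History summary.

```python
# Illustrative sketch only: turning raw interaction records into summary
# features that could serve as additional predictors.
import pandas as pd

interactions = pd.DataFrame(
    {
        "customer_id": ["C1", "C1", "C2", "C2", "C2"],
        "channel": ["Web", "CallCenter", "Web", "Web", "Email"],
        "outcome": ["Accepted", "Rejected", "Accepted", "Accepted", "Rejected"],
    }
)

# Count accepts and rejects per customer and channel; each count becomes a
# candidate predictor value.
summary = (
    interactions.groupby(["customer_id", "channel"])["outcome"]
    .value_counts()
    .unstack(fill_value=0)
    .reset_index()
)
print(summary)
```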

Update text analytics models instantly through an API

Valid from Pega Version 8.1

Use the pxUpdateModels API to automatically retrain text analytics models for which you gathered feedback as a result of the pxCaptureTAFeedback activity. The pxUpdateModels API provides an option to update the model with the latest feedback without having to open Prediction Studio. Instead, you can use the activity from your application, for example, through a button control.

For more information, see Feedback loop for text analytics.

Managed real-time data flow runs

Valid from Pega Version 8.1

Pega Platform™ now fully manages the life cycle of real-time data flow runs, which helps you save time and reduce maintenance effort. You no longer need to re-create the runs in every environment or manually pause and restart them after every modification. The application manages such runs by seamlessly applying your changes and keeping the runs active until they encounter a specified number of errors or until you exclude the runs from the application.

For more information, see Tutorial: Using managed data flow runs and Creating a real-time run for data flows.

Support for predictive models in PMML version 4.4

Valid from Pega Version 8.5

Pega Platform™ now supports the import of predictive models in Predictive Model Markup Language (PMML) version 4.4. With this feature, you can import PMML models that use the anomaly detection algorithm.

For a list of all supported PMML models, see Supported models for import.

Limits on active data flow runs

Valid from Pega Version 8.5

You can now configure the maximum number of concurrently active data flow runs for a node type. Set limits to ensure that you do not run out of system resources and that you maintain reasonable processing throughput. If a limit is reached, the system queues subsequent runs and starts them, oldest first, as active runs stop or finish.

For more information, see Limit the number of active runs in data flow services (8.5).
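
The queuing behavior can be modeled with a small sketch: at most a configured number of runs is active at once, and the oldest queued run starts whenever an active run stops or finishes. The class, the names, and the limit value below are illustrative, not Pega internals.

```python
# Conceptual model of limit-based queuing of data flow runs.
from collections import deque

class RunScheduler:
    def __init__(self, max_active):
        self.max_active = max_active
        self.active = set()
        self.queued = deque()

    def submit(self, run_id):
        if len(self.active) < self.max_active:
            self.active.add(run_id)
        else:
            self.queued.append(run_id)  # wait until capacity frees up

    def finish(self, run_id):
        self.active.discard(run_id)
        if self.queued:
            self.active.add(self.queued.popleft())  # start the oldest queued run

scheduler = RunScheduler(max_active=2)
for run in ["run-A", "run-B", "run-C"]:
    scheduler.submit(run)
print(scheduler.active, list(scheduler.queued))  # run-A and run-B active, run-C queued
scheduler.finish("run-A")
print(scheduler.active, list(scheduler.queued))  # run-C started, queue empty
```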

Upgrade impact

If you have many data flow runs active at the same time, you might notice that some of the runs are queued and waiting to be executed.

What steps are required to update the application to be compatible with this change?

You do not have to take any action. After the active runs stop or finish, the queued runs start automatically. The default limits are intended to protect your system resources, and you should not see a negative impact on data flow processing. However, if you want to allow more data flow runs to be active at the same time, you can change the limits. For more information, see Limiting active data flow runs.

Support for Apache HBase 2.1 and Hadoop 3.0

Valid from Pega Version 8.5

Support for these versions extends Pega Platform™ compatibility with newer Apache HBase and Hadoop releases so that your HBase and HDFS implementations integrate seamlessly with Pega Platform.

Pega Platform now supports:

  • Apache HBase 2.1 for the HBase data set
  • Apache Hadoop Distributed File System (HDFS) 3.0 for the HDFS data set

For more information, see Enhance your data sets with Apache HBase 2.1 and Hadoop 3.0 (8.5).
