Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

INC-218909 · Issue 715281

Override added to delete records for a stream dataset after processing

Resolved in Pega Version 8.8

Kafka data was accumulating for a Stream data set due to a huge volume of inbound calls. This has been resolved by adding support to override pyDeletedProcessed through a DASS in order to remove the records for a particular stream data set (topic) as soon as they are processed by Pega.
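
What such cleanup amounts to at the Kafka level can be illustrated with the generic Kafka AdminClient API; the sketch below is an assumption about the mechanism, not Pega's implementation.

```java
// Generic Kafka AdminClient sketch (not Pega's code; the topic/partition handling
// here is an assumption): delete records up to the last processed offset so the
// topic backing a stream data set does not keep accumulating data.
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class StreamTopicCleanup {
    public static void deleteUpTo(String bootstrapServers, String topic,
                                  int partition, long lastProcessedOffset) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        try (AdminClient admin = AdminClient.create(props)) {
            // Remove everything on this partition up to and including the processed offset.
            Map<TopicPartition, RecordsToDelete> request = Map.of(
                    new TopicPartition(topic, partition),
                    RecordsToDelete.beforeOffset(lastProcessedOffset + 1));
            admin.deleteRecords(request).all().get();
        }
    }
}
```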

INC-220622 · Issue 680790

Deprecated Libraries

Resolved in Pega Version 8.8

The following library dependencies have been deprecated, excluded, and/or removed:
- ant
- bsh
- commons-compress
- gson
- htmlunit
- io.netty
- jackson-mapper-asl
- jdom
- jdom2
- jdom-legacy
- jetty-http
- jetty-io
- jetty-server
- jetty-util
- junrar
- logback-core
- netty-handler
- plexus
- plexus-utils
- xercesImpl
- xstream

INC-220622 · Issue 695948

Updated Libraries

Resolved in Pega Version 8.8

The following libraries have been updated to the most recent version:
- commons-collections
- cxf-rt-rs-security-oauth2
- derby
- dom4j
- esapi
- google-oauth-client
- groovy
- h2database
- jackson-databind
- java-sdk-s3
- Jcommander
- json-smart
- nekohtml
- netty-handler
- postgres
- snakeyaml
- spring
- spring-core
- underscore
- woodstox
- xmlsec

INC-220642 · Issue 717032

Updated context handling for executing data transforms in dataflows

Resolved in Pega Version 8.8

When a data transform that performed a sort operation on a page list was invoked through a data flow, a java.lang.UnsupportedOperationException was generated on the sort step even though the data was correct. Investigation showed that data flows were using a different execution context than the one used for regular activity execution. An update has been made to ensure the correct context is used when executing data transforms in data flows.
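
For background on the exception class involved, the minimal Java example below (unrelated to Pega internals) shows how a sort on a read-only list view fails with java.lang.UnsupportedOperationException even though the data itself is valid, which is the kind of failure a wrong execution context can surface.

```java
// Generic Java illustration (not Pega engine code): a sort on a read-only list view
// fails with UnsupportedOperationException even though the data itself is fine.
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SortContextDemo {
    public static void main(String[] args) {
        List<String> readOnly = Collections.unmodifiableList(Arrays.asList("b", "a", "c"));
        try {
            readOnly.sort(String::compareTo); // throws UnsupportedOperationException
        } catch (UnsupportedOperationException e) {
            System.out.println("Sort rejected: " + e);
        }
    }
}
```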

INC-220893 · Issue 728488

Errors persisted for single case runs using a custom error handler

Resolved in Pega Version 8.8

Single case processing (SingleAccountData) had failed records when opening the SubscriptionData dataset, but it was not possible to see the failed record details for further investigation. Investigation showed that when running a single case run with a custom error handler, errors were persisted as part of the dataflow metrics, but not written to the database. To resolve this, an update has been made to ensure errors are persisted for single case runs configured using a custom error handler.

INC-222561 · Issue 721041

Check added for destination type for distribution test reports

Resolved in Pega Version 8.8

When the system contained two output destinations with the same name, one of type VBD and the other of type Database Table, an incorrect class was set for distribution test reports and an error was generated when trying to open the report. Investigation showed the system was only checking the name of the destination and not its type; this has been resolved by adding a pzSetSimulationOutputClass data transform that checks the destination type in addition to the destination name when setting the class for reports.
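
The underlying idea can be sketched as follows; the names here are hypothetical and this is not the actual pzSetSimulationOutputClass logic.

```java
// Hypothetical sketch: once two destinations can share a name, the lookup key
// must include the type (e.g. "VBD" or "Database Table") as well as the name.
import java.util.Map;

public class DestinationLookup {
    record DestinationKey(String name, String type) {}

    private final Map<DestinationKey, String> reportClassByDestination;

    DestinationLookup(Map<DestinationKey, String> reportClassByDestination) {
        this.reportClassByDestination = reportClassByDestination;
    }

    String reportClassFor(String name, String type) {
        // Keying on (name, type) avoids resolving to the wrong destination's class.
        return reportClassByDestination.get(new DestinationKey(name, type));
    }
}
```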

INC-225257 · Issue 730293

Pega DateTime supports pre-1970 timestamps

Resolved in Pega Version 8.8

The function getTimeStampAsDateStamp in the Pega DateTime class was not working correctly for dates before January 1st, 1970. Support has now been added for these dates.
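
As background, pre-1970 instants are simply negative offsets from the Unix epoch; the generic java.time snippet below (not the Pega implementation) shows such a conversion.

```java
// Generic java.time illustration: instants before Jan 1st, 1970 are negative
// offsets from the Unix epoch and still convert cleanly to a date stamp.
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class Pre1970Demo {
    public static void main(String[] args) {
        Instant before = LocalDate.of(1969, 7, 20).atStartOfDay(ZoneOffset.UTC).toInstant();
        System.out.println(before.toEpochMilli());                       // negative (pre-epoch) value
        System.out.println(LocalDate.ofInstant(before, ZoneOffset.UTC)); // 1969-07-20
    }
}
```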

INC-228430 · Issue 738822

Enhanced diagnostics for NLP

Resolved in Pega Version 8.8

To enhance diagnostics for issues related to natural language processing (NLP) and Apache Rule-based Text Annotation (RUTA), support has been added for runtime metrics that allow benchmarking and debugging test scenarios over a set amount of time.

NLP diagnostics have been introduced which capture the text analyzed for all incomplete executions in the Data-NLP-Report-Summary table. Every text analyzer execution inserts a record into the summary table with the text and text analyzer details, and the row is deleted at the end of the execution. After a system crash, texts with incomplete executions remain in the table to help identify any special patterns that could cause issues. This summary record insert/delete occurs only if the DSS PEGA-NLP!ENABLENLPDIAGNOSTIC is set to false; the DSS currently ships with the value true. To enable or disable it, change the value of the DSS and then re-validate and save for the change to take effect: go to Configure > System > Release > Upgrade > Validate, click "Re-validate and save", set Type to Rule-NLP-PredictiveModel, set Ruleset to the application ruleset in which the text analyzers are created, click List, then click Run.

Additional support:
- The activity pygetdiagnosticnlpsummaryrecords has been added to retrieve the summary diagnostic records present in the summary table whose NLP analysis has not been completed.
- The pyDumpNLPProcessTimersInRepository activity has been created to dump process timers to a repository (a rough sketch of the idea follows below). It must be called from a job scheduler configured to run on background processing nodes, and is run on demand to analyze the performance (time taken) of RUTA execution. Before running this job scheduler, configure the repository in Prediction Studio settings so the CSV is dumped to that repository. As soon as one CSV has been dumped, the job scheduler can be disabled.
- Debug logs have been enabled for com.pega.nlp.ner.command.RutaCommand (System > Operations > Log level settings), which print the start and end of each RUTA entity type execution.
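
A rough sketch of what a process-timer dump of this kind involves is shown below; the class and CSV layout are hypothetical, not the actual pyDumpNLPProcessTimersInRepository implementation.

```java
// Hypothetical process-timer accumulator: times named phases (e.g. RUTA execution)
// and dumps the totals as CSV so they can be pushed to a repository.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Supplier;
import java.util.stream.Collectors;

public class ProcessTimers {
    private final Map<String, LongAdder> totalNanos = new ConcurrentHashMap<>();

    public <T> T time(String phase, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            totalNanos.computeIfAbsent(phase, k -> new LongAdder())
                      .add(System.nanoTime() - start);
        }
    }

    public void dumpCsv(Path target) throws IOException {
        String csv = totalNanos.entrySet().stream()
                .map(e -> e.getKey() + "," + e.getValue().sum() / 1_000_000)
                .collect(Collectors.joining("\n", "phase,totalMillis\n", "\n"));
        Files.writeString(target, csv);
    }
}
```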

INC-228430 · Issue 744988

RUTA handling improved in Prediction Studio

Resolved in Pega Version 8.8

Out of memory errors were seen when using natural language processing (NLP). Investigation showed that certain Apache Rule-based Text Annotation (RUTA) scripts contained disjunctive rules that could not handle texts containing base64 characters, introduced into emails via attachments, images, logos, and the like, which caused excessive system load. This has been resolved by modifying the RUTA handling in the Prediction Studio settings to better manage this scenario.
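
The note does not detail the RUTA change itself; as a purely illustrative mitigation, and an assumption rather than what Prediction Studio actually does, long base64-like runs can be stripped from email bodies before rule-based analysis:

```java
// Purely illustrative mitigation (an assumption, not the actual Prediction Studio change):
// strip long base64-like runs, typical of inline attachments/images/logos, before
// handing email text to rule-based analysis.
import java.util.regex.Pattern;

public class Base64Stripper {
    // 200+ consecutive base64-alphabet characters are almost certainly encoded binary content.
    private static final Pattern BASE64_RUN = Pattern.compile("[A-Za-z0-9+/=]{200,}");

    public static String stripEncodedBlobs(String emailBody) {
        return BASE64_RUN.matcher(emailBody).replaceAll(" ");
    }
}
```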

INC-228935 · Issue 731524

Writes to event store data set made optional in Delayed learning flow

Resolved in Pega Version 8.8

In order to prevent the Cassandra event store from accumulating excessive tombstones, an option has been added on the event store landing page to disable writes to the event store data set.
