
Resolved Issues

View the resolved issues for a specific Platform release.


Pega Platform Resolved Issues for 8.1 and newer are now available on the Support Center.

SR-D80668 · Issue 543866

Performance improvement for queries on Kafka partitions

Resolved in Pega Version 8.3.2

Even though multiple dataflow nodes were available in the cluster, all requests were going to a single node, causing system slowness. Investigation showed that a queue processor rule included the pxPartitionKey attribute, which forced Kafka producers to send all records to a single partition. This attribute has been removed.
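
As a rough illustration of the underlying Kafka behavior (not Pega's internal queue processor code): when every record carries the same partition key, the default partitioner hashes that key identically, so all records land in one partition and only one node does the work; with no key, the producer spreads records across partitions. The broker address and topic name below are hypothetical.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PartitionKeyDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // With a constant key, the default partitioner hashes the same key
            // for every record, so everything lands in a single partition.
            producer.send(new ProducerRecord<>("queue-items", "fixed-partition-key", "item-1"));
            producer.send(new ProducerRecord<>("queue-items", "fixed-partition-key", "item-2"));

            // With a null key, records are distributed across partitions,
            // so consuming nodes share the load.
            producer.send(new ProducerRecord<>("queue-items", null, "item-3"));
            producer.send(new ProducerRecord<>("queue-items", null, "item-4"));
        }
    }
}
```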

SR-D81532 · Issue 544429

Corrected case-mismatch when using Top Ranked in a subreport

Resolved in Pega Version 8.3.2

Incorrect SQL generation was seen when combining UNION ALL (over more than one distinct table) with Rank in a subreport. The subreport used ranking logic to pull the records with the greatest "pxUpdateDateTime", but when the main report definition rule was executed, Oracle responded with the error "There was a problem getting a list." Investigation showed that the generated query could not match the columns because the alias was given as pxUpdateDateTimeR1 in one place and PXUPDATEDATETIMER1 in another. This only happened when Display Top Ranked was selected in the subreport, and was due to Oracle treating the column aliases as case-sensitive. This has been resolved by updating the system to use the correct column alias for rank in a subreport.
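
A minimal sketch of the alias mismatch, assuming the generated SQL quotes the alias (the table and column names here are illustrative, not the actual generated query): Oracle folds unquoted identifiers to upper case but treats quoted identifiers as case-sensitive, so a quoted mixed-case alias defined in the subquery cannot be referenced with a different case in the outer query.

```java
public class AliasCaseDemo {
    // Fails on Oracle: the subquery defines the quoted alias "pxUpdateDateTimeR1",
    // but the outer query references "PXUPDATEDATETIMER1". Quoted identifiers are
    // case-sensitive in Oracle, so the two names never match.
    static final String MISMATCHED =
        "SELECT sub.\"PXUPDATEDATETIMER1\" "
      + "FROM (SELECT MAX(pxUpdateDateTime) AS \"pxUpdateDateTimeR1\" FROM work_table) sub";

    // Works: the outer reference repeats the alias with identical quoting and case.
    static final String CONSISTENT =
        "SELECT sub.\"pxUpdateDateTimeR1\" "
      + "FROM (SELECT MAX(pxUpdateDateTime) AS \"pxUpdateDateTimeR1\" FROM work_table) sub";
}
```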

SR-D89304 · Issue 519827

ClusterAndDBCleaner repaired

Resolved in Pega Version 8.3.2

An error in the logic order of events caused a compilation error in pyClusterAndDBCleaner, so old records created by the system/node utilities were not automatically removed as expected. This has been corrected.

SR-D39972 · Issue 513459

UpgradeOnOpen updated to use property set

Resolved in Pega Version 8.3.2

After upgrade, using the revalidate & save wizard on MapValue rules (Rule-Obj-MapValue) generated null pointer exceptions in the tracer file and the rules failed with bad status. This was traced to changes made in the Java step of UpgradeOnOpen that used the getReference() method, and has been resolved by updating the UpgradeOnOpen activity in the Rule-Obj-MapValue class to use property set instead.

SR-D41207 · Issue 512086

Failover strategy added to chat routing to keep event processor running

Resolved in Pega Version 8.3.2

Chats were becoming stuck in the queue and end users were not able to connect with a customer service representative. An excessive number of queued items were observed in a Queue Processor named "EventProcessor". This was traced to the offset used by the "Browse from the offset" setting having been removed due to the retention policy, which resulted in "Browse from the end of the stream" being used instead even though browsing should start from the earliest known offset. To resolve this, the Stream producer is now cached based on topic, and the Stream consumer falls back to an "earliest" strategy when the requested offset is not found, so the event queue is handled in a timely manner.
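
A plain kafka-clients sketch of the "fall back to earliest" idea, not Pega's internal Stream service: setting auto.offset.reset to "earliest" makes a consumer restart from the oldest retained record when its requested offset has already been deleted by the retention policy, instead of jumping to the end of the stream and skipping the backlog. The broker, group, and topic names are hypothetical.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EarliestFallbackConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "event-processor");          // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // If the requested offset no longer exists (removed by retention),
        // "earliest" resumes from the oldest retained record; the default
        // "latest" would silently skip everything already queued.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("chat-events")); // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```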

SR-D42451 · Issue 518066

ExecuteRDB call updated to use NativeSQL for blob

Resolved in Pega Version 8.3.2

After creating a test activity that used the DataSet-Execute method to clear data set records, passing the data set name and the truncate operation, only 51 records were deleted in a single run even though the data set had more than 51 records. Investigation showed that for blob tables, the database truncate operation used executeRDB with an empty results page, i.e. it did not specify pyMaxRecords, which on some databases might have limited the number of affected records. To resolve this, the executeRDB call in the database truncate operation has been modified to use NativeSQL for blob tables.
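
A plain-JDBC analogy (not Pega's executeRDB or NativeSQL API) of why the change matters: issuing the delete as a single native SQL statement removes every row, whereas deleting only the rows returned through a size-limited result page leaves the remainder behind, matching the behavior of only a partial set of records being removed per run. The table and id column below are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TruncateSketch {
    // One native SQL statement removes every row, independent of any
    // result-page size the caller might have configured.
    static int truncateNative(Connection conn, String table) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            return stmt.executeUpdate("DELETE FROM " + table);
        }
    }

    // Deleting only the rows returned by a size-limited query leaves the rest
    // behind, which is the partial-delete behavior described above.
    static int truncateViaLimitedPage(Connection conn, String table, int pageSize) throws SQLException {
        int deleted = 0;
        try (PreparedStatement select = conn.prepareStatement("SELECT id FROM " + table)) {
            select.setMaxRows(pageSize); // analogous to a small or unset results page
            try (ResultSet rs = select.executeQuery();
                 PreparedStatement delete = conn.prepareStatement("DELETE FROM " + table + " WHERE id = ?")) {
                while (rs.next()) {
                    delete.setLong(1, rs.getLong(1));
                    deleted += delete.executeUpdate();
                }
            }
        }
        return deleted;
    }
}
```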

SR-D43912 · Issue 509736

Failover strategy added to chat routing to keep event processor running

Resolved in Pega Version 8.3.2

Chats were becoming stuck in the queue and end users were not able to connect with a customer service representative. An excessive number of queued items were observed in a Queue Processor named "EventProcessor". This was traced to the offset used by the "Browse from the offset" setting having been removed due to the retention policy, which resulted in "Browse from the end of the stream" being used instead even though browsing should start from the earliest known offset. To resolve this, the Stream producer is now cached based on topic, and the Stream consumer falls back to an "earliest" strategy when the requested offset is not found, so the event queue is handled in a timely manner.

SR-D45608 · Issue 519900

Correct service instance name passed for data flow in DSMStatus

Resolved in Pega Version 8.3.2

When using the Connect-HTTP service "DSMStatus" to provide the node and status information shown on the tabs of the Designer Studio > Decisioning > Infrastructure > Services landing page, passing DataFlow as the service parameter for the HTTP service method returned an empty response instead of the expected cluster details for the Dataflow node type. This was traced to the service instance name not being parsed correctly for Data Flow services, and has been resolved by ensuring the correct service instance name is passed for this use.

SR-D47618 · Issue 512601

Statistic rounding error in ADMSnapshot Agent with Oracle corrected

Resolved in Pega Version 8.3.2

While running the ADMSnapshot Agent, the exception "internal.mgmt.Executable) ERROR com.pega.decision.adm.client.ADMException: Failed to complete ADM Data Mart snapshot" was seen. This was traced to an issue with the rounding of performance statistics when using Oracle, and has been resolved.

SR-D48010 · Issue 514982

Unit testing validation relaxed for external input strategy

Resolved in Pega Version 8.3.2

When trying to test a strategy, the testing transform had to exist in the same ruleset/version as the strategy or it would not resolve. Investigation showed that because the artifacts were in a different ruleset and version built on top of the application that the strategy under test belongs to, validation failed because it used platform-based ruleset validation. This was a missed use case, and has been resolved by relaxing the validation for external input strategies so that it does not take the ruleset and version into account. The same change has been applied for referenced data transforms.
