SR-D70674 · Issue 535483
Handling added for mobile upload of filename containing dots
Resolved in Pega Version 8.2.6
An issue with uploading a file whose name contains dots (such as 2019.12.12) from the mobile browser has been resolved by updating the propertyExist function in the AttachFile activity.
SR-D73237 · Issue 539125
Pagination updated to resolve repeating Pulse notes
Resolved in Pega Version 8.2.6
After adding a case and entering 15 Pulse notes, scrolling down resulted in the notes being duplicated. Investigation traced this to entering more than 10 posts in a single interaction on a previously empty case while "Progressive pagination" was enabled for the repeating dynamic layout; in a private edit, changing the pagination to "NONE" resolved the issue. Pagination depends on the pzPagingStartDateTime property, which is set on D_pzFeedParams, but because there were no existing results in this scenario the property was never set. To resolve this, an update has been made so that pzPagingStartDateTime will be set if it is empty.
SR-D74246 · Issue 543725
Accessibility improved for Pega Survey Question pages
Resolved in Pega Version 8.2.6
Question Pages in Pega Survey lacked accessibility support. This was traced to the aria-label attribute not being generated for radio buttons and drop-downs, and the "title" attribute not being generated for the other controls used in Questions and Question Pages. This has been resolved.
SR-D76927 · Issue 541422
VirusCheck added to all Pulse uploads
Resolved in Pega Version 8.2.6
The upload file activity has been updated to invoke VirusCheckActivity for all Pulse uploads.
INC-203994 · Issue 698853
DSS added to handle merges with lower versions of Postgres
Resolved in Pega Version 8.7.1
After update, executing a batch campaign with a volume constraint resulted in the second data flow DF_Wait failing with the error message "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used "deletes and inserts". Depending on the version of Postgres, the SQL generated for a merge statement differs: the "INSERT ... ON CONFLICT ... UPDATE" syntax is generated only when the server is Postgres 9.5+ and a PK constraint is defined for the DB table, and otherwise the complex UPSERT statement (old syntax) is generated, as was the case in this issue. This is a known issue in the Postgres server software where it misinterprets the number of columns involved, mistakenly counting them twice; as a result, the actual maximum number of columns allowed is only half of the official limit (1664), and the same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns in the statement. To resolve this, an option has been provided to select between the original "deletes and inserts" logic and the merge-statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates. Setting it to "true" will use the merge statement logic, and setting it to "false" will use deletes and inserts. When the DSS is not defined, the default is "true" and the system will use merge statements in the form preferred by Postgres 9.5+.
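For illustration, a minimal sketch of the two statement shapes described above, assuming a hypothetical table dm_results with primary key subject_id (the table and column names are illustrative, not taken from the product):

    -- Merge form generated for Postgres 9.5+ when a PK constraint exists:
    INSERT INTO dm_results (subject_id, decision, score)
    VALUES ('S-1001', 'accept', 0.92)
    ON CONFLICT (subject_id)
    DO UPDATE SET decision = EXCLUDED.decision, score = EXCLUDED.score;

    -- Equivalent "deletes and inserts" logic used when the DSS is "false":
    DELETE FROM dm_results WHERE subject_id = 'S-1001';
    INSERT INTO dm_results (subject_id, decision, score)
    VALUES ('S-1001', 'accept', 0.92);

The merge form touches each row once and lets the database resolve the conflict, while the deletes-and-inserts form avoids the Postgres column-counting defect because no single statement references the column list twice.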
INC-180246 · Issue 699700
Support for apostrophe added to keyword tokenization
Resolved in Pega Version 8.7.1
A keyword containing an apostrophe (for example, "O'Brien") was not detected properly by the text extraction model. This has been resolved by updating the annotator used in tokenization.
INC-193399 · Issue 688115
DSS added to handle merges with lower versions of Postgres
Resolved in Pega Version 8.7.1
After update, executing a batch campaign with a volume constraint resulted in the second data flow DF_Wait failing with the error message "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used "deletes and inserts". Depending on the version of Postgres, the SQL generated for a merge statement differs: the "INSERT ... ON CONFLICT ... UPDATE" syntax is generated only when the server is Postgres 9.5+ and a PK constraint is defined for the DB table, and otherwise the complex UPSERT statement (old syntax) is generated, as was the case in this issue. This is a known issue in the Postgres server software where it misinterprets the number of columns involved, mistakenly counting them twice; as a result, the actual maximum number of columns allowed is only half of the official limit (1664), and the same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns in the statement. To resolve this, an option has been provided to select between the original "deletes and inserts" logic and the merge-statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates. Setting it to "true" will use the merge statement logic, and setting it to "false" will use deletes and inserts. When the DSS is not defined, the default is "true" and the system will use merge statements in the form preferred by Postgres 9.5+.
INC-193632 · Issue 679172
Cassandra driver metrics exposed for performance troubleshooting
Resolved in Pega Version 8.7.1
By default, Cassandra driver metrics are now enabled to assist with performance troubleshooting. Metrics can be disabled by setting the dnode/disable_driver_metrics prconfig parameter.
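As a sketch, disabling the metrics would use the standard prconfig.xml env-entry format (placement alongside the file's existing entries is assumed):

    <!-- Disable collection of Cassandra driver metrics (enabled by default) -->
    <env name="dnode/disable_driver_metrics" value="true" />

A node restart is typically required for prconfig.xml changes to take effect.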
INC-193847 · Issue 695974
DSS added to allow masking of subjectID in alerts
Resolved in Pega Version 8.7.1
To allow customizing whether a subjectID is included in alerts, a DSS has been added that conditionally masks the subjectID from being logged. To use it, set the "alerts/maskIHsubjectID" DSS in the Pega-DecisionEngine ruleset to true to hide pySubjectID.
INC-194810 · Issue 691884
Removed services check and added warnings for simulations
Resolved in Pega Version 8.7.1
Attempting to run an audience simulation resulted in the error "Running simulations is not possible, because the required services are not available. Contact your system administrator to enable the data flow and real-time data grid services". Investigation showed the @DsmServices.pxHasFunctionalNodes("DataFlow","Batch") function call contained in the 'when' rule pyUnavailableDecisionServices was returning false even when all the nodes were in the cluster and all the DSM services were in NORMAL status. To resolve this, the services check has been disabled; instead, the simulation run will show a warning or fail if a data flow run is queued for more than 30 seconds or if there is an issue querying the underlying metrics storage.