SR-A93286 · Issues 270071, 265355
Data flow shape batch size increased from 1 to 250
Resolved in Pega Version 7.2.2
In order to support large and complex systems, the data flow shape batch size has been increased from 1 to 250.
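Conceptually, records passing through a data flow shape are now handed to the next shape in chunks rather than one at a time. A minimal sketch of that batching pattern, with hypothetical names (an illustration only, not Pega's internal implementation):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    public class BatchingSketch {
        static final int BATCH_SIZE = 250; // previously 1: each record was dispatched individually

        // Collect incoming records into batches and hand each full batch to the next shape.
        static <T> void dispatch(List<T> incoming, Consumer<List<T>> nextShape) {
            List<T> batch = new ArrayList<>(BATCH_SIZE);
            for (T record : incoming) {
                batch.add(record);
                if (batch.size() == BATCH_SIZE) {
                    nextShape.accept(new ArrayList<>(batch)); // one downstream call per 250 records
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                nextShape.accept(batch); // flush the trailing partial batch
            }
        }
    }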
SR-A76054 · Issue 251761
Enhancement added to bulk delete ADM models
Resolved in Pega Version 7.2.2
In order to support the deletion of ADM models in bulk, an enhancement has been added that removes unwanted models from the database when deletion criteria are supplied as input. The new activity is pxDeleteModelsByCriteria, which applies to DSMPublicAPI-ADM; its pzInsKey is RULE-OBJ-ACTIVITY DSMPUBLICAPI-ADM PXDELETEMODELSBYCRITERIA #20160608T102848.043 GMT, and it was created in ruleset Pega-DecisionArchitect:07-10-16. The usage text for the activity is as follows: "Used to delete ADM models in bulk. Models to be deleted are determined by the criteria selected by this activity's parameters; a model is deleted if it matches all selected criteria. Please note that the activity is not constrained by Application, only by the criteria provided as parameters; therefore, in most cases you will want to constrain by 'applies to' class in addition to the other criteria. See the tooltip/description of each parameter for more information on its usage. The integer output parameter 'NumberDeleted' returns the number of models that were successfully deleted."
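The activity itself runs on the Pega platform, but its contract (delete every model matching all supplied criteria and report the count) can be sketched in plain Java. The AdmModel type and criteria below are hypothetical illustrations, not the activity's actual implementation:

    import java.util.List;
    import java.util.function.Predicate;

    public class BulkDeleteSketch {
        // Hypothetical stand-in for an ADM model record; not a Pega class.
        record AdmModel(String appliesToClass, String purpose) {}

        // Delete every model matching ALL provided criteria; return the count,
        // mirroring the activity's integer output parameter 'NumberDeleted'.
        static int deleteModelsByCriteria(List<AdmModel> models, List<Predicate<AdmModel>> criteria) {
            Predicate<AdmModel> matchesAll = criteria.stream().reduce(m -> true, Predicate::and);
            int before = models.size();
            models.removeIf(matchesAll); // models must be a mutable list
            return before - models.size();
        }
    }

As the usage text advises, constraining by 'applies to' class would be expressed here as one more predicate on appliesToClass alongside the other criteria.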
SR-A92631 · Issue 270068
VBD Actuals Query works correctly on WebSphere 8.5
Resolved in Pega Version 7.2.2
Nested JDBC metadata calls were not working on WebSphere 8.5 with DB2 because the nested metadata query caused the initial query's ResultSet to close prematurely. This was traced to unexpected behavior in IBM WebSphere 8.5 when querying database column metadata while iterating over another database metadata ResultSet. It has been resolved by modifying the DSM/VBD code that queries the database for dimension metadata to load the metadata in two stages, avoiding nested metadata queries.
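The two-stage pattern is standard JDBC practice: drain and close the first metadata ResultSet before issuing the per-table queries. A minimal sketch of that approach (not the actual DSM/VBD code):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class TwoStageMetadata {
        static void loadColumnMetadata(Connection conn, String schema) throws SQLException {
            DatabaseMetaData meta = conn.getMetaData();

            // Stage 1: collect all table names first, then close the ResultSet.
            List<String> tables = new ArrayList<>();
            try (ResultSet rs = meta.getTables(null, schema, "%", new String[] {"TABLE"})) {
                while (rs.next()) {
                    tables.add(rs.getString("TABLE_NAME"));
                }
            }

            // Stage 2: only now query column metadata, one table at a time.
            // Calling getColumns() inside the getTables() loop can close the
            // outer ResultSet prematurely on some drivers (seen on WebSphere 8.5/DB2).
            for (String table : tables) {
                try (ResultSet cols = meta.getColumns(null, schema, table, "%")) {
                    while (cols.next()) {
                        System.out.println(table + "." + cols.getString("COLUMN_NAME"));
                    }
                }
            }
        }
    }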
SR-A92952 · Issue 262158
Proposition filter validates all rows on save
Resolved in Pega Version 7.2.2
On changing the name of a strategy component, the proposition filter was validating the change only for the first proposition (the one that had focus) rather than for all propositions. The initial design intentionally skipped validation for proposition rows the user had not viewed, with the goal of saving time on unedited propositions. However, this meant that if the referenced rules were removed, the rule continued to pass validation. To resolve this, the system now force-validates all rows, even those that have not been edited.
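The essence of the fix is to iterate every row rather than only the row with focus, so a removed or renamed component fails validation wherever it is referenced. A generic sketch with hypothetical types:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    public class ValidateAllRows {
        // Hypothetical proposition row; not the actual rule form model.
        record Row(int index, String referencedComponent) {}

        // Validate every proposition row on save, even rows the user never viewed,
        // so a renamed or removed strategy component is caught in any row that references it.
        static List<String> validateOnSave(List<Row> rows, Predicate<String> componentExists) {
            List<String> errors = new ArrayList<>();
            for (Row row : rows) { // previously: only the focused row was checked
                if (!componentExists.test(row.referencedComponent())) {
                    errors.add("Row " + row.index() + ": unknown component '" + row.referencedComponent() + "'");
                }
            }
            return errors;
        }
    }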
SR-B4681 · Issue 274245
Twitter quoted status generation updated
Resolved in Pega Version 7.2.2
Previously, generating connector metadata for Twitter retweet status used a sample of 100 random tweets; however, those 100 tweets might not contain any retweets. To guarantee that a retweet is captured for connector metadata, the approach has been updated to look at the user's last 100 tweets when generating the metadata: if any of those tweets quotes another tweet, the quoted status is included. The same approach can be used for other properties, such as images.
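In outline, the sampling change looks like the following; the Tweet type and its accessors are hypothetical stand-ins rather than the connector's actual classes:

    import java.util.List;
    import java.util.Optional;

    public class QuotedStatusSampling {
        // Hypothetical tweet shape for illustration only.
        record Tweet(String text, Tweet quotedStatus) {}

        // Scan the user's most recent tweets (newest first) for one that quotes
        // another, so the quoted-status fields appear in the generated metadata.
        static Optional<Tweet> findQuotedSample(List<Tweet> latestUserTweets) {
            return latestUserTweets.stream()
                    .limit(100)                          // last 100 user tweets, not 100 random ones
                    .filter(t -> t.quotedStatus() != null)
                    .findFirst();
        }
    }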
SR-A100961 · Issue 267297
Configurable multiplier for DF assignments to improve performance
Resolved in Pega Version 7.2.2
Previously, the batch scalability factor was hardcoded to "2". This caused uneven partition distribution in large clusters, which had more resources available than were being used. To increase efficiency, the batch scalability factor has been exposed via the Dataflow services configuration landing page; a higher factor allows a greater number of smaller partitions, so that any remaining partitions can be allocated to idle threads.
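As an illustration of the trade-off, assuming the partition count scales with nodes, threads per node, and the scalability factor (the exact formula here is an assumption, not documented behavior):

    public class PartitionSizing {
        // More partitions than worker threads means stragglers can be picked up
        // by whichever thread goes idle first, instead of leaving capacity unused.
        static int partitionCount(int nodes, int threadsPerNode, int scalabilityFactor) {
            return nodes * threadsPerNode * scalabilityFactor; // factor was hardcoded to 2
        }

        public static void main(String[] args) {
            System.out.println(partitionCount(10, 4, 2)); // 80 partitions with the old fixed factor
            System.out.println(partitionCount(10, 4, 4)); // 160 smaller partitions once configurable
        }
    }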
SR-B5911 · Issue 275066
Data flow ramp up queuing improved
Resolved in Pega Version 7.2.2
Running a very large data flow generated the error "Unable to queue record for 5 minutes for processing in data flow. Retrying." This was caused by the data flow run ramp-up procedure, which created multiple assignments (1000+ in some cases) and acquired a lock for each of them; with very large flows, this took so long that the "Unable to queue record for 5 minutes ..." error was raised. To resolve this, the system has been modified to reduce the number of simultaneous database locks, so the ramp-up procedure takes much less time and the flow runs smoothly.
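One generic way to reduce simultaneous locks is to scope each lock to the creation of a single assignment rather than holding all of them for the whole ramp-up. A sketch under that assumption, with a hypothetical LockManager type:

    import java.util.List;

    public class RampUpSketch {
        // Hypothetical lock abstraction; not Pega's locking API.
        interface LockManager {
            AutoCloseable acquire(String assignmentId) throws Exception;
        }

        // Hold at most one lock at a time while creating assignments, instead of
        // acquiring 1000+ locks up front and keeping them all for the whole ramp-up.
        static void rampUp(List<String> assignmentIds, LockManager locks) throws Exception {
            for (String id : assignmentIds) {
                try (AutoCloseable lock = locks.acquire(id)) {
                    createAssignment(id); // lock is released as soon as this assignment exists
                }
            }
        }

        static void createAssignment(String id) { /* persist the assignment */ }
    }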
SR-B5634 · Issue 274799
Stack frame handling made more robust
Resolved in Pega Version 7.2.2
Stack frame handling has been made more robust to prevent the stack trace from getting out of sync in cases where the strategy execution throws an exception.
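A standard way to keep a manually managed frame stack in sync is to pop in a finally block, so an exception thrown during execution cannot leave a stale frame behind. A minimal sketch of that pattern (not the engine's actual code):

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class FrameStackSketch {
        private final Deque<String> frames = new ArrayDeque<>();

        // Pop in finally so an exception thrown by the body cannot leave the
        // frame stack out of sync with actual execution.
        void runInFrame(String frameName, Runnable body) {
            frames.push(frameName);
            try {
                body.run();
            } finally {
                frames.pop();
            }
        }
    }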
SR-A98315 · Issue 268392
Proposition landing page date/times localized
Resolved in Pega Version 7.2.2
Date values entered in the Proposition landing page (unversioned propositions) were always rendered in the GMT time zone rather than the user's time zone. This has been fixed.
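With java.time, rendering a stored instant in the viewer's zone instead of GMT looks like this:

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;

    public class LocalizedDates {
        // Convert a stored instant to the requesting user's time zone for display,
        // instead of always rendering it in GMT.
        static String render(Instant stored, ZoneId userZone) {
            ZonedDateTime local = stored.atZone(userZone);
            return local.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm z"));
        }

        public static void main(String[] args) {
            System.out.println(render(Instant.parse("2016-10-01T14:30:00Z"), ZoneId.of("America/New_York")));
        }
    }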