Identify job scheduler activities that exceed the run time threshold (8.6)
Analyze new alert messages to improve the performance of activities in your application that often access databases or external services. When a scheduled job runs longer than the alerts/jobscheduler/currentruntoolong/threshold value, Pega Platform™ now produces an alert in the performance alert log at the moment the job exceeds the threshold. You can analyze the alert directly in the log or by using Pega Predictive Diagnostic Cloud™ (PDC) to identify and address potential performance issues with long-running processes.
For example, consider an activity that cleans up the SAML login information log, where alerts/jobscheduler/currentruntoolong/threshold is set to the default of 300 seconds. When the rule runs as scheduled and the process takes more than 300 seconds, Pega Platform writes "PEGA0130 alert: Current Job Scheduler run is taking too long" to the log file at the moment the job exceeds 300 seconds. Upon investigation, you discover that the cause of the excessive processing time is a missing timeout. You fix the issue by adding a 200-second timeout, so the next time the process runs, the activity ends after 200 seconds and does not generate the alert.
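As a minimal sketch only, the threshold can typically be expressed as a prconfig.xml environment entry; the env-entry syntax shown here is an assumption, and a dynamic system setting may be the preferred way to configure it in your environment:

    <!-- Sketch only: threshold for the PEGA0130 alert, in seconds (300 is the documented default) -->
    <env name="alerts/jobscheduler/currentruntoolong/threshold" value="300" />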
Check the thread stack trace to determine which activity and step the job scheduler is executing.
The following is an example of a job scheduler stack trace. If there are two or more consecutive PEGA0130 alerts, you can see that the job scheduler is blocked on the activity step.
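A schematic sketch of such a stack trace is shown below; the thread name, package, and class names are hypothetical placeholders, and the actual frames come from the thread dump in your own environment:

    "JOB-SCHEDULER-Thread-3 [CleanupSAMLLoginInfo]" java.lang.Thread.State: RUNNABLE
        at com.example.integration.ExternalServiceClient.call(ExternalServiceClient.java:112)   (activity step waiting on an external call with no timeout)
        at [generated Java class for the running activity step]
        at [job scheduler executor frames]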