

This content has been archived and is no longer being updated.

Links may not function; however, this content may be relevant to outdated versions of the product.

Using the DBTrace and Profiler tools to understand performance

Updated on September 10, 2021

After the PAL data has been collected for an application, it is necessary to analyze the information.  Process Commander gives the application developer a number of performance tools for this analysis. 

  • the PAL Detail display (see Interpreting the Performance Tool (PAL) Detail display)
  • DB Trace (see Using the DB Trace and Profiler tools to understand performance)
  • Profiler


    This section begins with an overview of the factors that affect an application’s performance; the linked articles describe the tools in detail.

    IMPORTANT:  In Version 5.3, PAL timer values are “mutually exclusive.”  Prior to V5.3, when (for example) a declarative network was built to run a Declare Expression, the Elapsed Time Executing Declarative Rules might be higher than the Total Request Time, because the former included other processing that was also occurring, such as Rule I/O and Other I/O; the Declarative timer was counting time that was also counted in another timer. 

    Beginning in Version 5.3, if one timer is running and another is triggered, the first timer stops temporarily so that the time is not double-counted.  In the above example, if the Rule I/O timer starts, the Executing Declarative timer stops until the Rule I/O processing is finished, and then continues tracking the Declarative processing.

    NOTE:  The only exception to the “mutually exclusive” setup is Total Request Time, which accumulates no matter what other timers are running.
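    The pause-and-resume behavior described above can be sketched as follows.  This is an illustrative model only, not Pega's actual implementation; the class name, method names, and timer names are invented for the example, and times are passed in explicitly for clarity.

```python
class PalTimers:
    """Sketch of "mutually exclusive" timers: starting a nested timer
    pauses the current one, so no interval is counted twice."""

    def __init__(self):
        self.totals = {}     # timer name -> accumulated seconds
        self._stack = []     # nested timers; only the top one accrues time

    def _accrue(self, now):
        # Credit elapsed time to the currently active (top) timer only.
        if self._stack:
            name, started = self._stack[-1]
            self.totals[name] = self.totals.get(name, 0.0) + (now - started)
            self._stack[-1] = (name, now)

    def start(self, name, now):
        self._accrue(now)                # pause the current timer, keeping its time
        self._stack.append((name, now))

    def stop(self, now):
        self._accrue(now)                # credit the finishing timer
        self._stack.pop()
        if self._stack:                  # resume the timer that was paused
            name, _ = self._stack[-1]
            self._stack[-1] = (name, now)

# Declarative processing runs from t=0 to t=5; Rule I/O interrupts at t=2..4.
t = PalTimers()
t.start("Executing Declaratives", 0.0)
t.start("Rule I/O", 2.0)                 # Declarative timer pauses here
t.stop(4.0)                              # ...and resumes here
t.stop(5.0)
print(t.totals)   # {'Executing Declaratives': 3.0, 'Rule I/O': 2.0}
```

    Note that the two exclusive timers sum to the Total Request Time for the interval (5 seconds), which accumulates regardless of what other timers run.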

    Quick Links

    The Theory of PAL

    PAL Tools

    Additional Resources



    Suggested Approach

    DB Trace

    DB Trace is a tracing facility to assist in tuning system performance.  If users perceive that work items in the system take a long time to process, the DB Trace facility can help point to where the time is being spent.  DB Trace displays a great deal of low-level detail about system-level interactions. 

    This function records the sequence of SQL operations that Process Commander performs during processing, such as reads and commits.  Unlike the Tracer facility, DB Trace does not support real-time viewing of the operations data.  Instead, DB Trace gathers the data into a text output file which contains all the low-level database operations that Process Commander performs.  The Excel template included with DB Trace then formats this data for viewing.

    DB Trace should be used when:

    • the Database Access counts are high
    • Elapsed Time is high (especially if CPU time is low)
    • the Database Threshold is exceeded

    If a very complex SQL query to the PegaRULES database is taking a long time to return, DB Trace can give further information on details of the query.


    Preparing a DB Trace

    Before beginning a DB Trace, determine what aspects of the application are being researched.   Click DB Trace Options to see the Database Trace Events form.

    All of the Database Trace Events listed above are database operations which can be traced.  An event is one instance of the chosen item happening – one Commit, one instance of the blob being read, and so on.  One line is printed in the trace each time a checked event occurs.

    Depending upon the focus of the problem area, different items may be checked.  For example, if there is an issue with sending SQL statements to the database, the Prepared Statement items would definitely be chosen, but it is not necessary to track reading and writing to the blob.  Likewise, if there is an issue with caching, it is not necessary to track the SQL statements. 

    The output for DB Trace is read into several worksheets in an Excel workbook (see following sections for details).  Depending upon what Events are checked in the DB Trace Options, the data in these worksheets will change. 

    NOTE:  By default, all of the Trace events are checked.  Thus, if the DB Trace Options step is skipped, and the DB Trace is immediately started, all of these events will be traced.

    Clicking Generate Stack Trace for any of the listed items will present a stack trace of the Java code being called each time that event happens.  Example:

    com/pegarules/generated/activity/ra_activity_pegaaccel_integration_services_partynewsetup_a998caa619a569b312d1790eab029512$PzStep3_circum0_Embed_WorkPartyDef
    at com.pegarules.generated.activity.ra_activity_pegaaccel_integration_services_partynewsetup_a998caa619a569b312d1790eab029512.step3_circum0_Embed_WorkPartyDef(ra_activity_pegaaccel_integration_services_partynewsetup_
    at com.pegarules.generated.activity.ra_activity_pegaaccel_integration_services_partynewsetup_a998caa619a569b312d1790eab029512.perform(ra_activity_pegaaccel_integration_services_partynewsetup_a998caa619a569b312d1790eab029512.java:165)
    at com.pega.pegarules.engine.runtime.Executable.doActivity( 2186)
    at com.pegarules.generated.activity.ra_activity_pegaaccel_integration_services_new_d20bf170ebd9ebef4cd8e7babf7d01e5.step12_circum0(ra_activity_ 1442)
    at com.pegarules.generated.activity.ra_activity_pegaaccel_integration_services_new_d20bf170ebd9ebef4cd8e7babf7d01e5.perform(ra_activity_pegaaccel_
    at com.pega.pegarules.engine.runtime.Executable.doActivity( 2186)
    at com.pega.pegarules.engine.context.PRThreadImpl.runActivitiesAlt(

    Obviously, if Generate Stack Trace is checked for every event, tracing this information will output a large amount of DB Trace output, and bring system performance to a near-standstill.  However, if there is one event that is happening many more times than expected, the Stack Trace can help you focus on where the issue occurs. 

    For example, if opening one work object calls 200 lists from the PegaRULES database, a Stack Trace can help you find which code is calling all these lists.

    Due to its performance impact, use the Stack Trace sparingly; it produces large volumes of information to be examined.  In the example above, look at the activities called when the one work object creates 200 lists, to understand why they are called; run Tracer to see where these activities start and end.  As a last resort, if it is still not clear where the 200 lists are being called, use Stack Trace.

    NOTE:  Although the names of some Database Trace Events are similar to the PAL readings, these are not readings – they list one instance of the event in the DB Trace.  They do not (for example) count and then add together the number of times a cache hit happens, but print one line in the DB Trace each time it happens.



    Prepared Statements

    Prepared Statements are SQL queries that Process Commander sends to the database for any SQL operations – reading records, updating records, deleting records, etc.  If there is a question on SQL, probably all three Prepared Statements items should be checked.

    These events might be traced if the system is trying to get a list, and no results are returned, especially if the user thinks that there are some valid results that should be returned.

    In certain cases where custom SQL is being used (such as reporting, or other functions using Rule-Connect-SQL), the system can’t predict what SQL statements are included.  The Prepared Statements choice will track all custom SQL.

    Prepared Statement Queries

    If this Prepared Statement event is checked, a line will be added to the DB Trace whenever information is being read (“queried”) from the database.

    Prepared Statement Updates

    If this Prepared Statement event is checked, a line will be added to the DB Trace whenever records are being updated in the database (which includes inserting and deleting).
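    As a rough illustration of the difference between the two event types, here is a minimal parameterized-statement example using SQLite.  The table, columns, and values are invented for the example and do not reflect Pega's actual schema; SQLite stands in for the PegaRULES database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table rules (pxObjClass text, pxInsId text)")

# A "Prepared Statement Update" event: inserting (or updating/deleting) rows.
conn.execute("insert into rules values (?, ?)",
             ("Rule-Obj-Property", "pySampleProperty"))

# A "Prepared Statement Query" event: reading ("querying") rows back.
# The SQL text and the substituted values travel separately, which is why
# DB Trace reports them in separate columns (SQL and SQL Inserts).
rows = conn.execute("select pxInsId from rules where pxObjClass = ?",
                    ("Rule-Obj-Property",)).fetchall()
print(rows)   # [('pySampleProperty',)]
```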

    Get Database Metadata

    When a query is done on a database table, as part of the query, the cache is checked to see if there is information on what data is in the table (is there a blob column [pzPVStream], what columns are exposed, etc.).  If this information about the table is not in the cache, then it is necessary to go to the database and query the table itself to Get the Metadata (information about the table).  If this Event is checked, then a line is added to DB Trace whenever this happens.

    If the system has been running for a while, this number should be low, as most of the table information should be cached.  Therefore, if there are many Get Database Metadata events in the DB Trace, there may be a caching issue in the system, and further research should be done.

    Read blob

    If this Event is checked, a line would be added to the DB Trace every time a blob column entry (Storage Stream) is read from the database.   NOTE:  This Event will also report the size of the compressed data in the Storage Stream. 

    Write blob

    If this Event is checked, a line would be added to the DB Trace every time a blob column entry is written to the database.  NOTE:  This Event will also report the size of the compressed data in the Storage Stream. 


    Commit

    If this Event is checked, a line would be added to the DB Trace every time a commit is done in the database.  This would be a database (low-level) commit, NOT the Process Commander Commit (Activity Method).


    Rollback

    If this Event is checked, a line is added to the DB Trace if, in the middle of doing a “high-level” commit (Commit – method), something goes wrong and the transaction is rolled back.

    Assign Connection to Thread

    This Event would track whenever one of the Process Commander threads takes a connection (from our connection pool) to the database to accomplish some function.

    Cache Hit (Found)

    When trying to get a Rule from the database, the database cache is checked first.  If the Rule is found in the cache, this event is recorded.

    Cache Hit (not found)

    When trying to get a Rule from the database, if the Rule is not in the cache, then the database is checked.  If the Rule is not in the database either, that rule is then marked in the Cache as “does not exist”.  The next time that Rule is requested, the Cache reports the Rule doesn’t exist (without having to check the database for this information), and this event is recorded in DB Trace.

    Cache Miss

    If the cache has no information about the rule being requested (it may or may not be in the database, but the database must be checked), then this event is recorded in DB Trace.

    As the system runs, the application developer would expect to see more and more cache hits and fewer cache misses.  If the cache misses remain high, and the system isn’t in development (i.e., constant changes are being made which would invalidate cache entries), then there may be an issue with caching which should be investigated.
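    The three cache outcomes above can be sketched with a simple dictionary cache that remembers negative results.  This is an illustrative model with invented names, not Pega's rule cache; the key point is the sentinel that turns a repeated miss into a cheap “Cache Hit (not found)”.

```python
_MISSING = object()   # sentinel meaning "known not to exist in the database"

def fetch_rule(name, cache, database, events):
    """Record which of the three cache outcomes occurred for each lookup."""
    if name in cache:
        if cache[name] is _MISSING:
            events.append("cachedNotFound")   # Cache Hit (not found)
            return None
        events.append("cacheHit")             # Cache Hit (Found)
        return cache[name]
    events.append("cacheMiss")                # Cache Miss: must check the database
    value = database.get(name)
    cache[name] = value if value is not None else _MISSING
    return value

events, cache = [], {}
db = {"RuleA": "definition of RuleA"}
fetch_rule("RuleA", cache, db, events)   # miss, then loaded into the cache
fetch_rule("RuleA", cache, db, events)   # hit
fetch_rule("RuleB", cache, db, events)   # miss; marked "does not exist"
fetch_rule("RuleB", cache, db, events)   # cached not-found, no database call
print(events)   # ['cacheMiss', 'cacheHit', 'cacheMiss', 'cachedNotFound']
```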

    Run List

    If this Event is checked, a line is added to the DB Trace every time the system does an Obj-List (which corresponds to the listAPI method in Java).

    This event also shows that the blob entry was retrieved, and what property required going to the blob for this event.  When obtaining information from the database, the system will first try to retrieve data from exposed columns, as that is less expensive an operation than retrieving the blob.  If the property or properties requested are not exposed as columns in the database table, then the blob entry must be returned, and all the properties in the blob filtered to report only the list of properties that were requested.  If there is one particular property that is frequently being requested that continually requires going to the blob and filtering the list, it might improve performance to expose that property as a column in the database table.

    NOTE:  Checking this Event will not include tracking custom SQL.
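    The exposed-column versus blob trade-off described above can be sketched as follows.  The row, property names, and use of JSON are invented for the example – a real Storage Stream (pzPVStream) is compressed binary data, not JSON – but the cost asymmetry is the same: an exposed column is read directly, while an unexposed property forces the whole blob to be fetched and filtered.

```python
import json

# Hypothetical row: one exposed column plus a blob holding every property.
row = {
    "pyID": "W-123",
    "pzPVStream": json.dumps({"pyID": "W-123",
                              "pyStatusWork": "Open",
                              "pyLabel": "Example work object"}),
}

def get_property(row, name):
    if name in row and name != "pzPVStream":
        return row[name]                   # cheap: read the exposed column
    blob = json.loads(row["pzPVStream"])   # expensive: fetch and filter the blob
    return blob.get(name)

print(get_property(row, "pyID"))           # pyID is exposed: no blob read
print(get_property(row, "pyStatusWork"))   # unexposed: the blob must be read
```

    Exposing pyStatusWork as a real column would let the second lookup skip the blob entirely, which is exactly the optimization the paragraph above suggests.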

    List Result Count

    If this Event is checked, a line is added to the DB Trace displaying on one line how many results are returned from doing a list.

    Activity Start

    If this Event is checked, a line would be added to the DB Trace every time an Activity is started, whether at a top level or as part of the processing of another activity.  These lines provide context for all the other events (which tend to center around activity processes).

    Activity End

    If this Event is checked, a line would be added to the DB Trace every time an Activity finishes. 


    Running a DB Trace

    After choosing the appropriate Events to track (and perhaps one or two Stack Traces), click Start DB Trace.

    Once Start DB Trace has been clicked, the link changes to Stop DB Trace.

    Run the business process being traced, and then click Stop DB Trace.  The Download DB Trace to PC screen will display:

    As the directions state, first click Download or Display DB Trace.  This downloads the text trace file to the user’s PC.

    After the file has been downloaded, click Start Excel to start Excel.  This opens an Excel spreadsheet inside the browser, with a Macro button displayed:

    As instructed, click this button, and choose the text trace file that was created.  A great deal of data will be displayed in the Excel spreadsheet.

    A PivotTable Field List will be displayed.  Unless you are an experienced Excel PivotTable user, just click the X in the upper right corner of the box to close it.

    There are up to four worksheets in the Excel workbook:

    • Interaction Summary
    • SQL Calls
    • Cache Access
    • DB Trace Data

    These worksheets have columns of data corresponding to the DB Trace Events chosen. 


    DB Trace Data

    The DB Trace worksheet contains the list of all the events recorded for this DB Trace.  This worksheet is designed to be used with pivot tables; if the developer is familiar with Microsoft Excel pivot table technology, they should customize their own pivot tables to highlight desired fields.

    This worksheet contains the following columns:

    Column Name




    The order in which the events occurred

    sequential numbers (1, 2, 3, etc.)


    Number of the interaction



    ID of the connection (from the connection pool) that was used for this event



    ID of the requestor which performed this event

    NOTE:  For a requestor-level trace (the DB Trace), this should always be the same value.




    ID of the user associated with the requestor.

    NOTE:  For a requestor-level trace (the DB Trace), this should always be the same value.



    the duration, in seconds, of this event.


    High Level Op ID

    For every operation, there may be several events.  For example, for the operation of a request to get data from the database, there may be multiple events:  checking the cache, getting a connection to the database, reading the database, reporting data back.  The High-Level Operation ID is the number that links several events which are part of one high-level operation.
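    Grouping trace rows by this ID can be sketched as follows; the event rows are invented for the example, but mirror the multi-event operation described above.

```python
from itertools import groupby

# Hypothetical trace rows: (High Level Op ID, Event).  Consecutive rows
# sharing an ID belong to one high-level operation.
events = [
    (7, "cacheMiss"),
    (7, "assignToThread"),
    (7, "preparedStatementQuery"),
    (7, "readBlob"),
    (8, "cacheHit"),
]

by_operation = {op_id: [name for _, name in grp]
                for op_id, grp in groupby(events, key=lambda row: row[0])}
print(by_operation[7])
# ['cacheMiss', 'assignToThread', 'preparedStatementQuery', 'readBlob']
```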


    High Level Op

    A more detailed textual description of the event

    open instance of class Rule-Obj-Property by keys


    If the Label property is filled in, this column could be used to label or categorize Events as the processing is being Traced (if desired). 



    the type of Event


    Object Class

    The Class of the instance upon which the Event is acting – the class being opened, or saved, or from which the Activity is being started.



    size of the BLOB entry being read



    English description of Event.

    Looking for instance Rule-Obj-Property


    PRECONDPARAMS in cache


    SQL statement being run to accomplish the Event.

    select pzInsKey, pyRuleStarts, pyRuleSet, pyRuleSetVersion, pyClassName, pyRuleAvailable, pyRuleEnds, pyCircumstanceProp, pyCircumstanceVal, pyCircumstanceDateProp, pyCircumstanceDate from pr4_rule_property where pxObjClass = ? and pxInsId = ?

    SQL Inserts

    Values to plug into Prepared SQL Statement.

    <Rule-Obj-Property> <!PYSTEPSPRECONDPARAMS>

    SQL Operations

    Type of SQL operation.  Could be one of four values:

    • select
    • insert
    • update
    • delete


    Database Name

    Name of the database – the value in Data-Admin-DB-Name.


    Table Name

    The fully-qualified name of the table on which the SQL statement is acting.


    Stack Trace

    If Generate Stack Trace were checked for any Events in the DB Trace Options form, then the Stack Trace would be displayed in this column.

    NOTE:  Generate Stack Trace should only be chosen when Pegasystems recommends it, as this option has a significant effect on performance.


    The top 20 events that took the most time in this trace are highlighted in yellow:

    If a developer believes they have a large query which is taking too much time to process, they can check details in this spreadsheet.  The entries are for the requestor being investigated, and show where this user is spending time in the database. 

    Sort on the Time column, so the top 20 interactions are at the top of the spreadsheet, and look at the interactions that took the most time.  For each individual query, DB Trace displays exactly what was passed to the database, including the SQL statement and the variables that were substituted.  The Storage Stream reference is also included, so the developer can see which rows are returned.  The developer should see if a query could be made faster, or if the data from the query is available from the system in a different way that would not require a database query at all.

    In addition to information on individual queries, DB Trace will show the frequency of each query in the specified process.  For a given interaction, the developer should examine how many queries were run, and why – was the same query run more than once?  If so, the developer should look at the business logic of the application to see if this duplication could be eliminated.
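    The “sort on Time, look at the worst offenders” step amounts to a descending sort with a cutoff; the (event, seconds) rows below are invented to illustrate it.

```python
# Hypothetical (event, seconds) trace rows; mimic "Sort Descending" on
# the Time column and keep only the slowest entries.
rows = [("cacheHit", 0.001), ("runList", 0.900), ("readBlob", 0.020),
        ("runList", 0.850), ("commit", 0.005)]

def slowest(rows, n=20):
    return sorted(rows, key=lambda r: r[1], reverse=True)[:n]

print(slowest(rows, 2))   # [('runList', 0.9), ('runList', 0.85)]
```

    Two near-identical slow entries at the top, as here, is the pattern the duplicate-query discussion above tells you to investigate.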


    Example: Using DB Trace

    The developer of a new application measured one of his processes using PAL.  The Requestor Summary section of the PAL Detail Screen showed 10 database requests that exceeded the time threshold in this process.

    The developer then turned to the DB Trace information for further details.  After running DB Trace, they clicked on the DB Trace Data tab, and saw the following information:

    The Top 20 steps (measured by time) are highlighted in yellow.  It is possible to rank all the entries in the trace by clicking the arrow to the right of the Time column header and choosing Sort Descending:

    This will then bring all the top 20 queries by time to the top of the spreadsheet:

    The spreadsheet shows the queries which took more than 0.5 seconds, and gives information about what processing was happening.  In this example, the first two entries, which had the highest times, were Interaction 24 and Interaction 26; both were a “list using a list spec”. 

    Looking at the SQL code, they were both executing exactly the same code:

    Therefore, in this example, the developer would investigate why there were two lists run (for almost a full second of time, which is very high) almost one right after the other.  If the process is slightly redesigned, perhaps one of these queries could be eliminated.


    Other DB Trace Spreadsheets

    As mentioned before, depending upon which DB Trace Options are selected, some of the columns may not appear in some of the worksheets.  For example, if the SQL option Prepared Statement Queries is not chosen, then that column will not appear in any of the worksheets.  If none of the three SQL events are chosen (Prepared Statement Queries, Prepared Statement Updates, Prepared Statements), then the entire SQL Calls worksheet is omitted, with the following message:

    For each of these worksheets, several drop-down choices are available, which allow the analyst to show only part of the data.  For each sheet, the first column shows the type of data being collected on the spreadsheet (Interactions, SQL calls, etc.).  So on the Interaction Summary sheet, the first column displays the Interaction Numbers.  The drop-down there allows the developer to uncheck one or more interactions, to remove them from the list.  (All are checked and visible from the start.)

    If any of these numbers are unchecked, then those rows will not display in the worksheet.

    These “hidden” rows may be restored by checking the box in the drop-down list again.

    For each column, the data is summarized in three rows:

    Row Title



    the total amount of time, in seconds, spent on this event type for the interaction


    the number of occurrences of this type of event during the interaction


    the average time per instance of this event (Sum divided by Count) for this interaction

    Totals for the entire DB Trace and across each interaction are also provided.
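    The three summary rows per event type amount to a simple aggregation; the (event type, seconds) rows below are invented to show the calculation.

```python
# Hypothetical trace rows for one interaction: (event type, seconds).
rows = [("cacheHit", 0.5), ("cacheHit", 1.5), ("readBlob", 2.0)]

summary = {}
for op, secs in rows:
    s = summary.setdefault(op, {"Sum": 0.0, "Count": 0})
    s["Sum"] += secs     # total time spent on this event type
    s["Count"] += 1      # number of occurrences
for s in summary.values():
    s["Average"] = s["Sum"] / s["Count"]   # Sum divided by Count

print(summary["cacheHit"])   # {'Sum': 2.0, 'Count': 2, 'Average': 1.0}
```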

    The Data drop-down list allows the analyst to uncheck and remove one of these three row types (Sum, Count, Average).

    The Operation drop-down list shows the possible operations (Cache Hits, Activity Starts, etc.) with the appropriate events for that worksheet checked.  For example, the SQL operations are checked on the Interaction Summary and the SQL Calls spreadsheet, but not on the Cache Access sheet.  Again, unchecking one or more of these event types will hide that column of data in the worksheet.

    Clicking one of these numbers will show a filtered list of the operations that make up that number.  For example, in the Interaction Summary sheet, there were 21 instances of End Activity in interaction 39:

    Double-clicking that number creates a new worksheet, with just those 21 interactions in full detail:

    NOTE:  This information is filtered from the DB Trace Data worksheet, and has the same columns as that sheet (see below). 

    Interaction Summary

    The Interaction Summary worksheet lists each interaction in the DB Trace (see the Interactions section of this article for a definition of interaction).  For each interaction, a summary of the following types of requestor-level events appears:

    • assignToThread
    • cachedNotFound
    • cacheHit
    • cacheMiss
    • commit
    • endActivity
    • listResultCount
    • preparedStatement
    • preparedStatementQuery
    • preparedStatementUpdate
    • readBlob
    • runList
    • startActivity
    • writeBlob

    SQL Calls

    The SQL Calls worksheet shows data about the SQL calls for the business process during the DB Trace.  This worksheet contains the following columns:

    • preparedStatement
    • preparedStatementQuery
    • preparedStatementUpdate
    • Grand Total

    Cache Access

    The Cache Access worksheet tracks the number of times the cache was accessed when the system requested various Rule instances. This worksheet contains the following columns:

    • cachedNotFound
    • cacheHit
    • cacheMiss
    • Grand Total

    These columns correspond to the Events checked in the DB Trace Options screen, and will only have data if those Events were checked. 

    Global Trace

    Process Commander’s features include the ability to trace the various types of operations globally (across the entire system), as well as for just one requestor.  A full description of global DB Trace is outside the scope of this article.  For complete details, contact Global Customer Support.

    If tracing for one requestor, the DB Trace facility should be enabled through the Performance tool under the Administrator portal, as described above.  A global DB Trace is enabled differently, through the prconfig.xml file:

    1. Set the dumpStats entry under database to true
    2. Specify the events to be traced under traceEvents

    If no events are specified, then all events are traced.

       <env name="database/dumpStats" value="true"/>

       <env name="database/traceEvents" value="preparedStatementUpdate;preparedStatement;preparedStatementQuery"/>

    A new file is created each time the server is started.  The file is placed in the ServiceExport directory, and is named dbOperationstimestamp.txt, where timestamp is the timestamp of the file.

    NOTE:  You can't run the global DB Trace and a requestor-level DB Trace at the same time.  If the global DB Trace is being run, it overrides the requestor-level DB Trace, by design.



    Profiler

    The Profiler is a tracing tool introduced in V4.2.  In that version, it runs on all requestors. 

    In Version 5.2, the Profiler is enhanced to run on just one requestor, so the activities and when rules executed for one requestor can be traced.

    In addition to the Activities Called information available from Tracer, the Profiler also shows all of the When rules called, both inlined and rule-resolved.  (The Tracer does not show inlined when rule executions. The way the Tracer is constructed, if it tried to track the inlined When rules, the tracing would be as expensive as the when rule executions themselves.) 

    Use the Profiler in conjunction with the Performance Analyzer (PAL).  The PAL tool can help you locate areas where performance can be improved (for example, if one interaction requires 3,900 activity calls).  Then, for further information on activity steps, run the Profiler to get full details.

    Running the Profiler

    To start the Profiler, click the Start Profiler link on the Performance window.  (The link text changes to Stop Profiler.)

    Run the business process being traced, and then click Stop Profiler.  The Download Profiler Data to PC window appears:

    These files are named by the requestor identifier and the PRThread name.  In the highlighted example above:

    • requestor identifier = “HCA3E5D0AFD332938A7CF63DB0A0E73B6”
    • PRThread name = “STANDARD”

    Choose the file to download or display, and click Download or Display Profiler Data.

    The data appears (either on screen or when the file is opened) in an Excel spreadsheet:

    A new row is added to the file at the completion of every activity step, every When, and every model. 

    NOTE:  The Profiler lists items in the order they are completed.  Therefore, rows may not be output to the trace file in the order the sequences were started. To see the steps in the order they were started, sort on the Sequence column.
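    The note above can be illustrated with a few hypothetical rows: nested steps complete (and are written) before their caller, and sorting on Sequence restores start order.

```python
# Hypothetical Profiler rows, in the order they were written (completion
# order): the child activity and when rule finish before their caller.
rows = [
    {"Sequence": 2, "Name": "ChildActivity"},
    {"Sequence": 3, "Name": "IsUrgent (when)"},
    {"Sequence": 1, "Name": "ParentActivity"},
]

in_start_order = sorted(rows, key=lambda r: r["Sequence"])
print([r["Name"] for r in in_start_order])
# ['ParentActivity', 'ChildActivity', 'IsUrgent (when)']
```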

    The trace file includes the following data columns:

    Column Name



    An integer tracking the order in which each step began. 


    The Interaction number, which can be used to correlate the output with the PAL data.


    Name of the activity, when condition rule, or model rule involved in this step.


    Name of the activity which called the rule in the previous column (if known).


    For activities, the step number (for example, “3”).

    For when rules, “W”; for model rules, “M”; for HTML or XML stream rules, “S”.

    Method Name or When Result

    For activities, the step method.  For when rules, the status returned.  For models, this column is blank.

    Total CPU Time

    The total time used by the CPU, measured in seconds, during this step.  This number includes all the called items.

    CPU Time Without Children

    The time used by the CPU, measured in seconds, for this step, exclusive of any calls to substeps.  (This is useful for isolating Java steps which may perform other work in addition to calling other activities.)

    Total Wall Time

    The elapsed time (wall time), in seconds, used by this step.

    Wall Time Without Children

    The elapsed time (wall time), measured in seconds, for this step, exclusive of any calls to substeps.  (This number is useful for isolating Java steps which may perform other work in addition to calling other activities.) 
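    The relationship between the inclusive and “Without Children” figures can be sketched as follows; the step structure and numbers are invented for the example.

```python
# Sketch: a "Without Children" figure is a step's inclusive time minus
# the inclusive time of its direct substeps.
def exclusive_time(step):
    return step["total"] - sum(child["total"] for child in step["children"])

# A Java step that spends 1.00s total, of which 0.75s is in two called
# activities; the remaining 0.25s is the step's own work.
java_step = {"total": 1.00,
             "children": [{"total": 0.25, "children": []},
                          {"total": 0.50, "children": []}]}
print(exclusive_time(java_step))   # 0.25
```

    A step with a large inclusive time but a small exclusive time is mostly waiting on its substeps; a large exclusive time points at the step's own code.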


    Additional Resources

    The information in this article is derived from the Using Performance Tools paper (PDF).  That document includes a glossary defining over 130 PAL counters.  For easier searching and viewing, the paper is available as a series of linked articles:

    • PAL:  Overview of Performance Tools and Information Collection – An overview of the Performance Analysis tool and its methodology, written for all users; also a section on Collecting Performance Information, which is written for all users, and describes the method of collecting this data to give to your application developer.
    • PAL:  Understanding Performance Information – This section is designed for application developers, and describes the different performance tools in detail.  It looks at the specifics of what the PAL readings measure, as well as how to use them to better understand and improve the performance of your application.
    • Performance (PAL) Detail Screen – details on the PAL counters being tracked in the system, and how to interpret the groupings.
    • DB Trace and Profiler – (This article)


Pega Platform 7.1.1 - 8.3.1 Business Architect System Architect System/Cloud Ops Administrator System Administration
