
This content has been archived and is no longer being updated.

Links may not function; however, this content may be relevant to outdated versions of the product.

Support Play: Tuning the Sun JVM 1.4.2 and 5.0 for performance

Updated on September 10, 2021


As a Java application, Process Commander runs in the Java Virtual Machine (JVM). A Java application allocates all its objects in the memory heap, which is managed by the Garbage Collector. To maximize the efficiency of an application running in a JVM, it is important to "tune" the JVM - specifically, the memory allocation and garbage collection functions.

Process Commander is supported by both the Sun JVM and the IBM JVM. This article describes the tuning you can perform within the Sun JVM. It also includes example error messages that suggest areas which could benefit from this tuning.

For information on IBM JVM tuning, see Support Play: Tuning the IBM JVM 1.4.2 for performance.

Process Commander supports both Sun JVM 1.4.2 and 5.0. Most garbage collection concepts are the same for both versions. However, Version 5.0 has some differences that are highlighted in this article.

Quick Links


Types of Garbage Collectors

JVM Parameters

How to View Garbage Collection Statistics

Tuning Strategy

Troubleshooting an Out-of-Memory Error

Additional Resources

Need Further Help?




Upon startup of an application, the Sun Java Virtual Machine (JVM) allocates space in virtual memory. This space is known as the heap. The heap is the area in memory in which objects are created. Depending on how long objects remain referenced in memory, the heap size can expand or contract as necessary.

Determining heap size is important to maximizing your application’s performance. Specific information on precisely when the heap expands or contracts is beyond the scope of this article; it is described in articles on the Sun Java web site.

For the purposes of this Support Play, it is enough to know that the heap will handle this resizing efficiently if the application is well written.

Objects in Java are allocated space in the heap wherever there is enough contiguous space. Like data stored on a PC’s hard drive, the heap becomes fragmented over time. Unlike a PC, which can split one large file across several locations, the space allocated to a Java object must be contiguous. For example, the heap may contain enough overall free space for a 640KB object; however, if that space is broken up into 128K chunks, the space is not contiguous and cannot be used for this object.

In addition to the size of objects, objects are stored in different locations in the heap based on their longevity. Some objects are short-lived and become eligible for garbage collection quickly. Others, like the Process Commander caches, may be in use as long as the application or the JVM is running. In Java 5, as objects age, they are moved from the short-term allocation (the Young Generation) to the long-term allocation (the Tenured Generation).


Types of Garbage Collectors

The Sun JVM provides several different types of Garbage Collectors. Each collector operates in its own manner, but all use the same Memory Heap allocation concept described below.

As stated above, objects are allocated space in the heap, used, and then discarded.  An object continues to be live while referenced in the system. When an object ceases to be referenced, it becomes Garbage. Rather than keeping all the discarded objects in the heap (which would eventually take up all the space in the system), the Garbage Collector runs in intervals and removes the objects from allocation, freeing up memory.

Adaptive Sizing

Adaptive sizing tracks statistics about garbage collection times, allocation rates, and the free space in the heap after a collection. The JVM uses these statistics to adjust the sizes of the Young and Tenured generations to best fit the behavior of the application. Adaptive sizing is on by default. To turn it on or off explicitly, use the parameter:

-XX:+UseAdaptiveSizePolicy (on) / -XX:-UseAdaptiveSizePolicy (off)



Memory Heap Allocation Concepts

The Sun JVM divides the memory heap into Generations. Generations are carefully sized memory pools that hold objects of different ages. When one Generation fills up, a Garbage Collection is initiated and any objects that survive the collection are moved to a different Generation. The memory heap is divided into the following Generations: 

  • Young
  • Tenured
  • Permanent

A complete description of the Generation types appears in the Default/Serial Garbage Collection section.

The following are the available garbage collectors for Sun JVM 1.4.2 and 5.0: 

  • Default collector (1.4.2) / Serial collector (5.0) 
  • Throughput collector 
  • Concurrent Low Pause 
  • Incremental Low Pause collector (1.4.2 only)


Default/Serial Garbage Collection

This is the first collector you should run when tuning your system. If you find this collector insufficient after running it on the production system for a good amount of time, use the Throughput Collector.

In 1.4.2, this collector runs automatically (or by default). For 5.0, use the command line option:

-XX:+UseSerialGC


IMPORTANT:  If you do not use the command line option to specify a collector in 5.0, the JVM will select the collector best suited for your application, based on the class of the machine on which your application is running. For example, if the JVM determines that the Throughput Collector is best-suited for your system, but your testing shows the Serial Collector is best, activate the Serial Collector using the command line parameter.

This collector consists of the following: 

  • Young Generation
  • Tenured Generation
  • Permanent Generation

During Garbage Collection, the collector searches the generations and marks non-referenced objects as garbage. These objects are then collected as garbage and discarded. There are two types of garbage collections:

Minor Collection – Marks and collects non-referenced objects in the Young Generation, then moves any remaining objects to the Tenured generation or the Survivor spaces in the Young Generation (see the Young Generation section below). A Minor Collection is initiated when the Young Generation fills up, and it collects dead objects very quickly. Some objects survive multiple collections; these objects are moved to the Tenured Generation after a pre-determined number of Minor Collections. These Minor Collections are referred to below as Minor GC1, Minor GC2, and so on.

Major Collection – Marks and collects objects in both the Young and Tenured generations. A Major Collection is triggered when the Tenured generation fills up; Major Collections run more slowly than Minor Collections.

Young Generation

The Young Generation consists of three sections: 

  • Eden
  • To
  • From

Eden is the area in memory where objects are initially allocated. When the first minor Garbage Collection occurs (Minor GC1), all non-referenced objects are removed and the remaining objects are moved into one of the Survivor spaces (To and From).

IMPORTANT: To and From are called Survivor spaces and contain referenced objects moved out of Eden. They are referred to as Survivor spaces because objects that survive a Garbage Collection are moved here. Only one Survivor space is active at any given time. For example, if objects were moved from Eden to To during one Garbage Collection, objects will be moved into From during the next GC. The direction reverses with each subsequent GC. These sections are referred to as Survivor spaces throughout the rest of this document.

During Minor GC2, the Garbage Collector scans Eden and the active Survivor space, discards all non-referenced objects, and moves the surviving objects to the inactive Survivor space.

Tenured Generation

After 31 Minor GCs, any remaining referenced objects in the Survivor spaces are moved to the Tenured Generation. When the Tenured Generation gets filled, a Major Garbage Collection occurs.

You can use the -XX:MaxTenuringThreshold parameter to configure the number of Minor GCs that occur before objects are moved to the Tenured Generation; the default is 31. However, Pegasystems strongly recommends that you do not change the default value unless absolutely necessary.

Permanent Generation

The Permanent Generation holds the reflective data of the JVM, such as classes and methods. This generation falls outside of the heap allocation and the default size is 4m. Use the following parameters to configure the Permanent Generation:

-XX:PermSize – the initial size of the Permanent Generation.

-XX:MaxPermSize – the maximum size of the Permanent Generation.

NOTE: You must set the maximum size of the Permanent Generation to a value larger than the default of 64m. You will determine the proper value in the course of tuning the application and JVM.

This arrangement of Generations applies to all collectors except the Throughput collector (see the Throughput Collector section for more information).

IMPORTANT: If you are running JVM 5.0, Pegasystems strongly recommends the Serial collector for your garbage collection. This collector will be sufficient in the majority of applications, because the other collectors have additional overhead and complexity that will affect application performance.

If the Serial Collector is insufficient for your needs, use the Throughput Collector. Though the Throughput Collector is a "stop-the-world" collector, it collects garbage more quickly than the Serial collector for large, multi-threaded applications that run on several CPUs.


The Throughput Collector

The Throughput collector uses multiple threads to execute a minor collection, reducing the time the application must stop for collection. Therefore, use the Throughput Collector when you want to improve performance on a multiple-CPU system. On a system with N CPUs, the Throughput collector uses N threads to perform the collection; for example, if you have three CPUs, set the number of Garbage Collection threads to 3.
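As a rough sketch of this sizing rule, the GC thread count can be derived from the number of available processors. The class and method names here are illustrative, not part of Process Commander or the JVM:

```java
// Sketch: pick a Throughput Collector thread count equal to the number of
// CPUs, as recommended above, and build the matching command line flag.
public class GcThreads {
    // One GC thread per available CPU; never fewer than one.
    static int suggestedGcThreads(int cpus) {
        return Math.max(1, cpus);
    }

    // -XX:ParallelGCThreads is the standard Sun JVM option for this setting.
    static String gcThreadsFlag(int cpus) {
        return "-XX:ParallelGCThreads=" + suggestedGcThreads(cpus);
    }

    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println(gcThreadsFlag(cpus));
    }
}
```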

Enable the Throughput collector as follows:

-XX:+UseParallelGC

Set the number of Garbage Collection threads as follows:

-XX:ParallelGCThreads=<number of threads>

NOTE: Because Throughput uses multiple garbage collector threads during the minor collection, there is a small possibility of fragmentation due to the promotions from the Young generation to the Tenured generation. Each thread reserves a part of the Tenured generation for promotions and the division of the available space into these promotion buffers can cause fragmentation. Reducing the number of garbage collector threads and increasing the size of the tenured generation will reduce this fragmentation.

IMPORTANT NOTE: The above information is relevant for both JVM 1.4.2 and 5.0. The following information is specific to 5.0.

Throughput Collector 5.0

In the Throughput Collector, the generations are arranged so that parallel processing can more easily occur on multiple threads.

Ergonomics in the Throughput Collector, Version 5.0

In JVM 5.0, ergonomics is the concept of running the Throughput Garbage Collector with minimal or no command line tuning. That is, the application starts and garbage collection begins. You do not need to specify a minimum or maximum heap size; the Throughput Collector handles this.

Ergonomics selects the Throughput Collector as the default collector on “server class machines.” Java defines server class machines as machines with: 

  • two or more physical processors
  • 2 or more Gbytes of physical memory

You can use the following parameters, new to 5.0, to help tune the Throughput Collector:




-XX:MaxGCPauseMillis=<n>

Pause time is the amount of time the application stops while Garbage Collection is taking place.

This parameter allows you to set the maximum pause time during garbage collection to n milliseconds or less. The Throughput Collector adjusts the heap size and other garbage collection parameters to keep the pauses shorter than n milliseconds and still complete the garbage collection.


-XX:GCTimeRatio=<n>

This parameter allows you to set a throughput goal, measured in terms of time spent collecting garbage versus time spent in application processing. The ratio of garbage collection time to application time is:

1 / (1 + n)

For example, the parameter value -XX:GCTimeRatio=19 sets a goal of 5% of the total time for garbage collection:

1 / (1 + 19) = 1 / 20 = 5%

The default goal is 1%, or n=99. If the throughput goal is not being met, the size of the heap increases so the application can run between collections.
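The throughput-goal arithmetic above can be worked as a small example. This code is illustrative only, not part of the JVM or Process Commander:

```java
// Worked example of the GCTimeRatio arithmetic: the target fraction of
// total time spent in garbage collection is 1 / (1 + n).
public class GcTimeRatio {
    static double gcTimeFraction(int n) {
        return 1.0 / (1 + n);
    }

    public static void main(String[] args) {
        // -XX:GCTimeRatio=19 targets 1/20 of total time, i.e. 5%.
        System.out.println(gcTimeFraction(19));   // 0.05
        // The default, n = 99, targets 1/100, i.e. 1%.
        System.out.println(gcTimeFraction(99));   // 0.01
    }
}
```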


Concurrent Low Pause Collector

The Concurrent Low Pause Collector pauses the application threads at the beginning of the collection and again toward the middle of the collection to mark objects for garbage collection. These pauses are short in duration, though the second is slightly longer. During the rest of the collection, the garbage collection thread runs at the same time as the application threads. To activate this collector, use the command line option:

-XX:+UseConcMarkSweepGC


Use the Concurrent Low Pause Collector if your system can share resources with the Garbage collector when the application is running. This method is often used with applications that have large sets of long-living data, as well as multi-processor machines.

The Concurrent Low Pause collector marks and collects garbage in three steps: 

  1. Concurrently marks the non-referenced objects, causing a small pause in the application.
  2. Performs a second marking, or remarking, of the non-referenced objects (this is the pause in the middle). This marks all the non-referenced objects that were missed during the concurrent marking phase, possibly due to the execution of the application threads at the same time as the marking.
  3. Sweeps all the dead objects, removing them from the heap.

Issues that may arise with Concurrent Low Pause Collector include:

Major Collection – If the concurrent collection cannot complete before the Tenured generation fills up, a stop-the-world Major Collection occurs, halting all application threads. When this occurs, adjust your Garbage Collection parameters to allow the concurrent collection to complete.

For example, you may need to increase the ratio of the Young Generation to the Tenured generation. In this case, use the -XX:NewRatio parameter.

Floating Garbage – Floating Garbage consists of objects that were referenced during the marking phases but became non-referenced by the time the marking and sweeping completed. Increase the Tenured Generation by 20% to minimize Floating Garbage.


Incremental Low Pause Collector (1.4.2 only)

This collector collects portions of the Tenured generation at each minor collection. The Incremental collector attempts to avoid long major collection pauses by performing portions of the major collection work at each minor collection.

Use the Incremental Low Pause Collector when your system will not be negatively affected by trading more frequent, longer minor collection pauses for shorter full collection pauses. For example, a system with one processor may require a larger Tenured generation (many long-lived objects) and a smaller Young Generation (short-lived objects that do not survive until a minor collection is performed) to achieve efficient garbage collection. To activate this collector, use the command line option -Xincgc.


JVM Parameters

The following parameters allow you to configure memory heap size. There are many more parameters available for configuration, but these are the ones you should be concerned with when tuning your Sun JVM for Process Commander:





-Xms – Initial heap size.

-Xmx – Maximum heap size.

-XX:NewSize – Initial size of the Young Generation.

-XX:MaxNewSize – Maximum size of the Young Generation.

-XX:SurvivorRatio – Sets the size of the Eden and Survivor spaces.

-XX:MaxTenuringThreshold – The number of times an object is aged before being moved to the Tenured generation.

-XX:PermSize – Initial size of the Permanent Generation.

-XX:MaxPermSize – Maximum size of the Permanent Generation.

-XX:NewRatio – Ratio of the Young Generation to the Tenured Generation.

-XX:MinHeapFreeRatio – When the percentage of free space in a generation falls below this value, the generation expands to meet this value.

-XX:MaxHeapFreeRatio – When the percentage of free space in a generation exceeds this value, the generation shrinks to meet this value.


The parameters above allow for customization of each system. Adjust them accordingly for your Process Commander application. You can also adjust other parameters, at the discretion of your Java administrator.

For more information on these parameters, see the Tuning Strategy section.


Java Heap Size

The Java heap size is one of the most important tunable parameters of your JVM. It is especially important if several processes and JVMs are being run on your system. The heap contains all Java objects, along with free memory.

For the Sun JVM, Pegasystems recommends that you set -Xms and -Xmx to the same value. This eliminates constant resizing of the heap and allows for quicker startup.

The size of the heap depends on your application. When determining heap size, you must balance the frequency and length of garbage collections. You must also remember that memory allocated to an application heap is not available to other applications; for example, increasing the heap size to decrease garbage collections may cause issues in other areas. Heap size should never be larger than physical memory.

IMPORTANT NOTE: When entering the -Xmx and -Xms values, make sure you use a lowercase "m" after the number: -Xms100m. If you use an upper-case M, Java will not read the value and will report out-of-memory errors, because the memory was not set.


How to View Garbage Collection Statistics

When doing any tuning of a JVM, begin by using the verboseGC setting to create a garbage collection log file (known as the “verboseGC log” or “verboseGC file”). 

IMPORTANT NOTE:  VerboseGC logging overhead is small, so enabling this switch in any environment is strongly recommended. 

You can use these settings:

  • -verbose:gc – Prints garbage collection information.
  • -XX:+PrintGCDetails – Prints detailed garbage collection information.
  • -XX:+PrintGCTimeStamps – Adds a timestamp to the printed garbage collection information.
  • -XX:+PrintHeapAtGC – Prints detailed garbage collection information, including heap occupancy before and after GC.
  • -XX:+PrintTenuringDistribution – Prints object aging or tenuring information.
  • -Xloggc:<filename> – Writes garbage collection information to the specified file.


Viewing verboseGC from the log

A simple example of verbosegc output is:

0.046: [GC 0.046: [DefNew: 569K->46K(576K), 0.0018139 secs] 645K->185K(262080K), 0.0018435 secs]


  • 0.046 is the time, measured from application startup, that the garbage collection occurred.
  • DefNew: indicates the type of collector used; in this case, it is the default collector.
  • 569K is the size of the Young Generation before the collection.
  • 46K is the size of the Young Generation after the collection.
  • 576K is the amount of space available in the Young Generation.
  • 0.0018139 secs is the amount of time (application pause) necessary to complete the garbage collection of the Young Generation.
  • 645K is the size of the entire heap before the collection.
  • 185K is the size of the heap after the collection.
  • 262080K is the amount of space available in the heap.
  • 0.0018435 secs is the amount of time (application pause) necessary to complete the entire garbage collection (all generations in the heap). This value would also include the time for a Major Collection, if one occurred.
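As an illustration of these fields, a small parser can pull the before and after sizes out of a line in this format. This is a sketch that assumes the default-collector format shown above; other collectors and JVM versions format their log lines differently, and the class name is illustrative:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract the fields described above from a default-collector
// verboseGC line of the form shown in this section.
public class VerboseGcLine {
    static final Pattern MINOR = Pattern.compile(
        "([\\d.]+): \\[GC [\\d.]+: \\[DefNew: (\\d+)K->(\\d+)K\\((\\d+)K\\), ([\\d.]+) secs\\] "
        + "(\\d+)K->(\\d+)K\\((\\d+)K\\), ([\\d.]+) secs\\]");

    // Returns {youngBefore, youngAfter, heapBefore, heapAfter} in KB, or null.
    static long[] parse(String line) {
        Matcher m = MINOR.matcher(line);
        if (!m.find()) return null;
        return new long[] {
            Long.parseLong(m.group(2)), Long.parseLong(m.group(3)),
            Long.parseLong(m.group(6)), Long.parseLong(m.group(7))
        };
    }

    public static void main(String[] args) {
        String sample = "0.046: [GC 0.046: [DefNew: 569K->46K(576K), 0.0018139 secs] "
            + "645K->185K(262080K), 0.0018435 secs]";
        long[] f = parse(sample);
        // Garbage collected in the Young Generation: 569K - 46K = 523K.
        System.out.println("young freed: " + (f[0] - f[1]) + "K");
        // Garbage collected across the whole heap: 645K - 185K = 460K.
        System.out.println("heap freed: " + (f[2] - f[3]) + "K");
    }
}
```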

Use the Java command line to specify the verbosegc commands. For Process Commander, that generally involves using the application server Administrative Console (for WebSphere or WebLogic) or editing the setenv file (for Tomcat). 


Viewing verboseGC from Process Commander Monitoring Tools

The monitoring tool (the Monitor Servlet in Version 4.2, or the System Management Application in Version 5.1) allows data from the verboseGC log to be viewed in increments, by specifying a start time and a stop time.

This form has three buttons:

View – View the requested data on the page.

The second button forces an explicit garbage collection. This button is provided for debugging purposes only – generally it is recommended to allow the JVM to perform the garbage collection on its own. As this button applies to garbage collection, it is unavailable when Log Usage Statistics is selected.

CSV – Create a comma-delimited report of the requested data.


Garbage Collector Statistics

This view uses the log file generated by the verboseGC JVM parameter and organizes the GC data (broken out for each hour) to give a picture of the garbage collection activity over time.

NOTE:  Verbose GC output must be enabled in the JVM in order to use this feature.



GC Log File

Enter the full path to the garbage collection log output file, including the log file name. 

If the location of the log file is included in the JVM options specified when starting the application server, then this field will default to that log file path. 

JVM Start Time

Displays the start time for the system JVM.

Start Time

Displays the start time parameter for the system JVM, which is when garbage collection began. Change this time to report the garbage collection data to investigate a particular time/date segment.

Stop Time

Displays the stop time parameter for the system JVM, which is when garbage collection ended. Change this time to report the garbage collection data to investigate a particular time/date segment.

To view the statistics: 

  1. Click the radio button next to the Garbage Collector Statistics section. 
  2. In the GC Log File field, enter the path to the log file, including the file name.
                 Example:  f:\websphere\tempdir\logs\gclog.out
  3. Set the Start and Stop times. 
  4. Click View to view the report on the screen, or CSV to create a CSV file to download the report to a local machine.

NOTE: The garbage collection statistics all apply to one JVM (one node) in the Process Commander system.

GC Statistic



The start time for the system JVM.


The beginning of the time period chosen for this report.


The end of the time period chosen for this report.


The total of all garbage collections done in this JVM/node for the specified time period.


The number of major (or full) garbage collections done in this JVM/node for the specified time period.


The number of minor garbage collections done in this JVM/node for the specified time period.


The amount of space, in bytes, released in this JVM/node by all garbage collections done in the specified time period.

When an object is released in Java, it goes to garbage collection and the space required to hold that object in the JVM is freed.


The amount of space, in bytes, released in this JVM/node by major garbage collections done in the specified time period.


The amount of space, in bytes, released in this JVM/node by minor garbage collections done in the specified time period.


The amount of time, in seconds, required by all garbage collections done in this JVM/node in the specified time period.


The amount of time, in seconds, required to do the major garbage collections in this JVM/node in the specified time period.


The amount of time, in seconds, required to do the minor garbage collections in this JVM/node in the specified time period. 


The amount of time, in seconds, between the Start and Stop times specified.


The last time any garbage collection was run in this JVM/node.

CollectionElapsedTimePercentage

The total collection time expressed as a percentage of the elapsed time.

IMPORTANT: If the system is spending more than 1% of its processing time doing garbage collection, it is spending too much time, and a problem exists somewhere in the system setup. Check this measurement first; if it is high, look at the other collection times to see the collection pattern.

The Hourly Garbage Collection Statistics show the pattern of garbage collection over time. If the workload on the system is consistent, then the garbage collection should also be consistent. If the workload is consistent but there is a spike in the garbage collection, that should be investigated to see what kind of processing caused the spike.

Column heading



The hour (or portion of the hour) included in the specified time period.

Number of GC

The total number of collections for this hour.


The total storage freed, in bytes, for this hour.

Collect Time

The Total Collection Time for this hour.


The collection elapsed time percentage for this hour.


Log Usage Statistics

Log Usage shows overall system activity. Based on the number of interactions, the Log Usage shows various Performance Analyzer counts of system usage, so the System Administrator can see what activities are going on from a system-level perspective.

For example, if a user complains that the system is slow at 4 p.m., the System Administrator can choose Start and Stop parameters to highlight that time and see whether there was a sudden spike in the number of users, or in activities run by existing users, or some other occurrence that might cause a system slowdown. 

To view the statistics: 

  1. Click the radio button next to the Log Usage Statistics section. 
  2. In the Node Name field, enter the name of the node to be viewed.
  3. Set the Start and Stop times. 
  4. Click View to view the report on the screen, or CSV to create a CSV file to download the report to a local machine.

NOTE:  The Log Usage statistics all apply to one JVM (one node) in Process Commander.

For definitions of these statistics, reference the Using Performance Tools Tech Note.


Tuning Strategy

It is important that you tune the application before tuning the JVM. This ensures that the application processing is efficient. If you tune the JVM first, then some application problems may be hidden.

For example, due to a complex work object structure with many drop-down lists requiring data from the BLOB, one site's application was essentially loading the data from the entire database for each new work object - over 30MB of data for each work object. The site had tuned the JVM so that it very efficiently ran under this load, and handled the massive amount of garbage this setup created. However, it would have been much better had the staff looked at how the application was working and adjusted that, before tuning the JVM.

Pegasystems recommends starting settings for the upper and lower parameters of the heap as follows:



These parameters are minimum values that allow you to begin using Process Commander. Your settings may need to be higher, based on your server hardware and the number of other applications on the server. Each application’s memory usage will vary, depending upon the load (number of users) and type of processing being done. Therefore, each application will require specific tuning, using the verboseGC information. 


Tuning Your Application

Unlike some of the other performance areas that should be checked as the application is being built, tuning for memory usage should be done after the application is completed.

Begin the tuning process by enabling verboseGC. Note your start time to make it easier to find information in the log file. Then start the application and run it with several users for some amount of time, making sure to sample all the processing (create, process, and close each type of work object; send correspondence; run reports, etc.). After all the features have been exercised, close the application, noting the stop time, and review the GC log.

There are a number of application issues that may be highlighted by the data in the verboseGC log, including: 

  • the volume of garbage
  • quick garbage collections
  • the size of allocations vs. the number of users
  • the heap continuing to grow

When any of these issues occurs, it is important to know not only what the JVM is doing, but also what the application is doing. Since it is difficult to tell what the application is doing directly from the verboseGC log, the Process Commander monitoring tool described above shows the verboseGC log information (the garbage collection perspective) juxtaposed with the Log Trace information (the application perspective). This combined view of the statistics shows an hourly breakdown of the activity in the system and allows the System Administrator to see what the system is doing versus what the JVM is doing, and how the two relate. For example:

  • If there is a garbage collection problem, and too much garbage is being generated, is that directly or indirectly related to the activities being run? 
  • Is a spike in garbage collection paralleled by a spike in processing? 
  • Is the growth across the two areas consistent – that is, more activities running means more garbage – or is one growing faster than the other?

Use this tool to trace garbage collection anomalies to actions in the application.


Volume of Garbage

As shown in the How to View Garbage Collection Statistics section, each GC message states how much garbage is collected (in kilobytes, shown as K):

0.381: [GC 4599K->899K(262080K), 0.0082052 secs]

In this example, 3700K of garbage was collected (4599K – 899K). If this number is exceedingly high (for example, 759983392 bytes) in numerous messages, then you may need to review what the application is trying to load. Check the PegaRULES log and the verbosegc log to determine why so much data is being loaded.

Another aspect of the volume is the amount of time the system spends collecting the garbage. The Garbage Collector Statistics show the CollectionElapsedTimePercentage. If this value is higher than 1% for the system, then the system is spending too much time collecting garbage – the amount of time spent becomes perceptible to users as a system slowdown.

Check the number of collects, and the amount of garbage collected (TotalStorageFreed). High numbers here may point to issues such as inefficient database queries (loading the entire BLOB into the clipboard in order to retrieve one property, and then throwing all that data away).

IMPORTANT:  The goal of tuning should be that garbage collections take less than 1% of processing time.
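A minimal sketch of this 1% check follows. The class name and the sample figures (30 seconds of collection over a one-hour window) are made up for illustration; real values would come from the Garbage Collector Statistics:

```java
// Sketch of the 1% rule: total collection time divided by elapsed time,
// expressed as a percentage.
public class GcOverhead {
    static double collectionElapsedTimePercentage(double totalCollectionSecs,
                                                  double elapsedSecs) {
        return 100.0 * totalCollectionSecs / elapsedSecs;
    }

    public static void main(String[] args) {
        // Illustrative figures: 30 seconds of GC over a one-hour window.
        double pct = collectionElapsedTimePercentage(30.0, 3600.0);
        System.out.printf("GC overhead: %.2f%%%n", pct);
        System.out.println(pct <= 1.0 ? "within the 1% goal" : "tune further");
    }
}
```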


Frequent Garbage Collection

The first entry in each Garbage Collection message indicates the amount of time elapsed since system startup:

0.381: [GC 3976K->0K(4032K), 0.0082052 secs]

0.503: [GC 3976K->0K(4032K), 0.0044796 secs]

In the above example, the first collection occurred 0.381 seconds after system startup, and the next collection 0.503 seconds after startup – an interval of only 0.122 seconds. Also note that all the garbage was collected in a short amount of time (0.0082052 and 0.0044796 seconds).

[GC [DefNew: 16000K->16000K(16192K), 0.0000574 secs]

[GC [DefNew: 16000K->16000K(16192K), 0.0000518 secs]

The sample above shows an Empty Garbage Collection (16000K in use before the collection started and 16000K after – nothing was freed). These should not occur frequently. If they do, there may not be enough unallocated space, or contiguous unallocated space, in the heap to allocate objects. Investigate why so much of the heap is already allocated: are objects not being released when they should be, or is the heap too small?
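The two warning signs discussed in this section, empty collections and very frequent collections, can be checked mechanically from parsed log values. This is an illustrative sketch; the sample figures come from the log lines above, and the class name is invented:

```java
// Sketch: flag two GC warning signs from parsed verboseGC values:
// collections that free nothing, and collections that come close together.
public class GcWarningSigns {
    // An empty collection: occupancy is unchanged by the collection.
    static boolean isEmptyCollection(long beforeK, long afterK) {
        return beforeK == afterK;
    }

    // Seconds between two collection timestamps (both measured from startup).
    static double interval(double firstSecs, double secondSecs) {
        return secondSecs - firstSecs;
    }

    public static void main(String[] args) {
        // From the sample log lines above: 16000K -> 16000K is an empty GC.
        System.out.println(isEmptyCollection(16000, 16000));
        // Collections at 0.381s and 0.503s are only about 0.122s apart.
        System.out.println(interval(0.381, 0.503));
    }
}
```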

Another reason for frequent collections may be that the application is calling the Garbage Collector explicitly, using the Java System.gc() or Runtime.gc() methods. In this case, disable this functionality using the -XX:+DisableExplicitGC command line option.

This can also occur due to distributed RMI garbage collection, when other applications are attempting to connect to the application server through RMI. Again, use the -XX:+DisableExplicitGC command line option to disable this functionality.

Top of Page

Memory Leak

If your heap continues to grow, your system may have a memory leak where objects which are no longer being used are still referenced somewhere in the system. Therefore, they cannot be collected as garbage. In this case, the heap will continue to expand until it hits its limit, and then begin giving Java Out-Of-Memory errors. 

The first test for a memory leak is: when all the users log off the system, do the garbage collection numbers and the % used value go down? Check the verboseGC log first thing in the morning to see what the first few garbage collection entries show. If the % used value is low (for example, 25% used), then there is probably not a memory leak; some other problem is causing your symptoms.

If the number remains high (95% used first thing in the morning with one user), then look for issues which are known to cause memory leaks, such as: 

  • Check the version of the JVM that is running and see if there are memory leak issues with that version.
  • Check if the system is running Jasper and what version; there are known memory leak issues with some versions.
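A common shape for such a leak is a static, ever-growing collection: everything added to it stays reachable, so the Garbage Collector can never reclaim it and the heap climbs with every request. A minimal sketch (the class and names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Objects added here remain strongly referenced for the life of the JVM,
    // so they can never be collected as garbage.
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // 1 KB retained per "request", forever
    }

    static int retained() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        // 1000 requests later, roughly 1 MB can never be reclaimed.
        System.out.println("retained entries: " + retained());
        // The fix is to bound the cache, or remove entries once they are
        // no longer needed, so they become eligible for collection.
    }
}
```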

Top of Page

Tuning Your JVM

Tune the JVM after tuning your application. Once the application is as efficient as it can be, run the test again with verboseGC enabled and minimum load (one or two users) for a specified time, exercising all functionality, including starting up and shutting down the application. Check the verboseGC log to see the heap size at this stage – this should provide a rough guide to the starting size of the heap (the –Xms parameter) that is needed.

Then run the system through the same test under a heavy load (many users), exercising the same functionality and examine the verboseGC log again.

This section provides some tuning tips based on VerboseGC output that may indicate a Garbage Collection problem.

Top of Page

Young Generation is too Small

The Young Generation heap size in the sample below is 4MB, with an overall heap size of 32MB.

[GC [DefNew: 4032K->64K (4032K), 0.0429742 secs] 9350K->7748K (32704K), 0.0431096 secs]
[GC [DefNew: 4032K->64K (4032K), 0.0403466 secs] 11716K->10121K (32704K), 0.0404867 secs]
[GC [DefNew: 4032K->64K (4032K), 0.0443690 secs] 14089K->12562K (32704K), 0.0445251 secs]

The output above seems logical when compared to the amount of time spent in Garbage Collection. However, even though the occupancy of the Young Generation decreases by nearly its full size (by 3968K in the first line, from 4032K to 64K), the occupancy of the entire heap decreases by considerably less (by 1602K in the first line, from 9350K to 7748K). This indicates that only about 40% of the objects in the Young Generation were garbage (1602/3968 = .403); the remaining 2366K survived the collection and were promoted to the Tenured Generation.

To resolve this issue, you can increase the Young Generation size to increase the free space after collection. The sample below increases the Young Generation heap size to 8MB.

[GC [DefNew: 8128K->64K (8128K), 0.0453670 secs] 13000K->7427K (32704K), 0.0454906 secs]
[GC [DefNew: 8128K->64K (8128K), 0.0388632 secs] 15491K->9663K (32704K), 0.0390013 secs]
[GC [DefNew: 8128K->64K (8128K), 0.0388610 secs] 17727K->11829K (32704K), 0.0389919 secs]

Increasing the Young Generation to 8MB causes most of the Young Generation to be garbage. In the first line, the Young Generation decreases by 8064K (from 8128K to 64K). The entire heap decreases by 5573K (from 13000K to 7427K). This indicates that about 69% of the objects in the Young Generation were garbage (5573/8064 = .691 or 69%).

The size of the Young Generation does not change the number of live objects that survive a minor collection. However, there are fewer minor collections, and the cost of each collection (its pause time) is comparable. This results in fewer Garbage Collections and less overhead in your application.
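The garbage-versus-survivor arithmetic used above can be made explicit with a small helper. This is illustrative only (the class and method names are ours, not part of any Sun tool); the numbers come from the DefNew lines shown:

```java
// Computes, from one verboseGC "DefNew" line, how much of the Young Generation
// was real garbage versus survivors promoted to the Tenured Generation.
public class MinorGcLine {
    final long youngBefore, youngAfter, heapBefore, heapAfter;  // all in KB

    MinorGcLine(long yb, long ya, long hb, long ha) {
        youngBefore = yb; youngAfter = ya; heapBefore = hb; heapAfter = ha;
    }

    /** KB actually freed: the drop in total heap occupancy. */
    long freed() { return heapBefore - heapAfter; }

    /** KB promoted to the Tenured Generation: cleared from Young but not freed. */
    long promoted() { return (youngBefore - youngAfter) - freed(); }

    /** Fraction of the cleared Young Generation that was garbage. */
    double garbageFraction() { return (double) freed() / (youngBefore - youngAfter); }

    public static void main(String[] args) {
        // [GC [DefNew: 4032K->64K (4032K), ...] 9350K->7748K (32704K), ...]
        MinorGcLine small = new MinorGcLine(4032, 64, 9350, 7748);
        // [GC [DefNew: 8128K->64K (8128K), ...] 13000K->7427K (32704K), ...]
        MinorGcLine large = new MinorGcLine(8128, 64, 13000, 7427);
        System.out.printf("4MB young: %.0f%% garbage, %dK promoted%n",
                small.garbageFraction() * 100, small.promoted());
        System.out.printf("8MB young: %.0f%% garbage, %dK promoted%n",
                large.garbageFraction() * 100, large.promoted());
    }
}
```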

To illustrate the cost of the 4MB Young Generation versus the 8MB, the samples below were generated using the command line options:

-XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime
These commands measure the amount of time the application runs between collection pauses (Application Time), as well as the length of the collection pauses (Total time for which application threads were stopped).

This sample was run on the 4MB Young Generation:

Application Time: 0.5291542 seconds
[GC [DefNew: 3968K->64K (4032K), 0.0460948 secs] 7451K->6186K (32704K), 0.0462350 secs]
Total time for which application threads were stopped: 0.0468229 seconds
Application Time: 0.5279058 seconds
[GC [DefNew: 4032K->64K (4032K), 0.0447854 secs] 10154K->8648K (32704K), 0.0449156 secs]
Total time for which application threads were stopped: 0.0453124 seconds
Application Time: 0.9063706 seconds
[GC [DefNew: 4032K->64K (4032K), 0.0464574 secs] 12616K->11187K (32704K), 0.0465921 secs]
Total time for which application threads were stopped: 0.0470484 seconds

The sample above shows the following: 

  • The amount of time the application runs varies from .53 (.5291542) seconds to .90 (.9063706) seconds.
  • The minor collection pauses are approximately from .045 (.0449156) to .047 (.0465921) seconds.

This indicates between 5% and 8% overhead for minor collection pauses. Using the actual values from the sample:

.0465921/.9063706 = .051 (5.1%, rounded to 5%)

.0449156/.5279058 = .085 (8.5%, rounded to 8%)
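This overhead arithmetic can be reproduced mechanically with a small helper (the names are illustrative) that divides each pause by the application run time that preceded it:

```java
public class GcPauseOverhead {
    /** Pause length as a fraction of the application run time before it. */
    static double overhead(double pauseSecs, double applicationSecs) {
        return pauseSecs / applicationSecs;
    }

    public static void main(String[] args) {
        // Values taken from the 4MB Young Generation sample above.
        System.out.printf("worst case: %.1f%%%n", overhead(0.0449156, 0.5279058) * 100); // roughly 8%
        System.out.printf("best case:  %.1f%%%n", overhead(0.0465921, 0.9063706) * 100); // roughly 5%
    }
}
```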

This sample was run on the 8MB Young Generation:

Application Time: 1.3874623 seconds
[GC [DefNew: 8064K->63K (8128K), 0.0509215 secs] 11106K->5994K (32704K), 0.0510972 secs]
Total time for which application threads were stopped: 0.0517092 seconds
Application Time: 1.5225065 seconds
[GC [DefNew: 8127K->63K (8128K), 0.0432982 secs] 14058K->8273K (32704K), 0.0434172 secs]
Total time for which application threads were stopped: 0.0440447 seconds
Application Time: 1.4263524 seconds
[GC [DefNew: 8127K->64K (8128K), 0.0363538 secs] 16337K->10381K (32704K), 0.0364811 secs]
Total time for which application threads were stopped: 0.0369103 seconds

The sample above shows the following: 

  • The amount of time the application runs varies from 1.38 to 1.52 seconds.
  • The minor collection pauses are from .043 (.0434172) to .051 (.0510972) seconds.

This indicates roughly 3% overhead (2.9% to 3.7%) for minor collection pauses. Again, using the actual numbers from the example:

.0434172/1.5225065 = .0285 (2.9%)

.0510972/1.3874623 = .0368 (3.7%)

Top of Page

The Young Generation Size is too Large

In the section above, when the Young Generation heap size was doubled (from 4MB to 8MB) the overhead due to minor collections was cut in half. If next it is increased to 16MB, the Young Generation may be too large.

[GC [DefNew: 16000K->16000K (16192K), 0.0000574 secs] [Tenured: 2973K->2704K (16384K), 0.1012650 secs] 18972K->2704K (32576K), 0.1015066 secs]
[GC [DefNew: 16000K->16000K (16192K), 0.0000518 secs] [Tenured: 2704K->2535K (16384K), 0.0931034 secs] 18704K->2535K (32576K), 0.0933519 secs]
[GC [DefNew: 16000K->16000K (16192K), 0.0000498 secs] [Tenured: 2535K->2319K (16384K), 0.0860925 secs] 18535K->2319K (32576K), 0.0863350 secs]

Because the Young Generation now occupies half of the entire heap, the Tenured Generation is no larger than the Young Generation, so a minor collection cannot be guaranteed enough room to promote its survivors. As a result, garbage collection of the Young Generation on its own cannot complete, and only Full Collections take place. The Young Generation is still collected, but only as part of the Full Collection.

Top of Page

Is the Tenured Generation too large or too small?

In the previous tuning example, the Young Generation heap size was 8MB and the total heap size was 32MB. This resulted in major collection pauses of about .13 seconds.

[GC [DefNew: 8128K->8128K (8128K), 0.0000558 secs] [Tenured: 17746K->2309K (24576K), 0.1247669 secs] 25784K->2309K (32704K), 0.1250098 secs]

If you increase the total heap size to 64MB, the major collection pauses increase to .22 seconds:

[GC [DefNew: 8128K->8128K (8128K), 0.0000369 secs] [Tenured: 50059K->5338K (57344K), 0.2218914 secs] 58187K->5338K (65472K), 0.2221317 secs]

In the above example, the major collections occur about one third as often as in the 32MB heap example (roughly every 30 seconds instead of every 10-11 seconds). To see this, use the command line option that prefixes each collection with a timestamp:

-XX:+PrintGCTimeStamps
The following example shows the 32 MB example with major collections occurring every 10-11 seconds:

111.042: [GC 111.042: [DefNew: 8128K->8128K (8128K), 0.0000505 secs] 111.042: [Tenured: 18154K->2311K (24576K), 0.1290354 secs] 26282K->2311K (32704K), 0.1293306 secs]
122.463: [GC 122.463: [DefNew: 8128K->8128K (8128K), 0.0000560 secs] 122.463: [Tenured: 18630K->2366K (24576K), 0.1322560 secs] 26758K->2366K (32704K), 0.1325284 secs]
133.896: [GC 133.897: [DefNew: 8128K->8128K (8128K), 0.0000443 secs] 133.897: [Tenured: 18240K->2573K (24576K), 0.1340199 secs] 26368K->2573K (32704K), 0.1343218 secs]
144.112: [GC 144.112: [DefNew: 8128K->8128K (8128K), 0.0000544 secs] 144.112: [Tenured: 16564K->2304K (24576K), 0.1246831 secs] 24692K->2304K (32704K), 0.1249602 secs]

The following example shows the 64 MB example with major collections occurring approximately every 30 seconds:

90.597: [GC 90.597: [DefNew: 8128K->8128K (8128K), 0.0000542 secs] 90.597: [Tenured: 49841K->5141K (57344K), 0.2129882 secs] 57969K->5141K (65472K), 0.2133274 secs]
120.899: [GC 120.899: [DefNew: 8128K->8128K (8128K), 0.0000511 secs] 120.899: [Tenured: 50384K->2430K (57344K), 0.2216590 secs] 58512K->2430K (65472K), 0.2219384 secs]
153.968: [GC 153.968: [DefNew: 8128K->8128K (8128K), 0.0000511 secs] 153.968: [Tenured: 51164K->2309K (57344K), 0.2193906 secs] 59292K->2309K (65472K), 0.2196372 secs]

Top of Page

Major Collection Pauses are too Long

If your Major Collection pauses are too long (for example, more than .5 seconds), you can implement the Concurrent Low Pause collector using the command line option:

-XX:+UseConcMarkSweepGC

[GC [ParNew: 64576K->960K (64576K), 0.0377639 secs] 140122K->78078K (261184K), 0.0379598 secs]
[GC [ParNew: 64576K->960K (64576K), 0.0329313 secs] 141694K->79533K (261184K), 0.0331324 secs]
[GC [ParNew: 64576K->960K (64576K), 0.0413880 secs] 143149K->81128K (261184K), 0.0416101 secs]
[GC [1 CMS-initial-mark: 80168K (196608K)] 81144K (261184K), 0.0059036 secs]
[CMS-concurrent-mark: 0.129/0.129 secs]
[CMS-concurrent-preclean: 0.007/0.007 secs]
[GC[Rescan (non-parallel) [grey object rescan, 0.0020879 secs] [root rescan, 0.0144199 secs], 0.0166258 secs] [weak refs processing, 0.0000411 secs] [1 CMS-remark: 80168K (196608K)] 82493K (261184K), 0.0168943 secs]
[CMS-concurrent-sweep: 1.208/1.208 secs]
[CMS-concurrent-reset: 0.036/0.036 secs]
[GC [ParNew: 64576K->960K (64576K), 0.0311520 secs] 66308K->4171K (261184K), 0.0313513 secs]
[GC [ParNew: 64576K->960K (64576K), 0.0348341 secs] 67787K->5695K (261184K), 0.0350776 secs]
[GC [ParNew: 64576K->960K (64576K), 0.0359806 secs] 69311K->7154K (261184K), 0.0362064 secs]

In the above example, note that the concurrent phases are long (for example, the concurrent sweep is 1.2 seconds), but the application does not pause during the sweep. The application pauses occur during the CMS-initial-mark and CMS-remark phases. The average pause time for the remark phase was .019 seconds.

        (Rescan + Reset) / 2    =   Average pause time

      (.0020879 + .036) / 2    =   .01904395

You can also use the Incremental Low Pause Collector to reduce Major Collection pauses. Initiate this collector using the command line option –Xincgc.

[GC [DefNew: 64575K->959K (64576K), 0.0606204 secs] [Train: 5537K->5474K (196608K), 0.0108803 secs] 68664K->6434K (261184K), 0.0730248 secs]
[GC [DefNew: 64575K->959K (64576K), 0.0559078 secs] [Train: 6813K->6749K (196608K), 0.0115077 secs] 70050K->7709K (261184K), 0.0680289 secs]
[GC [DefNew: 64575K->959K (64576K), 0.0607615 secs] [Train: 8178K->8114K (196608K), 0.0112705 secs] 71325K->9074K (261184K), 0.0727902 secs]

During the minor collection, part of the Tenured generation (Train) is collected and there is no separate major collection. The minor collection pauses are approximately .060 seconds. If the incremental collection of the Tenured generation cannot be completed before the Tenured generation is full, a major collection occurs:

[GC [DefNew: 64575K->64575K (64576K), 0.0005311 secs] [Train MSC: 68226K->135K (196608K), 0.4347857 secs] 132802K->2565K (261184K), 0.4397831 secs]

Top of Page

Use Adaptive Sizing

You can turn Adaptive Sizing on or off if the output shows inconsistent collection behavior. The following sample shows a collection with Adaptive Sizing turned on:

[GC 62752K->2992K (259328K), 0.0126936 secs]

[GC 62896K->60192K (259328K), 0.0127579 secs]

[GC 63008K->3120K (259328K), 0.0123150 secs]

[GC 63024K->60256K (259328K), 0.0120565 secs]

[GC 63072K->3024K (259328K), 0.0127215 secs]

[GC 62928K->60208K (259328K), 0.0113090 secs]

Adaptive Sizing adjusts the size of the Survivor spaces. In the example above, the adjusting of the Survivor spaces causes the Young Generation collections to alternate between an almost complete collection (occupancy drops to 2992K on the first line) and a minimal collection (60192K remains on the second line).

Turning Adaptive Sizing off (with the -XX:-UseAdaptiveSizePolicy command line option) results in the following output:

500.285: [GC 51526K->2678K (253952K), 0.0120328 secs]

506.734: [GC 51830K->2646K (253952K), 0.0117832 secs]

513.068: [GC 51798K->2742K (253952K), 0.0124632 secs]

519.566: [GC 51894K->2550K (253952K), 0.0122923 secs]

Note that the Young Generation collections now consistently reclaim almost the entire Young Generation, leaving only about 2600K occupied each time. This improves performance and allows Major Collections to take place as intended.

Top of Page

Troubleshooting an Out-of-Memory Error

When diagnosing an out-of-memory error, it is important to consider the information that comes after the java.lang.OutOfMemoryError text. The following are some examples: 

  • Java heap space – This indicates that an object could not be allocated in the heap. This may be caused by the heap being too small or objects not being garbage collected because the application still holds references to them. 
  • PermGen Space – This indicates that the Permanent Generation is full. The Permanent Generation may need to be increased using the –XX:MaxPermSize parameter. 
  • Requested array size exceeds VM limit – This indicates that the application attempted to allocate an array larger than the heap.
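The message suffix can be observed directly. The sketch below deliberately provokes the array-size error (the class and method names are ours); on HotSpot JVMs an array length this close to Integer.MAX_VALUE is rejected regardless of heap size, and on other configurations the same attempt fails with the heap-space message instead:

```java
public class OomMessageDemo {
    /** Attempts an impossible allocation and returns the OutOfMemoryError detail text. */
    static String provokeArrayOom() {
        try {
            long[] huge = new long[Integer.MAX_VALUE];
            return "allocated " + huge.length + " longs (heap is enormous)"; // not expected
        } catch (OutOfMemoryError e) {
            // The text after "java.lang.OutOfMemoryError:" is the diagnostic clue,
            // typically "Requested array size exceeds VM limit" or "Java heap space".
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println("java.lang.OutOfMemoryError: " + provokeArrayOom());
    }
}
```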

Top of Page

Additional Resources

The following resources have more information on tuning your Sun JVM:

Additional PDN Articles:

Top of Page

Need Further Help?

If you have followed this Support Play but require additional help, contact Global Customer Support by logging a Support Request.

Top of Page
