
Troubleshooting the Stream service

Updated on March 11, 2021

To troubleshoot issues related to the Stream service, you can copy all the data from one cluster to another, for example, from the production environment to a testing environment. Troubleshooting the Stream service in the testing environment does not affect users of the production environment.

Solution

  1. Stop the Stream service nodes on the cluster (Cluster1) from which you want to copy data.
    Caution: Do not decommission the nodes. Decommissioning a node deletes all the data on that node.
  2. Disable all services on the cluster to which you want to copy the data (Cluster2), and turn off the Pega Platform instance.
  3. From Cluster1, export the pr_data_stream_nodes table.
  4. Ensure that the pr_data_stream_nodes table on Cluster2 is empty, and then import the data from Cluster1, as shown in the first sketch after this procedure.
  5. Move the kafka-data folder from every Stream service node of Cluster1 to Cluster2, as shown in the second sketch after this procedure.
    By default, the Stream service data is in the current working directory. For example, for the Apache Tomcat server, this path is <your_tomcat_folder>/kafka-data.
  6. Start the Stream service nodes on Cluster2.
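
The following is a minimal sketch of the table copy in steps 3 and 4. It assumes that both clusters use a PostgreSQL database and that the psycopg2 driver is installed; the connection strings and host names are placeholders, and you can equally use your database vendor's native export and import tools for the pr_data_stream_nodes table.

    # Sketch: copy pr_data_stream_nodes from the Cluster1 database to the
    # Cluster2 database. Assumes PostgreSQL and psycopg2; the DSNs below are
    # placeholders and must be replaced with your own connection details
    # (qualify the table name with your data schema if required).
    import io
    import psycopg2

    SOURCE_DSN = "host=cluster1-db dbname=pega user=pega"  # hypothetical Cluster1 database
    TARGET_DSN = "host=cluster2-db dbname=pega user=pega"  # hypothetical Cluster2 database

    buffer = io.StringIO()

    # Export the table from Cluster1 (step 3).
    with psycopg2.connect(SOURCE_DSN) as src, src.cursor() as cur:
        cur.copy_expert("COPY pr_data_stream_nodes TO STDOUT WITH CSV HEADER", buffer)
    buffer.seek(0)

    # Verify that the table on Cluster2 is empty, then import the data (step 4).
    with psycopg2.connect(TARGET_DSN) as dst, dst.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM pr_data_stream_nodes")
        if cur.fetchone()[0] != 0:
            raise RuntimeError("pr_data_stream_nodes on Cluster2 is not empty")
        cur.copy_expert("COPY pr_data_stream_nodes FROM STDIN WITH CSV HEADER", buffer)

Because the psycopg2 connection is used as a context manager, the import is committed when the second block completes successfully.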
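The following is a minimal sketch of the kafka-data copy in step 5. It assumes SSH access from an administration host to the Stream service nodes of both clusters and rsync on that host; the host names, node pairing, and kafka-data path are placeholders.

    # Sketch: copy the kafka-data folder from each Cluster1 Stream node to a
    # Cluster2 node through a local staging directory. Host names, the node
    # pairing, and the path are placeholders; adjust them to your environment.
    import subprocess
    import tempfile

    # Hypothetical mapping of Cluster1 Stream nodes to Cluster2 Stream nodes.
    NODE_PAIRS = [
        ("cluster1-node1", "cluster2-node1"),
        ("cluster1-node2", "cluster2-node2"),
    ]
    # Placeholder for the Stream service data location, for example
    # <your_tomcat_folder>/kafka-data on an Apache Tomcat server.
    KAFKA_DATA_PATH = "/opt/tomcat/kafka-data"

    for source_host, target_host in NODE_PAIRS:
        staging = tempfile.mkdtemp(prefix="kafka-data-")
        # Pull the folder from the Cluster1 node into the local staging directory.
        subprocess.run(
            ["rsync", "-az", f"{source_host}:{KAFKA_DATA_PATH}/", f"{staging}/"],
            check=True,
        )
        # Push the staged copy to the Cluster2 node.
        subprocess.run(
            ["rsync", "-az", f"{staging}/", f"{target_host}:{KAFKA_DATA_PATH}/"],
            check=True,
        )

Run this copy only while the Stream service nodes are stopped on Cluster1 and all services are disabled on Cluster2, as described in steps 1 and 2.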
