Removing DDS nodes from a Cassandra cluster

Updated on May 17, 2024

This content applies only to On-premises and Client-managed cloud environments

Reduce the size of a Cassandra cluster by permanently removing Decision Data Store (DDS) nodes from the cluster. This procedure allows the cluster to rebalance itself and ensures that the removed nodes are not reintroduced into the cluster.

Caution: Before you delete any Cassandra directories or data directories on a node, first remove the node from the Cassandra cluster, and make suitable backups of these assets before you take any irreversible action.
  1. Run the nodetool utility from any machine in the Cassandra cluster.
  2. Run a parallel repair on the node that you want to remove so that the data that it owns is repaired and correctly replicated:
    nodetool repair -par
  3. Remove the node:
    nodetool removenode <ID>

    where <ID> is the host ID of the node that you want to remove. You can view the host ID by running the nodetool status command. For a sample command sequence, see the first example after this procedure.

    Tip: You can also remove nodes from Pega Platform by using the Decommission action. For more information, see Managing decision management nodes.
    Result: The data is streamed from the removed node to the nodes that now own or replicate that data. The removal is complete when the command prompt returns or when the node is no longer listed in the output of the nodetool status command.
  4. Perform a graceful shutdown of the Pega Java virtual machine (JVM) on the node that you removed from the Cassandra cluster.
  5. If the node runs services other than DDS and still needs to be part of the Pega cluster, remove the DDS entry from the NodeType setting on the node.
    For example:

    If the original node type was set to -DNodeType=DDS,ADM,Web, then set the node type to -DNodeType=ADM,Web.

    In this example, the node remains responsible for the ADM and Web node types and must be restarted for the change to take effect. For an illustration of this setting change, see the second example after this procedure.

  6. Restart the Pega JVM.
  7. Repair and clean up the remaining nodes. For one way to run the following commands on each remaining node, see the third example after this procedure:
    1. Run a parallel repair on each remaining node to ensure that any data that was not replicated for any reason during the removal is repaired and replicated correctly:
      nodetool repair -par
    2. Run a cleanup on each remaining node to remove any data that no longer belongs to that node:
      nodetool cleanup
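
The following example is a minimal sketch of the commands in steps 2 and 3. The repair runs on the node that you want to remove, and the host ID in the removenode command is a placeholder; use the host ID that the nodetool status command reports for that node.

    # On the node that you want to remove, repair and replicate the data that it owns (step 2).
    nodetool repair -par

    # From any node in the cluster, look up the host ID of the node to remove
    # in the Host ID column of the output (step 3).
    nodetool status

    # Remove the node by its host ID; the UUID below is a placeholder (step 3).
    nodetool removenode 867a3c8f-4b25-4d5c-9c1e-2f3a6b1d0e42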
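
The following example sketches the NodeType change in step 5. It assumes, only for illustration, that the JVM arguments are supplied through a JAVA_OPTS variable in a Tomcat setenv.sh file; the file and variable names in your environment may differ.

    # Hypothetical location: the setenv.sh file of the Tomcat instance that runs the Pega JVM.
    # Before the change, the node runs the DDS, ADM, and Web node types:
    #   JAVA_OPTS="$JAVA_OPTS -DNodeType=DDS,ADM,Web"
    # After you remove the DDS entry, the node keeps only the ADM and Web node types:
    JAVA_OPTS="$JAVA_OPTS -DNodeType=ADM,Web"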
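
The following example is a sketch of step 7 applied to the remaining nodes one at a time. The host names are placeholders, and the example assumes that you can reach each node over SSH; adapt the host list and the remote-execution method to your environment.

    # Placeholder host names for the remaining DDS nodes.
    for host in dds-node1 dds-node2 dds-node3; do
        # Repair any data that was not replicated during the removal.
        ssh "$host" "nodetool repair -par"
        # Remove any data that no longer belongs to this node.
        ssh "$host" "nodetool cleanup"
    done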
What to do next: Verify that the cluster is in a consistent state with all appropriate nodes reporting the correct ownership, and that the application performs correctly. For more information, see Nodetool commands for monitoring Cassandra clusters.
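
As a minimal verification sketch, assuming that you run the commands on one of the remaining nodes, the following checks confirm that the removed node is gone and that the remaining nodes agree on the cluster state:

    # Confirm that the removed node is no longer listed and that the remaining
    # nodes report status UN (Up/Normal) with the expected ownership.
    nodetool status

    # Confirm that all remaining nodes report the same schema version.
    nodetool describecluster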
