Cassandra error: Too many open files

Updated on May 17, 2024

This content applies only to On-premises and Client-managed cloud environments

The Cassandra process might crash with an error indicating that there are too many open files. By performing the following task, you can check for issues with querying, saving, or synchronizing data, and then correct the error.

The root cause is that the Cassandra process has hit the system-imposed limit on the maximum number of open files. The following code snippet shows an example of the error message:

Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException: 
/path/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/
mc_txn_flush_8bdc78f0-7d48-11e9-9b2e-0f78ea2b6c2b.log: Too many open files
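Before you change any limits, you can confirm that the Cassandra process is actually approaching its file descriptor limit. The following shell sketch assumes a Linux host where Cassandra runs as a single Java process; the "CassandraDaemon" match pattern is an assumption and might need adjusting for your installation, and the commands might require root or the Cassandra user's permissions:

# Find the Cassandra process ID. The match pattern "CassandraDaemon"
# is an assumption and may differ in your environment.
CASSANDRA_PID=$(pgrep -f CassandraDaemon | head -n 1)

# Count the file descriptors currently open by that process.
ls /proc/"$CASSANDRA_PID"/fd | wc -l

# Show the soft and hard open-file limits that the kernel enforces
# for this specific process.
grep "Max open files" /proc/"$CASSANDRA_PID"/limits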
  1. For Linux, enter the following commands in the Unix shell to check the limits on the number of open files:
    • To check the hard limit, enter ulimit -Hn

      Only the root user can raise this limit, but any process can lower it.

    • To check the soft limit, enter ulimit -Sn

      Any process can raise this limit up to the hard limit, or lower it.

  2. Change the limit on the maximum number of open files, depending on your business needs, as shown in the sketch after this list.
    Do not raise the limit on open files above 100,000. For more information about changing open file limits, see the Apache Cassandra documentation.
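Note that ulimit changes the limit only for the current shell session. The following sketch shows one common way to make the change persistent on Linux; the file paths and the "cassandra" user and service names are generic Linux conventions, not values from this article, so adjust them to your installation:

# In /etc/security/limits.conf (or a drop-in file under
# /etc/security/limits.d/), raise the open-file limits for the user
# that runs Cassandra. The "cassandra" user name is an assumption
# about your installation.
cassandra  soft  nofile  100000
cassandra  hard  nofile  100000

# If Cassandra runs as a systemd service, the service does not read
# limits.conf; create a service override instead:
#   sudo systemctl edit cassandra
# and add:
#   [Service]
#   LimitNOFILE=100000
# then apply the change:
#   sudo systemctl daemon-reload
#   sudo systemctl restart cassandra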