Cassandra error: Too many open files
This content applies only to On-premises and Client-managed cloud environments
The Cassandra process might crash with an error that indicates that too many files are open. This condition can surface as failures when querying, saving, or synchronizing data. Perform the following task to check the open file limits and then correct the error.
The root cause is that the Cassandra process has hit the system-imposed limit on the maximum number of open files. The following snippet shows an example of the error message:
Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException:
/path/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/
mc_txn_flush_8bdc78f0-7d48-11e9-9b2e-0f78ea2b6c2b.log: Too many open files
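Before changing any limits, it can help to confirm how many files the Cassandra process actually holds open. The following commands are a minimal diagnostic sketch, assuming a Linux host; the CassandraDaemon pattern used to find the process ID is an assumption and might need adjusting for your installation:

# Find the process ID of the Cassandra JVM (the CassandraDaemon
# pattern is an assumption; adjust it for your installation).
pid=$(pgrep -f CassandraDaemon)

# Count the file descriptors that the process currently holds open.
# Run as root or as the user that owns the process.
ls /proc/"$pid"/fd | wc -l

# Show the open-file limits that apply to that specific process.
grep "Max open files" /proc/"$pid"/limits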
- For Linux, enter the following commands in the Unix shell to check the limits on the number of open files:
- To check the hard limit, enter
ulimit -Hn
Only the root user can raise this limit, but any process can lower it.
- To check the soft limit, enter
ulimit -Sn
Any process can change this limit, up to the value of the hard limit.
- Change the limit on the maximum number of open files, depending on your business needs, as shown in the sketch after this list. Do not raise the limit on open files above 100,000. For more information about changing open file limits, see the Apache Cassandra documentation.
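As a sketch of what such a change might look like, assuming a Linux system where Cassandra runs under a service account named cassandra (the account name and the 100,000 value are placeholders; choose values that fit the guidance above):

# Temporary change, for processes started from the current shell only;
# raising the value above the hard limit requires root.
ulimit -n 100000

To make the change persist, add entries like the following to /etc/security/limits.conf:

cassandra soft nofile 100000
cassandra hard nofile 100000

Note that if Cassandra runs as a systemd service, PAM limits such as these do not apply; in that case, the LimitNOFILE directive in the service unit controls the limit.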