I have a question regarding hardware requirements. In my case, I plan to run hundreds of relatively tiny databases (thousands of edges, megabytes in size for each database) on a single server.
The documentation mentions a “cache size” setting, but how should I calculate this number for my case?
Interesting question; we don’t normally optimise for this use case.
The dominant factor here will probably be RocksDB, since each of your databases will create one or more RocksDB databases under the hood, and RocksDB is relatively memory-hungry per database in our experience. If you want to run many databases, you should reduce the per-database cache size to something like 50 MB; the total budget is then roughly 50 MB × the number of databases, so about 10 GB for 200 small databases.
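To make the arithmetic concrete, here is a minimal sketch of that estimate. The 50 MB per-database figure is the rule of thumb from above, not an official recommendation, and the variable names are my own:

```python
# Estimate a total cache budget for many small databases, assuming a
# fixed per-database RocksDB cache (the 50 MB rule of thumb from above).
PER_DB_CACHE_MB = 50   # assumed per-database cache size in MB
NUM_DATABASES = 200    # number of small databases on the server

total_mb = PER_DB_CACHE_MB * NUM_DATABASES
print(f"Total cache budget: {total_mb} MB (~{total_mb / 1024:.1f} GB)")
```

Running this prints a budget of 10000 MB, i.e. roughly the 10 GB mentioned above; adjust the two constants to match your actual deployment.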
I also think RocksDB should be smart enough to unload databases that haven’t been used in a while, so you may not need more than that amount of cache for a long while.