I have an NDB database with a single table and a simple access pattern: very frequent reads and updates of individual rows, with no complex joins, aggregates, etc. The table has ~200M rows, but at any given time the operations are mostly limited to about 10% of them, i.e. the set of "active" rows is roughly 10% of the total. The content of that 10% changes slowly (and unpredictably), where slowly means hours.
NDB memory usage for this database is ~100GB per data node, as reported by "all report memory usage".
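For reference, this is how I obtained that figure, via the `ndb_mgm` management client (connect string shown is a placeholder for my setup):

```shell
# Query DataMemory/IndexMemory usage across all data nodes
# (mgmd assumed to be on localhost:1186; adjust -c as needed)
ndb_mgm -c localhost:1186 -e "all report memory usage"
```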
I want to continue using NDB but reduce its memory usage by taking advantage of the fact that the actual active data set is only 10% of the total data set.
Is this doable and if so how?
Another way of saying all the above: I'd like NDB to keep only a subset of the data cached in memory, but I cannot see a way to do it.
Thanks in advance for any suggestions.
Tom