Hi,
we are currently running a clean-up operation on the data in our disk-based tables.
After deleting some data we realized we had to run an "OPTIMIZE TABLE <tab>" to free up the space. Since OPTIMIZE does not actually do anything in our case, we run "ALTER TABLE <tab> ENGINE=NDBCLUSTER" instead.
Looking at the results, we realized that some DATA_FREE is always left behind. The reason is number of partitions * extent_size, which in our case (4 nodes * 12 LDMs * 1 MByte extent_size) is a minimum of 48 MBytes per table. With our number of tables, the current total of DATA_FREE is 14 GBytes of high-performance SSD storage per node.
So the idea is to reduce the extent_size to something closer or equal to the block size (32K).
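For reference, this is roughly how we measure the waste; the queries only use standard INFORMATION_SCHEMA tables, and the WHERE clauses are just examples:

```sql
-- Per-table DATA_FREE as reported by the MySQL server
SELECT TABLE_NAME, DATA_FREE
  FROM INFORMATION_SCHEMA.TABLES
 WHERE ENGINE = 'ndbcluster';

-- Extent size and free extents per disk data file
SELECT FILE_NAME, TABLESPACE_NAME, EXTENT_SIZE, TOTAL_EXTENTS, FREE_EXTENTS
  FROM INFORMATION_SCHEMA.FILES
 WHERE ENGINE = 'ndbcluster' AND FILE_TYPE = 'DATAFILE';
```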
Questions:
a) Is there a limit on the number of data files per tablespace?
b) Is an ALTER TABLE into a new tablespace with an adjusted extent_size the correct way to migrate, or are there better options?
c) Is there anything against using 32K as the extent_size and 1G as the data file size?
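For context, the migration we have in mind for b) would look roughly like this; the tablespace and file names are placeholders, and we assume an already existing logfile group (here called lg_1):

```sql
-- New tablespace with the smaller extent size
CREATE TABLESPACE ts_small
  ADD DATAFILE 'ts_small_01.dat'
  USE LOGFILE GROUP lg_1
  EXTENT_SIZE 32K
  INITIAL_SIZE 1G
  ENGINE NDBCLUSTER;

-- Move a table into the new tablespace (rewrites the table)
ALTER TABLE <tab>
  TABLESPACE ts_small
  STORAGE DISK
  ENGINE NDBCLUSTER;
```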
Best regards