I'm running a 4-datanode MySQL Cluster (7.2.4) and want to use all eight cores of each machine. Unfortunately, when running ndbmtd, the data nodes sometimes crash with the error 'Out of SendBufferMemory in sendSignal'.
I read somewhere that this is a known bug and that a workaround is to use ndbd instead.
Indeed, data nodes running ndbd don't crash, but they max out a single core at 100% while the other seven sit idle.
The documentation says it's possible to run multiple ndbd processes on one host, but that this is neither recommended nor supported (probably because having eight nodes go down simultaneously when one host crashes is a bad idea).
So what can I do? Do I really have to stick with the inefficient single-core setup?
Adjusting the buffer sizes won't help; it just takes a little longer until the buffer fills up and the node crashes anyway.
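For reference, these are the send-buffer knobs in config.ini I've been playing with; the sizes below are just illustrative, not values I'm claiming are correct:

```ini
[ndbd default]
# Total send buffer per data node (example size; raising it
# only delays the 'Out of SendBufferMemory' crash for me)
TotalSendBufferMemory=32M

[tcp default]
# Per-transporter send buffer, 2M by default (example size)
SendBufferMemory=8M
```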