Hi everybody,
I need help with this problem, please.
I have an NDB Cluster with two management nodes, two API (SQL) nodes, and two data nodes.
This is the output of the SHOW command in the management client (ndb_mgm):
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=12 @192.168.2.21 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 0, *)
id=13 @192.168.2.22 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.2.29 (mysql-5.7.17 ndb-7.5.5)
id=11 @192.168.2.20 (mysql-5.7.17 ndb-7.5.5)
[mysqld(API)] 2 node(s)
id=14 @192.168.2.22 (mysql-5.7.17 ndb-7.5.5)
id=15 @192.168.2.21 (mysql-5.7.17 ndb-7.5.5)
One of my API servers (the one with id=14) is stuck: a "system user" thread is waiting for a commit lock, and all queries that insert data into my database are waiting for the global read lock. The other server works just fine and I can run INSERT queries on it. There is nothing special in the error log.
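To see exactly what the stuck threads are waiting on, I was going to run something like this on the id=14 server (just my own sketch, I am not sure it is the right approach; as far as I know the mdl instrument is disabled by default in 5.7, so the metadata_locks query probably needs it enabled first):

mysql> SHOW FULL PROCESSLIST;
mysql> UPDATE performance_schema.setup_instruments SET enabled = 'YES' WHERE name = 'wait/lock/metadata/sql/mdl';
mysql> SELECT object_type, lock_type, lock_duration, lock_status, owner_thread_id FROM performance_schema.metadata_locks WHERE object_type IN ('GLOBAL', 'COMMIT');

The idea is that the GLOBAL and COMMIT rows should show which thread actually holds the global read lock / commit lock and which sessions are PENDING behind it.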
Also, all data nodes are connected to this server, as you can see here:
mysql> select * from ndbinfo.nodes;
+---------+--------+---------+-------------+-------------------+
| node_id | uptime | status  | start_phase | config_generation |
+---------+--------+---------+-------------+-------------------+
|      12 | 693397 | STARTED |           0 |                 1 |
|      13 | 691556 | STARTED |           0 |                 1 |
+---------+--------+---------+-------------+-------------------+
2 rows in set (0.00 sec)
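I also thought about checking for long-running or stuck transactions on the data nodes coming from that SQL node, with something like this (again just a guess; I believe ndbinfo.cluster_transactions exists in 7.5, but I am not certain it is the right place to look):

mysql> SELECT node_id, transid, state, inactive_seconds, client_node_id FROM ndbinfo.cluster_transactions ORDER BY inactive_seconds DESC;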
These are the contents of /etc/my.cnf:
[mysqld]
ndbcluster
bind-address=192.168.2.22
sql-mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
performance_schema
server-id=2
default-storage-engine=NDBCLUSTER
log_error=/var/log/mysql/error.log
long_query_time = 1
slow_query_log=ON
log-bin=/var/log/mysql/mysql-bin.log
binlog_do_db=fastestcow
expire_logs_days = 7
binlog_format=STATEMENT
skip-symbolic-links
local-infile=0
[mysql_cluster]
ndb-connectstring=192.168.2.29:1186
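Since log-bin is enabled on this node, I assume the "system user" thread is the NDB binlog injector, so I was planning to compare the binlog status on both SQL nodes with something like this (not sure it is the right diagnostic):

mysql> SHOW ENGINE NDBCLUSTER STATUS;
mysql> SHOW MASTER STATUS;

If the binlog epochs in the NDBCLUSTER status output stop advancing only on id=14, that would at least confirm it is the injector thread that is blocked.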
Please, I need help with this: why is this server having these problems while the other one is just fine?
Thanks a lot.