Channel: MySQL Forums - NDB clusters

Where do I point our webservers to when using a 5-node NDB Cluster (no replies)

Hi,

We've managed to configure our 5-node NDB Cluster, with shared privileges and all, successfully :-))

1 management node
2 data nodes
2 sql nodes

The only question left is: which server / node / IP address do we point our webservers to?
We want a fully redundant N+1 environment for the web portal:
Highly available Netscaler load balancer;
min. 2 webservers
1 management node (maybe 2 in near future)
2 data nodes
2 sql nodes

So basically I need to know how to load balance the SQL nodes ;-)

Thanks in advance!

Marcel
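
Since the SQL nodes are just ordinary mysqld endpoints, the usual approach is to give the webservers a single virtual IP on the load balancer and have it balance TCP 3306 across both SQL nodes. I can't speak to the Netscaler specifics, so here is a rough sketch of the idea in HAProxy syntax instead, with placeholder addresses:

# fragment of haproxy.cfg - balance MySQL traffic across both NDB SQL nodes
listen ndb_sql_nodes
    bind *:3306
    mode tcp
    balance roundrobin
    option tcp-check
    server sql1 192.0.2.11:3306 check
    server sql2 192.0.2.12:3306 check

The webservers then connect to the balancer's address as if it were a single MySQL server; if one SQL node fails, the health check drops it from rotation.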

MySQL Cluster installation: unstable status output from the ndb_mgm show command (no replies)

Hi,
I'm configuring MySQL Cluster 7.5 on CentOS 7.5. I have:
1 management node
2 data nodes
2 sql nodes
I have an issue when running the ndb_mgm show command on the management node.

The first show
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.100.101 (mysql-5.7.23 ndb-7.5.11, Nodegroup: 0, *)
id=3 @192.168.100.102 (mysql-5.7.23 ndb-7.5.11, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.100.100 (mysql-5.7.23 ndb-7.5.11)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 192.168.100.103)
id=5 @192.168.100.104 (mysql-5.7.23 ndb-7.5.11)

The next show command
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.100.101 (mysql-5.7.23 ndb-7.5.11, Nodegroup: 0, *)
id=3 @192.168.100.102 (mysql-5.7.23 ndb-7.5.11, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.100.100 (mysql-5.7.23 ndb-7.5.11)

[mysqld(API)] 2 node(s)
id=4 @192.168.100.103 (mysql-5.7.23 ndb-7.5.11)
id=5 (not connected, accepting connect from 192.168.100.104)

and so on

id=4 (not connected, accepting connect from 192.168.100.103)
id=5 @192.168.100.104 (mysql-5.7.23 ndb-7.5.11)
...

Please tell me what I am doing wrong.
Thanks
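
One hedged diagnostic step (not a definitive cause): run these on each SQL node host to confirm that each mysqld is actually holding its own API slot and staying attached to the data nodes, rather than the two servers flapping:

-- on each SQL node (192.168.100.103 and 192.168.100.104)
SHOW GLOBAL VARIABLES LIKE 'ndb_connect%';
SHOW ENGINE NDBCLUSTER STATUS\G  -- the connection row shows this mysqld's cluster_node_id and whether it is connected

If both report themselves connected while ndb_mgm keeps alternating, a firewall between the SQL node hosts and the data nodes' ServerPort is a common culprit worth ruling out.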

What are the New and Superseded Configuration Parameters in NDB 7.6.7 compared to 7.4.11? (no replies)

Hi,
What are the new and superseded configuration parameters in NDB 7.6.7 compared to 7.4.11?
Regards

New and Superseded Configuration Parameters in NDB 7.6.7 compared to 7.4.11 (no replies)

Hi,

What are the new and superseded configuration parameters in NDB 7.6.7 compared to 7.4.11?

Out of transaction markers (no replies)

When I try to alter a table I get this error. I'm running an NDB MySQL cluster on Ubuntu 14.04.

mysql> ALTER TABLE polar_defs.attribute_header AUTO_INCREMENT = 4000000;
ERROR 1297 (HY000): Got temporary error 279 'Out of transaction markers in transaction coordinator' from NDBCLUSTER
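
A hedged observation: the "Got temporary error ... from NDBCLUSTER" wording marks this as a temporary error, and temporary NDB errors can usually be retried once the pressure on the transaction coordinator passes. A rough retry wrapper, with a placeholder host name:

# retry the ALTER a few times instead of failing on the first temporary error
for attempt in 1 2 3; do
    mysql -h sqlnode1 -e "ALTER TABLE polar_defs.attribute_header AUTO_INCREMENT = 4000000" && break
    sleep 10
done

If it fails persistently, reducing concurrent write activity while the ALTER runs is worth trying before touching any config.ini limits.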

MySQL Cluster 7.6.8 performance jump of up to 240% (no replies)

Upgrade from 7.5.6 to 7.6.8 (no replies)

Need urgent help. I am trying to upgrade from Cluster 7.5.6 to 7.6.8.

I used mysqldump to export all the databases with the --all-databases option.

I recreated the data drives with ndbd --initialize.

I created the new SQL node with --initialize as well.

I now try to import the whole export file and it bombs on the user tables that were shared between the SQL nodes. It will import the user tables, but it also wipes out my root password.

What am I missing or what did I do wrong?
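
One hedged approach that avoids this: dump the application schemas separately from the mysql system schema, restore only the application dump into the freshly initialized SQL nodes, and recreate the users/grants explicitly rather than importing mysql.user from the old version. A sketch, with placeholder database names:

# dump only the application databases, not the mysql system schema
mysqldump --databases app_db1 app_db2 --routines --events > apps.sql

# after the new SQL nodes are initialized and have a known root password:
mysql < apps.sql
# then CREATE USER / GRANT the application accounts by hand (or from a reviewed grants dump)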

Disk Layout / Directory Structure with RAID (no replies)

For an NDB Cluster with 2 data servers, each with 24 disks of 600 GB: what's the best disk layout / directory structure and RAID configuration? Along with root, var, tmp, etc., and our undo/redo tablespaces, LCP logs, backups, and the home dir for mysql. The 2 mgmt+mysqld servers are separate, with 2 disks of 600 GB each.

data going to sqlnode datadir in NDB config (no replies)

Hi,

I installed an NDB cluster on Linux using the tarball and I am able to successfully start all 4 nodes; the status is below. But when I create databases from the SQL node, they go into the SQL node's local datadir; they are not going into the NDB data nodes. Can someone please help me? My cnf contents are below.

NOTE: I am using a custom directory structure and I used the commands below to bring the SQL node up:

./bin/mysqld --defaults-file=/sites/app/servers/sqlnode/etc/my.cnf --initialize-insecure --datadir=/sites/app/servers/sqlnode/data/ --basedir=/sites/app/APPS/mysql

./bin/mysqld --defaults-file=/sites/app/servers/sqlnode/etc/my.cnf --basedir=/sites/app/APPS/mysql/ --datadir=/sites/app/servers/sqlnode/data/ &


ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @xxx.xxx.xxx.xx (mysql-5.7.24 ndb-7.5.12, Nodegroup: 0, *)
id=3 @xxx.xxx.xxx.xx (mysql-5.7.24 ndb-7.5.12, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @xxx.xxx.xxx.xx (mysql-5.7.24 ndb-7.5.12)

[mysqld(API)] 1 node(s)
id=4 @xxx.xxx.xxx.xx (mysql-5.7.24 ndb-7.5.12)


sqlnode:
[mysqld]
ndbcluster
lc-messages-dir=/sites/app/APP/mysql/share/
lc-messages=en_US
[mysql_cluster]
ndb-connectstring=XXX.XXX.XXX.XX

ndb data nodes:
[mysqld]
ndbcluster
[mysql_cluster]
ndb-connectstring=XXX.XXX.XXX.XX

management node:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2
DataMemory=200M
IndexMemory=50M
ServerPort=2202

[ndb_mgmd]
# Management process options:
HostName=XXX.XXX.XXX.XX
DataDir=/sites/app/servers/mgnd/data

[ndbd]
# Options for data node "A":
HostName=XXX.XXX.XXX.XX
NodeId=2
DataDir=/sites/app/servers/datanode1/data/

[ndbd]
# Options for data node "B":
HostName=XXX.XXX.XXX.XX
NodeId=3
DataDir=/sites/app/servers/datanode2/data/

[mysqld]
# SQL node options:
HostName=XXX.XXX.XXX.XX
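
A hedged note, assuming this is what is happening here: CREATE DATABASE and the table metadata always live in each SQL node's local datadir; only tables created with ENGINE=NDBCLUSTER are actually stored in the ndbd data nodes, and without that clause a table silently lands in the SQL node's default engine (InnoDB). A quick check:

CREATE DATABASE IF NOT EXISTS demo;
-- stays local to this SQL node (default engine):
CREATE TABLE demo.local_only (id INT PRIMARY KEY);
-- stored in the NDB data nodes and visible from every SQL node:
CREATE TABLE demo.in_cluster (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;

Setting default_storage_engine=NDBCLUSTER in the sqlnode's [mysqld] section makes NDB the default so the clause can't be forgotten.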

NDB Cluster 8.0 Table Replication (no replies)

Hi,

I have a read-only table that I want replicated across all data nodes within one NDB Cluster. I have partitioned other tables according to the documentation. The "NoOfReplicas" parameter seems to replicate everything across all data nodes. Is there a way to replicate just this read-only table?
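
If you are on NDB 7.5 or later (which NDB 8.0 is), "fully replicated" tables sound like what you're describing; a hedged sketch of the syntax, with a placeholder table:

CREATE TABLE lookup_codes (
    code INT NOT NULL PRIMARY KEY,
    label VARCHAR(64)
) ENGINE=NDBCLUSTER
  COMMENT="NDB_TABLE=FULLY_REPLICATED=1";

With that comment option the table keeps a full copy in every node group, which suits read-mostly lookup tables; NoOfReplicas on its own only controls the number of copies within each node group, not replication across all of them.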

NDB Cluster circular replication (no replies)

Hi everyone,
I am trying a POC of NDB circular replication (dual channel) with 3 clusters, where every master (SQL node) is also a slave. Please find the per-cluster environment details below:

3 Node Cluster
==============
1. Host A with management node.
2. Host B with SQL/DATANODE.
3. Host C with SQL/DATANODE.

my.cnf parameters for slave/binlog.
server-id=6 #THIS ID VARIES ON ALL SERVERS
log-bin=mysql-bin
binlog_format = MIXED
expire_logs_days = 7
max_binlog_size = 100M
max_allowed_packet = 32M
slave-exec-mode=IDEMPOTENT #ADDED THIS FROM A BLOGPOST
log-slave-updates=true #ADDED THIS FROM A BLOGPOST; earlier I kept it as ON instead of true.


Problem 1:
DB creation logging to Cluster1:
================================
If I create a database on the 1st cluster, the mysqld log on either node of the 3rd cluster shows "database already exists". Sometimes it shows up on the 2nd node of the 1st cluster, most probably because node 2 of the 1st cluster is a slave of node 2 of the 3rd cluster.

Problem 2:
The mysqld.log on cluster 1 is constantly being flooded with this error:
2018-11-10T16:49:39.602575Z 58 [Warning] Slave SQL for channel '': Could not execute Write_rows event on table samad.t1; Got temporary error 266 'Time-out in NDB, probably caused by deadlock' from NDB, Error_code: 1297; Lock wait timeout exceeded; try restarting transaction, Error_code: 1205; handler error HA_ERR_LOCK_WAIT_TIMEOUT; the event's master log mysql-bin.000001, end_log_pos 21724247, Error_code: 1205

I have only 1 table, t1; please find its structure below.

show create table samad.t1;
---------------------------------+
| t1 | CREATE TABLE `t1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`changed_on` datetime DEFAULT NULL,
`client_id` varchar(75) DEFAULT NULL,
`code_str` varchar(50) DEFAULT NULL,
`created_on` datetime DEFAULT NULL,
`is_used` bit(1) DEFAULT NULL,
`msisdn` varchar(20) DEFAULT NULL,
`nonce` varchar(255) DEFAULT NULL,
`redirect_uri` varchar(255) DEFAULT NULL,
`response_type` varchar(25) DEFAULT NULL,
`scope` varchar(25) DEFAULT NULL,
`state` varchar(125) DEFAULT NULL,
`correlation_id` varchar(25) DEFAULT NULL,
`is_new_user` bit(1) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=ndbcluster AUTO_INCREMENT=62 DEFAULT CHARSET=latin1


This table has only 61 rows, which I've added using different SQL nodes to check the ring replication.

Kindly advise if I am missing any parameter in "my.cnf" for ring replication, or whether it is something else.

Regards
Samad
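
A hedged diagnostic that may help separate the two problems: compare what each SQL node in the ring has actually applied, and what its slave thread last complained about, on every cluster:

-- on each SQL node acting as a slave
SHOW SLAVE STATUS\G                      -- Last_SQL_Error and Exec_Master_Log_Pos per channel
SELECT * FROM mysql.ndb_apply_status;    -- which epochs from which server_id this cluster has applied

If the same epochs keep being re-applied from two directions, that points at the ring wiring rather than at the table itself.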

NDB pioneers, would you like to tell me how to check the free space left in a tablespace's data file? (no replies)

NDB pioneers, would you like to tell me how to check the free space left in a tablespace's data file?
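
A hedged sketch of the usual way, via INFORMATION_SCHEMA.FILES (the tablespace name is a placeholder):

SELECT FILE_NAME,
       TOTAL_EXTENTS * EXTENT_SIZE AS total_bytes,
       FREE_EXTENTS  * EXTENT_SIZE AS free_bytes
FROM   INFORMATION_SCHEMA.FILES
WHERE  TABLESPACE_NAME = 'ts_1'
  AND  FILE_TYPE = 'DATAFILE';

FREE_EXTENTS times EXTENT_SIZE gives the unused bytes left in each data file of the tablespace.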

I need help finding a bug in NDB Cluster 7.5.11 (no replies)

I need help finding a bug in NDB Cluster 7.5.11; the logs are below:
Node 23: Stall LCP: current stall time: 0 secs, max wait time:11 secs
2018-11-13 15:01:22 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4043 started. Keep GCI = 3283088 oldest restorable GCI = 3283119
2018-11-13 15:03:33 [MgmtSrvr] INFO -- Node 23: LDM(0): Completed LCP, #frags = 1152 #records = 21314442, #bytes = 4108609828
2018-11-13 15:03:33 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4043 completed
2018-11-13 15:03:34 [MgmtSrvr] INFO -- Node 23: Stall LCP, LCP time = 131 secs, wait for Node24, state Synchronize start node with live nodes
2018-11-13 15:03:34 [MgmtSrvr] INFO -- Node 23: Stall LCP: current stall time: 0 secs, max wait time:9 secs
2018-11-13 15:03:43 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4044 started. Keep GCI = 3283166 oldest restorable GCI = 3283149
2018-11-13 15:03:45 [MgmtSrvr] ALERT -- Node 24: Forced node shutdown completed. Occured during startphase 5. Caused by error 2303: 'System error, node killed during node restart by other node(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.
2018-11-13 15:03:45 [MgmtSrvr] ALERT -- Node 23: Node 24 Disconnected
2018-11-13 15:03:45 [MgmtSrvr] INFO -- Node 23: Communication to Node 24 closed
2018-11-13 15:03:45 [MgmtSrvr] ALERT -- Node 23: Network partitioning - arbitration required
2018-11-13 15:03:45 [MgmtSrvr] INFO -- Node 23: President restarts arbitration thread [state=7]
2018-11-13 15:03:45 [MgmtSrvr] ALERT -- Node 22: Node 24 Disconnected
2018-11-13 15:03:45 [MgmtSrvr] ALERT -- Node 23: Arbitration won - positive reply from node 22
2018-11-13 15:03:45 [MgmtSrvr] INFO -- Node 23: NR Status: node=24,OLD=Synchronize start node with live nodes,NEW=Node failed, fail handling on
2018-11-13 15:03:45 [MgmtSrvr] INFO -- Node 23: Removed lock for node 24
2018-11-13 15:03:45 [MgmtSrvr] INFO -- Node 23: DICT: remove lock by failed node 24 for NodeRestart
2018-11-13 15:03:45 [MgmtSrvr] INFO -- Node 23: DICT: unlocked by node 24 for NodeRestart
2018-11-13 15:03:46 [MgmtSrvr] INFO -- Node 23: Started arbitrator node 22 [ticket=f07b00582279a5e5]
2018-11-13 15:04:14 [MgmtSrvr] WARNING -- Node 23: Failure handling of node 24 has not completed in 29 seconds - state = 6
2018-11-13 15:04:14 [MgmtSrvr] INFO -- Node 23: NF Node 24 tc: 1 lqh: 1 dih: 0 dict: 1 recNODE_FAILREP: 1
2018-11-13 15:04:14 [MgmtSrvr] INFO -- Node 23: m_NF_COMPLETE_REP: [SignalCounter: m_count=1 0000000000800000] m_nodefailSteps: 00000002
2018-11-13 15:04:25 [MgmtSrvr] INFO -- Node 23: NR Status: node=24,OLD=Node failed, fail handling ongoing,NEW=Node failure handling complete
2018-11-13 15:04:25 [MgmtSrvr] INFO -- Node 23: Communication to Node 24 opened
2018-11-13 15:05:46 [MgmtSrvr] INFO -- Node 23: LDM(0): Completed LCP, #frags = 1152 #records = 21314469, #bytes = 4108625512
2018-11-13 15:05:46 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4044 completed
2018-11-13 15:05:46 [MgmtSrvr] INFO -- Node 23: Stall LCP, LCP time = 122 secs, wait for Node24, state Node failure handling complete
2018-11-13 15:05:46 [MgmtSrvr] INFO -- Node 23: Stall LCP: current stall time: 0 secs, max wait time:9 secs
2018-11-13 15:05:55 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4045 started. Keep GCI = 3283235 oldest restorable GCI = 3283237
2018-11-13 15:09:42 [MgmtSrvr] INFO -- Node 23: LDM(0): Completed LCP, #frags = 1152 #records = 21314480, #bytes = 4108632000
2018-11-13 15:09:42 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4045 completed
2018-11-13 15:09:43 [MgmtSrvr] INFO -- Node 23: Stall LCP, LCP time = 226 secs, wait for Node24, state Node failure handling complete
2018-11-13 15:09:43 [MgmtSrvr] INFO -- Node 23: Stall LCP: current stall time: 0 secs, max wait time:16 secs
2018-11-13 15:09:58 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4046 started. Keep GCI = 3283299 oldest restorable GCI = 3283295
2018-11-13 15:13:45 [MgmtSrvr] INFO -- Node 23: LDM(0): Completed LCP, #frags = 1152 #records = 21314520, #bytes = 4108655412
2018-11-13 15:13:45 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4046 completed
2018-11-13 15:13:46 [MgmtSrvr] INFO -- Node 23: Stall LCP, LCP time = 226 secs, wait for Node24, state Node failure handling complete
2018-11-13 15:13:46 [MgmtSrvr] INFO -- Node 23: Stall LCP: current stall time: 0 secs, max wait time:16 secs
2018-11-13 15:14:01 [MgmtSrvr] INFO -- Node 23: Local checkpoint 4047 started. Keep GCI = 3283417 oldest restorable GCI = 3283412


This bug happens after the steps below:
1. I have an NDB Cluster with 2 data nodes and 2 SQL nodes on CentOS 6.8; node 24's hard disk had little free space left.
2. I then stopped node 24 with the command "24 stop" in the ndb_mgm console.
3. I then used pvcreate and other commands to extend the root filesystem's LVM size.
4. After I had succeeded in extending the disk space, I started ndbd without the --initial option, but got the error: "startphase 5 error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'."
5. I then started node 24 with ndbd's --initial option, but got the error log shown above.
6. Sorry, I should add that I put these 3 lines into config.ini:
TimeBetweenLocalCheckpoints=10
#not work NoOfFragmentLogFiles=32
#ok MaxNoOfExecutionThreads=6
to solve error 2355, but I forgot to restart the other data node; I only restarted the management node.
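
A hedged note on step 6: config.ini edits only take effect after the management server re-reads the file and the data nodes are restarted against the new configuration, so something roughly like this is needed (the config path is a placeholder, node IDs are from the log above):

# on the management host, after stopping the running ndb_mgmd:
ndb_mgmd -f /path/to/config.ini --reload
# then restart each data node one at a time so both pick up the new config;
# wait for node 23 to report "started" in ndb_mgm before touching node 24
ndb_mgm -e "23 restart"
ndb_mgm -e "24 restart"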


Could an NDB Cluster pioneer please help me solve this quickly?

DDL resulting in "errno: 708 - Unknown error 708" (no replies)

Version: 5.6.11-ndb-7.3.2-cluster-gpl

I am trying some DDL on an NDB table. At the end of the DDL I consistently get:

ERROR 1025 (HY000): Error on rename of './ndb/fl_state' to './ndb/#sql2-1aa0-2ed2bb' (errno: 708 - Unknown error 708)

The DDL is as follows:

ALTER TABLE ndb.fl_state MODIFY mc_odo DECIMAL(12,3) DEFAULT NULL, MODIFY mc_fuel DECIMAL(11,2) DEFAULT NULL;

I have tried on several nodes and this seems consistent. Any ideas how I can get past this?

Regards,

Ben
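
A hedged first step: the numeric NDB code behind "Unknown error 708" can be decoded on the command line, which usually gives a more actionable message than the handler error:

# on any host with the cluster binaries installed (newer releases ship ndb_perror instead)
perror --ndb 708

If the decoded message turns out to be about running out of attribute metadata records, MaxNoOfAttributes in config.ini is the usual knob, but check the decoded text first rather than taking my word for it.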

MySQL NDB Cluster row level locks and write scalability (no replies)


Is NDB 7.5.11's generic Linux binary a bad one? (no replies)

Dear NDB genius engineers,

Is NDB 7.5.11's generic Linux binary a bad one? After I inserted several million rows into 2 NDB tables, I got error logs on data node 23 as below:
2018-11-17 20:38:17 [ndbd] INFO -- Watchdog: User time: 122 System time: 173628
2018-11-17 20:38:17 [ndbd] WARNING -- Ndb kernel thread 0 is stuck in: Job Handling elapsed=6071
2018-11-17 20:38:17 [ndbd] INFO -- Watchdog: User time: 122 System time: 173635
2018-11-17 20:38:17 [ndbd] WARNING -- Ndb kernel thread 0 is stuck in: Job Handling elapsed=6171
2018-11-17 20:38:30 [ndbd] INFO -- Watchdog: User time: 122 System time: 179714
2018-11-17 20:38:30 [ndbd] INFO -- Watchdog: User time: 122 System time: 179721
2018-11-17 20:38:30 [ndbd] WARNING -- Watchdog: Warning overslept 12751 ms, expected 100 ms.
2018-11-17 20:38:30 [ndbd] WARNING -- Ndb kernel thread 0 is stuck in: Job Handling elapsed=18922
2018-11-17 20:38:30 [ndbd] INFO -- Watchdog: User time: 122 System time: 179721
2018-11-17 20:38:31 [ndbd] INFO -- Received signal 6. Running error handler.
2018-11-17 20:38:31 [ndbd] INFO -- Child process terminated by signal 6
2018-11-17 20:38:31 [ndbd] ALERT -- Node 23: Forced node shutdown completed. Occured during startphase 0. Initiated by signal 6.


The management node's log shows:
2018-11-17 20:33:54 [MgmtSrvr] INFO -- Nodeid 23 allocated for NDB at 192.168.70.13
2018-11-17 20:33:55 [MgmtSrvr] INFO -- Node 22: Node 23 Connected
2018-11-17 20:34:05 [MgmtSrvr] INFO -- Alloc node id 24 failed, no new president yet
2018-11-17 20:34:05 [MgmtSrvr] INFO -- Nodeid 24 allocated for NDB at 192.168.70.14
2018-11-17 20:34:18 [MgmtSrvr] INFO -- Node 22: Node 24 Connected
2018-11-17 20:37:34 [MgmtSrvr] ALERT -- Node 22: Node 23 Disconnected
2018-11-17 20:38:13 [MgmtSrvr] ALERT -- Node 23: Forced node shutdown completed. Occured during startphase 0. Initiated by signal 6.
2018-11-17 20:41:04 [MgmtSrvr] ALERT -- Node 22: Node 24 Disconnected
2018-11-17 20:41:22 [MgmtSrvr] INFO -- Node 22: Node 24 Connected
2018-11-17 20:41:23 [MgmtSrvr] INFO -- Node 24: Communication to Node 23 opened
2018-11-17 20:41:23 [MgmtSrvr] INFO -- Node 24: Waiting 30 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:27 [MgmtSrvr] INFO -- Node 24: Waiting 27 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:30 [MgmtSrvr] INFO -- Node 24: Waiting 24 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:33 [MgmtSrvr] INFO -- Node 24: Waiting 21 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:36 [MgmtSrvr] INFO -- Node 24: Waiting 18 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:39 [MgmtSrvr] INFO -- Node 24: Waiting 15 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:42 [MgmtSrvr] INFO -- Node 24: Waiting 12 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:45 [MgmtSrvr] INFO -- Node 24: Waiting 9 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:48 [MgmtSrvr] INFO -- Node 24: Waiting 6 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]
2018-11-17 20:41:51 [MgmtSrvr] INFO -- Node 24: Waiting 3 sec for nodes 23 to connect, nodes [ all: 23 and 24 connected: 24 no-wait: ]





The config file for my 4-node cluster (2 data nodes) is below:
DataDir=/usr/local/mysqlLinJiaXin/ndbdata
#1117 DataMemory=8000M
DataMemory=27212M
#IndexMemory=1000M
IndexMemory=2048M
##BackupMemory: 64M
##added by ljx
#add 1113
#TimeBetweenWatchDogCheck=60000
#TransactionDeadlockDetectionTimeout=5000
#LcpScanProgressTimeout=328
#1110 ChaYiYiBiaoZheng
#TimeBetweenLocalCheckpoints=10
#not work NoOfFragmentLogFiles=32
#ok MaxNoOfExecutionThreads=6
MaxNoOfExecutionThreads=10
DiskPageBufferMemory=160M
BackupDataDir=/usr/local/mysqlLinJiaXin/ndbBack
BackupDataBufferSize=160M
BackupLogBufferSize=32M
BackupMemory=192M
BackupWriteSize=2048K
BackupMaxWriteSize=8M
LockPagesInMainMemory=0
#MHX LockExecuteThreadToCPU=0
#MHX LockMaintThreadsToCPU=1
RealtimeScheduler=1
#1106Change to smaller than 851798
MaxNoOfConcurrentTransactions: 158098
#1106Change to smaller than 8517980
MaxNoOfConcurrentOperations: 180980
SchedulerExecutionTimer=10
SchedulerSpinTimer=100
#CompressedLCP=1
#CompressedBackup=1
#Enabling CompressedLCP and CompressedBackup causes, respectively, local
## Transaction Parameters #
#MaxNoOfConcurrentTransactions: 4096
#MaxNoOfConcurrentOperations: 100000
#1106Change to smaller than 110000
MaxNoOfLocalOperations: 325980
MaxNoOfTables = 1024
MaxNoOfAttributes = 100000
MaxNoOfOrderedIndexes = 10000

[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
#added by ljx
#SendBufferMemory=2M
#ReceiveBufferMemory=2M

[NDB_MGMD]
Nodeid=22
#management node server
HostName=192.168.70.12
PortNumber=8518
# Storage Engines
DataDir=/usr/local/mysqlLinJiaXin/mgmdata

[NDBD]
Nodeid=23
#IP address of MySQL cluster db1
HostName=192.168.70.13

[NDBD]
Nodeid=24
#IP address of MySQL cluster db2
HostName=192.168.70.14

[MYSQLD]
Nodeid=25
HostName=192.168.70.13

[MYSQLD]
Nodeid=26
HostName=192.168.70.14
[MYSQLD]

Has your 7.5.12 solved the above problem?
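
If this does end up being filed as a bug, the ndb_error_reporter utility bundles the logs and trace files from every node listed in config.ini into a single archive, which makes watchdog crashes like this much easier for others to analyse; a rough sketch with a placeholder path:

# uses ssh/scp to pull the error and trace logs from each node into one archive
ndb_error_reporter /path/to/config.ini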

SQL Node not showing all Databases (no replies)

I just replaced a server that runs one of my MySQL SQL nodes. The new server is running CentOS 7, whereas the previous one had CentOS 6, but the MySQL Cluster version is the same (7.5.12). The new server is using the same my.cnf as the old one, but when I start mysqld, I can't see all of the NDB Cluster databases. I can see ndbinfo, and ndb_mgm shows the SQL node connected. Even logging in as root does not show all of the databases. Two databases are missing and, unfortunately, those are the ones I need to use. Does anyone have any advice on how to resolve this?

Thanks much!
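
A hedged suggestion based on how table discovery works on 7.5: the NDB tables themselves live in the data nodes, but the database (schema) has to exist on each SQL node before the tables inside it become visible, and a freshly installed mysqld knows nothing about databases created elsewhere. Re-issuing CREATE DATABASE on the new node is usually enough to trigger discovery:

-- on the new SQL node, for each missing database (names are placeholders)
CREATE DATABASE IF NOT EXISTS missing_db;
SHOW TABLES IN missing_db;   -- the NDB tables should now be discovered from the data nodes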

Management Server Upgrade doesn't work correctly (no replies)

I am trying to upgrade a MySQL Cluster from version 7.5.7 to 7.6.8. According to the rolling upgrade instructions, the management node must be upgraded first.

I stop the management node, uninstall the old RPM, install the new RPM and restart it.

So far so good - unfortunately both the data nodes and the MySQL API nodes no longer connect to the management server:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=11 (not connected, accepting connect from datanode1d)
id=12 (not connected, accepting connect from datanode2d)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.1.1.45 (mysql-5.7.24 ndb-7.6.8)

[mysqld(API)] 2 node(s)
id=21 (not connected, accepting connect from apinode1d)
id=22 (not connected, accepting connect from apinode2d)

These messages are repeated endlessly in the log file of the management node:
2018-11-28 10:22:08 [MgmtSrvr] WARNING -- Failed to convert connection from '10.1.1.72:36378' to transporter: line: 457 : Incorrect reply from client: >11 1
<, node: 11
2018-11-28 10:22:08 [MgmtSrvr] WARNING -- Failed to convert connection from '10.1.1.73:45086' to transporter: line: 457 : Incorrect reply from client: >12 1
<, node: 12
2018-11-28 10:22:08 [MgmtSrvr] WARNING -- Failed to convert connection from '10.1.1.72:36380' to transporter: line: 457 : Incorrect reply from client: >11 1
<, node: 11
2018-11-28 10:22:08 [MgmtSrvr] WARNING -- Failed to convert connection from '10.1.1.73:45088' to transporter: line: 457 : Incorrect reply from client: >12 1
<, node: 12
2018-11-28 10:22:08 [MgmtSrvr] WARNING -- Failed to convert connection from '10.1.1.72:36382' to transporter: line: 457 : Incorrect reply from client: >11 1
<, node: 11
2018-11-28 10:22:08 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 10.1.1.75. Returned error: 'Id 22 already allocated by another node.'
2018-11-28 10:22:08 [MgmtSrvr] WARNING -- Failed to convert connection from '10.1.1.73:45090' to transporter: line: 457 : Incorrect reply from client: >12 1

After a downgrade back to 7.5.7, all nodes connect again...

ndbcluster table can't support more than 51 text columns? (no replies)

Can an ndbcluster table not support more than 51 text columns?
I created a 5-node cluster which has 1 ndb_mgmd node, 2 ndbd nodes, and 2 SQL nodes.
When I create a table which has more than 51 text columns, it appears not to be supported! The error is like below:
ERROR 1031 (HY000) at line 141: Table storage engine for 'jc_acquisition' doesn't have this option
My SQL statement is like this:
CREATE TABLE `test` (
`SEQ_ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
`DATA_1` text,
`DATA_2` text,
`DATA_3` text,
... only up to 51 text/blob columns are supported here!
`DATA_51` text,
PRIMARY KEY (`SEQ_ID`)
) ENGINE=ndbcluster AUTO_INCREMENT=9 DEFAULT CHARSET=utf8;

Who can help me? Thanks!
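
ERROR 1031 is just the generic handler message; issued immediately after the failed CREATE TABLE on the same connection, SHOW WARNINGS usually surfaces the ndbcluster-specific reason (typically a per-row or per-table limit), which tells you which limit is actually being hit:

-- run right after the CREATE TABLE fails, on the same session
SHOW WARNINGS;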

Secondary indexes and stats [NDB-7.4] (no replies)

Hello,

Using MySQL version "5.6.37-ndb-7.4.16-cluster-gpl-log", it appears that the query execution plan is not using secondary indexes.

I can see that for all the secondary indexes created, the cardinality is NULL (unlike InnoDB, where I can see the cardinality calculated just after index creation).

Is there an option to update the stats automatically?

Is there a restriction on secondary indexes with this NDB version?

Thanks for your help.
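
As far as I know, NDB in this version does not maintain index statistics automatically the way InnoDB does; running ANALYZE TABLE populates them, after which the optimizer can start choosing the secondary indexes. A quick sketch with placeholder names:

ANALYZE TABLE mydb.mytable;
-- then re-check the plan
EXPLAIN SELECT * FROM mydb.mytable WHERE indexed_col = 42;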