Channel: MySQL Forums - NDB clusters

MySQL Cluster data node backup (1 reply)

Hi,
This is my cluster:

Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.9.40.58 (mysql-5.6.21 ndb-7.3.7, single user mode, Nodegroup: 0, *)
id=10 @10.9.40.66 (mysql-5.6.21 ndb-7.3.7, single user mode, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.9.40.34 (mysql-5.6.21 ndb-7.3.7)

[mysqld(API)] 2 node(s)
id=3 @10.9.40.42 (mysql-5.6.21 ndb-7.3.7)
id=11 @10.9.40.50 (mysql-5.6.21 ndb-7.3.7)

I want to run a test backup.
1. ndb_mgm -e "START BACKUP" from the management node:
Connected to Management Server at: localhost:1186
Waiting for completed, this may take several minutes
Node 2: Backup 1 started from node 1
Node 2: Backup 1 started from node 1 completed
StartGCP: 2088 StopGCP: 2091
#Records: 2053 #LogRecords: 0
Data: 50312 bytes Log: 0 bytes

2. I killed the ndbd processes on the data nodes, and then on both data nodes I ran the command: ndbd --initial.

3. Directory listing from a data node:
/usr/local/mysql/data/BACKUP# ls -al
total 24
drwxr-x--- 6 root root 4096 nov. 12 14:18 .
drwxr-xr-x 4 root root 4096 nov. 11 12:15 ..
drwxr-x--- 2 root root 4096 nov. 12 14:18 BACKUP-1
drwxr-x--- 2 root root 4096 nov. 12 12:40 BACKUP-3
drwxr-x--- 2 root root 4096 nov. 12 12:41 BACKUP-4
drwxr-x--- 2 root root 4096 nov. 12 12:53 BACKUP-5

/usr/local/mysql/data/BACKUP/BACKUP-1# ls -al
total 56
drwxr-x--- 2 root root 4096 nov. 12 14:18 .
drwxr-x--- 7 root root 4096 nov. 12 14:24 ..
-rw-r--r-- 1 root root 25808 nov. 12 14:18 BACKUP-1-0.2.Data
-rw-r--r-- 1 root root 15360 nov. 12 14:18 BACKUP-1.2.ctl
-rw-r--r-- 1 root root 52 nov. 12 14:18 BACKUP-1.2.log


4. Now I want to restore from my backup, but how do I do that?
I've tried, from the management node:

cd /usr/src/mysql-mgm/mysql-cluster-gpl-7.3.7-linux-glibc2.5-x86_64/
/usr/src/mysql-mgm/mysql-cluster-gpl-7.3.7-linux-glibc2.5-x86_64/bin# ./ndb_restore -c voip-datanode-el -m -n2 -b2 -r --backup_path=/usr/local/mysql/data/BACKUP/BACKUP-1/
Nodeid = 2
Backup Id = 2
backup path = /usr/local/mysql/data/BACKUP/BACKUP-1/
Opening file '/usr/local/mysql/data/BACKUP/BACKUP-1/BACKUP-2.2.ctl'
Failed to read /usr/local/mysql/data/BACKUP/BACKUP-1/BACKUP-2.2.ctl


NDBT_ProgramExit: 1 - Failed


Please help :)
Can someone give me an easy tutorial for this case?
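For reference, ndb_restore derives the control-file name from the backup id, so the -b2 above makes it look for BACKUP-2.2.ctl inside the BACKUP-1 directory. A rough sketch of a matching invocation, assuming backup id 1 and the node ids 2 and 10 shown above (mgmt-host is a placeholder for the management server; the cluster must be running with a free API slot):

# Metadata (-m) is restored once, from the first node only.
./ndb_restore -c mgmt-host -n 2 -b 1 -m -r --backup_path=/usr/local/mysql/data/BACKUP/BACKUP-1/
# The second node's data is restored from that node's own backup files.
./ndb_restore -c mgmt-host -n 10 -b 1 -r --backup_path=/usr/local/mysql/data/BACKUP/BACKUP-1/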

Error 2352: Ndbd file system inconsistency error (no replies)

I have a simple MySQL cluster with 2 nodes (ids 3 and 4) that was working perfectly. But yesterday all queries started to fail, so I rebooted both nodes.

But now, on node 3, the ndbd daemon fails to start with error 2352. This is the full log:

2014-11-12 18:01:04 [MgmtSrvr] INFO -- Node 3: Start phase 0 completed
2014-11-12 18:01:04 [MgmtSrvr] INFO -- Node 3: Communication to Node 4 opened
2014-11-12 18:01:04 [MgmtSrvr] INFO -- Node 3: Waiting 30 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:07 [MgmtSrvr] INFO -- Node 3: Waiting 28 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:10 [MgmtSrvr] INFO -- Node 3: Waiting 25 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:13 [MgmtSrvr] INFO -- Node 3: Waiting 22 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:16 [MgmtSrvr] INFO -- Node 3: Waiting 19 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:19 [MgmtSrvr] INFO -- Node 3: Waiting 16 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:22 [MgmtSrvr] INFO -- Node 3: Waiting 13 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:25 [MgmtSrvr] INFO -- Node 3: Waiting 10 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:28 [MgmtSrvr] INFO -- Node 3: Waiting 7 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:31 [MgmtSrvr] INFO -- Node 3: Waiting 4 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:34 [MgmtSrvr] INFO -- Node 3: Waiting 1 sec for nodes 4 to connect, nodes [ all: 3 and 4 connected: 3 no-wait: ]
2014-11-12 18:01:37 [MgmtSrvr] INFO -- Node 3: Waiting 58 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:01:40 [MgmtSrvr] INFO -- Node 3: Waiting 55 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:01:43 [MgmtSrvr] INFO -- Node 3: Waiting 52 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:01:46 [MgmtSrvr] INFO -- Node 3: Waiting 49 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:01:49 [MgmtSrvr] INFO -- Node 3: Waiting 46 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:01:52 [MgmtSrvr] INFO -- Node 3: Waiting 43 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:01:55 [MgmtSrvr] INFO -- Node 3: Waiting 40 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:01:58 [MgmtSrvr] INFO -- Node 3: Waiting 37 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:01 [MgmtSrvr] INFO -- Node 3: Waiting 34 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:04 [MgmtSrvr] INFO -- Node 3: Waiting 31 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:07 [MgmtSrvr] INFO -- Node 3: Waiting 28 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:10 [MgmtSrvr] INFO -- Node 3: Waiting 25 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:13 [MgmtSrvr] INFO -- Node 3: Waiting 22 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:16 [MgmtSrvr] INFO -- Node 3: Waiting 19 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:19 [MgmtSrvr] INFO -- Node 3: Waiting 16 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:22 [MgmtSrvr] INFO -- Node 3: Waiting 13 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:25 [MgmtSrvr] INFO -- Node 3: Waiting 10 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:28 [MgmtSrvr] INFO -- Node 3: Waiting 7 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:31 [MgmtSrvr] INFO -- Node 3: Waiting 4 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:34 [MgmtSrvr] INFO -- Node 3: Waiting 1 sec for non partitioned start, nodes [ all: 3 and 4 connected: 3 missing: 4 no-wait: ]
2014-11-12 18:02:37 [MgmtSrvr] INFO -- Node 3: Start potentially partitioned with nodes 3 [ missing: 4 no-wait: ]
2014-11-12 18:02:37 [MgmtSrvr] INFO -- Node 3: CM_REGCONF president = 3, own Node = 3, our dynamic id = 0/1
2014-11-12 18:02:37 [MgmtSrvr] INFO -- Node 3: Start phase 1 completed
2014-11-12 18:02:37 [MgmtSrvr] INFO -- Node 3: Start phase 2 completed (system restart)
2014-11-12 18:02:37 [MgmtSrvr] INFO -- Node 3: Start phase 3 completed (system restart)
2014-11-12 18:02:37 [MgmtSrvr] INFO -- Node 3: Restarting cluster to GCI: 21050461
2014-11-12 18:02:37 [MgmtSrvr] INFO -- Node 3: Starting to restore schema
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: Restore of schema complete
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 8 done (sys/def/7/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 10 done (sys/def/9/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 12 done (sys/def/11/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 14 done (sys/def/13/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 16 done (sys/def/15/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 18 done (sys/def/17/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 20 done (sys/def/19/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 22 done (sys/def/21/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 24 done (sys/def/23/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 26 done (sys/def/25/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 28 done (sys/def/27/PRIMARY)
2014-11-12 18:02:40 [MgmtSrvr] INFO -- Node 3: DICT: activate index 30 done (sys/def/29/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 32 done (sys/def/31/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 34 done (sys/def/33/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 36 done (sys/def/35/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 38 done (sys/def/37/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 40 done (sys/def/39/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 42 done (sys/def/41/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 44 done (sys/def/43/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 46 done (sys/def/45/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 48 done (sys/def/47/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 49 done (sys/def/6/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 52 done (sys/def/51/ndb_index_stat_sample_x1)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 54 done (sys/def/53/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 56 done (sys/def/55/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 58 done (sys/def/57/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 60 done (sys/def/59/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 62 done (sys/def/61/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 64 done (sys/def/63/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 66 done (sys/def/65/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 68 done (sys/def/67/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 70 done (sys/def/69/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 72 done (sys/def/71/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 74 done (sys/def/73/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 76 done (sys/def/75/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 78 done (sys/def/77/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 80 done (sys/def/79/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 82 done (sys/def/81/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 84 done (sys/def/83/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 86 done (sys/def/85/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 88 done (sys/def/87/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 90 done (sys/def/89/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 92 done (sys/def/91/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 94 done (sys/def/93/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 96 done (sys/def/95/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 98 done (sys/def/97/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 100 done (sys/def/99/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 102 done (sys/def/101/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 104 done (sys/def/103/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 106 done (sys/def/105/PRIMARY)
2014-11-12 18:02:41 [MgmtSrvr] INFO -- Node 3: DICT: activate index 108 done (sys/def/107/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 110 done (sys/def/109/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 112 done (sys/def/111/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 114 done (sys/def/113/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 116 done (sys/def/115/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 118 done (sys/def/117/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 120 done (sys/def/119/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 122 done (sys/def/121/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 124 done (sys/def/123/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 126 done (sys/def/125/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 128 done (sys/def/127/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 130 done (sys/def/129/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 132 done (sys/def/131/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: DICT: activate index 134 done (sys/def/133/PRIMARY)
2014-11-12 18:02:42 [MgmtSrvr] INFO -- Node 3: Node: 3 StartLog: [GCI Keep: 21050317 LastCompleted: 21050461 NewestRestorable: 21050461]
2014-11-12 18:02:44 [MgmtSrvr] ALERT -- Node 3: Forced node shutdown completed. Occured during startphase 4. Caused by error 2352: 'Invalid LCP(Ndbd file system inconsistency error, please report a bug). Ndbd file system error, restart node initial'.
2014-11-12 18:02:44 [MgmtSrvr] ALERT -- Node 1: Node 3 Disconnected
Note that node 4 is currently powered off.

I've been searching for this, but every solution ends with using the --initial command-line option, which wipes all data on the node.

Is there no way to fix the file system inconsistency? I think node 4 is out of sync, so I'd prefer to keep node 3's data as the correct copy.

I can access the tables using mysql, so maybe I can take a mysqldump and then do the --initial start.

But I'd prefer to try to repair the inconsistent file system first.

Any ideas?
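If repairing the files turns out to be impossible, a minimal sketch of the mysqldump fallback mentioned above (mydb is a placeholder; assumes the SQL node still serves reads from node 3):

# Dump the data through a working SQL node first.
mysqldump --databases mydb > mydb.sql
# Then rebuild the broken data node's file system from scratch.
ndbd --initial
# Once the cluster is back up, reload:
mysql < mydb.sql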

Failed to recreate object 1009 during restart, error 21028 (no replies)

Hi,

I have an error and can't make it work. I was using Navicat for MySQL, and on an NDB table I removed a foreign key. The minute I clicked save, the cluster reset.

I have 4 data nodes, 2 management nodes, and 2 SQL nodes. Data is spread across 2 node groups (node ids 1-2 and 3-4).

My problem occurs on data nodes 1 and 2.

Here is a log of my problem. I use mysql-5.6.14 and ndb-7.3.3 on CentOS 6.


2014-11-21 16:31:03 [ndbd] INFO -- Start phase 3 completed
restartCreateObj(1) file: 1
restartCreateObj(994) file: 1
restartCreateObj(1009) file: 1
error: [ code: 21028 line: 24858 node: 1 count: 1 status: 0 key: 0 name: '' ]
2014-11-21 16:31:04 [ndbd] INFO -- Failed to recreate object 1009 during restart, error 21028.
2014-11-21 16:31:04 [ndbd] INFO -- DBDICT (Line: 4676) 0x00000000
2014-11-21 16:31:04 [ndbd] INFO -- Error handler restarting system
2014-11-21 16:31:04 [ndbd] INFO -- Error handler shutdown completed - exiting
2014-11-21 16:31:05 [ndbd] INFO -- Angel detected startup failure, count: 1
2014-11-21 16:31:05 [ndbd] ALERT -- Node 1: Forced node shutdown completed. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.


I looked at file 1009, and I can read the table name I changed using Navicat. I tried to remove the file, as I don't care about losing some info from that table, but then I get an error that a file is missing.

I also tried replacing that file with one from an old backup, but it shows the same error as above: Failed to recreate object 1009 during restart, error 21028.
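For what it's worth, the conventional way out of a corrupted schema file on a single data node is an initial restart of just that node, but only while its node-group peer is running, since the initial start wipes the node's files and copies the data back from the peer. A hedged sketch:

# On the broken node only, and only if the other node in its node group is up:
ndbd --initial

Since nodes 1 and 2 form one node group here, if both fail the same way, that group's data exists only in backups, so dump what you can first.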

SQL node does not restart after ndb_restore (no replies)

Hi,

I did a restore after an error that was almost impossible to recover from. I used my latest backup to recover, using: ndb_restore -r -m -n ID -B ID. I did it on my 4 nodes. In the management client, everything looks OK. If I call ndb_show_tables, I see a lot of stuff.

Now I started my SQL nodes. Boom! They can't start. I got:

Starting MySQL.. ERROR! The server quit without updating PID file (/usr/local/mysql/data/sql02.cubiculestudio.com.pid).

/usr/local/mysql/bin/mysqld: Incorrect information in file: './mysql/user.frm'
2014-11-21 20:10:29 5265 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist


I didn't change anything on my SQL node. So then I ran ./scripts/mysql_install_db --user=mysql

I tried starting the server again, and it starts!! Problem: I can't see any of my DBs on any data node. If I create a new DB with some ndbcluster engine tables, I can see those tables using ndb_show_tables. But if I look from the second SQL node, I can't see the newly created DB.

It looks like my SQL nodes can't sync.

Here is the error log after all that:

2014-11-21 23:00:47 2551 [Note] /usr/local/mysql/bin/mysqld: ready for connections.
Version: '5.6.14-ndb-7.3.3-cluster-gpl' socket: '/tmp/mysql.sock' port: 3306 MySQL Cluster Community Server (GPL)
2014-11-21 23:00:47 2551 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_schema
2014-11-21 23:00:47 2551 [Note] NDB Binlog: logging ./mysql/ndb_schema (UPDATED,USE_WRITE)
2014-11-21 23:00:47 2551 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_apply_status
2014-11-21 23:00:47 2551 [Note] NDB Binlog: logging ./mysql/ndb_apply_status (UPDATED,USE_WRITE)
2014-11-21 23:00:47 2551 [Note] NDB: Cleaning stray tables from database 'information_schema'
2014-11-21 23:00:47 2551 [Note] NDB: Cleaning stray tables from database 'ndbinfo'
2014-11-21 23:00:47 2551 [Note] NDB: Cleaning stray tables from database 'performance_schema'
2014-11-21 23:00:47 2551 [Note] NDB: Cleaning stray tables from database 'test'
2014-11-21 23:00:47 2551 [Note] NDB: Discovered remaining database 'test'
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.ndb_tables_priv_backup, discovering...
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.db_backup, discovering...
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.ndb_db_backup, discovering...
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.user_backup, discovering...
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.tables_priv_backup, discovering...
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.ndb_columns_priv_backup, discovering...
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.ndb_procs_priv_backup, discovering...
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.ndb_index_stat_head, discovering...
2014-11-21 23:00:47 2551 [Note] NDB: missing frm for mysql.ndb_index_stat_sample, discovering...
2014-11-21 23:00:48 2551 [Note] NDB: missing frm for mysql.ndb_user_backup, discovering...
2014-11-21 23:00:48 2551 [Note] NDB: missing frm for mysql.ndb_proxies_priv_backup, discovering...
2014-11-21 23:00:48 [NdbApi] INFO -- Flushing incomplete GCI:s < 4777/19
2014-11-21 23:00:48 [NdbApi] INFO -- Flushing incomplete GCI:s < 4777/19
2014-11-21 23:00:48 2551 [Note] NDB Binlog: starting log at epoch 4777/19
2014-11-21 23:00:48 2551 [Note] NDB Binlog: ndb tables writable


Thanks for your help!

PS, I run (mysql-5.6.21 ndb-7.3.7)
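One detail that may explain the invisible DBs: NDB tables are auto-discovered per table, but a database created while an SQL node was detached (or reinstalled, as after mysql_install_db) has to be created by hand on that node before discovery kicks in. A hedged sketch, with mydb as a placeholder name:

-- On the SQL node that cannot see the database:
CREATE DATABASE mydb;
-- The NDB tables inside it should then be discovered:
SHOW TABLES IN mydb;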

MySQL Cluster Slow Queries (no replies)

Hello,

First time poster here, so be gentle. I wonder if anyone has any advice regarding a situation I'm experiencing with MySQL Cluster 7.3.7. I'm using the cluster as the backend of a discussion forum. I just changed the architecture to what I thought was an improved and recommended set-up, but the results were discouraging. Please bear with me while I provide background:

The original setup was two servers, each running Apache, Tomcat, MySQL, and ndbd. I had two other computers set up as management nodes.

Even though we are advised not to combine things like this, the set-up worked fine for almost two years. But it had started to slow down, and I decided to separate the ndbd nodes. Therefore, I purchased two new servers and set things up like this:

Front end node 1: Apache, Tomcat, MySQL
Front end node 2: Apache, Tomcat, MySQL
Back end node 1: ndbmtd
Back end node 2: ndbmtd

All of these are connected by a copper gig switch. In this setup, the discussion forum app simply would not run. The queries were much too slow, even very basic SELECT statements. The front end nodes were not memory or CPU bound and had very light loads.

I then changed to this set-up:

Front end node 1: Apache, Tomcat
Front end node 2: Apache, Tomcat
Back end node 1: MySQL, ndbmtd
Back end node 2: MySQL, ndbmtd

This works very fast. To give a comparison, this query:

SELECT COUNT(1) FROM posts WHERE forum_id = 2

takes 129 seconds when run from MySQL on a front end node, compared to 1.2 seconds from MySQL on a data node. That is an astronomical difference. I don't see any network-related explanation, as all servers have their NICs configured correctly and there are good ping times between the servers. Anyway, Tomcat is now using the same connection to connect to MySQL with no problem.

Any idea what is going on here and what I can do about it?
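A gap that large usually points at per-batch round trips between mysqld and the data nodes rather than raw bandwidth. A hedged way to confirm is to compare the NDB API wait counters on a front end and a back end mysqld around the same query:

SHOW GLOBAL STATUS LIKE 'Ndb_api_wait_nanos_count%'; -- time spent waiting on data nodes
SHOW GLOBAL STATUS LIKE 'Ndb_api_table_scan_count%';
EXPLAIN SELECT COUNT(1) FROM posts WHERE forum_id = 2; -- check the chosen access path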

Help!!! What parameters should I modify?? (no replies)

cat config.ini
[ndbd default]
NoOfReplicas=2
DataMemory=210400M
IndexMemory=10048M
MaxNoOfConcurrentOperations=20000000
MaxNoOfConcurrentTransactions=50000
MaxNoOfTables=1024
MaxNoOfOrderedIndexes=8192
MaxNoOfUniqueHashIndexes=2048
MaxNoOfAttributes=32768
MaxNoOfTriggers=32768
FragmentLogFileSize=512M
InitFragmentLogFiles=SPARSE
NoOfFragmentLogFiles=512
[tcp default]
portnumber=2202
[ndb_mgmd]
hostname=192.168.10.2
datadir=/var/lib/mysql-cluster
LogDestination=FILE:filename=ndb_1_cluster.log,maxsize=10000000,maxfiles=6
[ndb_mgmd]
hostname=192.168.10.3
datadir=/var/lib/mysql-cluster
LogDestination=FILE:filename=ndb_2_cluster.log,maxsize=10000000,maxfiles=6
[ndbd]
hostname=192.168.10.9
datadir=/var/lib/mysql/data
[ndbd]
hostname=192.168.10.10
datadir=/var/lib/mysql/data
[mysqld]
hostname=192.168.10.2
[mysqld]
hostname=192.168.10.3
[mysqld]
[mysqld]
[mysqld]


cat my.cnf
[mysqld]
port=2274
ndbcluster
skip_name_resolve=on
slow_query_log=on
long_query_time=5
query_cache_type=1
max_connections=1000
lower_case_table_names=1
character_set_server=gbk
init_connect='SET NAMES gbk'
init_connect='set character_set_database=gbk'
default_storage_engine=ndbcluster
default_tmp_storage_engine=ndbcluster
ndb-connectstring=192.168.10.2,192.168.10.3
[mysql_cluster]
ndb-connectstring=192.168.10.2,192.168.10.3



The management node logs and the NDB node logs:
ndb-mgm1:
2014-12-02 10:42:00 [MgmtSrvr] ALERT -- Node 1: Node 3 Disconnected
2014-12-02 10:42:07 [MgmtSrvr] ALERT -- Node 1: Node 4 Disconnected
2014-12-02 10:42:08 [MgmtSrvr] INFO -- Node 1: Node 3 Connected
2014-12-02 10:42:08 [MgmtSrvr] INFO -- Node 3: Node 2: API mysql-5.6.11 ndb-7.3.2
2014-12-02 10:42:09 [MgmtSrvr] INFO -- Node 3: Started arbitrator node 1 [ticket=25f4000a6cc83faa]
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Communication to Node 5 opened
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Communication to Node 6 opened
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Node 6 Connected
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 1: Node 4 Connected
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Node 5 Connected
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 4: Node 2: API mysql-5.6.11 ndb-7.3.2
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 4: Node 5 Connected
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 4: Node 6: API mysql-5.6.11 ndb-7.3.2
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Node 6: API mysql-5.6.11 ndb-7.3.2
2014-12-02 10:42:12 [MgmtSrvr] INFO -- Node 3: Node 5: API mysql-5.6.11 ndb-7.3.2
2014-12-02 10:42:12 [MgmtSrvr] INFO -- Node 4: Node 5: API mysql-5.6.11 ndb-7.3.2

ndb-mgm2:
2014-12-02 10:42:00 [MgmtSrvr] ALERT -- Node 2: Node 3 Disconnected
2014-12-02 10:42:08 [MgmtSrvr] ALERT -- Node 2: Node 4 Disconnected
2014-12-02 10:42:08 [MgmtSrvr] INFO -- Node 2: Node 3 Connected
2014-12-02 10:42:09 [MgmtSrvr] INFO -- Node 3: Started arbitrator node 1 [ticket=25f4000a6cc83faa]
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Communication to Node 5 opened
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Communication to Node 6 opened
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Node 6 Connected
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 2: Node 4 Connected
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Node 5 Connected
2014-12-02 10:42:11 [MgmtSrvr] INFO -- Node 3: Node 6: API mysql-5.6.11 ndb-7.3.2
2014-12-02 10:42:12 [MgmtSrvr] INFO -- Node 3: Node 5: API mysql-5.6.11 ndb-7.3.2
2014-12-02 10:42:12 [MgmtSrvr] INFO -- Node 4: Node 5: API mysql-5.6.11 ndb-7.3.2

ndb-data1:
Backup : Excessive Backup/LCP write rate in last monitoring period - recorded = 12739708 bytes/s, configured = 10485760 bytes/s
Backup : Monitoring period : 1070 millis. Bytes written : 3407872. Max allowed : 3145728
LCP Frag watchdog : No progress on table 56, frag 0 for 20 s. 0 bytes remaining.
2014-12-02 10:41:54 [ndbd] INFO -- part: 3 : sum_outstanding: 1824kb avg_written: 256kb avg_elapsed: 6530ms time to complete: 46
lag_cnt: 0 => 2 retVal: 0
2014-12-02 10:42:02 [ndbd] WARNING -- Ndb kernel thread 0 is stuck in: Job Handling elapsed=100
2014-12-02 10:42:02 [ndbd] WARNING -- Time moved forward with 8249 ms
2014-12-02 10:42:02 [ndbd] INFO -- Watchdog: User time: 936098 System time: 206357
2014-12-02 10:42:02 [ndbd] WARNING -- timerHandlingLab now: 36184796018 sent: 36184787778 diff: 8240
2014-12-02 10:42:02 [ndbd] INFO -- part: 3 : time to complete: 4
2014-12-02 10:42:07 [ndbd] INFO -- Watchdog: User time: 936098 System time: 206358
2014-12-02 10:42:07 [ndbd] WARNING -- Watchdog: Warning overslept 8212 ms, expected 100 ms.
2014-12-02 10:42:07 [ndbd] WARNING -- Time moved forward with 5165 ms
2014-12-02 10:42:07 [ndbd] WARNING -- timerHandlingLab now: 36184801192 sent: 36184796027 diff: 5165
2014-12-02 10:42:07 [ndbd] INFO -- Lost arbitrator node 1 - process failure [state=6]
2014-12-02 10:42:07 [ndbd] INFO -- President restarts arbitration thread [state=1]
2014-12-02 10:42:07 [ndbd] WARNING -- Could not find an arbitrator, cluster is not partition-safe
2014-12-02 10:42:07 [ndbd] INFO -- Watchdog: User time: 936101 System time: 206364
2014-12-02 10:42:07 [ndbd] WARNING -- Watchdog: Warning overslept 5174 ms, expected 100 ms.
2014-12-02 10:42:09 [ndbd] INFO -- Started arbitrator node 1 [ticket=25f4000a6cc83faa]

ndb-data2:
2014-12-02 10:41:46 [ndbd] INFO -- part: 3 : time to complete: 11
2014-12-02 10:41:47 [ndbd] WARNING -- Ndb kernel thread 0 is stuck in: Job Handling elapsed=100
2014-12-02 10:41:47 [ndbd] INFO -- Watchdog: User time: 946546 System time: 228738
2014-12-02 10:41:47 [ndbd] WARNING -- timerHandlingLab now: 36183821249 sent: 36183819818 diff: 1431
2014-12-02 10:41:47 [ndbd] INFO -- part: 3 : time to complete: 3
2014-12-02 10:41:47 [ndbd] INFO -- Watchdog: User time: 946550 System time: 228739
2014-12-02 10:41:47 [ndbd] WARNING -- Watchdog: Warning overslept 1365 ms, expected 100 ms.
Backup : Excessive Backup/LCP write rate in last monitoring period - recorded = 14222967 bytes/s, configured = 10485760 bytes/s
Backup : Monitoring period : 1069 millis. Bytes written : 3801088. Max allowed : 3145728
2014-12-02 10:42:07 [ndbd] INFO -- Lost arbitrator node 1 - process failure [state=6]
2014-12-02 10:42:09 [ndbd] INFO -- Prepare arbitrator node 1 [ticket=25f4000a6cc83faa]
2014-12-02 10:42:09 [ndbd] WARNING -- President 3 proposed disconnected node 1 as arbitrator [ticket=25f4000a6cc83faa]. Cluster may be partially connected. Connected nodes: 3,4
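One concrete lead from the data node logs: "configured = 10485760 bytes/s" is the 10M default write budget for LCPs and backups, and the watchdog stalls line up with the nodes falling behind on checkpointing. A hedged config.ini sketch for ndb-7.3.x (the value is illustrative, not a tested recommendation):

[ndbd default]
# Raise the LCP/backup disk write budget from the 10M default.
DiskCheckpointSpeed=50M

RedoBuffer and the FragmentLogFile settings shown above interact with this, so change one thing at a time and watch for further "Excessive Backup/LCP write rate" messages.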

Is there "cluster mgm api" for java? (no replies)

I want to know if there is an NDB Cluster MGM API for Java. Is it ClusterJ or NdbJ or MgmJ? Where can I find the link/docs/source, etc.?
Thanks,

Hobby

ERROR 1296 (HY000): Got error 1229 'Too long frm data supplied' from NDBCLUSTER (2 replies)

Hi,

after a simple:

ALTER TABLE test_table add test_col tinyint(1) default '0';

I got

"ERROR 1296 (HY000): Got error 1229 'Too long frm data supplied' from NDBCLUSTER"

If I do
ALTER TABLE test_table add test_col tinyint(1)

it returns:
ERROR 1229 (HY000): Variable '' is a GLOBAL variable and should be set with SET GLOBAL



It only occurs in one of the cluster tables. A restart of the cluster doesn't help.
I get it in both my development and my production environment.

The table has 164 attributes. The MaxNoOfAttributes variable isn't the problem.

Please help.
Thanks in advance.
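For context, 'Too long frm data supplied' means the table's packed .frm metadata exceeds what the NDB dictionary will store, so the ALTER is rejected. A hedged workaround sketch (same table and column as above; forcing a copying ALTER sometimes regenerates smaller metadata, and trimming long table/column comments also shrinks the .frm):

ALTER TABLE test_table ALGORITHM=COPY, ADD COLUMN test_col TINYINT(1) DEFAULT '0';

If that still fails, the fallback is to dump, drop, and recreate the table.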

Mysqld Start failed. * 22: Error * The process has wrong type. Expected a DB process.: Permanent error: Application error (no replies)

Hi, I can't seem to start the mysqld (API) node. However, I can log into MySQL on this node. Can anyone help?

######### used this command to start sqld node 61
[root@SQL1 ~]# /usr/bin/mysqld_safe --defaults-file=/opt/SQL/61/my.cnf --user=mysql --explicit_defaults_for_timestamp &
[2] 2572
[root@SQL1 ~]# 141210 10:50:26 mysqld_safe Logging to '/opt/SQL/61/mysqld.61.err'.
141210 10:50:26 mysqld_safe A mysqld process already exists

[2]+ Exit 1 /usr/bin/mysqld_safe --defaults-file=/opt/SQL/61/my.cnf --user=mysql --explicit_defaults_for_timestamp

######### check if its started
[root@SQL1 ~]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=1 @192.168.168.179 (mysql-5.6.15 ndb-7.3.4, Nodegroup: 0, *)
id=2 @192.168.168.185 (mysql-5.6.15 ndb-7.3.4, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=51 @192.168.168.187 (mysql-5.6.15 ndb-7.3.4)

[mysqld(API)] 3 node(s)
id=61 (not connected, accepting connect from 192.168.168.187)
id=63 (not connected, accepting connect from 192.168.168.174)
id=65 (not connected, accepting connect from any host)

ndb_mgm> 61 start
Start failed.
* 22: Error
* The process has wrong type. Expected a DB process.: Permanent error: Application error

ndb_mgm>
[root@SQL1 ~]# ^C
[root@SQL1 ~]# mysql -u root -p -S /opt/SQL/61/mysql.socket
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.15-ndb-7.3.4-cluster-commercial-advanced MySQL Cluster Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
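A hedged reading of the two errors above: START in ndb_mgm applies only to DB (ndbd) processes - that is what "Expected a DB process" means - so an SQL node can never be started from the management client. And mysqld_safe refused because a mysqld is already running from that datadir, probably the very instance logged into below, just not registered as node 61. Note too that the show output says node 61 only accepts connections from 192.168.168.187 (the management host), so either the mysqld must run there or the hostname in config.ini needs fixing. The cluster options to confirm in /opt/SQL/61/my.cnf before restarting:

[mysqld]
ndbcluster
ndb-nodeid=61
ndb-connectstring=192.168.168.187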

ndb injector thread (1 reply)

Hi,

Where is the NDB injector thread located: in the SQL node, the ndbd node, or the mgmd node?

Please tell me.

Thanks

Lock wait timeout exceeded - MySQL 5.6.15-ndb-7.3.4 (3 replies)

Hi,

I am developing an application in VB.Net connecting to MySQL using MySQL ADO.Net; MySQL engine: ndbcluster.

I am regularly receiving "Lock wait timeout exceeded; try restarting transaction" OR "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond". I was able to trace the issue: it happens when multiple threads access the same row of a table; one thread tries to update the record while another tries to delete the same row. Both connections lock the same row (I am using ndbcluster, which does row-level locking).

In the VB.Net application I set the ADO CommandTimeout to 1 second; this does bring control back to the application immediately (as the timeout expires), and the application repeats the failed request, which works. But sometimes the ADO call returns only after a few seconds. Due to the CommandTimeout parameter, the actual error code is masked and I receive "Timeout expired" in the VB.Net application, whereas the watchdog application running alongside my VB.Net application captures the error messages mentioned above.

My questions:
1. How should this be handled gracefully to avoid the error completely?
2. Once the lock is encountered, execution pauses until the timeout error returns; what is the remedy for this (if #1 is handled, this would be fixed as well)?

Here is the VB.Net application log snippet, reflecting the exact timing of two separate threads operating on the same row:

2014-12-08 11:59:49.527 Worker_DB_6 ERROR: [5: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.] DELETE FROM liveusers WHERE Port = '1313' LIMIT 1

2014-12-08 11:59:49.527 Worker_DB_2 ERROR: [5: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.] UPDATE liveusers SET Req2Connect = 0 WHERE Port = '1313' LIMIT 1

The watchdog error log snippet:
MyApp.exe (1560): debug message: "Could not kill query, aborting connection. Exception was Unknown thread id: 76072792"

MyApp.exe (1560): debug message: "Could not kill query, aborting connection. Exception was Lock wait timeout exceeded; try restarting transaction"

MyApp.exe (1560): debug message: "Could not kill query, aborting connection. Exception was A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond"


Thanks in advance for any help!!!
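On the server side, the wait window behind "Lock wait timeout exceeded" in NDB is governed by TransactionDeadlockDetectionTimeout (milliseconds, default 1200) in config.ini. A hedged sketch - raising it only trades fast failures for longer stalls, so the usual cure is still a short application-side retry loop around the conflicting UPDATE/DELETE:

[ndbd default]
TransactionDeadlockDetectionTimeout=5000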

How to use NDB$EPOCH() and NDB$EPOCH_TRANS() (no replies)

Hi,


How do I use NDB$EPOCH() and NDB$EPOCH_TRANS()?

Please give me some examples.

Thanks
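A hedged sketch based on the conflict-resolution chapter of the manual (mydb/mytable are placeholders; verify the column list against your version's docs). The functions are selected per table through the mysql.ndb_replication table on the primary, created before the replicated table:

CREATE TABLE IF NOT EXISTS mysql.ndb_replication (
  db VARBINARY(63),
  table_name VARBINARY(63),
  server_id INT UNSIGNED,
  binlog_type INT UNSIGNED,
  conflict_fn VARBINARY(128),
  PRIMARY KEY USING HASH (db, table_name, server_id)
) ENGINE=NDB PARTITION BY KEY(db, table_name);

-- binlog_type 7 (full row images, updates logged as updates) is what
-- NDB$EPOCH() expects; substitute 'NDB$EPOCH_TRANS()' to roll back whole
-- transactions, which additionally needs --ndb-log-transaction-id on mysqld.
INSERT INTO mysql.ndb_replication VALUES ('mydb', 'mytable', 0, 7, 'NDB$EPOCH()');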

Unable to connect with connect string: nodeid=0,localhost:1186 (no replies)

Good morning everyone,
While configuring a cluster (4 data nodes, 1 management node, 1 SQL node) with MySQL Cluster Manager, I'm facing the following error:

mysql@data2:~> ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Unable to connect with connect string: nodeid=0,localhost:1186
Retrying every 5 seconds. Attempts left: 2

The problem is that I don't know where to set the "connect string" option for the data nodes (using MySQL Cluster Manager). Any ideas?

Kind regards,
Gabriel
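For the client side of this error: ndb_mgm defaults to localhost:1186, which is why show fails when run on a data host. A hedged sketch (mgmhost is a placeholder for the management server):

ndb_mgm -c mgmhost:1186

# or persist it for all NDB client programs on this host, in my.cnf:
# [mysql_cluster]
# ndb-connectstring=mgmhost:1186

Under MySQL Cluster Manager the managed processes receive their connect strings from mcmd itself, so this setting only concerns the client tools you run by hand.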

Installing NDBCLUSTER engine (no replies)

Hi,

Hopefully you can help me. I have installed MySQL Cluster Server 7.3.7-1 from RPM on SLES 11 SP3, and when I go into mysql and run SHOW ENGINES; ndbcluster is not listed. I just have ndbinfo, but not ndb/ndbcluster.

Any ideas guys?

Thanks all
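A hedged pair of things to check: with the cluster binaries, the NDBCLUSTER engine stays disabled until mysqld is started with the ndbcluster option, and if the running mysqld came from a plain MySQL server package it will not have the engine at all. Sketch of the my.cnf lines (mgmhost is a placeholder):

[mysqld]
ndbcluster
ndb-connectstring=mgmhost

[mysql_cluster]
ndb-connectstring=mgmhost

Then restart mysqld and re-run SHOW ENGINES.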

Management Node and SQL Node on the same host (1 reply)

Hi there.

I'm preparing hosts to test the MySQL Cluster infrastructure on a couple of CentOS 6 machines, and I wanted to know if it is possible to have the management and SQL nodes on the same host.

I know that it's not advised to do this, but do you know if it can easily be done?

Thank you very much.

Best Regards,
Vilhena
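It can be done - the management server and mysqld are separate processes, so they only need distinct node ids and free ports on the shared host. A minimal config.ini sketch, with 192.0.2.1 as a placeholder for that host:

[ndb_mgmd]
NodeId=1
hostname=192.0.2.1

[mysqld]
NodeId=50
hostname=192.0.2.1

The usual caveat concerns arbitration: co-locating the arbitrator with other cluster processes weakens split-brain handling, which is presumably why the combination is discouraged.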

Memcached process at 100% (2 replies)

I am running 4 nodes. All separate servers.

2 mgmt nodes.
2 data/sql nodes.

The problem I have is this.

When all the nodes are running. I telnet into memcached and do the following:

[root@db ~]# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
set key 0 900 4
data
STORED
quit
Connection closed by foreign host.

I then check memcached on the node, and I see the process running at 100% pretty much immediately after I do the set command. The box is pretty much toast at this point because doing queries is like running through sludge. I have to kill memcached to get the box back. :(

The top result:
2345 root 20 0 1126412 16428 3124 S **103.1** 0.0 0:46.51 memcached

This even happens when I do a simple get key, which returns the value. Again, the memcached process shoots up from 5% to 100%.

Can someone please tell me why this is?

I'm running CentOS 7 on all the servers, with a minimal installation.
I'm running Cluster 7.3.7 and I have even tested with 7.4.2, same issue.
This was a clean install, mysql-cluster is very fresh, no DB installed!

Here is my config.ini for the mgmt nodes:

[ndb_mgmd default]
datadir=/var/lib/mysql-cluster # Directory for MGM node log files

[ndb_mgmd]
# Management process options:
hostname=127.0.0.232 # Hostname or IP address of MGM node
NodeId=101

#[ndb_mgmd]
# Management process options:
hostname=127.0.0.235 # Hostname or IP address of MGM node
NodeId=102

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=1 # Number of replicas
datadir=/var/lib/mysql-cluster/data # Directory for this data node's data files
DataMemory=10G # How much memory to allocate for data storage
IndexMemory=2G # How much memory to allocate for index storage
LockPagesInMainMemory=1 # Locks data node processes into memory. Doing so prevents them from swapping to disk
MaxNoOfConcurrentOperations=1000000
MaxNoOfTables=200
MaxNoOfAttributes=4000
MaxNoOfOrderedIndexes=10000

[ndbd]
# Options for data node "A":
# (one [ndbd] section per data node)
hostname=127.0.0.233 # Hostname or IP address
NodeId=1

#[ndbd]
# Options for data node "B":
hostname=127.0.0.236 # Hostname or IP address
NodeId=2

[mysqld]
# SQL node options:
hostname=127.0.0.233 # Hostname or IP address
NodeId=51

[mysqld]
# SQL node options (1st for memcached):
hostname=127.0.0.233 # Hostname or IP address

[mysqld]
# SQL node options (2nd for memcached):
hostname=127.0.0.233 # Hostname or IP address

[mysqld]
# SQL node options (3rd for memcached):
hostname=127.0.0.233 # Hostname or IP address

[mysqld]
# SQL node options (4th for memcached):
hostname=127.0.0.233 # Hostname or IP address

#[mysqld]
# SQL node options:
hostname=127.0.0.236 # Hostname or IP address
NodeId=52

#[mysqld]
# SQL node options (1st for memcached):
hostname=127.0.0.236 # Hostname or IP address

#[mysqld]
# SQL node options (2nd for memcached):
hostname=127.0.0.236 # Hostname or IP address

#[mysqld]
# SQL node options (3rd for memcached):
hostname=127.0.0.236 # Hostname or IP address

#[mysqld]
# SQL node options (4th for memcached):
hostname=127.0.0.236 # Hostname or IP address

Here is my mysql.cnf for the sql node:

[mysqld]
# Options for mysqld process:
ndbcluster # run NDB storage engine
ndb-nodeid=51
server-id=51

[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=127.0.0.232 # location of management server

[mysql]
prompt="(\u@\h) [\d]>\_"

Here is my show output. (Don't worry that I am only showing you one half.) I've even tried just one half of the setup to see if I can still replicate it, and I can. I've tried even with 3 mgmt nodes and 3 sql/data nodes, still with the same result. /sigh

It's midnight here and I'm too tired to get the other half up just to show you the correct show result. Trust me, it shows all the nodes correctly.

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 1 node(s)
id=1 @127.0.0.233 (mysql-5.6.21 ndb-7.3.7, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=101 @127.0.0.232 (mysql-5.6.21 ndb-7.3.7)

[mysqld(API)] 5 node(s)
id=51 @127.0.0.233 (mysql-5.6.21 ndb-7.3.7)
id=52 @127.0.0.233 (mysql-5.6.21 ndb-7.3.7)
id=53 @127.0.0.233 (mysql-5.6.21 ndb-7.3.7)
id=54 @127.0.0.233 (mysql-5.6.21 ndb-7.3.7)
id=55 (not connected, accepting connect from 127.0.0.233)

Here is how I'm installing and running memcached:

mysql -u root -pPASSWORD < /usr/share/mysql/memcache-api/ndb_memcache_metadata.sql
memcached -u root -E /usr/lib64/ndb_engine.so -e "connectstring=127.0.0.232:1186;role=db-only" -vv -c 20

Any insights into this are really appreciated! I will attempt lower versions of MySQL Cluster. I may even try 7.4, which is in beta... I really need this to work.

I don't have this problem at all when I run with innodb-memcached or a standalone memcached server. Just letting you know that I have experience with other memcached installations.

Thanks

Paul

Why does NoOfReplicas only support the values 1 and 2? (1 reply)

From the introduction: MySQL 5.6 Reference Manual :: 18 MySQL Cluster NDB 7.3 and MySQL Cluster NDB 7.4 :: 18.3 Configuration of MySQL Cluster :: 18.3.2 MySQL Cluster Configuration Files :: 18.3.2.6 Defining MySQL Cluster Data Nodes

That section shows that NoOfReplicas currently only supports the values 1 and 2. I want to know why its value can only be 1 or 2; furthermore, if I configure it with the value 3 or 4, what will happen?

reset root password (distributed privileges) (no replies)

Hi,

I have created a small cluster and have configured distributed privileges. How do I reset the root password now? When I try to run mysqld_safe --skip-grant-tables I still cannot connect with a client, and I have similar success with mysqld_safe --init-file=/tmp/password.change.sql - nothing gets changed. Any hints are welcome!

Thanks,
Vladislav
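A hedged sketch of one route: with distributed privileges the grant tables are NDB tables shared by every SQL node, so the change must go through a mysqld that is actually connected to the cluster - a --skip-grant-tables instance that cannot reach the data nodes changes nothing. Using 5.6-era syntax:

-- On one SQL node started with --skip-grant-tables but still connected to NDB:
UPDATE mysql.user SET Password = PASSWORD('newpass') WHERE User = 'root';
FLUSH PRIVILEGES;
-- then run FLUSH PRIVILEGES on each of the other SQL nodes as well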

performance of aggregate in Cluster (no replies)

I have a table with 10M rows.

CREATE TABLE Plan_NDB (
PlanID int NOT NULL,
RepayAmount numeric(12, 2) not null default 0.0,
RepayRemain numeric(12, 2) not null default 0.0,
ClearType varchar(20) not null default ' ',
RepayResult varchar(20) not null default ' ',
RepayStatus varchar(20) not null default ' ',
StartDate numeric(8, 0) not null default 0,
MatchDate numeric(8, 0) not null default 0,
RepayDate numeric(8, 0) not null default 0,
PRIMARY KEY (PlanID),
Key (ClearType,RepayAmount)
) ENGINE NDBCLUSTER ;

I have 4 data nodes in this cluster. When I execute the query below, it's really slow compared to the same table in InnoDB.

select ClearType,sum(RepayAmount) from Plan_NDB group by ClearType;

In Cluster it takes 27 sec, but in InnoDB only 3 sec.
I ran EXPLAIN; it shows a full index scan, the same plan as with InnoDB.
I'm not 100% sure whether the covering index works for Cluster or not. Can someone help explain?

Some information for your reference:

analyze table Plan_NDB;
show index from Plan_NDB from QCDATA;
+----------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table    | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+----------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Plan_NDB |          0 | PRIMARY   |            1 | RepayPlanID | A         |    10000000 |     NULL | NULL   |      | BTREE      |         |               |
| Plan_NDB |          1 | ClearType |            1 | ClearType   | A         |           3 |     NULL | NULL   |      | BTREE      |         |               |
| Plan_NDB |          1 | ClearType |            2 | RepayAmount | A         |        2498 |     NULL | NULL   |      | BTREE      |         |               |
+----------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
explain extended select ClearType,sum(RepayAmount) from Plan_NDB group by ClearType;
+----+-------------+----------------------+-------+---------------+-----------+---------+------+----------+----------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------------------+-------+---------------+-----------+---------+------+----------+----------+-------+
| 1 | SIMPLE | Plan_NDB | index | NULL | ClearType | 48 | NULL | 10000000 | 100.00 | NULL |
+----+-------------+----------------------+-------+---------------+-----------+---------+------+----------+----------+-------+

Any tuning advice is appreciated!
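A hedged explanation to test against: NDB pushes filters down to the data nodes, but GROUP BY and SUM are evaluated in mysqld, so this query ships every index entry across the network - a covering index cannot eliminate those round trips the way it does for local InnoDB reads. The NDB API counters make the shipped volume visible; run these before and after the query and compare the deltas:

SHOW GLOBAL STATUS LIKE 'Ndb_api_read_row_count%';   -- rows shipped to this mysqld
SHOW GLOBAL STATUS LIKE 'Ndb_api_wait_nanos_count%'; -- time spent waiting on data nodes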

I have error logs for MySQL Cluster - [ERROR] NDB Binlog: Skipping locally defined table (no replies)

Hi,

The MySQL Cluster consists of 1 mgmd node + 2 sqld/ndbd nodes.
I have some errors.
1.
2014-12-12 16:38:18 19795 [ERROR] NDB Binlog: Skipping locally defined table 'wordpress_db.seoul_14_kboard_board_attached' from binlog schema event 'ALTER TABLE `seoul_14_kboard_board_attached` ENGINE = ndbcluster' from node 5.

2.
2014-12-30 16:39:30 30705 [ERROR] NDB Binlog: Creating NdbEventOperation blob field 8 handles failed (code=4710) for REPL$wordpress_db/seoul_biz_list_2014
2014-12-30 16:39:30 30705 [ERROR] NDB Binlog:FAILED CREATE (DISCOVER) EVENT OPERATIONS Event: REPL$wordpress_db/seoul_biz_list_2014

Why does this happen?
Please help me.
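A hedged first check for error 1: "Skipping locally defined table" is typically logged when the SQL node already has a local (non-NDB) definition of the table shadowing the cluster one, so the ALTER ... ENGINE = ndbcluster event from node 5 is skipped. Inspect which engine the local copy uses before changing anything:

mysql -e "SHOW CREATE TABLE wordpress_db.seoul_14_kboard_board_attached\G"

Error 2 (code 4710 while creating blob-field event operations) concerns binlog event setup for a table with blob columns; comparing that table's definition on both SQL nodes is a sensible starting point there too.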