Channel: MySQL Forums - NDB clusters

indexes disordered in database (no replies)

Hi!

I have a database with many tables. Previously, when I opened any table, the rows appeared in insertion order (1, 2, 3, 4, ...). Now they appear out of order (3, 12, 1, 99, 2013, ...).

This happened after performing a migration...


before: (screenshot not preserved)

now: (screenshot not preserved)

How can I solve this problem? Any ideas?


thanks !

PS: Sorry for my English.
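
A side note that may explain what you're seeing: SQL tables have no inherent row order. Engines such as InnoDB or MyISAM often happen to return rows in insertion or primary-key order, while NDB returns them in partition/hash order, so a migration to cluster typically changes the apparent order without any data being wrong. The fix is to request the order explicitly (a sketch; `mytable` and the auto-increment column `id` are placeholders for your own names):

```sql
-- Rows come back in insertion order again, regardless of storage engine
SELECT * FROM mytable ORDER BY id;
```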

Installing cluster as Windows services (1 reply)

Hi all, my cluster works fine, but I found a strange problem while installing the processes as Windows services, particularly the management process. The process starts and then fails because it can't find the config.ini file. I specified the path in the installation command as below:

C:\mysql-cluster\bin\ndb_mgmd.exe --install -f config.ini --configdir=C:\mysql-cluster\bin

and on a second attempt also

C:\mysql-cluster\bin\ndb_mgmd.exe --install -f config.ini --configdir=.\

In the Windows event log I found this message:
Error opening 'config.ini', error 2: No such file or directory.
Clearly it is a path error, but I can't see where the mistake in my command line is.

Thanks in advance
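
For reference, a likely cause: when ndb_mgmd runs as a Windows service, its working directory is not C:\mysql-cluster\bin (services typically start in %WinDir%\System32), so the relative path `config.ini` resolves somewhere else. A sketch of the install command with an absolute path (assuming config.ini really does live in C:\mysql-cluster\bin):

```
C:\mysql-cluster\bin\ndb_mgmd.exe --install -f C:\mysql-cluster\bin\config.ini --configdir=C:\mysql-cluster\bin
```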

MySQL Cluster Sharding (no replies)

OK, so I have MySQL Cluster set up, but what I need is for a given table (say, `assets`) to be split among the nodes rather than replicated. The whole purpose of setting up the cluster was a lack of hard drive space, so how do I get the cluster to split the table among the nodes?

I know about dbshards etc., so please do not mention them. I need a solution using just MySQL, and no, I can't modify the software that will be using the database.
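
In case it helps: NDB already splits every table into partitions that are spread across the node groups; what gets duplicated is each partition, NoOfReplicas times within its own node group, so usable capacity scales with the number of node groups. If redundancy really is expendable, a config.ini fragment like this (a sketch, not a recommendation) stores only one copy of each partition:

```
[ndbd default]
# One copy of each partition: the table is split across all data nodes
# with no redundancy, maximising usable space (any node failure loses data)
NoOfReplicas=1
```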

rolling restart issue (no replies)

Hi all,

After we increased DataMemory and IndexMemory and restarted the 6 data nodes using these commands
(we initially start our cluster using the --nostart option):
ndb_mgm> 5 restart
ndb_mgm> 5 start
we got the following error in the ndb_mgm client:
----------------
Start failed.
* 22: Error
* No contact with the process (dead ?).: Permanent error: Application error
-----------------
But if we repeat "5 start" several times, it eventually starts successfully.
I found in the cluster log that whenever "5 start" produced the error above, a WARNING like this appeared in the cluster log file:
WARNING -- Found timedout nodeid reservation for 5, releasing it.

I am wondering why this happens. It seems node id 5 is not available when I execute "5 start". How can we tell whether a node id is ready for a "start" command?

Thanks very much :)

The cluster log file as below:
-----------------
2013-05-07 12:41:28 [MgmtSrvr] INFO -- Node 5: Node shutdown initiated
2013-05-07 12:41:37 [MgmtSrvr] INFO -- Node 5: Suma: initiate handover for shutdown with nodes 0000000000000040 GCI: 2262903
2013-05-07 12:41:37 [MgmtSrvr] INFO -- Node 5: Suma: handover to node 6 gci: 2262903 buckets: 00000001 (2)
2013-05-07 12:41:42 [MgmtSrvr] INFO -- Node 5: Suma: handover complete
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 5: Node shutdown completed, restarting, no start.
2013-05-07 12:41:44 [MgmtSrvr] ALERT -- Node 4: Node 5 Disconnected
2013-05-07 12:41:44 [MgmtSrvr] ALERT -- Node 6: Node 5 Disconnected
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: Communication to Node 5 closed
2013-05-07 12:41:44 [MgmtSrvr] ALERT -- Node 3: Node 5 Disconnected
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 3: Communication to Node 5 closed
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: Communication to Node 5 closed
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 4: Communication to Node 5 closed
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 8: Communication to Node 5 closed
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 3: Communication to Node 5 closed
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 7: Communication to Node 5 closed
2013-05-07 12:41:44 [MgmtSrvr] ALERT -- Node 1: Node 5 Disconnected
2013-05-07 12:41:44 [MgmtSrvr] ALERT -- Node 6: Arbitration check won - node group majority
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: President restarts arbitration thread [state=6]
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: GCP Take over started
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: Node 6 taking over as DICT master
2013-05-07 12:41:44 [MgmtSrvr] ALERT -- Node 8: Node 5 Disconnected
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: LCP Take over started
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: ParticipatingDIH = 0000000000000000
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: ParticipatingLQH = 0000000000000000
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: m_LCP_COMPLETE_REP_Counter_DIH = [SignalCounter: m_count=0 0000000000000000]
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: m_LCP_COMPLETE_REP_Counter_LQH = [SignalCounter: m_count=0 0000000000000000]
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: m_LAST_LCP_FRAG_ORD = [SignalCounter: m_count=0 0000000000000000]
2013-05-07 12:41:44 [MgmtSrvr] INFO -- Node 6: m_LCP_COMPLETE_REP_From_Master_Received = 1
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: GCP Monitor: unlimited lags allowed
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: GCP Take over completed
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: kk: 2262903/10 0 0
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: LCP Take over completed (state = 4)
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: ParticipatingDIH = 0000000000000000
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: ParticipatingLQH = 0000000000000000
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: m_LCP_COMPLETE_REP_Counter_DIH = [SignalCounter: m_count=0 0000000000000000]
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: m_LCP_COMPLETE_REP_Counter_LQH = [SignalCounter: m_count=0 0000000000000000]
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: m_LAST_LCP_FRAG_ORD = [SignalCounter: m_count=0 0000000000000000]
2013-05-07 12:41:45 [MgmtSrvr] INFO -- Node 6: m_LCP_COMPLETE_REP_From_Master_Received = 1
2013-05-07 12:41:47 [MgmtSrvr] INFO -- Node 6: Local checkpoint 4266 started. Keep GCI = 2262401 oldest restorable GCI = 2262127
2013-05-07 12:41:48 [MgmtSrvr] ALERT -- Node 7: Node 5 Disconnected
2013-05-07 12:41:55 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 9 seconds (no max lag)
2013-05-07 12:42:05 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 19 seconds (no max lag)
2013-05-07 12:42:15 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 29 seconds (no max lag)
2013-05-07 12:42:20 [MgmtSrvr] WARNING -- Found timedout nodeid reservation for 5, releasing it
2013-05-07 12:42:25 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 39 seconds (no max lag)
2013-05-07 12:42:35 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 49 seconds (no max lag)
2013-05-07 12:42:43 [MgmtSrvr] WARNING -- Node 3: Failure handling of node 5 has not completed in 1 min - state = 6
2013-05-07 12:42:43 [MgmtSrvr] WARNING -- Node 6: Failure handling of node 5 has not completed in 1 min - state = 6
2013-05-07 12:42:43 [MgmtSrvr] WARNING -- Node 4: Failure handling of node 5 has not completed in 1 min - state = 6
2013-05-07 12:42:43 [MgmtSrvr] WARNING -- Node 8: Failure handling of node 5 has not completed in 1 min - state = 6
2013-05-07 12:42:46 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_SAVE lag 60 seconds (no max lag)
2013-05-07 12:42:46 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 59 seconds (no max lag)
2013-05-07 12:42:53 [MgmtSrvr] WARNING -- Found timedout nodeid reservation for 5, releasing it
2013-05-07 12:42:56 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 69 seconds (no max lag)
2013-05-07 12:43:06 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 79 seconds (no max lag)
2013-05-07 12:43:14 [MgmtSrvr] INFO -- Node 6: Communication to Node 50 closed
2013-05-07 12:43:14 [MgmtSrvr] INFO -- Node 6: Communication to Node 51 closed
2013-05-07 12:43:14 [MgmtSrvr] ALERT -- Node 7: Node 50 Disconnected
2013-05-07 12:43:14 [MgmtSrvr] ALERT -- Node 7: Node 51 Disconnected
2013-05-07 12:43:14 [MgmtSrvr] INFO -- Node 8: Communication to Node 50 closed
2013-05-07 12:43:14 [MgmtSrvr] INFO -- Node 8: Communication to Node 51 closed
2013-05-07 12:43:14 [MgmtSrvr] ALERT -- Node 8: Node 50 Disconnected
2013-05-07 12:43:14 [MgmtSrvr] ALERT -- Node 8: Node 51 Disconnected
2013-05-07 12:43:15 [MgmtSrvr] ALERT -- Node 6: Node 50 Disconnected
2013-05-07 12:43:15 [MgmtSrvr] ALERT -- Node 6: Node 51 Disconnected
2013-05-07 12:43:15 [MgmtSrvr] INFO -- Node 4: Communication to Node 50 closed
2013-05-07 12:43:15 [MgmtSrvr] INFO -- Node 4: Communication to Node 51 closed
2013-05-07 12:43:15 [MgmtSrvr] INFO -- Node 3: Communication to Node 50 closed
2013-05-07 12:43:15 [MgmtSrvr] INFO -- Node 3: Communication to Node 51 closed
2013-05-07 12:43:15 [MgmtSrvr] ALERT -- Node 3: Node 50 Disconnected
2013-05-07 12:43:15 [MgmtSrvr] ALERT -- Node 4: Node 50 Disconnected
2013-05-07 12:43:15 [MgmtSrvr] ALERT -- Node 4: Node 51 Disconnected
2013-05-07 12:43:15 [MgmtSrvr] ALERT -- Node 3: Node 51 Disconnected
2013-05-07 12:43:15 [MgmtSrvr] INFO -- Node 7: Communication to Node 50 closed
2013-05-07 12:43:15 [MgmtSrvr] INFO -- Node 7: Communication to Node 51 closed
2013-05-07 12:43:16 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 89 seconds (no max lag)
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 8: Communication to Node 50 opened
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 8: Communication to Node 51 opened
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 8: Node 50 Connected
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 8: Node 51 Connected
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 8: Node 50: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 8: Node 51: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 3: Communication to Node 50 opened
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 3: Communication to Node 51 opened
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 3: Node 50 Connected
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 3: Node 51 Connected
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 3: Node 50: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:43:18 [MgmtSrvr] INFO -- Node 3: Node 51: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:43:19 [MgmtSrvr] INFO -- Node 4: Communication to Node 50 opened
2013-05-07 12:43:19 [MgmtSrvr] INFO -- Node 4: Communication to Node 51 opened
2013-05-07 12:43:19 [MgmtSrvr] INFO -- Node 4: Node 50 Connected
2013-05-07 12:43:19 [MgmtSrvr] INFO -- Node 4: Node 51 Connected
2013-05-07 12:43:19 [MgmtSrvr] INFO -- Node 4: Node 50: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:43:19 [MgmtSrvr] INFO -- Node 4: Node 51: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:43:22 [MgmtSrvr] INFO -- Node 7: Communication to Node 50 opened
2013-05-07 12:43:22 [MgmtSrvr] INFO -- Node 7: Communication to Node 51 opened
2013-05-07 12:43:23 [MgmtSrvr] WARNING -- Node 7: Failure handling of node 5 has not completed in 1 min - state = 6
2013-05-07 12:43:26 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 99 seconds (no max lag)
2013-05-07 12:43:26 [MgmtSrvr] WARNING -- Found timedout nodeid reservation for 5, releasing it
2013-05-07 12:43:36 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 109 seconds (no max lag)
2013-05-07 12:43:37 [MgmtSrvr] INFO -- Node 7: Node 50 Connected
2013-05-07 12:43:37 [MgmtSrvr] INFO -- Node 7: Node 51 Connected
2013-05-07 12:43:37 [MgmtSrvr] INFO -- Node 7: Node 50: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:43:37 [MgmtSrvr] INFO -- Node 7: Node 51: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:43:43 [MgmtSrvr] WARNING -- Node 3: Failure handling of node 5 has not completed in 2 min - state = 6
2013-05-07 12:43:43 [MgmtSrvr] WARNING -- Node 6: Failure handling of node 5 has not completed in 2 min - state = 6
2013-05-07 12:43:44 [MgmtSrvr] WARNING -- Node 4: Failure handling of node 5 has not completed in 2 min - state = 6
2013-05-07 12:43:44 [MgmtSrvr] WARNING -- Node 8: Failure handling of node 5 has not completed in 2 min - state = 6
2013-05-07 12:43:46 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_SAVE lag 120 seconds (no max lag)
2013-05-07 12:43:46 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 119 seconds (no max lag)
2013-05-07 12:43:56 [MgmtSrvr] WARNING -- Node 6: GCP Monitor: GCP_COMMIT lag 129 seconds (no max lag)
2013-05-07 12:43:59 [MgmtSrvr] WARNING -- Found timedout nodeid reservation for 5, releasing it
2013-05-07 12:44:05 [MgmtSrvr] INFO -- Node 6: Communication to Node 50 opened
2013-05-07 12:44:05 [MgmtSrvr] INFO -- Node 6: Communication to Node 51 opened
2013-05-07 12:44:05 [MgmtSrvr] INFO -- Node 6: Node 51 Connected
2013-05-07 12:44:05 [MgmtSrvr] INFO -- Node 6: Node 50 Connected
2013-05-07 12:44:05 [MgmtSrvr] INFO -- Node 6: Node 51: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:44:05 [MgmtSrvr] INFO -- Node 6: Node 50: API mysql-5.5.29 ndb-7.2.10
2013-05-07 12:44:32 [MgmtSrvr] WARNING -- Found timedout nodeid reservation for 5, releasing it
2013-05-07 12:44:40 [MgmtSrvr] WARNING -- Node 7: Failure handling of node 5 has not completed in 2 min - state = 6
2013-05-07 12:44:44 [MgmtSrvr] WARNING -- Node 6: Failure handling of node 5 has not completed in 3 min - state = 6
2013-05-07 12:44:44 [MgmtSrvr] WARNING -- Node 3: Failure handling of node 5 has not completed in 3 min - state = 6
2013-05-07 12:44:44 [MgmtSrvr] WARNING -- Node 4: Failure handling of node 5 has not completed in 3 min - state = 6
2013-05-07 12:44:45 [MgmtSrvr] WARNING -- Node 8: Failure handling of node 5 has not completed in 3 min - state = 6
2013-05-07 12:45:05 [MgmtSrvr] WARNING -- Found timedout nodeid reservation for 5, releasing it
2013-05-07 12:45:37 [MgmtSrvr] INFO -- Nodeid 5 allocated for NDB at 10.1.12.243
2013-05-07 12:45:37 [MgmtSrvr] INFO -- Node 8: Communication to Node 5 opened
2013-05-07 12:45:37 [MgmtSrvr] INFO -- Node 6: Communication to Node 5 opened
2013-05-07 12:45:37 [MgmtSrvr] INFO -- Node 3: Communication to Node 5 opened
2013-05-07 12:45:38 [MgmtSrvr] INFO -- Node 4: Communication to Node 5 opened
2013-05-07 12:45:38 [MgmtSrvr] INFO -- Node 7: Communication to Node 5 opened
2013-05-07 12:45:38 [MgmtSrvr] INFO -- Nodeid 5 allocated for NDB at 10.1.12.243
2013-05-07 12:45:39 [MgmtSrvr] INFO -- Node 1: Node 5 Connected
2013-05-07 12:45:59 [MgmtSrvr] WARNING -- Node 6: Releasing node id allocation for node 5
2013-05-07 12:46:05 [MgmtSrvr] INFO -- Node 5: Start initiated (mysql-5.5.29 ndb-7.2.10)
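
One way to check whether node 5's failure handling has finished and its node id is actually free before issuing "5 start" is to poll the node's status from the management client (a sketch using the node id from the log above):

```
ndb_mgm> 5 status
ndb_mgm> all status
```

A node that reports "not started" (rather than still shutting down or holding a reserved node id) should accept the "start" command.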

truncate and REDO log (no replies)

Hello,

I am using MYSQL 5.1.61-ndb-7.1.22.

I wonder whether a TRUNCATE TABLE operation is logged in the logs below:

1) REDO log
- If yes, what would be logged, the statement or the row data?
- If row data is logged, what is the volume? 1x or 2x the table data size?

2) Binlog (I guess yes, but logged as a statement, not row data)
3) Undo log (I guess no, since TRUNCATE is DDL)

Thanks!

Alax
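
One way to check the binlog part empirically is to run the statement and then inspect how the server recorded it (a sketch; the binlog file name is a placeholder for whatever SHOW MASTER STATUS reports):

```sql
TRUNCATE TABLE t1;
SHOW MASTER STATUS;
SHOW BINLOG EVENTS IN 'mysql-bin.000001';
-- TRUNCATE is DDL, so it should appear as a Query event (the statement
-- itself), regardless of binlog_format.
```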

MySQL Cluster: Connection Thread Scalability (no replies)

frm File Error (no replies)

Hello,
we have the following cluster database: mysql-5.1.56 ndb-7.1.14

130430 7:13:41 [ERROR] MySQL: Incorrect information in file: '\eledb\t_management.frm'

How can I fix it? The database keeps shutting down. I desperately need help. This error is reported by both MySQL servers, and it affects multiple files.

[ndbd(NDB)] 3 node(s)
id=3 @10.111.106.29 (mysql-5.1.56 ndb-7.1.14, Nodegroup: 0, Master)
id=4 @10.111.106.30 (mysql-5.1.56 ndb-7.1.14, Nodegroup: 0)
id=5 @10.111.106.31 (mysql-5.1.56 ndb-7.1.14, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @10.111.106.27 (mysql-5.1.56 ndb-7.1.14)
id=2 @10.111.106.28 (mysql-5.1.56 ndb-7.1.14)

[mysqld(API)] 3 node(s)
id=51 @10.111.106.27 (mysql-5.1.56 ndb-7.1.14)
id=52 @10.111.106.28 (mysql-5.1.56 ndb-7.1.14)
id=53 (not connected, accepting connect from any host)

MySQL Cluster 7.3 DMR2 (1 reply)

Does anyone know when MySQL Cluster 7.3 DMR2 will be released as an official, generally available version?

NDB Memcache caching policy + external values problems (no replies)

Hi,

Posting here as this appears to have more activity than the memcached forum.

I think I'm having problems with ndbmemcache on 7.2.7 and am wondering whether they are due to my setup or are general bugs.

I'm trying to use the caching policy, which crashes memcached every time, and external_values never returns any data.

Details: http://forums.mysql.com/read.php?150,586268,586268#msg-586268

cluster index problem (no replies)

Hello,
We have a cluster running mysql-5.1.56 ndb-7.1.19 and we have big problems with indexes.

For example:
Table structure:
CREATE TABLE `users_uploads_counts` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`uploads_remain` int(11) NOT NULL,
`uploads_sent` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `user_id` (`user_id`)
) /*!50100 TABLESPACE ts_transfer_free STORAGE DISK */ ENGINE=ndbcluster AUTO_INCREMENT=38595 DEFAULT CHARSET=utf8

Show indexes:
mysql> show indexes from users_uploads_counts;
+----------------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+----------------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| users_uploads_counts | 0 | PRIMARY | 1 | id | A | 38555 | NULL | NULL | | BTREE | |
| users_uploads_counts | 1 | user_id | 1 | user_id | A | NULL | NULL | NULL | | BTREE | |
+----------------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
2 rows in set (0.00 sec)

mysql>

If I run OPTIMIZE TABLE, CHECK TABLE, ANALYZE TABLE, etc., nothing happens; Cardinality is still NULL.

Queries that join this table are very slow.

Also, on all tables where we have indexes, the cardinality is NULL.

For example:
SELECT * FROM `users` JOIN users_uploads_counts on users_uploads_counts.user_id = users.id ORDER BY users.id DESC LIMIT 50;

This query takes about 1 minute.

Explain of the query:
mysql> explain SELECT * FROM `users` JOIN users_uploads_counts on users_uploads_counts.user_id = users.id ORDER BY users.id DESC LIMIT 50;
+----+-------------+----------------------+--------+---------------+---------+---------+---------------------------------------+-------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------------------+--------+---------------+---------+---------+---------------------------------------+-------+---------------------------------+
| 1 | SIMPLE | users_uploads_counts | ALL | user_id | NULL | NULL | NULL | 38620 | Using temporary; Using filesort |
| 1 | SIMPLE | users | eq_ref | PRIMARY | PRIMARY | 4 | transfer.users_uploads_counts.user_id | 1 | |
+----+-------------+----------------------+--------+---------------+---------+---------+---------------------------------------+-------+---------------------------------+
2 rows in set (0.00 sec)

mysql>


Thx.
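
As a stopgap while the cardinality stays NULL, the optimizer can be steered by hand; a sketch against the tables above:

```sql
-- Force the join order so users is read first via its primary key
-- (letting ORDER BY ... DESC avoid the filesort), and force use of
-- the user_id index on the joined table
SELECT STRAIGHT_JOIN *
FROM users
JOIN users_uploads_counts FORCE INDEX (user_id)
  ON users_uploads_counts.user_id = users.id
ORDER BY users.id DESC
LIMIT 50;
```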

Why so many partitions? (no replies)

Hi,

I am using NDB Cluster 7.1. The configuration is as below:

- 6 NDB nodes
- using ndbmtd
- NoofReplicas = 2
- MaxNoOfExecutionThreads = 8

CREATE TABLE TxnLog (
`LID` varchar(255) NOT NULL,
`ACTIVE` char(1) DEFAULT NULL,
`NID` varchar(255) NOT NULL,
PRIMARY KEY (`LID`)
) ENGINE=ndbcluster


So I think there should be 3 node groups (2 nodes in each group).
Since there is no user-defined partitioning, I would expect the number of partitions to be 6, the same as the number of data nodes.

But when using 'explain partitions ...', I found there are 24 partitions in total.

Why?
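
For reference, multi-threaded data nodes (ndbmtd) create partitions per local data manager (LDM) thread, not per node. With MaxNoOfExecutionThreads = 8, each ndbmtd runs 4 LDM threads, which would account for the count observed:

```
partitions = data_nodes × LDM_threads_per_node = 6 × 4 = 24
```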

ERROR 1032 (HY000): Can't find record in '' (1 reply)

Hi,

I have recently set up a two-node cluster:
#
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @127.0.0.1 (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0)
id=5 @10.0.0.5 (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @10.0.0.4 (mysql-5.5.30 ndb-7.2.12)
id=4 @10.0.0.5 (mysql-5.5.30 ndb-7.2.12)

[mysqld(API)] 2 node(s)
id=3 @10.0.0.4 (mysql-5.5.30 ndb-7.2.12)
id=6 @10.0.0.5 (mysql-5.5.30 ndb-7.2.12)
#

I am developing a PHP website (with PDO) backed by the cluster, but I have recently experienced some errors (I forget which exactly) on the SQL side.
I managed to eliminate most of them by flushing the tables, recreating the table or the whole database a few times, and/or restarting the cluster, but one error remains:
`ERROR 1032 (HY000): Can't find record in ''`. So far this shows up only on one of the tables, and only when updating it.
I've searched among the other posts here (and on the lists); BLOB fields were involved, as well as an older version of the MySQL server. I have a `TEXT` type field, but it belongs to a different table, and I have a more recent version of the server.

Here is my version, as reported by phpMyAdmin:
Database Server
• Server type: MySQL
• Server version: 5.5.30-ndb-7.2.12-cluster-gpl-log - MySQL Cluster Community Server (GPL)
• Protocol version: 10
• Server charset: UTF-8 Unicode (utf8)

Web Server
• Apache/2.2.22 (Debian 7.0) amd64
• Database client version: libmysql - 5.5.31
• PHP extension: mysqli


And here is my configuration:

# Beginning of `config.ini`:
[tcp default]
SendBufferMemory=16M
ReceiveBufferMemory=16M

[ndb_mgmd default]
DataDir=/var/local/mysql/cluster/data/mgm

[ndb_mgmd]
HostName=Server1
NodeID=1

[ndb_mgmd]
HostName=Server2
NodeID=4

[ndbd default]
NoOfReplicas=2
DataDir=/var/local/mysql/cluster/data/ndb
LockPagesInMainMemory=1
DataMemory=768M
IndexMemory=512M
StringMemory=100

RedoBuffer=16M

ODirect=1

[ndbd]
HostName=Server1
NodeID=2

[ndbd]
HostName=Server2
NodeID=5

[mysqld]
HostName=Server1
NodeID=3

[mysqld]
HostName=Server2
NodeID=6

# End of `config.ini`.


# Beginning of `my.cnf` of Server1:
[mysqld]
ndb-nodeid=3
log-bin=Server1-bin
binlog-format=mixed
ndbcluster
datadir=/var/local/mysql/cluster/data/mysql
basedir=/usr/local/mysql/cluster
socket=/var/run/mysqld/mysqld.sock
pid-file=/var/run/mysqld/mysqld.pid
port=3306
server-id=3
federated
user=root
default-storage-engine=ndbcluster
big-tables
max_connect_errors=4294967295

# End of `my.cnf` of Server1.


# Beginning of `my.cnf` of Server2:
[mysqld]
ndb-nodeid=6
log-bin=Server2-bin
binlog-format=mixed
ndbcluster
datadir = /var/local/mysql/cluster/data/mysql
basedir = /usr/local/mysql/cluster
socket = /var/run/mysqld/mysqld.sock
pid-file=/var/run/mysqld/mysqld.pid
port = 3306
server-id=6
max_connect_errors=4294967295

federated
user=root
default-storage-engine = ndbcluster
big-tables

# End of `my.cnf` of Server2.


How could this error be solved?

Thank you!

6-node cluster failing (no replies)

Hi

I've got a 6-node cluster with NoOfPartitions = 3, with 3 nodes running in one rack (nodegroup 1 + nodegroup 0 + nodegroup 0) and 3 nodes running in another rack (nodegroup 1 + nodegroup 1 + nodegroup 0), so I expected to be able to shut down an entire rack for maintenance without harming the cluster.

But it seems that didn't work. I gracefully stopped, by hand, 3 of the ndbmtd processes in one rack:

2013-05-31 06:13:41 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_COMMIT lag 9 seconds (no max lag)
2013-05-31 06:13:45 [MgmtSrvr] WARNING -- Node 3: Node 4 missed heartbeat 2
2013-05-31 06:13:46 [MgmtSrvr] INFO -- Node 4: Node shutdown completed. Initiated by signal 15.
2013-05-31 06:13:48 [MgmtSrvr] WARNING -- Node 5: Node 8 missed heartbeat 2
2013-05-31 06:13:49 [MgmtSrvr] INFO -- Node 6: Node shutdown completed. Initiated by signal 15.
2013-05-31 06:13:51 [MgmtSrvr] ALERT -- Node 3: Network partitioning - arbitration required
2013-05-31 06:13:51 [MgmtSrvr] INFO -- Node 3: President restarts arbitration thread [state=7]
2013-05-31 06:13:51 [MgmtSrvr] ALERT -- Node 3: Arbitration won - positive reply from node 1
2013-05-31 06:13:51 [MgmtSrvr] INFO -- Node 8: Node shutdown completed. Initiated by signal 15.


Nodes 4,6,8 are all in one rack and this is the group distribution:

[ndbd(NDB)] 6 node(s)
id=3 @10.2.0.186 (mysql-5.5.29 ndb-7.2.10, Nodegroup: 0, Master)
id=4 @10.2.0.185 (mysql-5.5.29 ndb-7.2.10, Nodegroup: 0)
id=5 @10.2.0.184 (mysql-5.5.29 ndb-7.2.10, Nodegroup: 0)
id=6 @10.2.0.181 (mysql-5.5.29 ndb-7.2.10, Nodegroup: 1)
id=7 @10.2.0.183 (mysql-5.5.29 ndb-7.2.10, Nodegroup: 1)
id=8 @10.2.0.182 (mysql-5.5.29 ndb-7.2.10, Nodegroup: 1)


but suddenly, after a few seconds, node 3 died on its own, leading to a cluster outage (at least I didn't lose any data):

2013-05-31 06:14:03 [MgmtSrvr] WARNING -- Node 5: Node 3 missed heartbeat 2
2013-05-31 06:14:06 [MgmtSrvr] ALERT -- Node 3: Forced node shutdown completed. Caused by error 2341: 'Internal program error (failed ndbrequire)(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.
2013-05-31 06:14:06 [MgmtSrvr] ALERT -- Node 5: Network partitioning - arbitration required
2013-05-31 06:14:06 [MgmtSrvr] INFO -- Node 5: President restarts arbitration thread [state=7]
2013-05-31 06:14:20 [MgmtSrvr] WARNING -- Node 7: Node 5 missed heartbeat 2
2013-05-31 06:14:23 [MgmtSrvr] ALERT -- Node 5: Forced node shutdown completed. Caused by error 2305: 'Node lost connection to other nodes and can not form a unpartitioned cluster, please investigate if there are error(s) on other node(s)(Arbitration error). Temporary error, restart node'.
2013-05-31 06:14:38 [MgmtSrvr] ALERT -- Node 7: Forced node shutdown completed. Caused by error 2305: 'Node lost connection to other nodes and can not form a unpartitioned cluster, please investigate if there are error(s) on other node(s)(Arbitration error). Temporary error, restart node'.

Any idea about this error 2341 that looks like a generic exception?

Final note: this was a controlled shutdown, prior to maintenance work on the rack, so everything was fully functional network-wise.

How would you architect this? (no replies)

I want to be able to deploy MySQL Cluster in a cloud environment across two regions. I'd like to have a load balancer in each region that receives the MySQL traffic. That load balancer would then direct the traffic to one of two MySQL management nodes in that region. Each management node would have two data nodes (so 4 per region). So each region would have a load balancer, two MySQL management nodes, and 4 data nodes. The thing is, I want all of that in one cluster. Can the MySQL management nodes talk to each other, especially across regions?

The idea is to have redundancy within each region as well as across two regions. Perhaps this is not the right architecture. How would you design this?

Also, I am trying to do this all on Windows servers, but is that more trouble than it's worth, and would Linux be better?

Thanks!

Cluster crash while deleting many rows (no replies)

Hi everybody,

I've got a strange problem where someone maybe can push me in the right direction. My configuration:

2 datanodes
2 management server
4 mysql server

This is the output of ndb_mgm:

ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=11 @192.168.xxx.134 (mysql-5.1.56 ndb-7.1.15, Nodegroup: 0, Master)
id=12 @192.168.xxx.135 (mysql-5.1.56 ndb-7.1.15, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.xxx.130 (mysql-5.1.56 ndb-7.1.15)
id=2 @192.168.xxx.131 (mysql-5.1.56 ndb-7.1.15)

[mysqld(API)] 6 node(s)
id=21 @192.168.xxx.132 (mysql-5.1.51 ndb-7.1.9)
id=22 @192.168.xxx.133 (mysql-5.1.51 ndb-7.1.9)
id=23 @192.168.xxx.136 (mysql-5.1.51 ndb-7.1.9)
id=24 @192.168.xxx.137 (mysql-5.1.51 ndb-7.1.9)
id=31 (not connected, accepting connect from 192.168.xxx.134)
id=32 (not connected, accepting connect from 192.168.xxx.135)

ndb_mgm>

The servers are all virtual servers in a 2-server VMware ESXi 4.1 environment. This means that 1 data node, 1 management server and 2 MySQL servers (along with an Apache webserver) are deployed on each physical server (and yes, I know that using VMware with MySQL Cluster is not recommended... :-) )

My problem occurred last week. In the cluster there is a table with 66 columns, and I had roughly 150,000 rows in it. Then I wanted to delete all the rows, so I ran

mysql> DELETE FROM spenden;
(I can't use TRUNCATE because I need the auto-increment counter intact.)

About 90 seconds later I got an error in the cluster and it crashed. I was able to restart the cluster, and all the data in the table I wanted to delete was there again, so I did it a second time, with the same result: the cluster crashed but I was able to restart it. I got the following messages in the cluster log:
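
A note on a pattern that often avoids this kind of crash: deleting 150,000 rows in one statement is one huge transaction, which can exhaust the cluster's transaction and redo resources (the "send buffer was full" warnings further down point the same way). Deleting in bounded batches keeps each transaction small (a sketch; the batch size of 1000 is an arbitrary assumption):

```sql
-- Repeat until it reports 0 rows affected; each statement commits on
-- its own, so no single transaction touches all 150,000 rows.
-- DELETE (unlike TRUNCATE) leaves the auto-increment counter intact.
DELETE FROM spenden LIMIT 1000;
```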

=======================
ndb_1_cluster.log
=======================
2013-06-06 11:02:45 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=1224KB(100%) alloc=1224KB(0%) max=0B apply_epoch=26363618/15 latest_epoch=26363618/15
2013-06-06 11:02:45 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=1598KB(100%) alloc=1598KB(0%) max=0B apply_epoch=26363618/16 latest_epoch=26363618/16
2013-06-06 11:02:45 [MgmtSrvr] INFO -- Node 22: Event buffer status: used=1029KB(99%) alloc=1029KB(0%) max=0B apply_epoch=26363618/18 latest_epoch=26363618/18
2013-06-06 11:02:45 [MgmtSrvr] INFO -- Node 21: Event buffer status: used=1159KB(100%) alloc=1159KB(0%) max=0B apply_epoch=26363619/0 latest_epoch=26363619/0
2013-06-06 11:02:45 [MgmtSrvr] INFO -- Node 22: Event buffer status: used=378KB(26%) alloc=1415KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/4
2013-06-06 11:02:45 [MgmtSrvr] INFO -- Node 21: Event buffer status: used=378KB(27%) alloc=1391KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/4
2013-06-06 11:02:45 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=2538KB(79%) alloc=3206KB(0%) max=0B apply_epoch=26363619/1 latest_epoch=26363619/1
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=2038KB(63%) alloc=3186KB(0%) max=0B apply_epoch=26363619/2 latest_epoch=26363619/2
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=287KB(8%) alloc=3333KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/7
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=287KB(8%) alloc=3333KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/8
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=287KB(8%) alloc=3333KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/9
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=287KB(8%) alloc=3333KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/10
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=287KB(8%) alloc=3333KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/11
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=378KB(11%) alloc=3259KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/7
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=378KB(11%) alloc=3259KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/8
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=378KB(11%) alloc=3259KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/9
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=378KB(11%) alloc=3259KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/10
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=378KB(11%) alloc=3259KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/11
2013-06-06 11:02:46 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=378KB(11%) alloc=3259KB(0%) max=0B apply_epoch=26363619/4 latest_epoch=26363619/12
2013-06-06 11:05:28 [MgmtSrvr] INFO -- Node 21: Event buffer status: used=1880KB(100%) alloc=1880KB(0%) max=0B apply_epoch=26363698/2 latest_epoch=26363698/2
2013-06-06 11:05:28 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=3731KB(100%) alloc=3731KB(0%) max=0B apply_epoch=26363698/2 latest_epoch=26363698/2
2013-06-06 11:05:28 [MgmtSrvr] INFO -- Node 22: Event buffer status: used=1904KB(100%) alloc=1904KB(0%) max=0B apply_epoch=26363698/2 latest_epoch=26363698/2
2013-06-06 11:05:28 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=3809KB(100%) alloc=3809KB(0%) max=0B apply_epoch=26363698/2 latest_epoch=26363698/2
2013-06-06 11:05:29 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=0B(0%) alloc=4052KB(0%) max=0B apply_epoch=26363698/4 latest_epoch=26363698/4
2013-06-06 11:05:29 [MgmtSrvr] INFO -- Node 21: Event buffer status: used=0B(0%) alloc=2202KB(0%) max=0B apply_epoch=26363698/4 latest_epoch=26363698/4
2013-06-06 11:05:29 [MgmtSrvr] INFO -- Node 22: Event buffer status: used=0B(0%) alloc=2225KB(0%) max=0B apply_epoch=26363698/4 latest_epoch=26363698/4
2013-06-06 11:05:29 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=0B(0%) alloc=4131KB(0%) max=0B apply_epoch=26363698/5 latest_epoch=26363698/5
2013-06-06 16:05:29 [MgmtSrvr] WARNING -- Node 11: Transporter to node 12 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:29 [MgmtSrvr] INFO -- Node 21: Event buffer status: used=8427KB(100%) alloc=8427KB(0%) max=0B apply_epoch=26372414/7 latest_epoch=26372414/7
2013-06-06 16:05:29 [MgmtSrvr] INFO -- Node 23: Event buffer status: used=10439KB(100%) alloc=10439KB(0%) max=0B apply_epoch=26372414/7 latest_epoch=26372414/7
2013-06-06 16:05:29 [MgmtSrvr] INFO -- Node 24: Event buffer status: used=10227KB(100%) alloc=10227KB(0%) max=0B apply_epoch=26372414/7 latest_epoch=26372414/7
2013-06-06 16:05:29 [MgmtSrvr] INFO -- Node 22: Event buffer status: used=8575KB(100%) alloc=8575KB(0%) max=0B apply_epoch=26372414/7 latest_epoch=26372414/7
2013-06-06 16:05:30 [MgmtSrvr] WARNING -- Node 11: Transporter to node 12 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:30 [MgmtSrvr] WARNING -- Node 11: Transporter to node 12 reported error 0x16: The send buffer was full, but sleeping for a while solved - Repeated 3 times
2013-06-06 16:05:30 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:31 [MgmtSrvr] WARNING -- Node 12: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:31 [MgmtSrvr] WARNING -- Node 11: Transporter to node 12 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:31 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:31 [MgmtSrvr] WARNING -- Node 11: Transporter to node 12 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:31 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:31 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:38 [MgmtSrvr] INFO -- Node 11: Out of event buffer: nodefailure will cause event failures
2013-06-06 16:05:38 [MgmtSrvr] WARNING -- Node 12: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:38 [MgmtSrvr] INFO -- Node 12: Out of event buffer: nodefailure will cause event failures
2013-06-06 16:05:38 [MgmtSrvr] WARNING -- Node 12: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:38 [MgmtSrvr] WARNING -- Node 12: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:39 [MgmtSrvr] WARNING -- Node 11: GCP Monitor: GCP_COMMIT lag 7 seconds (max lag: 13)
2013-06-06 16:05:39 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:39 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved - Repeated 2 times
2013-06-06 16:05:39 [MgmtSrvr] WARNING -- Node 11: Transporter to node 24 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:40 [MgmtSrvr] WARNING -- Node 11: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:40 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:40 [MgmtSrvr] WARNING -- Node 11: Transporter to node 24 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:40 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:41 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved - Repeated 2 times
2013-06-06 16:05:41 [MgmtSrvr] WARNING -- Node 11: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:41 [MgmtSrvr] WARNING -- Node 11: Transporter to node 23 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:42 [MgmtSrvr] WARNING -- Node 11: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:45 [MgmtSrvr] WARNING -- Node 11: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved - Repeated 5 times
2013-06-06 16:05:46 [MgmtSrvr] ALERT -- Node 1: Node 11 Disconnected
2013-06-06 16:05:46 [MgmtSrvr] WARNING -- Node 12: Transporter to node 22 reported error 0x16: The send buffer was full, but sleeping for a while solved
2013-06-06 16:05:46 [MgmtSrvr] ALERT -- Node 11: Forced node shutdown completed. Caused by error 2303: 'System error, node killed during node restart by other node(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.
2013-06-06 16:05:46 [MgmtSrvr] ALERT -- Node 12: Node 11 Disconnected
2013-06-06 16:05:46 [MgmtSrvr] INFO -- Node 12: Communication to Node 11 closed
2013-06-06 16:05:46 [MgmtSrvr] ALERT -- Node 12: Network partitioning - arbitration required
2013-06-06 16:05:46 [MgmtSrvr] INFO -- Node 12: President restarts arbitration thread [state=7]
2013-06-06 16:05:46 [MgmtSrvr] ALERT -- Node 12: Forced node shutdown completed. Initiated by signal 11. Caused by error 6000: 'Error OS signal received(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.
2013-06-06 16:05:46 [MgmtSrvr] ALERT -- Node 1: Node 12 Disconnected
2013-06-06 16:05:47 [MgmtSrvr] INFO -- Mgmt server state: nodeid 11 freed, m_reserved_nodes 1, 21, 22, 23 and 24.
2013-06-06 16:16:36 [MgmtSrvr] INFO -- Mgmt server state: nodeid 11 reserved for ip 192.168.129.134, m_reserved_nodes 1, 11, 21, 22, 23 and 24.
2013-06-06 16:16:37 [MgmtSrvr] INFO -- Node 1: Node 11 Connected
2013-06-06 16:16:38 [MgmtSrvr] INFO -- Node 11: Node 2 Connected
2013-06-06 16:16:54 [MgmtSrvr] INFO -- Mgmt server state: nodeid 12 reserved for ip 192.168.129.135, m_reserved_nodes 1, 11, 12, 21, 22, 23 and 24.
2013-06-06 16:16:55 [MgmtSrvr] INFO -- Node 1: Node 12 Connected
2013-06-06 16:16:55 [MgmtSrvr] INFO -- Node 12: Node 2 Connected
2013-06-06 16:17:02 [MgmtSrvr] INFO -- Node 11: Start initiated (mysql-5.1.56 ndb-7.1.15)
2013-06-06 16:17:04 [MgmtSrvr] INFO -- Node 11: Start phase 0 completed
2013-06-06 16:17:04 [MgmtSrvr] INFO -- Node 11: Communication to Node 12 opened
2013-06-06 16:17:04 [MgmtSrvr] INFO -- Node 11: Waiting 30 sec for nodes 12 to connect, nodes [ all: 11 and 12 connected: 11 no-wait: ]
(...)

=======================
ndb_11_out.log
=======================
2013-03-19 00:37:37 [ndbd] INFO -- timerHandlingLab now: 2693647860 sent: 2693647664 diff: 196
alloc_chunk(39460 16) -
alloc_chunk(39476 16) -
alloc_chunk(39492 16) -
alloc_chunk(39508 16) -
alloc_chunk(50530 16) -
alloc_chunk(50546 16) -
alloc_chunk(50562 16) -
alloc_chunk(50578 16) -
alloc_chunk(50594 16) -
alloc_chunk(50610 16) -
alloc_chunk(50626 16) -
alloc_chunk(50642 16) -
alloc_chunk(50658 16) -
alloc_chunk(50674 16) -
alloc_chunk(50690 16) -
alloc_chunk(50706 16) -
2013-06-06 16:05:31 [ndbd] WARNING -- Ndb kernel thread 0 is stuck in: Job Handling elapsed=100
2013-06-06 16:05:31 [ndbd] INFO -- Watchdog: User time: 1465347 System time: 4055724
2013-06-06 16:05:31 [ndbd] INFO -- timerHandlingLab now: 7239393501 sent: 7239393323 diff: 178
alloc_chunk(50722 16) -
alloc_chunk(50738 16) -
alloc_chunk(50754 16) -
alloc_chunk(50770 16) -
alloc_chunk(50786 16) -
alloc_chunk(50802 16) -
alloc_chunk(50818 16) -
alloc_chunk(50834 16) -
alloc_chunk(50850 16) -
alloc_chunk(50866 16) -
alloc_chunk(50882 16) -
alloc_chunk(50898 16) -
alloc_chunk(50914 16) -
alloc_chunk(50930 16) -
alloc_chunk(50946 16) -
alloc_chunk(50962 16) -
alloc_chunk(50978 16) -
alloc_chunk(50994 16) -
alloc_chunk(51010 16) -
alloc_chunk(51026 16) -
alloc_chunk(51042 16) -
alloc_chunk(51058 16) -
alloc_chunk(51074 16) -
alloc_chunk(51090 16) -
alloc_chunk(51106 16) -
alloc_chunk(51122 16) -
alloc_chunk(51138 16) -
alloc_chunk(51154 16) -
alloc_chunk(51170 16) -
alloc_chunk(51186 16) -
alloc_chunk(51202 16) -
alloc_chunk(51218 16) -
alloc_chunk(51234 16) -
c_nodeStartMaster.blockGcp: 0 4294967040
m_gcp_save.m_counter: 161 m_gcp_save.m_max_lag: 1310
m_micro_gcp.m_counter: 131 m_micro_gcp.m_max_lag: 131
m_gcp_save.m_state: 0
m_gcp_save.m_master.m_state: 0
m_micro_gcp.m_state: 2
m_micro_gcp.m_master.m_state: 2
c_COPY_GCIREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_COPY_TABREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_CREATE_FRAGREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_DIH_SWITCH_REPLICA_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_EMPTY_LCP_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_GCP_COMMIT_Counter = [SignalCounter: m_count=1 0000000000000800]
c_GCP_PREPARE_Counter = [SignalCounter: m_count=0 0000000000000000]
c_GCP_SAVEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_SUB_GCP_COMPLETE_REP_Counter = [SignalCounter: m_count=0 0000000000000000]
c_INCL_NODEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_MASTER_GCPREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_MASTER_LCPREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_START_INFOREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_START_RECREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_STOP_ME_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_TC_CLOPSIZEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_TCGETOPSIZEREQ_Counter = [SignalCounter: m_count=1 0000000000001000]
m_copyReason: 0 m_waiting: 0 0
c_copyGCISlave: sender{Data, Ref} 11 f6000b reason: 0 nextWord: 0
Detected GCP stop(2)...sending kill to [SignalCounter: m_count=1 0000000000000800]
c_nodeStartMaster.blockGcp: 0 4294967040
m_gcp_save.m_counter: 0 m_gcp_save.m_max_lag: 1310
m_micro_gcp.m_counter: 0 m_micro_gcp.m_max_lag: 131
m_gcp_save.m_state: 0
m_gcp_save.m_master.m_state: 0
m_micro_gcp.m_state: 2
m_micro_gcp.m_master.m_state: 2
c_COPY_GCIREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_COPY_TABREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_CREATE_FRAGREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_DIH_SWITCH_REPLICA_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_EMPTY_LCP_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_GCP_COMMIT_Counter = [SignalCounter: m_count=1 0000000000000800]
c_GCP_PREPARE_Counter = [SignalCounter: m_count=0 0000000000000000]
c_GCP_SAVEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_SUB_GCP_COMPLETE_REP_Counter = [SignalCounter: m_count=0 0000000000000000]
c_INCL_NODEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_MASTER_GCPREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_MASTER_LCPREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_START_INFOREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_START_RECREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_STOP_ME_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_TC_CLOPSIZEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_TCGETOPSIZEREQ_Counter = [SignalCounter: m_count=1 0000000000001000]
m_copyReason: 0 m_waiting: 0 0
c_copyGCISlave: sender{Data, Ref} 11 f6000b reason: 0 nextWord: 0
file[0] status: 2 type: 1 reqStatus: 0 file1: 2 1 0
c_nodeStartMaster.blockGcp: 0 4294967040
m_gcp_save.m_counter: 0 m_gcp_save.m_max_lag: 1310
m_micro_gcp.m_counter: 0 m_micro_gcp.m_max_lag: 131
m_gcp_save.m_state: 0
m_gcp_save.m_master.m_state: 0
m_micro_gcp.m_state: 2
m_micro_gcp.m_master.m_state: 2
c_COPY_GCIREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_COPY_TABREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_CREATE_FRAGREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_DIH_SWITCH_REPLICA_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_EMPTY_LCP_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_GCP_COMMIT_Counter = [SignalCounter: m_count=1 0000000000000800]
c_GCP_PREPARE_Counter = [SignalCounter: m_count=0 0000000000000000]
c_GCP_SAVEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_SUB_GCP_COMPLETE_REP_Counter = [SignalCounter: m_count=0 0000000000000000]
c_INCL_NODEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_MASTER_GCPREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_MASTER_LCPREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_START_INFOREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_START_RECREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_STOP_ME_REQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_TC_CLOPSIZEREQ_Counter = [SignalCounter: m_count=0 0000000000000000]
c_TCGETOPSIZEREQ_Counter = [SignalCounter: m_count=1 0000000000001000]
m_copyReason: 0 m_waiting: 0 0
c_copyGCISlave: sender{Data, Ref} 11 f6000b reason: 0 nextWord: 0
2013-06-06 16:05:45 [ndbd] INFO -- Node 11 killed this node because GCP stop was detected
2013-06-06 16:05:45 [ndbd] INFO -- NDBCNTR (Line: 276) 0x00000002
2013-06-06 16:05:45 [ndbd] INFO -- Error handler shutting down system
2013-06-06 16:05:45 [ndbd] INFO -- Error handler shutdown completed - exiting
2013-06-06 16:05:46 [ndbd] ALERT -- Node 11: Forced node shutdown completed. Caused by error 2303: 'System error, node killed during node restart by other node(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.
2013-06-06 16:16:37 [ndbd] INFO -- Angel pid: 26971 started child: 26972
2013-06-06 16:16:37 [ndbd] INFO -- Configuration fetched from '192.168.xxx.130:1186', generation: 2
NDBMT: non-mt
2013-06-06 16:16:37 [ndbd] INFO -- NDB Cluster -- DB node 11
2013-06-06 16:16:37 [ndbd] INFO -- mysql-5.1.56 ndb-7.1.15 --
2013-06-06 16:16:37 [ndbd] INFO -- WatchDog timer is set to 6000 ms
2013-06-06 16:16:37 [ndbd] INFO -- numa_set_interleave_mask(numa_all_nodes) : OK
2013-06-06 16:16:37 [ndbd] INFO -- Ndbd_mem_manager::init(1) min: 2062Mb initial: 2082Mb
Adding 493Mb to ZONE_LO (1,15759)
Instantiating DBSPJ instanceNo=0
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
Adding 1591Mb to ZONE_LO (15760,50888)
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310392 for In/Deflate buffer
WOPool::init(61, 9)
(...)
===============================================================

As said before, I could reproduce the issue on the system. Because the server is in use, I made a backup of the data to reproduce the issue on a local copy of the server. Because they are virtual machines, the test environment is nearly 100% identical to the prod system, with the difference that I use VMware Player to run all the virtual servers. Surprisingly, I was not able to reproduce the problem in my test environment: the query took around 2:30 min but completed as expected.

Does anybody have an idea what is happening on my prod system? Why does the send buffer overflow? How can I adjust the size of the buffer? (I found no configuration entry for this.)


Thanks in advance!
Malte
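
[Editor's note: the transporter send buffer is in fact configurable in config.ini. `SendBufferMemory` sets the per-transporter buffer and `TotalSendBufferMemory` an optional per-node total. A minimal sketch follows — the sizes are illustrative assumptions, not tuned recommendations:]

```ini
# config.ini fragment (sketch) -- sizes are illustrative, not recommendations.

[TCP DEFAULT]
# Per-transporter send buffer (default 2M). Raising it can absorb bursts
# that otherwise trigger "The send buffer was full" warnings.
SendBufferMemory=8M

[NDBD DEFAULT]
# Optional per-node cap shared across all of a node's transporters.
TotalSendBufferMemory=64M
```

[A rolling restart of the affected nodes is needed for the new values to take effect.]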

MySQL Cluster 7.3 is GA (no replies)

MySQL Cluster 7.3 has now been declared GA! This means that you can deploy it in your live systems and get support from Oracle.

The MySQL Cluster Auto-Installer is a browser-based GUI that will provision a well configured, distributed Cluster in minutes, ready for test, development or production environments.

MySQL Cluster now supports Foreign Keys!
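
A quick illustration of the new capability (the table and column names below are made up for the example):

```sql
-- Hypothetical parent/child tables; foreign keys now work with ENGINE=NDB.
CREATE TABLE parent (
  id INT NOT NULL PRIMARY KEY
) ENGINE=NDB;

CREATE TABLE child (
  id INT NOT NULL PRIMARY KEY,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parent(id)
    ON DELETE CASCADE
) ENGINE=NDB;
```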

Node.js is a platform that allows fast, scalable network applications (typically web applications) to be developed using JavaScript. Node.js is designed for a single thread to serve millions of client connections in real-time – this is achieved by an asynchronous, event-driven architecture – just like MySQL Cluster, making them a great match.
The MySQL Cluster NoSQL Driver for Node.js is implemented as a module for the V8 engine, providing Node.js with a native, asynchronous JavaScript interface that can be used to both query and receive result sets directly from MySQL Cluster, without transformations to SQL. As an added benefit, you can direct the driver to use SQL so that the same API can be used with InnoDB tables.

MySQL Cluster Connection Thread scalability increases the throughput over each connection between the data nodes and application nodes (such as MySQL Servers) by as much as 8x.
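
Related to connection scalability, a single MySQL Server can also open several connections to the data nodes via the (pre-existing) connection pool option. A sketch of a my.cnf fragment — the pool size here is an arbitrary example, and each pooled connection needs its own [api]/[mysqld] slot in config.ini:

```ini
# my.cnf fragment (sketch) -- one mysqld using 4 cluster connections.
[mysqld]
ndbcluster
ndb-cluster-connection-pool=4
```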

More details can be found in http://www.clusterdb.com/mysql-cluster/mysql-cluster-7-3-is-now-ga/

Newbie question about auto-installer and Windows 2008 and ssh (1 reply)

I am new to the auto-installer and basically MySQL in general, but have a question about the ssh part on the first screen after I run setup.bat.

What do I do when I run this from a Windows 2008 server and try to connect to another Windows 2008 server?
Do I need to install an SSH server on Windows? Or just provide some Windows credentials and let it log in using those?

I tried to manually enter the information (CPUs, RAM, etc.) on the second screen, but when I hit the last screen, it would not deploy the software.
I understand why it wouldn't deploy, but am confused about how to finish the setup.

Any help is greatly appreciated. I don't see much about this in the documentation or on the web.

J

best clustering plan for keep a DB HighAvaiable (no replies)

Hi,

I have an application that uses a MySQL database. I installed my MySQL server in a VM on VMware vSphere, and I have shared storage and two separate physical servers.

I want to configure my MySQL server so that if the first server fails for any reason, users can continue working — or perhaps just need a simple reconnect?

Thanks
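
[Editor's note: for comparison, an NDB Cluster approach would put one data node on each physical server with two replicas, so either server can fail without losing the database. A minimal config.ini sketch — the host names are placeholders:]

```ini
# Minimal two-host sketch; host names are placeholders.
[NDBD DEFAULT]
# Two replicas of every fragment, one per data node / physical server.
NoOfReplicas=2

[NDB_MGMD]
HostName=mgmt-host

[NDBD]
HostName=server-a

[NDBD]
HostName=server-b

# Two slots for MySQL Server / API nodes.
[MYSQLD]
[MYSQLD]
```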

MySQL Cluster: Node.js (no replies)

why the sql statement executed on mysql cluster takes such a long time? (2 replies)

I have run into some problems when using MySQL Cluster (7.1).
There are two data nodes in the cluster.
Table T1 (index a) and table T2 (index b) each have about 180,000 records.
The SQL statement "select * from T1 left join T2 on T1.a=T2.b where T2.b is null;"
takes about 10.7s when executed on single node A (only data node A running)
and about 7.2s when executed on single node B (only data node B running),
but about 38.62s when both data nodes are running.
I want to know:
a) why does the SQL statement take such a long time?
b) what is the difference between executing the statement on a single running data node and on two data nodes (in the same node group)?
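
[Editor's note: one thing worth checking — an assumption, since the schema isn't shown: in Cluster 7.1 the join itself runs on the MySQL server, which fetches rows from the data nodes for each nested-loop iteration, so a large anti-join can pay a network round trip per outer row; pushdown of joins to the data nodes only arrived in later releases. EXPLAIN shows how the join is being driven, and the anti-join can also be phrased with NOT EXISTS to compare timings:]

```sql
-- Inspect the chosen plan (join order, access types, rows examined).
EXPLAIN SELECT * FROM T1 LEFT JOIN T2 ON T1.a = T2.b WHERE T2.b IS NULL;

-- Equivalent anti-join phrasing to compare timings against.
SELECT * FROM T1 WHERE NOT EXISTS (SELECT 1 FROM T2 WHERE T2.b = T1.a);
```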