Channel: MySQL Forums - NDB clusters

Data node won't start (no replies)

Hello!

I'm trying to configure a MySQL Cluster on 4 machines, all running Debian.

MySQL Cluster Environment:

MGM - 192.168.1.10
NDB1 - 192.168.1.30
NDB2 - 192.168.1.40
MYSQLD - 192.168.1.20

I installed everything from the binary releases for generic Linux.

1. I started the MGM node: ndb_mgmd -f /var/lib/mysql-cluster/config.ini
The config.ini file:

[ndbd default]

NoOfReplicas=2
DataMemory=80M
IndexMemory=18M

[tcp default]

portnumber=2202

[ndb_mgmd]

id = 1
hostname = 192.168.1.10
datadir=/var/lib/mysql-cluster

[ndbd]

id = 3
hostname = 192.168.1.30
datadir = /usr/local/mysql/data

[ndbd]

id = 4
hostname = 192.168.1.40
datadir = /usr/local/mysql/data

[mysqld]

id = 2
hostname=192.168.1.20

2. Now I start the data nodes with: ndbd
and wait until the nodes have started.

3. Check on MGM node with: ndb_mgm

ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 nodes
id=3 @192.168.1.30 (mysql-5.6.23 ndb-7.4.5, Nodegroup: 0, *)
id=4 (not connected, accepting connect from 192.168.1.40)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.10 (mysql-5.6.23 ndb-7.4.5)

[mysqld(API)] 1 node(s)
id=5 (not connected, accepting connect from 192.168.1.20)

4. Then I try to start the MySQL server on the data node that is already started (id=3):

shell> /etc/init.d/mysql start
Starting MySQL ....(...) The server quit without updating PID file (/usr/local/mysql/data/NDB1.pid)
[FAIL...failed!

The my.cnf file has:

[mysqld]

ndbcluster
ndb-connectstring = 192.168.1.10

[mysql_cluster]

ndb-connectstring = 192.168.1.10



The MySQL server itself is installed correctly, because with a default my.cnf (without the cluster lines) the server starts.
The same thing happens with the other data node.
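
For completeness, this is how I am trying to dig further (the error-log file name is my assumption, based on the PID file path /usr/local/mysql/data/NDB1.pid):

# check the mysqld error log for the real reason the server quits
tail -n 50 /usr/local/mysql/data/NDB1.err

# or start the server by hand and watch the log while it fails
/usr/local/mysql/bin/mysqld_safe &
tail -f /usr/local/mysql/data/NDB1.err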

What should I do? Is there something wrong? I have searched a lot on the Internet and in your forums, but I couldn't find a solution.

Thank you very much,

Cristian

PK Auto increment (no replies)

I am being told that because our DB is clustered every InnoDB table needs to have a Primary Key that is an auto_increment value.

Is that true, or am I possibly only getting partial information from the individual administering our DB?
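
For what it's worth, my current understanding (which I would like confirmed) is that an NDB table only needs some explicit primary key, not necessarily an auto_increment one, and that NDB adds a hidden key when none is declared. A minimal sketch of what I mean (table and column names are made up for illustration):

CREATE TABLE example_with_natural_pk (
code CHAR(8) NOT NULL,   -- natural key, no auto_increment
name VARCHAR(64),
PRIMARY KEY (code)
) ENGINE=NDBCLUSTER;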

zero values in ndb_binlog_index (no replies)

Hello. I am trying to set up replication between two clusters.
If I understand correctly, with the following configuration mysqld should write events into ndb_binlog_index even without a working slave (see the sketch after the option list):

server-id=3
log-bin-trust-function-creators = 1
log-bin=log-bin.log
binlog-format=ROW
ndb-log-transaction-id=1
ndb-log-update-as-write=0
ndb-log-apply-status=1
replicate-wild-do-table=my_test.replication%
replicate-wild-do-table=mysql.ndb_apply_status
replicate-wild-do-table=mysql.ndb_replication
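
(The sketch referred to above: my assumption, which I have not verified, is that the orig_server_id and orig_epoch columns are only populated when --ndb-log-orig is enabled, so this is the option I am considering adding to the same section:)

ndb-log-orig=1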

It does write rows, but in a very strange way:

mysql-office> select file,orig_server_id,orig_epoch,position,next_position from mysql.ndb_binlog_index;
+------------------+----------------+------------+----------+---------------+
| file | orig_server_id | orig_epoch | position | next_position |
+------------------+----------------+------------+----------+---------------+
| ./log-bin.000001 | 0 | 0 | 217 | 596 |
+------------------+----------------+------------+----------+---------------+
1 row in set (0,00 sec)

mysql-office> show binlog events;
+----------------+-----+-------------+-----------+-------------+-------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+----------------+-----+-------------+-----------+-------------+-------------------------------------------------------------+
| log-bin.000001 | 4 | Format_desc | 3 | 120 | Server ver: 5.6.23-ndb-7.4.5-cluster-gpl-log, Binlog ver: 4 |
| log-bin.000001 | 120 | Query | 3 | 217 | use `my_test`; truncate table replication |
| log-bin.000001 | 217 | Query | 3 | 285 | BEGIN |
| log-bin.000001 | 285 | Table_map | 3 | 343 | table_id: 79 (my_test.replication) |
| log-bin.000001 | 343 | Table_map | 3 | 409 | table_id: 71 (mysql.ndb_apply_status) |
| log-bin.000001 | 409 | Write_rows | 3 | 474 | table_id: 71 |
| log-bin.000001 | 474 | Write_rows | 3 | 527 | table_id: 79 flags: STMT_END_F |
| log-bin.000001 | 527 | Query | 3 | 596 | COMMIT |
+----------------+-----+-------------+-----------+-------------+-------------------------------------------------------------+
8 rows in set (0,00 sec)

That is the simple situation (no working slave). When replication is actually set up and working, everything is fine (data changes on both clusters, and there are non-zero values in ndb_apply_status) except for ndb_binlog_index.

Is this a bug, or a fault in my configuration?

Cluster :: Cluster Crash (1 reply)

We have 2 management nodes, 6 API nodes and 2 data nodes.

We are running mysql-5.5.20 ndb-7.2.5 (nodes 4 and 7 are API nodes).

Today the data node crashed with a forced shutdown twice; the error messages are below.

The error message in the management node log:

Forced node shutdown completed. Caused by error 2341: 'Internal program error (failed ndbrequire)(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'

150505 10:32:34 [ERROR] Got error 4028 when reading table './e3/e3_user_account'
150505 10:32:34 [ERROR] Got error 4028 when reading table './e3/e3_user_student_tw


The error messages on the data node:

Time: Tuesday 5 May 2015 - 10:32:30
Status: Temporary error, restart node
Message: Internal program error (failed ndbrequire) (Internal error, programming error or missing error message, please report a bug)
Error: 2341
Error data: DbspjMain.cpp
Error object: DBSPJ (Line: 888) 0x00000002
Program: ndbd
Pid: 26341
Version: mysql-5.5.20 ndb-7.2.5
Trace: /usr/local/mysql/data/ndb_3_trace.log.7 [t1..t1]
***EOM***

Time: Tuesday 5 May 2015 - 17:23:10
Status: Temporary error, restart node
Message: Internal program error (failed ndbrequire) (Internal error, programming error or missing error message, please report a bug)
Error: 2341
Error data: DbspjMain.cpp
Error object: DBSPJ (Line: 888) 0x00000002
Program: ndbd
Pid: 1246
Version: mysql-5.5.20 ndb-7.2.5
Trace: /usr/local/mysql/data/ndb_3_trace.log.8 [t1..t1]
***EOM***
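
In the meantime, on my assumption that DBSPJ is the kernel block handling pushed-down joins, we are considering disabling join pushdown on the API nodes to avoid that code path (a guess at a workaround, not a confirmed fix):

SET GLOBAL ndb_join_pushdown = OFF;  -- run on each mysqld / API node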

NDB API: ndb_ndbapi_simple can't connect (no replies)

I have a working cluster built as follows:
+--------+----------+-------+---------+-----------+---------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+-------+---------+-----------+---------+
| 49 | ndb_mgmd | ndb03 | running | | 7.4.4 |
| 51 | ndb_mgmd | ndb05 | running | | 7.4.4 |
| 53 | ndb_mgmd | ndb04 | running | | 7.4.4 |
| 55 | ndb_mgmd | ndb06 | running | | 7.4.4 |
| 1 | ndbd | ndb03 | running | 0 | 7.4.4 |
| 2 | ndbd | ndb05 | running | 0 | 7.4.4 |
| 3 | ndbd | ndb04 | running | 1 | 7.4.4 |
| 4 | ndbd | ndb06 | running | 1 | 7.4.4 |
| 50 | mysqld | ndb03 | running | | 7.4.4 |
| 52 | mysqld | ndb05 | running | | 7.4.4 |
| 54 | mysqld | ndb04 | running | | 7.4.4 |
| 56 | mysqld | ndb06 | running | | 7.4.4 |
+--------+----------+-------+---------+-----------+---------+

I built the NDB API examples.
When I run:
[bobp@ndb03 ndbapi-examples]$ ./ndb_ndbapi_simple ndb03 ndb03
Cluster management server was not ready within 30 secs.

What is the correct connection syntax?
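
To rule out a plain connectivity problem, I can reach the management node the same way from the shell (assuming node 49 on ndb03 listens on the default port 1186); if this works but the example still times out, the problem is presumably the connect string I pass to the program:

ndb_mgm -c ndb03:1186 -e show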

ndbmtd unexpected high memory usage (1 reply)

Hello,
I am experiencing higher system memory usage than expected when using ndbmtd instead of ndbd. By my count from config.ini each node should be using around 13GB of memory, but looking at the ndbmtd parent process it has consumed around 36GB instead. I know that the RedoBuffer is a "per LDM" value and that each LDM thread is using the value from RedoBuffer. Are there other values that this is true for? It's consuming more than twice what I expect. I've used ndb_config to make sure that the "actual live" values match config.ini values. Perhaps it has something to do with my ThreadConfig and extra threads consuming memory? Any insight is welcome.

# ps aux |grep ndbmtd
root 2031 28.1 47.4 38620704 35290504 ? Sl May06 253:19 /usr/bin/ndbmtd --ndb-nodeid=11

# free
              total       used       free     shared    buffers     cached
Mem:       74377352   38038176   36339176        604     216968    1508032
-/+ buffers/cache:     36313176   38064176
Swap:       3997692          0    3997692

from config.ini

NoOfReplicas = 2
DataMemory = 6G
IndexMemory = 1G
SharedGlobalMemory = 2G
DiskPageBufferMemory = 2G
RedoBuffer = 512M
FragmentLogFileSize = 64M
NoOfFragmentLogFiles = 128
ThreadConfig=ldm={count=4,cpubind=1,2,3,4},main={count=1,cpuset=5,13},io={count=1,cpuset=5,13},rep={count=1,cpuset=5,13},tc={count=2,cpuset=6,7,14,15},send={count=1,cpuset=6,7,14,15},recv={count=1,cpuset=6,7,14,15}
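
For reference, this is the rough tally behind my ~13GB estimate, assuming that only RedoBuffer is multiplied per LDM thread (that assumption is exactly what I am asking about):

DataMemory                 6.0 GB
IndexMemory                1.0 GB
SharedGlobalMemory         2.0 GB
DiskPageBufferMemory       2.0 GB
RedoBuffer 512M x 4 LDM    2.0 GB
--------------------------------
expected total           ~13.0 GB   (observed in ps above: ~36 GB)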

Small question (no replies)

Hi all,

I have been setting up and studying MySQL Cluster for a few months now.

Soon I have to give a short presentation on what I have been doing for these two months, but I have a small problem: my mind has gone blank on one point.

I simply cannot remember: once the complete cluster is started, what happens if the management node (ndb_mgmd) is shut down? Is the system still 100% functional, or does nothing work?

So that was just a quick, and probably silly, question.

Thank you in advance,

Florian.

Building Mysql cluster on Hyper-v (no replies)

Hello all,

When creating a MySQL Cluster with the Auto-Installer on Hyper-V and trying to add the local machine as a host, it fails to get the resource information of the machine, giving me [error: Number_of_Processors]. It also does not detect the right MySQL install directory or MySQL data directory.

Is this related to creating the cluster on Hyper-V?

Thanks a lot.

mysql_data files deleted (1 reply)

Hi,

I have deleted the SQL node in my test environment. Is it possible to access the ndb_data directory with a newly installed SQL node setup? Are both mysql_data and ndb_data required to get correct results from the cluster environment?

Mysql NDB Cluster Backup Issue (no replies)

Hi All,

I hope you are doing well.

I need your help regarding a MySQL NDB Cluster backup.

I am facing an issue that some of you may have faced previously, and I think you have a solution for it. Please go through the details below and suggest a solution.


I'm trying to run a backup from ndb_mgm:

start backup

and after some time I get this error:

ndb_mgm> START BACKUP
Connected to Management Server at: 198.168.16.101:1186
Waiting for completed, this may take several minutes
Backup failed
*  3001: Could not start backup
*        Too many triggers: Permanent error: Application error
ndb_mgm> Node 3: Backup 20150531 started from 2 has been aborted. Error: 4237
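
My own guess, which I have not confirmed, is that this is related to the MaxNoOfTriggers limit, since the backup creates internal triggers; this is the change I am considering in config.ini (the value is only an example, and it would need a rolling restart of the data nodes):

[ndbd default]
MaxNoOfTriggers = 4096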

So please suggest a solution for this.


Thanks,
Abhijit Chikhalikar
09028553747/07383571197

I have trouble with MySQL Cluster 7.4.6. (no replies)

I am having trouble with MySQL Cluster 7.4.6.
I have about 230 databases, with roughly 75,000 tables in total, that need to be loaded into the cluster.
I ran into the following error with MySQL Cluster 7.4.6:

Got error 707 'No more table metadata records (increase MaxNoOfTables)' from NDB.

My questions:
1. How can I increase MaxNoOfTables beyond 23,020?
2. If that is not possible, how should I fix this?
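
For context, this is where I understand the parameter goes, in the [ndbd default] section of config.ini, followed by a rolling restart of the data nodes (the value below is only an example; what the real upper limit is, is exactly my question):

[ndbd default]
MaxNoOfTables = 20320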

mysql clustering (no replies)

Hello, I would like to deploy a MySQL Cluster service. I'm a newbie at this, so I don't know how to begin. Are there any recommended documents for reading?

Application nodes do not join cluster (1 reply)

Hello,

I am facing an issue during my cluster start.

My configuration (which node is located on which system):
A4TMQL103: id 1 & 3
A4TMQL104: id 2 & 4
A4TMQL105: id 5
A4TMQL106: id 6

My my.ini on A4TMQL103/104:

[mysqld]
# Options for mysqld process:
innodb=OFF
ndbcluster=ON
default-storage-engine=ndbcluster
default-tmp-storage-engine=ndbcluster
ndb-connectstring=A4TMQL103.A41MGT.LOCAL,A4TMQL104.A41MGT.LOCAL
basedir=F:\mysql
log-error=F:\Error.log
log-warnings
ndb-nodeid=3
general-log=true

My config.ini on A4TMQL103/104:

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
DataDir=h:/cluster-data # Directory for each data node's data files
# Forward slashes used in directory path,
# rather than backslashes. This is correct;
# see Important note in text
DataMemory=80M # Memory allocated to data storage
IndexMemory=18M # Memory allocated to index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.

[ndb_mgmd]
# Management process options:
NodeId=1
HostName=A4TMQL103.A41MGT.LOCAL
DataDir=s:/cluster-logs
[ndb_mgmd]
# Management process options:
NodeId=2
HostName=A4TMQL104.A41MGT.LOCAL
DataDir=s:/cluster-logs

[ndbd]
# Options for data node "A":
NodeId=5
HostName=A4TMQL105.A41MGT.LOCAL
[ndbd]
NodeId=6
# Options for data node "B":
HostName=A4TMQL106.A41MGT.LOCAL

[mysqld]
# SQL node options:
NodeId=3
HostName=A4TMQL103.A41MGT.LOCAL
[mysqld]
# SQL node options:
NodeId=4
HostName=A4TMQL104.A41MGT.LOCAL



My current cluster state:

[ndbd(NDB)] 2 node(s)
id=5 @10.251.82.81 (mysql-5.6.24 ndb-7.4.6, Nodegroup: 0, *)
id=6 @10.251.82.78 (mysql-5.6.24 ndb-7.4.6, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @10.251.82.79 (mysql-5.6.24 ndb-7.4.6)
id=2 @10.251.82.80 (mysql-5.6.24 ndb-7.4.6)

[mysqld(API)] 2 node(s)
id=3 (not connected, accepting connect from A4TMQL103.A41MGT.LOCAL)
id=4 (not connected, accepting connect from A4TMQL104.A41MGT.LOCAL)


At this point, everything looks fine.

Now I have problems starting my application nodes (mysqld).
Even though the process is running, the state shown via ndb_mgm does not change.
It seems the application node never talks to the management nodes, even though it is on the same system.


mysqld log output:

2015-06-03 13:17:10 2324 [Note] Plugin 'FEDERATED' is disabled.
2015-06-03 13:17:10 2324 [Warning] The option innodb (skip-innodb) is deprecated and will be removed in a future release
2015-06-03 13:17:10 2324 [Note] Plugin 'InnoDB' is disabled.
2015-06-03 13:17:10 2324 [Note] NDB: Changed global value of binlog_format from STATEMENT to MIXED
2015-06-03 13:17:10 2324 [Note] NDB: NodeID is 3, management server 'A4TMQL103.A41MGT.LOCAL:1186'
2015-06-03 13:17:41 2324 [Note] NDB[0]: NodeID: 3, no storage nodes connected (timed out)
2015-06-03 13:17:41 2324 [Warning] NDB: server id set to zero - changes logged to bin log with server id zero will be logged with another server id by slave mysqlds
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Starting...
2015-06-03 13:17:41 2324 [Note] NDB Util: Starting...
2015-06-03 13:17:41 2324 [Note] NDB Index Stat: Starting...
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Started
2015-06-03 13:17:41 2324 [Note] NDB Index Stat: Wait for server start completed
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Setting up
2015-06-03 13:17:41 2324 [Note] NDB Util: Wait for server start completed
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Created schema Ndb object, reference: 0x80040003, name: 'Ndb Binlog schema change monitoring'
2015-06-03 13:17:41 2324 [Note] Server hostname (bind-address): '*'; port: 3306
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Created injector Ndb object, reference: 0x80050003, name: 'Ndb Binlog data change monitoring'
2015-06-03 13:17:41 2324 [Note] IPv6 is available.
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Setup completed
2015-06-03 13:17:41 2324 [Note] - '::' resolves to '::';
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Wait for server start completed
2015-06-03 13:17:41 2324 [Note] Server socket created on IP: '::'.
2015-06-03 13:17:41 2324 [Note] Event Scheduler: Loaded 0 events
2015-06-03 13:17:41 2324 [Note] mysqld: ready for connections.
Version: '5.6.24-ndb-7.4.6-cluster-gpl-log' socket: '' port: 3306 MySQL Cluster Community Server (GPL)
2015-06-03 13:17:41 2324 [Note] NDB Index Stat: Wait for cluster to start
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Check for incidents
2015-06-03 13:17:41 2324 [Note] NDB Util: Wait for cluster to start
2015-06-03 13:17:41 2324 [Note] ndb_index_stat_proc: Created Ndb object, reference: 0x80060003, name: 'Ndb Index Statistics monitoring'
2015-06-03 13:17:41 2324 [Note] NDB Binlog: Wait for cluster to start
2015-06-03 13:17:41 2324 [Note] NDB Util: Started
2015-06-03 13:17:41 2324 [Note] NDB Index Stat: Started
2015-06-03 13:18:11 2324 [Warning] NDB : Tables not available after 30 seconds. Consider increasing --ndb-wait-setup value


Result from the cluster log:

2015-06-03 13:17:10 [MgmtSrvr] INFO -- Nodeid 3 allocated for API at 9.251.82.79
2015-06-03 13:17:10 [MgmtSrvr] INFO -- Node 3: mysqld --server-id=0

Content of ndb_1_out.log (not looking very helpful):

==INITIAL==
==INITIAL==
==CONFIRMED==
==CONFIRMED==
Failed to flush buffer to socket, errno: 34
==INITIAL==
==CONFIRMED==
==INITIAL==
==CONFIRMED==
==CONFIRMED==
Failed to flush buffer to socket, errno: 34
==CONFIRMED==
==CONFIRMED==




One annoying thing: why does node 3 (application node 1) get allocated the IP 9.251.82.79?
All other machines are located in the 10.x.x.x subnet.

Could this cause trouble? I tried binding the adapter to the 10.x.x.x address of the machine, which succeeded, but it still does not connect to the cluster.
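
To check whether name resolution explains the odd 9.x address, this is what I plan to run on A4TMQL103 itself (plain Windows tools plus ndb_mgm, using the hostnames from my config):

nslookup A4TMQL103.A41MGT.LOCAL
ping -n 1 A4TMQL103.A41MGT.LOCAL
ndb_mgm -c A4TMQL103.A41MGT.LOCAL:1186,A4TMQL104.A41MGT.LOCAL:1186 -e show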

And what does this log entry mean:

2015-06-03 13:17:41 2324 [Warning] NDB: server id set to zero - changes logged to bin log with server id zero will be logged with another server id by slave mysqlds




All machines are in the same VLAN, all firewalls are turned off, and my user has the domain admin role.

I really need some input to solve this.

Best regards,
Kai

ndbinfo resources table - related config params (no replies)

I'm trying to use the resources table to see if buffers are allocated correctly.
I haven't been able to find good documentation on which config parameters apply to the numbers I'm looking at.
In the data shown below it seems I should have larger numbers for RESERVED, FILE_BUFFERS, JOBBUFFER and TRANSPORTER_BUFFERS. I'm not using disk-based tables, so DISK_PAGE_BUFFER should not matter.

Can someone recommend the config parameters I should be looking at?

+---------+---------------------+----------+--------+--------+
| node_id | resource_name | reserved | used | max |
+---------+---------------------+----------+--------+--------+
| 2 | RESERVED | 167942 | 217936 | 402257 |
| 2 | DISK_OPERATIONS | 0 | 0 | 0 |
| 2 | DISK_RECORDS | 0 | 2 | 0 |
| 2 | DATA_MEMORY | 377608 | 213045 | 377608 |
| 2 | JOBBUFFER | 720 | 135 | 720 |
| 2 | FILE_BUFFERS | 2176 | 2136 | 2176 |
| 2 | TRANSPORTER_BUFFERS | 3068 | 377 | 3835 |
| 2 | DISK_PAGE_BUFFER | 2240 | 2240 | 2240 |
| 2 | QUERY_MEMORY | 0 | 0 | 0 |
| 2 | SCHEMA_TRANS_MEMORY | 64 | 1 | 0 |
| 3 | RESERVED | 168014 | 217864 | 402257 |
| 3 | DISK_OPERATIONS | 0 | 0 | 0 |
| 3 | DISK_RECORDS | 0 | 2 | 0 |
| 3 | DATA_MEMORY | 377608 | 212959 | 377608 |
| 3 | JOBBUFFER | 720 | 135 | 720 |
| 3 | FILE_BUFFERS | 2176 | 2136 | 2176 |
| 3 | TRANSPORTER_BUFFERS | 3068 | 391 | 3835 |
| 3 | DISK_PAGE_BUFFER | 2240 | 2240 | 2240 |
| 3 | QUERY_MEMORY | 0 | 0 | 0 |
| 3 | SCHEMA_TRANS_MEMORY | 64 | 1 | 0 |
+---------+---------------------+----------+--------+--------+
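
One assumption I am working from, which I have not been able to confirm, is that the numbers in ndbinfo.resources are counted in 32KB pages; under that assumption I convert them to bytes before comparing with config.ini:

SELECT node_id, resource_name,
reserved * 32768 AS reserved_bytes,
used * 32768 AS used_bytes,
`max` * 32768 AS max_bytes
FROM ndbinfo.resources;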

Create table fails with "table 'xxx' is full" error (1 reply)

Hi everyone, I'm totally new to MySQL Cluster and recently ran into a strange problem. When I tried to create a new table in a database, I got the error "ERROR 1114 (HY000): The table 'xxx' is full". Only after dropping another table first would the CREATE TABLE succeed. I have 229 tables in total in the system (all schemata). I also noticed that "max_data_size" and "data_free" for certain databases are 0M, yet rows can still be inserted and "data_size" is larger than 0M; I would like to know why.
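
Two things I am planning to look at (assuming my version has the ndbinfo database): the underlying NDB error behind error 1114, and the actual DataMemory/IndexMemory usage per data node:

SHOW WARNINGS;  -- run right after the failing CREATE TABLE to see the underlying NDB error
SELECT node_id, memory_type, used, total FROM ndbinfo.memoryusage;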

Thanks

Jeff

MySQL cluster developing team is great (2 replies)

The MySQL Cluster development team is great!
You develop a great database for millions of people all over the world.

I just succeeded in installing it on our company's Linux server and Windows server,
and I hope MySQL Cluster can really deliver over 99.999% high availability and high scalability.

Thank you very much

Is it possible to make MySQL Cluster directly show a connecting client's detailed error info in the logs, not just an "unknown error code"? (no replies)

Dear MySQL cluster pioneers,

I just installed mysql-cluster-gpl-7.4.6-linux-glibc2.5-x86_64.tar on Linux servers and a Windows 2008 server. Is it possible to make MySQL Cluster show a connecting client's detailed error information directly in the logs, instead of just an "illegal error code 240"? I mean the detailed errors for the cluster's client connections, not only for the cluster system itself.

Even when I use the cluster's management console to enable cluster log debug info, add LogLevelError, LogLevelInfo and LogDestination (all outputs) to the config.ini file, or use the perror command, I cannot get any detailed error information that would give me clues towards an effective solution.

I have turned on detailed logging of every query each client executes, but how can I enable the cluster to show the detailed information it returns for every query?
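
For reference, these are the exact commands I have been using so far (syntax from memory, so it may need adjusting): perror with the NDB flag to decode an NDB error code, and raising the cluster log levels from ndb_mgm:

perror --ndb 240
ndb_mgm -e "ALL CLUSTERLOG ERROR=15"
ndb_mgm -e "ALL CLUSTERLOG INFO=15"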

How to store procedures in NDB Cluster like tables? (no replies)

hi, everyone,

It seems that I have found the reason for error code 1296 with error 240: MySQL Cluster does not make a database's stored procedures shared across multiple SQL nodes and multiple users.

I have also checked MySQL's official documentation on the CREATE PROCEDURE statement syntax, and it seems there is no storage-engine clause that makes a procedure publicly shared the way tables are.

So how can a procedure be stored in NDB Cluster, the way tables are stored in the NDB engine?

Is the only way to share a procedure to make the MySQL user accounts themselves shared across multiple SQL nodes?

Bye.

Cannot add unique constraint with fk column (3 replies)

Hi,

I am trying to get NDB working with OpenStack and found that the following code does not work:

==========

CREATE TABLE networks (
tenant_id VARCHAR(255),
id VARCHAR(36) NOT NULL,
name VARCHAR(255),
status VARCHAR(16),
admin_state_up BOOL,
shared BOOL,
PRIMARY KEY (id)
);


CREATE TABLE ports (
tenant_id VARCHAR(255),
id VARCHAR(36) NOT NULL,
name VARCHAR(255),
network_id VARCHAR(36) NOT NULL,
mac_address VARCHAR(32) NOT NULL,
admin_state_up BOOL NOT NULL,
status VARCHAR(16) NOT NULL,
device_id VARCHAR(255) NOT NULL,
device_owner VARCHAR(255) NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(network_id) REFERENCES networks (id)
);


ALTER TABLE ports ADD CONSTRAINT uniq_ports0network_id0mac_address UNIQUE (network_id, mac_address);

============

This gives me ERROR 1296 (HY000): Got error 4243 'Index not found' from NDBCLUSTER

I have played a bit with the code, and if I add KEY (network_id) to the 'ports' table creation everything works (see the working variant below). I'm just puzzled whether this is a bug or specified behaviour. If it is the latter, I will have to convince the OpenStack developers to add the necessary key.
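
For reference, this is the variant of the ports table that works for me, identical except for the explicit key:

CREATE TABLE ports (
tenant_id VARCHAR(255),
id VARCHAR(36) NOT NULL,
name VARCHAR(255),
network_id VARCHAR(36) NOT NULL,
mac_address VARCHAR(32) NOT NULL,
admin_state_up BOOL NOT NULL,
status VARCHAR(16) NOT NULL,
device_id VARCHAR(255) NOT NULL,
device_owner VARCHAR(255) NOT NULL,
PRIMARY KEY (id),
KEY (network_id),
FOREIGN KEY(network_id) REFERENCES networks (id)
);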

Thanks!

Crontab fails to start backup (no replies)

Hi,

I run the MySQL Cluster backup from crontab.
This is the error message:
-----------------------------------------------
[MgmtSrvr] ALERT -- Node 3: Backup request from 2 failed to start. Error: 1302
-----------------------------------------------

This is the backup script, backup.sh:
-----------------------------------------------
#!/bin/sh

DATETIME=`date +%Y%m%d`
BASEDIR=/PRD_SQL/mysql_sql
BKDIR=/backup
BKCFG=${BKDIR}/config
BKLOG=${BKDIR}/log/mysql_full_cluster_${DATETIME}.log

find ${BKDIR}/log -name "*.log" -mtime +30 -exec rm {} \;


if [ -d ${BKDIR}/log ]
then
echo "Exist ${BKDIR}/log"
else
mkdir -p ${BKDIR}/log
fi

#################### MySQL config Backup ####################
echo " " > ${BKLOG}
echo "MySQL Cluster config backup start : `date +%H:%M:%S` " >> ${BKLOG}
cp /PRD_SQL/mysql_sql/my.cnf ${BKCFG}/my.cnf.${DATETIME}
cp /PRD_SQL/mysql_sql/config.ini ${BKCFG}/config.ini.${DATETIME}

#################### MySQL Cluster Backup ####################
echo " " >> ${BKLOG}
echo "MySQL Cluster data backup start : `date +%H:%M:%S` " >> ${BKLOG}
echo " " >> ${BKLOG}
echo "MySQL Binlog info" >> ${BKLOG}
echo "=================================================================" >> ${BKLOG}
cd $BASEDIR
${BASEDIR}/bin/mysql -uroot -pdlqehd11 -e "show master status" >> ${BKLOG}
echo "=================================================================" >> ${BKLOG}
echo " ">> ${BKLOG}

${BASEDIR}/bin/ndb_mgm -e show >> ${BKLOG}
echo " ">> ${BKLOG}
${BASEDIR}/bin/ndb_mgm -e "start backup ${DATETIME}2" >> ${BKLOG}

ssh mysql@xxx.xxx.xxx.66 "cd /backup/BACKUP;tar -czvf /backup/BACKUP/BACKUP-${DATETIME}2.tgz BACKUP-${DATETIME}2"
ssh mysql@xxx.xxx.xxx.66 "cd /backup/BACKUP;rm -rf BACKUP-${DATETIME}2"
ssh mysql@xxx.xxx.xxx.77 "cd /backup/BACKUP;tar -czvf /backup/BACKUP/BACKUP-${DATETIME}2.tgz BACKUP-${DATETIME}2"
ssh mysql@xxx.xxx.xxx.77 "cd /backup/BACKUP;rm -rf BACKUP-${DATETIME}2"

echo "MySQL Cluster backup end : `date +%H:%M:%S` " >> $BKLOG
---------------------------------------------------------------

The MySQL Cluster consists of 2 mgmd, 2 mysqld and 2 ndbd nodes.
1. When I execute the script manually, there is no error message:
./backup.sh
No error

2. ./bin/ndb_mgm -e "start backup 201505"
No error

3. When the backup is executed from crontab, there is this error message:
---------------------------------------------------------------------------
[MgmtSrvr] ALERT -- Node 3: Backup request from 2 failed to start. Error: 1302
---------------------------------------------------------------------------

Is there a bug with MySQL Cluster's START BACKUP when run from crontab?
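
What I plan to try next, on the guess (not confirmed) that the cron environment differs from my interactive shell, is capturing stderr and setting an explicit PATH in the crontab entry (the schedule shown is just an example):

PATH=/PRD_SQL/mysql_sql/bin:/usr/bin:/bin
0 2 * * * /backup/backup.sh >> /backup/log/cron_backup.log 2>&1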


Please help me.

Thanks