Channel: MySQL Forums - NDB clusters
Viewing all 1560 articles

2 active-active Clusters without split brain ? (no replies)

Hello,

I would like to set up MySQL Cluster for the first time, and I am very interested in the geo-replication features.

If I have two clusters in different data centers (DCs), can the clusters handle a loss of network connectivity between the DCs so that there is no split brain?

I know I should run at least two replication channels over two different network paths, but if I still lose all connectivity between the clusters, I would like one of them to be shut down.

Even though I can't run an entire third cluster in another DC, I can have some hosts in a third DC dedicated to this monitoring, acting as an external arbitrator between the two clusters.

So, are there standard tools to do that?

Thank you,
Regards,
Grégoire Leroy

Triggers and Stored Procedures Only on a Single Node? (no replies)

Am I missing something? I have set up a cluster for testing (7.4.4) (with the intention of moving our master > slave setup) with the aim to provide us the ability to write scale our system, and remove the reliance on a single Master (amongst other reasons).

However, my reading of the documentation seems to suggest that if I create a trigger on one SQL node, and that node subsequently fails, then when I connect to another SQL node my trigger won't be available, and my data will become incorrect.

Is this right? Surely that would mean the HA claims of MySQL Cluster are a myth, as I could only ever have a single SQL node if I want to ensure my triggers are available.
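As background (a sketch, not the poster's setup): trigger and stored-routine definitions are stored in each mysqld's local system tables, not in NDB, so they do have to be created separately on every SQL node. Something like:

```sql
-- Hypothetical trigger: t, t_log, and the columns are made-up names.
-- Run this CREATE on EVERY SQL node, because trigger definitions live
-- in the local mysqld, not in the NDB data nodes.
CREATE TRIGGER t_audit AFTER INSERT ON t
FOR EACH ROW INSERT INTO t_log (t_id, changed_at) VALUES (NEW.id, NOW());
```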

MySQL Cluster 7.4: 200Mil Queries Per Second (no replies)

Mysqlcluster disk table still occupy a lot of memory (no replies)

For MySQL Cluster disk data tables, the official documentation says that each record occupies 8 bytes of memory to point to the record's location on disk, but in practical use I found it actually takes about 40 bytes per record, which means disk tables still occupy a lot of memory. I would like to ask: is the problem on my side (a configuration error), or is the documentation wrong?

Number of machines required for cluster redundancy (1 reply)

Six machines are suggested for full redundancy, but can't 2 machines be set up for full redundancy (a data, SQL, and management node on each machine)?

Could you not have full redundancy with this type of setup or would it cause a major issue if one of the machines went down?

Also, would having only 2 machines with this type of setup make growing a cluster very inconvenient?

Thanks!
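For reference, a minimal two-host layout is possible on paper; the sketch below assumes hypothetical addresses 192.0.2.1/192.0.2.2 and one management, data, and SQL node per host. Note that with only two machines, losing one can still take out half the management/arbitration layer, which is part of why more machines are usually recommended.

```ini
# Hypothetical two-host config.ini sketch (addresses are placeholders).
[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
HostName=192.0.2.1
[ndb_mgmd]
HostName=192.0.2.2

[ndbd]
HostName=192.0.2.1
[ndbd]
HostName=192.0.2.2

[mysqld]
HostName=192.0.2.1
[mysqld]
HostName=192.0.2.2
```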

InnoDB to NDBCLUSTER Engine (no replies)

Hi everyone,

I have finally managed to configure my MySQL Cluster more or less correctly.

Now I would like to test something closer to production.

I will restore some InnoDB databases, and I would like to convert their tables to NDBCLUSTER so that they work with my cluster.

I obviously tried
"ALTER TABLE table ENGINE = NDBCLUSTER", but this works one table at a time, and I did not really want to spend my time on that.

Is there a way to change all the tables in a database at once?

I also tried
"SELECT CONCAT (' ALTER TABLE ',table_name ' ENGINE = NDBCLUSTER;')
FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema IN ('db1','db2');"

But it does not do the conversion, even though it correctly returns
"ALTER TABLE table1 ENGINE = NDBCLUSTER"
"ALTER TABLE table2 ENGINE = NDBCLUSTER"
as output.
Can you help me?

Best Regards,
Florian
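Two things stand out in the query above, for anyone hitting the same wall: the CONCAT call is missing a comma after table_name, and a SELECT only generates the ALTER statements as strings; they still have to be executed separately (for example by saving them to a file and running SOURCE on it). A corrected sketch:

```sql
-- Generates (but does not execute) one ALTER per table; note the comma
-- after table_name, and the schema qualifier so the statements can be
-- run from any default database.
SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=NDBCLUSTER;')
FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema IN ('db1','db2');
```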

GCP_SAVE lag 60 seconds (no max lag) (no replies)

Hi,

What does this mean: GCP_SAVE lag .. seconds (no max lag)?
How can I fix it?
Why does it happen?


#cat ndb_1_cluster.log | grep WARNING
2015-03-09 04:31:12 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 60 seconds (no max lag)
2015-03-09 04:32:15 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 120 seconds (no max lag)
2015-03-09 04:33:17 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 180 seconds (no max lag)
2015-03-09 04:34:20 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 240 seconds (no max lag)
2015-03-09 04:35:23 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 300 seconds (no max lag)
2015-03-09 04:35:54 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_COMMIT lag 0 seconds (no max lag)
2015-03-09 04:36:26 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 360 seconds (no max lag)
2015-03-09 04:37:07 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_COMMIT lag 0 seconds (no max lag)
2015-03-09 04:37:28 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 420 seconds (no max lag)
2015-03-09 04:38:31 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 480 seconds (no max lag)
2015-03-09 04:39:34 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 540 seconds (no max lag)
2015-03-09 04:40:37 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 600 seconds (no max lag)
2015-03-09 04:41:40 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 660 seconds (no max lag)
2015-03-09 04:42:42 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 720 seconds (no max lag)
2015-03-09 04:43:45 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 780 seconds (no max lag)
2015-03-09 04:44:48 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 840 seconds (no max lag)
2015-03-09 04:45:51 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 900 seconds (no max lag)
2015-03-09 04:46:33 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_COMMIT lag 0 seconds (no max lag)
2015-03-09 04:46:54 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 960 seconds (no max lag)
2015-03-09 04:47:56 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 1020 seconds (no max lag)
2015-03-09 04:48:59 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 1080 seconds (no max lag)
2015-03-09 04:50:02 [MgmtSrvr] WARNING -- Node 3: GCP Monitor: GCP_SAVE lag 1140 seconds (no max lag)

ERROR 1114 (HY000): The table '#sql-414_3d' is full (8 replies)

Hi all,
I am trying to set up a MySQL Cluster, and I think I am nearly done.

In order to test it in a "more realistic" way, I decided to dump an InnoDB database and restore it on one of my SQL nodes.

Once that was done, I ran this query:
SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=NDBCLUSTER;')
FROM information_schema.TABLES
WHERE 1=1
    AND engine = 'InnoDB'
    AND table_schema NOT IN ('information_schema', 'mysql', 'performance_schema');
to generate the ALTER statements for all tables that are not yet NDBCLUSTER.
I then just copy-pasted the result into my MySQL console.

Everything seemed to go fine until errors started appearing.
I re-ran the query, saw that there were two tables left, copied the results, and got this error:
ERROR 1114 (HY000): The table '#sql-414_3d' is full

Followed by this on my ndb_mgm:
Node 3: Data usage increased to 81%(2075 32K pages of total 2560)
Node 2: Data usage increased to 81%(2085 32K pages of total 2560)
Node 3: Data usage increased to 90%(2325 32K pages of total 2560)
Node 2: Data usage increased to 91%(2334 32K pages of total 2560)
Node 3: Data usage decreased to 58%(1503 32K pages of total 2560)
Node 2: Data usage decreased to 58%(1503 32K pages of total 2560)

When I REPORT ALL MEMORY USAGE I get this:
Node 2: Data usage is 58%(1503 32K pages of total 2560)
Node 2: Index usage is 28%(676 8K pages of total 2336)
Node 3: Data usage is 58%(2085 32K pages of total 2560)
Node 3: Index usage is 28%(2085 8K pages of total 2336)

In my config.ini I only have these options:
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
My data nodes have 1 GB of RAM (I don't really understand how to use all the options in config.ini, so it's probably badly configured).
Can you help me?

Thank you!
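For what it's worth, the ndb_mgm output above maps directly to the config: DataMemory=80M is exactly 2560 pages of 32 KB, and the restore fills it. A sketch of larger values for 1 GB data nodes (illustrative numbers, not a tuning recommendation):

```ini
[ndbd default]
NoOfReplicas=2
# Illustrative sizes for ~1 GB hosts; leave headroom for the OS and
# the ndbd process itself.
DataMemory=512M
IndexMemory=64M
```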


ndb_slave_conflict_role pass (1 reply)

Hello.
There is no information about the values of this parameter (ndb_slave_conflict_role = PASS).

Documentation says:
>For more information, see Section 18.6.11, “MySQL Cluster Replication Conflict Resolution”.

OK. I tried searching for "PASS": nothing. Then I searched for "ndb_slave_conflict_role_pass", and there is only:
>The NDB$EPOCH2() function, added in MySQL Cluster NDB 7.4.2, is similar to NDB$EPOCH(), except that NDB$EPOCH2() provides for delete-delete handling with a circular replication (“master-master”) topology. In this scenario, primary and secondary roles are assigned to the two masters by setting the ndb_slave_conflict_role system variable to the appropriate value on each master (usually one each of PRIMARY, SECONDARY). When this is done, modifications made by the secondary are reflected by the primary back to the secondary which then conditionally applies them.

Here http://forums.mysql.com/read.php?3,621084,621084 I can read this:
>(PASS enables a passthrough state in which the effects of any conflict resolution function are ignored.) This can be useful when it is necessary to fail over from the MySQL Cluster acting as the primary.

I can't understand: what is useful about this in case of a failure? What will the behaviour be on failure with the different values? When and where is this variable used/checked by the cluster?

Thanks.
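As a sketch of how PASS might be used during a failover (the variable and role names are documented; the exact sequence here is an assumption):

```sql
-- While only one cluster is taking writes, disable conflict detection
-- by passing all replicated changes through unchecked...
SET GLOBAL ndb_slave_conflict_role = 'PASS';

-- ...then restore the normal role once the other cluster is back.
SET GLOBAL ndb_slave_conflict_role = 'PRIMARY';
```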

Mysql NDB Cluster (1 reply)

Hi Every One,

I am using MySQL as the database for my application. It was good and working fine. Recently we implemented an NDB cluster for my database, and from then on I have been getting database-related issues in my application, every time with an error message like "Lock wait timeout exceeded; restart transaction". Currently my database parameters are set as below.

innodb_lock_wait_timeout=50
ndb_wait_connected=30
ndb_wait_setup=30
table_lock_wait_timeout=30
wait_timeout=28800

Kindly tell me which parameter I need to change, and what would be a feasible value, to make our application run more smoothly.

Thanks & Regards
Srinivasa Rao Pujari
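One thing worth checking (an assumption about the setup, not a diagnosis): if the tables were converted to NDB, innodb_lock_wait_timeout no longer governs them; lock waits on NDB tables are bounded by TransactionDeadlockDetectionTimeout in config.ini instead. A quick way to see which engine each table actually uses:

```sql
-- Tables still on InnoDB are governed by innodb_lock_wait_timeout;
-- NDB tables time out via TransactionDeadlockDetectionTimeout.
SELECT table_schema, table_name, engine
FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema NOT IN ('information_schema','mysql','performance_schema');
```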

Help for explication backup and storage (no replies)

Hi all,

Well, I'm currently working on backups of MySQL Cluster.

However, I'm pretty lost and I'd like some certainty from you.

To take a backup of the cluster, you use the "START BACKUP" command.
To restore it, you must use the "ndb_restore" program after emptying the data nodes.
I know that "START BACKUP" creates three files:
CTL
LOG
DATA

But I'd like to know: what do they contain?
Also, despite using MySQL Cluster, I still have the database/table files that were created on my MySQL nodes, so why is MySQL Cluster said to store everything on the data nodes in RAM?

In addition to backing up MySQL Cluster with "START BACKUP", should I also back up the MySQL server with the "mysqldump" command, and restore both together on a live cluster?

Thank you for your help and answer!
Best Regards,

Florian

Cluster Configuration (1 reply)

I want to have a cluster with 4 management nodes and 4 data nodes. For starters I am using 2 management nodes and 2 data nodes. I am on a Windows Server 2008 R2 Enterprise SP1. The config.ini file on the two management nodes looks like this:

[ndbd default]
DataMemory=2000M
IndexMemory=1000M
noofreplicas=2
datadir=e:\MySQL_Cluster\My_Cluster\data\

[ndb_mgmd default]
datadir=e:\MySQL_Cluster\My_Cluster\data\

[ndb_mgmd]
NodeId=1
hostname=15.50.0.130

[ndb_mgmd]
hostname=15.50.5.163
NodeId=2

[ndbd]
NodeId=3
hostname=15.50.5.116

[ndbd]
NodeId=4
hostname=15.50.4.155

I can start up the two management nodes and they see each other; however, when I go to start the first data node I get this error:

Failed to allocate nodeid, error: 'Error: Could not alloc node id at 15.50.0.130 port 1186: Connection done from wrong host ip 15.50.0.116.'.

The first IP is that of the first of the two management nodes. The second IP is that of this data node. It has the same config.ini file as the management nodes and this my.1.cnf:

[mysqld]
ndbcluster

[mysql_cluster]
ndb-connectstring="nodeid=1,nodeid=2"

The firewall is turned off on all servers.

I'm sure there is more information you'll need to help so just ask.

Thanks!
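One detail that may be relevant (a guess from the snippet, not a confirmed diagnosis): ndb-connectstring normally lists the management servers' host addresses; a `nodeid=N` entry in a connect string requests a node ID for the connecting process itself, so "nodeid=1,nodeid=2" would not point the data node at either management server. A sketch of the usual form for this layout:

```ini
[mysql_cluster]
# Point at the management servers by host address; management node IDs
# are assigned in config.ini, not in the connect string.
ndb-connectstring=15.50.0.130,15.50.5.163
```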

risen mysql API load after Cluster Update (no replies)

We recently updated from 7.1.23 to 7.3.7.
This came with an update from MySQL 5.1.x to 5.6.x.

Since then, we have seen about 20% higher load on the servers where our applications are installed alongside mysqld (graph: https://www.dropbox.com/s/hzn4vzkdqg6vfhk/load_after_7.1.23_to_7.3.7_update.png?dl=0)

The first reason we thought of was the lack of the engine_condition_pushdown=1|2 parameter in my.ini, but we checked that the optimizer switch is correctly set to 'on', and our queries still EXPLAIN with "Using where with pushed condition".

Any other things that might have hit us?

The cluster itself seems to perform great. CPU usage there went down, but since we also moved the data nodes to newer, more numerous, and faster hardware, that was the expected behaviour.

Could the higher load on the mysqld come from having more data nodes (10 -> 12) and more LDM threads (4 -> 6) per data node?

Thanks for any hints.

Stefan

ClusterJ multi thread java programming (no replies)

Hi there,

I'm trying to develop a Java application using the ClusterJ API.
The application is multi-threaded and inserts and updates records in transactions. Threads will possibly insert, and will certainly update, the same records. At the moment I'm running into the problem that not all records end up inserted and updated. What would be the best approach to handle these requirements?
Right now my main thread constructs a SessionFactory and passes a session obtained from the factory to every thread. Every thread tries to insert and update records; when an exception occurs, a rollback is performed and the transaction is run again (because of possible duplicate inserts or deadlocks). The result of my tests is that not all records are inserted and updated.
Can anybody help me set this up properly?
Thanks in advance,
Arco

Ubuntu support (no replies)

Does the community edition of MySQL Cluster run on Ubuntu? When I went to the MySQL Cluster download page, I didn't see Ubuntu as one of the choices. There was a choice for Debian, but no explicit choice for Ubuntu.

2355: 'Failure to restore schema (Resource configuration error)' (no replies)

After rebooting the data nodes we are getting this error:

error: [ code: 1509 line: 22171 node: 2 count: 1 status: 0 key: 0 name: '' ]
2015-04-07 12:06:10 [ndbd] INFO -- Failed to restore schema during restart, error 1509.
2015-04-07 12:06:10 [ndbd] INFO -- DBDICT (Line: 4303) 0x00000000
2015-04-07 12:06:10 [ndbd] INFO -- Error handler restarting system
2015-04-07 12:06:10 [ndbd] INFO -- Error handler shutdown completed - exiting
2015-04-07 12:06:10 [ndbd] ALERT -- Angel detected too many startup failures(3), not restarting again
2015-04-07 12:06:10 [ndbd] ALERT -- Node 2: Forced node shutdown completed. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.


$cat ndb_2_error.log


Time: Tuesday 7 April 2015 - 12:06:10
Status: Permanent error, external action needed
Message: Failure to restore schema (Resource configuration error)
Error: 2355
Error data: Failed to restore schema during restart, error 1509.
Error object: DBDICT (Line: 4303) 0x00000000
Program: ndbmtd
Pid: 3530 thr: 0
Version: mysql-5.6.17 ndb-7.3.5
Trace: /home/mysql/mysqlcluster_data/2//ndb_2_trace.log.3 [t1..t5]
***EOM***

Trace Files


--------------- Signal ----------------
r.bn: 253 "NDBFS", r.proc: 2, r.sigId: 2078 gsn: 264 "FSREADREQ" prio: 0
s.bn: 247/1 "DBLQH", s.proc: 2, s.sigId: 4000951 length: 8 trace: 0 #sec: 0 fragInf: 0
UserPointer: 0
FilePointer: 13
UserReference: H'02f70002 Operation flag: H'00000000 (No sync, Format=List of pairs)
varIndex: 1
numberOfPages: 1
pageData: H'00000000, H'00001380

--------------- Signal ----------------
r.bn: 253 "NDBFS", r.proc: 2, r.sigId: 2077 gsn: 264 "FSREADREQ" prio: 0
s.bn: 247/1 "DBLQH", s.proc: 2, s.sigId: 4000950 length: 8 trace: 0 #sec: 0 fragInf: 0
UserPointer: 1
FilePointer: 12
UserReference: H'02f70002 Operation flag: H'00000000 (No sync, Format=List of pairs)
varIndex: 1
numberOfPages: 1
pageData: H'00000001, H'00001380


Configuration File in Management Node



[NDB_MGMD DEFAULT]
Portnumber=1186

[NDB_MGMD]
NodeId=49
HostName=hazelcasta
DataDir=/home/mysql/mysqlcluster_data/49/
Portnumber=1186

[NDB_MGMD]
NodeId=52
HostName=hazelcastb
DataDir=/home/mysql/mysqlcluster_data/52/
Portnumber=1186

[TCP DEFAULT]
SendBufferMemory=4M
ReceiveBufferMemory=4M

[NDBD DEFAULT]
BackupMaxWriteSize=1M
BackupDataBufferSize=16M
BackupLogBufferSize=4M
BackupMemory=20M
BackupReportFrequency=10
MemReportFrequency=30
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15

DataMemory=2000M
IndexMemory=512M
MaxNoOfTables=4096
MaxNoOfTriggers=10000
NoOfReplicas=2
StringMemory=5M
MaxNoOfSubscribers=20000

DiskPageBufferMemory=512M
SharedGlobalMemory=768M
LongMessageBuffer=32M
MaxNoOfConcurrentOperations=1100000
MaxNoOfConcurrentTransactions=900000
MaxNoOfLocalOperations=1000000

MaxNoOfConcurrentScans=400

MaxNoOfLocalScans=800
BatchSizePerLocalScan=512
FragmentLogFileSize=256M
NoOfFragmentLogFiles=25
RedoBuffer=32M
DiskIOThreadPool=16
StopOnError=false

TransactionInactiveTimeout=180000
MaxNoOfOrderedIndexes=1000
MaxNoOfUniqueHashIndexes=2000
MaxNoOfAttributes=8000
TransactionDeadlockDetectionTimeout=1800000
LcpScanProgressTimeout=600
TimeBetweenWatchdogCheck=300000
TimeBetweenWatchdogCheckInitial=300000
LockPagesInMainMemory=2

MaxBufferedEpochs=5000
RealtimeScheduler=1
ThreadConfig=ldm={cpubind=2},rep={cpubind=3},tc={cpubind=4},recv={cpubind=5},send={cpubind=6},main={cpubind=7},io={cpubind=7}

NoOfFragmentLogParts=4

MinFreePct=5
TwoPassInitialNodeRestartCopy=true
BuildIndexThreads=8
RedoOverCommitCounter=5
RedoOverCommitLimit=60

TimeBetweenEpochsTimeout=0



Please help.

Start MySQL Cluster with Last known Stable GCP (no replies)

Hi,

I have restarted my data nodes and they are not able to start. I am looking for a way to start the NDB data nodes from the last known stable state.

Can anyone help me with this?

Exception on get method clusterj (no replies)

Hi there,

I'm developing a Java application using the ClusterJ API.
The code below sometimes throws the following exception:
"com.mysql.clusterj.ClusterJDatastoreException: For field total column total valueDelegate object BigDecimal, error executing objectGetValue. Caused by java.lang.IllegalStateException:Current state = CODING_END, new state = FLUSHED"
The exception is thrown by the positionsInterface.getTotal() statement.
The code runs in multiple threads.

PositionsInterface positionsInterface = session.find(PositionsInterface.class, id);
if (positionsInterface == null) {
    // Insert
    positionsInterface = session.newInstance(PositionsInterface.class, id);
    positionsInterface.setTotal(amount);
    session.persist(positionsInterface);
} else {
    // Update
    positionsInterface.setTotal(positionsInterface.getTotal().add(amount));
    session.updatePersistent(positionsInterface);
}

Can anyone help me out with this ?

When I reboot a data node machine on a 4 machine cluster it hangs SQL nodes (no replies)

I have a cluster with four machines such as:

[ndbd(NDB)] 2 node(s)
id=1 @19.85.1.183 (mysql-5.6.23 ndb-7.4.5, Nodegroup: 0)
id=2 @19.85.1.165 (mysql-5.6.23 ndb-7.4.5, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 2 node(s)
id=49 @19.85.1.167 (mysql-5.6.23 ndb-7.4.5)
id=52 @19.85.1.184 (mysql-5.6.23 ndb-7.4.5)

[mysqld(API)] 3 node(s)
id=50 (not connected, accepting connect from 129.85.128.167)
id=55 @19.85.1.167 (mysql-5.6.23 ndb-7.4.5)
id=56 @19.85.1.184 (mysql-5.6.23 ndb-7.4.5)

If I do a reboot on either data node machine id=1 or id=2 then I cannot run any SQL queries until that machine comes up. Here are some errors:

ERROR 1297 (HY000): Got temporary error 4028 'Node failure caused abort of transaction' from NDBCLUSTER

ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

If I stop the data node first in ndb_mgm such as 1 Stop or 2 Stop and then reboot the machine then there is no issue.

Is this the correct behavior? If a data node machine crashes or someone pulls the plug on it, will the cluster be inoperable until the machine comes back online? Or am I doing something wrong?

Thanks in advance!