Channel: MySQL Forums - NDB clusters

Continuous errors when ndbmtd is used (no replies)

Hi

I was doing some tests in my lab using MySQL Cluster. I am using 2 servers, which together run 2 management nodes, 2 data nodes, and 2 SQL nodes.
When checked with SHOW in ndb_mgm, all 6 nodes are reported as connected and running.

But in the ndb_*_out.log files, I consistently see messages like these:
jbalock thr: 2 waiting for lock, contentions: 1800 spins: 21175
send lock node 6 waiting for lock, contentions: 6 spins: 2883
sendbufferpool waiting for lock, contentions: 15 spins: 1226

My test currently holds hardly any data.

Interestingly, these messages do not appear if I use ndbd instead of ndbmtd.
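For what it's worth, those lines look like ndbmtd's internal lock-contention statistics rather than hard errors: they count how often worker threads waited on locks (job buffers, the per-node send lock, the send buffer pool) that only exist in the multithreaded binary, which would explain why single-threaded ndbd never prints them. The thread count is set in config.ini; a minimal sketch of the relevant section (the value is illustrative, not a recommendation):

[ndbd default]
# Number of execution threads ndbmtd runs; more threads mean more
# inter-thread lock traffic (jbalock, send lock, sendbufferpool).
MaxNoOfExecutionThreads=2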

Does anyone have any idea why these messages appear?
Thanks

Very large tables in the cluster (3 replies)

Hi,

We're considering MySQL Cluster for a product that has 4-5 really large tables (tens of millions of rows) and many small tables.
I know that NDB stores all indexed data in memory. That seems like a big waste: of the ~50M rows in the whole table, we actively use only a few thousand, and data is added and read only at one end of the table.
So the question is whether MySQL Cluster is the right solution for this.
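For scale, a rough back-of-the-envelope sketch of what keeping such a table fully in memory costs (the 200-byte average row size is an assumption for illustration; ~25 bytes per row for the primary key hash index is the usual rule of thumb):

DataMemory:  50,000,000 rows x ~200 bytes ≈ 10 GB
IndexMemory: 50,000,000 rows x ~25 bytes  ≈ 1.25 GB

With NoOfReplicas=2 the cluster holds two copies of all of this. One mitigation worth knowing about: non-indexed columns can be placed in disk-based tablespaces (CREATE TABLESPACE ... ENGINE=NDBCLUSTER, plus STORAGE DISK on the table or columns), but indexed columns always stay in memory.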

Any advice is appreciated.

Data nodes angel. HELP (1 reply)

Hi, I'm new to SQL and am trying to set up a MySQL cluster across multiple servers. By following the quick start guide I was able to create a cluster on one machine, but now that I'm trying to do it on multiple machines (using the mysql-cluster-excerpt-5.1 guide) I'm having a few difficulties.

I have set up the management node, but when I start the data nodes they return the message:
info --Angel connected to 'management nodes ip address:1186'
info --Angel allocated nodeid: 2
error --couldn't start as daemon, error: 'failed to open logfile 'c:\mysql\bin\cluster-data\ndb_2_out.log' for write, errno: 2'

the guide says that it should say:
info -- Configuration fetched from 'localhost:1186', generation: 1
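For what it's worth, errno 2 is "No such file or directory", so a likely cause (an assumption, not something confirmed in this thread) is that the directory part of that log path does not exist yet on the data node host. Creating it before starting the node:

mkdir c:\mysql\bin\cluster-data

should let the angel process open ndb_2_out.log for writing, after which the "configuration fetched" line from the guide ought to appear.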

Can anybody please tell me what I'm doing wrong?

thanks in advance

Data node shutdown when two identical DELETE statements run in the same transaction (no replies)

Hi,

We are testing MySQL Cluster for an open-source Rails project.

We have noticed that there is a problem when 2 identical DELETE statements are executed in the same transaction (BEGIN ... COMMIT).

ERROR:
Node 11: Forced node shutdown completed. Initiated by signal 6. Caused by error 6000: 'Error OS signal received(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.


For example:

SET NAMES 'utf8' ;
SET SQL_AUTO_IS_NULL=0 ;
BEGIN ;

SELECT * FROM `roles` WHERE (`roles`.`id` = 3) ;
SELECT * FROM `users` WHERE (`users`.`id` = 2) ;
SELECT * FROM `projects` WHERE (`projects`.`id` = 1) ;
SELECT `members`.id FROM `members` WHERE (`members`.`user_id` = 2 AND `members`.project_id = 1) LIMIT 1 ;
INSERT INTO `members` (`created_on`, `project_id`, `user_id`, `mail_notification`) VALUES('2010-08-27 18:28:42', 1, 2, 0) ;
SET @MEMBERID=@@IDENTITY;
SELECT @MEMBERID;
INSERT INTO `member_roles` (`member_id`, `role_id`, `inherited_from`) VALUES(@MEMBERID, 3, NULL) ;
SET @ROLEID=@@IDENTITY;
SELECT @ROLEID;
SELECT * FROM `members` WHERE (`members`.`id` = @MEMBERID) ;
SELECT * FROM `users` WHERE (`users`.`id` = 2) ;
COMMIT ;


/* MORE SQL SELECTS.... */
BEGIN ;
/* MORE SQL SELECTS.... */
DELETE FROM `members` WHERE `id` = @MEMBERID ;
/* MORE SQL SELECTS.... */
DELETE FROM `members` WHERE `id` = @MEMBERID ;
/* !!!!!!!!!!!!!! HERE a data node goes down.... */
COMMIT ;
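Stripped of the application noise, the failing pattern appears to be just two identical deletes of the same row inside one transaction. A minimal reproduction sketch (the table definition is assumed, not taken from the thread):

CREATE TABLE members (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY) ENGINE=NDBCLUSTER;
INSERT INTO members VALUES (1);
BEGIN;
DELETE FROM members WHERE id = 1;
DELETE FROM members WHERE id = 1; -- second identical delete: the data node shutdown reportedly happens here
COMMIT;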


Any help would be really appreciated.

Thanks in advance.

Setting up MySQL Cluster (NDB) on 1 machine (no replies)

Hi, I need to set up MySQL Cluster on FreeBSD 8.1. However, initially I only need to set it up on one server (during the development stage and initial beta launch); later I will add more servers in different locations. How do I set up NDB on only one server? Please advise whether this is possible and how to do it.

(On the other hand, I thought of starting with InnoDB and later moving to NDB, but I read somewhere that changing the table type is difficult. Please advise.)
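For reference, a single-host cluster is possible by pointing every node type at localhost; a minimal config.ini sketch (replica count and layout are illustrative):

[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
HostName=localhost

[ndbd]
HostName=localhost

[ndbd]
HostName=localhost

[mysqld]
HostName=localhost

As for the InnoDB question: the engine change itself is one statement per table (ALTER TABLE t ENGINE=NDBCLUSTER;), though NDB's limits on row size, indexes, and so on still have to be met, so testing against NDB early is safer.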

Thanks!

Kevin.


Cannot restore schema (2 replies)

Please help ASAP: after some changes in config.ini, the data node is not starting anymore.

Changes I made in config.ini:

MaxNoOfAttributes=24576
MaxNoOfTriggers=14336

### Watchdog
#TimeBetweenWatchdogCheckInitial=60000

### TransactionInactiveTimeout - should be enabled in Production
TransactionInactiveTimeout=60000
### CGE 6.3 - REALTIME EXTENSIONS
#RealTimeScheduler=1
#SchedulerExecutionTimer=80
#SchedulerSpinTimer=40

Time: Sunday 29 August 2010 - 14:28:47
Status: Permanent error, external action needed
Message: Invalid configuration received from Management Server (Configuration error)
Error: 2350
Error data: Invalid file size for redo logfile, size only changable with --initial
Error object: DBLQH (Line: 14185) 0x0000000a
Program: /usr/sbin/ndbd
Pid: 5884
Version: mysql-5.1.41 ndb-7.0.13
Trace: /var/lib/mysql-cluster/ndb_4_trace.log.2
***EOM***
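The error text itself points at the remedy (a hedged sketch, since --initial erases that node's file system and it must then recover its data from the other node in its node group): restart the affected data node with

ndbd --initial

doing one node at a time and letting each rejoin fully before touching the next, so the redo log files are recreated under the new configuration.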

Data restore failed!!! (8 replies)

My 2 management servers and data nodes are running and connected. Now I'm trying to restore my backup on the management server and I get this error. Please help...

Note: the API node is not yet running... only the management and data nodes are.

[root@mgmt1 mysql-cluster]# /usr/bin/ndb_restore -c 192.168.0.75 -n 3 -m -b 32 --backup_path /var/lib/mysql-cluster/BACKUP/BACKUP-32/
Nodeid = 3
Backup Id = 32
backup path = /var/lib/mysql-cluster/BACKUP/BACKUP-32/
Opening file '/var/lib/mysql-cluster/BACKUP/BACKUP-32/BACKUP-32.3.ctl'
readDataFileHeader: Error reading header
Failed to read /var/lib/mysql-cluster/BACKUP/BACKUP-32/BACKUP-32.3.ctl


NDBT_ProgramExit: 1 - Failed
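A first sanity check (general advice, not taken from the replies): confirm that the control file actually exists where --backup_path points, is non-empty, and is readable by the user running ndb_restore:

ls -l /var/lib/mysql-cluster/BACKUP/BACKUP-32/BACKUP-32.3.ctl

"Error reading header" can also appear when the file is truncated or was copied over incompletely, since START BACKUP writes its files on each data node's own file system rather than on the management host.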

MultiThread (no replies)

I set up my config for multithreading, and the management console shows both data node 1 and data node 2 as master. Is this OK?

MaxNoOfExecutionThreads=2

/usr/sbin/ndbmtd <--- I used this for multithreading

Also, my management log keeps printing Data and Index usage reports. Is this normal?

If I don't set multithreading and instead use the command /usr/sbin/ndbd,

data node 1 alone is the master.

Is it normal to have 2 masters?
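On the usage reports (an aside based on general cluster behaviour, not an answer to the master question): the periodic Data/Index usage lines in the management log are memory usage reports, emitted whenever MemReportFrequency is set in config.ini:

[ndbd default]
# Seconds between periodic memory-usage reports in the cluster log.
MemReportFrequency=30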


Anyone please?

ndb-index-stat-enable (no replies)

Hi,

The default value for ndb-index-stat-enable is OFF, but I've got a query where the cluster uses the wrong index. With this option enabled, it uses the correct index.

Does anyone know why this option is off by default?

There are some reports of a bug when it's enabled, but without details (what kind of bug, versions affected, etc.).
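In case it helps while the option stays off, a standard per-query workaround is an index hint (plain MySQL, nothing cluster-specific; the table and index names below are placeholders):

SELECT col1 FROM t FORCE INDEX (idx_col1) WHERE col1 = 42;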

Thanks,

V.

Error restoring hot backup with ndb_restore (4 replies)

Hi all,

I'm having problems restoring a backup with the ndb_restore command. The backup was made as shown:

ndb_mgm> start backup 1
Waiting for completed, this may take several minutes
Node 3: Backup 1 started from node 2
Node 3: Backup 1 started from node 2 completed
StartGCP: 1535 StopGCP: 1538
#Records: 2053 #LogRecords: 0
Data: 50312 bytes Log: 0 bytes


To restore it, I put the cluster in single user mode, granting access to the same node ID where I will run ndb_restore.

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @10.180.243.73 (mysql-5.1.47 ndb-7.1.5, Nodegroup: 0, Master)
id=4 @10.180.243.74 (mysql-5.1.47 ndb-7.1.5, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @10.180.243.75 (mysql-5.1.47 ndb-7.1.5)
id=2 @10.180.243.151 (mysql-5.1.47 ndb-7.1.5)

[mysqld(API)] 5 node(s)
id=10 @10.180.243.149 (mysql-5.1.47 ndb-7.1.5)
id=11 @10.180.243.149 (mysql-5.1.47 ndb-7.1.5)
id=16 @10.180.243.150 (mysql-5.1.47 ndb-7.1.5)
id=17 @10.180.243.150 (mysql-5.1.47 ndb-7.1.5)
id=22 @10.180.243.151 (mysql-5.1.47 ndb-7.1.5)

ndb_mgm> enter single user mode 22
Single user mode entered
Access is granted for API node 22 only.



Afterwards, I stop the mysqld daemon on this machine, and finally I run ndb_restore.



ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @10.180.243.73 (mysql-5.1.47 ndb-7.1.5, single user mode, Nodegroup: 0, Master)
id=4 @10.180.243.74 (mysql-5.1.47 ndb-7.1.5, single user mode, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @10.180.243.75 (mysql-5.1.47 ndb-7.1.5)
id=2 @10.180.243.151 (mysql-5.1.47 ndb-7.1.5)

[mysqld(API)] 5 node(s)
id=10 @10.180.243.149 (mysql-5.1.47 ndb-7.1.5)
id=11 @10.180.243.149 (mysql-5.1.47 ndb-7.1.5)
id=16 @10.180.243.150 (mysql-5.1.47 ndb-7.1.5)
id=17 @10.180.243.150 (mysql-5.1.47 ndb-7.1.5)
id=22 (not connected, accepting connect from 10.180.243.151)



# ndb_restore -n 3 -b 1 --backup_path=/var/lib/mysql-cluster/ -m
Nodeid = 3
Backup Id = 1
backup path = /var/lib/mysql-cluster/
Opening file '/var/lib/mysql-cluster/BACKUP-1.3.ctl'
readDataFileHeader: Error reading header
Failed to read /var/lib/mysql-cluster/BACKUP-1.3.ctl


NDBT_ProgramExit: 1 - Failed
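One thing worth checking (an assumption, not something confirmed in the thread): the BACKUP-1.* files for node 3 were written on node 3's own file system (10.180.243.73), so if ndb_restore runs on a different host, /var/lib/mysql-cluster/BACKUP-1.3.ctl may be missing or partial there. Running it on the data node's host, against the directory that really holds the files, would look like this (the single-user grant would then also need to match an API slot reachable from that host):

ndb_restore -c 10.180.243.75 -n 3 -b 1 -m --backup_path=/var/lib/mysql-cluster/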



I can't find much information about this error. Any idea what's going wrong?

Thanks in advance, David.

How to know when synchronization is complete between data nodes in MySQL Cluster? (no replies)

As per the subject:

I have now created a MySQL Cluster:

mgmt node *1
sql node *1
data node A (Master) *1
data node B (Slave) *1

One day I ran into this situation:

1. data node A crashed
2. data node B became the master data node
3. writes continued to go to data node B
4. data node A was repaired and reconnected to the MySQL Cluster

So, the questions are:
how can I tell that data nodes A and B are syncing?
how can I tell when the synchronization is complete?

Are there any log files or status indicators that can be observed?
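A hedged sketch of what to watch (general cluster behaviour, not a reply): while the returning node copies its data back from its node-group peer, ndb_mgm lists it as "starting", and ALL STATUS reports which start phase it is in; once the copy is finished, the cluster log on the management host records that the node has started.

ndb_mgm> SHOW
ndb_mgm> ALL STATUS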

ndb_restore --print-data only (1 reply)

Hi! Please help, I just need to print the data to stdout.

/usr/bin/ndb_restore -b 1 n 3 --print-data --print-log --append --tab=/root/ --fields-enclosed-by="" --fields-separated-by="," --lines-terminated-by="\n"

When I try to execute this command for data node 3, I get this error:

Backup Id = 1
/usr/bin/ndb_restore: unknown variable 'fields-separated-by=,'


NDBT_ProgramExit: 2 - Wrong arguments

All I need is the data. Please tell me what's wrong with the above command. Thanks.
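For what it's worth, two things stand out (hedged, since I can't test against this backup): -n has lost its dash, and the option is spelled --fields-terminated-by, not --fields-separated-by, which is exactly what the "unknown variable" error is complaining about. Since --tab and --append write to files rather than stdout, a print-only attempt would look like this (the backup path is a placeholder):

/usr/bin/ndb_restore -b 1 -n 3 --print-data --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-1 --fields-enclosed-by="" --fields-terminated-by="," --lines-terminated-by="\n"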

How to configure a load-balanced MySQL cluster with ldirectord and Heartbeat on Linux (1 reply)

Hi,

Could anybody help me out with configuring a load-balanced MySQL cluster using ldirectord and Heartbeat in a Linux environment?

Thank You...

Relation between [MYSQLD], ndb-cluster-connection-pool and threads (3 replies)

Hi all,

If, in the config.ini file of a management node, you have a SQL node host declared with 2 API slots like this:

[MYSQLD]
id=16
HostName=192.168.1.2
[MYSQLD]
id=17
HostName=192.168.1.2

And in the /etc/my.cnf file of this SQL node you have declared "ndb-cluster-connection-pool=2", meaning this SQL node has 2 simultaneous API connections to use, does this mean that this SQL node will have 2 mysqld threads running?

I think so, but I'm not sure about this relation. Can anyone confirm?

Thanks and greetings, David.

Node failure caused abort of transaction after trying to import a "large" mysqldump file (3 replies)

Greetings!

I am trying to import a ~500 MB SQL dump file into a new installation of MySQL Cluster (7.1.5-1).

The dump file was taken from another MySQL (non-cluster) server with the following command:

mysqldump -udata -p www_database_com --opt --lock-all-tables > database.sql

I changed the engine from MyISAM to NDBCLUSTER in all the tables and then I created the database through the MySQL API service.

When I try to import the dump file into the MySQL cluster from node 7, I get the following error after a few minutes:

mysql -uroot -p www_database_com < database.sql_ndb
Enter password:

ERROR 1297 (HY000) at line 941: Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER
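One general mitigation for bulk loads into NDB (an assumption that it applies here, not a confirmed fix): temporary error 4010 during a huge import often coincides with data nodes struggling to digest very large INSERT batches, and capping the size of the extended INSERT statements mysqldump generates keeps each statement, and therefore each transaction, small:

mysqldump -udata -p www_database_com --opt --lock-all-tables --net_buffer_length=16384 > database.sql

--opt already turns on extended inserts; --net_buffer_length limits how long each generated INSERT statement may grow.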

I am using 2 servers for ndb_mgmd and mysqld and another 2 for ndbd (4 servers in total).

I have a private 100 Mbit network dedicated to MySQL Cluster traffic.

All the servers have the same configuration:

16 GB RAM
3x 500 GB SATA disks in a RAID-5 configuration
2x Quad-Core AMD Opteron(tm) Processor 2344 HE


My config.ini:

[TCP DEFAULT]
SendBufferMemory=4M
ReceiveBufferMemory=4M

[NDB_MGMD DEFAULT]
PortNumber=1186
Datadir=/var/lib/mysql-cluster/

[NDB_MGMD]
Id=1
Hostname=172.18.77.1
LogDestination=FILE:filename=ndb_1_cluster.log,maxfiles=6
ArbitrationRank=1

[NDB_MGMD]
Id=2
Hostname=172.18.77.2
LogDestination=FILE:filename=ndb_2_cluster.log,maxfiles=6
ArbitrationRank=1

[NDBD DEFAULT]
NoOfReplicas=2
Datadir=/var/lib/mysql-cluster/
FileSystemPathDD=/var/lib/mysql-cluster/
#FileSystemPathUndoFiles=/var/lib/mysql-cluster/
#FileSystemPathDataFiles=/var/lib/mysql-cluster/
DataMemory=12139M
IndexMemory=1518M
LockPagesInMainMemory=1

MaxNoOfConcurrentOperations=100000

StringMemory=25
MaxNoOfTables=8192
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512
MaxNoOfAttributes=24576
MaxNoOfTriggers=14336
DiskCheckpointSpeedInRestart=10M
FragmentLogFileSize=256M
InitFragmentLogFiles=SPARSE
NoOfFragmentLogFiles=72
RedoBuffer=48M

TimeBetweenLocalCheckpoints=20
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=100

MemReportFrequency=30
BackupReportFrequency=10

### Params for setting logging
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15

### Params for increasing Disk throughput
BackupMaxWriteSize=1M
BackupDataBufferSize=16M
BackupLogBufferSize=4M
BackupMemory=20M
#Reports indicate that ODirect=1 can cause I/O errors (OS error code 5) on some systems. You must test.
#ODirect=1

### Watchdog
TimeBetweenWatchdogCheckInitial=80000

### TransactionInactiveTimeout - should be enabled in Production
TransactionInactiveTimeout=80000

# Added because of ERROR 1297 (HY000) at line 612: Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER
TransactionDeadLockDetectionTimeOut=80000

### CGE 6.3 - REALTIME EXTENSIONS
#RealTimeScheduler=1
#SchedulerExecutionTimer=80
#SchedulerSpinTimer=40

### DISK DATA
SharedGlobalMemory=20M
DiskPageBufferMemory=64M

### Multithreading
#MaxNoOfExecutionThreads=4

### Increasing the LongMessageBuffer b/c of a bug (20090903)
LongMessageBuffer=8M

BatchSizePerLocalScan=512

[NDBD]
Id=3
Hostname=172.18.77.3


[NDBD]
Id=4
Hostname=172.18.77.4


## BELOW ARE TWO (INACTIVE) SLOTS FOR DATA NODES TO ALLOW FOR GROWTH
#[NDBD]
#Id=5
#Hostname=

#[NDBD]
#Id=6
#Hostname=


[MYSQLD DEFAULT]
BatchSize=512
#BatchByteSize=2048K
#MaxScanBatchSize=2048K

[MYSQLD]
Id=7
Hostname=172.18.77.1
[MYSQLD]
Id=8
Hostname=172.18.77.1

[MYSQLD]
Id=9
Hostname=172.18.77.2
[MYSQLD]
Id=10
Hostname=172.18.77.2

### SLOTS FOR CMON (one for each ndb_mgmd)
[MYSQLD]
Hostname=172.18.77.1
[MYSQLD]
Hostname=172.18.77.2

### SLOTS (one for each ndb_mgmd) FOR HELPER APPLICATIONS SUCH AS ndb_show_tables etc
[MYSQLD]
Hostname=172.18.77.1
[MYSQLD]
Hostname=172.18.77.2



My my.cnf:

[mysqld]
user=root
datadir=/var/lib/mysql-cluster/
pid-file=mysqld.pid
socket=/var/lib/mysql-cluster/mysql.sock
ndbcluster
ndb-connectstring="172.18.77.1:1186;172.18.77.2:1186"

#These are commented out for troubleshooting reasons
#ndb-cluster-connection-pool=1
#ndb-force-send=1
#ndb-use-exact-count=0
#ndb-extra-logging=1
#ndb-autoincrement-prefetch-sz=256
#engine-condition-pushdown=1

log-error=error.log

key_buffer = 256M
#max_allowed_packet = 16M
max_allowed_packet = 4M
sort_buffer_size = 512K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
#thread_cache_size=1024
#myisam_sort_buffer_size = 8M
myisam_sort_buffer_size = 2M
memlock
sysdate_is_now
max-connections=200
thread-cache-size=128
query-cache-type = 0
query-cache-size = 0
table-open_cache=1024
table-cache=512
lower-case-table-names=0


[MYSQL]
socket=/var/lib/mysql-cluster/mysql.sock

[client]
socket=/var/lib/mysql-cluster/mysql.sock


My my.cnf on the data nodes:

[mysqld]
ndbcluster
ndb-connectstring=172.18.77.1,172.18.77.2

[mysql-cluster]
ndb-connectstring=172.18.77.1,172.18.77.2

Now, about the logs:

------------------START------------ndb_3_out.log---------------------------------
2010-09-01 09:27:48 [ndbd] INFO -- Start phase 5 completed
2010-09-01 09:27:48 [ndbd] INFO -- Start phase 6 completed
m_active_buckets.set(0)
2010-09-01 09:27:48 [ndbd] INFO -- Start phase 7 completed
2010-09-01 09:27:48 [ndbd] INFO -- Start phase 8 completed
2010-09-01 09:27:48 [ndbd] INFO -- Start phase 9 completed
2010-09-01 09:27:48 [ndbd] INFO -- Start phase 100 completed
2010-09-01 09:27:48 [ndbd] INFO -- Start phase 101 completed
2010-09-01 09:27:48 [ndbd] INFO -- Node started
alloc_chunk(312334 16) -
tab: 51 frag: 1 increasing maxGciInLcp from 724 to 731
tab: 51 frag: 0 increasing maxGciInLcp from 743 to 752
tab: 51 frag: 1 increasing maxGciInLcp from 743 to 752
2010-09-01 09:47:56 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4200478015496 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:47:56 [ndbd] WARNING -- ACK wo/ gcp record (gci: 978/8) ref: 0fa20009 from: 00030009
2010-09-01 09:47:56 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4200478015497 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:47:56 [ndbd] WARNING -- ACK wo/ gcp record (gci: 978/9) ref: 0fa20009 from: 00030009
WARNING: timerHandlingLab now: 65637471 sent: 65637421 diff: 50
2010-09-01 09:47:56 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4204772982784 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:47:56 [ndbd] WARNING -- ACK wo/ gcp record (gci: 979/0) ref: 0fa20009 from: 00030009
2010-09-01 09:47:56 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4204772982785 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:47:56 [ndbd] WARNING -- ACK wo/ gcp record (gci: 979/1) ref: 0fa20009 from: 00030009
2010-09-01 09:47:56 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4204772982786 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:47:56 [ndbd] WARNING -- ACK wo/ gcp record (gci: 979/2) ref: 0fa20009 from: 00030009
WARNING: timerHandlingLab now: 67357839 sent: 67357789 diff: 50
--------------------END------------ndb_3_out.log---------------------------------

------------------START------------ndb_4_out.log---------------------------------
2010-09-01 09:26:11 [ndbd] INFO -- Start phase 6 completed
m_active_buckets.set(1)
2010-09-01 09:26:11 [ndbd] INFO -- Start phase 7 completed
2010-09-01 09:26:11 [ndbd] INFO -- Start phase 8 completed
2010-09-01 09:26:11 [ndbd] INFO -- Start phase 9 completed
2010-09-01 09:26:11 [ndbd] INFO -- Start phase 100 completed
2010-09-01 09:26:11 [ndbd] INFO -- Start phase 101 completed
2010-09-01 09:26:11 [ndbd] INFO -- Node started
alloc_chunk(312334 16) -
WARNING: timerHandlingLab now: 65225261 sent: 65225208 diff: 53
tab: 51 frag: 0 increasing maxGciInLcp from 743 to 752
2010-09-01 09:46:18 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4200478015496 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:46:18 [ndbd] WARNING -- ACK wo/ gcp record (gci: 978/8) ref: 0fa20009 from: 00040009
2010-09-01 09:46:18 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4200478015497 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:46:18 [ndbd] WARNING -- ACK wo/ gcp record (gci: 978/9) ref: 0fa20009 from: 00040009
2010-09-01 09:46:18 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4204772982784 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:46:18 [ndbd] WARNING -- ACK wo/ gcp record (gci: 979/0) ref: 0fa20009 from: 00040009
2010-09-01 09:46:18 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4204772982785 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:46:18 [ndbd] WARNING -- ACK wo/ gcp record (gci: 979/1) ref: 0fa20009 from: 00040009
2010-09-01 09:46:18 [ndbd] ERROR -- c_gcp_list.seize() failed: gci: 4204772982786 nodes: 0000000000000000000000000000000000000000000000000000000000000280
2010-09-01 09:46:18 [ndbd] WARNING -- ACK wo/ gcp record (gci: 979/2) ref: 0fa20009 from: 00040009
--------------------END------------ndb_4_out.log---------------------------------



-----------------------------MySQL-webmgm1.database.com.err----------------------
100901 09:32:06 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql-cluster/
100901 9:32:06 [Note] Plugin 'FEDERATED' is disabled.
100901 9:32:06 InnoDB: Started; log sequence number 0 44233
100901 9:32:06 [Note] NDB: NodeID is 7, management server '172.18.77.1:1186'
100901 9:32:07 [Note] NDB[0]: NodeID: 7, all storage nodes connected
100901 9:32:07 [Warning] NDB: server id set to zero will cause any other mysqld with bin log to log with wrong server id
100901 9:32:07 [Note] Starting Cluster Binlog Thread
100901 9:32:07 [Note] Event Scheduler: Loaded 0 events
100901 9:32:07 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_schema
100901 9:32:07 [Note] NDB Binlog: logging ./mysql/ndb_schema (UPDATED,USE_WRITE)
100901 9:32:07 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_apply_status
100901 9:32:07 [Note] NDB Binlog: logging ./mysql/ndb_apply_status (UPDATED,USE_WRITE)
2010-09-01 09:32:08 [NdbApi] INFO -- Flushing incomplete GCI:s < 190/2
2010-09-01 09:32:08 [NdbApi] INFO -- Flushing incomplete GCI:s < 190/2
100901 9:32:08 [Note] NDB Binlog: starting log at epoch 190/2
100901 9:32:08 [Note] NDB Binlog: ndb tables writable
100901 9:32:08 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.47-ndb-7.1.5-cluster-gpl' socket: '/var/lib/mysql-cluster/mysql.sock' port: 3306 MySQL Cluster Server (GPL)
100901 9:32:08 [Note] NDB Binlog: Node: 3, subscribe from node 9, Subscriber bitmask 0200
100901 9:32:08 [Note] NDB Binlog: Node: 4, subscribe from node 9, Subscriber bitmask 0200
100901 9:34:38 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/404-log
100901 9:34:59 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/alert_direct_msg
100901 9:35:03 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/alert_replies
100901 9:35:06 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/alerts-global-read-mem
100901 9:35:08 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/alerts
100901 9:35:09 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/applications-settings
100901 9:35:12 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/applications
100901 9:35:14 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/basic_user_index
100901 9:41:57 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/basic_user_info-xxx
100901 9:41:59 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/favorited-tweets
100901 9:42:03 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/favorites-tags
100901 9:42:05 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/favorites
100901 9:42:09 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/followers-cache
100901 9:44:19 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/followers
100901 9:44:20 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/followfriday_reminder
100901 9:44:25 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/group_categories
100901 9:44:26 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups-autopost-cache
100901 9:44:29 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups-autopost-keywords
100901 9:44:31 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups-members-cache
100901 9:44:33 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups-members
100901 9:44:35 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups-tags-follow
100901 9:44:36 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups-timeline-index
100901 9:44:43 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups-timeline-xxx
100901 9:44:46 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups
100901 9:44:48 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/groups_tags
100901 9:44:50 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/ignore_users
100901 9:44:51 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/invitation-requests
100901 9:44:53 [Note] NDB Binlog: CREATE TABLE Event: REPL$www_database_com/invitations
100901 9:44:54 [Note] NDB Binlog: CREATE TABLE Event:
REPL$www_database_com/log-posts-per-day
100901 9:44:56 [Note] NDB Binlog: CREATE TABLE Event:
REPL$www_database_com/log-registered-online-hourly
100901 9:45:05 [Note] NDB Binlog: CREATE TABLE Event:
REPL$www_database_com/log-registered-online-tmp
100901 9:45:07 [Note] NDB Binlog: CREATE TABLE Event:
REPL$www_database_com/log-registered-online
100901 9:45:09 [Note] NDB Binlog: CREATE TABLE Event:
REPL$www_database_com/log-users-online
100901 9:46:55 [Note] NDB Binlog: CREATE TABLE Event:
REPL$www_database_com/mem-tweet-replies
100901 9:49:02 [Note] NDB Binlog: Node: 3, down, Subscriber bitmask 00
100901 9:49:02 [Note] NDB Binlog: Node: 4, down, Subscriber bitmask 00
100901 9:49:02 [Note] NDB Binlog: cluster failure for ./mysql/ndb_schema at epoch 972/0.
100901 9:49:02 [Note] NDB Binlog: cluster failure for ./mysql/ndb_apply_status at epoch 972/0.
100901 9:49:02 [Note] Restarting Cluster Binlog
100901 9:49:33 [Note] Restarting Cluster Binlog
100901 9:49:52 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_schema
100901 9:49:52 [Note] NDB Binlog: logging ./mysql/ndb_schema (UPDATED,USE_WRITE)
100901 9:49:52 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_apply_status
100901 9:49:52 [Note] NDB Binlog: logging ./mysql/ndb_apply_status (UPDATED,USE_WRITE)
2010-09-01 09:49:52 [NdbApi] INFO -- Flushing incomplete GCI:s < 1025/0
2010-09-01 09:49:52 [NdbApi] INFO -- Flushing incomplete GCI:s < 1025/0
100901 9:49:52 [Note] NDB Binlog: starting log at epoch 1025/0
100901 9:49:52 [Note] NDB Binlog: ndb tables writable
100901 9:49:52 [Note] NDB Binlog: Node: 3, subscribe from node 9, Subscriber bitmask 0200
100901 9:49:52 [Note] NDB Binlog: Node: 4, subscribe from node 9, Subscriber bitmask 0200
--------------------------------END-----------------------------------------------

-------------------START-------ndb_2_cluster.log----------------------------------
2010-09-01 09:48:54 [MgmtSrvr] INFO -- Node 3: Local checkpoint 20 completed
2010-09-01 09:48:55 [MgmtSrvr] INFO -- Node 3: Local checkpoint 21 started. Keep GCI = 933 oldest restorable GCI
= 961
2010-09-01 09:49:02 [MgmtSrvr] WARNING -- Node 3: Node 1 missed heartbeat 2
2010-09-01 09:49:03 [MgmtSrvr] WARNING -- Node 4: Node 1 missed heartbeat 2
2010-09-01 09:49:03 [MgmtSrvr] WARNING -- Node 4: Node 7 missed heartbeat 2
2010-09-01 09:49:03 [MgmtSrvr] WARNING -- Node 3: Node 1 missed heartbeat 3
2010-09-01 09:49:04 [MgmtSrvr] ALERT -- Node 3: Node 1 Disconnected
2010-09-01 09:49:04 [MgmtSrvr] INFO -- Node 3: Communication to Node 1 closed
2010-09-01 09:49:04 [MgmtSrvr] INFO -- Node 4: Communication to Node 1 closed
2010-09-01 09:49:04 [MgmtSrvr] ALERT -- Node 4: Node 1 Disconnected
2010-09-01 09:49:04 [MgmtSrvr] WARNING -- Node 4: Node 7 missed heartbeat 3
2010-09-01 09:49:05 [MgmtSrvr] INFO -- Node 4: Communication to Node 1 opened
2010-09-01 09:49:05 [MgmtSrvr] INFO -- Node 3: Communication to Node 1 opened
2010-09-01 09:49:06 [MgmtSrvr] INFO -- Node 3: Communication to Node 7 closed
2010-09-01 09:49:06 [MgmtSrvr] WARNING -- Node 4: Node 7 missed heartbeat 4
2010-09-01 09:49:06 [MgmtSrvr] ALERT -- Node 4: Node 7 declared dead due to missed heartbeat
2010-09-01 09:49:06 [MgmtSrvr] INFO -- Node 4: Communication to Node 7 closed
2010-09-01 09:49:06 [MgmtSrvr] INFO -- Node 4: Data usage is 0%(2190 32K pages of total 388448)
2010-09-01 09:49:06 [MgmtSrvr] INFO -- Node 4: Index usage is 0%(1511 8K pages of total 194336)
2010-09-01 09:49:16 [MgmtSrvr] INFO -- Node 3: Data usage is 0%(2148 32K pages of total 388448)
2010-09-01 09:49:16 [MgmtSrvr] INFO -- Node 3: Index usage is 0%(1511 8K pages of total 194336)
2010-09-01 09:49:35 [MgmtSrvr] INFO -- Node 3: Local checkpoint 21 completed
2010-09-01 09:49:36 [MgmtSrvr] INFO -- Node 4: Data usage is 0%(2148 32K pages of total 388448)
2010-09-01 09:49:36 [MgmtSrvr] INFO -- Node 4: Index usage is 0%(1511 8K pages of total 194336)
2010-09-01 09:49:46 [MgmtSrvr] INFO -- Node 3: Data usage is 0%(2148 32K pages of total 388448)
2010-09-01 09:49:46 [MgmtSrvr] INFO -- Node 3: Index usage is 0%(1511 8K pages of total 194336)
2010-09-01 09:49:50 [MgmtSrvr] INFO -- Node 4: Node 1 Connected
2010-09-01 09:49:50 [MgmtSrvr] ALERT -- Node 4: Node 7 Disconnected
2010-09-01 09:49:50 [MgmtSrvr] INFO -- Node 4: Node 1: API mysql-5.1.47 ndb-7.1.5
2010-09-01 09:49:51 [MgmtSrvr] INFO -- Node 3: Node 1 Connected
2010-09-01 09:49:51 [MgmtSrvr] ALERT -- Node 3: Node 7 Disconnected
2010-09-01 09:49:51 [MgmtSrvr] INFO -- Node 3: Node 1: API mysql-5.1.47 ndb-7.1.5
2010-09-01 09:49:52 [MgmtSrvr] INFO -- Node 4: Communication to Node 7 opened
2010-09-01 09:49:52 [MgmtSrvr] INFO -- Node 4: Node 7 Connected
2010-09-01 09:49:52 [MgmtSrvr] INFO -- Node 4: Node 7: API mysql-5.1.47 ndb-7.1.5
2010-09-01 09:49:53 [MgmtSrvr] INFO -- Node 3: Communication to Node 7 opened
2010-09-01 09:49:53 [MgmtSrvr] INFO -- Node 3: Node 7 Connected
2010-09-01 09:49:53 [MgmtSrvr] INFO -- Node 3: Node 7: API mysql-5.1.47 ndb-7.1.5

---------------------END-------ndb_2_cluster.log----------------------------------


Sorry for the big logs and thank you in advance :)

Table creation time out (no replies)

Hi again,

Has anyone ever run into this situation?
Any ideas?

.
.
.
.
100904 21:33:19 [Note] NDB create table: waiting max 4 sec for distributing ./www_database_com/x@002dgroups@002dtimeline@002d119. epochs: (0/0,0/0,113850/1) injector proc_info: Waiting for event from ndbcluster
100904 21:33:20 [Note] NDB create table: waiting max 3 sec for distributing ./www_database_com/x@002dgroups@002dtimeline@002d119. epochs: (0/0,0/0,113851/0) injector proc_info: Waiting for event from ndbcluster
100904 21:33:21 [Note] NDB create table: waiting max 2 sec for distributing ./www_database_com/x@002dgroups@002dtimeline@002d119. epochs: (0/0,0/0,113852/0) injector proc_info: Waiting for event from ndbcluster
100904 21:33:22 [Note] NDB create table: waiting max 1 sec for distributing ./www_database_com/x@002dgroups@002dtimeline@002d119. epochs: (0/0,0/0,113853/0) injector proc_info: Waiting for event from ndbcluster
100904 21:33:23 [ERROR] NDB create table: distributing ./www_database_com/x@002dgroups@002dtimeline@002d119 timed out. Ignoring...

Thanks in advance

mysqldump with MySQLCluster (2 replies)

Hi guys,

I'm looking into making backups of my ndbcluster database. I read about the START BACKUP command for the management node, but I want to make a simple .sql backup of my tables, like mysqldump does.

Would there be any problem using it with the ndbcluster engine? I read some info about this, but it seems outdated (from 2005).
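For what it's worth (general behaviour, not a confirmed answer): mysqldump talks to mysqld over the ordinary SQL interface, so it dumps NDB tables like any other engine, e.g.:

mysqldump -u root -p mydb > mydb.sql

The usual caveat is consistency: table locks taken by one mysqld are not cluster-wide, so writes arriving through other SQL nodes during the dump can leave it inconsistent. Quiescing the application, or putting the cluster in single user mode for the duration, avoids that.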

Can anybody confirm this?

Thanks!

Juan

Datacenters fitted for MySQL Cluster (1 reply)

Does anyone here have any high-end data centers to recommend in Europe? I know it's not strictly on topic, but in order to run a MySQL cluster properly you need the appropriate infrastructure. I want to be able to use an extra private network for MySQL traffic and, of course, to have all my servers in the same subnet for fail-over reasons.


PS: Has anyone ever tried vps.net?

Thank you

Problems converting a 3-node-group setup to a 1-node-group setup (1 reply)

Hi there,

I have a backup from a live environment with 3 node groups that I need to install on a 1-node-group lab system.
I tried the following:

--
host1:/var/tmp/ # /usr/local/mysqlCluster/mysql/bin/ndb_restore -b1 -n3 -m -r --ndb-nodegroup-map='(1,0)(2,0)' BACKUP-1/
Backup Id = 1
Nodeid = 3
Analyse node group map
backup path = BACKUP-1/
Opening file 'BACKUP-1/BACKUP-1.3.ctl'
Backup version in files: ndb-6.3.11 ndb version: mysql-5.1.35 ndb-7.0.7
Stop GCP of Backup: 0
Connected to ndb!!
Create table `acctopus_te/def/acctsessions` failed: 1224: Too many fragments
Restore: Failed to restore table: `acctopus_te/def/acctsessions` ... Exiting

NDBT_ProgramExit: 1 - Failed
--

The 1224 error seems a bit strange to me, since the backup was created on a system with the same cluster version.
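One approach worth trying (hedged; I have not verified it against this backup): "Too many fragments" comes from restoring the table definitions with -m, which carry the original 3-node-group partitioning. Skipping -m, creating the tables by hand on the lab cluster so they are partitioned for one node group, and then restoring data only might get around it:

/usr/local/mysqlCluster/mysql/bin/ndb_restore -b1 -n3 -r --ndb-nodegroup-map='(1,0)(2,0)' BACKUP-1/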

Any ideas anyone?

Thanks in advance,

Gunther