Channel: MySQL Forums - NDB clusters

Problem with SET types with cluster (2 replies)

This is my table:

CREATE TABLE `PULL_promotionOutput` (
`id_promotion` mediumint(8) NOT NULL AUTO_INCREMENT,
`promotionName` varchar(70) NOT NULL,
`promotionType` set('SMS','MMS-COUPON','WAP-COUPON','E-MAIL','WEBSERVICE') NOT NULL,
PRIMARY KEY (`id_promotion`)
) ENGINE=ndbcluster


We use two engines, one for development and another one in production.
We have a problem with SET types on the cluster, because the query does not return any rows.


MyISAM TABLE
SELECT * FROM PULL_promotionOutput WHERE promotionType LIKE "%WAP-COUPON%" (Ok)
SELECT * FROM PULL_promotionOutput WHERE promotionType = "WAP-COUPON" (Ok)

ndbcluster
SELECT * FROM PULL_promotionOutput WHERE promotionType LIKE "%WAP-COUPON%" (KO)
SELECT * FROM PULL_promotionOutput WHERE promotionType = "WAP-COUPON" (Ok)
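
A workaround that may be worth trying here, sketched only and not verified against this exact schema: FIND_IN_SET() tests membership of a single SET value and avoids the string pattern match, so it may behave the same on both engines:

SELECT * FROM PULL_promotionOutput
WHERE FIND_IN_SET('WAP-COUPON', promotionType) > 0;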



thanks.

SQL node is running but not connected to cluster (1 reply)

Hi All,
I have implemented a MySQL cluster with this configuration (1 management node, 4 data nodes, and 2 SQL nodes). I am stuck on a problem where my SQL node is running but shows as not connected in the management client.
I have set it up before with the same configuration, but now it is not working. Please help me resolve this. Thanks in advance.
==== ndb_mgm -e show ====
Connected to Management Server at: 192.168.38.87:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=2 @192.168.38.84 (mysql-5.1.35 ndb-7.0.7, starting, Nodegroup: 0, Master)
id=3 @192.168.38.85 (mysql-5.1.35 ndb-7.0.7, starting, Nodegroup: 0)
id=4 @192.168.38.86 (mysql-5.1.35 ndb-7.0.7, starting, Nodegroup: 0)
id=5 @192.168.144.114 (mysql-5.1.35 ndb-7.0.7, starting, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.38.87 (mysql-5.1.35 ndb-7.0.7)

[mysqld(API)] 2 node(s)
id=6 (not connected, accepting connect from 192.168.38.87)
id=7 (not connected, accepting connect from 192.168.38.70)

===== My configuration is as follows: =====
Management node: 192.168.38.87
DataNode1: 192.168.38.84
DataNode2: 192.168.38.85
DataNode3: 192.168.38.86
DataNode4: 192.168.144.114
SQLNode1: 192.168.38.87
SQLNode2: 192.168.38.70

====== config.ini =====
#options affecting ndbd processes on all data nodes:
[ndbd default]
NoOfReplicas=2 # Number of replicas
DataMemory=1332M # ~1.3GB # How much memory to allocate for data storage
IndexMemory=300M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values.
TimeBetweenLocalCheckpoints=20
# TCP/IP options:
[tcp default]
portnumber=2202 # This is the default; however, you can use any port that is free
# for all the hosts in the cluster
# Note: It is recommended that you do not specify the port
# number at all and allow the default value to be used instead

# Management process options SQL1:
[ndb_mgmd]
hostname=192.168.38.87 # Hostname or IP address of management node
datadir=/var/lib/mysql-cluster # Directory for management node log files

# Options for data node DN1:
[ndbd]
# (one [ndbd] section per data node)
hostname=192.168.38.84 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's data files
id=2

# Options for data node DN2:
[ndbd]
hostname=192.168.38.85 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's data files
id=3


# Options for data node DN3:
[ndbd]
hostname=192.168.38.86 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's data files
id=4


# Options for data node DN4:
[ndbd]
hostname=192.168.144.114 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's data files
id=5


# SQL node options:
#Option for SQL node SQLNode1:
[mysqld]
hostname = 192.168.38.87
id=6

#Option for SQL node SQLNode2:
[mysqld]
hostname = 192.168.38.70
id=7


========= my.cnf =============
[mysqld]
datadir=/usr/local/mysql-cluster-gpl-7.0.7-linux-i686-glibc23/data
basedir=/usr/local/mysql-cluster-gpl-7.0.7-linux-i686-glibc23/
socket=/var/lib/mysql/mysql.sock
user=mysql

# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1

# To allow mysqld to connect to a MySQL Cluster management daemon, uncomment
# these lines and adjust the connectstring as needed.
ndbcluster
ndb-connectstring="nodeid=1;host=192.168.38.87:1186"
#ndb-connectstring="nodeid=1;host=localhost:1186"
server-id=6


[client]
socket=/var/lib/mysql/mysql.sock

[mysql_cluster]
ndb-connectstring=192.168.38.87

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[ndbd]
# If you are running a MySQL Cluster storage daemon (ndbd) on this machine,
# adjust its connection to the management daemon here.
# Note: ndbd init script requires this to include nodeid!
#connect-string="nodeid=2;host=192.168.38.87:1186"
#connect-string=192.168.41.17
[ndb_mgm]
# connection string for MySQL Cluster management tool
#connect-string="host=localhost:1186"
connect-string="nodeid=6;host=192.168.38.87:1186"

ERROR 1297 (HY000): Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER (1 reply)

Hi. I want to delete about 200,000 rows and get these errors:

mysql> delete FROM sent WHERE data < now() - interval '60' day;
ERROR 1297 (HY000): Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER

mysql> show warnings
-> ;
+-------+------+-------------------------------------------------------------------------------------+
| Level | Code | Message |
+-------+------+-------------------------------------------------------------------------------------+
| Error | 1297 | Got temporary error 4010 'Node failure caused abort of transaction' from NDB |
| Error | 1297 | Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER |
| Error | 1180 | Got error 4010 during COMMIT |
+-------+------+-------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)

mysql>

Also, the nodes crash and the whole cluster goes down.

What can we do to delete about 200,000 rows without node crashes?
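
One common way to keep each NDB transaction small enough is to delete in limited batches and repeat until no rows are affected; a sketch, assuming a batch size of 1000 is acceptable here:

-- repeat until the statement reports 0 rows affected
DELETE FROM sent WHERE data < NOW() - INTERVAL 60 DAY LIMIT 1000;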

Also, all the data is gone. What can we do to restore it?

ndb_mgm> all report memory;
Node 2: Data usage is 0%(22 32K pages of total 32000)
Node 2: Index usage is 0%(16 8K pages of total 38432)
Node 3: Data usage is 0%(22 32K pages of total 32000)
Node 3: Index usage is 0%(16 8K pages of total 38432)
[root@s4 ndb_3_fs]# ls
AUTOEXTEND_SIZE D1 D10 D11 D2 D8 D9 data_tr_free_1.dat data_tr_free_2.dat data_tr_free_3.dat LCP undo_tr_free_1.dat undo_tr_free_2.dat undo_tr_free_3.dat
[root@s4 ndb_3_fs]# du -hs .
13G .
[root@s4 ndb_3_fs]#

In the MySQL error log we get this:

100930 18:27:01 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:01 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:01 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:01 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:01 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:01 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:01 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:02 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:02 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:03 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:03 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'
100930 18:27:03 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './db_name/table_name.frm'


Can we recover data? Thanks.

Forced node shutdown completed. Occured during startphase 5. Caused by error 2341: 'Internal program error (2 replies)

Hi. We have a problem with one of our nodes:


In node log:
Message: System error, node killed during node restart by other node (Internal error, programming error or missing error message, please report a bug)
Error: 2303
Error data: Killed by node 2 as copyfrag failed, error: 1501
Error object: NDBCNTR (Line: 274) 0x00000002
Program: ndbd
Pid: 30112
Version: mysql-5.1.47 ndb-7.1.5
Trace: /path/to_log/ndb_2_trace.log.21
***EOM***

In NDB_MGM >
Node 2: Forced node shutdown completed. Occured during startphase 5. Caused by error 2341: 'Internal program error (failed ndbrequire)(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.

How can we fix this?
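
Since the error data says the node was killed by node 2 because the copy-fragment phase failed, one approach that is often tried, sketched here under the assumption that the rest of the cluster is up and can rebuild this node's data, is an initial node restart of the failed node only:

# on the failed data node only: wipe its local copy of the data and
# let it re-copy everything from the other node in its node group
ndbd --initial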

MySQL cluster Backup - unpredictable backup sizes (5 replies)

Hi All,

I am hoping someone can help me shed some light on the online backup process of a 2-node mysql cluster.

I have a functioning 2-node cluster serving a database of approximately 3 GB of data.
I noticed that when I create a backup by the regular means (ndb_mgm -e "START BACKUP <ID>"), the resulting "BACKUP-<ID>" directories under each node contain approximately 300 MB each.
I have ten days of backups so far, and only one of the BACKUP-<ID> directories on
one node contains 2.7 GB, while the corresponding backup directory on the second node
contains only 270 MB for that date.

My question is: should I not be expecting each node to produce backup files of roughly equal and large size (on the order of 3 GB)? NDB Cluster does not support incremental backups,
so it can't be that...

Your assistance is greatly appreciated.
Thanks!
-igor

Logs for mysqld not appearing (4 replies)

I have a very simple cluster of two data nodes, a management node and all three running mysqld. I am getting logs for the data nodes, but none from the MySQL nodes. I looked in the DataDir that I specified in my.cnf and see nothing along the lines of mysqld.log. There appears to be nothing in the DataDir except data. I looked in /var/log/mysql and found logs there, but they are for another install of MySQL on the box that is not in use. The SQL nodes being used in the cluster don't appear to be sending logs to the data directory. Where else could they be? Thanks
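
For what it's worth, mysqld typically writes its error log to host_name.err in its data directory (when started via mysqld_safe) unless log-error points somewhere else. A minimal sketch, assuming /var/lib/mysql/mysqld.log is a writable path on the SQL nodes:

[mysqld_safe]
log-error=/var/lib/mysql/mysqld.log

[mysqld]
log-error=/var/lib/mysql/mysqld.log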

How To install CMS or Apps on Cluster (6 replies)

Hello, I am checking out MySQL Cluster functions on 5 VM Red Hat servers. I configured a cluster with 1 mgmt, 2 API and 2 data nodes. Now I am trying to install WordPress through an API server on the cluster. I was successful once, when I had a mysqld service running on my API node: after the local install I ran ALTER TABLE on the data tables to change them from MyISAM to the NDB engine, and with the NDB engine they were automatically written to the NDB nodes.
I want to know how to install applications and a CMS so that they run directly against the storage nodes. I didn't find any how-tos for this via Google; there are only MySQL Cluster installation how-tos. To give you an example of what I mean: if I try to install WordPress against the cluster configuration, it asks for a database connection and normally tries to connect to localhost.
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.90.0.210 (mysql-5.1.47 ndb-7.1.5, Nodegroup: 0)
id=3 @10.90.0.211 (mysql-5.1.47 ndb-7.1.5, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.90.0.214 (mysql-5.1.47 ndb-7.1.5)

[mysqld(API)] 5 node(s)
id=4 @10.90.0.212 (mysql-5.1.47 ndb-7.1.5)
id=5 (not connected, accepting connect from 10.90.0.213)

I would be very grateful if you could give me some advice on how to handle applications and a CMS (WordPress, for example).
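
The usual pattern is to point the application at one of the SQL nodes (a running mysqld), not at the data nodes directly, and to make sure its tables use the NDB engine. A rough sketch, where the table, database, user name and password are only examples, not taken from this post:

-- convert an existing WordPress table from MyISAM to the cluster engine
ALTER TABLE wp_posts ENGINE=NDBCLUSTER;

-- let WordPress (running on another host) connect to this SQL node
GRANT ALL ON wordpress.* TO 'wpuser'@'10.90.0.%' IDENTIFIED BY 'secret';

Setting default-storage-engine=NDBCLUSTER in my.cnf on the SQL nodes avoids the per-table ALTER for new installs.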

Mysql Cluster - Solaris with load balancer (no replies)

Hi everybody, I have a cluster of 3 servers with 5 nodes working:

10.10.10.32 - ndbd and sql node
10.10.10.33 - ndbd and sql node

10.10.10.34 - ndb_mgmd

I want to set up a load balancer in Solaris for the two SQL nodes, .32 and .33.

Can somebody who has done something like this help me, please?

Thanks a lot!

MySQL Cluster / Apache 2server setup - is 2 servers enough? (3 replies)

Hi everybody,
I'm investigating the replacement of an Apache/MySQL server due to old age and am looking into the possibility of adding some redundancy.
My idea is to use 2 virtual(VMWare) servers with this configuration:

SuSE Linux Enterprise Server 10 SP3
Apache2
MySQL Cluster 7.0

ATM I'm not considering an Apache Cluster - I think we will use another failover solution for the Apache.
The docroot will be placed on an NSS Cluster Volume shared by the two servers.

My questions are these:
1) can I use the same 2 servers as management nodes?
2) (not really for this forum, but...) could Apache handle reading the docroot from an NSS Cluster?

I would really appreciate if anyone could help me out here since I don't have the time or means to experiment.
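
Regarding question 1: ndb_mgmd can run on the same two servers as the data and SQL nodes. A bare config.ini sketch for that layout, with server1/server2 as placeholder host names; note that with only two hosts, losing the host that holds the arbitrator can still shut the whole cluster down:

[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
NodeId=1
hostname=server1

[ndb_mgmd]
NodeId=2
hostname=server2

[ndbd]
hostname=server1

[ndbd]
hostname=server2

[mysqld]
hostname=server1

[mysqld]
hostname=server2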

ndb_restore command (2 replies)

Hi,
I did a cluster database backup using 'ndb_mgm -e "START BACKUP"'. I can see the following backup files:

[root@adg1 BACKUP-1]# ls -l /root/BACKUP/BACKUP-1/
total 72
-rw-r--r-- 1 root root 51228 Nov 10 14:42 BACKUP-1-0.3.Data
-rw-r--r-- 1 root root 8872 Nov 10 14:42 BACKUP-1.3.ctl
-rw-r--r-- 1 root root 52 Nov 10 14:42 BACKUP-1.3.log


Now I entered into single user mode in ndb_mgm. Then I tried to restore the database objects using 'ndb_restore' but got the following error:

[root@adg1 BACKUP-1]# ndb_restore --backup_path=/root/BACKUP/BACKUP-1
backup path = /root/BACKUP/BACKUP-1
Opening file '/root/BACKUP/BACKUP-1/BACKUP-0.0.ctl'
readDataFileHeader: Error reading header
Failed to read /root/BACKUP/BACKUP-1/BACKUP-0.0.ctl

NDBT_ProgramExit: 1 - Failed


My question is: how do I make 'ndb_restore' read my backup file 'BACKUP-1.3.ctl' instead of 'BACKUP-0.0.ctl'?
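
For reference, the control file name encodes both ids, BACKUP-<backupid>.<nodeid>.ctl, and ndb_restore is normally given both explicitly; a sketch matching the listing above (backup id 1, node id 3):

ndb_restore -b 1 -n 3 -m --backup_path=/root/BACKUP/BACKUP-1   # restore the metadata once
ndb_restore -b 1 -n 3 -r --backup_path=/root/BACKUP/BACKUP-1   # then restore the data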

Thank you in advance.

Error restoring hot backup with ndb_restore. HELP (2 replies)

Hi all,

I'm having problems restoring a backup with the ndb_restore command.

config.ini
-----------------------------------
# Mgm Nodes

[MGM DEFAULT]
PortNumber: 1186
DataDir: /usr/local/mysql/data

# First (PRIMARY) mgm node

[NDB_MGMD]
Id: 1
HostName: 192.168.0.140

# Second (BACKUP) mgm node

[NDB_MGMD]
Id: 2
HostName: 192.168.0.141

# Storage nodes
# ---------------------------------------------------------

[NDBD DEFAULT]

NoOfReplicas: 2
DataDir: /usr/local/mysql/data
FileSystemPath: /usr/local/mysql/data

[NDBD]
Id: 3
HostName: 192.168.0.145

[NDBD]
Id: 4
HostName: 192.168.0.146

[NDBD]
Id: 5
HostName: 192.168.0.147

[NDBD]
Id: 6
HostName: 192.168.0.148

# SQL Nodes
# ---------------------------------------------------------

[mysqld]
Id: 7
HostName: 192.168.0.142


[mysqld]
Id: 8
HostName: 192.168.0.143


[mysqld]
Id: 9
HostName: 192.168.0.144
# ----------------------------------------------------------
# ----------- END file -------------------------------------

my.cnf 's
# ----------- file -----------------------------------------
[mysqld]
ndbcluster

#connectstring: primary,secondary management nodes
ndb-connectstring=nodeid=X,192.168.0.140,192.168.0.141

[mysql_cluster]
ndb-connectstring=nodeid=X,192.168.0.140,192.168.0.141
# ----------------------------------------------------------
X is nodeid
# ----------- END file -------------------------------------

The backup is made as shown:

ndb_mgm> START BACKUP WAIT COMPLETED

To restore it, I put the cluster in single user mode.


ndb_mgm> enter single user mode 9
Single user mode entered
Access is granted for API node 9 only.
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=3 @192.168.0.145 (mysql-5.1.47 ndb-7.1.8, single user mode, Nodegroup: 0, Master)
id=4 @192.168.0.146 (mysql-5.1.47 ndb-7.1.8, single user mode, Nodegroup: 0)
id=5 @192.168.0.147 (mysql-5.1.47 ndb-7.1.8, single user mode, Nodegroup: 1)
id=6 @192.168.0.148 (mysql-5.1.47 ndb-7.1.8, single user mode, Nodegroup: 1)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.0.140 (mysql-5.1.47 ndb-7.1.8)
id=2 @192.168.0.141 (mysql-5.1.47 ndb-7.1.8)

[mysqld(API)] 3 node(s)
id=7 @192.168.0.142 (mysql-5.1.47 ndb-7.1.8)
id=8 @192.168.0.143 (mysql-5.1.47 ndb-7.1.8)
id=9 @192.168.0.144 (mysql-5.1.47 ndb-7.1.8)

And then in node "id=6 @192.168.0.148"


-bash-3.00# ndb_restore -m -n 6 -b 2 --backup_path=/usr/local/mysql/data/BACKUP/BACKUP-2

Nodeid = 6
Backup Id = 2
backup path = /usr/local/mysql/data/BACKUP/BACKUP-2
Opening file '/usr/local/mysql/data/BACKUP/BACKUP-2/BACKUP-2.6.ctl'
Backup version in files: ndb-6.3.11 ndb version: mysql-5.1.47 ndb-7.1.8
Stop GCP of Backup: 288
Configuration error: Error : Could not alloc node id at 192.168.0.140 port 1186: Id 6 configured as ndbd(NDB), connect attempted as mysqld(API).
Failed to initialize consumers

NDBT_ProgramExit: 1 - Failed

-bash-3.00#
I can't find much information about this error. Any idea what's going wrong?
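
The message itself says what collided: ndb_restore connects to the cluster as an API (mysqld) node, but it tried to allocate node id 6, which config.ini reserves for a data node. The -n 6 option only selects whose backup files to read; the id 6 most likely comes from the nodeid=X connectstring in this host's my.cnf. One common arrangement, sketched here, is to keep a spare [mysqld] slot with no HostName in config.ini for tools like ndb_restore (or to run the tool with a connectstring that names a free API node id):

# config.ini: a spare API slot that ndb_restore can pick up
[mysqld]
Id: 10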

Thanks

MySql Cluster 7 installation and configuration in Solaris 10 w/zones (container). (no replies)

Hi All,
I had some difficulty creating a MySQL cluster on Solaris 10 (zones).
Now that I have it working, I am sharing this “how to”. (I hope that this can help someone.)

######################################################################
#
# MySQL Cluster Installation and Configuration
# (Using Solaris Container).
#
# Copyright (c) 2010 Renato Tegon Forti
#
# Distributed under the Boost Software License, Version 1.0.
#
# Renato Tegon Forti 09/11/2010 ddmmyy | Initial Release
#
######################################################################

# Overview
# --------------------------------------------------------------------

# This doc shows an Aysso MySQL Cluster 7.1 setup in this environment:
#
#
# MySql Server Management Nodes:
#
# MySqlManagement0 (192.168.0.140)
# MySqlManagement1 (192.168.0.141)
#
# MySql Server Access Nodes:
#
# MySqlServer0 (192.168.0.142)
# MySqlServer1 (192.168.0.143)
# MySqlServer2 (192.168.0.144)
#
# MySql Server Data Nodes:
#
# MySqlData0 (192.168.0.145)
# MySqlData1 (192.168.0.146)
# MySqlData2 (192.168.0.147)
# MySqlData3 (192.168.0.148)


# Step 01 (Configure Solaris Pool)
# --------------------------------------------------------------------

# MySql Server Management Nodes

poolcfg -c 'create pset pset-mysqlmanagement-0 (uint pset.min=1; uint pset.max=1)';
poolcfg -c 'create pool pool-mysqlmanagement-0';
poolcfg -c 'associate pool pool-mysqlmanagement-0 (pset pset-mysqlmanagement-0)';
pooladm -c;

poolcfg -c 'create pset pset-mysqlmanagement-1 (uint pset.min=1; uint pset.max=1)';
poolcfg -c 'create pool pool-mysqlmanagement-1';
poolcfg -c 'associate pool pool-mysqlmanagement-1 (pset pset-mysqlmanagement-1)';
pooladm -c;

# MySql Server Access Nodes

poolcfg -c 'create pset pset-mysqlserver-0 (uint pset.min=1; uint pset.max=1)';
poolcfg -c 'create pool pool-mysqlserver-0';
poolcfg -c 'associate pool pool-mysqlserver-0 (pset pset-mysqlserver-0)';
pooladm -c;

poolcfg -c 'create pset pset-mysqlserver-1 (uint pset.min=1; uint pset.max=1)';
poolcfg -c 'create pool pool-mysqlserver-1';
poolcfg -c 'associate pool pool-mysqlserver-1 (pset pset-mysqlserver-1)';
pooladm -c;

poolcfg -c 'create pset pset-mysqlserver-2 (uint pset.min=1; uint pset.max=1)';
poolcfg -c 'create pool pool-mysqlserver-2';
poolcfg -c 'associate pool pool-mysqlserver-2 (pset pset-mysqlserver-2)';
pooladm -c;

# MySql Server Data Nodes

poolcfg -c 'create pset pset-mysqldata-0 (uint pset.min=1; uint pset.max=2)';
poolcfg -c 'create pool pool-mysqldata-0';
poolcfg -c 'associate pool pool-mysqldata-0 (pset pset-mysqldata-0)';
pooladm -c;

poolcfg -c 'create pset pset-mysqldata-1 (uint pset.min=1; uint pset.max=2)';
poolcfg -c 'create pool pool-mysqldata-1';
poolcfg -c 'associate pool pool-mysqldata-1 (pset pset-mysqldata-1)';
pooladm -c;

poolcfg -c 'create pset pset-mysqldata-2 (uint pset.min=1; uint pset.max=2)';
poolcfg -c 'create pool pool-mysqldata-2';
poolcfg -c 'associate pool pool-mysqldata-2 (pset pset-mysqldata-2)';
pooladm -c;

poolcfg -c 'create pset pset-mysqldata-3 (uint pset.min=1; uint pset.max=2)';
poolcfg -c 'create pool pool-mysqldata-3';
poolcfg -c 'associate pool pool-mysqldata-3 (pset pset-mysqldata-3)';
pooladm -c;

pooladm -s # Now Save Pool


# Step 02 (Configure Solaris Zones)
# --------------------------------------------------------------------

# MySql Server Management Nodes
# ----------------------------------------------------
# ----------------------------------------------------

# Node zn-mysqlmanagement-0
# ----------------------------------------------------

zonecfg -z zn-mysqlmanagement-0
create
set zonepath=/zone_pool/zn-mysqlmanagement-0
set autoboot=true
add net
set address=192.168.0.140
set physical=e1000g0
end
set pool=pool-mysqlmanagement-0
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqlmanagement-0/local
chmod 700 /zone_pool/zn-mysqlmanagement-0/

zonecfg -z zn-mysqlmanagement-0
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqlmanagement-0/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqlmanagement-0 install
zoneadm -z zn-mysqlmanagement-0 boot
zlogin -C zn-mysqlmanagement-0

# Node zn-mysqlmanagement-1
# ----------------------------------------------------

zonecfg -z zn-mysqlmanagement-1
create
set zonepath=/zone_pool/zn-mysqlmanagement-1
set autoboot=true
add net
set address=192.168.0.141
set physical=e1000g0
end
set pool=pool-mysqlmanagement-1
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqlmanagement-1/local
chmod 700 /zone_pool/zn-mysqlmanagement-1/

zonecfg -z zn-mysqlmanagement-1
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqlmanagement-1/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqlmanagement-1 install
zoneadm -z zn-mysqlmanagement-1 boot
zlogin -C zn-mysqlmanagement-1

# MySql Server Nodes
# ----------------------------------------------------
# ----------------------------------------------------

# Node zn-mysqlserver-0
# ----------------------------------------------------

zonecfg -z zn-mysqlserver-0
create
set zonepath=/zone_pool/zn-mysqlserver-0
set autoboot=true
add net
set address=192.168.0.142
set physical=e1000g0
end
set pool=pool-mysqlserver-0
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqlserver-0/local
chmod 700 /zone_pool/zn-mysqlserver-0/

zonecfg -z zn-mysqlserver-0
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqlserver-0/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqlserver-0 install
zoneadm -z zn-mysqlserver-0 boot
zlogin -C zn-mysqlserver-0

# Node zn-mysqlserver-1
# ----------------------------------------------------

zonecfg -z zn-mysqlserver-1
create
set zonepath=/zone_pool/zn-mysqlserver-1
set autoboot=true
add net
set address=192.168.0.143
set physical=e1000g0
end
set pool=pool-mysqlserver-1
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqlserver-1/local
chmod 700 /zone_pool/zn-mysqlserver-1/

zonecfg -z zn-mysqlserver-1
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqlserver-1/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqlserver-1 install
zoneadm -z zn-mysqlserver-1 boot
zlogin -C zn-mysqlserver-1

# Node zn-mysqlserver-2
# ----------------------------------------------------

zonecfg -z zn-mysqlserver-2
create
set zonepath=/zone_pool/zn-mysqlserver-2
set autoboot=true
add net
set address=192.168.0.144
set physical=e1000g0
end
set pool=pool-mysqlserver-2
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqlserver-2/local
chmod 700 /zone_pool/zn-mysqlserver-2/

zonecfg -z zn-mysqlserver-2
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqlserver-2/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqlserver-2 install
zoneadm -z zn-mysqlserver-2 boot
zlogin -C zn-mysqlserver-2

# MySql Server Data Nodes
# ----------------------------------------------------
# ----------------------------------------------------

# Node zn-mysqldata-0
# ----------------------------------------------------

zonecfg -z zn-mysqldata-0
create
set zonepath=/zone_pool/zn-mysqldata-0
set autoboot=true
add net
set address=192.168.0.145
set physical=e1000g0
end
set pool=pool-mysqldata-0
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqldata-0/local
chmod 700 /zone_pool/zn-mysqldata-0/

zonecfg -z zn-mysqldata-0
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqldata-0/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqldata-0 install
zoneadm -z zn-mysqldata-0 boot
zlogin -C zn-mysqldata-0

# Node zn-mysqldata-1
# ----------------------------------------------------

zonecfg -z zn-mysqldata-1
create
set zonepath=/zone_pool/zn-mysqldata-1
set autoboot=true
add net
set address=192.168.0.146
set physical=e1000g0
end
set pool=pool-mysqldata-1
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqldata-1/local
chmod 700 /zone_pool/zn-mysqldata-1/

zonecfg -z zn-mysqldata-1
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqldata-1/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqldata-1 install
zoneadm -z zn-mysqldata-1 boot
zlogin -C zn-mysqldata-1

# Node zn-mysqldata-2
# ----------------------------------------------------

zonecfg -z zn-mysqldata-2
create
set zonepath=/zone_pool/zn-mysqldata-2
set autoboot=true
add net
set address=192.168.0.147
set physical=e1000g0
end
set pool=pool-mysqldata-2
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqldata-2/local
chmod 700 /zone_pool/zn-mysqldata-2/

zonecfg -z zn-mysqldata-2
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqldata-2/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqldata-2 install
zoneadm -z zn-mysqldata-2 boot
zlogin -C zn-mysqldata-2


# Node zn-mysqldata-3
# ----------------------------------------------------

zonecfg -z zn-mysqldata-3
create
set zonepath=/zone_pool/zn-mysqldata-3
set autoboot=true
add net
set address=192.168.0.148
set physical=e1000g0
end
set pool=pool-mysqldata-3
verify
commit
exit

# Create file system to hold MySQL

mkdir -p /zone_pool/zn-mysqldata-3/local
chmod 700 /zone_pool/zn-mysqldata-3/

zonecfg -z zn-mysqldata-3
add fs
set dir=/usr/local
set special=/zone_pool/zn-mysqldata-3/local
set type=lofs
set options=[rw,nodevices]
end
commit
exit

zoneadm list -icv
zoneadm -z zn-mysqldata-3 install
zoneadm -z zn-mysqldata-3 boot
zlogin -C zn-mysqldata-3


# Step 03 (Setup basic configuration of zones)
# --------------------------------------------------------------------

# for each zone configure ->

# Telnet

chmod +w /etc/default/login
vi /etc/default/login # comment console line

# DNS

vi /etc/nsswitch.conf # add "dns" | hosts files dns
vi /etc/resolv.conf # add "nameserver <ip1> newline nameserver <ip2>"

ping aysso.net # test conf

# Set shell

usermod -s /usr/bin/bash root


# Step 04 (Prepare MySql Installation)
# --------------------------------------------------------------------

# Copy MySql install file to zones:

# mysqlmanagement-

mkdir /zone_pool/zn-mysqlmanagement-0/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqlmanagement-0/root/temp_mysql/

mkdir /zone_pool/zn-mysqlmanagement-1/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqlmanagement-1/root/temp_mysql/

# mysqlserver

mkdir /zone_pool/zn-mysqlserver-0/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqlserver-0/root/temp_mysql/

mkdir /zone_pool/zn-mysqlserver-1/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqlserver-1/root/temp_mysql/

mkdir /zone_pool/zn-mysqlserver-2/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqlserver-2/root/temp_mysql/

# mysqldata

mkdir /zone_pool/zn-mysqldata-0/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqldata-0/root/temp_mysql/

mkdir /zone_pool/zn-mysqldata-1/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqldata-1/root/temp_mysql/

mkdir /zone_pool/zn-mysqldata-2/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqldata-2/root/temp_mysql/

mkdir /zone_pool/zn-mysqldata-3/root/temp_mysql/
cp /programs/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz /zone_pool/zn-mysqldata-3/root/temp_mysql/

# Untar MySql (to all zones)

gunzip mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar.gz
tar xvf mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit.tar


# Step 05 (MySql Basic Installation)
# --------------------------------------------------------------------

# Files installation

cd /
mkdir /usr/local/mysql
cp -r /temp_mysql/mysql-cluster-gpl-7.1.8-solaris10-sparc-64bit/* /usr/local/mysql/
rm -r temp_mysql/

# User and group setup

groupadd mysql
useradd -g mysql mysql

chown -R mysql /usr/local/mysql
chgrp -R mysql /usr/local/mysql

# MySql Install

cd /usr/local/mysql
./scripts/mysql_install_db --user=mysql

# MySql Test

cd ./mysql-test ; perl mysql-test-run.pl


# Step 06 (Configure MySql Management)
# --------------------------------------------------------------------

mkdir /var/lib/mysql-cluster

cp /usr/local/mysql/support-files/config.small.ini /var/lib/mysql-cluster/config.ini


vi config.ini

# ----------- file -----------------------------------------

##########################################################
#
# Renato Tegon Forti 10/11/2010 ddmmyyyy | Initial Release
#
##########################################################
# MySql Server Management Nodes:
#
# MySqlManagement0 (192.168.0.140)
# MySqlManagement1 (192.168.0.141)
#
# MySql Server Access Nodes:
#
# MySqlServer0 (192.168.0.142)
# MySqlServer1 (192.168.0.143)
# MySqlServer2 (192.168.0.144)
#
# MySql Server Data Nodes:
#
# MySqlData0 (192.168.0.145)
# MySqlData1 (192.168.0.146)
# MySqlData2 (192.168.0.147)
# MySqlData3 (192.168.0.148)

# Mgm Nodes
# ---------------------------------------------------------

[MGM DEFAULT]
PortNumber: 1186
DataDir: /usr/local/mysql/data

# First (PRIMARY) mgm node

[NDB_MGMD]
Id: 1
HostName: 192.168.0.140

# Second (BACKUP) mgm node

[NDB_MGMD]
Id: 2
HostName: 192.168.0.141

# Storage nodes
# ---------------------------------------------------------

[NDBD DEFAULT]

NoOfReplicas: 2
DataDir: /usr/local/mysql/data
FileSystemPath: /usr/local/mysql/data

[NDBD]
Id: 3
HostName: 192.168.0.145

[NDBD]
Id: 4
HostName: 192.168.0.146

[NDBD]
Id: 5
HostName: 192.168.0.147

[NDBD]
Id: 6
HostName: 192.168.0.148

# SQL Nodes
# ---------------------------------------------------------

[mysqld]
Id: 7
HostName: 192.168.0.142


[mysqld]
Id: 8
HostName: 192.168.0.143


[mysqld]
Id: 9
HostName: 192.168.0.144

# This node is used for ndb_backup and ndb_restore
[mysqld]


# ----------------------------------------------------------
# ----------- END file -------------------------------------


# Step 07 (Configure MySql other nodes)
# --------------------------------------------------------------------

# Now, on all other servers, you place the following in /etc/my.cnf:

vi /etc/my.cnf # or gedit /etc/my.cnf

# ----------- file -----------------------------------------
[mysqld]
ndbcluster

#connectstring: primary,secondary management nodes
ndb-connectstring=nodeid=X,192.168.0.140,192.168.0.141

[mysql_cluster]
ndb-connectstring=nodeid=X,192.168.0.140,192.168.0.141
# ----------------------------------------------------------
# ----------- END file -------------------------------------

# Notice the nodeid=X in the connectstrings: make sure you put the correct node ID (as specified in the configuration file) in here.
#
# for example, if this my.cnf is for node [NDBD] Id: 5, then X will be 5
# for example, if this my.cnf is for node [mysqld] Id: 8, then X will be 8
#
# [mysql_cluster]
# ndb-connectstring=nodeid=3,192.168.0.140,192.168.0.141


# Step 08 (Testing Cluster)
# --------------------------------------------------------------------

# Starting mgmd

PATH=$PATH:/usr/local/mysql/bin;export PATH

cd /usr/local/mysql
./bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
./bin/ndb_mgm

# Starting MySql Server

PATH=$PATH:/usr/local/mysql/bin;export PATH

cd /usr/local/mysql
./bin/mysqld_safe --user=mysql &

# Starting MySql Data

PATH=$PATH:/usr/local/mysql/bin;export PATH

cd /usr/local/mysql
./bin/ndbd

# Useful Commands
# --------------------------------------------------------------------

# Solaris

init 6 # restart machine

poolstat # status

pooladm -x # remove
pooladm -e # enable
pooladm -s # save

zonename # show current zone name

######################################################################
# START CLUSTER
# --------------------------------------------------------------------

# --------------------------------------------------------------------
# Starting mgmd nodes
# --------------------------------------------------------------------


cd /usr/local/mysql
./bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
./bin/ndb_mgm


# You can start the management server using the --initial option.
#
# ./bin/ndb_mgmd --initial -f /var/lib/mysql-cluster/config.ini
#
# In this case, the global configuration file is re-read,
# any existing cache files are deleted, and the management server
# creates a new configuration cache.


# --------------------------------------------------------------------
# Starting MySql Data nodes
# --------------------------------------------------------------------

PATH=$PATH:/usr/local/mysql/bin;export PATH

cd /usr/local/mysql
./bin/ndbd

# You can start the data nodes using the --initial option.
#
# ./bin/ndbd --initial


# --------------------------------------------------------------------
# Starting MySql Server nodes
# --------------------------------------------------------------------

PATH=$PATH:/usr/local/mysql/bin;export PATH

cd /usr/local/mysql
./bin/mysqld_safe --user=mysql &

# The SQL nodes are simply started with mysqld_safe as shown above;
# the --initial option applies only to ndb_mgmd and ndbd, not to mysqld.


######################################################################
# BACKUP CLUSTER
# --------------------------------------------------------------------

PATH=$PATH:/usr/local/mysql/bin;export PATH

cd /usr/local/mysql
./bin/ndb_mgm

ndb_mgm> START BACKUP WAIT COMPLETED

######################################################################
# RESTORE CLUSTER
# --------------------------------------------------------------------

PATH=$PATH:/usr/local/mysql/bin;export PATH

cd /usr/local/mysql
./bin/ndb_mgm

ndb_mgm> enter single user mode <MYSQL SERVER NODEID>

# Then, on any one node, restore the metadata. Note: -m restores metadata, -c is the management node IP
ndb_restore -m -n 6 -b 2 --backup_path=/usr/local/mysql/data/BACKUP/BACKUP-2 -c 192.168.0.140

# And finally, for each data node, restore the data. Note: -r restores table data (use that node's id with -n)
ndb_restore -r -n 6 -b 2 --backup_path=/usr/local/mysql/data/BACKUP/BACKUP-2 -c 192.168.0.140

# -n <node id>
# -b <backup id>
# -c <management node IP>

ndb_mgm> exit single user mode

MySQL Cluster Data Nodes Problem (2 replies)

My cluster setup is:

2 data nodes on 2 different servers
The mgmt node and SQL node are on the same server.

I followed the instructions and even the config files from the MySQL web site.

Output from SHOW command of ndb_mgm:
-------------------------------------
/var/lib/mysql-cluster#> ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show;
Connected to Management Server at: 192.168.20.100:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.20.101 (mysql-5.1.47 ndb-7.1.8, Nodegroup: 0, Master)
id=3 @192.168.20.102 (mysql-5.1.47 ndb-7.1.8, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.20.100 (mysql-5.1.47 ndb-7.1.8)

[mysqld(API)] 1 node(s)
id=4 (not connected, accepting connect from 192.168.20.100)

-------------------------------------------------

I ran mysql and executed SHOW ENGINES on the 2 data nodes and the mgmt node to make sure that the ndbcluster engine is ENABLED, and indeed it is enabled on all 3 servers.

From the SQL Node, I run command SHOW ENGINE NDBCLUSTER STATUS and here is the interesting result:

--------------------------------------------------

mysql> show engine ndbcluster status;
| Type | Name | Status
| ndbcluster | connection | cluster_node_id=4, connected_host=192.168.20.100, connected_port=1186, number_of_data_nodes=2, number_of_ready_data_nodes=1, connect_count=0 |

----------------------------------------------------

The weird thing is that if I run this command again some time later, it reports "number_of_data_nodes=0, number_of_ready_data_nodes=0";
wait for a while and run the command again, and this time it reports
"number_of_data_nodes=2, number_of_ready_data_nodes=1".

While this happened, on the mgmt node I saw these lines in ndb_2/3_out.log:
--------------------------------------------------------------
jbalock thr: 0 waiting for lock, contentions: 3 spins: 1047390
--------------------------------------------------------------

I believe this has something to do with the NUMBER OF DATA NODES and NUMBER_OF_READY_DATA_NODES reported from the mgmt node.

What is the issue here? I don't believe it has anything to do with my my.cnf and/or config.ini files at all. It seems to be something with the servers themselves, but without any further messages I am clueless.
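
One low-tech way to narrow this down, as a sketch, is to poll both views side by side and match the timestamps against the cluster log on the management node:

# poll cluster status and the SQL node's view of it every 10 seconds
while true; do
  date
  ndb_mgm -e "all status"
  mysql -e "SHOW ENGINE NDBCLUSTER STATUS\G" | grep number_of
  sleep 10
done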

Evaluating the New Push Down JOINs project in MySQL Cluster (no replies)

Hello Cluster Forum
The MySQL Cluster development team recently presented their current work on the Push Down Joins (aka SPJ) project in a webinar recorded 10 days ago.

The replay is now available; it discusses how Push Down Joins are implemented, and also provides details on how those interested in JOIN performance in MySQL Cluster can download and test the code.

The development team would highly value feedback from community members on their experience with the Push Down Joins project, and whether any current limitations prevent you from using this functionality.

Please send feedback or questions via the following mailing list:
spj-feedback@sun.com

To learn more about the project, you can access the webinar replay here (note, registration is required):
http://www.mysql.com/news-and-events/on-demand-webinars/display-od-583.html

You can access the binary (Linux-only) and source here:
ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/mysql-5.1.51-ndb-7.1.9-spj-preview/

The code is also available via the SeveralNines configuration tool, with instructions on how to build it included in the webinar:
www.severalnines.com/config

Please Help!! API won't connect! (4 replies)

2 node cluster,

Node1: 10.10.10.101, NDB, MYSQLD
Node2: 10.10.10.102, NDB_MGMD, NDB, MYSQLD

Configs created with the scripts at severalnines.com (Recommended in many MySQL how-to's)

#### my.cnf #####
[MYSQLD]
user=mysql
ndbcluster
ndb-connectstring="10.10.10.101:1186"
basedir=/usr/local/mysql
datadir=/var/lib/mysql
pid-file=mysqld.pid
socket=/var/lib/mysql/mysql.sock
port=3306

ndb-cluster-connection-pool=16
ndb-force-send=1
ndb-use-exact-count=0
ndb-extra-logging=1
ndb-autoincrement-prefetch-sz=256
engine-condition-pushdown=1

log-err=error.log
log
log-slow-queries

key_buffer = 256M
max_allowed_packet = 16M
sort_buffer_size = 512K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
#thread_cache_size=1024
myisam_sort_buffer_size = 8M
memlock
sysdate_is_now
max-connections=200
thread-cache-size=128
query-cache-type = 0
query-cache-size = 0
table-open_cache=1024
table-cache=512
lower-case-table-names=0

default-storage-engine=NDBCLUSTER

[MYSQL]
socket=/var/lib/mysql/mysql.sock

[client]
socket=/var/lib/mysql/mysql.sock

[MYSQL_CLUSTER]
ndb-connectstring="10.10.10.102:1186"

#### config.ini #####
[TCP DEFAULT]
SendBufferMemory=2M
ReceiveBufferMemory=2M

[NDB_MGMD DEFAULT]
PortNumber=1186
Datadir=/var/lib/mysql

[NDB_MGMD]
NodeId=1
Hostname=10.10.10.102
LogDestination=FILE:filename=ndb_1_cluster.log,maxsize=10000000,maxfiles=6
ArbitrationRank=0

[NDBD DEFAULT]
NoOfReplicas=2
Datadir=/var/lib/mysql
FileSystemPathDD=/var/lib/mysql
DataMemory=1453M
IndexMemory=182M
LockPagesInMainMemory=1
MaxNoOfConcurrentOperations=100000

StringMemory=25
MaxNoOfTables=4096
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512
MaxNoOfAttributes=24576
MaxNoOfTriggers=14336

FragmentLogFileSize=256M
InitFragmentLogFiles=SPARSE
NoOfFragmentLogFiles=9
RedoBuffer=32M
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=100
TimeBetweenEpochsTimeout=32000

DiskCheckpointSpeedInRestart=100M
DiskCheckpointSpeed=10M
TimeBetweenLocalCheckpoints=20

HeartbeatIntervalDbDb=15000
HeartbeatIntervalDbApi=15000

MemReportFrequency=30
BackupReportFrequency=10
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15

BackupMaxWriteSize=1M
BackupDataBufferSize=16M
BackupLogBufferSize=4M
BackupMemory=20M

TimeBetweenWatchdogCheckInitial=60000
TransactionInactiveTimeout=60000
SharedGlobalMemory=384M
DiskPageBufferMemory=256M
MaxNoOfExecutionThreads=8
LongMessageBuffer=32M
BatchSizePerLocalScan=512

[NDBD]
NodeId=2
Hostname=10.10.10.101

[NDBD]
NodeId=3
Hostname=10.10.10.102

[MYSQLD DEFAULT]
BatchSize=512

[MYSQLD]
NodeId=6
Hostname=10.10.10.101
[MYSQLD]
NodeId=7
Hostname=10.10.10.101
[MYSQLD]
NodeId=8
Hostname=10.10.10.101
[MYSQLD]
NodeId=9
Hostname=10.10.10.101
[MYSQLD]
NodeId=10
Hostname=10.10.10.101
[MYSQLD]
NodeId=11
Hostname=10.10.10.101
[MYSQLD]
NodeId=12
Hostname=10.10.10.101
[MYSQLD]
NodeId=13
Hostname=10.10.10.101
[MYSQLD]
NodeId=14
Hostname=10.10.10.101
[MYSQLD]
NodeId=15
Hostname=10.10.10.101
[MYSQLD]
NodeId=16
Hostname=10.10.10.101
[MYSQLD]
NodeId=17
Hostname=10.10.10.101
[MYSQLD]
NodeId=18
Hostname=10.10.10.101
[MYSQLD]
NodeId=19
Hostname=10.10.10.101
[MYSQLD]
NodeId=20
Hostname=10.10.10.101
[MYSQLD]
NodeId=21
Hostname=10.10.10.101

[MYSQLD]
NodeId=22
Hostname=10.10.10.102
[MYSQLD]
NodeId=23
Hostname=10.10.10.102
[MYSQLD]
NodeId=24
Hostname=10.10.10.102
[MYSQLD]
NodeId=25
Hostname=10.10.10.102
[MYSQLD]
NodeId=26
Hostname=10.10.10.102
[MYSQLD]
NodeId=27
Hostname=10.10.10.102
[MYSQLD]
NodeId=28
Hostname=10.10.10.102
[MYSQLD]
NodeId=29
Hostname=10.10.10.102
[MYSQLD]
NodeId=30
Hostname=10.10.10.102
[MYSQLD]
NodeId=31
Hostname=10.10.10.102
[MYSQLD]
NodeId=32
Hostname=10.10.10.102
[MYSQLD]
NodeId=33
Hostname=10.10.10.102
[MYSQLD]
NodeId=34
Hostname=10.10.10.102
[MYSQLD]
NodeId=35
Hostname=10.10.10.102
[MYSQLD]
NodeId=36
Hostname=10.10.10.102
[MYSQLD]
NodeId=37
Hostname=10.10.10.102

[MYSQLD]
Hostname=10.10.10.102
# ndb_show_tables etc
[MYSQLD]
Hostname=10.10.10.102

#### output: ndb_mgm show #####
Connected to Management Server at: 10.10.10.102:1186
Cluster Configuration

---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.10.10.101 (mysql-5.1.47 ndb-7.1.8, starting, Nodegroup: 0)
id=3 @10.10.10.102 (mysql-5.1.47 ndb-7.1.8, starting, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.10.10.102 (mysql-5.1.47 ndb-7.1.8)

[mysqld(API)] 34 node(s)
id=6 (not connected, accepting connect from 10.10.10.101)
id=7 (not connected, accepting connect from 10.10.10.101)
id=8 (not connected, accepting connect from 10.10.10.101)
id=9 (not connected, accepting connect from 10.10.10.101)
id=10 (not connected, accepting connect from 10.10.10.101)
id=11 (not connected, accepting connect from 10.10.10.101)
id=12 (not connected, accepting connect from 10.10.10.101)
id=13 (not connected, accepting connect from 10.10.10.101)
id=14 (not connected, accepting connect from 10.10.10.101)
id=15 (not connected, accepting connect from 10.10.10.101)
id=16 (not connected, accepting connect from 10.10.10.101)
id=17 (not connected, accepting connect from 10.10.10.101)
id=18 (not connected, accepting connect from 10.10.10.101)
id=19 (not connected, accepting connect from 10.10.10.101)
id=20 (not connected, accepting connect from 10.10.10.101)
id=21 (not connected, accepting connect from 10.10.10.101)
id=22 (not connected, accepting connect from 10.10.10.102)
id=23 (not connected, accepting connect from 10.10.10.102)
id=24 (not connected, accepting connect from 10.10.10.102)
id=25 (not connected, accepting connect from 10.10.10.102)
id=26 (not connected, accepting connect from 10.10.10.102)
id=27 (not connected, accepting connect from 10.10.10.102)
id=28 (not connected, accepting connect from 10.10.10.102)
id=29 (not connected, accepting connect from 10.10.10.102)
id=30 (not connected, accepting connect from 10.10.10.102)
id=31 (not connected, accepting connect from 10.10.10.102)
id=32 (not connected, accepting connect from 10.10.10.102)
id=33 (not connected, accepting connect from 10.10.10.102)
id=34 (not connected, accepting connect from 10.10.10.102)
id=35 (not connected, accepting connect from 10.10.10.102)
id=36 (not connected, accepting connect from 10.10.10.102)
id=37 (not connected, accepting connect from 10.10.10.102)
id=38 (not connected, accepting connect from 10.10.10.102)
id=39 (not connected, accepting connect from 10.10.10.102)
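
One detail that stands out when the two files are read together: [MYSQLD] in my.cnf has ndb-connectstring="10.10.10.101:1186", but according to config.ini the only ndb_mgmd runs on 10.10.10.102. If the mysqld processes are meant to reach that management node, the line would presumably need to read as in the sketch below (assuming 10.10.10.102 is indeed where ndb_mgmd listens, as the ndb_mgm output suggests):

[MYSQLD]
ndb-connectstring="10.10.10.102:1186"

Note also that ndb-cluster-connection-pool=16 means each mysqld needs 16 free [MYSQLD] slots for its host, which the config.ini above does provide.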

ndbmtd stops logging (!?!?!) (1 reply)

I'm running a 7.1.8 cluster (2 SQL/MGMT nodes, 2 NDB nodes) on Ubuntu 10.04.1 (64-bit).

I've found that the ndbmtd on each NDB node stops logging after some time. [I'm referring to the "ndb_x_out.log" file, not a REDO log.]

I'll stop seeing new lines being appended to the "ndb_x_out.log" file, sometimes mid-line. (I've checked: both ndbmtd processes, daemon & angel, do in fact still have it open.) Usually it's something like this:

send lock node 19 waiting for lock, contentions: 9 spins: 25922
jbalock thr: 1 waiting for lock, contentions: 200 spins: 29922
send lock node 19 waiting for lock, contentions: 10 spins: 25923
send lock node 19 waiting for lock, contentions: 11 spins: 29330
send lock node 19 waiting for lock, contentions: 12 spins: 32464
send lock node 19 waiting for lock, contentions: 13 spins: 35449
send lock node 19 waiting for lock, contentions: 14 spins: 38

(Notice that the last line should have included three more digits on the number of spins.)

There is no indication of any problem, anywhere. All queries work, ndb_mgmd is ok and reports everything up. No indication of errors in the SQL error.log or anywhere else.

It's as though ndbmtd's file write buffer has gone to never-never land :-(
This does not occur on each NDB node at the same time; sometimes one of them stops logging quite a while before the other one.

If I perform a rolling restart, what appear to be the most recently buffered log lines are written (again, more "send lock node..." stuff) but there has clearly been a gap, followed by the shutdown/start messages that I'd expect from the rolling restart. Then the logging works again for some time, and eventually I get the same problem.

The cluster configuration is nothing unusual, and is largely based on the severalnines.com/config tool.

Any ideas???? Thanks...

DROP database does not propagate after mysqld restore (1 reply)

I use MySQL Cluster 7.1.8 on CentOS 5.5 in a configuration with two ndbd, two mysqld and two ndb_mgmd daemons, and I have run into the following problem. I created a database (testdb1) when all daemons were running and belonged to one MySQL cluster. I saw this database (testdb1) on all mysqld nodes. Then I stopped one mysqld daemon, dropped the database created before (testdb1), and created a new database (testdb2) on the second mysqld. Then I started the mysqld that had been stopped and looked at the databases existing on it. I see both (testdb1 and testdb2). I did not expect to see the dropped database (testdb1). Could you explain this situation? Is this a bug or designed behaviour?
With tables I have no such problem: both statements (CREATE and DROP TABLE) propagate to the restored mysqld daemon.

Got error 4350 'Transaction already aborted' from NDBCLUSTER (1 reply)

Greetings forum,

I am using MySQL Cluster 7.1.5-1 and I am getting the following error:

Warning(29): Got error 4350 'Transaction already aborted' from NDBCLUSTER

The query that recreates the above error is the following :

INSERT IGNORE INTO `profile_cache`.`tweets_index_z` (`statusID`, `api`, `type`, `metakey`, `timestamp`, `profileID`) VALUES ('477704436584448','twitter','post','','1288948868',40275201)

The thing is that I can run this query successfully from a front-end like phpMyAdmin. Any ideas why this is happening?

Thank you in advance

Alexander Economou

JpaCLuster -> message Invalid schema object version (4 replies)

Hi,

Trying to run a simple example on JpaCluster. I keep getting this error, but I don't understand why.

Persistence.xml:
<persistence-unit name="BM" transaction-type="JTA">
<provider>
org.apache.openjpa.persistence.PersistenceProviderImpl
</provider>
<jta-data-source>jdbc/bm</jta-data-source>
<class>persist.BigCompany</class>
<class>persist.SmallEmployee</class>
<properties>
<property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema" />
<property name="openjpa.ConnectionDriverName"
value="com.mysql.jdbc.Driver"/>
<property name="openjpa.BrokerFactory" value="ndb"/>
<property name="openjpa.ndb.connectString" value="localhost:1186"/>
<!--property name="openjpa.jdbc.SynchronizeMappings"
value="buildSchema(SchemaAction='add')"/-->
<property name="openjpa.jdbc.DBDictionary" value="TableType=ndb"/>
<property name="openjpa.ConnectionRetainMode" value="transaction"/>
<property name="openjpa.ndb.database" value="bm"/>
<property name="openjpa.ndb.connectVerbose" value="0"/>
<property name="openjpa.DataCache" value="false"/>
</properties>
</persistence-unit>

Error:

[#|2010-11-22T21:37:14.724+0100|WARNING|sun-glassfish-comms-server2.0|javax.enterprise.resource.jta|_ThreadID=37;_ThreadName=httpWorkerThread-8080-1;_RequestID=9d1790a1-9a1d-4a46-b50e-7f81bbed6746;|DTX5007:Exception :
<openjpa-1.2.2-r422266:898935 nonfatal general error> org.apache.openjpa.persistence.PersistenceException: Commit failed: com.mysql.clusterj.ClusterJDatastoreException: Error in NdbJTie: returnCode -1, code 241, mysqlCode 159, status 2, classification 4, message Invalid schema object version .
at org.apache.openjpa.kernel.BrokerImpl.afterCompletion(BrokerImpl.java:1889)
at com.sun.enterprise.distributedtx.J2EETransaction.commit(J2EETransaction.java:515)
at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.commit(J2EETransactionManagerOpt.java:371)
at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:3826)
at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:3619)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1388)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1325)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:205)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:83)
at $Proxy190.createCompany(Unknown Source)
at loader.MyLoader.createACompanyWithEmployee(MyLoader.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at com.sun.enterprise.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1011)
at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:175)
at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2929)
at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4020)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:197)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:83)
at $Proxy189.createACompanyWithEmployee(Unknown Source)
at example.service.BMService.createACompanyWithEmployee(BMService.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at com.sun.enterprise.webservice.InstanceResolverImpl$1.invoke(InstanceResolverImpl.java:112)
at com.sun.xml.ws.server.InvokerTube$2.invoke(InvokerTube.java:146)
at com.sun.xml.ws.server.sei.EndpointMethodHandler.invoke(EndpointMethodHandler.java:257)
at com.sun.xml.ws.server.sei.SEIInvokerTube.processRequest(SEIInvokerTube.java:93)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:595)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:554)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436)
at com.sun.xml.ws.api.pipe.helper.AbstractTubeImpl.process(AbstractTubeImpl.java:106)
at com.sun.enterprise.webservice.MonitoringPipe.process(MonitoringPipe.java:147)
at com.sun.xml.ws.api.pipe.helper.PipeAdapter.processRequest(PipeAdapter.java:115)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:595)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:554)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436)
at com.sun.xml.ws.api.pipe.helper.AbstractTubeImpl.process(AbstractTubeImpl.java:106)
at com.sun.enterprise.webservice.CommonServerSecurityPipe.processRequest(CommonServerSecurityPipe.java:222)
at com.sun.enterprise.webservice.CommonServerSecurityPipe.process(CommonServerSecurityPipe.java:133)
at com.sun.xml.ws.api.pipe.helper.PipeAdapter.processRequest(PipeAdapter.java:115)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:595)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:554)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436)
at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:243)
at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:444)
at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244)
at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:135)
at com.sun.enterprise.webservice.JAXWSServlet.doPost(JAXWSServlet.java:177)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:754)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:427)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:315)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:287)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:218)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:94)
at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:98)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:222)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:587)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1093)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:166)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:587)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1093)
at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:291)
at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.invokeAdapter(DefaultProcessorTask.java:666)
at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:597)
at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:872)
at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341)
at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263)
at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214)
at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:264)
at com.sun.enterprise.web.connector.grizzly.WorkerThreadImpl.run(WorkerThreadImpl.java:117)
Caused by: com.mysql.clusterj.ClusterJException: Commit failed: com.mysql.clusterj.ClusterJDatastoreException: Error in NdbJTie: returnCode -1, code 241, mysqlCode 159, status 2, classification 4, message Invalid schema object version .
at com.mysql.clusterj.openjpa.NdbOpenJPAStoreManager.commit(NdbOpenJPAStoreManager.java:480)
at org.apache.openjpa.kernel.DelegatingStoreManager.commit(DelegatingStoreManager.java:94)
at org.apache.openjpa.kernel.BrokerImpl.endStoreManagerTransaction(BrokerImpl.java:1327)
at org.apache.openjpa.kernel.BrokerImpl.endTransaction(BrokerImpl.java:2201)
at org.apache.openjpa.kernel.BrokerImpl.afterCompletion(BrokerImpl.java:1865)
... 83 more
|#]


I tried dropping the database, dropping the tables, etc. Every time after deployment the tables are re-created, but invoking any method fails with the above error.

Modify file config.ini (3 replies)

Hi, can I change the config.ini file to modify a value such as DataMemory, and what procedure should I run for the changes to be applied to the cluster?
Environment:
Mysql cluster 7.1.8
1 Management node
2 db node
8 mysqld node

Thanks
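
DataMemory is a data node parameter, so the usual procedure, sketched here for MySQL Cluster 7.1 and assuming node id 1 is the management node and ids 2 and 3 are the data nodes (the config.ini path is only an example), is to edit config.ini and then do a rolling restart:

# 1. on the management node, after editing config.ini
ndb_mgm -e "1 stop"
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --reload   # --reload re-reads config.ini into the config cache

# 2. restart the data nodes one at a time, waiting for each to reach "started"
ndb_mgm -e "2 restart"
ndb_mgm -e "all status"
ndb_mgm -e "3 restart"

# 3. finally restart each of the 8 mysqld (SQL) nodes in turn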