Channel: MySQL Forums - NDB clusters

Problems using disk data tables (6 replies)

Hi,
We recently installed MySQL Cluster but we have a problem. Our database is 8 GB, so we need to configure disk data tables.

In config.ini(ndb_mgmd config) we put:
InitialLogFileGroup = name=lg_1; undo_buffer_size=200M; undo1.log=500M; undo2.log=500M
InitialTablespace = name=ts_1; extent_size=1M; data1.dat=5G; data2.dat=10G


In "ndb_mgm -e report all memory" we get :
ndb_mgm> all report memory;
Connected to Management Server at: 193.230.184.214:1186
Node 2: Data usage is 49%(4733 32K pages of total 9600)
Node 2: Index usage is 3%(1415 8K pages of total 38432)
Node 3: Data usage is 100%(4800 32K pages of total 4800)
Node 3: Index usage is 7%(1415 8K pages of total 19232)

ndb_mgm>


So data usage on node 3 is full, while node 2 is only 49% used.

How can we increase the storage of the database to 15 GB?
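(For reference, a hedged sketch of growing the on-disk storage: additional data files can be added to the tablespace at runtime. The file name and size below are illustrative. Note also that the "Data usage" figures in the report above count DataMemory pages, which are sized per data node by the DataMemory parameter in config.ini, separately from the disk tablespace.)

-- add another data file to the existing tablespace; name/size illustrative
ALTER TABLESPACE ts_1
    ADD DATAFILE 'data3.dat'
    INITIAL_SIZE 5G
    ENGINE NDBCLUSTER;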

Thanks very much.

Disk Storage Issue - No effect on memory consumption? (4 replies)

Hello,

I have set up a 2-node NDB Cluster. In order to keep RAM consumption low, we decided to place a couple of tables on disk, using "STORAGE DISK".

When I alter a table with "ALTER TABLE table1 TABLESPACE table1 STORAGE DISK ENGINE NDBCLUSTER", I can see on the management node that data memory increases just as it does for the same ALTER without "...TABLESPACE table1 STORAGE DISK...".
That index memory increases is OK, because indexes are stored completely in RAM. But data memory shouldn't increase if I use "STORAGE DISK", am I right? That is what I created the tablespaces with their associated data files for, I thought.

So it looks like "STORAGE DISK" has no effect! Can anybody help me with this issue?
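(A minimal SQL sketch of the disk-data objects involved here, with illustrative names and sizes. One relevant detail: any column that is part of an index is always kept in DataMemory even with STORAGE DISK, so some data memory growth on such an ALTER is expected:)

CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo1.log'
    INITIAL_SIZE 500M
    UNDO_BUFFER_SIZE 200M
    ENGINE NDBCLUSTER;

CREATE TABLESPACE ts_1
    ADD DATAFILE 'data1.dat'
    USE LOGFILE GROUP lg_1
    INITIAL_SIZE 1G
    ENGINE NDBCLUSTER;

-- only the non-indexed columns of table1 move into the tablespace on disk
ALTER TABLE table1 TABLESPACE ts_1 STORAGE DISK ENGINE NDBCLUSTER;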

Thanks a lot!

Table mysql.ndb_binlog_index contains no data (no replies)

Hi,

I set up a new MySQL Cluster (v7.1.5) with 4 data nodes, 2 SQL nodes and 2 MGM nodes on RHEL 5.5 (64-bit). The cluster itself works fine. Now I am trying point-in-time recovery using MySQL Cluster replication. I can create a backup and restore it. The only problem I have is that the table mysql.ndb_binlog_index is empty.
Does anybody have an idea why this table contains no data?
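(One thing worth checking, stated as an assumption about this setup: mysql.ndb_binlog_index is written by the binlog injector thread, so it stays empty unless the SQL node has binary logging enabled, e.g. in my.cnf:)

[mysqld]
ndbcluster
log-bin=mysql-bin    # required for mysql.ndb_binlog_index to be populated
server-id=1          # illustrative; must be unique per server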

Thanks

Cluster Node - Error data: Arbitrator decided to shutdown this node (5 replies)

Hi,
Our cluster node shut down and gave us this error. What could be the problem that shut down the cluster node?


-bash-3.2# tail -n 1000 -f /cage_ext1/mysql-cluster/ndb_2_error.log
Current byte-offset of file-pointer is: 568

Time: Thursday 30 September 2010 - 18:21:53
Status: Temporary error, restart node
Message: Node lost connection to other nodes and can not form a unpartitioned cluster, please investigate if there are error(s) on other node(s) (Arbitration error)
Error: 2305
Error data: Arbitrator decided to shutdown this node
Error object: QMGR (Line: 5595) 0x00000002
Program: ndbd
Pid: 2890
Version: mysql-5.1.47 ndb-7.1.5
Trace: /cage_ext1/mysql-cluster//ndb_2_trace.log.1
***EOM***

Thanks.

Problem starting ndb_mgmd (4 replies)

Hello. I'm trying to set up a simple MySQL cluster, and when I try to start the management server on the machine that will be the manager, I get this error: 2010-09-29 21:28:31 [MgmtSrvr] ERROR -- Could not determine which nodeid to use for this node. Specify it with --ndb-nodeid=<nodeid> on command line

I have tried running the command it suggests, but it always errors out. I've added ID=1 to config.ini under the management section, but it still does not work. I'm on SuSE 11.3. I'm new to MySQL clustering, so any help is appreciated.
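(A hedged sketch of a setup that normally avoids this error; the NodeId value, address, and path are illustrative. The error usually means the [ndb_mgmd] section could not be matched to the local machine, so HostName there must resolve to an address of the host actually running ndb_mgmd:)

config.ini:
[ndb_mgmd]
NodeId=1
HostName=192.168.0.1      # illustrative; must be this machine's address

shell> ndb_mgmd -f /path/to/config.ini --ndb-nodeid=1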

HELP: NDB is much slower than InnoDB (7 replies)

I have a denormalized table
product
with about 6 million rows (~ 2GB) mainly for lookups. Fields include
price, color, unitprice, weight, ...
I have BTREE indexes on color etc. Query conditions are dynamically generated from the Web, such as

select count(*) from product where color=1 and price > 5 and price <100 and weight > 30 ... etc

and

select * from product where color=2 and price > 35 and unitprice <110 order by weight limit 25;

I used to use InnoDB and tried MEMORY tables, and switched to NDB hoping more concurrent queries could be done faster. I have 2 tables with the same schema, indexes, and data. One is InnoDB while the other is NDB. But the results are very disappointing: for the queries mentioned above, InnoDB is something like 50 times faster than NDB. It's like 0.8 second vs 40 seconds. For this test I was running only a single select query repeatedly. Both InnoDB and NDB queries are using the same index on color.

I am using mysql-5.1.47 ndb-7.1.5 on a dual Xeon 5506 (8 cores total), 32GB memory running CentOS 5. I set up 2 NDB Data nodes, one MGM node and one MYSQL node on the same box. For each node I allocated like 9GB memory, and also tried
MaxNoOfExecutionThreads=8, LockPagesInMainMemory, LockExecuteThreadToCPU
and many other config parameters, but no luck. While NDB is running the query, my peak CPU load was only around 200%, i.e., only 2 out of 8 cores were busy. Most of the time it was around 100%. I was using ndbmtd, and verified in the data node log that the LQH threads were indeed spawned.
I also tried EXPLAIN and profiling -- they just showed that 'Sending data' was consuming most of the time. I also went through some MySQL Cluster tuning documents available online, which were not very helpful in my case.

Can anybody shed some light on this? Is there a better way to tune an NDB database? I'd appreciate it!
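(One hedged suggestion for range conditions like those above: check that condition pushdown is enabled, so the data nodes filter rows themselves instead of streaming every index match back to mysqld. The table and conditions are taken from the post:)

-- check/enable pushdown of WHERE conditions to the data nodes
SHOW VARIABLES LIKE 'engine_condition_pushdown';
SET engine_condition_pushdown = ON;
-- "Using where with pushed condition" in the Extra column means it is active
EXPLAIN SELECT count(*) FROM product
WHERE color = 1 AND price > 5 AND price < 100;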

Best configuration for 16 nodes (3 replies)

Hi!

I have 16 nodes (16 cores and 16 GB per node), so I wonder which is the best configuration for MySQL Cluster. On one hand, the more MySQL servers I have, the more computing power I have to execute queries. On the other hand, the more data nodes I have, the more data I can store in memory (and also more computing power on the data side).

I would say: on each node, run a MySQL server and also a data node. What drawbacks does this configuration present? (A sketch follows below.)

PS: How are queries distributed over the MySQL servers?
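(A minimal config.ini sketch of that colocated layout; hostnames are illustrative. With NoOfReplicas=2, 16 data nodes form 8 node groups. As for the PS: MySQL Cluster itself does not distribute queries across SQL nodes; that is up to the application or a load balancer.)

[ndbd default]
NoOfReplicas=2            # 16 data nodes / 2 replicas = 8 node groups

[ndb_mgmd]
HostName=node01           # illustrative hostname

[ndbd]
HostName=node01
[mysqld]
HostName=node01

[ndbd]
HostName=node02
[mysqld]
HostName=node02
# ...and so on through node16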

Thanks in advance!

mysql nodes not connecting (no replies)

I have seen topics here similar to my problem, but none of them seem to have fixed it, so I am posting this. I am sorry if it's a repeat of another one.

I am testing out MySQL Cluster, so I decided I will have 1 management server and 2 machines acting as both MySQL servers and data nodes. As I only have 2 machines to test with, 1 of them will run the management server as well. The setup is like this:

192.168.1.10 - management server / sql node / data node
192.168.1.11 - sql node / data node

As these are locally connected, I have disabled the firewall to rule out any problems there, but if someone could tell me what ports I need to open when/if this goes into production, that would be good.



Now the config for the management node:

[NDBD DEFAULT]
NoOfReplicas=2

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.1.10
DataDir= /var/lib/mysql-cluster

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.1.10
DataDir= /var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node
HostName=192.168.1.11
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
HostName=192.168.1.10
[MYSQLD]
HostName=192.168.1.11




And the config file for MySQL:

[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring="4,192.168.1.10:1186"
server-id=4

#Option for ndbd process
[ndbd]
connect-string=192.168.1.10
[ndb_mgm]
connect-string=192.168.1.10

[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.1.10


And the results from ndb_mgm:

Connected to Management Server at: 192.168.1.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.1.10 (mysql-5.1.47 ndb-7.1.5, Nodegroup: 0, Master)
id=3 @192.168.1.11 (mysql-5.1.47 ndb-7.1.5, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.10 (mysql-5.1.47 ndb-7.1.5)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)


Finally, the error log from MySQL:

101007 09:55:09 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
101007 9:55:09 [Note] Plugin 'FEDERATED' is disabled.
101007 9:55:09 InnoDB: Started; log sequence number 0 44253
101007 9:55:09 [Note] NDB: NodeID is 4, management server '192.168.1.10:1186'
101007 9:55:10 [Note] NDB[0]: NodeID: 4, no storage nodes connected (timed out)
101007 9:55:10 [Note] Starting Cluster Binlog Thread
101007 9:55:10 [Note] Event Scheduler: Loaded 0 events
101007 9:55:25 [Warning] NDB : Tables not available after 15 seconds. Consider increasing --ndb-wait-setup value
101007 9:55:25 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.47-ndb-7.1.5-cluster-gpl' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Cluster Server (GPL)



I have run out of ideas so any help is really appreciated.
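(One hedged observation on the my.cnf above: the documented connect string grammar is [nodeid=node_id,]host[:port], with the nodeid key spelled out, so the mysqld section would look like this:)

[mysqld]
ndbcluster
# nodeid key spelled out, followed by the management node address
ndb-connectstring="nodeid=4,192.168.1.10:1186"
server-id=4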

Default Port for Mgt Node (1 reply)

Hi,
I want my mgt node to listen on a port other than the default. Earlier, I used the
[tcp default]
PortNumber=3310
option. But now this parameter is obsolete.
How can I get this setting for the Cluster?

What are the other ways to do it?
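(A sketch of the current way to do this, via the management node's own section of config.ini; the values are illustrative:)

[ndb_mgmd]
PortNumber=3310           # management node listens here instead of the default 1186
HostName=192.168.0.1      # illustrative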

Thanks.

"Table is full" error at around 350 million rows (3 replies)

Hi,

we are using MySQL Cluster for storing financial data like stock quotes, which results in massive INSERTs and UPDATEs. Lately, we encountered strange "table full" errors while inserting data, although data and index memory are OK (50% free each).

The table full errors happen at around 350 million rows in 2 specific tables, always exactly around that boundary. All other tables are fine and it is possible to INSERT without any issue. We used the table create option "max_rows=600000000". It seems like we hit a limit of some kind (see detailed config below). Any idea?
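(For reference, a hedged sketch of the usual workaround for this class of "table is full" error: NDB derives the number of fragments from MAX_ROWS, so setting it well above the expected row count makes NDB create more fragments. The table and column names are illustrative, and note that on NDB such an ALTER is a copying operation:)

ALTER TABLE quotes MAX_ROWS=2000000000;

-- or at creation time:
CREATE TABLE quotes (
    id BIGINT NOT NULL PRIMARY KEY,
    price DECIMAL(12,4)
) ENGINE=NDBCLUSTER MAX_ROWS=2000000000;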



Infrastructure:
- 4 data nodes, 128 GB RAM each
- 2 mysqld instances for writing data
- 4 mysqld instances which are configured master(1)-slave(3) and are connected to the cluster for reading


NDB MGM config:
[NDBD DEFAULT]
NoOfReplicas=2
Datadir=/mnt/data/cluster
FileSystemPathDD=/mnt/data/cluster
#FileSystemPathUndoFiles=/mnt/data/cluster
#FileSystemPathDataFiles=/mnt/data/cluster
DataMemory=85000M
IndexMemory=25000M
LockPagesInMainMemory=1

MaxNoOfConcurrentOperations=1500000

StringMemory=25
MaxNoOfTables=4096
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512
MaxNoOfAttributes=24576
MaxNoOfTriggers=14336
DiskCheckpointSpeedInRestart=100M
FragmentLogFileSize=128M
InitFragmentLogFiles=SPARSE
NoOfFragmentLogFiles=300
RedoBuffer=1G

TimeBetweenLocalCheckpoints=20
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=100

MemReportFrequency=30
BackupReportFrequency=10

### Params for setting logging
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15

### Params for increasing Disk throughput
BackupMaxWriteSize=1M
BackupDataBufferSize=16M
BackupLogBufferSize=4M
BackupMemory=20M
# Reports indicate that ODirect=1 can cause IO errors (OS error code 5) on some systems. You must test.
ODirect=1

### Watchdog
TimeBetweenWatchdogCheckInitial=60000

### TransactionInactiveTimeout - should be enabled in Production
TransactionInactiveTimeout=60000
### CGE 6.3 - REALTIME EXTENSIONS
#RealTimeScheduler=1
#SchedulerExecutionTimer=80
#SchedulerSpinTimer=40

### DISK DATA
SharedGlobalMemory=20M
DiskPageBufferMemory=64M

### Multithreading
MaxNoOfExecutionThreads=4

### Increasing the LongMessageBuffer b/c of a bug (20090903)
LongMessageBuffer=32M

ERROR 1296 (HY000) at line 7026: Got error 4350 'Transaction already aborted' from NDBCLUSTER (2 replies)

Hi,
We are trying to insert about 1000 rows and we get this error:

ERROR 1296 (HY000) at line 7026: Got error 4350 'Transaction already aborted' from NDBCLUSTER

In our config we have:
TransactionInactiveTimeout=60000

What can we do to fix this issue?
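(One hedged note: TransactionInactiveTimeout only covers idle time between operations in a transaction; time spent actually executing is bounded by TransactionDeadlockDetectionTimeout, and error 4350 typically follows an earlier abort within the same transaction. A config.ini sketch, value illustrative:)

[ndbd default]
# default is 1200 ms; a larger value gives big multi-row inserts more time
TransactionDeadlockDetectionTimeout=10000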

Thanks.

Restarting data nodes automatically in Windows 2008 (1 reply)

Hi,

I have three servers (Windows Server 2008), of which two are data nodes and the remaining one is the management and SQL node. I am using MySQL Cluster 7.1.5. I am trying to test whether a data node restarts automatically after a network or other failure.

I have added the parameter StopOnError under [ndbd default] in config.ini, but the angel process just stops and doesn't restart it by itself.

Now following are my questions:
1) Does Windows support restarting the data nodes automatically (using the StopOnError parameter)?
2) Does any newer version of MySQL Cluster support restarting data nodes on Windows 2008?

Appreciate any help in this regard.
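(For reference, a sketch of the setting in question; whether the Windows angel process honors it in 7.1.5 is exactly what is being asked here. Note the value semantics: 0 means restart on error, 1 means stop:)

[ndbd default]
StopOnError=0    # angel process restarts the data node after a failure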


Read only "Lock wait timeout exceeded; try restarting transaction" (1 reply)

This is a plea for help! I'm seeing loads of "Lock wait timeout exceeded; try restarting transaction" exceptions when hitting the cluster with a large number of SELECTs. The question is WHY?!

I'm not inserting, updating or deleting; the isolation level is READ-COMMITTED and my queries are not explicitly locking. Additionally, none of the columns are BLOB or TEXT. Also, the timeouts are set really long, so the sub-second query ought to be fine. I'm at a bit of a loss. Details as follows:

I have a small cluster with 2 data nodes, 1 mgt node and 2 MySQL nodes. The data nodes are multi-threaded; config files below (my.cnf is identical on both MySQL nodes, config.ini is the mgt node setup).

I have a JDBC-based test harness; it sets up many readers to concurrently fire a non-trivial (but still sub-second) query. It joins a largish table (1.2M rows) to itself and onto another similarly sized table (690K rows) and a couple of ref data tables.

The harness uses C3P0 to pool connections, setup thus:
cpds.setMinPoolSize(10);
cpds.setAcquireIncrement(5);
cpds.setAutoCommitOnClose(true);
cpds.setMaxPoolSize(200);

The connect string is:
jdbc:mysql:loadbalance://10.194.192.74:3306,10.194.192.75:3306/test?roundRobinLoadBalance=true

----config.ini----
[NDBD DEFAULT]
NoOfReplicas=2
LockPagesInMainMemory=1

DataMemory=5244M
IndexMemory=768M

ODirect=1

NoOfFragmentLogFiles=300
MaxNoOfConcurrentOperations=100000
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=200
DiskCheckpointSpeed=10M
DiskCheckpointSpeedInRestart=100M
RedoBuffer=32M
# MaxNoOfLocalScans=64
MaxNoOfTables=1024
MaxNoOfOrderedIndexes=256

MaxNoOfConcurrentScans=500
MaxNoOfExecutionThreads=2
TransactionDeadlockDetectionTimeout=60000

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]
# Management Server
SendBufferMemory=8M
ReceiveBufferMemory=8M

[NDB_MGMD]
# the IP of THIS SERVER
HostName=10.194.192.71

[NDBD]
# the IP of the FIRST SERVER (Data Node)
HostName=10.194.192.76
DataDir= /var/lib/mysql-cluster

[NDBD]
# the IP of the SECOND SERVER (Data Node)
HostName=10.194.192.78
DataDir=/var/lib/mysql-cluster

[MYSQLD]...x24
-----end config.ini ----

----my.cnf----
[client]
port=3306
socket=/var/lib/mysql/mysql.sock

[mysqld]

ndbcluster
# IP address of the cluster management node
ndb-connectstring=10.194.192.71
default-storage-engine=NDBCLUSTER

ndb_cluster_connection_pool=10
max_connections=1000
transaction-isolation = READ-COMMITTED
#query_cache_size=16M
#thread_concurrency = 4

----end my.cnf----

Sql Nodes Not Connecting !! (3 replies)

Hi,
I've set up a MySQL Cluster with 1 mgt node, 2 data nodes and 2 SQL nodes.
The data nodes and SQL nodes are on the same machines.

The data nodes are connected to the cluster, but when I start the SQL nodes, they start but do not connect to the cluster.

I have done this before, but I am not sure why it's not working now.

The configuration on the data/SQL node machines is as follows:

[mysqld]
server-id=5

[ndbcluster]
ndb-connectstring="host=192.168.150.130:3310"
datadir=/home/mysql-cluster-gpl-7.1.5/data

[mysql_cluster]
ndb-connectstring="host=192.168.150.130:3310"


Let me know if there is something wrong that I am doing.
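(A hedged sketch of a more conventional layout for these options; as far as I can tell, [ndbcluster] is not a group that mysqld reads, and the ndbcluster switch itself belongs under [mysqld]:)

[mysqld]
server-id=5
ndbcluster
ndb-connectstring="host=192.168.150.130:3310"

[mysql_cluster]
ndb-connectstring="host=192.168.150.130:3310"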

Thanks.

Mysql Cluster in 2 Servers! (1 reply)

Hi, I only have 2 servers, and I want to configure all 3 node types on just these 2 servers. But if I configure 2 ndb_mgm processes and one fails, all the nodes fail too; and if I only have one ndb_mgm and it fails, all the nodes fail as well.

I would really appreciate it if somebody could help me with this!

Thanks, and sorry for my English.

Cannot add tables to MySQL cluster (no replies)

Here is the output from ndb_mgm > SHOW of my cluster setup:

ndb_mgm> SHOW
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.1.20 (mysql-5.1.47 ndb-7.1.5, Nodegroup: 0, Master)
id=3 @192.168.1.22 (mysql-5.1.47 ndb-7.1.5, starting, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.21 (mysql-5.1.47 ndb-7.1.5)

[mysqld(API)] 3 node(s)
id=4 @192.168.1.21 (mysql-5.1.47 ndb-7.1.5)
id=5 @192.168.1.20 (mysql-5.1.47 ndb-7.1.5)
id=6 @192.168.1.22 (mysql-5.1.47 ndb-7.1.5)

However, when I log in to one of the mysqld nodes and try to create a table, I get this error message:
ERROR 1005 (HY000): Can't create table 'clustertest.ctest' (errno: 711)

I've looked into it a little and the only explanation I can find is that one of the nodes is busy. I have not found any way to fix this error.
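(For reference, NDB error codes like the 711 above can be looked up with the cluster distribution's perror. Consistent with node id=3 showing as "starting" in the SHOW output, 711 indicates the system is busy with a node restart, during which schema operations are not allowed:)

shell> perror --ndb 711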

Any help is appreciated.

JpaCluster on Glassfish using persistence.xml: Can not connect to 192.168.56.101 (1 reply)

Doing some prototyping with OpenJPA + JpaCluster + Sailfin 2 (based on Glassfish 2.1) + JPA.

Using the following persistence.xml:
<persistence-unit name="BM" transaction-type="JTA">
<provider>
org.apache.openjpa.persistence.PersistenceProviderImpl
</provider>
<jta-data-source>jdbc/bm</jta-data-source>
<class>persist.BigCompany</class>
<class>persist.SmallEmployee</class>
<properties>
<property name="openjpa.BrokerFactory" value="ndb"/>
<property name="openjpa.ndb.connectString" value="192.168.56.101:1186"/>
<property name="openjpa.jdbc.SynchronizeMappings"
value="buildSchema(SchemaAction='add')"/>
<property name="openjpa.ConnectionRetainMode" value="transaction"/>
<property name="openjpa.ndb.database" value="bm"/>
<property name="openjpa.ndb.connectVerbose" value="1"/>
<property name="openjpa.DataCache" value="false"/>
<property name="openjpa.ConnectionUserName" value="root"/>
<property name="openjpa.ConnectionPassword" value="system"/>
</properties>
</persistence-unit>

ClusterJ seems to fail to connect to a data node:

Caused by: <openjpa-1.2.2-r422266:898935 nonfatal general error>
org.apache.openjpa.persistence.PersistenceException: Error getting connection to cluster
with properties {com.mysql.clusterj.connect.verbose=1,
com.mysql.clusterj.connect.retries=4, com.mysql.clusterj.connect.delay=5,
com.mysql.clusterj.connectstring=192.168.56.101:1186,
com.mysql.clusterj.max.transactions=1024, com.mysql.clusterj.connect.timeout.before=30,
com.mysql.clusterj.database=bm, com.mysql.clusterj.connect.timeout.after=20} Caused by
com.mysql.clusterj.ClusterJDatastoreException:Datastore exception Return code: -1 error
code: 0 message: .
at
org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:196)
at
org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:142)
at
org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:192)
at
org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:56)
at
com.sun.enterprise.util.EntityManagerWrapper._getDelegate(EntityManagerWrapper.java:326)
at com.sun.enterprise.util.EntityManagerWrapper.persist(EntityManagerWrapper.java:440)
at loader.MyLoader.createCompany(MyLoader.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
com.sun.enterprise.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1011)
at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:175)
at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2929)
at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4020)
at
com.sun.ejb.containers.WebServiceInvocationHandler.invoke(WebServiceInvocationHandler.java:190)
... 63 more
Caused by: com.mysql.clusterj.ClusterJFatalException: Error getting connection to cluster
with properties {com.mysql.clusterj.connect.verbose=1,
com.mysql.clusterj.connect.retries=4, com.mysql.clusterj.connect.delay=5,
com.mysql.clusterj.connectstring=192.168.56.101:1186,
com.mysql.clusterj.max.transactions=1024, com.mysql.clusterj.connect.timeout.before=30,
com.mysql.clusterj.database=bm, com.mysql.clusterj.connect.timeout.after=20} Caused by
com.mysql.clusterj.ClusterJDatastoreException:Datastore exception Return code: -1 error
code: 0 message: .
at com.mysql.clusterj.core.SessionFactoryImpl.<init>(SessionFactoryImpl.java:145)
at
com.mysql.clusterj.core.SessionFactoryImpl.getSessionFactory(SessionFactoryImpl.java:108)
at
com.mysql.clusterj.core.SessionFactoryServiceImpl.getSessionFactory(SessionFactoryServiceImpl.java:36)
at
com.mysql.clusterj.core.SessionFactoryServiceImpl.getSessionFactory(SessionFactoryServiceImpl.java:27)
at com.mysql.clusterj.ClusterJHelper.getSessionFactory(ClusterJHelper.java:61)
at com.mysql.clusterj.ClusterJHelper.getSessionFactory(ClusterJHelper.java:46)
at
com.mysql.clusterj.openjpa.NdbOpenJPAConfigurationImpl.createSessionFactory(NdbOpenJPAConfigurationImpl.java:261)
at
com.mysql.clusterj.openjpa.NdbOpenJPAConfigurationImpl.getSessionFactory(NdbOpenJPAConfigurationImpl.java:228)
at
com.mysql.clusterj.openjpa.NdbOpenJPAConfigurationImpl.getSessionFactory(NdbOpenJPAConfigurationImpl.java:52)
at
com.mysql.clusterj.openjpa.NdbOpenJPAStoreManager.setContext(NdbOpenJPAStoreManager.java:99)
at
com.mysql.clusterj.openjpa.NdbOpenJPAStoreManager.setContext(NdbOpenJPAStoreManager.java:93)
at
org.apache.openjpa.kernel.DelegatingStoreManager.setContext(DelegatingStoreManager.java:78)
at org.apache.openjpa.kernel.BrokerImpl.initialize(BrokerImpl.java:311)
at
org.apache.openjpa.kernel.AbstractBrokerFactory.initializeBroker(AbstractBrokerFactory.java:216)
at
org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:190)
... 78 more
Caused by: com.mysql.clusterj.ClusterJDatastoreException: Datastore exception Return
code: -1 error code: 0 message: .
at
com.mysql.clusterj.tie.ClusterConnectionImpl.throwError(ClusterConnectionImpl.java:143)
at
com.mysql.clusterj.tie.ClusterConnectionImpl.handleError(ClusterConnectionImpl.java:120)
at
com.mysql.clusterj.tie.ClusterConnectionImpl.waitUntilReady(ClusterConnectionImpl.java:113)
at com.mysql.clusterj.core.SessionFactoryImpl.<init>(SessionFactoryImpl.java:143)
... 92 more
|#]

In the ndb_mgm SHOW output I can see available mysqld(API) nodes:
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @127.0.0.1 (mysql-5.1.47 ndb-7.1.8, Nodegroup: 0, Master)
id=4 @127.0.0.1 (mysql-5.1.47 ndb-7.1.8, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @127.0.0.1 (mysql-5.1.47 ndb-7.1.8)

[mysqld(API)] 4 node(s)
id=50 @127.0.0.1 (mysql-5.1.47 ndb-7.1.8)
id=51 (not connected, accepting connect from any host)
id=52 (not connected, accepting connect from any host)
id=53 (not connected, accepting connect from any host)

I am using the default config.ini:

[ndb_mgmd]
hostname=localhost
datadir=/home/baboune/ndb/one-cluster/ndb_data
id=1

[ndbd default]
noofreplicas=2
datadir=/home/baboune/ndb/one-cluster/ndb_data

[ndbd]
hostname=localhost
id=3

[ndbd]
hostname=localhost
id=4

[mysqld]
id=50

[mysqld]
id=51

[mysqld]
id=52

[mysqld]
id=53

And my.cnf:

[mysqld]
ndbcluster
datadir=/home/baboune/ndb/one-cluster/mysqld_data
basedir=/home/baboune/mysqlc

Note: if I remove the ndb parts in the persistence.xml, then everything works, as my data source points to a valid MySQL connection pool.

How to repeat:
Using the above persistence.xml configuration, deploy an application on a Glassfish server.

Some comments on my setup:
- Main machine running Windows 7 64-bit with Glassfish
- Virtual machine running Ubuntu Maverick 32-bit + MySQL Cluster

I tried the exact same thing but with everything collocated on Windows, and there I seem to get a connection through to the DB:

Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: Can't create table
'bm.bigcompany' (errno: 157) {stmnt 202688481 CREATE TABLE BIGCOMPANY (id BIGINT NOT NULL
AUTO_INCREMENT, name VARCHAR(255), PRIMARY KEY (id)) TYPE = ndbcluster} [code=1005,
state=HY000]
at
org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:192)
at
org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$700(LoggingConnectionDecorator.java:57)
at
org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingStatement.executeUpdate(LoggingConnectionDecorator.java:762)
at
org.apache.openjpa.lib.jdbc.DelegatingStatement.executeUpdate(DelegatingStatement.java:114)
at org.apache.openjpa.jdbc.schema.SchemaTool.executeSQL(SchemaTool.java:1191)
at org.apache.openjpa.jdbc.schema.SchemaTool.createTable(SchemaTool.java:949)
at org.apache.openjpa.jdbc.schema.SchemaTool.add(SchemaTool.java:526)
at org.apache.openjpa.jdbc.schema.SchemaTool.add(SchemaTool.java:344)
at org.apache.openjpa.jdbc.schema.SchemaTool.run(SchemaTool.java:321)
at org.apache.openjpa.jdbc.meta.MappingTool.record(MappingTool.java:501)
... 82 more

It is still not good but at least it appears to connect to the NDB API.

So is there some sort of bug when going to an external NDB API (not localhost)?
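(One hedged observation on the config.ini above: the data nodes are bound to localhost, so they register with the cluster as 127.0.0.1. An external ClusterJ client can then reach the management node at 192.168.56.101 but cannot open transporter connections back to 127.0.0.1, which would match the waitUntilReady failure above. A sketch of the change, with the VM's address assumed from the connect string:)

[ndbd]
hostname=192.168.56.101   # instead of localhost, so external API nodes can reach it
id=3

[ndbd]
hostname=192.168.56.101
id=4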

Error during restore (no replies)

Hi,
I am restoring the Cluster from a backup, and during the restore I receive the error
"Temporary error: 4010: Node failure caused abort of transaction".

And then one of the data nodes gets disconnected.

I am restoring to a single data node (hoping that the restore gets replicated to the other data node).

I am using mysql-cluster-gpl-7.1.8-linux-i686-glibc23.tar.gz
and have 2 mgt nodes, 2 data nodes and 4 SQL nodes.

I had this problem in version 7.0.9 as well, and could not resolve it.

When I started the data node, I got the error "Node 3 missed heartbeat 3"
and the data node disconnected again.

I think now I need to re-initialize the data node again, but this can't be done in a production environment.


Is this problem related to the restore or to some other issue?
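(For reference, a hedged sketch of the usual restore sequence; the connect string, node ids, backup id, and paths are illustrative. ndb_restore is run once per original data node's backup files, restoring the metadata (-m) only on the first run:)

shell> ndb_restore -c 192.168.150.180:3312 -n 2 -b 1 -m -r /backups/BACKUP/BACKUP-1
shell> ndb_restore -c 192.168.150.180:3312 -n 3 -b 1 -r /backups/BACKUP/BACKUP-1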

Config File:
[ndbd default]
NoOfReplicas=2
DataMemory=1000M
IndexMemory= 100M
BackupMemory= 16M
MaxNoOfOrderedIndexes = 1024
MaxNoOfAttributes = 10000
MaxNoOfTables = 2500
MaxNoOfConcurrentOperations=500000
MaxNoOfLocalOperations=550000


[NDB_MGMD default]
ArbitrationRank=2

[NDB_MGMD]
PortNumber=3312
HostName=192.168.150.180
datadir=/home/mysql-cluster-7.1.8/data

[NDB_MGMD]
PortNumber=3312
HostName=192.168.150.158
datadir=/home/mysql-cluster-7.1.8/data

[NDBD]
HostName=192.168.150.181
datadir=/home/mysql-cluster-7.1.8/data
TransactionDeadlockDetectionTimeout=7000
#HeartbeatIntervalDbDb=3000
#HeartbeatIntervalDbApi=3000


[NDBD]
HostName=192.168.150.184
datadir=/home/mysql-cluster-7.1.8/data
TransactionDeadlockDetectionTimeout=7000
#HeartbeatIntervalDbDb=3000
#HeartbeatIntervalDbApi=3000

[mysqld]
HostName=192.168.150.180

[mysqld]
HostName=192.168.150.181

[mysqld]
HostName=192.168.150.184

[mysqld]
HostName=192.168.150.158

[mysqld]
[mysqld]

Cluster Configuration (1 reply)

Hi,
What would be the best configuration for 2 data nodes, with respect to
the number of replicas and node groups?

Also, can anyone tell me the optimum configuration details for an OLTP application that needs to run on the Cluster?
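(For 2 data nodes the arithmetic is fixed: number of node groups = number of data nodes / NoOfReplicas. A sketch of the common choice:)

[ndbd default]
NoOfReplicas=2    # 2 data nodes / 2 replicas = 1 node group; each node holds a full copy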

Thanks.