Channel: MySQL Forums - NDB clusters

Failed to get a session to MySQL Cluster (no replies)

Hi experts,

I'm new to MySQL, and I want to use ClusterJ to access a database in a MySQL Cluster.
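
For context, here is a minimal sketch of the bootstrap I'm using (simplified; the property values match the ones shown in the error below, and the class name is just a placeholder):

import java.util.Properties;

import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;

public class GetSessionExample {
    public static void main(String[] args) {
        // Connect string and database as in the error message below.
        Properties props = new Properties();
        props.setProperty("com.mysql.clusterj.connectstring", "127.0.0.1:1186");
        props.setProperty("com.mysql.clusterj.database", "geodb");

        // This is the call that throws ClusterJFatalUserException for me.
        SessionFactory factory = ClusterJHelper.getSessionFactory(props);
        Session session = factory.getSession();
        System.out.println("Got session: " + session);
        session.close();
        factory.close();
    }
}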

I got the following error when I tried to get a session from the cluster:
Exception in thread "main" com.mysql.clusterj.ClusterJFatalUserException: Error getting connection to cluster with properties {com.mysql.clusterj.connectstring=127.0.0.1:1186, com.mysql.clusterj.database=geodb}:
Caused by com.mysql.clusterj.ClusterJDatastoreException:Datastore exception on connectString '127.0.0.1:1186' nodeId 0; Return code: -1 error code: 1,101 message: Error: Could not alloc node id at 127.0.0.1 port 1186: No free node id found for mysqld(API)..
at com.mysql.clusterj.core.SessionFactoryImpl.createClusterConnection(SessionFactoryImpl.java:252)
at com.mysql.clusterj.core.SessionFactoryImpl.createClusterConnectionPool(SessionFactoryImpl.java:226)
at com.mysql.clusterj.core.SessionFactoryImpl.<init>(SessionFactoryImpl.java:174)
at com.mysql.clusterj.core.SessionFactoryImpl.getSessionFactory(SessionFactoryImpl.java:129)
at com.mysql.clusterj.core.SessionFactoryServiceImpl.getSessionFactory(SessionFactoryServiceImpl.java:36)
at com.mysql.clusterj.core.SessionFactoryServiceImpl.getSessionFactory(SessionFactoryServiceImpl.java:27)
at com.mysql.clusterj.ClusterJHelper.getSessionFactory(ClusterJHelper.java:69)
at com.mysql.clusterj.ClusterJHelper.getSessionFactory(ClusterJHelper.java:54)

I found http://forums.mysql.com/read.php?25,518259,518358#msg-518358, which may be related to this, but I don't know how to resolve the issue.

Can someone tell me how to resolve this issue, e.g. with an example?

Thanks a lot.

Failed to persist object through ClusterJPA (no replies)

I tried to persist a simple object into a MySQL Cluster database through ClusterJPA, following the instructions at http://planet.mysql.com/entry/?id=24140

I got the following error when trying to persist the object:
mysql-connector-java-5.1.31 ( Revision: alexander.soklakov@oracle.com-20140520065950-groqzzbvxprqdmnz ).
Exception in thread "main" <openjpa-2.3.0-r422266:1540826 fatal store error> org.apache.openjpa.persistence.RollbackException: The transaction has been rolled back. See the nested exceptions for details on the errors that occurred.
at org.apache.openjpa.persistence.EntityManagerImpl.commit(EntityManagerImpl.java:594)
at TestClusterJPA.main(TestClusterJPA.java:34)
Caused by: <openjpa-2.3.0-r422266:1540826 fatal general error> org.apache.openjpa.persistence.PersistenceException: The transaction has been rolled back. See the nested exceptions for details on the errors that occurred.
at org.apache.openjpa.kernel.BrokerImpl.newFlushException(BrokerImpl.java:2370)
at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:2207)
at org.apache.openjpa.kernel.BrokerImpl.flushSafe(BrokerImpl.java:2105)
at org.apache.openjpa.kernel.BrokerImpl.beforeCompletion(BrokerImpl.java:2023)
at org.apache.openjpa.kernel.LocalManagedRuntime.commit(LocalManagedRuntime.java:81)
at org.apache.openjpa.kernel.BrokerImpl.commit(BrokerImpl.java:1528)
at org.apache.openjpa.kernel.DelegatingBroker.commit(DelegatingBroker.java:933)
at org.apache.openjpa.persistence.EntityManagerImpl.commit(EntityManagerImpl.java:570)
... 1 more
Caused by: com.mysql.clusterj.ClusterJFatalInternalException: Operation partitionKeySetPart is not supported for non-key fields.
at com.mysql.clusterj.core.metadata.AbstractDomainFieldHandlerImpl$27.partitionKeySetPart(AbstractDomainFieldHandlerImpl.java:2387)
at com.mysql.clusterj.core.metadata.AbstractDomainFieldHandlerImpl.partitionKeySetPart(AbstractDomainFieldHandlerImpl.java:392)
at com.mysql.clusterj.openjpa.NdbOpenJPADomainTypeHandlerImpl.createPartitionKey(NdbOpenJPADomainTypeHandlerImpl.java:578)
at com.mysql.clusterj.core.SessionImpl.setPartitionKey(SessionImpl.java:264)
at com.mysql.clusterj.core.SessionImpl.insert(SessionImpl.java:428)
at com.mysql.clusterj.openjpa.NdbOpenJPAStoreManager.flush(NdbOpenJPAStoreManager.java:375)
at org.apache.openjpa.kernel.DelegatingStoreManager.flush(DelegatingStoreManager.java:131)

I use apache-openjpa 2.3.0 together with mysql-connector-java-5.1.31. Here is the definition of my test class:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity(name = "Project")
public class ProjectJPA {
    private Long Id;
    private String tenantOrg;
    private int quotaGB;
    private int quotaEnabled;

    @Id
    public Long getId() {
        return Id;
    }

    public void setId(Long id) {
        Id = id;
    }

    @Column(name = "tenantOrg")
    public String getTenantOrg() {
        return tenantOrg;
    }

    public void setTenantOrg(String tenant) {
        tenantOrg = tenant;
    }

    @Column(name = "quotaGB")
    public int getQuotaGB() {
        return quotaGB;
    }

    public void setQuotaGB(int quota) {
        quotaGB = quota;
    }

    @Column(name = "quotaEnabled")
    public int getQuotaEnabled() {
        return quotaEnabled;
    }

    public void setQuotaEnabled(int enabled) {
        quotaEnabled = enabled;
    }

    @Override
    public String toString() {
        StringBuilder builder = new StringBuilder("ProjectJPA:\n");
        builder.append("Id=").append(getId()).append("\n");
        builder.append("tid=").append(getTenantOrg()).append("\n");
        builder.append("quotaGB=").append(getQuotaGB()).append("\n");
        builder.append("quotaEnabled=").append(getQuotaEnabled()).append("\n");
        return builder.toString();
    }
}

Why do I get this error, and how can I set the log level so that I can get more logging information from ClusterJ?
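(My assumption is that ClusterJ logs through java.util.logging under the com.mysql.clusterj package, judging from the stack trace; if that is right, something along these lines, run before the ClusterJPA bootstrap, should raise the verbosity, but I am not sure it is the intended way:)

import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ClusterJLogging {
    public static void main(String[] args) {
        // Assumption: ClusterJ uses java.util.logging and its loggers live
        // under the com.mysql.clusterj package (guessed from the stack trace).
        Logger clusterjLogger = Logger.getLogger("com.mysql.clusterj");
        clusterjLogger.setLevel(Level.FINEST);

        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.FINEST);
        clusterjLogger.addHandler(handler);

        // ... create the EntityManagerFactory and persist the object after this ...
    }
}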

Thanks a lot.

How to build the openjpa jar from source (no replies)

I want to build MySQL Cluster from source and generate the ClusterJPA jar file.

The page at http://dev.mysql.com/doc/ndbapi/en/mccj-getting.html says that I need to set the configure option --with-plugin to openjpa to generate the OpenJPA jar file.

But the source code of mysql-cluster-gpl-7.3.5 uses CMake instead of configure to build MySQL Cluster. How do I build ClusterJPA with CMake?

Thanks

Not able to run multiple memcached instances (no replies)

Hi,

We are trying to implement sessions using MySQL Cluster + memcached.

For this we have set up a MySQL cluster with one cluster management node, two data nodes, and two application servers.

But when I try to connect using memcached, only one instance is able to connect.

Below are the config files for each one.

------------------------------------------------------------------------------------------------------------
1. management server
root@mgmd1:/var/lib/mysql-cluster # cat config.ini
[NDBD DEFAULT]

NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster
[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
NodeId=1
# IP address of the first management node (this system)
HostName=192.168.175.35

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.172.6
DataDir= /var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.172.26
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
------------------------------------------------------------------------------------------------------------
2. Database node 1

root@ndbd1:/etc # cat my.cnf
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.175.35
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.175.35
------------------------------------------------------------------------------------------------------------
3. Database node 2

root@ndbd1:/etc # cat my.cnf
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.175.35
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.175.35
------------------------------------------------------------------------------------------------------------

Cluster Configuration >>

root@mgmd1:/var/lib/mysql-cluster # ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.172.6 (mysql-5.6.19 ndb-7.3.6, Nodegroup: 0, *)
id=3 @192.168.172.26 (mysql-5.6.19 ndb-7.3.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.175.35 (mysql-5.6.19 ndb-7.3.6)

[mysqld(API)] 2 node(s)
id=4 @192.168.175.26 (mysql-5.6.19 ndb-7.3.6)
id=5 @192.168.175.26 (mysql-5.6.19 ndb-7.3.6)

ndb_mgm>

------------------------------------------------------------------------------------------------------------

Now when I start memcached on the application servers, only one of them is able to connect to NDB.

Please see below,

root@applicationserver1:~ # memcached -E /usr/local/mysql/lib/ndb_engine.so -e "connectstring=192.168.175.35:1186;role=db-only;" -vv -c 20 -u root
12-Sep-2014 13:04:09 IST NDB Memcache 5.6.19-ndb-7.3.6 started [NDB 7.3.6; MySQL 5.6.19]
Contacting primary management server (192.168.175.35:1186) ...
Connected to "192.168.175.35:1186" as node id 4.
Retrieved 3 key prefixes for server role "db-only".
The default behavior is that:
GET uses NDB only
SET uses NDB only
DELETE uses NDB only.
The 2 explicitly defined key prefixes are "b:" (demo_table_large) and "t:" (demo
Connected to "192.168.175.35" as node id 5.
Server started with 4 threads.
Priming the pump ...
Failed to grow connection pool.
Scheduler: using 1 connection to cluster 0
Scheduler: starting for 1 cluster; c0,f0,g1,t1
done [14.575 sec].
Loaded engine: NDB Memcache 5.6.19-ndb-7.3.6
Supplying the following features: compare and swap, persistent storage, LRU
<48 server listening (auto-negotiate)
<49 server listening (auto-negotiate)
<50 send buffer was 212992, now 268435456
<51 send buffer was 212992, now 268435456
<50 server listening (udp)
<50 server listening (udp)
<51 server listening (udp)
<50 server listening (udp)
<50 server listening (udp)
<51 server listening (udp)
<51 server listening (udp)
<51 server listening (udp)
------------------------------------------------------------------------------------------------------------
root@applicationserver2:~ # memcached -E /usr/local/mysql/lib/ndb_engine.so -e "connectstring=192.168.175.35:1186;role=db-only;" -vv -c 20 -u root
12-Sep-2014 13:04:43 IST NDB Memcache 5.6.19-ndb-7.3.6 started [NDB 7.3.6; MySQL 5.6.19]
Contacting primary management server (192.168.175.35:1186) ...
FAILED.
Could not connect to NDB. Shutting down.
Failed to initialize instance. Error code: 255
root@applicationserver2:~ #

Can anyone please help me understand what is wrong in my configuration?

Establish SSL in MySQL Cluster (1 reply)

I am an Oracle professional who just inherited a MySQL Cluster database, and I know nothing about MySQL or cluster databases.

I need to establish encrypted communication on this MySQL database.

I have done the research and found documentation in the MySQL reference manual, and it looks rather straightforward. But I can find no documentation about setting up SSL for a cluster database.

Do I just repeat the same instructions for each node of the cluster: update the configuration file on each node with the [mysqld] and [client] sections, and copy the same certificates to each node? Then do I restart each node individually, or the cluster as a whole?
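For example, would it be enough to add something like the following to each node's my.cnf, using the standard ssl-ca / ssl-cert / ssl-key options, and copy the same certificate files to every host?

[mysqld]
# paths below are placeholders
ssl-ca=/etc/mysql/certs/ca-cert.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem

[client]
ssl-ca=/etc/mysql/certs/ca-cert.pem
ssl-cert=/etc/mysql/certs/client-cert.pem
ssl-key=/etc/mysql/certs/client-key.pem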

I would appreciate some help here, as I do not know when they will be able to fill the position with an experienced MySQL professional.

Thank you for any assistance.

'Cluster Failure' from NDB. Could not acquire global schema lock (5 replies)

Hi all,

I'm new to MySQL Cluster and have started evaluating the MySQL Cluster solution.

I'm using 2 data nodes, 1 SQL node, and 1 API node for the MySQL Cluster setup.

Data node and SQL node configuration:
================================
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=1.0.0.114
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=1.0.0.114

Mgmt Node configuration:
========================
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
#DataMemory=80M # How much memory to allocate for data storage
#IndexMemory=18M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.

[tcp default]
# TCP/IP options:
#portnumber=2202 # This the default; however, you can use any
# port that is free for all the hosts in the cluster
# Note: It is recommended that you do not specify the port
# number at all and simply allow the default value to be used
# instead

[ndb_mgmd]
# Management process options:
hostname=1.0.0.114 # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node log files

[ndbd]
NodeId:10
# Options for data node "A":
# (one [ndbd] section per data node)
hostname=1.0.0.111 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files

[ndbd]
NodeId:11
# Options for data node "B":
hostname=1.0.0.112 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files

[mysqld]
NodeId:20
# SQL node options:
hostname=1.0.0.113 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)


mgmt node output:
=================
[root@localhost mysql]# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=10 @1.0.0.111 (mysql-5.6.17 ndb-7.3.5, Nodegroup: 0, *)
id=11 @1.0.0.112 (mysql-5.6.17 ndb-7.3.5, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @1.0.0.114 (mysql-5.6.17 ndb-7.3.5)

[mysqld(API)] 1 node(s)
id=20 @1.0.0.113 (mysql-5.6.17 ndb-7.3.5)

[root@localhost mysql]#

The SHOW PROCESSLIST output on the data and SQL nodes:

Here we can see that the state is still waiting.

mysql> show processlist;
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
| 1 | system user | | | Daemon | 0 | Waiting for event from ndbcluster | NULL |
| 2 | root | localhost | ss | Query | 0 | init | show processlist |
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
2 rows in set(0.00 sec).

On creating a database and a table via the SQL node, I could see the below message displayed on the data nodes.

mysql> show warnings;
+---------+------+---------------------------------------------------------------------------------+
| Level | Code | Message |
+---------+------+---------------------------------------------------------------------------------+
| Warning | 1296 | Got error 4009 'Cluster Failure' from NDB. Could not acquire global schema lock |
+---------+------+---------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql>

Data node output for global status:

mysql> show global status like 'ndb_number_of%';
+--------------------------------+-------+
| Variable_name | Value |
+--------------------------------+-------+
| Ndb_number_of_data_nodes | 2 |
| Ndb_number_of_ready_data_nodes | 0 |
+--------------------------------+-------+
2 rows in set (0.00 sec)

mysql>

SQL node output for global status:

mysql> show global status like 'ndb_number_of%';
+--------------------------------+-------+
| Variable_name | Value |
+--------------------------------+-------+
| Ndb_number_of_data_nodes | 2 |
| Ndb_number_of_ready_data_nodes | 2 |
+--------------------------------+-------+
2 rows in set (0.00 sec)


I am not able to view the SQL changes (created databases/tables) made from the SQL node on the data nodes.

MySQL Cluster logs:
===================
2014-07-02 14:25:19 [MgmtSrvr] INFO -- Got initial configuration from '/var/lib/mysql-cluster/config.ini', will try to set it when all ndb_mgmd(s) started
2014-07-02 14:25:19 [MgmtSrvr] INFO -- Id: 1, Command port: *:1186
2014-07-02 14:25:19 [MgmtSrvr] INFO -- Node 1: Node 1 Connected
2014-07-02 14:25:19 [MgmtSrvr] INFO -- MySQL Cluster Management Server mysql-5.6.17 ndb-7.3.5 started
2014-07-02 14:25:19 [MgmtSrvr] INFO -- Node 1 connected
2014-07-02 14:25:19 [MgmtSrvr] INFO -- Starting initial configuration change
2014-07-02 14:25:19 [MgmtSrvr] INFO -- Configuration 1 commited
2014-07-02 14:25:19 [MgmtSrvr] INFO -- Config change completed! New generation: 1
2014-07-02 14:25:45 [MgmtSrvr] INFO -- Nodeid 10 allocated for NDB at 1.0.0.111
2014-07-02 14:25:45 [MgmtSrvr] INFO -- Node 1: Node 10 Connected
2014-07-02 14:25:45 [MgmtSrvr] INFO -- Node 10: Buffering maximum epochs 100
2014-07-02 14:25:45 [MgmtSrvr] INFO -- Node 10: Start phase 0 completed
2014-07-02 14:25:45 [MgmtSrvr] INFO -- Node 10: Communication to Node 11 opened
2014-07-02 14:25:45 [MgmtSrvr] INFO -- Node 10: Waiting 30 sec for nodes 11 to connect, nodes [ all: 10 and 11 connected: 10 no-wait: ]
2014-07-02 14:25:48 [MgmtSrvr] INFO -- Node 10: Waiting 27 sec for nodes 11 to connect, nodes [ all: 10 and 11 connected: 10 no-wait: ]
2014-07-02 14:25:51 [MgmtSrvr] INFO -- Node 10: Waiting 24 sec for nodes 11 to connect, nodes [ all: 10 and 11 connected: 10 no-wait: ]
2014-07-02 14:25:53 [MgmtSrvr] INFO -- Nodeid 11 allocated for NDB at 1.0.0.112
2014-07-02 14:25:53 [MgmtSrvr] INFO -- Node 1: Node 11 Connected
2014-07-02 14:25:53 [MgmtSrvr] INFO -- Node 11: Buffering maximum epochs 100
2014-07-02 14:25:53 [MgmtSrvr] INFO -- Node 11: Start phase 0 completed
2014-07-02 14:25:53 [MgmtSrvr] INFO -- Node 11: Communication to Node 10 opened
2014-07-02 14:25:53 [MgmtSrvr] INFO -- Node 11: Waiting 30 sec for nodes 10 to connect, nodes [ all: 10 and 11 connected: 11 no-wait: ]
2014-07-02 14:25:54 [MgmtSrvr] INFO -- Node 10: Node 11 Connected
2014-07-02 14:25:54 [MgmtSrvr] INFO -- Node 11: Node 10 Connected
2014-07-02 14:25:54 [MgmtSrvr] INFO -- Node 10: Start with all nodes 10 and 11
2014-07-02 14:25:54 [MgmtSrvr] INFO -- Node 10: CM_REGCONF president = 10, own Node = 10, our dynamic id = 0/1
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 11: CM_REGCONF president = 10, own Node = 11, our dynamic id = 0/2
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 10: Node 11: API mysql-5.6.17 ndb-7.3.5
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 11: Node 10: API mysql-5.6.17 ndb-7.3.5
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 11: Start phase 1 completed
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 10: Start phase 1 completed
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 11: Start phase 2 completed (system restart)
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 10: Start phase 2 completed (system restart)
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 10: Start phase 3 completed (system restart)
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 11: Start phase 3 completed (system restart)
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 10: Restarting cluster to GCI: 3899
2014-07-02 14:25:57 [MgmtSrvr] INFO -- Node 10: Starting to restore schema
2014-07-02 14:25:58 [MgmtSrvr] INFO -- Node 10: Restore of schema complete
2014-07-02 14:25:58 [MgmtSrvr] INFO -- Node 11: Starting to restore schema
2014-07-02 14:25:58 [MgmtSrvr] INFO -- Node 11: Restore of schema complete
2014-07-02 14:25:58 [MgmtSrvr] INFO -- Node 10: DICT: activate index 6 done (sys/def/5/ndb_index_stat_sample_x1)
2014-07-02 14:25:58 [MgmtSrvr] INFO -- Node 10: Node: 10 StartLog: [GCI Keep: 1031 LastCompleted: 3899 NewestRestorable: 3899]
2014-07-02 14:25:58 [MgmtSrvr] INFO -- Node 10: Node: 11 StartLog: [GCI Keep: 1031 LastCompleted: 3899 NewestRestorable: 3899]
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 11: LQH: Starting to rebuild ordered indexes
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 11: LQH: index 6 rebuild done
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 11: LQH: Rebuild ordered indexes complete
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 10: LQH: Starting to rebuild ordered indexes
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 10: LQH: index 6 rebuild done
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 10: LQH: Rebuild ordered indexes complete
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 10: Start phase 4 completed (system restart)
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 11: Start phase 4 completed (system restart)
2014-07-02 14:26:00 [MgmtSrvr] INFO -- Node 10: GCP Monitor: unlimited lags allowed
2014-07-02 14:26:01 [MgmtSrvr] INFO -- Node 10: Local checkpoint 4 started. Keep GCI = 2650 oldest restorable GCI = 1370
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Local checkpoint 4 completed
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Start phase 5 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Start phase 5 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Start phase 6 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Start phase 6 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: President restarts arbitration thread [state=1]
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Start phase 7 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Start phase 7 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Start phase 8 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Start phase 8 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Start phase 9 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Start phase 9 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Start phase 100 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Start phase 100 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Start phase 101 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Start phase 101 completed (system restart)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Started (mysql-5.6.17 ndb-7.3.5)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Started (mysql-5.6.17 ndb-7.3.5)
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Node 1: API mysql-5.6.17 ndb-7.3.5
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Node 1: API mysql-5.6.17 ndb-7.3.5
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Prepare arbitrator node 1 [ticket=733f0001426bb9c9]
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Started arbitrator node 1 [ticket=733f0001426bb9c9]
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Communication to Node 20 opened
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Communication to Node 20 opened
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Node 20 Connected
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Node 20 Connected
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 10: Node 20: API mysql-5.6.17 ndb-7.3.5
2014-07-02 14:26:05 [MgmtSrvr] INFO -- Node 11: Node 20: API mysql-5.6.17 ndb-7.3.5


Please let me know what the issue/errors are here, and what needs to be done.

Update records based on QueryBuilder result set for ClusterJ (no replies)

I am trying to implement a batch update feature by using QueryBuilder to get a result set from the database, then change the values in the result set and write them back to the database. However, I get a null pointer exception when trying to set a value on the result data. Is it not possible to set values on the result set of a QueryBuilder query?
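
Here is a simplified sketch of what I am attempting (the domain interface and the 'status' field are placeholders; 'data' is the field named in the exception below):

import java.util.List;

import com.mysql.clusterj.Query;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;
import com.mysql.clusterj.query.QueryBuilder;
import com.mysql.clusterj.query.QueryDomainType;

public class BatchUpdateSketch {

    // Placeholder domain interface; the real table has more columns.
    @PersistenceCapable(table = "my_table")
    public interface MyRow {
        @PrimaryKey
        long getId();
        void setId(long id);

        int getStatus();
        void setStatus(int status);

        String getData();
        void setData(String data);
    }

    static void batchUpdate(Session session, String newData) {
        QueryBuilder builder = session.getQueryBuilder();
        QueryDomainType<MyRow> domain = builder.createQueryDefinition(MyRow.class);
        domain.where(domain.get("status").equal(domain.param("status")));

        Query<MyRow> query = session.createQuery(domain);
        query.setParameter("status", 1);
        List<MyRow> rows = query.getResultList();

        for (MyRow row : rows) {
            row.setData(newData);            // <-- this is where the NullPointerException is thrown
            session.updatePersistent(row);   // write the modified row back
        }
    }
}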

Exception in thread "main" com.mysql.clusterj.ClusterJDatastoreException: For field data column data valueDelegate object String, error executing objectSetValue. Caused by java.lang.NullPointerException:null
at com.mysql.clusterj.core.metadata.AbstractDomainFieldHandlerImpl.objectSetValue(AbstractDomainFieldHandlerImpl.java:289)
at com.mysql.clusterj.tie.NdbRecordSmartValueHandlerImpl.set(NdbRecordSmartValueHandlerImpl.java:696)
at com.mysql.clusterj.tie.NdbRecordSmartValueHandlerImpl.invoke(NdbRecordSmartValueHandlerImpl.java:722)
at com.sun.proxy.$Proxy0.setData(Unknown Source)
at com.syniverse.mysql.interfaceApp.MysqlInterface.query(MysqlInterface.java:296)
at com.syniverse.mysql.interfaceApp.MysqlInterface.main(MysqlInterface.java:547)

ClusterJ session creation slows down a lot with a large number of sessions (no replies)

We are developing a back-end application which is supposed to handle a large number of RPC calls, each of them issuing a transaction to MySQL Cluster through the ClusterJ API. We would like to be able to handle around 2000 RPC calls per server in parallel, and we want to run a few of these servers. We run one thread per RPC call and currently create one session for each of these calls. After handling the RPC call, the session is closed and thrown away.

The issue we are facing is that session creation becomes really slow with such a large number of threads. With 100 threads, session creation takes only a few milliseconds, but with 1000 threads it takes 500 milliseconds or longer, which is much too slow for our purpose.

The question now is whether we have misunderstood the use of sessions or whether we can increase throughput some other way. Is our assumption correct that we should use one session per thread and close it after the transaction has executed, or should we reuse sessions?

Our assumption so far was that we need to close sessions after each RPC as the internal data structures do not seem to be cleared otherwise.

We tried to increase the ClusterJ connection pool size but haven't seen a huge effect so far.
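
For reference, the per-call pattern we use is essentially the following (simplified; the pool-size property name is from the ClusterJ documentation, and the value here is only an example):

import java.util.Properties;

import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;

public class RpcBackend {
    private final SessionFactory factory;

    RpcBackend(String connectString, String database) {
        Properties props = new Properties();
        props.setProperty("com.mysql.clusterj.connectstring", connectString);
        props.setProperty("com.mysql.clusterj.database", database);
        // Each connection in the pool uses its own API node id in the cluster.
        props.setProperty("com.mysql.clusterj.connection.pool.size", "4");
        factory = ClusterJHelper.getSessionFactory(props);
    }

    // Called once per RPC, from one of roughly 2000 worker threads.
    void handleRpc() {
        Session session = factory.getSession();   // this call is what slows down under load
        try {
            // ... run the transaction for this RPC ...
        } finally {
            session.close();                       // session is thrown away after every call
        }
    }
}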

Thank you very much in advance for your advice.

Data node stuck in phase 4 with error code (no replies)

Hi, I think I need some help restarting the data nodes in my cluster. I have a cluster with 4 data nodes and hundreds of millions of records inserted. But somehow I cannot successfully start the cluster now due to a failure in the data nodes, as shown below. Can anyone provide any clue as to why this happened?

2014-10-06 19:10:33 [ndbd] INFO -- Start phase 0 completed
2014-10-06 19:10:36 [ndbd] INFO -- findNeighbours from: 2202 old (left: 65535 right: 65535) new (12 11)
2014-10-06 19:10:36 [ndbd] INFO -- Start phase 1 completed
2014-10-06 19:10:36 [ndbd] INFO -- findNeighbours from: 2114 old (left: 12 right: 11) new (12 14)
2014-10-06 19:10:36 [ndbd] INFO -- Start phase 2 completed
2014-10-06 19:10:36 [ndbd] INFO -- Start phase 3 completed
restartCreateObj(1) file: 1
restartCreateObj(2) file: 1
restartCreateObj(3) file: 1
restartCreateObj(4) file: 1
restartCreateObj(5) file: 1
restartCreateObj(6) file: 1
restartCreateObj(7) file: 1
restartCreateObj(9) file: 1
restartCreateObj(11) file: 1
restartCreateObj(12) file: 1
restartCreateObj(13) file: 1
restartCreateObj(15) file: 1
restartCreateObj(17) file: 1
restartCreateObj(19) file: 1
restartCreateObj(21) file: 1
restartCreateObj(23) file: 1
restartCreateObj(25) file: 1
restartCreateObj(27) file: 1
restartCreateObj(29) file: 1
restartCreateObj(31) file: 1
restartCreateObj(33) file: 1
restartCreateObj(39) file: 1
restartCreateObj(41) file: 1
restartCreateObj(43) file: 1
restartCreateObj(50) file: 1
restartCreateObj(51) file: 1
restartCreateObj(8) file: 1
restartCreateObj(10) file: 1
restartCreateObj(14) file: 1
restartCreateObj(16) file: 1
restartCreateObj(18) file: 1
restartCreateObj(20) file: 1
restartCreateObj(22) file: 1
restartCreateObj(24) file: 1
restartCreateObj(26) file: 1
restartCreateObj(28) file: 1
restartCreateObj(30) file: 1
restartCreateObj(32) file: 1
restartCreateObj(34) file: 1
restartCreateObj(35) file: 1
restartCreateObj(36) file: 1
restartCreateObj(37) file: 1
restartCreateObj(38) file: 1
restartCreateObj(40) file: 1
restartCreateObj(42) file: 1
restartCreateObj(44) file: 1
restartCreateObj(45) file: 1
restartCreateObj(46) file: 1
restartCreateObj(47) file: 1
restartCreateObj(48) file: 1
restartCreateObj(49) file: 1
restartCreateObj(52) file: 1
Using 1 fragments per node
execSTART_RECREQ chaning srnodes from 0000000000007800 to 0000000000002800
RESTORE table: 2 540 rows applied
RESTORE table: 2 481 rows applied
RESTORE table: 3 6 rows applied
RESTORE table: 3 4 rows applied
RESTORE table: 4 8 rows applied
RESTORE table: 4 4 rows applied
RESTORE table: 5 8 rows applied
RESTORE table: 5 4 rows applied
RESTORE table: 6 0 rows applied
RESTORE table: 6 0 rows applied
RESTORE table: 7 70465 rows applied
RESTORE table: 7 69722 rows applied
RESTORE table: 9 62948 rows applied
RESTORE table: 9 62300 rows applied
RESTORE table: 11 0 rows applied
RESTORE table: 11 0 rows applied
RESTORE table: 12 0 rows applied
RESTORE table: 12 0 rows applied
RESTORE table: 13 71542 rows applied
RESTORE table: 13 70846 rows applied
RESTORE table: 15 0 rows applied
RESTORE table: 15 0 rows applied
RESTORE table: 17 71624 rows applied
RESTORE table: 17 70907 rows applied
RESTORE table: 19 71624 rows applied
RESTORE table: 19 70907 rows applied
RESTORE table: 21 71542 rows applied
RESTORE table: 21 70846 rows applied
RESTORE table: 23 67152 rows applied
RESTORE table: 23 66504 rows applied
RESTORE table: 25 55916 rows applied
RESTORE table: 25 55465 rows applied
RESTORE table: 27 65815 rows applied
RESTORE table: 27 65136 rows applied
RESTORE table: 29 863 rows applied
RESTORE table: 29 832 rows applied
RESTORE table: 31 0 rows applied
RESTORE table: 31 0 rows applied
2014-10-06 19:11:35 [ndbd] INFO -- RESTORE: File system read failed. OS errno: 87
2014-10-06 19:11:35 [ndbd] INFO -- RESTORE (Line: 3656) 0x00000002
2014-10-06 19:11:35 [ndbd] INFO -- Error handler shutting down system
2014-10-06 19:11:35 [ndbd] INFO -- Error handler shutdown completed - exiting
2014-10-06 19:11:41 [ndbd] ALERT -- Node 13: Forced node shutdown completed. Occured during startphase 4. Caused by error 2813: 'Unknown file system error(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.

Can a 1TB database use MySQL Cluster in production? (no replies)

I have a production database with 1TB of data using MySQL 5.5 now. I'm planning to deploy MySQL Cluster to replace the current master-slave MySQL servers.
The database has lots of reads/writes, with about 100GB of data updated per day. The database is on Dell FC storage, and each server has 64GB of memory.

Can anybody please provide some comments on this plan?
1. Is MySQL Cluster suitable for this case?
2. Will the cluster's performance be good with this database volume? What kind of server hardware configuration would be needed?
3. Are there examples of production MySQL Cluster deployments with large data volumes?


Thanks
Justin

Data node sync problem (no replies)

Hello,

We have 6 individual CentOS 64-bit virtual machines on Xen Server.

We are able to create the table. But when we try to insert data into this table, we only see the process running for a long time, and even after a long time we get no response from the server.

We have also tested the same scenario with a single data node, and it works fine.

Data node RAM: 19GB each.

CREATE TABLE `TB_Name` (
`T1` BIGINT(20) NOT NULL,
`T2` BIGINT(20) NOT NULL,
`T3` INT(11) NOT NULL,
`T4` SMALLINT(6) DEFAULT NULL,
`T5` SMALLINT(6) DEFAULT NULL,
PRIMARY KEY (`T1`),
UNIQUE KEY `UK_CoEE` (`T2`,`T3`)
) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1;

INSERT INTO `TB_Name`(`T1`,`T2`,`T3`,`T4`,`T5`) VALUES (325302462008513,2460,244,1,7) ,(88727112009284,1,245,1,25);

+----+-------------+--------------------+---------+---------+------+-----------------------------------+-----------------------------------------------------------------------------------------------
| Id | User | Host | db | Command | Time | State | Info
+----+-------------+--------------------+---------+---------+------+-----------------------------------+-----------------------------------------------------------------------------------------------
| 4 | root | localhost | DB_Name | Query | 1519 | query end | INSERT INTO `CoExtElement` VALUES (325302462008513,2460,244,1,7),(88727112009284,1,245,1,25)
+----+-------------+--------------------+---------+---------+------+-----------------------------------+-----------------------------------------------------------------------------------------------


Configuration Part :

------------------------------------------------------------------------
[NDBD DEFAULT]
MaxBufferedEpochs=5000
BackupMaxWriteSize=54M
BackupDataBufferSize=100M
BackupLogBufferSize=50M
BackupMemory=55M
BackupReportFrequency=10
MemReportFrequency=30
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15
DataMemory=14032M
IndexMemory=1024M
MaxNoOfTables=4596
MaxNoOfSubscribers=10000
MaxNoOfTriggers=3500
NoOfReplicas=2
StringMemory=25
DiskPageBufferMemory=1048M
SharedGlobalMemory=512M
LongMessageBuffer=32M
MaxNoOfConcurrentTransactions=195000
BatchSizePerLocalScan=512
FragmentLogFileSize=256M
NoOfFragmentLogFiles=16
RedoBuffer=128M
MaxNoOfExecutionThreads=2
StopOnError=false
LockPagesInMainMemory=1
TimeBetweenEpochsTimeout=32000
TimeBetweenWatchdogCheckInitial=60000
TransactionDeadlockDetectionTimeout=8640000
TransactionInactiveTimeout=60000
HeartbeatIntervalDbDb=15000
HeartbeatIntervalDbApi=15000
MaxNoOfConcurrentOperations=510000
MaxNoOfLocalOperations=561000
MaxNoOfAttributes=314572
MaxNoOfOrderedIndexes=8000
RealtimeScheduler=1
Datadir=/home/mysql/data/

[MYSQLD DEFAULT]

[TCP DEFAULT]
SendBufferMemory=64M
ReceiveBufferMemory=24M

[NDB_MGMD DEFAULT]
Portnumber=1186

[NDB_MGMD]
NodeId=1
Hostname=192.168.0.53
LogDestination=FILE:filename=ndb_1_cluster.log,maxsize=10000000,maxfiles=6
DataDir=/var/lib/mysql-cluster/


[NDB_MGMD]
NodeId=2
Hostname=192.168.0.55
LogDestination=FILE:filename=ndb_2_cluster.log,maxsize=10000000,maxfiles=6
DataDir=/var/lib/mysql-cluster/

[NDBD]
NodeId=3
Hostname=192.168.0.57
datadir=/home/mysql/data/

[NDBD]
NodeId=4
Hostname=192.168.0.65
datadir=/home/mysql/data/

[MYSQLD]
NodeId=5
hostname=192.168.0.51

[MYSQLD]
NodeId=6
hostname=192.168.0.56

------------------------------------------------------------------------
[root@mgm ~]# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.0.57 (mysql-5.6.19 ndb-7.3.6, Nodegroup: 0, *)
id=4 @192.168.0.65 (mysql-5.6.19 ndb-7.3.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.53 (mysql-5.6.19 ndb-7.3.6)
id=2 @192.168.0.55 (mysql-5.6.19 ndb-7.3.6)

[mysqld(API)] 2 node(s)
id=5 @192.168.0.51 (mysql-5.6.19 ndb-7.3.6)
id=6 @192.168.0.56 (mysql-5.6.19 ndb-7.3.6)
------------------------------------------------------------------------

[root@mysqld ~]# cat /etc/my.cnf
[client]
socket=/home/mysql-Cluster/mysql.sock
[mysqld]
max_connections=100
datadir=/home/mysql-Cluster
socket=/home/mysql-Cluster/mysql.sock
ndbcluster
ndb-connectstring=192.168.0.53, 192.168.0.55
ndb-force-send=1
ndb-use-exact-count=0
ndb-extra-logging=1
ndb-batch-size=24M
ndb-autoincrement-prefetch-sz=1024
default-storage-engine=NDB
max_allowed_packet = 5G
skip-name-resolve

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[mysql_cluster]
ndb-connectstring=192.168.0.53, 192.168.0.55
[root@mysqld ~]#

Thanks,
Bhairav

Creating MySQL Cluster - DLL load failed (1 reply)

I am attempting to create my first MySQL cluster.


When I run setup.bat, define the other host in the cluster, and click Next,

I get the error:

The page at localhost:8081 says:
host `208.xx.xx.152`: DLL load failed: The specified module could not be found.
host `208.xx.xx.153`: DLL load failed: The specified module could not be found.
host `208.xx.xx.154`: DLL load failed: The specified module could not be found.

I installed mysql-cluster-gpl-7.3.6-winx64 on all of our servers.

If I ignore the error, continue, and try to start the cluster, I get the same errors as above.

Since I have never installed MySQL Cluster before, I assume I am missing something fairly obvious?

Thanks,

MySQL Cluster, Partitioning, Shards, Replication, Fabric (no replies)

Hi,

Is MySQL Cluster partitioning or replication, or both?
When we say partitioning, do we mean manual partitioning, and does Cluster do this job automatically?

Does Cluster also use replication (master-master)?

What does Fabric do, then?

Are shards and partitioning the same thing?

Thank you

MySQL Cluster - How many data files are required, and what size should each data file be, for 1 TB of data? (no replies)

What is the ideal configuration of data files for 1 TB of data in a disk-based MySQL Cluster setup?

1. How many data files are needed?
2. What should the size of each data file be? (See the example below for the kind of data files I mean.)
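
By data files I mean the NDB disk-data files created roughly like this (the file names and sizes are only placeholders):

-- undo log file group for disk data
CREATE LOGFILE GROUP lg_1
  ADD UNDOFILE 'undo_1.log'
  INITIAL_SIZE 4G
  ENGINE NDBCLUSTER;

-- tablespace holding the first data file
CREATE TABLESPACE ts_1
  ADD DATAFILE 'data_1.dat'
  USE LOGFILE GROUP lg_1
  INITIAL_SIZE 32G
  ENGINE NDBCLUSTER;

-- further data files are added one at a time
ALTER TABLESPACE ts_1
  ADD DATAFILE 'data_2.dat'
  INITIAL_SIZE 32G
  ENGINE NDBCLUSTER;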

Thanks,
Noghan

MySQL Cluster on Server 2012 - deploying configuration (no replies)

Trying to deploy a configuration on a MySQL cluster (Server 2012, mysql-cluster-gpl-7.3.6-winx64).

When I click Deploy, it gets to about 17% and errors with:

Unable to append file C:/Program Files/MySQL/MySQL Cluster 7.3/share/mysql_system_tables.sql to C:/MySQL_Cluster/55/tmp/install.sql on host "my ip address": [Errno 13] Permission Denied.

It does create the folders on the remote server via freeSSH.

ORDER BY on a timestamp column with fractional seconds giving incorrect results? (no replies)

Hi all,

Is there any known occurrence of ORDER BY giving an incorrect order when ordering by a timestamp column with precision (fractional seconds)?
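
The kind of query involved is simply something like this (table and column names are placeholders; the real column is a TIMESTAMP with fractional seconds):

CREATE TABLE events (
  id BIGINT NOT NULL PRIMARY KEY,
  created_at TIMESTAMP(6) NOT NULL
) ENGINE=NDBCLUSTER;

SELECT id, created_at
FROM events
ORDER BY created_at;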

It's still a mystery for us; we have added application logging to figure this out further.
I went through the bug database but could not find any related bugs.


We are using MySQL Cluster 7.3.4.

Thank you.

With regards,
ch vishnu

[ERROR] Transaction not registered for MySQL 2PC, but transaction is active (no replies)

Does anyone know what the following error message means?
- "[ERROR] Transaction not registered for MySQL 2PC, but transaction is active"

I'm actually using Percona XtraDB Cluster, but I haven't gotten any answer on their forums.

Environment:
* OS: Debian Wheezy, up to date patched.
* Percona version: percona-xtradb-cluster-56 5.6.20-25.7-886.wheezy amd64
* XTRABackup: percona-xtrabackup 2.2.5-5027-1.wheezy amd64
* Galera percona-xtradb-cluster-galera-3.x 3.7.3256.wheezy amd64
* Database size : ~40GB

Detailed description:
I have the following problem that I've not been able to solve. When our application starts and modifies the database schema (using Liquibase), the following error message appears in the error logs, followed by a crash dump:
- "Transaction not registered for MySQL 2PC, but transaction is active"
The error occurs when Liquibase tries to add an index to a table or alter the table structure, e.g. adding a new column to a table. The database is a backup from Percona 5.5, and I ran mysql_upgrade before synchronizing the cluster with the new database.

To make things weirder, here are a few of my notes:
- Everything works well with Percona XtraDB Cluster 5.6 if I use a test database that is about ~400MB (also from Percona 5.5).
- Everything works well if I use Percona XtraDB Cluster 5.5. No errors or warnings.
- I can run single 'CREATE INDEX' or 'ALTER TABLE' statements from the mysql command line without problems when using Percona 5.6, but the same statements fail when run through Liquibase. (This is why I'm asking what the error means, as I'm wondering if it could be caused by Liquibase. I still don't understand why it works with a small database.)



STACK TRACE:
2014-10-10 14:20:29 4876 [ERROR] Transaction not registered for MySQL 2PC, but transaction is active
2014-10-10 14:20:38 7f95fd985700 InnoDB: Assertion failure in thread 140282181474048 in file row0mysql.cc line 3990
InnoDB: Failing assertion: table->n_rec_locks == 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
11:20:38 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=402653184
read_buffer_size=2097152
max_used_connections=60
max_threads=502
thread_count=62
connection_count=60
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2457013 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x3045010
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f95fd984e50 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x8c8e1e]
/usr/sbin/mysqld(handle_fatal_signal+0x36c)[0x680ffc]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf030)[0x7f962ef06030]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f962d1401a5]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x180)[0x7f962d143420]
/usr/sbin/mysqld[0x98e654]
/usr/sbin/mysqld[0x8edf77]
/usr/sbin/mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0xb5)[0x5e1e65]
/usr/sbin/mysqld(_Z14quick_rm_tableP3THDP10handlertonPKcS4_j+0x12a)[0x73ff1a]
/usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x2681)[0x748fc1]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x37f5)[0x6f9695]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x588)[0x6fbbb8]
/usr/sbin/mysqld[0x6fbcbd]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0xc4e)[0x6fcdbe]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x1f1)[0x6fde21]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x27d)[0x6cf8ed]
/usr/sbin/mysqld(handle_one_connection+0x42)[0x6cf972]
/usr/sbin/mysqld(pfs_spawn_thread+0x140)[0xb13270]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50)[0x7f962eefdb50]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f962d1e9e6d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (3051600): CREATE INDEX `xxxIndex` ON `xxxTable`(`xxxColumn`)
Connection ID (thread ID): 59
Status: NOT_KILLED

Common Table (no replies)

Is there a way to mark tables as common so that they exist on all data nodes?

I have small tables that will commonly be joined on, and it would be helpful to have those tables on every data node. I imagine there is some way to do this by making those tables use InnoDB and setting up a bunch of replicas (assuming I can join between NDB and InnoDB), but I was wondering if there is something more native to MySQL Cluster.

Thanks!

Query 10 times faster when one data node is stopped (1 reply)

When I stop a data node, keeping only one running, my query is 10X faster.
When two data nodes are running, my query takes 10 seconds;
when I stop one data node, the same query takes only 0.3 seconds.

Can anyone explain this?

-------------------------------------------------------------
My config.ini

[ndbd default]
LockPagesInMainMemory=1
ODirect=1
NoOfReplicas=2
DataMemory=2000M
IndexMemory=200M
MaxNoOfTables=1800
MaxNoOfUniqueHashIndexes=3000
MaxNoOfAttributes=50000
MaxNoOfOrderedIndexes=16000
[tcp default]
#portnumber=2202
SendBufferMemory=2M
ReceiveBufferMemory=2M
[ndb_mgmd]
hostname=192.168.1.10
datadir=/var/lib/mysql-cluster
#[ndb_mgmd]
#hostname=192.168.1.11
#datadir=/var/lib/mysql-cluster
[ndbd]
hostname=192.168.1.10
datadir=/var/lib/mysql-cluster
[ndbd]
hostname=192.168.1.11
datadir=/var/lib/mysql-cluster
[mysqld]
hostname=192.168.1.10
[mysqld]
hostname=192.168.1.11
[mysqld]
[mysqld]
[mysqld]

My Query SQL:

SELECT t1.StoreId,
t1.Name AS StoreName,
t1.Latitude,
t1.Longitude,
t1.MemberLeverCode,
t1.Score,
t1.Address,
t1.MainFeatures,
1 AS LocationStatus,
GetDistance(t1.Longitude,
t1.Latitude,
113.69032,
34.81928)
AS DistanceSql
FROM store AS t1 , store_category_rel AS t2
WHERE ((t1.ApplyStatus = 1 && t1.IsRegComplete) OR t1.Source = 0)
AND t1.StoreId = t2.StoreId AND t2.StoreCategoryId = 2
AND t1.AreaCode LIKE '4101%'
ORDER BY LocationStatus DESC, DistanceSql ASC
LIMIT 0, 10

MySQL cluster with load balancing (1 reply)

Dear community,

I have the following problem:
I have a high-traffic website. It currently runs on the newest Debian version, with mysql-server 5.5.

The first server is a web server (load: 40%).
The second server is a MySQL server (load: 95%).

So now I have ordered two new servers and want to set up a MySQL cluster with load balancing.

Now the question: how can I do that? Are there any tutorials or information on how to do this?

Thank you!

Many greetings,

m. king