Channel: MySQL Forums - NDB clusters

Clustering (1 reply)

Please tell me the names of the files required for clustering,
and the exact location where I can download them from.

Download, install, configure, run and test MySQL Cluster in under 15 minutes (no replies)

MySQL Cluster: Download, install, configure, run in under 15 minutes (no replies)

Table creation from openLDAP generates an error message (1 reply)

Hi All,

I am having a strange problem. When I load data with slapadd into the openLDAP-NDB server, all tables are created fine except for those whose "CREATE TABLE" statements are longer than 4096 characters.

I found this in the MySQL query logs. My guess is that when a CREATE TABLE call is made to the NDB cluster through the NDB API, some garbage characters get appended to the query after the 4096th character, which makes the query execution fail.

Is there any workaround for this problem? I am not sure whether it is an openLDAP problem or a problem with the NDB API. Any help is appreciated.
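
One way to narrow this down is to measure the logged CREATE TABLE statements directly and see whether the long ones really stop near 4096 bytes. A rough sketch, assuming the general query log is written to a file (adjust the path to your setup):

awk '/CREATE TABLE/ { print FNR ": " length($0) " bytes" }' /var/log/mysql/general.log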

MySQL Cluster error code 2310 (1 reply)

I started the MySQL cluster and then imported data, about 14 GB. The import completed successfully. I then restarted the cluster, using ndb_mgmd -f /mysql/mysql-cluster/config.ini and ndbd to start the management and data nodes. When memory usage reached about 11%, the errors below were reported. I have DataMemory set to 24 GB.
With less data (under about 3 GB) the restart works normally. I'm quite stuck.
2010-06-24 09:31:11 [MgmSrvr] ALERT -- Node 6: Forced node shutdown completed. Occured during startphase 4. Caused by error 2310: 'Error while reading the REDO log(Ndbd file system inconsistency error, please report a bug). Ndbd file system error, restart node initial'.
2010-06-24 09:31:11 [MgmSrvr] ALERT -- Node 4: Forced node shutdown completed. Occured during startphase 4. Caused by error 2310: 'Error while reading the REDO log(Ndbd file system inconsistency error, please report a bug). Ndbd file system error, restart node initial'.
2010-06-24 09:31:11 [MgmSrvr] ALERT -- Node 1: Node 6 Disconnected
2010-06-24 09:31:11 [MgmSrvr] ALERT -- Node 5: Forced node shutdown completed. Occured during startphase 4. Caused by error 2310: 'Error while reading the REDO log(Ndbd file system inconsistency error, please report a bug). Ndbd file system error, restart node initial'.
2010-06-24 09:31:11 [MgmSrvr] ALERT -- Node 1: Node 4 Disconnected
2010-06-24 09:31:11 [MgmSrvr] ALERT -- Node 1: Node 5 Disconnected
2010-06-24 09:31:11 [MgmSrvr] ALERT -- Node 3: Forced node shutdown completed. Occured during startphase 4. Caused by error 2310: 'Error while reading the REDO log(Ndbd file system inconsistency error, please report a bug). Ndbd file system error, restart node initial'.
2010-06-24 09:31:11 [MgmSrvr] ALERT -- Node 1: Node 3 Disconnected
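
The error text itself points at an initial node restart ('restart node initial'). A minimal sketch of what that looks like for a single data node; note that --initial wipes that node's local NDB file system, so with a whole node group down it effectively means rebuilding and reloading the data (the connect string below is only a placeholder):

ndbd --initial --ndb-connectstring=mgm_host:1186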

Node declared dead. See error log for details (Arbitration error) (1 reply)

Hello, I often encounter this error:

Time: Wednesday 23 June 2010 - 19:14:47
Status: Temporary error, restart node
Message: Node declared dead. See error log for details (Arbitration error)
Error: 2315
Error data: We(3) have been declared dead by 4 reason: Hearbeat failure(4)
Error object: QMGR (Line: 3555) 0x00000008
Program: ndbmtd
Pid: 6037 thr: 0
Version: mysql-5.1.44 ndb-7.1.3
Trace: /usr/local/mysql/data/ndb_3_trace.log.22 /usr/local/mysql/data/ndb_3_trace.log.22_t1 /usr/local/mysql/data/ndb_3_trace.log.22_t2 /usr/local/mys

please help me. thank you.

My environment:
1 management node
28 SQL nodes
4 data nodes, with 32 GB of memory on each data node

My config.ini:
# Options affecting ndbd processes on all data nodes:
[ndbd default]
NoOfReplicas=2 # Number of replicas
#DataMemory=3072M # How much memory to allocate for data storage
DataMemory=20480M
IndexMemory=3413M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.
StringMemory=25

ODirect=1
MaxNoOfLocalScans=64

MaxNoOfTables=4096
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512
MaxNoOfAttributes=2000
#MaxNoOfAttributes=24576
MaxNoOfTriggers=14336
MaxNoOfConcurrentOperations=5000000

#MaxAllocate=50M

LockPagesInMainMemory=1

MaxNoOfConcurrentTransactions=16384

NoOfFragmentLogFiles=48

#### New Add ##############
DiskCheckpointSpeedInRestart=100M
FragmentLogFileSize=256M
#TimeBetweenLocalCheckpoints=20
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=100
InitFragmentLogFiles=SPARSE
MemReportFrequency=30
BackupReportFrequency=10
### Watchdog
TimeBetweenWatchdogCheckInitial=60000
### TransactionInactiveTimeout - should be enabled in Production
TransactionInactiveTimeout=60000
SharedGlobalMemory=384M # determines the total buffer amount for logs, disk operations, tablespace metadata and log file groups, UNDO files, and data files
LongMessageBuffer=1024M # internal buffer used for passing messages between nodes
BatchSizePerLocalScan=512
#############################

#InitFragmentLogFiles=FULL
RedoBuffer=32M
#

TransactionBufferMemory=10M
TimeBetweenLocalCheckpoints=4
TransactionDeadlockDetectionTimeout=10000


DiskPageBufferMemory=256M
DiskCheckpointSpeed=100M

LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15
LogLevelError=15

BackupWriteSize=1M
BackupDataBufferSize=16M
BackupLogBufferSize=4M
BackupMemory=20M

UndoIndexBuffer=64M
UndoDataBuffer=256M

StopOnError=0

#NoOfDiskPagesToDiskAfterRestartTUP=40
#NoOfDiskPagesToDiskAfterRestartACC=20
#NoOfDiskPagesToDiskDuringRestartTUP=40
#NoOfDiskPagesToDiskDuringRestartACC=20

MaxNoOfExecutionThreads=8

TotalSendBufferMemory=20M

#HeartbeatIntervalDbDb=15000
#HeartbeatIntervalDbApi=15000

# TCP/IP options:
[tcp default]
#portnumber=1186 # This the default; however, you can use any port that is free
# for all the hosts in the cluster
# Note: It is recommended that you do not specify the port
# number at all and allow the default value to be used instead
SendBufferMemory=20480K
ReceiveBufferMemory=20480K

# Management process options:
[ndb_mgmd]
id=1
hostname=10.192.83.7 # Hostname or IP address of management node
datadir=/var/lib/mysql-cluster # Directory for management node log files

ArbitrationRank=1
ArbitrationDelay=0

# Options for data node "A":
[ndbd]
id=2 # (one [ndbd] section per data node)
hostname=10.192.7.15 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
#TotalSendBufferMemory=200M

# Options for data node "B":
[ndbd]
id=3
hostname=10.192.7.16 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
#TotalSendBufferMemory=200M

# Options for data node "C":
[ndbd]
id=4
hostname=10.192.136.21 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
#TotalSendBufferMemory=200M

# Options for data node "D":
[ndbd]
id=5
hostname=10.192.136.28 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
#TotalSendBufferMemory=200M



# SQL node options:

[mysqld]
hostname=10.192.89.6 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
id=18
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
hostname=10.192.89.60
id=19
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

[mysqld]
MaxScanBatchSize=16M
ArbitrationDelay=0

MySQL Cluster in WAN (1 reply)

I am setting up a simple MySQL cluster:
1 management node
2 data nodes
2 SQL nodes

The above cluster runs successfully within one network. Now I would like to add an SQL node in another network, across a WAN. However, I found that the SQL node connects to the management node successfully but fails to connect to the data nodes. I have assigned a public address to the management node only. How can I solve this?
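
For what it's worth, an SQL node only learns the data nodes' addresses from the management node; it then has to open direct TCP connections to every data node, so the hostname values in the [ndbd] sections must be routable from the WAN-side SQL node. A minimal, hypothetical sketch (203.0.113.21 is just a placeholder for an address the remote SQL node can actually reach):

[ndbd]
id=3
hostname=203.0.113.21 # must be reachable from the remote SQL node, not only on the LAN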

Using Syslog with MySQL Cluster (no replies)


Data loss (1 reply)

Hello,

I have encountered data loss on MySQL Cluster. The cluster worked properly at the beginning, but after it had been running for some time, roughly the last two days of data was suddenly lost.

I checked the MySQL Cluster log files and cannot find any error messages.


Any thoughts? thanks.


---------------------
2 physical servers. The server OS is SUSE 10 Enterprise with Xen.
VMs (OS: SUSE 10 Enterprise) are created on them to form the MySQL cluster.


Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.4.1 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
id=4 @192.168.4.11 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.4.101 (mysql-5.1.39 ndb-7.0.9)
id=2 @192.168.4.111 (mysql-5.1.39 ndb-7.0.9)

[mysqld(API)] 4 node(s)
id=5 @192.168.4.1 (mysql-5.1.39 ndb-7.0.9)
id=6 @192.168.4.1 (mysql-5.1.39 ndb-7.0.9)
id=7 @192.168.4.11 (mysql-5.1.39 ndb-7.0.9)
id=8 @192.168.4.11 (mysql-5.1.39 ndb-7.0.9)

MySQL Sandbox embraces Python and meets Cluster (no replies)

cluster with big partitioning (2 replies)

Hi,

I am investigating the possibilities of scaling up with MySQL. We have looked into Cassandra and MongoDB, but their read performance is too low compared to MySQL, so we are now looking to do it all in MySQL.

We have a MyISAM table in a database with 500 partitions by key (max rows: 50M, so 100,000 rows per partition). 100,000 rows is about 1 GB of data including indexes.

On a single node this is no problem. Now for our challenge:

The reads on this table are about 1,000 per second... but the writes are about the same!
We want to replicate the database using a master-master setup (i.e. NDB Cluster) with up to 10+ nodes. A master with 10 slaves does not cope well with this many writes, so we hope master-master replication does.

My questions:
- Is this setup even possible?
- Is there a better solution? (We need to keep read performance very high and still be able to update/write the same data without disturbing the reads.)

Thx,
Maarten

Failed to create LOGFILE GROUP (7 replies)

I have created a 2-node MySQL cluster which seems to be working fine. I have tried to import a mysqldump backup file into it from another cluster; both clusters are the same version. The command I'm using is:
mysql -uroot -p ticket_login < /usr/tmp/patron_recognition_database_2010-06-16_Hour02.sql
After entering the password I get this error:
ERROR 1528 (HY000) at line 22: Failed to create LOGFILE GROUP
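
For context, the statement at that line of the dump is a CREATE LOGFILE GROUP of roughly this shape (the names and sizes below are made-up examples); it can fail on the target cluster for reasons such as a log file group already existing there or the data nodes being unable to allocate the requested undo buffer:

CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 128M
    UNDO_BUFFER_SIZE 8M
    ENGINE NDBCLUSTER;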

I am kind of new to MySQL and clustering, so any advice would be most helpful.

Thanks

Nagios reporting /dev/shm full on data nodes (2 replies)

Hi,

I've got a cluster set up with a management node and two data nodes. Recently the Nagios server, which is set up to monitor disk space (amongst other things), started warning about the /dev/shm filesystem being 100% full. This warning would "flap", i.e. go from a warning state to a non-warning state and back.

After logging into this machine and running df, it showed that this filesystem is completely empty:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       26G   20G  4.7G  81% /
/dev/xvda1             99M   43M   52M  46% /boot
tmpfs                  10G     0   10G   0% /dev/shm

The only software running on this machine is ndbd. Also, I should mention that Nagios has never reported this error before, although recently I have been migrating a lot of data into the cluster tables. I have stopped inserting any more data into the cluster tables, but the warning remains.

My thought is that the data nodes may be running out of memory, but after checking the memory usage, it doesn't seem like it:
ndb_mgm> 3 REPORT MemoryUsage
Node 3: Index usage is 13%(24473 8K pages of total 180384)
Node 3: Data usage is 43%(155534 32K pages of total 360512)

ndb_mgm> 2 REPORT MemoryUsage
Node 2: Index usage is 13%(24473 8K pages of total 180384)
Node 2: Data usage is 43%(155536 32K pages of total 360512)

By the way, how do I read the above numbers? I.e. are they both using 1.8 GB for indexes and 3.6 GB of memory for data? The data nodes each have 20 GB of memory in total, with DataMemory = 11266M and IndexMemory = 1409M in config.ini.
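
In case it helps to decode the report (a rough reading, assuming the 8K index pages and 32K data pages shown in the output):

Index: 24473 x 8 KB ≈ 191 MB used, out of 180384 x 8 KB ≈ 1409 MB (your IndexMemory)
Data:  155534 x 32 KB ≈ 4.7 GB used, out of 360512 x 32 KB ≈ 11266 MB (your DataMemory)

So neither node is close to its configured limits.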

The version of Cluster that I'm using is:
MySQL distrib mysql-5.1.39 ndb-7.0.9b, for unknown-linux-gnu (x86_64)

and the OS it's running on is CentOS release 5.4 (Final) 64-bit version

2 API nodes not in sync (5 replies)

Hi! I have successfully set up a MySQL cluster; in fact it had been running for 75 days with no issues until now, when I set up load balancing via HAProxy.

I found out that API Node 1 is fully up to date, while API Node 2 is missing some tables and also some data.

Previously, when I inserted a record on API Node 1 I could see it on API Node 2, and likewise a table or data created on API Node 2 was visible on API Node 1.

Now changes made on API Node 1 no longer replicate to API Node 2, and vice versa.

I need help badly... thanks.

NDB node stays in starting state for a long time (2 replies)

We are using MySQL Cluster 7.1.4b with the following configuration, under Red Hat Enterprise Linux Server release 5.4:

Management Node:

[ndbd default]
NoOfReplicas=2 # Number of replicas
DataMemory=1536M
IndexMemory=100M
RedoBuffer=256M
[tcp default]

[ndb_mgmd]
hostname=10.0.104.33
datadir=/var/lib/mysql-cluster

[ndbd]
hostname=10.0.104.22
datadir=/usr/local/mysql/data
[ndbd]
hostname=10.0.104.23
datadir=/usr/local/mysql/data
[mysqld]
[mysqld]


My.cnf:
[mysqld]
ndbcluster
ndb-connectstring=10.0.104.33
[mysql_cluster]
ndb-connectstring=10.0.104.33


------------------------------------------------------
When we start ndbd, it stays in the starting state:

[ndbd(NDB)] 2 node(s)
id=2 @10.0.104.22 (mysql-5.1.44 ndb-7.1.4, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from 10.0.104.23)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.0.104.33 (mysql-5.1.44 ndb-7.1.4)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)


Checking the cluster log, found the following message:
2010-07-12 22:19:11 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:14 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:17 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:20 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:23 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:26 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:29 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:32 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:35 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 22:19:38 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]


2010-07-12 21:49:59 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 21:50:00 [MgmtSrvr] INFO -- Node 3: Initial start, waiting for 2 to connect, nodes [ all: 2 and 3 connected: 3 no-wait: ]
2010-07-12 21:50:02 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 21:50:03 [MgmtSrvr] INFO -- Node 3: Initial start, waiting for 2 to connect, nodes [ all: 2 and 3 connected: 3 no-wait: ]
2010-07-12 21:50:05 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 21:50:07 [MgmtSrvr] INFO -- Node 3: Initial start, waiting for 2 to connect, nodes [ all: 2 and 3 connected: 3 no-wait: ]
2010-07-12 21:50:08 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
2010-07-12 21:50:10 [MgmtSrvr] INFO -- Node 3: Initial start, waiting for 2 to connect, nodes [ all: 2 and 3 connected: 3 no-wait: ]

From the NDB log:
2010-07-12 21:54:22 [ndbd] INFO -- NDB Cluster -- DB node 2
2010-07-12 21:54:22 [ndbd] INFO -- mysql-5.1.44 ndb-7.1.4b --
2010-07-12 21:54:22 [ndbd] INFO -- Ndbd_mem_manager::init(1) min: 1896Mb initial: 1916Mb
Adding 1917Mb to ZONE_LO (1,61319)
2010-07-12 21:54:24 [ndbd] INFO -- Start initiated (mysql-5.1.44 ndb-7.1.4)
WARNING: timerHandlingLab now: 38480895 sent: 38480810 diff: 85
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
NDBFS/AsyncFile: Allocating 310256 for In/Deflate buffer
WOPool::init(61, 9)
RWPool::init(22, 13)
RWPool::init(42, 18)
RWPool::init(62, 13)
Using 1 fragments per node
WARNING: timerHandlingLab now: 38481065 sent: 38481015 diff: 50
RWPool::init(c2, 18)
RWPool::init(e2, 14)
WOPool::init(41, 7)
RWPool::init(82, 12)
RWPool::init(a2, 53)
WOPool::init(21, 6)
2010-07-12 21:54:24 [ndbd] INFO -- Start phase 0 completed


P.S. There is no ERROR message in the log files.

It seems that each node keeps waiting for the other node. Even if I start the two nodes together, the same problem happens. Is there any way to troubleshoot this?

Cluster data node firewall setting (1 reply)

I would like to set up firewall rules on my data node machines. Which service ports should I open for the data nodes? Which ports are required for communication between data nodes?
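
A rough sketch of how this is often handled in config.ini, assuming the defaults: the management node listens on port 1186, while each data node's transporter port (used for data-node-to-data-node and API-to-data-node traffic) is allocated dynamically unless it is pinned with ServerPort, so pinning it makes firewall rules practical. The values below are only illustrative placeholders:

[ndbd]
hostname=data_node_host # placeholder
ServerPort=50501 # fixed transporter port that the firewall rule can allow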

Thanks.

Slow query (1 reply)

How do I check for slow queries in MySQL Cluster?
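
Slow queries are logged by the SQL nodes (mysqld), not by the data nodes, so one approach is to enable the slow query log in my.cnf on every SQL node and compare the logs. A minimal sketch, assuming MySQL 5.1-style option names (the file path is only an example):

[mysqld]
slow_query_log=1
slow_query_log_file=/var/log/mysql/slow.log
long_query_time=1 # log statements that take longer than 1 second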

MySQL in Cloud (1 reply)

Hi,

We are migrating an application (Tomcat/MySQL) into the Azure cloud. For this we are using the MySQL Accelerator provided by Microsoft, which uses a master/slave configuration for load balancing. There are a couple of issues with this approach:

1. There is 1 master and n slaves, so write throughput is constant while read throughput is scalable.

2. When a slave is added, Tomcat has to be made aware of the new MySQL instance. Do we need to use JMX to update the JDBC connect string in Tomcat? We are using com.mysql.jdbc.ReplicationDriver (http://dev.mysql.com/doc/refman/5.5/en/connector-j-reference-replication-connection.html) to load balance across the master and the slaves.
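
For reference, with com.mysql.jdbc.ReplicationDriver the master and the slaves are listed in the JDBC URL itself (the first host is treated as the master, the remaining hosts as slaves), so adding a slave does mean updating that string in Tomcat's data source configuration. A hypothetical example URL (host names and properties are placeholders):

jdbc:mysql://master-host,slave-host-1,slave-host-2/mydb?roundRobinLoadBalance=true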

Are there any best practices, patterns, or software for making MySQL scalable in the cloud? Has anyone used MySQL Cluster in the cloud?

Thanks,
Praveen

Setup Cluster doubts? (2 replies)

Hi.

I have been reading about Cluster and want to try it, but I have some doubts, specifically: what do I need to install on each machine?

I will run this inside virtual machines with Xen on CentOS 5.5.

The docs talk about data nodes, SQL nodes, a management server, and a management client.

I will use CentOS for this too, but what software do I have to install on each of:

SQL node
data node
management node

This is what I don't get, because I have a lot of cluster-related packages from Red Hat:

mysql-cluster-client
mysql-cluster-server
mysql-cluster-clusterj
mysql-cluster-management
mysql-cluster-shared
mysql-cluster-storage

etc, etc.

I don't fully understand which software goes on each part.
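
In case a rough mapping helps (stated in terms of the binaries rather than the exact RPM names, since the packaging varies):

- management node: ndb_mgmd (the management server) plus ndb_mgm (the client)
- data node: ndbd (or ndbmtd, the multi-threaded variant)
- SQL node: a mysqld built with NDB support, plus the usual mysql client programs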

Any info would be appreciated. Thanks all for your time!

Worried about using a library in a clustered environment. (3 replies)

I am concerned about using a regex library (I need a regex replace feature). If I install a third-party library on MySQL, how would this be affected if I moved to a clustering model? Would I need to install it on each server? I am worried that it will cause a lot of headaches in scenarios where we use multiple servers.

Should I stay away from custom libraries in scenarios with multiple servers?

Thanks!