Channel: MySQL Forums - NDB clusters

exponential rss memory increase on API node with max-connections (1 reply)

I'm testing out an upgrade to 7.4.11 and noticed that the RSS memory usage on the mysqld API node was extraordinarily high.
This is right after startup of the API node, with no client connections.
I'm compiling from source and running on CentOS 6.7 x86_64.

After doing some digging, I found that with a max-connections setting of >= 303, the RSS memory usage jumps dramatically (+313.2MB going from 302 to 303).

In my tests, increasing max-connections by 1 normally increased RSS memory usage by about +125KB.

Here are my results:

max-connections / RSS (MB)
001 / 087.9
050 / 093.1
100 / 099.4
200 / 111.8
300 / 124.2
301 / 124.3
302 / 124.4
303 / 438.5
350 / 446.7
400 / 455.4

/proc/<pid>/smaps reveals the following large memory mapping at 303 connections:

7fc031eeb000-7fc04b05d000 rw-p 00000000 00:00 0
Size: 411080 kB
Rss: 411080 kB
Pss: 411080 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 411080 kB
Referenced: 411080 kB
Anonymous: 411080 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
VmFlags: rd wr mr mw me ac

while the mapping at 302 connections reveals:

7ff68f4d2000-7ff69538e000 rw-p 00000000 00:00 0
Size: 97008 kB
Rss: 97008 kB
Pss: 97008 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 97008 kB
Referenced: 97008 kB
Anonymous: 97008 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
VmFlags: rd wr mr mw me ac
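For anyone trying to reproduce this, here is roughly how I pulled the largest mappings out of smaps; a minimal shell sketch, assuming a single running mysqld process:

# List the 5 largest mappings by RSS (kB), with their address ranges.
PID=$(pidof mysqld)
awk '/^[0-9a-f]+-[0-9a-f]+ / {addr=$1}
     /^Rss:/ {print $2, addr}' /proc/$PID/smaps | sort -rn | head -5

The top entry should be the large anonymous region shown above.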

I have also tested removing any buffers I had set in /etc/my.cnf, such as:
max-allowed-packet, tmp_table_size, max_heap_table_size, join_buffer_size, sort_buffer_size, read_rnd_buffer_size, ndb-batch-size, as well as any buffers I had set in the cluster config under [MYSQLD DEFAULT].
Neither change had any effect on the RSS usage.

Any help is appreciated.

-Tony

there is a memory pointer issue in mysql-5.6.27 ndb-7.4.8 (1 reply)

This morning our MySQL Cluster crashed in ndbd. I checked the error logs:
------------------------------------
Current byte-offset of file-pointer is: 568


Time: Tuesday 10 May 2016 - 01:23:56
Status: Temporary error, restart node
Message: Internal program error (failed ndbrequire) (Internal error, programming error or missing error message, please report a bug)
Error: 2341
Error data: DbtupTrigger.cpp
Error object: DBTUP (Line: 2200) 0x00000002
Program: ndbd
Pid: 32647
Version: mysql-5.6.27 ndb-7.4.8
Trace: /usr/local/mysql/data/ndb_2_trace.log.1 [t1..t1]
***EOM***
------------------------------------

Is this a known issue, or a new one?

How can I resolve it?
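In the meantime, when filing a bug report, the logs and trace files from all nodes can be bundled with the ndb_error_reporter utility that ships with the cluster distribution; a sketch (the config path is an assumption, adjust it to your installation):

# Collect error logs, trace files and cluster logs from every node
# listed in config.ini into a single archive for the bug report.
ndb_error_reporter /etc/mysql-cluster/config.ini
# The trace file named in the error report can also be read directly:
less /usr/local/mysql/data/ndb_2_trace.log.1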

Does 5.7.11-ndb-7.5.1-cluster-gpl support json datatype in ndb table? (1 reply)

Dear Experts,


As per documentation
"In MySQL Cluster NDB 7.5.1 and later, an NDB table can have a maximum of 3 JSON columns."

Does 5.7.11-ndb-7.5.1-cluster-gpl support json datatype in ndb table?

Please suggest.


I am looking for the latest MySQL Cluster release with support for the JSON data type.

As per the URL below, the latest release seems to be 5.7.11-ndb-7.5.1-cluster-gpl:

https://dev.mysql.com/doc/relnotes/mysql-cluster/7.5/en/mysql-cluster-news-7-5-2.html

VERSION(): 5.7.11-ndb-7.5.1-cluster-gpl


Has anyone installed the latest community MySQL Cluster release?
Does anyone know if the JSON data type feature is available in MySQL Cluster?

Please guide.
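One quick way to answer this on a running SQL node is to just try it; a minimal sketch (database and table names are made up):

# Probe JSON support in NDB from the shell of an SQL node.
mysql test <<'SQL'
CREATE TABLE json_probe (
  id  INT NOT NULL PRIMARY KEY,
  doc JSON                      -- should be accepted on NDB 7.5.1+
) ENGINE=NDBCLUSTER;
INSERT INTO json_probe VALUES (1, '{"a": 1}');
SELECT id, JSON_EXTRACT(doc, '$.a') FROM json_probe;
DROP TABLE json_probe;
SQL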

Thanks & Regards,

Import large MySQL (InnoDB and MyISAM) databases into a MySQL Cluster with less physical memory (RAM) (no replies)

I have 5 databases whose cumulative size is about 55GB, and 2 MySQL data nodes with 30GB of memory each. How can I import the databases into the cluster with less RAM than the total size of the databases? Please provide any suggestions or useful links. Thanks.
My cluster config.ini:


[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M

[ndb_mgmd]
NodeId=10
hostname=10.10.10.10
datadir=/var/lib/mycluster

[ndbd default]
NoOfReplicas=2
LockPagesInMainMemory=1
DataMemory=20000M
IndexMemory=1024M
ServerPort=2202
ODirect=1
#CompressedLCP=1
#CompressedBackup=1
#table related things
MaxNoOfTables=4096
MaxNoOfAttributes=24756
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512
MaxNoOfConcurrentOperations=500000
MaxNoOfConcurrentTransactions=500000
NoOfFragmentLogFiles=128
TimeBetweenLocalCheckpoints=30
FragmentLogFileSize=256M

[ndbd]
NodeId=20
hostname=10.10.10.20
datadir=/var/lib/mycluster-data
#BackupDataDir=/dir/to/backup

[ndbd]
NodeId=30
hostname=10.10.10.21
datadir=/var/lib/mycluster-data
#BackupDataDir=/dir/to/backup

[mysqld]
NodeId=40
hostname=10.10.10.30

[mysqld]
NodeId=50
hostname=10.10.10.31

[mysqld]

[mysqld]

[api]

[api]

[api]

[api]

-

DATA Nodes

# my.cnf
[mysqld]
ndbcluster
ndb-connectstring=10.10.10.10

[mysql_cluster]
ndb-connectstring=10.10.10.10

-

SQL node "/etc/my.cnf"

[mysqld]
ndbcluster
default-storage-engine=NDBCLUSTER
net_read_timeout=60000
connect_timeout=60000
max_allowed_packet=32M
max_connections=1000
query_cache_size=128M
query_cache_limit=16M
ndb-cluster-connection-pool=2
slow-query-log=1
slow-query-log-file=/var/log/mysql/slowquery.log
long_query_time=1
log-queries-not-using-indexes
[mysql_cluster]
ndb-connectstring=10.10.10.10
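One thing worth noting: with NoOfReplicas=2 and one node group, each data node has to hold a full copy of the data, so the in-memory part of the 55GB has to fit in the 20000M DataMemory per node; disk-based tablespaces (STORAGE DISK) are the usual way around that for non-indexed columns. Either way, the headroom can be watched from the management client during the import; a small sketch:

# One-off report of DataMemory / IndexMemory usage on all data nodes.
ndb_mgm -e "all report memoryusage"
# Repeat every 30 seconds while the import runs to watch the trend.
watch -n 30 'ndb_mgm -e "all report memoryusage"'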

ndbmtd constantly writes to disk on idle (1 reply)

Hi,

I just set up a MySQL Cluster; so far everything is working as expected.
I have 2 data nodes running just ndbmtd. I noticed that on both data nodes the ndbmtd processes constantly write 7-10MB/sec to disk, with the cluster idle for more than 12 hours.

#Node 1
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
vda 472,00 0,00 9,24 0 9

#Node 2
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
vda 488,00 0,00 9,62 0 9

The network traffic is 200-250KB/sec overall on each node.

Is this normal behavior? Is it possible to stop it?
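For reference, a way to see what the processes are actually writing; a sketch, assuming the DataDir from config.ini (the path here is a placeholder):

# Cumulative bytes written by each ndbmtd process since start.
for pid in $(pidof ndbmtd); do
  echo "pid $pid:"; grep write_bytes /proc/$pid/io
done
# Files touched in the last 5 minutes under the node's file system
# (redo logs and local checkpoint files typically show up here).
find /var/lib/mysql-cluster -type f -mmin -5 | head

If I'm not mistaken, the observed 7-10MB/sec is close to the default MinDiskWriteSpeed of 10M, which sets a floor for local checkpoint write speed, so some steady write activity on an otherwise idle cluster is expected.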

Thanks
Regards

Denny

DataMemory uses 2x table size (1 reply)

Hi guys,

I just set up a MySQL Cluster with 2 data nodes. I'm using a tablespace to reduce memory consumption, but I noticed that double the size of my table is still allocated in memory. I'm restoring a dump which creates the table as follows:

create table test "some data" TABLESPACE ts_tablespace STORAGE DISK ENGINE=ndbcluster;

my table size is the following:
+----------------+------------+-----------------+
| Table Name | Rows Count | Table Size (MB) |
+----------------+------------+-----------------+
| test | 614360 | 93.22 |
+----------------+------------+-----------------+

memory consumption of all nodes:
Connected to Management Server at: localhost:1186
Node 3: Data usage is 1%(6560 32K pages of total 384000)
Node 3: Index usage is 0%(1156 8K pages of total 131136)
Node 4: Data usage is 1%(6504 32K pages of total 384000)
Node 4: Index usage is 0%(1156 8K pages of total 131136)



It's the only table on the cluster, and the DataMemory consumption is around 200MB (6560 32K pages ≈ 205MB). How can I reduce this DataMemory consumption?
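For a more precise view than ndb_mgm gives, the same counters can be read from the ndbinfo database on an SQL node; a small sketch:

mysql <<'SQL'
-- Per-node DataMemory and IndexMemory usage, in bytes and pages.
SELECT node_id, memory_type, used, total
FROM ndbinfo.memoryusage;
SQL

Also worth remembering with STORAGE DISK tables: any column that is part of an index, plus the indexes themselves, still live in DataMemory/IndexMemory; only the non-indexed columns go to the tablespace.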

Regards
Denny

How to resolve "Lock wait timeout exceeded" issue reported by MySQL Cluster (2 replies)

When our system is busy with SQL update logic in MySQL Cluster, it reports the info below:
----------------------------
Lock wait timeout exceeded; try restarting transaction; nested exception is java.sql.SQLException: Lock wait timeout exceeded; try restarting transactionCaused by: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
----------------------------

Is the MySQL Cluster NDB engine different from InnoDB in a way that could lead to this issue?

Or is there no difference between NDB and InnoDB in SQL locking, meaning this comes from our own application logic?

Could you give me some suggestions for resolving this issue?
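One NDB-specific detail I have come across while investigating: for NDB tables the lock wait limit is not innodb_lock_wait_timeout but the TransactionDeadlockDetectionTimeout data node parameter (milliseconds, default 1200), set in config.ini under [ndbd default]. A sketch for comparing the two (the config path is an assumption):

# InnoDB's lock wait timeout does not apply to NDB tables.
mysql -e "SHOW VARIABLES LIKE 'innodb_lock_wait_timeout'"
# NDB's own limit is a data node parameter; check whether it is set:
grep -i TransactionDeadlockDetectionTimeout /etc/mysql-cluster/config.ini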

Best Regards,
Hu Jingyu

Add index for mysql table with ndbcluster storage engine (1 reply)

I am having trouble adding an index on a MySQL table with the ndbcluster storage engine; below is the error. [MySQL Cluster ver. 7.4 GA]

mysql> alter table `sks_staff_office` add index `pid_index` (pid);
ERROR 1296 (HY000): Got error 156 'Unknown error code' from NDBCLUSTER

mysql> create ONLINE index pid_index on sks_staff_office(pid);
ERROR 1296 (HY000): Got error 156 'Unknown error code' from NDBCLUSTER

mysql> alter online table `sks_staff_office` add index `pid_index` (pid);
ERROR 1296 (HY000): Got error 156 'Unknown error code' from NDBCLUSTER
The link I referred to is:

http://dev.mysql.com/doc/refman/5.6/en/alter-table-online-operations.html
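If the online (in-place) path keeps failing with error 156, a copying ALTER may still work as a fallback, at the cost of blocking writes to the table while it runs; a sketch:

mysql <<'SQL'
-- Force the copying (offline) algorithm instead of in-place.
ALTER TABLE sks_staff_office ALGORITHM=COPY, ADD INDEX pid_index (pid);
SQL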

Could not determine which nodeid to use for this node (no replies)

Hi guys,

I've got a problem with my MySQL Cluster setup with 2 management nodes.

Here is my config.ini:

[ndbd default]
NoOfReplicas=2
DataMemory=12000M
IndexMemory=1024M
#DiskPageBufferMemory=1000M
NoOfFragmentLogFiles=32
FragmentLogFileSize=64M
MaxNoOfExecutionThreads=4
MaxDiskWriteSpeed=1000M
MinDiskWriteSpeed=100M
MaxNoOfConcurrentOperations=100000
RedoBuffer=32M
LockPagesInMainMemory=1
ODirect=1
datadir=/var/lib/mysql-cluster

[tcp default]
portnumber=2202
SendBufferMemory=2M
ReceiveBufferMemory=2M

[ndb_mgmd]
NodeId=1
hostname=192.168.0.19
datadir=/var/lib/mysql-cluster

[ndb_mgmd]
NodeId=2
hostname=192.168.0.20
datadir=/var/lib/mysql-cluster

[ndbd]
NodeId=10
hostname=192.168.0.17

[ndbd]
NodeId=11
hostname=192.168.0.18

[mysqld]
NodeId=20
hostname=192.168.0.17

[mysqld]
NodeId=21
hostname=192.168.0.18


If I try to start a management node, I get the following error:
# ndb_mgmd -f /etc/my.cnf.d/config.ini --initial -vvvvv
MySQL Cluster Management Server mysql-5.6.29 ndb-7.4.11
2016-06-01 13:02:27 [MgmtSrvr] DEBUG -- Not alone on host 192.168.0.20, node 2 will also run here
2016-06-01 13:02:27 [MgmtSrvr] ERROR -- Could not determine which nodeid to use for this node. Specify it with --ndb-nodeid=<nodeid> on command line

If I delete the management node section with NodeId=2 on node 1 and the section with NodeId=1 on node 2, both nodes come up, but "ndb_mgm show" only lists the management node from which I execute the ndb_mgm command.
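For what it's worth, the usual fix is the one the error message itself suggests: keep both [ndb_mgmd] sections in the config on both hosts and tell each ndb_mgmd explicitly which node it is. A sketch:

# On 192.168.0.19:
ndb_mgmd -f /etc/my.cnf.d/config.ini --initial --ndb-nodeid=1
# On 192.168.0.20:
ndb_mgmd -f /etc/my.cnf.d/config.ini --initial --ndb-nodeid=2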

Regards
Denny

auto discovery of existing databases not working (no replies)

We have a problem with database autodiscovery when adding a MySQL API node to the cluster.

When we add an API node, there is a step when the node first connects to the management node where the existing databases are discovered:



2016-05-26 15:38:54 26109 [Note] - '::' resolves to '::';
2016-05-26 15:38:54 26109 [Note] Server socket created on IP: '::'.
2016-05-26 15:38:54 26109 [Note] Event Scheduler: Loaded 0 events
2016-05-26 15:38:54 26109 [Note] mysqld: ready for connections.
Version: '5.6.21-ndb-7.3.7-cluster-gpl-log' socket: '/nn/mysql/mysql_cluster/mysql.sock' port: 3306 MySQL Cluster Community Server (GPL)
2016-05-26 15:38:54 26109 [Note] ndb_index_stat_proc: Ndb object created with reference : 0x80060034, name : Ndb Index Statistics monitoring
2016-05-26 15:38:54 26109 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_schema
2016-05-26 15:38:54 26109 [Note] NDB Binlog: logging ./mysql/ndb_schema (UPDATED,USE_WRITE)
2016-05-26 15:38:54 26109 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_apply_status
2016-05-26 15:38:54 26109 [Note] NDB Binlog: logging ./mysql/ndb_apply_status (UPDATED,USE_WRITE)
2016-05-26 15:38:54 26109 [Note] NDB: Cleaning stray tables from database 'information_schema'
2016-05-26 15:38:54 26109 [Note] NDB: Cleaning stray tables from database 'ndbinfo'
2016-05-26 15:38:54 26109 [Note] NDB: Cleaning stray tables from database 'performance_schema'
2016-05-26 15:38:54 26109 [Note] NDB: Cleaning stray tables from database 'test'
2016-05-26 15:38:54 26109 [Note] NDB: Discovered missing database 'Mailboxes'
2016-05-26 15:38:54 26109 [Note] NDB: missing frm for Mailboxes.schema_history, discovering...
2016-05-26 15:38:54 26109 [Note] NDB: missing frm for Mailboxes.mail, discovering...
2016-05-26 15:38:54 26109 [Note] NDB: missing frm for Mailboxes.mailbox, discovering...
2016-05-26 15:38:54 26109 [Note] NDB: missing frm for Mailboxes.interface_history, discovering...
2016-05-26 15:38:54 [NdbApi] INFO -- Flushing incomplete GCI:s < 2217166/3
2016-05-26 15:38:54 [NdbApi] INFO -- Flushing incomplete GCI:s < 2217166/3
2016-05-26 15:38:54 26109 [Note] NDB Binlog: starting log at epoch 2217166/3
2016-05-26 15:38:54 26109 [Note] NDB Binlog: ndb tables writable
2016-05-26 15:38:54 26109 [Note] NDB Binlog: Node: 10, subscribe from node 50, Subscriber bitmask 400000
2016-05-26 15:38:54 26109 [Note] NDB Binlog: Node: 10, subscribe from node 58, Subscriber bitmask 40400000
2016-05-26 15:38:54 26109 [Note] NDB Binlog: Node: 11, subscribe from node 58, Subscriber bitmask 40000000
2016-05-26 15:38:54 26109 [Note] NDB Binlog: Node: 11, subscribe from node 50, Subscriber bitmask 40400000



But sometimes we add a node and it doesn’t discover the databases: the DISCOVER TABLE event doesn’t see any missing databases. In those cases we have to run the CREATE DATABASE command from that new API node, which weirdly succeeds despite the databases existing on the data nodes, and then the API node can access that existing database. We have tried a lot of experimentation to determine what governs this behavior of the DISCOVER TABLE event but are coming up short. Has anyone ever encountered this?
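For completeness, this is the workaround we use today when discovery does not kick in (the database name is from the log above):

# Run on the newly added API node.
mysql <<'SQL'
-- Succeeds even though the database already exists in the cluster,
-- and afterwards the existing NDB tables become visible on this node.
CREATE DATABASE Mailboxes;
SHOW TABLES IN Mailboxes;
SQL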

how to change database schema online in NDB cluster (no replies)

Repost:
How do I change the database schema in ndbcluster? For example, adding indexes to a table, changing a column data type, etc. Below is the error I get when I try to do so. Isn't it supported?

mysql> alter table `sks_staff_office` add index `pid_index` (pid);
ERROR 1296 (HY000): Got error 156 'Unknown error code' from NDBCLUSTER

mysql> create ONLINE index pid_index on sks_staff_office(pid);
ERROR 1296 (HY000): Got error 156 'Unknown error code' from NDBCLUSTER

mysql> alter online table `sks_staff_office` add index `pid_index` (pid);
ERROR 1296 (HY000): Got error 156 'Unknown error code' from NDBCLUSTER

ndbmtd: ndbzwrite|write returned -1: errno: 28 my_errno: 0 (2 replies)

Hi,

I have a problem with creating tablespaces on MySQL Cluster.
If I create a tablespace, my 2 ndb data nodes start creating the datafile, but after a few minutes I get this error on one node:

ndbmtd: ndbzwrite|write returned -1: errno: 28 my_errno: 0

and
ERROR 1528 (HY000): Failed to create DATAFILE.

My steps:

1 CREATE LOGFILE GROUP lg ADD UNDOFILE 'undo.log' INITIAL_SIZE 1024M UNDO_BUFFER_SIZE 32M ENGINE NDBCLUSTER;

2 CREATE TABLESPACE ts_devilb ADD DATAFILE 'ts.dat' USE LOGFILE GROUP lg INITIAL_SIZE 60G ENGINE NDBCLUSTER;

The second CREATE command then ends with "ERROR 1528 (HY000): Failed to create DATAFILE."
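A note on the error code: errno 28 is ENOSPC, "No space left on device", so the first suspect is free space on the filesystem holding the data node files; a sketch (the path is an assumption, use your DataDir):

# Decode the OS errno (perror ships with MySQL).
perror 28
# Check free space where the data nodes keep their files; a 60G
# INITIAL_SIZE datafile is allocated in full on every data node.
df -h /var/lib/mysql-cluster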

Regards
Denny

ndbd processes stuck in phase 2 (2 replies)

In a simple test setup, the cluster hasn't come out of phase 2 for more than 8 hours now (started with --initial).


Removing the ndb_2_fs and ndb_3_fs folders manually and then retrying doesn't work either.


Please help.


The event log is filled with entries like this:

2016-06-20 22:26:06 Node 3: Operations=0
2016-06-20 22:26:06 Node 2: Operations=0
2016-06-20 22:26:11 Node 3: Operations=0
2016-06-20 22:26:11 Node 2: Operations=0
2016-06-20 22:26:16 Node 3: Operations=0
2016-06-20 22:26:16 Node 2: Operations=0
2016-06-20 22:26:21 Node 3: Operations=0
2016-06-20 22:26:21 Node 2: Operations=0
2016-06-20 22:26:26 Node 3: Operations=0
2016-06-20 22:26:26 Node 2: Operations=0
2016-06-20 22:26:31 Node 3: Operations=0
2016-06-20 22:26:31 Node 2: Operations=0
2016-06-20 22:26:32 Node 2: Mean loop Counter in doJob last 8192 times = 6
2016-06-20 22:26:35 Node 2: 8192 loops,tot 48634297 usec,exec 11 extra:loops = 573,time 7,const 50
2016-06-20 22:26:36 Node 3: Operations=0


ndb_mgmd log

2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 2: Node 3: API mysql-5.6.29 ndb-7.4.11
2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 3: Node 2: API mysql-5.6.29 ndb-7.4.11
2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 3: Start phase 1 completed
2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 2: Start phase 1 completed
2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 2: System Restart: master node: 2, num starting: 2, gci: 0
2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 2: CNTR_START_CONF: started: 0000000000000000
2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 2: CNTR_START_CONF: starting: 000000000000000c
2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 2: Local redo log file initialization status:
#Total files: 400000, Completed: 0
#Total MBytes: 6400000, Completed: 0
2016-06-20 13:57:38 [MgmtSrvr] INFO -- Node 3: Local redo log file initialization status:
#Total files: 400000, Completed: 0
#Total MBytes: 6400000, Completed: 0




ndb2 log


2016-06-20 13:57:32 [ndbd] INFO -- Clearing filesystem in initial start
2016-06-20 13:57:32 [ndbd] INFO -- Start phase 0 completed
2016-06-20 13:57:32 [ndbd] INFO -- Phase 0 has made some file system initialisations
2016-06-20 13:57:32 [ndbd] INFO -- Starting QMGR phase 1
2016-06-20 13:57:32 [ndbd] INFO -- DIH reported initial start, now starting the Node Inclusion Protocol
2016-06-20 13:57:38 [ndbd] INFO -- findNeighbours from: 2248 old (left: 65535 right: 65535) new (3 3)
2016-06-20 13:57:38 [ndbd] INFO -- Include node protocol completed, phase 1 in QMGR completed
2016-06-20 13:57:38 [ndbd] INFO -- Start phase 1 completed
2016-06-20 13:57:38 [ndbd] INFO -- Phase 1 initialised some variables and included node in cluster, locked memory if configured to do so
2016-06-20 13:57:38 [ndbd] INFO -- Asking master node to accept our start (we are master, GCI = 0)
2016-06-20 13:57:38 [ndbd] INFO -- NDBCNTR master accepted us into cluster, start NDB start phase 1
2016-06-20 13:57:38 [ndbd] INFO -- We are performing initial start of cluster
2016-06-20 13:57:38 [ndbd] INFO -- LDM(0): Starting REDO log initialisation


ndb3 log


2016-06-20 13:57:35 [ndbd] INFO -- Clearing filesystem in initial start
2016-06-20 13:57:35 [ndbd] INFO -- Start phase 0 completed
2016-06-20 13:57:35 [ndbd] INFO -- Phase 0 has made some file system initialisations
2016-06-20 13:57:35 [ndbd] INFO -- Starting QMGR phase 1
2016-06-20 13:57:35 [ndbd] INFO -- DIH reported initial start, now starting the Node Inclusion Protocol
2016-06-20 13:57:38 [ndbd] INFO -- findNeighbours from: 2336 old (left: 65535 right: 65535) new (2 2)
2016-06-20 13:57:38 [ndbd] INFO -- Include node protocol completed, phase 1 in QMGR completed
2016-06-20 13:57:38 [ndbd] INFO -- Start phase 1 completed
2016-06-20 13:57:38 [ndbd] INFO -- Phase 1 initialised some variables and included node in cluster, locked memory if configured to do so
2016-06-20 13:57:38 [ndbd] INFO -- Asking master node to accept our start (nodeId = 2 is master), GCI = 0
2016-06-20 13:57:38 [ndbd] INFO -- NDBCNTR master accepted us into cluster, start NDB start phase 1
2016-06-20 13:57:38 [ndbd] INFO -- We are performing initial start of cluster
2016-06-20 13:57:38 [ndbd] INFO -- LDM(0): Starting REDO log initialisation



config:

[NDBD DEFAULT]

NoOfReplicas = 2
MaxNoOfAttributes = 40000
MaxNoOfTables = 1000
DataMemory = 8GB # 80M is default
IndexMemory = 512M # 18M is default
DataDir = /var/mysql-cluster
NoOfFragmentLogFiles = 100000
#LockPagesInMainMemory = 1 # Make sure not to use swap

[NDB_MGMD DEFAULT]
DataDir = /var/mysql-cluster

[NDB_MGMD]
NodeId=1
HostName = localhost

[NDBD]
NodeId = 2
HostName = localhost
MaxNoOfOrderedIndexes =512
MaxNoOfConcurrentOperations=2000000

[NDBD]
NodeId = 3
HostName = localhost
MaxNoOfOrderedIndexes =512
MaxNoOfConcurrentOperations=2000000

[MYSQLD]
NodeId = 4
[MYSQLD]
NodeId = 5
[API]


- The ndbd processes write constantly at about 8MB/s to disk.

- The ndb_2_fs and ndb_3_fs folders are now 89 GB in size.
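The numbers in the log explain the wait: an initial start pre-initialises the whole redo log, and with NoOfFragmentLogFiles = 100000 that is 400,000 files / 6,400,000 MB. A back-of-the-envelope check against the observed ~8MB/s write rate:

# Time to initialise 6,400,000 MB of redo log at ~8 MB/s.
echo "$((6400000 / 8)) s"          # 800000 s
echo "$((6400000 / 8 / 3600)) h"   # ~222 h, i.e. roughly 9 days

With the default NoOfFragmentLogFiles = 16 (sets of 4 files of 16MB each, about 1GB of redo log in total) this phase finishes quickly; alternatively, InitFragmentLogFiles=SPARSE may avoid writing the files out in full, filesystem permitting.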

Maximum data size (no replies)

According to the documented cluster limitations, there can be at most 48 data nodes. Does that mean that with NoOfReplicas=2 I effectively have only 24 nodes' worth of unique data storage?

If DataMemory is, say, 4GB, does that mean I can only store 24*4 = 96GB (~100GB)?

If so, how can I set up a cluster to store TBs of data?
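Roughly, yes; the arithmetic (ignoring overhead, and assuming equal DataMemory on every node):

# node groups = data nodes / NoOfReplicas
echo "$((48 / 2)) node groups"   # 24
# capacity ~= node groups * DataMemory per node
echo "$((24 * 4)) GB"            # 96 GB at 4GB DataMemory

To get to TBs, DataMemory per node is raised (machines with large RAM) and/or non-indexed columns are moved to disk-based tablespaces (STORAGE DISK), which are not bounded by DataMemory.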

Intermittent Packet loss on MySQL 5.5 Server (no replies)

Recently I upgraded MySQL from 5.1 to 5.5 on RHEL6.

After the upgrade there seem to be intermittent disconnections between the app and DB servers. Both servers communicate over private IPs, and there are no iptables rules or other firewalls running.

To check this, I tried probing the DB server from the app server on port 3306 as well as with a regular ping. There is some packet loss on the MySQL port, whereas I don't observe any packet loss with a regular ping to the server.

To verify, I downgraded to MySQL 5.1 again and everything seems to work fine: no dropped connections and no packet loss when probing the MySQL port.

These are virtual machines running on VMware. I also tried building a new virtual machine with MySQL 5.5 installed directly on it, and observed the same packet loss on the MySQL port.

This is where I am stuck; kindly let me know how I can get to the bottom of this problem.
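A packet capture on both ends would show whether packets are lost in the network or reset by the server; a sketch (the interface name and IP are placeholders):

# On the DB server: watch traffic on the MySQL port, looking for
# retransmissions and RSTs.
tcpdump -i eth0 -nn port 3306
# From the app server: repeated application-level probes.
for i in $(seq 1 100); do
  mysqladmin -h DB_PRIVATE_IP -P 3306 ping >/dev/null || echo "probe $i failed"
done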

Thanks
Vishal

Moving from 1 node to MySQL Cluster (1 reply)

Hello everyone,

We are exploring a MySQL Cluster installation. We want to install MySQL Cluster on one node and, when we get to a scenario of higher application load, add a second MySQL Cluster node with replication across data nodes.

However, from the articles we have read, MySQL Cluster does not guarantee durable commits, and it may be risky to run on one node.

So we are planning to start with the InnoDB engine on the first node and, when we go for a second node, convert from InnoDB to NDB Cluster.

Can you suggest any other approach we could follow? Our main concern is that the customer will be adding nodes one by one, and we don't want to ask for more than one machine upfront.

Also, we are exploring the following options to convert the engine, as many forums explain. Can you suggest which of them is better, or a better alternative? (A sketch of option 1 follows the list.)

1. Create a dump using mysqldump, drop/rename the original database, change the engine to NDBCLUSTER, and restore it.
2. Alter all the tables to drop the keys/indexes, alter the engine, and recreate the keys.
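A minimal sketch of option 1 (names are made up; note the sed assumes ENGINE=InnoDB appears only in CREATE TABLE lines of the dump):

# 1. Dump the database from the single InnoDB node.
mysqldump --databases mydb > mydb.sql
# 2. Rewrite the storage engine in the dump.
sed -i 's/ENGINE=InnoDB/ENGINE=NDBCLUSTER/g' mydb.sql
# 3. Restore through an SQL node of the cluster.
mysql < mydb.sql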


Thanks for the help,
Venu

Sharding based on encrypted field (no replies)

Does anyone know if one can shard a table based on an encrypted field? I don't see any particular reason why not, but I just want to make sure it's possible.
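It should be possible, since the ciphertext is just bytes to the server: store it in a (VAR)BINARY column and make it (part of) the key. Note that NDB only supports KEY and LINEAR KEY for user-defined partitioning. A hypothetical sketch:

mysql <<'SQL'
-- 'token' holds the encrypted value; rows are distributed across
-- data nodes by hashing its bytes.
CREATE TABLE secrets (
  token   VARBINARY(255) NOT NULL,
  payload BLOB,
  PRIMARY KEY (token)
) ENGINE=NDBCLUSTER
PARTITION BY KEY (token);
SQL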

How can I increase one thread's throughput (2 replies)

We are doing a PoC on MySQL Cluster, trying to find out how we can migrate our application to it. Our old data persistence tier is very fast (one thread can do 80K database operations per second), but it's not friendly to horizontal scalability.

I changed ndbapi_simple.cpp (to do only one record insert per transaction) and ran a similar test on an i7-2600 with NoOfReplicas=1. I found the following (the numbers are similar whether I choose ndbmtd or ndbd):
Thread Number, Throughput per Thread
1, 19K
4, 11K
10, 7K
20, 5K

When I changed to two machines with 2x E5-2670 (8 cores per socket) and 64GB memory, this time with NoOfReplicas=2 (I used ndbmtd and MaxNoOfExecutionThreads=8):
Thread Number, Throughput per Thread
1, 3.3K
4, 3.9K
8, 3.6K
16, 3.1K
32, 2.7K
64, 2.3K
128, 1.8K
256, 1.4K

I'm not surprised that the throughput drops so much with NoOfReplicas=2. I found that a lot of threads are required to fill one node group, which means the business layer must run many more threads than before.

I use localhost as the connection string.

My question is: how can I increase a single thread's throughput with one insert per transaction?

out of job buffer (no replies)

Hello,

I'm using NDB Cluster 7.4.11 with ndbmtd data nodes. I recently increased the DataMemory available to the nodes, but now when I restart the data nodes I receive the following error:

Node 3: Forced node shutdown completed. Occured during startphase 0. Initiated by signal 6.

And in the log I find

out of job buffer

Is there a way I can resolve this?

Thanks in advance,
Randy

Got temporary error 899 'Rowid already allocated' from NDBCLUSTER (no replies)

We are seeing this error when inserting multiple rows into one of our tables in our production DB.
It's only one table that keeps causing this error, and it's not even under heavy load (around 10 inserts per minute).
We had the same issue a couple of months ago; it fixed itself when we upgraded to the latest version, but now it has started showing up again.
I am not sure where to look for a fix or what causes this error. Has anyone faced this issue and fixed it?

Our setup is 6 data nodes in two node groups, one management node, and 4 MySQL nodes, deployed on m4.4xlarge instances on AWS.
Version: mysql-5.6.29 ndb-7.4.11