Channel: MySQL Forums - NDB clusters

error 293 Inconsistent trigger state in TC block (no replies)

Hi,

I'm trying to run some tests against our MySQL NDB Cluster, and we initially got the following error:

Got temporary error 221 'Too many concurrently fired triggers (increase MaxNoOfFiredTriggers)'

So, following that guidance, we increased MaxNoOfFiredTriggers, after which we got:

Got temporary error 233 'Out of operation records in transaction coordinator (increase MaxNoOfConcurrentOperations)'

So we've increased that parameter as well. Currently with these settings

[ndbd]
MaxNoOfConcurrentOperations=6553600
MaxNoOfFiredTriggers=4000000

we're no longer seeing either of the above errors, but now we're getting the error below, and I haven't found any guidance in the forums or documentation on what causes it or what to try:

Got error 293 'Inconsistent trigger state in TC block' from NDBCLUSTER

Does anyone have any insight into what this error means and what I can do about it?

Thanks very much,

Pete

p.s.

We're running this version of MySQL

Server version: 5.7.16-ndb-7.5.4-cluster-commercial-advanced MySQL Cluster Server - Advanced Edition (Commercial)

We've seen this on both inserts and deletes of a number of records. The slow query log shows the following for the delete queries that hit the 'Inconsistent trigger state in TC block' error.

# Time: 2018-03-14T21:48:13.716121Z
# User@Host: user[user] @ localhost [] Id: 33277
# Query_time: 20.595956 Lock_time: 0.000161 Rows_sent: 0 Rows_examined: 100512
SET timestamp=1521064093;
delete s, ao from S_TABLE s JOIN A_TABLE ao ON s.OBJ = ao.ID where time <= 1521042709062;
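
Both of the earlier errors point at running out of concurrent resources (fired triggers, operation records), so besides raising the limits it can help to keep each transaction smaller. Below is a minimal sketch of a batched variant of the delete; the batch_ids temporary table and the 10000 batch size are made up for illustration, and the time column is assumed to belong to S_TABLE. Repeat the three statements until the delete affects no rows.

-- Grab one batch of keys to purge (placeholder names; adjust to the real schema)
CREATE TEMPORARY TABLE batch_ids ENGINE=MEMORY AS
  SELECT s.OBJ AS id FROM S_TABLE s WHERE s.time <= 1521042709062 LIMIT 10000;

-- Delete only the rows in this batch, so each transaction needs far fewer
-- operation records and fires far fewer triggers than the full join delete
DELETE s, ao
FROM S_TABLE s
JOIN A_TABLE ao ON s.OBJ = ao.ID
JOIN batch_ids b ON b.id = s.OBJ
WHERE s.time <= 1521042709062;

DROP TEMPORARY TABLE batch_ids;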

NDB Checkpoints and research on In-Memory Databases (no replies)

NDB Cluster and disk columns (no replies)

Data node forced to shutdown on restart. Caused by error 2341 (no replies)

Hello everyone,

I have been running into this problem since yesterday, and somehow I just can't seem to get it to work.

Config:

I set up a total of 4 hosts running 2 data nodes, 2 management servers, and 1 SQL node (for now).

Software:
I set up MySQL NDB Cluster mysql-5.7.20 ndb-7.6.4 on Ubuntu 16.04 LTS 64-bit. NUMA is turned off by setting numa=off in grub.conf.


Server Setup:
FusionIO SX300 3.0T SSD
2x DELL POWEREDGE R820 4X E5-4650 2.7GHZ 8C 256GB RAM


What am I doing:

I set up the servers and the database, then shut down DN#1 with ndb_mgm -e "1 STOP". The data node shut down just fine, and DN#2 took over as master. However, when I try to restart DN#1, I get this error.


Here are my logs:

ndb_11_error.log:

Time: Friday 30 March 2018 - 10:13:46
Status: Temporary error, restart node
Message: Internal program error (failed ndbrequire) (Internal error, programming error or missing error message, please report a bug)
Error: 2341
Error data: DblqhMain.cpp
Error object: DBLQH (Line: 16139) 0x00000002 Check c_copy_fragment_in_progress failed
Program: ndbmtd
Pid: 5991 thr: 30
Version: mysql-5.7.20 ndb-7.6.4
Trace file name: ndb_11_trace.log.25_t30
Trace file path: /db/mysql-cluster/ndb_11_trace.log.25 [t1..t54]
***EOM***



ndb_11_out.log:

2018-03-30 10:23:24 [ndbd] WARNING -- Ndb kernel thread 18 is stuck in: Print Job Buffers at crash elapsed=200
2018-03-30 10:23:24 [ndbd] INFO -- Watchdog: User time: 15294 System time: 10100
2018-03-30 10:23:24 [ndbd] WARNING -- Ndb kernel thread 18 is stuck in: Print Job Buffers at crash elapsed=100
2018-03-30 10:23:24 [ndbd] INFO -- Watchdog: User time: 15294 System time: 10120
2018-03-30 10:23:24 [ndbd] WARNING -- Ndb kernel thread 18 is stuck in: Print Job Buffers at crash elapsed=100
2018-03-30 10:23:24 [ndbd] INFO -- Watchdog: User time: 15294 System time: 10140
2018-03-30 10:23:25 [ndbd] WARNING -- Ndb kernel thread 18 is stuck in: Print Job Buffers at crash elapsed=200
2018-03-30 10:23:25 [ndbd] INFO -- Watchdog: User time: 15294 System time: 10150
2018-03-30 10:23:26 [ndbd] ALERT -- Node 11: Forced node shutdown completed. Occured during startphase 5. Caused by error 2341: 'Internal program error (failed ndbrequire)(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.


ndb_12_out.log:


2018-03-30 10:11:19 [ndbd] INFO -- Master takeover started from 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 1: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 6: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 6: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 6: Inserting failed node 11 into takeover queue, length 1
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 5: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 5: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 6: GCP completion 203/10 waiting for node failure handling (1) to complete. Seizing record for GCP.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 8: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 8: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 2: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 10: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 10: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 13: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 13: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 12: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 12: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 12: GCP completion 203/10 waiting for node failure handling (1) to complete. Seizing record for GCP.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 12: Inserting failed node 11 into takeover queue, length 1
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 15: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 15: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 15: GCP completion 203/10 waiting for node failure handling (1) to complete. Seizing record for GCP.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 16: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 16: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 16: GCP completion 203/10 waiting for node failure handling (1) to complete. Seizing record for GCP.
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 16: Inserting failed node 11 into takeover queue, length 1
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 4: Started failure handling for node 11
2018-03-30 10:11:19 [ndbd] INFO -- DBTC 4: Step NF_BLOCK_HANDLE completed, failure handling for node 11 waiting for NF_TAKEOVER, NF_CHECK_SCAN, NF_CHECK_TRANSACTION.



My Config.ini

root@server103:/var/lib/mysql-cluster# cat /var/lib/mysql-cluster/config.ini
[NDB_MGMD DEFAULT]
#DataDir=/var/lib/mysql-cluster # Directory for the log files
DataDir=/db/mysql-cluster # Directory for the log files
#config-cache=0


[NDBD DEFAULT]
#Redundancy:
NoOfReplicas=2

# MULTI-THREADING OPTIONS
#Our data node config for 60 CPUs: #ldm=32 #tc=16 #send=4 #recv=4 #main=1 #io=1 #watchdog=1 #rep=1
ThreadConfig = ldm={count=32,cpubind=4-35,thread_prio=10,realtime=0,spintime=500},tc={count=16, cpubind=36-51,thread_prio=10,realtime=0,spintime=100},send={count=4,cpubind=52-55},recv={count=4,cpubind=56-59},main={cpubind=60},io={cpubind=61},watchdog={cpubind=62,realtime=0},rep={cpubind=63},idxbld={count=1,cpubind=3}


#LockExecuteThreadToCPU=1

#IMPORTANT: This is not necessary to be set if you're using ThreadConfig
#If you are planning to use MySQL Cluster 7.0's multithreaded version 'ndbmtd' then you need to add
#'MaxNoOfExecutionThreads' to the [NDBD DEFAULT] section in the cluster configuration.
#MaxNoOfExecutionThreads=56

# On systems with multiple CPUs, these parameters can be used to lock NDBCLUSTER
# threads to specific CPUs. Only applicable when ThreadConfig isn't used.
#LockMaintThreadsToCPU=0

#Listing 4-3. LogDestination Using FILE
#LogDestination = FILE:filename=ndb_{node_id}_cluster.log,maxsize=1024000,maxfiles=6

#InitialLogFileGroup = name=lg_1; undo_buffer_size=64M; undo1.log=150M; undo2.log=200M
#InitialTablespace = name=ts_1; extent_size=1M; data1.dat=1G; data2.dat=2G

#Memory Data Storage Options. The following options are related to memory sizing. The strategy is simple:
#allocate as much memory as the system has without causing swapping. Note that objects for schema and transaction
#processing also consume a certain amount of memory, so do not allocate too much to the buffers in this section.
#Leave a margin for them.
#DataMemory (memory for records and ordered indexes)
DataMemory=128G

#IndexMemory (memory for Primary key hash index and unique hash index)
#Usually between 1/6 or 1/8 of the DataMemory is enough, but depends on the
#number of unique hash indexes (UNIQUE in table def)
#IndexMemory=64G

# Avoid Swapping:
# On Linux and Solaris systems, setting this parameter locks data node
# processes into memory. Doing so prevents them from swapping to disk,
# which can severely degrade cluster performance.
LockPagesInMainMemory=1

# Schema Object Options On MySQL NDB Cluster, metadata of schema objects is stored in fixed size arrays that are allocated at the
# startup of the data node. The maximum allowable number of various objects is configured by the following options.
# It is important to allocate a sufficient size for each schema object beforehand. Schema object design is covered in Chapter 18.

# Table related things
# MaxNoOfLocalScans=64
#MaxNoOfTables=4096
#MaxNoOfAttributes=24756
MaxNoOfOrderedIndexes=2048
#MaxNoOfUniqueHashIndexes=512
#MaxNoOfTriggers=14336
#StringMemory=25


# DATA NODE CONFIGURATION:

#RAM from the shared global memory is used for the UNDO_BUFFER when you create the log file group.
#In the configuration generated by severalnines.com/config then you have to uncomment the
# SharedGlobalMemory in mysqlcluster-XYZ/cluster/config/config.ini before you start the cluster.
SharedGlobalMemory=8G

#If you are relying a lot on the disk data, we recommend to set this to as much as possible.
#In the configuration generated by severalnines.com/config then you have to uncomment the
#DiskPageBufferMemory in mysqlcluster-63/cluster/config/config.ini before you start the cluster.
#The DiskPageBufferMemory should be set to:
#DiskPageBufferMemory=TOTAL_RAM - 1GB (OS) - 1200MB (approx size used for buffers etc in the data nodes) - DataMemory - IndexMemory
#Expect to do some trial and error before getting this correct.
DiskPageBufferMemory=8G



# TRANSACTION OPTIONS:
# Since MySQL NDB Cluster is a real-time database system, it doesn’t allocate memory on the fly.
# Instead, it allocates memory at startup. It includes various types of buffers used by transactions and data operations.

# Operation records
# MaxNoOfConcurrentOperations 100000 (min) means that you can load any mysqldump file into cluster.
MaxNoOfConcurrentOperations=250000
MaxNoOfConcurrentTransactions=16384
MaxNoOfConcurrentScans=500
#MaxNoOfLocalScans=4 * MaxNoOfConcurrentScans * [# data nodes] + 2
MaxNoOfLocalScans=4000
MaxParallelScansPerFragment=512
TransactionDeadlockDetectionTimeout=5000

# Transaction Temporary Storage #
MaxNoOfConcurrentIndexOperations=8192
MaxNoOfFiredTriggers=4000

#Data Files Storage
#FileSystemPathDD - MySQL Cluster Disk Data data files and undo log files are placed in the indicated directory.
#FileSystemPathDataFiles - MySQL Cluster Disk Data data files are placed in the indicated directory.
#FileSystemPathUndoFiles - MySQL Cluster Disk Data undo log files are placed in the indicated directory
#DataDir=/usr/local/mysql/data # Remote directory for the data files
#FileSystemPathUndoFiles=/storage/data/mysqlcluster/
#FileSystemPathDataFiles=/storage/data/mysqlcluster/
#BackupDataDir=/storage/data/mysqlcluster/backup/
DataDir=/db/mysql-cluster

#Setting these to system default
TimeBetweenWatchDogCheck= 60000
#ArbitrationTimeout=5000

#Bypass FS cache (you should test if this works for you or not)
#Reports indicate that ODirect=1 can cause I/O errors (OS error code 5) on some systems. You must test.
# When this option is true, it causes write operations for checkpoints to be
# done in O_DIRECT mode, which means direct I/O. As the name suggests,
# direct I/O is an I/O operation done directly without routing file system cache.
# It may save certain CPU resources. It is best to set this option to true on
# Linux systems using kernel 2.6 or later.
ODirect=1

#Checkpointing...
#DiskCheckpointSpeed=10M
#TimeBetweenGlobalCheckpoints=1000
#the default value for TimeBetweenLocalCheckpoints is very good
#TimeBetweenLocalCheckpoints=20

#This option determines the speed of write operation for checkpoints in the
#amount of data written per second during a local checkpoint as part of a
#restart operation. This option is deprecated in 7.4.1 and removed in the
#7.5 series. Use MaxDiskWriteSpeedOtherNodeRestart and MaxDiskWriteSpeedOwnRestart
#instead on 7.4.1 and later; on 7.4 releases from 7.4.1 onward this option can
#still be set, but it has no effect.
#DiskCheckpointSpeedInRestart=100M


### Params for LCP
#MinDiskWriteSpeed=10M
#MaxDiskWriteSpeed=20M
#MaxDiskWriteSpeedOtherNodeRestart=500M
#MaxDiskWriteSpeedOwnRestart=200M
#TimeBetweenLocalCheckpoints=20
#TimeBetweenGlobalCheckpoints=2000
#TimeBetweenEpochs=100

#MemReportFrequency=30
#BackupReportFrequency=10

### Params for increasing Disk throughput
#BackupMaxWriteSize=1M
#BackupDataBufferSize=16M
#BackupLogBufferSize=4M


### Watchdog
#TimeBetweenWatchdogCheckInitial=60000

### TransactionInactiveTimeout - should be enabled in Production
TransactionInactiveTimeout=60000

### CGE 6.3 - REALTIME EXTENSIONS
# Setting these parameters allows you to take advantage of real-time scheduling
# of NDB threads to achieve increased throughput when using ndbd. They
# are not needed when using ndbmtd; in particular, you should not set
# RealTimeScheduler for ndbmtd data nodes.
RealTimeScheduler=0
#SchedulerExecutionTimer=80
#SchedulerSpinTimer=400
#SchedulerExecutionTimer=100


#RedoBuffer of 32M should let you restore/provision quite a lot of data in parallel.
#If you still have problems ("out of redobuffer"), then your disks are probably too slow and
#increasing this will not help, but only postpone the inevitable.
RedoBuffer=64M

### New 7.1.10 redo logging parameters
RedoOverCommitCounter=3
RedoOverCommitLimit=20

### Params for REDO LOG

# Should be at least the number of ldm threads configured in ThreadConfig (32 here)
NoOfFragmentLogParts = 32

#Size of each redo log file.
# A bigger fragment log file size than the default 16M works better with high write load
# and is strongly recommended!!
# This option specifies size of each redo log file. See NoOfFragmentLogFiles
# for more information. If you need more redo log space, consider increasing
# this option first, because each log file needs a memory buffer.
#FragmentLogFileSize = 16M
FragmentLogFileSize=64M
InitFragmentLogFiles=SPARSE

# Set NoOfFragmentLogFiles to 6 x DataMemory [in MB] / (4 * FragmentLogFileSize [in MB])
# Thus, NoOfFragmentLogFiles=6*2048/1024=12
# The "6xDataMemory" is a good heuristic and is STRONGLY recommended.
# ---
# This option specifies the number of redo log files. The redo log is written
# in a circular fashion. See Chapter 2 for more information about the redo log.
# The total file size of the redo log is calculated using the following formula:
# NoOfFragmentLogFiles * NoOfFragmentLogParts * FragmentLogFileSize
# The default values for these options are 16, 4, and 16M. 16 * 4 * 16M = 1G is the default for total size of the redo log.
#NoOfFragmentLogFiles=<4-6> X DataMemory in MB / 4 x FragmentLogFileSize
# NoOfFragmentLogFiles = 4 --> ### NoOfFragmentLogParts = <<No of LDM>>
NoOfFragmentLogFiles=300

TransactionBufferMemory=8M
#TimeBetweenGlobalCheckpoints=1000
#TimeBetweenEpochs=100
#TimeBetweenEpochsTimeout=0

### Heartbeating
#HeartbeatIntervalDbDb=15000
#HeartbeatIntervalDbApi=15000

### Params for setting logging
MemReportFrequency=30
BackupReportFrequency=10
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15

### Params for BACKUP
#BackupMaxWriteSize=1M
#BackupDataBufferSize=24M
#BackupLogBufferSize=16M
#BackupMemory=40M


# If you use MySQL Cluster 6.3 (CGE 6.3) and are tight on disk space, e.g ATCA.
# You should also then lock cpu's to a particular core.
# When this option is true, it causes LCP to be stored in compressed format.
# It saves certain disk space, but consumes more CPU time upon LCP and restart.
# It is better not to compress LCP on a busy system. CPU resources should be
# reserved for transaction processing. It is not recommended to set this option
# different per data node. Available resources should be the same among all data
# nodes to avoid bottlenecks.
#CompressedLCP=1
#CompressedBackup=1

#Realtime extensions (only in MySQL Cluster 6.3 (CGE 6.3) , read this how to use this)
#LockMaintThreadsToCPU=[cpuid]
#LockExecuteThreadToCPU=[cpuid]

LcpScanProgressTimeout=300
LongMessageBuffer=512MB


[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M


# Management node 1
[NDB_MGMD]
NodeId=1
ArbitrationRank=1
HostName=192.168.1.205 # Hostname of the manager
LogDestination=FILE:filename=ndb_1_cluster.log,maxsize=10000000,maxfiles=10


# Management node 2 for redundancy
[NDB_MGMD]
NodeId=2
ArbitrationRank=2
HostName=192.168.1.207 # Hostname of the manager
LogDestination=FILE:filename=ndb_2_cluster.log,maxsize=10000000,maxfiles=10


[NDBD]
NodeId=11
NodeGroup=0
HostName=192.168.1.211 # Hostname of the first data node


[NDBD]
NodeId=12
NodeGroup=0
HostName=192.168.1.213 # Hostname of the second data node


#[NDBD]
#NodeId=13
#HostName=192.168.1.215 # Hostname of the third data node


#[NDBD]
#NodeId=14
#HostName=192.168.1.217 # Hostname of the fourth data node


[MYSQLD]
NodeId = 51
HostName = 192.168.1.95

[MYSQLD]
NodeId = 52
HostName = 192.168.1.205

[MYSQLD]
NodeId = 53
HostName = 192.168.1.207

[MYSQLD]
NodeId = 54
HostName = 192.168.1.221


There are a total of about 65 tables, all using disk storage, but they're all empty. I have created one log file group and about 16 tablespaces to hold all of these tables.

Please advise what I may be doing wrong, and perhaps which config options I should change to get this to work.

Thanks so much!
Basant
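
Two things that may help with a 'failed ndbrequire' like this, though neither is a guaranteed fix: collect the traces for a bug report (the error text itself asks for one), and try bringing node 11 back with an initial node restart so it rebuilds its local file system and re-copies its data from node 12. A rough sketch, run on the failed data node host; the paths and connect string follow the config above:

# Package the error, trace and cluster logs from all nodes for a bug report
ndb_error_reporter /var/lib/mysql-cluster/config.ini

# Initial node restart: wipes this node's NDB file system and rebuilds it
# from the surviving node (node 12 keeps serving, but the copy takes time)
ndbmtd --initial --ndb-connectstring=192.168.1.205,192.168.1.207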

MySQL NDB Cluster Backups (no replies)

MySql Cluster Sql Node and MySql Router (no replies)

What's the best way in a MySQL Cluster configuration to direct client applications to a specific SQL node and also handle failover?
Is it possible using MySQL Router or MySQL Proxy?
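
MySQL Router can do this with plain TCP routing: clients connect to the Router port, and Router forwards to the first reachable SQL node in the list, moving to the next one when it fails. A minimal static-routing sketch for the Router 2.x generation (host names and the port are placeholders):

# mysqlrouter.conf
[routing:ndb_rw]
bind_address = 0.0.0.0
bind_port = 6446
destinations = sql1.example.com:3306,sql2.example.com:3306
mode = read-write

MySQL Proxy was never released as GA and is no longer developed, so Router is generally the safer choice of the two.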

NDB Nodegroup questions (no replies)

Good day,

Today I was tasked with adding data nodes to an existing cluster. Our current setup is 4 data nodes: nodes 1,2 in Nodegroup 0 and nodes 3,4 in Nodegroup 1, with NoOfReplicas=2.

[ndbd(NDB)] 4 node(s)
id=1 @10.2.2.20 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 0, *)
id=2 @10.2.2.21 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 0)
id=3 @10.2.3.20 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 1)
id=4 @10.2.3.21 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 1)

[# of node groups](2) = [# of data nodes](4) / NoOfReplicas(2)

Now my understanding is that, in NDB, the data is replicated from nodegroup to nodegroup and partitioned inside the nodegroup.

When I added 4 data nodes, it created Nodegroups 2 and 3:

[ndbd(NDB)] 8 node(s)
id=1 @10.2.2.20 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 0, *)
id=2 @10.2.2.21 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 0)
id=3 @10.2.3.20 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 1)
id=4 @10.2.3.21 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 1)
id=5 @10.2.2.75 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 2)
id=6 @10.2.2.118 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 2)
id=7 @10.2.3.202 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 3)
id=8 @10.2.3.198 (mysql-5.7.17 ndb-7.5.5, Nodegroup: 3)

[# of node groups](4) = [# of data nodes](8) / NoOfReplicas(2)

With the 2 extra nodegroups added, I'm now uncertain how replication and partitioning are handled.

If I'm wrong someone please correct me, but my understanding now is:
0 replicates with 1
2 replicates with 3

And if I lose any 1 of the 4 Nodegroups I lose the entire cluster.

Would that be correct?
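
One follow-up step worth mentioning: after new node groups are added online, existing tables keep their data on the original node groups until they are redistributed, so the documented procedure finishes by reorganizing and then optimizing each existing NDB table. A sketch with placeholder database and table names:

ALTER TABLE mydb.mytable ALGORITHM=INPLACE, REORGANIZE PARTITION;  -- spread partitions onto the new node groups
OPTIMIZE TABLE mydb.mytable;                                       -- reclaim the space freed on the old node groups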

Split NDB data node sync (no replies)

Dear Experts,

We have a NDB cluster with below setup

Site A
SQL & MGM node on Server-A-1
Data node on Server-A-2

Site B
Data node on Server-B-1


Question:

The above setup is live and applications are using the NDB cluster. I would like to know if there is a way to stop the inter-data-node sync/queries, as we have observed some performance issues due to network delays between the sites.

Can we operate both data nodes independently? Obviously we do not want to redeploy the cluster all over again.

Graphical tool for MySQL Cluster (no replies)

What's the best way to administer a MySQL Cluster graphically? Start and stop nodes, backups, and so on.

MySQL Enterprise Monitor seems useful only for monitoring.
MySQL Workbench does not seem like a good solution.

Which .deb package provides `ndb_mgm` ? (no replies)

Hi All,

I would like to install ndb_mgm from the APT repository.

I installed the `mysql-apt-config.deb` package, and selected "MySQL Server & Cluster -> mysql-cluster-7.6"

I can install ndb_mgmd, which is provided by the package `mysql-cluster-community-management-server`.

I can't find the package that provides the command-line interface to ndb_mgmd, i.e. ndb_mgm.

Could anyone please provide me with the name of the package I need to install to get ndb_mgm?

Thanks a lot in advance
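
In the meantime, apt-file can answer this kind of question directly by mapping a file name back to whichever .deb in the enabled repositories ships it:

sudo apt-get install apt-file
sudo apt-file update                 # index the contents of the enabled repositories
apt-file search bin/ndb_mgm          # lists every package shipping a path matching bin/ndb_mgm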

Linux configuration for MySQL Cluster (no replies)

Cluster SQL Nodes Not connecting (no replies)

Setup:
CentOS7 / Mgmt: .140, SQL1: .141, SQL2: .142, Data1: .143, Data2: .144
IPv6 Disabled / SELINUX=disabled / Firewall 3306 & 1186 open on all.
Cluster ver: 5.7.10

Mgmt:
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.5.1.143 (mysql-5.7.22 ndb-7.5.10, starting, Nodegroup: 0)
id=3 @10.5.1.144 (mysql-5.7.22 ndb-7.5.10, starting, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.5.1.140 (mysql-5.7.22 ndb-7.5.10)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 10.5.1.141)
id=5 (not connected, accepting connect from 10.5.1.142)


SQL1:
To start DB:
# systemctl start mysqld
# systemctl status mysqld
● mysqld.service - MySQL Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-05-07 11:04:35 EDT; 3min 10s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Process: 860 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
Process: 837 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
Main PID: 872 (mysqld)
CGroup: /system.slice/mysqld.service
└─872 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid


/etc/my.cnf (I have tried all kinds of configs for this and nothing works.)

[mysqld]

datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

ndb-nodeid=4
ndbcluster # run NDB storage engine
ndb-connectstring="10.5.1.140:1186" # location of management server
server-id=4


SQL Log:

2018-05-07T15:11:42.719184Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-05-07T15:11:42.720362Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.22-ndb-7.5.10-cluster-gpl) starting as process 999 ...
2018-05-07T15:11:42.722926Z 0 [Note] InnoDB: PUNCH HOLE support available
2018-05-07T15:11:42.722950Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-05-07T15:11:42.722955Z 0 [Note] InnoDB: Uses event mutexes
2018-05-07T15:11:42.722959Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2018-05-07T15:11:42.722962Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2018-05-07T15:11:42.722966Z 0 [Note] InnoDB: Using Linux native AIO
2018-05-07T15:11:42.723268Z 0 [Note] InnoDB: Number of pools: 1
2018-05-07T15:11:42.723351Z 0 [Note] InnoDB: Using CPU crc32 instructions
2018-05-07T15:11:42.724708Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2018-05-07T15:11:42.730846Z 0 [Note] InnoDB: Completed initialization of buffer pool
2018-05-07T15:11:42.732617Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2018-05-07T15:11:42.744127Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2018-05-07T15:11:42.751516Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2018-05-07T15:11:42.751589Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2018-05-07T15:11:42.762553Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2018-05-07T15:11:42.763412Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2018-05-07T15:11:42.763428Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2018-05-07T15:11:42.763745Z 0 [Note] InnoDB: Waiting for purge to start
2018-05-07T15:11:42.814015Z 0 [Note] InnoDB: 5.7.22 started; log sequence number 2594153
2018-05-07T15:11:42.814424Z 0 [Note] Plugin 'FEDERATED' is disabled.
2018-05-07T15:11:42.818011Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2018-05-07T15:11:42.818899Z 0 [Note] InnoDB: Buffer pool(s) load completed at 180507 11:11:42
2018-05-07T15:12:17.673217Z 0 [Note] NDB Binlog: Starting...
2018-05-07T15:12:17.673285Z 1 [Note] NDB Binlog: Started
2018-05-07T15:12:17.673290Z 1 [Note] NDB Binlog: Setting up
2018-05-07T15:12:17.673328Z 1 [Note] NDB Binlog: Created schema Ndb object, reference: 0x0, name: 'Ndb Binlog schema change monitoring'
2018-05-07T15:12:17.673341Z 1 [Note] NDB Binlog: Created injector Ndb object, reference: 0x0, name: 'Ndb Binlog data change monitoring'
2018-05-07T15:12:17.673346Z 1 [Note] NDB Binlog: Setup completed
2018-05-07T15:12:17.673350Z 1 [Note] NDB Binlog: Wait for server start completed
2018-05-07T15:12:17.673380Z 0 [Note] NDB Util: Starting...
2018-05-07T15:12:17.673411Z 2 [Note] NDB Util: Wait for server start completed
2018-05-07T15:12:17.673435Z 0 [Note] NDB Index Stat: Starting...
2018-05-07T15:12:17.673440Z 0 [Note] NDB Index Stat: Wait for server start completed
2018-05-07T15:12:17.732237Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2018-05-07T15:12:17.732490Z 0 [Warning] CA certificate ca.pem is self signed.
2018-05-07T15:12:17.733740Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2018-05-07T15:12:17.734689Z 0 [Note] IPv6 is not available.
2018-05-07T15:12:17.734711Z 0 [Note] - '0.0.0.0' resolves to '0.0.0.0';
2018-05-07T15:12:17.734735Z 0 [Note] Server socket created on IP: '0.0.0.0'.
2018-05-07T15:12:17.747536Z 0 [Note] Event Scheduler: Loaded 0 events
2018-05-07T15:12:17.747707Z 0 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.7.22-ndb-7.5.10-cluster-gpl' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Cluster Community Server (GPL)
2018-05-07T15:12:17.747732Z 1 [Note] NDB Binlog: Check for incidents
2018-05-07T15:12:17.747735Z 1 [Note] NDB Binlog: Wait for cluster to start
2018-05-07T15:12:17.747754Z 2 [Note] NDB Util: Wait for cluster to start
2018-05-07T15:12:17.747761Z 0 [Note] NDB Index Stat: Wait for cluster to start
2018-05-07T15:12:47.752173Z 0 [Warning] NDB : Tables not available after 30 seconds. Consider increasing --ndb-wait-setup value
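
One detail that stands out from the setup description: only ports 3306 and 1186 are open, but the data node transporters listen on dynamically allocated ports by default, so the data nodes can get stuck in 'starting' and the API nodes can never attach if those ports are filtered. A sketch of pinning the transporter port in config.ini so a fixed port can be opened in the firewall (2202 is just an example value):

[NDBD]
NodeId=2
HostName=10.5.1.143
ServerPort=2202          # fixed transporter port; open it in the firewall along with 1186

[NDBD]
NodeId=3
HostName=10.5.1.144
ServerPort=2202

After changing config.ini, ndb_mgmd has to be restarted (with --reload) and the data nodes restarted for the change to take effect.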

ndb cluster 7.5 incremental backup (no replies)

Hi all, may I ask whether NDB Cluster 7.5 supports incremental backup?
Besides testing full backup and restore, I'd like to test PITR.
I cannot find any binary log, and the two tables (mysql.ndb_apply_status, mysql.ndb_binlog_index) contain no records.
Could you please help, or is there any config setting that needs to be enabled? Many thanks.

--------------------------------------------------------------------
mysql> SELECT Position, @FIRST_FILE:=File
-> FROM mysql.ndb_binlog_index
-> WHERE epoch > @LATEST_EPOCH ORDER BY epoch ASC LIMIT 1;
Empty set (0.00 sec)
--------------------------------------------------------------------
mysql> SELECT @LATEST_EPOCH:=MAX(epoch)
-> FROM mysql.ndb_apply_status;
+---------------------------+
| @LATEST_EPOCH:=MAX(epoch) |
+---------------------------+
| NULL |
+---------------------------+
--------------------------------------------------------------------
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.124.181 (mysql-5.7.21 ndb-7.5.9, Nodegroup: 0)
id=3 @192.168.124.182 (mysql-5.7.21 ndb-7.5.9, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.124.224 (mysql-5.7.21 ndb-7.5.9)

[mysqld(API)] 6 node(s)
id=4 @192.168.124.135 (mysql-5.7.21 ndb-7.5.9)
id=5 @192.168.124.136 (mysql-5.7.21 ndb-7.5.9)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
id=9 (not connected, accepting connect from any host)
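
For reference, the native NDB backup is always a full backup; PITR is done by restoring a backup and then replaying an SQL node's binary log, and mysql.ndb_binlog_index only gets rows once binary logging is enabled on that SQL node. A minimal my.cnf sketch of the pieces that have to be present on the logging SQL node (the server-id value is just an example):

[mysqld]
ndbcluster
log-bin = mysql-bin      # enables the binary log; mysql.ndb_binlog_index is populated from it
server-id = 4            # must be non-zero when the binary log is enabled
binlog-format = ROW      # NDB changes are logged row-based

Once that SQL node is restarted with these settings, mysql.ndb_binlog_index should start filling as epochs are written to the binary log.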

Hit the "rare bug" again - LCP stopped... (no replies)

Hi Mikael,

we hit the - as you called it - rare bug again from here
https://forums.mysql.com/read.php?25,661125,662677#msg-662677

After fiddling around since Saturday morning with the cluster and trying to get it to work again we backported your fix to 7.5.10 and indeed got it running again.

Here are the Logfiles for your reference.
https://data.boerse-go.de/it-operations/ndb_error_report_20180513164622.tar.bz2
The problem started Saturday at around 5 AM.

So thanks for fixing even rare bugs and keep up your good work!

We use ndbd at data node(single thread mode), but found 34 threads of ndbd (no replies)

We use ndbd (not ndbmtd) on our data nodes, but found 34 threads of ndbd.
I think ndbd is a single-threaded process, so why are there so many ndbd threads?
Does that mean there is no need to change ndbd to ndbmtd?

ps -eT | grep ndbd

20374 20374 ? 6-16:36:00 ndbd
20374 20375 ? 00:04:49 ndbd
20374 20376 ? 00:04:35 ndbd
20374 20377 ? 00:00:38 ndbd
20374 20378 ? 00:06:24 ndbd
20374 20379 ? 00:06:30 ndbd
20374 20380 ? 00:03:50 ndbd
20374 20381 ? 00:06:19 ndbd
20374 20382 ? 00:02:51 ndbd
20374 20383 ? 00:00:00 ndbd
20374 20384 ? 00:00:00 ndbd
20374 20385 ? 00:05:51 ndbd
20374 20386 ? 00:02:45 ndbd
20374 20387 ? 00:06:28 ndbd
20374 20388 ? 00:03:32 ndbd
20374 20389 ? 00:03:31 ndbd
20374 20390 ? 00:06:21 ndbd
20374 20391 ? 00:00:00 ndbd
20374 20392 ? 00:00:42 ndbd
20374 20393 ? 00:00:00 ndbd
20374 20394 ? 00:00:00 ndbd
20374 20395 ? 00:00:00 ndbd
20374 20396 ? 00:01:58 ndbd
20374 20397 ? 00:06:33 ndbd
20374 20398 ? 00:06:26 ndbd
20374 20399 ? 00:04:51 ndbd
20374 20400 ? 00:06:20 ndbd
20374 20401 ? 00:06:37 ndbd
20374 20402 ? 00:06:23 ndbd
20374 20403 ? 00:06:28 ndbd
20374 20404 ? 00:01:25 ndbd
20374 20405 ? 00:01:22 ndbd
20374 20406 ? 00:00:00 ndbd
20374 20407 ? 00:00:00 ndbd
29336 29336 ? 00:42:15 ndbd
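
ndbd has always started a number of helper threads (file system I/O, watchdog, transporter/connection handling) around its single execution thread, so seeing well over one OS thread per ndbd process is normal. If the cluster is on NDB 7.5.2 or later, the ndbinfo.threads table lists the known execution threads and their roles (file-system I/O helper threads may not appear there); a sketch:

-- Lists each data node thread with a short description of its role
-- (this ndbinfo table exists from NDB 7.5.2 on)
SELECT node_id, thr_no, thread_name, thread_description
FROM ndbinfo.threads
ORDER BY node_id, thr_no;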

ndb cluster 7.5 mysqld.log Warning (no replies)

Please help to solve the following warning. I searched Google but found no solution; do you have any idea? Many thanks.

mysqld.log

2018-05-24T16:18:54.936886+08:00 0 [Warning] NDB: server id set to zero - changes logged to bin log with server id zero will be logged with another server id by slave mysqlds
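
That warning comes from the NDB binlog injector when the SQL node's server_id is 0; giving every mysqld a unique non-zero server-id makes it go away (and is required anyway if the binary log is used for replication or PITR). A minimal my.cnf sketch, the value being just an example:

[mysqld]
server-id = 10       # any unique, non-zero value per SQL node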

ndb cluster 7.5 monitor from Nagios (no replies)

May I ask how to monitor the SQL nodes in an NDB cluster, or does anyone have a suggestion?
https://labs.consol.de/assets/downloads/nagios/check_mysql_health-3.0.0.5.tar.gz

When I install the dependencies (yum install perl-DBD-mysql),
I get the errors shown below.

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.communilink.net
* epel: mirror.pregi.net
* extras: centos.communilink.net
* updates: centos.communilink.net
Resolving Dependencies
--> Running transaction check
---> Package perl-DBD-MySQL.x86_64 0:4.023-6.el7 will be installed
--> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: perl-DBD-MySQL-4.023-6.el7.x86_64
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package: perl-DBD-MySQL-4.023-6.el7.x86_64
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.56-2.el7 will be installed
Removing mariadb-libs.x86_64 1:5.5.56-2.el7 - u due to obsoletes from installed mysql-cluster-community-libs-7.5.9-1.el7.x86_64
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.56-2.el7 will be installed
--> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: perl-DBD-MySQL-4.023-6.el7.x86_64
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package: perl-DBD-MySQL-4.023-6.el7.x86_64
--> Finished Dependency Resolution
Error: Package: perl-DBD-MySQL-4.023-6.el7.x86_64 (base)
Requires: libmysqlclient.so.18()(64bit)
Error: Package: perl-DBD-MySQL-4.023-6.el7.x86_64 (base)
Requires: libmysqlclient.so.18(libmysqlclient_18)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
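
The underlying conflict is that perl-DBD-MySQL from the CentOS base repo wants libmysqlclient.so.18, which normally comes from mariadb-libs, and that package is obsoleted by the installed cluster libs. Assuming the MySQL Cluster yum repo ships the same style of compat package as the standard MySQL repos, installing it first should satisfy the dependency; a sketch (the package name is an assumption, verify with yum search):

# Compat package providing libmysqlclient.so.18 alongside the cluster client libraries
yum install mysql-cluster-community-libs-compat
# Then the Nagios plugin dependency should resolve
yum install perl-DBD-MySQL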

NDB datanode behavior (no replies)

We have a two-data-node NDB cluster. Suppose a data node loses connections to all other NDB cluster nodes (the other data node, the management node, and the SQL node).

Can the orphaned data node be configured/allowed to run on its own?

ERROR 1114 (HY000): The table '#sql-78b5_1a9' is full (no replies)

My NDB Cluster Server version is 5.6.31-ndb-7.4.12-cluster-gpl MySQL Cluster Community Server (GPL)

When I try to change a table's engine from InnoDB to NDBCLUSTER, it gives a 'table is full' error as below.

mysql> ALTER TABLE SM_USER_WIDGET_MAPPING engine=NDBCLUSTER;
ERROR 1114 (HY000): The table '#sql-78b5_1a9' is full

(Currently there are 40 tables using the NDBCLUSTER engine and 23 InnoDB tables waiting to be altered.)

There are two data nodes. Each has 2G memory. Below is the memory report:

Node 2: Data usage is 19%(9389 32K pages of total 48384)
Node 2: Index usage is 16%(1143 8K pages of total 6944)
Node 3: Data usage is 19%(9389 32K pages of total 48384)
Node 3: Index usage is 16%(1143 8K pages of total 6944)
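
Since data and index usage are both under 20%, it may be worth double-checking what the remaining InnoDB tables will actually need once converted; the ndb_size.pl script bundled with the cluster distribution estimates DataMemory and IndexMemory requirements from an existing schema. A sketch (the database name and connection options are placeholders and may vary slightly by version):

# Estimate DataMemory/IndexMemory needed if mydb's tables were stored in NDB
ndb_size.pl --database=mydb --hostname=10.144.118.14 --user=root --format=text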

Below is cluster configuration

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
DataMemory=1512M # How much memory to allocate for data storage
IndexMemory=54M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.
MaxNoOfConcurrentOperations=100000
MaxNoOfLocalOperations=110000
MaxNoOfTables=4096
MaxNoOfTriggers=3500
MaxNoOfAttributes=25000

[tcp default]
# TCP/IP options:
portnumber=2202 # This is the default; however, you can use any
# port that is free for all the hosts in the cluster
# Note: It is recommended that you do not specify the port
# number at all and simply allow the default value to be used
# instead

[ndb_mgmd]
# Management process options:
hostname=10.122.215.52 # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node log files

[ndbd]
# Options for data node "A":
# (one [ndbd] section per data node)
hostname=10.144.118.15 # Hostname or IP address
#hostname=10.122.215.54 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files

[ndbd]
# Options for data node "B":
hostname=10.122.215.58 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files

[mysqld]
# SQL node options:
hostname=10.144.118.14 # Hostname or IP address
#hostname=10.122.215.53 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[mysqld]
# SQL node options:
hostname=10.122.215.52 # Hostname or IP address

I have spent several hours researching the solution without luck. It would be greatly appreciated if anyone could provide some help.

Thanks,

Qingyuan

MySQL NDB Cluster 7.6: GA and Benchmarks (no replies)
