Channel: MySQL Forums - NDB clusters
Viewing all 1560 articles

Dimensioning a cluster, or how to avoid the infamous Error 410: REDO log files overloaded (no replies)

WRT this other thread I opened

http://forums.mysql.com/read.php?25,574288,574288#msg-574288

I'd like to understand how to properly stress-test an NDB cluster and what to tweak to get the best performance out of my hardware.
I need a cluster able to do 300-500 writes per second (record size approx. 2K, probably a little less) and about 30K reads per second, everything through the memcached interface (though if I can get better results another way, that's fine).

Now, as I said in the other thread, I'm using Brutis to test the memcached instance(s), and once I start to push the write operations I hit the NDB 410 error very quickly (within half an hour or so). It's a 3-node test cluster, 24GB of RAM per node and SAS disks (no RAID; I'm using a plain disk for now just to get an idea, since this is spare hardware). When running the test, IOPS are at 50-100 and I'm seeing *at most* 20MB/s written to disk. According to hdparm the disk can do ~110MB/s in sequential reads, so 20MB/s shouldn't be a big deal, and in fact I/O wait is low.

So, why am I getting the 410 error?

Cluster config:

MaxNoOfTables=256
MaxNoOfOrderedIndexes=256
MaxNoOfUniqueHashIndexes=128
MaxNoOfConcurrentOperations=1500000
MaxNoOfExecutionThreads=13
TimeBetweenLocalCheckpoints=20
NoOfFragmentLogFiles=8
DiskCheckpointSpeed=5M # MB/s


Now, I know I can increase NoOfFragmentLogFiles, but I think that will just buy me more time before the 410 error; if we're talking about sustained traffic, it just pushes the problem five hours further down the road. What can I do?
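For reference, total redo capacity per node is NoOfFragmentLogFiles × 4 × FragmentLogFileSize (the files come in sets of four, 16MB each by default), so a back-of-the-envelope check of the config above looks like:

```shell
# Redo log capacity implied by the config above, assuming the default
# FragmentLogFileSize of 16MB (it is not set explicitly in the config).
NO_OF_FRAGMENT_LOG_FILES=8
FRAGMENT_LOG_FILE_SIZE_MB=16
REDO_MB=$((NO_OF_FRAGMENT_LOG_FILES * 4 * FRAGMENT_LOG_FILE_SIZE_MB))
echo "${REDO_MB} MB of redo log"   # -> 512 MB of redo log
```

Those 512MB have to hold every write that happens between two local checkpoints, and with DiskCheckpointSpeed capped at 5MB/s the checkpoints themselves are slow, so letting LCPs write faster attacks the cause of error 410 rather than the symptom.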

And as a bonus question :) where exactly are the REDO log files stored? I understand that they are in the ndb_$NODEID_fs directory, but what are their names?

How to report errors under Windows (no replies)

I'd like to report a bug, but I am asked to run ndb_error_reporter and attach the resulting collection of configuration, trace and log files to the bug report.

As ndb_error_reporter is only available on Linux machines, I can't use it, since my cluster has a Windows management node and Debian Linux data nodes.
Is there a way to collect the logfiles anyway? Do I have to add a second Linux based management node for future error reports?
What exactly is ndb_error_reporter doing? Could I gather all relevant files manually?
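For what it's worth, ndb_error_reporter mostly just copies the cluster's configuration, log and trace files from each node into a single archive, so it can be reproduced by hand. A rough sketch, assuming the usual file names and a DataDir of /var/lib/mysql-cluster (both assumptions, adjust to your setup):

```shell
# Collect roughly what ndb_error_reporter would: the config, the cluster
# log from the management node, and each node's out/error/trace files.
DATADIR=${DATADIR:-/var/lib/mysql-cluster}
if [ -d "$DATADIR" ]; then
    tar czf ndb_error_report.tar.gz \
        "$DATADIR"/config.ini \
        "$DATADIR"/ndb_*_cluster.log* \
        "$DATADIR"/ndb_*_out.log \
        "$DATADIR"/ndb_*_error.log \
        "$DATADIR"/ndb_*_trace.log*
else
    echo "no cluster DataDir at $DATADIR"
fi
```

On the Windows management node the equivalent files can simply be zipped by hand and attached alongside the Linux ones.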

ndb_recovery questions (no replies)

Hi,

I have two questions about ndb backups.

1) Can an ndb backup be restored to another cluster? If I ever have to retrieve some data, I would rather not restore into the production cluster but into some testing cluster.

2) Is there any tool based on ndb backup that also backs up the table structure? AFAIK, it only backs up the data, not the table structure.
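For reference, both points are covered by ndb_restore: it can be pointed at any cluster, and its -m option restores the table metadata (so the structure is in the backup after all). A sketch with made-up backup ID, node IDs and path:

```shell
# Restore NDB backup 1 (taken on data nodes 2 and 3) into a test cluster.
# All IDs and the path are illustrative assumptions.
BACKUP_PATH=${BACKUP_PATH:-/var/lib/mysql-cluster/BACKUP/BACKUP-1}
if command -v ndb_restore >/dev/null 2>&1; then
    # -m restores table definitions; run it once, against one node's files
    ndb_restore -b 1 -n 2 -m --backup_path="$BACKUP_PATH"
    # -r restores the data; run it once per data node in the backup
    ndb_restore -b 1 -n 2 -r --backup_path="$BACKUP_PATH"
    ndb_restore -b 1 -n 3 -r --backup_path="$BACKUP_PATH"
else
    echo "ndb_restore not in PATH"
fi
```

Run it with --ndb-connectstring pointing at the test cluster's management node and the production cluster is never touched.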

Data node loses connection (no replies)

Hi everybody,

I have a situation where a data node keeps losing its connection to the management nodes.
My setup is 2x management nodes, 2x data nodes and 2x SQL nodes, each on a different host.
The management nodes start up correctly. When I start the data nodes and run the "SHOW" command in the management client, it shows the data nodes as connected; a few seconds later, when I run "SHOW" again, one of the data nodes has lost its connection to the cluster.
I checked the firewall on each node and verified that it is disabled.
So how can I troubleshoot this problem?
Thank you in advance
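A reasonable first step (a sketch, with an assumed DataDir and management node ID) is to read the cluster log on the management node around the moment the node drops out; missed-heartbeat and disconnect messages usually say which side went quiet:

```shell
# Scan the management node's cluster log for clues about the disconnect.
# Path and node ID are assumptions; adjust to your DataDir.
CLUSTER_LOG=${CLUSTER_LOG:-/var/lib/mysql-cluster/ndb_1_cluster.log}
if [ -r "$CLUSTER_LOG" ]; then
    tail -n 200 "$CLUSTER_LOG" | grep -i -E 'heartbeat|disconnect|arbitrat'
else
    echo "cluster log not found at $CLUSTER_LOG"
fi
```

Mismatched HostName entries in config.ini and intermittent network drops (not just firewalls) are common causes of a node connecting and then being dropped after missed heartbeats.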

Huge data insertion in the cluster: any best practices? (no replies)

Hi All,

I have a question regarding an issue that I am dealing with right now:
are there any best practices for huge data insertions into the cluster?
My test project relies on a single-table model that is supposed to receive thousands to millions of rows every X minutes. An external application will be in charge of performing the insertion, and Java is a compulsory choice.
So how would you deal with this kind of workload? I want to take advantage of the existing APIs, maybe ClusterJ / ClusterJPA, but are there any recommendations for the data insertion itself? Can a batch-like approach be used (and if so, how)?

Thank you all for your feedback!

Any tutorial/example of using REST over the cluster? (no replies)

Hi,

Are there any tutorials or examples of how to use the HTTP/REST APIs to connect/use the cluster?
Thank you

The table is full when deleting a huge batch of rows (no replies)

Hi

I'm still testing MySQL Cluster 7.2 and I found myself stuck with a

ERROR 1114 (HY000): The table 'demo_table' is full

when trying to DELETE batches of more than 30,000 rows at once (with the LIMIT clause). I really did fill up DataMemory (16M records in 12GB of memory), but when I tried to "clean" the table I found I can only delete blocks of 30K rows at a time. I guess this is due to some misconfigured parameter, but which one? Any hint?
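One observation, for what it's worth: every row touched in a single transaction consumes an operation record, and a DELETE ... LIMIT is one transaction, so hitting a wall around 30K rows could point at MaxNoOfConcurrentOperations (32768 by default). Besides raising it, deleting in smaller chunks works; a sketch from the shell (table name from the post, connection details assumed):

```shell
# Delete in chunks small enough to fit one transaction's operation records.
if command -v mysql >/dev/null 2>&1 && mysql -e 'SELECT 1' >/dev/null 2>&1; then
    while :; do
        ROWS=$(mysql -N -e 'DELETE FROM demo.demo_table LIMIT 20000; SELECT ROW_COUNT();')
        [ "$ROWS" -gt 0 ] || break   # stop once nothing was deleted
    done
else
    echo "no MySQL server reachable"
fi
```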

Thanks!

MySQL Cluster 7.2 - Table Data files (no replies)

We have a 2-data-node MySQL cluster, and I am trying to find our table data files. My questions are as follows...

1 - Where do the table data files live in a clustered environment? In a MyISAM-based environment I would be looking for the X.MYD and X.MYI files.
2 - We seem to have some very large bin files in our data directory which represent the transaction data. Do these need purging after a while or is this handled automatically?

Many thanks in advance for any help!

Chris.

NDB Crashed both nodes - How to recover data (no replies)

NDB has crashed on both nodes of our database, and unfortunately neither will restart, each failing with different errors. I can't get one started in order to fix the other.

Does anyone have an easier way to fix this than standing up another SQL server, loading the data into it, and then moving the filtered data back into the cluster?

Cannot create index on ndbcluster table (1 reply)

Hi,
I can only create ndbcluster tables with no index defined. When I try to add an index on one of the columns, I get no feedback from the mysql server.
Does anybody know what could cause that problem?

show warnings; / show errors; is not giving any message...

This is my setup:
5.5.27-ndb-7.2.8-cluster-gpl

Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=10 @192.168.1.10 (mysql-5.5.27 ndb-7.2.8, Nodegroup: 0)
id=12 @192.168.1.12 (mysql-5.5.27 ndb-7.2.8, Nodegroup: 0, Master)
id=14 @192.168.1.14 (mysql-5.5.27 ndb-7.2.8, Nodegroup: 1)
id=17 @192.168.1.17 (mysql-5.5.27 ndb-7.2.8, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.1 (mysql-5.5.27 ndb-7.2.8)

[mysqld(API)] 1 node(s)
id=50 @192.168.1.1 (mysql-5.5.27 ndb-7.2.8)

Update:
ALTER TABLE works, but I cannot insert data: I get no response on the console and have to abort the INSERT query.

mysql> ALTER TABLE test ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (2.08 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> describe test;
+-------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| id | int(11) | NO | PRI | NULL | |
| text | varchar(255) | NO | | NULL | |
+-------+--------------+------+-----+---------+-------+
2 rows in set (0.01 sec)

MySQL Cluster Running On Ubuntu On MS Hyper-V (no replies)

Whenever we copy or merge a snapshot of one of our ndb nodes, all the table x.frm files seem to be corrupt and we have to restore all our data from an SQL dump!

We are running our data nodes on Ubuntu 12.10 in VMs on a Windows 2008 R2 Hyper-V host.

Anyone else seen this?

In the meantime we are no longer moving VHDs around and are taking regular SQL dumps!

Thanks in advance for any help offered, Chris

Special setup of upstream mysql cluster replication question (no replies)

Currently I am doing some research on MySQL Cluster, using MySQL replication to feed it data from our (all-MyISAM) production server. The issue I am having now is very slow replication to the cluster.

In the beginning I converted our database (about 50GB in total) from MyISAM to a fully in-memory cluster with 4 data nodes. After the conversion, the upstream replication to the cluster became very slow: before, the MyISAM slave could catch up with the master quickly, but now the cluster catches up only one second of master time per second (I'm watching the Seconds_Behind_Master value in SHOW SLAVE STATUS).

Is anyone having the same setup when considering migration from existing database to cluster?

Here is the scenario.

MyISAM production (machine A) -> MyISAM slave + master with bin log format=MIX (Machine B)-> MySQL cluster slave ( Machine C)

1. Set up replication between A and B; the purpose of this step is to convert A's statement-based replication to row-based.
2. Set up replication between B and C.
2.2 Once the slave has caught up, stop the slave.
3. Run ALTER TABLE on C to convert the tables from MyISAM to the ndbcluster engine.
4. Resume the slave; now I see very slow replication.
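One setting worth checking on machine C's mysqld, since by default the NDB slave applies replicated rows one at a time: enabling batching often speeds up replication into a cluster considerably. A my.cnf sketch (the values are illustrative, not tuned):

```ini
[mysqld]
# Apply replicated changes to NDB in batches instead of row by row
slave-allow-batching = ON
# Larger NDB transaction batch size in bytes (default 32K); an assumption to tune
ndb-batch-size = 262144
```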

Thanks,
Benjamin

Add foreign key problem on NDB Cluster (no replies)

Hi guys!
I have installed MySQL Cluster ver. 7.3, but I have a problem adding a foreign key; it fails with error: #150 - Cannot add foreign key constraint

Below is my table schema:

CREATE TABLE IF NOT EXISTS `HsProductsPolicy` (
`HsProductsPolicyID` int(11) NOT NULL AUTO_INCREMENT COMMENT 'ID of the products policy',
`Code` varchar(64) COLLATE utf8_unicode_ci NOT NULL COMMENT 'Policy code. Will correspond to the FreeRadius group name',
`Notes` text COLLATE utf8_unicode_ci COMMENT 'Policy notes',
`ForFreeHotSpots` tinyint(1) NOT NULL DEFAULT '0' COMMENT 'If true, the policy is for products associated with free HotSpots',
`TrafficBandwidthModulation` int(11) NOT NULL DEFAULT '0' COMMENT 'Enables bandwidth modulation based on traffic. 0 = disabled; 1 = medium; etc.',
`TrafficBandwidthModulationFromTime` int(11) NOT NULL DEFAULT '0' COMMENT 'Active from this hour',
`TrafficBandwidthModulationToTime` int(11) NOT NULL DEFAULT '0' COMMENT 'Active until this hour',
`TrafficBandwidthModulationHours` int(20) NOT NULL DEFAULT '0' COMMENT 'Time window in hours to consider when analysing traffic for bandwidth modulation',
`TrafficBandwidthModulationTrafficUp` int(20) NOT NULL DEFAULT '0' COMMENT 'Upload traffic limit for bandwidth modulation',
`TrafficBandwidthModulationTrafficDown` int(20) NOT NULL DEFAULT '0' COMMENT 'Download traffic limit for bandwidth modulation',
`BandwidthMinUp` int(20) NOT NULL DEFAULT '0' COMMENT 'Minimum upload speed',
`BandwidthMinDown` int(20) NOT NULL DEFAULT '0' COMMENT 'Minimum download speed',
`BandwidthMaxDown` int(20) NOT NULL DEFAULT '0' COMMENT 'Maximum download speed',
`BandwidthMaxUp` int(20) NOT NULL DEFAULT '0' COMMENT 'Maximum upload speed',
PRIMARY KEY (`HsProductsPolicyID`),
KEY `HsProductsPolicyCode` (`Code`)
) ENGINE=NDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=COMPRESSED COMMENT='Table with the list of policies' AUTO_INCREMENT=8 ;

CREATE TABLE IF NOT EXISTS `radgroupreply` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`groupname` varchar(64) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
`attribute` varchar(64) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
`op` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT '=',
`value` varchar(253) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `GroupName_IDX` (`groupname`)
) ENGINE=NDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=129 ;

ALTER TABLE `radgroupreply`
ADD CONSTRAINT `radgroupreply_ibfk_1` FOREIGN KEY (`groupname`) REFERENCES `HsProductsPolicy` (`Code`) ON DELETE CASCADE ;

Why?
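A likely cause, for what it's worth: NDB foreign keys require the referenced column to be a primary key or a unique key, and `Code` only carries an ordinary index (`HsProductsPolicyCode`). A sketch of the fix; the database name `radius` is made up:

```shell
# Make the FK parent column unique first, then retry the constraint.
if command -v mysql >/dev/null 2>&1 && mysql -e 'SELECT 1' >/dev/null 2>&1; then
    mysql radius <<'SQL'
ALTER TABLE HsProductsPolicy ADD UNIQUE KEY HsProductsPolicyCode_U (Code);
ALTER TABLE radgroupreply
  ADD CONSTRAINT radgroupreply_ibfk_1 FOREIGN KEY (groupname)
    REFERENCES HsProductsPolicy (Code) ON DELETE CASCADE;
SQL
else
    echo "no MySQL server reachable"
fi
```

Both sides already match in type and collation (varchar(64) utf8_unicode_ci), which is the other common cause of error #150.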

I hope you can help me.

Best regards

Increasing Capacity using MySql Cluster (no replies)

Hi,

I'm considering migrating from MySQL to MySQL Cluster, since I have the impression this might be the solution to my current capacity limitation.

Please let me know if I'm addressing my inquiry to the wrong forum! If so, please accept my apologies.

I'm currently using one DELL XPS workstation as a single server and storage unit, with 4 x 1TB disks for storage (pure data; the OS and MySQL itself live on a 5th disk), and I'm planning to set up a second, similar server.

The situation is as follows:
- I DO need double the storage and processing capacity
- I DON'T need the data duplicated across the two nodes
- I DO need ONE single "interface" to the data stored on those two nodes -> that's why I thought of clustering

I'd RATHER HAVE the data split as I see fit (meaning I'd know precisely which table is stored where)


And my doubts:

* Is this possible?
* Should I configure two groups, one node each?
* Any readings you would recommend?

Thanks in advance!
BR
David

data node restart more than 3 hours (no replies)

I have 4 data nodes with 48GB of memory each, but my database is large and fills DataMemory (configured to 38G) up to 91% and IndexMemory (configured to 5G) to 74%.

I am testing a rolling restart, and each node needs more than 3 hours to come fully back online. I see a lot of waiting before and after data is loaded into memory at phase 5 (I periodically check memory usage on the restarting node), and I have no idea what the node is doing in that phase. From iostat -x 1 on all the nodes, the restarting node doesn't do much I/O compared with the other nodes and shows almost no CPU usage. Can anyone give me an idea of what it is doing and why phase 5 takes so long?

Thanks,
Benjamin

Error in getNdbIndexOperation mysql cluster 7.1.18 (no replies)

Using the NdbTransaction::getNdbIndexOperation() method on a table with a unique hash index produces the following error:

Error Code: 4003
Error Message: Function not implemented yet

=============================================================

Table Definition is

CREATE TABLE `STUDENT_TABLE` (
`REG_ID` int(11) NOT NULL AUTO_INCREMENT,
`SECURITY_CODE` varchar(100) NOT NULL,
`BATCH` int(10) DEFAULT NULL,
`DOE` int(10) DEFAULT NULL,
`STUDENT_NAME` varchar(100) DEFAULT NULL,
PRIMARY KEY (`REG_ID`),
UNIQUE KEY `SECURITY_CODE` (`SECURITY_CODE`),
KEY `STUDENT_TBL_IDX1` (`STUDENT_NAME`),
KEY `STUDENT_TBL_IDX2` (`BATCH`)
) ENGINE=ndbcluster AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

Code snippet is as following:
=========================================================
const NdbDictionary::Index *pIndex = NULL;
pIndex = pDict->getIndex("SECURITY_CODE","STUDENT_TABLE");

NdbTransaction *pTransaction = NULL;
pTransaction = gNdb->startTransaction();

NdbIndexOperation *pIndexOp = NULL;
pIndexOp = pTransaction->getNdbIndexOperation(pIndex);
---------------------------------------------------------

Is something wrong with the table DDL statement or with the index creation?

Thanks.

redo log growing even with no queries to the database (1 reply)

My redo log just keeps growing: all data nodes are writing something to the redo logs all the time, even with no activity on the cluster and no mysqld up. I am going to run out of disk space soon. Does anyone know what it is doing, and how I can stop or change it?

Cluster Query execution time slower (no replies)

Hi Team,

We are testing a cluster setup for our company, and the results are not as expected regarding query execution time. We need your kind support in understanding the cause of this issue.

We tried our complex query on MySQL Server version 5.6 and the execution time was 16 sec. When the same query was executed on the cluster with the specification below,
we got errors (ERROR 1296 (HY000): Got error -1 'Unknown error code' from NDBCLUSTER and ERROR 1297 (HY000): Got temporary error 20016 'Query aborted due to node failure' from NDBCLUSTER).
We worked around them by running SET ndb_join_pushdown=off;, after which the execution time was 577.359 sec.

So we tried to optimize the execution time with the adaptive query localization method mentioned on the website, but when we run with ndb_join_pushdown=on; we get: Error Code: 1297 Got temporary error 20008 'Query aborted due to out of query memory' from NDBCLUSTER.
How can we solve this issue?

Cluster Specification:

Cluster Setup Version : mysql-cluster-gpl-7.3.0-win32
Cluster nodes : 1 management node, 2 SQL nodes & 4 data nodes (on 2 host machines)

Management Node & Data Nodes Specification are :
Windows XP,
Pentium CPU 3.40GHZ,
3.39GHZ, 2.98 GB RAM
SQL Nodes Specifications:
First SQL node
Windows7
Intel 2Duo CPU E8200 @ 2.66GHZ 2.67GZ
RAM 4 GB

Second SQL node
Windows Server 2008 R2 standard
Intel Xeon(R) CPU E5520 @ 2.27GHZ 2.27 GHZ (2 processors)
RAM : 16 GB

As mentioned on your website, we also tried the cases below, but with no luck:
Case1 : Cluster having 2 Data nodes & 2 SqL node
Case2 : Cluster having 3 Data nodes & 3 SqL node
Case3 : Cluster having 4 Data nodes & 2 SqL node

Even running a simple query (SELECT with DISTINCT) on the cluster is slower than on a non-clustered MySQL server. Just for your reference:

Query execution time in seconds (each row is one run):

Cluster (100mb; diskless=0/1)   Cluster (1gb; diskless=0)   Cluster (1gb; diskless=1)   Without cluster
9.907                           8.188                       7.797                       3.328
9.953                           7.543                       7.062                       1.75
9.891                           7.766                       7.875                       1.75


Config file is as below

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
DataDir=C:/mysql/bin/cluster-data # Directory for each data node's data files

DataMemory = 1024M
IndexMemory = 512M
MaxNoOfConcurrentOperations = 42000
Diskless=1
# MaxAllocate = 1G
# MaxNoOfConcurrentIndexOperations = 25000
# MaxNoOfExecutionThreads=4
# MaxNoOfOrderedIndexes = 8000
# MaxNoOfAttributes = 8000
# MaxNoOfTables = 2500


[ndb_mgmd]
# Management process options:
HostName=10.81.129.235 # Hostname or IP address of management node
DataDir=C:/mysql/bin/cluster-logs # Directory for management node log files

[ndbd]
# Options for data node "A": # (one [ndbd] section per data node)
nodeid=10
HostName=10.81.129.233 # Hostname or IP address
datadir=C:/mysql/data

[ndbd]
# Options for data node "B":
nodeid=20
HostName=10.81.129.234 # Hostname or IP address
datadir=C:/mysql/data

[ndbd]
# Options for data node "A": # (one [ndbd] section per data node)
nodeid=30
HostName=10.81.129.233 # Hostname or IP address
datadir=C:/mysql/data1

[ndbd]
# Options for data node "B":
nodeid=40
HostName=10.81.129.234 # Hostname or IP address
datadir=C:/mysql/data1

[mysqld]
# SQL node options:
HostName=10.81.128.133 # Hostname or IP address

[mysqld]
# SQL node options:
HostName=10.81.127.94 # Hostname or IP address

Much appreciate your help.

Regards,
Karthick K

ndbmemcache and fetching values with multiple columns as key (no replies)

Table structure:

dev.Stats
- UserID int
- StatName varchar
- Value varchar
UserID + StatName are the PK.

In ndbmemcache.containers, I have "UserID,StatName" as the key_columns, and "Value" as the value_columns. In key prefixes, I have "stats:" as the key_prefix, "caching" as the policy for this container.

However, I can't seem to access it via memcache.

I've tried 'stats:1,MyFirstStat', and nothing. The data does exist in the table, and I even tried every ASCII character up to 256 in place of the comma.

mysql_cluster backup and restore (no replies)

How do I take a backup of a MySQL cluster and restore it?

192.168.0.1 (management node)
192.168.0.2 (data node1)
192.168.0.3 (data node2)

My question is how I can back up the MySQL cluster: if I delete a table from node1, I want to be able to restore it from backup. Please advise.
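For reference, NDB has an online backup built into the management client; a minimal sketch using the management node address above:

```shell
# Take an online backup of all NDB tables; the files land in each data
# node's BACKUP directory.
if command -v ndb_mgm >/dev/null 2>&1; then
    ndb_mgm -c 192.168.0.1 -e 'START BACKUP WAIT COMPLETED'
else
    echo "ndb_mgm not in PATH"
fi
```

Restoring a dropped table is then a job for ndb_restore, whose -m option also recreates the table definition. Note this covers NDB tables only; anything stored in MyISAM/InnoDB on the SQL nodes needs mysqldump.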