I'm testing out an upgrade to 7.4.11 and noticed that the RSS memory usage on the mysqld API node was extraordinarily high.
This is right after startup of the API node, with no client connections.
I'm compiling from source and running on CentOS 6.7 x86_64.
After doing some digging, I found that at a max-connections setting of >= 303, RSS memory usage jumps sharply (+314.1 MB going from 302 to 303).
In my tests, increasing max-connections by 1 normally increased RSS by roughly 125 KB.
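For reference, the RSS values can be read right after each restart with something along these lines (a minimal sketch, assuming Linux and a known mysqld PID; rss_mb is just an illustrative name):

import sys

def rss_mb(pid):
    # /proc/<pid>/status reports VmRSS in kB; convert to MB.
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0
    raise RuntimeError("VmRSS not found")

if __name__ == "__main__":
    print(f"{rss_mb(sys.argv[1]):.1f} MB")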
Here are my results:
max-connections / RSS (MB)
001 / 087.9
050 / 093.1
100 / 099.4
200 / 111.8
300 / 124.2
301 / 124.3
302 / 124.4
303 / 438.5
350 / 446.7
400 / 455.4
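To make the discontinuity explicit, here is a quick sketch that computes the per-connection growth from the table above (treating MB as MiB):

pairs = [(1, 87.9), (50, 93.1), (100, 99.4), (200, 111.8),
         (300, 124.2), (301, 124.3), (302, 124.4), (303, 438.5),
         (350, 446.7), (400, 455.4)]
# Per-connection RSS growth between consecutive measurements, in KB.
for (c0, r0), (c1, r1) in zip(pairs, pairs[1:]):
    print(f"{c0:>3} -> {c1:>3}: {(r1 - r0) * 1024 / (c1 - c0):>9.1f} KB/connection")

All other steps fall in the 100-180 KB/connection range; 302 -> 303 jumps by roughly 321000 KB in a single step.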
/proc/<pid>/smaps reveals the following large memory mapping at 303 connections:
7fc031eeb000-7fc04b05d000 rw-p 00000000 00:00 0
Size: 411080 kB
Rss: 411080 kB
Pss: 411080 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 411080 kB
Referenced: 411080 kB
Anonymous: 411080 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
VmFlags: rd wr mr mw me ac
while the mapping at 302 connections reveals:
7ff68f4d2000-7ff69538e000 rw-p 00000000 00:00 0
Size: 97008 kB
Rss: 97008 kB
Pss: 97008 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 97008 kB
Referenced: 97008 kB
Anonymous: 97008 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
VmFlags: rd wr mr mw me ac
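In case it helps anyone reproduce this, a quick scan of smaps along these lines will locate the mapping (a minimal sketch; assumes Linux, and the 100 MB threshold is arbitrary):

import sys

def large_mappings(pid, min_rss_kb=100 * 1024):
    # Walk /proc/<pid>/smaps and yield (address_range, rss_kb) for any
    # mapping whose Rss exceeds the threshold; smaps reports values in kB.
    addr, rss = None, 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            tokens = line.split()
            if tokens and "-" in tokens[0]:  # mapping header line
                if addr is not None and rss >= min_rss_kb:
                    yield addr, rss
                addr, rss = tokens[0], 0
            elif line.startswith("Rss:"):
                rss = int(tokens[1])
    if addr is not None and rss >= min_rss_kb:
        yield addr, rss

if __name__ == "__main__":
    for addr, rss in large_mappings(sys.argv[1]):
        print(f"{addr}  Rss: {rss} kB")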
Note that the difference between the two mappings (411080 - 97008 = 314072 kB) accounts for nearly all of the jump.
I have also tested removing all of the buffer settings I had in /etc/my.cnf, such as:
max-allowed-packet, tmp_table_size, max_heap_table_size, join_buffer_size, sort_buffer_size, read_rnd_buffer_size, ndb-batch-size, as well as any buffers I had set in the cluster config under [MYSQLD DEFAULT].
Neither change had any effect on the RSS usage.
Any help is appreciated.
-Tony