mysql, memory-consumption

MySQL Memory Usage with many Tables


I have a MySQL database with a few hundred thousand tables. This is currently the best design for my system, since these tables are not related to each other and SELECT queries will only ever be run against a single table. In addition, users will most likely not access the same tables very often.

I have 16GB of RAM, but after about a day MySQL consumes about 90% of it, and total memory use on the system sits at 99-100%. I have tried numerous things, but simply can't get the memory usage down.
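
For reference, a query like the following (a sketch that assumes MySQL 5.7+/8.0 with the sys schema installed and performance_schema memory instrumentation enabled) shows which internal allocations are holding the memory:

SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;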

My innodb_buffer_pool_size is currently set to 8G, but I had it at 1G with the same issue. I also tried reducing open_files_limit, but that didn't help either.
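
To rule out settings silently not taking effect, the values the running server actually ended up with can be checked with:

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';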

Here is my output for

SHOW GLOBAL STATUS LIKE '%Open_%';
+----------------------------+----------+
| Variable_name              | Value    |
+----------------------------+----------+
| Com_show_open_tables       | 0        |
| Innodb_num_open_files      | 431      |
| Open_files                 | 0        |
| Open_streams               | 0        |
| Open_table_definitions     | 615      |
| Open_tables                | 416      |
| Opened_files               | 4606655  |
| Opened_table_definitions   | 4598528  |
| Opened_tables              | 4661002  |
| Slave_open_temp_tables     | 0        |
| Table_open_cache_hits      | 30024782 |
| Table_open_cache_misses    | 4661002  |
| Table_open_cache_overflows | 4660579  |
+----------------------------+----------+
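
From those counters, the table cache hit rate works out to roughly 87% (30,024,782 hits vs 4,661,002 misses), and Table_open_cache_overflows is almost equal to the misses, so nearly every miss also evicts a cached table. The ratio can be computed directly (assuming MySQL 5.7+/8.0, where these counters live in performance_schema):

SELECT ROUND(hits.VARIABLE_VALUE /
             (hits.VARIABLE_VALUE + misses.VARIABLE_VALUE) * 100, 2) AS hit_pct
FROM performance_schema.global_status AS hits
JOIN performance_schema.global_status AS misses
  ON hits.VARIABLE_NAME = 'Table_open_cache_hits'
 AND misses.VARIABLE_NAME = 'Table_open_cache_misses';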

And here is my mysqld config:

sql-mode=''
innodb_buffer_pool_size = 8G
open_files_limit=100000
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket  = /var/run/mysqld/mysqld.sock
port        = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir  = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address        = 127.0.0.1
key_buffer_size     = 16M
max_allowed_packet  = 16M
thread_stack        = 192K
thread_cache_size       = 8
myisam-recover-options  = BACKUP
query_cache_limit   = 1M
query_cache_size        = 16M
log_error = /var/log/mysql/error.log
expire_logs_days    = 10
max_binlog_size   = 100M

Does anyone know how to handle this many tables efficiently?

ADDITIONAL INFO

A) Mysqld: https://pastebin.com/PTiz6uRD

B) SHOW GLOBAL STATUS: https://pastebin.com/K4sCmvFz

C) SHOW GLOBAL VARIABLES: https://pastebin.com/Cc64BAUw

D) MySQLTuner: https://pastebin.com/zLzayi56

E) SHOW ENGINE INNODB STATUS: https://pastebin.com/hHDuw6gY

F) top: https://pastebin.com/6WYnSnPm

TEST AFTER SERVER REBOOT (LITTLE MEMORY CONSUMED):

A) ulimit -a: https://pastebin.com/FmPrAKHU

B) iostat -x: https://pastebin.com/L0G7H8s4

C) df -h: https://pastebin.com/d3EttR19

D) MySQLTuner: https://pastebin.com/T3DYDLg8


Solution

  • The first blocker to remove is at the Linux command line: run ulimit -n 124000 to get past the current limit of 1024 open files. This can be done NOW; no shutdown/restart of Linux is required for it to take effect.
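
    On a systemd-managed install the shell ulimit may not carry over to the mysqld service, so to make the higher limit persistent a drop-in override is the usual route (a sketch only; the service name mysql and the file path are assumptions that vary by distribution):

    # raise the limit for the current shell immediately
    ulimit -n 124000
    # persist it for the mysqld service under systemd
    sudo mkdir -p /etc/systemd/system/mysql.service.d
    printf '[Service]\nLimitNOFILE=124000\n' | sudo tee /etc/systemd/system/mysql.service.d/limits.conf
    sudo systemctl daemon-reload
    sudo systemctl restart mysql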

    Suggestions to consider for your my.cnf [mysqld] section

    table_open_cache=10000          # from 431 available today to a practical upper limit
    table_definition_cache=10000    # from 615 available today to a practical upper limit
    thread_cache_size=100           # from 8; cap suggested by the MySQL 8 reference manual to avoid OOM
    max_heap_table_size=64M         # from 16M, to reduce created_tmp_disk_tables
    tmp_table_size=64M              # from 16M; should always be equal to max_heap_table_size
    innodb_lru_scan_depth=100       # from 1024, to reduce CPU workload every second
    innodb_log_buffer_size=512M     # from 50M, to avoid log rotation every 7 minutes
    

    Considering your situation, I would SKIP the usual one-change-per-day rule and monitor before the next global variable change. Make all of the cnf changes at once. A stop/start of the service is required because you are limited by open_files_limit in MySQL: even though you requested 100,000, the ulimit of 1024 caused the runtime value to be capped at 1024.
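
    After the restart, a quick way to confirm the new values actually took effect and to watch whether the cache churn stops (same variable and status names as above):

    SHOW GLOBAL VARIABLES WHERE Variable_name IN
        ('open_files_limit', 'table_open_cache', 'table_definition_cache',
         'thread_cache_size', 'tmp_table_size', 'max_heap_table_size');
    SHOW GLOBAL STATUS LIKE 'Table_open_cache_%';
    SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';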

    If you copy-paste the block to the END of the [mysqld] section and remove the same-named variables above the block (in the [mysqld] section only), you will get rid of the 'multiple variable confusion' for the next analyst.

    Please view my profile for contact information and get in touch.