mysql, mysql-slow-query-log

MySQL INSERT is sometimes too slow, with high IO/CPU usage


The table has about one hundred million rows; sometimes the IO bps is about 150 and the IOPS about 4K.

  1. OS version: CentOS Linux 7
  2. MySQL version: docker mysql:5.6, configured with the following my.cnf settings:
server_id=3310
skip-host-cache
skip-name-resolve
max_allowed_packet=20G
innodb_log_file_size=1G
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=5120M
expire-logs-days=7
log_bin=webser
binlog_format=ROW
back_log=1024
slow_query_log
slow_query_log_file=slow-log
tmpdir=/var/log/mysql
sync_binlog=1000
  3. the CREATE TABLE statement
CREATE TABLE `device_record` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `os` tinyint(9) DEFAULT NULL,
  `uid` int(11) DEFAULT '0',
  `idfa` varchar(50) DEFAULT NULL,
  `adv` varchar(8) DEFAULT NULL,
  `oaid` varchar(100) DEFAULT NULL,
  `appId` tinyint(4) DEFAULT NULL,
  `agent` varchar(100) DEFAULT NULL,
  `channel` varchar(20) DEFAULT NULL,
  `callback` varchar(1500) DEFAULT NULL,
  `activeAt` datetime DEFAULT NULL,
  `chargeId` int(11) DEFAULT '0',
  `createAt` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `idfa_record_index_oaid` (`oaid`),
  UNIQUE KEY `index_record_index_agent` (`agent`) USING BTREE,
  UNIQUE KEY `idfa_record_index_idfa_appId` (`idfa`) USING BTREE,
  KEY `index_record_index_uid` (`uid`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1160240883 DEFAULT CHARSET=utf8mb4
  4. the INSERT statement (a MyBatis @Insert annotation)
@Insert(
      "insert into idfa_record (os,idfa,oaid,appId,agent,channel,callback,adv,createAt) "
          + "values(#{os},#{idfa},#{oaid},#{appId},#{agent},#{channel},#{callback},#{adv},now()) on duplicate key "
          + "update createAt=if(uid<=0,now(),createAt),activeAt=if(uid<=0 and channel != #{channel},null,activeAt),channel=if(uid<=0,#{channel},channel),"
          + "adv=if(uid<=0,#{adv},adv),callback=if(uid<=0,#{callback},callback),appId=if(uid<=0,#{appId},appId)")

Solution

  • 100M rows, but the auto_increment is already at 1160M? This is quite possible: INSERT ... ON DUPLICATE KEY UPDATE "burns" an auto-increment id every time it takes the update path, so the counter grows much faster than the row count.
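
A minimal sketch of that id "burning", using a hypothetical toy table:

-- Toy table (hypothetical) with one unique key, as in the question.
CREATE TABLE t (
  id INT NOT NULL AUTO_INCREMENT,
  k  VARCHAR(10),
  n  INT,
  PRIMARY KEY (id),
  UNIQUE KEY (k)
) ENGINE=InnoDB;

INSERT INTO t (k, n) VALUES ('a', 1)
  ON DUPLICATE KEY UPDATE n = n + 1;  -- new row, id = 1
INSERT INTO t (k, n) VALUES ('a', 1)
  ON DUPLICATE KEY UPDATE n = n + 1;  -- duplicate: updates the row, but id 2 is still consumed
INSERT INTO t (k, n) VALUES ('b', 1)
  ON DUPLICATE KEY UPDATE n = n + 1;  -- new row gets id = 3, not 2

With this insert pattern, the counter advances on every statement and matches the row count only when no duplicates occur.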

  • This seems excessive: max_allowed_packet=20G.
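
As an aside (not from the original answer): the server caps max_allowed_packet at 1G, so 20G is silently reduced anyway; a conventional value can be checked and set like this:

-- max_allowed_packet has a hard maximum of 1GB; 64M is ample for row-sized packets.
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- applies to new connections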

  • How many rows are inserted per second? And what is "bps"? (I am pondering why there are 4K writes: I would expect about 2 IOPS per unique key per INSERT, but that does not add up to 4K unless you have about 500 inserts/sec.)
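
One way to measure the insert rate (a sketch using standard status counters):

-- Sample the cumulative INSERT counter twice and divide by the interval.
SHOW GLOBAL STATUS LIKE 'Com_insert';  -- first reading
-- wait, say, 10 seconds ...
SHOW GLOBAL STATUS LIKE 'Com_insert';  -- second reading
-- (second - first) / 10 = inserts per second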

  • Are the Inserts coming from different clients? (This feeds into "burning" ids, sluggishness, etc.)
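
A quick way to see how many distinct clients are connected (a sketch; information_schema.PROCESSLIST is available in MySQL 5.6):

-- Group current connections by user and client host.
SELECT user,
       SUBSTRING_INDEX(host, ':', 1) AS client_host,
       COUNT(*) AS connections
FROM   information_schema.PROCESSLIST
GROUP  BY user, client_host;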