I use Google Cloud SQL (MySQL) as my production database. To replicate this database for testing in Docker, I use the mysql:8 image.
I noticed that one of my migration scripts succeeds on the cloud DB but fails during tests. The following script causes an error:
CREATE TABLE testTable
(
name varchar(1000)
);
CREATE INDEX idx_device_designs_name ON testTable (name);
The error: Specified key was too long; max key length is 3072 bytes [Failed SQL: (1071)...
I understand the reason for the error, but since our standard production DB does not produce it, I'm looking for the setting that disables this check.
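To see which defaults each server actually applies, the standard charset/collation system variables can be compared on both environments (plain MySQL, nothing provider-specific):
SELECT @@character_set_server, @@collation_server,
       @@character_set_database, @@collation_database;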
EDIT1:
I compared the CREATE statements on production and in the Docker container.
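The output below comes from a statement like:
SHOW CREATE TABLE testTable;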
production
CREATE TABLE `testTable` (
`name` varchar(1000) COLLATE utf8mb3_unicode_ci NOT NULL,
KEY `idx_device_designs_name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb3 COLLATE=utf8mb3_unicode_ci
docker
CREATE TABLE `testTable` (
`name` varchar(1000) CHARACTER SET utf8mb3 COLLATE utf8mb3_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
Production had the index applied in the past (before a MySQL upgrade, so under 5.7). Beyond that, it looks fundamentally the same to me.
To give this question a valid answer:
There is a difference in the column definition between the mysql:8 Docker container and the MySQL 8 Google Cloud database. Because the production DB was created back when it was still MySQL 5.7, the database's charset/collation is utf8mb3. In MySQL 8 the default charset/collation is utf8mb4.
As a result, the varchar(1000) produces different index key sizes in these environments, and the utf8mb4 size is too large.
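The arithmetic behind the error: InnoDB limits an index key to 3072 bytes and sizes a varchar key by the column's maximum byte length per character.
utf8mb3: 1000 chars * 3 bytes/char = 3000 bytes <= 3072 -> index fits
utf8mb4: 1000 chars * 4 bytes/char = 4000 bytes >  3072 -> error 1071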
Solution: for our tests we start the container with utf8mb3 as its default charset, thereby emulating the state of production and avoiding the error.
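One way to do this (container name and password are placeholders; the official mysql image passes extra arguments straight through to mysqld, and on older 8.0 releases the legacy names utf8/utf8_unicode_ci may be needed instead):
docker run --name mysql-test -e MYSQL_ROOT_PASSWORD=secret -d mysql:8 \
  --character-set-server=utf8mb3 \
  --collation-server=utf8mb3_unicode_ci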