Bug #102485 util.dumpSchema chunks incorrectly creating hundreds of small chunks
Submitted: 5 Feb 3:18 Modified: 14 Mar 20:40
Reporter: Robert Roland Email Updates:
Status: Can't repeat Impact on me:
Category:MySQL Server: Document Store: MySQL Shell Severity:S2 (Serious)
Version:8.0.23 OS:Microsoft Windows (10 x64)
Assigned to: MySQL Verification Team CPU Architecture:x86 (i9 9900K)
Tags: Chunking, dump, util.dumpInstance, util.dumpSchema

[5 Feb 3:18] Robert Roland
util.dumpSchema chunks incorrectly, producing hundreds of small chunks that take progressively longer to generate as the algorithm advances through the primary index of a very large table.

------------ event description -----------

The 8.0.23 server was started using innodb_force_recovery = 6.

Using MySQL Shell Dump 1.0.2 (Distrib Ver 8.0.23 for Win64 on x86_64) against MySQL 8.0.23, the following command was executed after the connection was established:

util.dumpSchemas(["ngrams"], "e:/dumps/ngrams", {ocimds:true, compatibility:"strip_definers", threads:7, routines:true})

Initially, several tables in the schema dumped using 2-3 threads in a fairly normal manner. For some of these tables the chunks were small but reasonable. In every case the number of chunks predicted by the utility was much lower than the number actually produced; a typical example is the utility stating it will dump a table in 11 chunks but actually producing 25.

We then reached a table which chunked so poorly that we had to kill the operation after several hours, during which it produced hundreds of tiny chunks of about 150k rows each.

The progress for this table predictably and progressively slowed as each chunk's SELECT query ranged further and further down the primary key; when we killed the dump, the transfer rate was under 100 K/s.
This was compounded by the fact that, for reasons unknown, only one thread was assigned to compressing and writing chunks for this particular table.
The poorly chunked table has 75,317,850 rows and 11 columns, each varchar(30) or mediumint, with a total data length of 30.3 GB. The table's first column is the primary key, a varchar(30) in utf8mb4.

The chunking algorithm used by the utility stated that it would produce approximately 80 chunks, but we killed the process after more than 500 chunks had been created for this single table. When we killed it, the utility was also reporting over 100% progress on this table (~106%).
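For what it's worth, the reported numbers are internally consistent. A quick back-of-the-envelope check (all figures taken from the description above) shows the chunk size the utility must have assumed to predict 80 chunks, versus the chunk count actually implied by the ~150k-row chunks it emitted:

```python
import math

total_rows = 75_317_850            # row count reported for the problem table
predicted_chunks = 80              # what the utility announced
observed_rows_per_chunk = 150_000  # approximate size of the tiny chunks produced

# Rows per chunk the utility must have assumed to predict 80 chunks:
assumed_rows_per_chunk = total_rows / predicted_chunks

# Chunks actually needed at the observed ~150k rows per chunk:
actual_chunks = math.ceil(total_rows / observed_rows_per_chunk)

print(f"assumed rows/chunk: {assumed_rows_per_chunk:,.0f}")
print(f"chunks at ~150k rows each: {actual_chunks}")
```

At ~150k rows per chunk the table needs roughly 500 chunks rather than 80, which matches both the chunk count we observed and, plausibly, the progress counter exceeding 100%.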

I suspect that the issue may arise because table statistics are not available since we are in an innodb_force_recovery = 6 situation.

It should be noted that the utility warns, for each table, that table statistics are not available and suggests running ANALYZE TABLE.

However, we cannot write table statistics in our situation, since the server is in read-only mode.

The background event that put us in recovery mode: our 8.0.23 server unfortunately hit exception 0xc0000005 while InnoDB redo logging was disabled. As a result we cannot initialize the data dictionary, and we are forced to dump the entire instance, reinstall/initialize a new server, and re-populate it from backups.

How to repeat:
Create a large table (>10M rows) with a primary key

Start server using innodb_force_recovery = 6

Execute util.dumpSchemas command to dump the schema using default settings.

Suggested fix:
In situations where table statistics are unavailable, the utility should fall back to other (slightly more time-consuming) methods for estimating data length/table size.


Perhaps force chunks of a certain (large) minimum size if table statistics are unavailable. (The fix for Bug #31909408 apparently did not cover this particular situation.)
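One possible shape for such a fallback (a hypothetical sketch only; the names and the minimum size are illustrative, not MySQL Shell's actual implementation): when statistics are missing, derive the chunk count from an exact row count and clamp the chunk size to a large minimum, so a mis-estimate can never degenerate into hundreds of tiny chunks.

```python
import math

# Illustrative minimum chunk size to apply when table statistics are
# unavailable (e.g. under innodb_force_recovery); value is an assumption.
MIN_ROWS_PER_CHUNK = 1_000_000

def plan_chunks(total_rows, target_rows_per_chunk):
    """Return the number of chunks, enforcing a large minimum chunk size."""
    rows_per_chunk = max(target_rows_per_chunk, MIN_ROWS_PER_CHUNK)
    return max(1, math.ceil(total_rows / rows_per_chunk))

# Without the clamp, a bad 150k-row target would yield ~500 chunks for the
# 75M-row table from this report; the clamp caps it at 1M-row chunks.
print(plan_chunks(75_317_850, 150_000))  # -> 76
```

A few oversized chunks cost little compared to hundreds of tiny ones, since each chunk's range SELECT gets slower the further it reaches down the primary key.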
[5 Feb 17:20] Robert Roland
A temporary workaround is to use the option chunking:false; however, the dump will then be single-threaded.
[27 Feb 4:32] MySQL Verification Team

I'm having some issues reproducing this. 
Why are you starting with force recovery? Is the table corrupted?

[14 Mar 20:40] MySQL Verification Team

I'm not reproducing this