Bug #109447 | Improve mysql-shell support when loading / dumping large tables and docs | | |
---|---|---|---
Submitted: | 21 Dec 2022 9:09 | Modified: | 21 Dec 2022 12:10 |
Reporter: | Simon Mudd (OCA) | Email Updates: | |
Status: | Verified | Impact on me: | |
Category: | Shell Dump & Load | Severity: | S4 (Feature request) |
Version: | 8.0.31 | OS: | Any |
Assigned to: | CPU Architecture: | Any | |
Tags: | performance, shell, windmill |
[21 Dec 2022 9:09]
Simon Mudd
[21 Dec 2022 12:10]
MySQL Verification Team
Hello Simon, thank you for the feature request! Regards, Umesh
[21 Dec 2022 13:12]
Pawel Andruszkiewicz
Posted by developer: Regarding the functionality requests:

- util.exportTable() is designed to dump into a single output file using a single thread.
- util.importTable() can load a single file using multiple threads, but only if the input file is not compressed.
- A single table can be dumped into multiple files in parallel using e.g. `util.dumpTables('s', ['t'], 'out')`, and then loaded in parallel using `util.loadDump('out')`.
- All of the util.dump*() utilities store the value of the gtid_executed server variable in the `@.json` metadata file, which can later be restored automatically using the `updateGtidSet` load option.
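As a sketch of the parallel dump-and-load workflow the developer describes (the hostnames, user, schema `s`, table `t`, and thread counts below are illustrative, and a reachable MySQL server is assumed):

```shell
# Dump table s.t into multiple chunk files in parallel on the source server
mysqlsh user@source-host --js -e "util.dumpTables('s', ['t'], 'out', {threads: 8})"

# Load the dump in parallel on the target; updateGtidSet restores the
# gtid_executed value recorded in the dump's @.json metadata file
mysqlsh user@target-host --js -e "util.loadDump('out', {threads: 8, updateGtidSet: 'append'})"
```

Note that `updateGtidSet: 'append'` requires MySQL 8.0+ on the target; on 5.7 only `'replace'` (with additional preconditions) is available.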