Bug #35075 Table-per-file coherent backup
Submitted: 5 Mar 2008 11:54  Modified: 15 Nov 2014 10:34
Reporter: Alexander Motin
Status: Open
Category: MySQL Server: Command-line Clients  Severity: S4 (Feature request)
Version:  OS: Any
Assigned to: Assigned Account  CPU Architecture: Any
Tags: Contribution

[5 Mar 2008 11:54] Alexander Motin
Description:
For incremental database backup purposes I have made a patch for mysqldump that dumps each table into a separate gzipped file with its own headers and replication position. It does this inside a single transaction to keep the data coherent. Every day I dump only the modified tables, and in case of data corruption I can restore only the required tables to a known synchronous state and then restore replication.
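A minimal sketch of the approach described above, assuming the MySQL C API and zlib's gzFile interface; this is illustrative only, not the attached patch, and the connection parameters and table list are placeholders:

  #include <stdio.h>
  #include <mysql.h>
  #include <zlib.h>

  /* Dump one table into its own gzipped file (tab-separated rows here;
     the real patch writes SQL). Error handling omitted for brevity. */
  static void dump_table(MYSQL *conn, const char *table)
  {
    char query[256], fname[256];
    MYSQL_RES *res;
    MYSQL_ROW row;
    unsigned int i, nfields;
    gzFile out;

    snprintf(fname, sizeof(fname), "%s.dump.gz", table);
    out = gzopen(fname, "wb");

    snprintf(query, sizeof(query), "SELECT * FROM `%s`", table);
    mysql_query(conn, query);
    res = mysql_use_result(conn);      /* stream rows instead of buffering */
    nfields = mysql_num_fields(res);
    while ((row = mysql_fetch_row(res)))
    {
      for (i = 0; i < nfields; i++)
        gzprintf(out, "%s%s", i ? "\t" : "", row[i] ? row[i] : "NULL");
      gzprintf(out, "\n");
    }
    mysql_free_result(res);
    gzclose(out);
  }

  int main(void)
  {
    const char *tables[] = { "t1", "t2" };   /* placeholder table list */
    unsigned int i;
    MYSQL *conn = mysql_init(NULL);

    mysql_real_connect(conn, "localhost", "user", "pass", "db", 0, NULL, 0);

    /* One snapshot for all tables keeps the per-file dumps coherent
       (works for transactional engines such as InnoDB). */
    mysql_query(conn, "START TRANSACTION WITH CONSISTENT SNAPSHOT");

    /* Record the replication position once for the per-file headers. */
    mysql_query(conn, "SHOW MASTER STATUS");
    mysql_free_result(mysql_store_result(conn));

    for (i = 0; i < sizeof(tables) / sizeof(tables[0]); i++)
      dump_table(conn, tables[i]);

    mysql_query(conn, "COMMIT");
    mysql_close(conn);
    return 0;
  }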

How to repeat:
A single mysqldump call is required to get a coherent backup, but dumping multiple tables into different files currently requires several mysqldump calls.

Suggested fix:
It would be good to merge this patch or, if it is not perfect, to implement similar functionality in another way.
[5 Mar 2008 11:55] Alexander Motin
mysqldump patch for mysql 5.1.23

Attachment: mysqldump.c.5.1.23.patch (application/octet-stream, text), 4.96 KiB.

[5 Mar 2008 12:40] MySQL Verification Team
Thank you for the bug report and contribution.
[6 Mar 2008 16:25] Jim Winstead
I haven't looked at the patch in detail, but I can say that launching '/usr/bin/gzip' to do the compression isn't going to work. It's not at all cross-platform. A patch to do this is going to need to use zlib for the compression.
[6 Mar 2008 19:46] Chad MILLER
Sorry, Alexander, but this must work on other OSes (where popen() doesn't exist), or on Unix if the gzip program is removed. We can't include this as it is written. Please consider resubmitting it, as I think it would be useful.

We have a top-level zlib/ directory in our server source tree.  It shouldn't be too hard to do exactly what the gzip program does.

If it's more than 10 additional source lines or so, then please sign our contributor license agreement first:
  http://forge.mysql.com/contribute/cla.php
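A hedged sketch of the zlib-based replacement Jim and Chad describe: instead of writing through popen("/usr/bin/gzip > file", "w"), which needs a Unix shell and an external gzip binary, compress with the bundled zlib directly. The function name here is hypothetical:

  #include <zlib.h>

  /* Portable equivalent of piping output through /usr/bin/gzip:
     write the buffer through zlib's gzFile API instead of popen(). */
  int write_gzipped(const char *path, const char *buf, unsigned int len)
  {
    gzFile out = gzopen(path, "wb");
    if (!out)
      return -1;
    if (gzwrite(out, buf, len) != (int) len)
    {
      gzclose(out);
      return -1;
    }
    return gzclose(out) == Z_OK ? 0 : -1;
  }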
[6 Apr 2008 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[7 Apr 2008 10:07] Susanne Ebrecht
Alexander,

many thanks for writing a patch. Chad Miller asked you for some details. I will set this back to "need feedback" so that you can discuss this here with Chad a little more.

Susanne
[7 May 2008 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[15 Nov 2014 10:34] Daniël van Eeden
Duplicate of Bug #6945
[11 May 2018 18:29] Sachin Tiwari
This is an important feature request from the perspective of end users with large databases (~125 GB or so). For a database of such size, it would be great to have the capability of taking a coherent file-per-table backup, as that would allow the user to restore the data in parallel from these files and thus speed up the restoration process.