Bug #55807 | Last_IO_Errno: 1236 and mysqlbinlog unable to read log with --start-position=N | |
---|---|---|---
Submitted: | 6 Aug 2010 18:14 | Modified: | 17 Aug 2010 18:23
Reporter: | Ben Vins | Email Updates: |
Status: | Not a Bug | Impact on me: |
Category: | MySQL Server: Replication | Severity: | S1 (Critical)
Version: | 4.1.22, 5.1.40 | OS: | Windows
Assigned to: | | CPU Architecture: | Any
Tags: | Last_IO_Errno: 1236 | |
[6 Aug 2010 18:14]
Ben Vins
[9 Aug 2010 18:17]
Sveta Smirnova
Thank you for the report. I cannot repeat the described behavior with a test binary log. Please upload the problem binary log to us as described in the "Files" section. Can you repeat the same issue with smaller binary log files? If so, please send us a smaller binary log.
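A minimal way to check whether a given binary log is readable by mysqlbinlog, including from a specific offset as in the bug title, is sketched below; the file name and position are only examples taken from later comments in this report.

```
# Parse the whole file; corruption should surface as an error here.
mysqlbinlog GS2STDB01-bin.000006 > /dev/null

# Parse starting from a specific byte offset (the slave in this report
# stops at position 79 of this file).
mysqlbinlog --start-position=79 GS2STDB01-bin.000006 > /dev/null
```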
[12 Aug 2010 9:32]
Ben Vins
OK, I just got the problem once more. My binlog is only 57 MB, but it is still too big to upload to this bug report. Is there another way to get this data to you?
[12 Aug 2010 16:59]
Sveta Smirnova
Thank you for the feedback. As described in "Files" section: If the data you need to attach is more than 500KB, you should create a compressed archive of the data and a README file that describes the data with a filename that includes the bug number (example: bug-data-55807.zip), and use FTP to upload the archive to ftp://ftp.mysql.com/pub/mysql/upload/. Once you have uploaded the file, add a comment to this bug to notify us about it. Note: This directory is unlistable, which means that once you have uploaded your file, you will not be able to see it.
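As an illustration only, packaging and uploading the log per the instructions above might look like the following; the archive contents and the interactive ftp commands are assumptions, not an exact transcript.

```
# Create a compressed archive named after the bug, plus a README describing it.
zip bug-data-55807.zip GS2STDB01-bin.000006 README.txt

# Upload it to the (unlistable) MySQL upload directory via FTP.
ftp ftp.mysql.com
# ftp> cd pub/mysql/upload
# ftp> binary
# ftp> put bug-data-55807.zip
# ftp> quit
```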
[13 Aug 2010 10:20]
Ben Vins
Hi Sveta, before I send you the binlog file, could you confirm that I am sending the correct one? Basically, I am confused about what is happening. Something seems to have happened that made the master stop writing to binlog 00005 before reaching the 1G limit and create the 00006 one. I would expect to see replication break on the slave while reading binlog 00005, not at the first entry (position 79) of binlog 00006. Binlog 00006 does not seem that badly corrupted, as the master carried on writing to it up to the 1G limit, then created binlog 00007 and so on, and is still running. Do you have an idea of what is happening? Do you want GS2STDB01-bin.000005 or GS2STDB01-bin.000006?

Here is what I am seeing:

GS2STDB01-bin.000001 -> 1,048,585 KB
GS2STDB01-bin.000002 -> 1,048,585 KB
GS2STDB01-bin.000003 -> 1,048,585 KB
GS2STDB01-bin.000004 -> 1,048,585 KB
GS2STDB01-bin.000005 -> 357,786 KB
GS2STDB01-bin.000006 -> 1,048,585 KB
GS2STDB01-bin.000007 -> 1,048,585 KB
etc...

I picked up the GS2STDB01-bin.000006 file when it was still only 56.4 MB, but as the master is live, it has carried on growing.

The slave shows:

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 10.xx.xxx.24
Master_User: bfReplication
Master_Port: 3306
Connect_Retry: 200
Master_Log_File: GS2STDB01-bin.000006
Read_Master_Log_Pos: 79
Relay_Log_File: slave-relay-bin.000017
Relay_Log_Pos: 228
Relay_Master_Log_File: GS2STDB01-bin.000006
Slave_IO_Running: No
Slave_SQL_Running: Yes
Replicate_Do_DB: bfresult
Replicate_Ignore_DB:
Replicate_Do_Table: bfresult.session_fact_hour,bfresult.bfmediaplexgoals_fact_hour,bfresult.bfpageloadtime_fact_hour,bfresult.campaigngoals_fact_hour,bfresult.browser_fact_hour,bfresult.formfields_fact_hour,bfresult.sitegrip_fact_hour,bfresult.formattempts_fact_hour,bfresult.formsequences_fact_hour,bfresult.pages2_fact_hour,bfresult.sessionmin_fact_5min,bfresult.referrer_fact_hour,bfresult.pages_fact_hour,bfresult.formvalues_fact_hour,bfresult.SEARCHENGINE_FACT_HOUR,bfresult.popularobjects_fact_hour
Replicate_Ignore_Table:
Replicate_Wild_Do_Table: bfresult.st_%
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 79
Relay_Log_Space: 436
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'bogus data in log event'
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.30 sec)
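For cross-checking the unexpected rotation described above, the standard statements below show the master's log list and current write position and the slave's failing coordinates; this is a sketch assuming shell access with suitable privileges on both servers.

```
# On the master: list binary logs and their sizes to see where the early
# rotation from GS2STDB01-bin.000005 to .000006 happened, and where the
# master is currently writing.
mysql -e "SHOW BINARY LOGS;"
mysql -e "SHOW MASTER STATUS;"

# On the slave: the coordinates it is stuck at (file .000006, position 79).
mysql -e "SHOW SLAVE STATUS\G"
```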
[13 Aug 2010 19:06]
Sveta Smirnova
Thank you for the feedback. I need any binary log that mysqlbinlog has a problem reading. It would be better if it is a closed (not currently used by the master) and smaller binary log.
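One way to obtain a closed binary log from a live master, assuming rotating the logs is acceptable on this server, is to flush the logs so the problem file is no longer being written to:

```
# FLUSH LOGS closes the current binary log and opens the next one.
mysql -e "FLUSH LOGS;"

# Confirm the master has moved on to a new file before copying the old one.
mysql -e "SHOW MASTER STATUS;"
```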
[17 Aug 2010 11:34]
Ben Vins
Hi Sveta, I have now uploaded a corrupted binlog to your FTP. Please let me know if you need anything else. Regards, Ben
[17 Aug 2010 18:23]
Sveta Smirnova
Thank you for the feedback. However, the binary log really is corrupted and cannot be read even by mysqlbinlog from a 4.1 installation; therefore this is not a bug. max_allowed_packet can be set as high as 1G. As a workaround, you can extract the queries using the `strings` UNIX command, then edit that file manually so that the data can be loaded into the slave.
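A rough sketch of that workaround follows; the file and output names are examples, and the recovered text must be edited by hand before it is safe to apply to the slave.

```
# Extract the printable statements from the corrupted binary log.
strings GS2STDB01-bin.000006 > bug55807-recovered.sql

# Manually edit bug55807-recovered.sql to keep only complete, valid
# statements, then replay it on the slave. max_allowed_packet can be
# raised to 1G if individual statements are very large.
mysql --max_allowed_packet=1G < bug55807-recovered.sql
```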