Description:
Dear experts,
CDbConnection failed to open the DB connection: SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 111
This is a confusing error for PHP developers, so at first we argued about its cause.
After examination we found that it was not a firewall issue, as most Google search results suggested.
In the end we figured out that, due to a partitioning issue on the Linux server, MySQL had raised a disk-full error. Here is a portion of the error log:
2014-04-24 14:42:47 20672 [Warning] Disk is full writing '/var/lib/mysql/data/mysql-bin.000045' (Errcode: 28 - No space left on device). Waiting for someone to free space...
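For reference, free space on the affected partitions can be checked like this (a minimal check; the paths are the ones from the log above):

df -h /var/lib/mysql
df -h /tmp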
There are also different error messages:
2014-04-24 14:39:17 19994 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_4e1a_1.MYI'; try to repair it
2014-04-24 14:39:17 19994 [ERROR] Got an error from unknown thread, /pb2/build/sb_0-11763321-1394824719.47/rpm/BUILD/mysql-5.6.17/mysql-5.6.17/storage/myisam/mi_write.c:226
I think it is also because the /tmp disk is full.
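If /tmp really is full, one workaround is pointing MySQL's temporary directory at a larger partition in my.cnf (a sketch only; /home/mysqltmp is a hypothetical directory that must already exist and be owned by mysql:mysql):

[mysqld]
# assumption: /home/mysqltmp was created beforehand with mysql:mysql ownership
tmpdir = /home/mysqltmp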
More interesting information from the error log:
2014-04-24 14:42:45 20672 [Note] Recovering after a crash using /var/lib/mysql/data/mysql-bin
2014-04-24 14:42:45 20672 [ERROR] Error in Log_event::read_log_event(): 'read error', data_len: 8214, event_type: 31
2014-04-24 14:42:45 20672 [Note] Starting crash recovery...
2014-04-24 14:42:45 20672 [Note] Found 1 prepared transaction(s) in InnoDB
2014-04-24 14:42:45 20672 [Note] Crash recovery finished.
2014-04-24 14:42:45 20672 [Note] Crashed binlog file /var/lib/mysql/data/mysql-bin.000044 size is 7725056, but recovered up to 7669199. Binlog trimmed to 7669199 bytes.
Reading further in the error log:
2014-04-24 14:42:47 20672 [ERROR] Error in Log_event::read_log_event(): 'read error', data_len: 2070, event_type: 29
2014-04-24 14:42:47 20672 [Warning] Error reading GTIDs from binary log: -1
"-1" what is "-1" here?
Moving on:
The server's /home directory has enough space to hold all the binary logs:
/dev/mapper/VolGroup-lv_home 418G available.
I tried to change the binary log path to /home/data:
mkdir data
chown -R mysql:mysql data
[root@slavesrv1 ~]# ls -l /home | grep data
drwxr-xr-x. 2 mysql mysql 4096 2014-04-24 15:44 data
After shutting down MySQL, I moved all binlog files to /home/data (the matching my.cnf change is sketched after the listing):
[root@slavesrv1 ~]# ls -l /home/data/
total 41790100
-rwxr-xr-x. 1 mysql mysql 1074426588 2014-04-24 15:08 mysql-bin.000001
-rwxr-xr-x. 1 mysql mysql 1074200519 2014-04-24 15:09 mysql-bin.000002
-rwxr-xr-x. 1 mysql mysql 1075578308 2014-04-24 15:09 mysql-bin.000003
.
.
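For the server to pick up the new location, my.cnf has to point there as well (a sketch; these are the MySQL 5.6 option names, and the exact values depend on the existing configuration):

[mysqld]
log-bin = /home/data/mysql-bin
log-bin-index = /home/data/mysql-bin.index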
In the old directory I kept only the mysql-bin.index file, which I manually updated with the new paths:
[root@slavesrv1 data]# pwd
/var/lib/mysql/data
[root@slavesrv1 data]# ls
mysql-bin.index mysql-slow.log
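After the edit, every line of mysql-bin.index should carry the new absolute path, along these lines:

/home/data/mysql-bin.000001
/home/data/mysql-bin.000002
/home/data/mysql-bin.000003
...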
But when I start the server, it reports permission denied, even though, as shown above, /home/data is owned by the mysql user.
This server is a slave, so I have lost my slave and at the moment cannot fix this issue.
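One thing worth checking on CentOS, beyond plain ownership, is SELinux: the trailing dot in the drwxr-xr-x. listing above means a security context is attached, and mysqld is normally only allowed to write to directories labeled for it. A sketch, assuming SELinux is enforcing and the policycoreutils tools are installed:

# inspect the current label on the new directory
ls -Zd /home/data
# label the tree for MySQL data and apply the new context
semanage fcontext -a -t mysqld_db_t "/home/data(/.*)?"
restorecon -Rv /home/data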
How to repeat:
Repeating this may be impossible.
However, I had set up replication with one master and one slave server.
Based on the error from the PHP script:
CDbConnection failed to open the DB connection: SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 111
I figured out that it was caused by a disk-full error. MySQL also crashed, and recovery did not work.
Afterwards I wanted to change the binlog directory to /home/data, but that was impossible too.
So: set up replication on CentOS with MySQL 5.6.17 and cause a disk-full error on the slave side. Performance Schema is enabled on both servers.
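A crude way to force the disk-full condition on the slave (fillfile is just a hypothetical scratch file on the data partition; delete it afterwards to recover):

# keep writing zeros until the partition returns ENOSPC (Errcode 28)
dd if=/dev/zero of=/var/lib/mysql/fillfile bs=1M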
Suggested fix:
I cannot find an exact solution.
I will provide the full error log.