Bug #61871 | While using UDFs, UPDATE command crashes | ||
---|---|---|---|
Submitted: | 14 Jul 2011 17:59 | Modified: | 27 Jul 2011 22:50 |
Reporter: | Salman Toor | Email Updates: | |
Status: | Not a Bug | Impact on me: | |
Category: | MySQL Server: User-defined functions ( UDF ) | Severity: | S1 (Critical) |
Version: | 5.5.12 | OS: | Linux (Scientific Linux SL release 5.3) |
Assigned to: | | CPU Architecture: | Any
[14 Jul 2011 17:59]
Salman Toor
[14 Jul 2011 18:25]
Sveta Smirnova
Thank you for the report. Please send us the code of your UDF.
[15 Jul 2011 13:00]
Salman Toor
File to blob conversion code.
Attachment: manageBLOB_mysql.cc (application/octet-stream, text), 20.82 KiB.
[15 Jul 2011 13:01]
Salman Toor
Hi again,

Thanks for your response. The attachment contains the source code of the UDF. I have removed some of the functionality to keep things simple, but I have verified that you can reproduce the problem with this code.

How to run the code (I am assuming you have built the library file and placed it in the plugin directory):

Command 1:

mysql> select file_to_blob("/tmp/10000_9000_D.txt", "/tmp/D.dat");

The first argument is the file that contains all the data. You can download it from the following link: http://user.it.uu.se/~salto690/10000_9000_D.txt The second argument is used for another purpose that is not relevant for reproducing the problem, so you can pass an empty string. This command returns a blob and works fine.

Command 2:

mysql> update aTriples set value=file_to_blob("/tmp/10000_9000_D.txt", "/tmp/D.dat") where taskId=2;

The UPDATE command should fill the blob field with the returned value, but instead it returns the following error message:

ERROR 2013 (HY000): Lost connection to MySQL server during query

I really want to solve this problem and would be highly thankful for any suggestions.

Regards..
Salman.
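For context, the registration step that precedes Command 1 would look roughly like this (a sketch; the shared-library name manageBLOB_mysql.so is an assumption based on the attachment's file name, and the plugin-directory placement is as described above):

-- Hypothetical registration, assuming the attachment compiles to
-- manageBLOB_mysql.so inside the server's plugin directory:
CREATE FUNCTION file_to_blob RETURNS STRING SONAME 'manageBLOB_mysql.so';

-- Confirm the UDF is registered before running the test commands:
SELECT name, dl FROM mysql.func WHERE name = 'file_to_blob';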
[15 Jul 2011 14:40]
Salman Toor
Hi again,

Another interesting finding: I dumped the binary data (that I want to store in the blob field) to a simple flat file and then used the load_file() function, which gives the same result:

mysql> update aTriples set value=load_file("/tmp/D.dat") where taskId=2;
ERROR 2013 (HY000): Lost connection to MySQL server during query

The size of the D.dat file is 720MB. Maybe something else is not correct... It seems like I need to tune some parameter? I really need some help. Thanks in advance.

Regards..
Salman.
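A quick check that narrows this down (a sketch; it assumes the server can read /tmp/D.dat): load_file() returns NULL rather than an error when the file is larger than max_allowed_packet, so comparing the file size against the variable shows whether the packet limit is even in play before the crash:

SHOW VARIABLES LIKE 'max_allowed_packet';

-- NULL here means the file is unreadable by the server or exceeds
-- max_allowed_packet; a number means the file itself loads fine:
SELECT LENGTH(LOAD_FILE('/tmp/D.dat'));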
[18 Jul 2011 19:55]
Sveta Smirnova
Thank you for the data provided. The last case can simply be the result of a small max_allowed_packet. Please check whether mysqld is up and running after you run this command. Check the error log file and send us your configuration file as well.
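A minimal way to perform that check from the client (a sketch): a very small Uptime value immediately after the ERROR 2013 indicates the server died and was restarted by mysqld_safe, as opposed to the connection merely being dropped:

SHOW GLOBAL STATUS LIKE 'Uptime';
SHOW VARIABLES LIKE 'max_allowed_packet';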
[18 Jul 2011 22:12]
Salman Toor
Following is the configuration on my machine:

[sztoor@sal6 ~]$ cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
symbolic-links=0
key-buffer-size=900M
max_allowed_packet = 1G
wait_timeout = 8000

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[client]
max_allowed_packet = 1G

And the log message I have already posted.

Regards..
Salman.
[21 Jul 2011 14:22]
Salman Toor
Hi again,

I think I have found the problem. Since I am using longblobs, when the process requires a very large amount of memory, MySQL crashes and gets restarted. Previously I could not manage to get a query response larger than 200MB. Then I read on the MySQL official site, http://dev.mysql.com/doc/refman/5.5/en/packet-too-large.html, that there might be problems while handling large binary objects, and they suggest setting the memory limit using ulimit. I did that and then managed to handle around 500MB of data. I have also set the following parameters in my my.cnf file:

---------------------------
[mysqld]
max_allowed_packet = 1G
innodb_log_file_size = 1G
key-buffer-size=900M
wait_timeout = 8000
net_read_timeout = 1000
net_write_timeout = 1000
connect_timeout = 100000
query_cache_limit = 4294967295

[client]
max_allowed_packet = 1G
----------------------------

I am not sure whether I need all these parameters. When I didn't find any solution, I simply started to increase the variable limits to higher values. I think max_allowed_packet, innodb_log_file_size, key-buffer-size, and wait_timeout are required, but I am not sure about the others.

I have a machine with 2GB of RAM. I started my server using the following command:

# ulimit -m 500000 && sudo mysqld_safe&

I increased the limit to handle the large data and it worked. But I still need to handle even bigger data, so I increased the limit to 1GB:

# ulimit -m 1000000 && sudo mysqld_safe&

It then worked fine for some more cases but started to give the same error (2013) again. This time I suspect that I am running out of memory. I checked that with:

# free -t -m

In my test, the free memory was initially around 1GB; when I started the query, the free memory began to decrease, and when only 30MB was left the mysql daemon just restarted.

Now I think the problem is solved, or at least I understand how things are working, and it is time to use a machine with more memory. Can anybody please guide me on whether I am thinking in the right way, or whether there are other things I should notice? Looking forward to some response.

Regards..
Salman.
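As an aside (a sketch using the values from the config above): max_allowed_packet is dynamic and can be raised without restarting the server; the new value applies to connections opened after the change. innodb_log_file_size, by contrast, is not dynamic on 5.5 and requires a clean shutdown, plus removal of the old ib_logfile* files, before a new size takes effect.

SET GLOBAL max_allowed_packet = 1073741824; -- 1G, for new connections
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';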
[27 Jul 2011 22:50]
Sveta Smirnova
Thank you for the feedback. You are thinking in the right way, and this is actually not a MySQL bug, so I am closing it as such. You should periodically run FLUSH TABLES to free memory and prevent issues after a few runs of your UDF.
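For reference, the suggested maintenance step (the schedule is illustrative, not prescriptive):

FLUSH TABLES; -- closes all open tables and frees the associated buffers

This can be issued manually after a batch of large blob updates, or scheduled externally via mysqladmin flush-tables.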