Bug #61871 While using UDFs, UPDATE command crashes
Submitted: 14 Jul 2011 17:59  Modified: 27 Jul 2011 22:50
Reporter: Salman Toor
Status: Not a Bug
Category: MySQL Server: User-defined functions (UDF)  Severity: S1 (Critical)
Version: 5.5.12  OS: Linux (Scientific Linux SL release 5.3)
Assigned to:  CPU Architecture: Any

[14 Jul 2011 17:59] Salman Toor
Description:
Hi, 
I have written a UDF that returns a big BLOB. When I run the UDF in a SELECT statement it works fine, but if I use the UDF in an UPDATE statement it returns an error message (it tries to do something and then crashes).

Everything works fine as long as the BLOB is up to about 350 MB; if it gets bigger, only the UPDATE statement crashes, while SELECT still works fine.

I am very new to MySQL. I have also increased key-buffer-size to 900M and max_allowed_packet to 1G in my.cnf.

From the log file:
-------------
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=943718400
read_buffer_size=131072
max_used_connections=2
max_threads=151
thread_count=1
connection_count=1
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1251280 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0xa6fbeb0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x707ce398 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2d)[0x83c0cc7]
/usr/sbin/mysqld(handle_segfault+0x483)[0x811db75]
[0x6aa420]
/usr/sbin/mysqld[0x8525e96]
/usr/sbin/mysqld[0x8469f95]
/usr/sbin/mysqld[0x846a261]
/usr/sbin/mysqld[0x84ab61c]
/usr/sbin/mysqld[0x8468353]
/usr/sbin/mysqld[0x846bc61]
/usr/sbin/mysqld[0x846c5fc]
/usr/sbin/mysqld[0x84567ba]
/usr/sbin/mysqld[0x8445471]
/usr/sbin/mysqld(_ZN7handler13ha_update_rowEPKhPh+0x71)[0x829c2c3]
/usr/sbin/mysqld(_Z12mysql_updateP3THDP10TABLE_LISTR4ListI4ItemES6_PS4_jP8st_ordery15enum_duplicatesbPySB_+0x17c9)[0x820c96b]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x18aa)[0x819878d]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x147)[0x819e027]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1803)[0x81a015a]
/usr/sbin/mysqld(_Z10do_commandP3THD+0xdf)[0x81a07e9]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x365)[0x82388c3]
/usr/sbin/mysqld(handle_one_connection+0x49)[0x8238920]
/lib/libpthread.so.0[0xa69832]
/lib/libc.so.6(clone+0x5e)[0x9a8f6e]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0xa705c28): update aTriples set value=file_to_blob("/tmp/10000_9000_D.txt", "/tmp/D.dat") where taskId=2
Connection ID (thread ID): 4
Status: NOT_KILLED

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
110714 19:41:55 mysqld_safe Number of processes running now: 0
110714 19:41:55 mysqld_safe mysqld restarted
110714 19:41:55 [Note] Plugin 'FEDERATED' is disabled.
110714 19:41:55 InnoDB: The InnoDB memory heap is disabled
110714 19:41:55 InnoDB: Mutexes and rw_locks use InnoDB's own implementation
110714 19:41:55 InnoDB: Compressed tables use zlib 1.2.3
110714 19:41:55 InnoDB: Using Linux native AIO
110714 19:41:56 InnoDB: Initializing buffer pool, size = 128.0M
110714 19:41:56 InnoDB: Completed initialization of buffer pool
110714 19:41:56 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
110714 19:41:56  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
110714 19:41:57  InnoDB: Waiting for the background threads to start
110714 19:41:58 InnoDB: 1.1.6 started; log sequence number 4695822889
110714 19:41:58 [Note] Event Scheduler: Loaded 0 events
110714 19:41:58 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.12'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
------------------------------------------

How to repeat:
Details: 

mysql> select file_to_blob("/tmp/10000_9000_D.txt", "/tmp/D.dat"); 

This returns a big binary blob and works without any problem.
But when I use it with the UPDATE statement it gives the following error message:

mysql> update aTriples set value=file_to_blob("/tmp/10000_9000_D.txt","/tmp/D.dat") where taskId=2;
ERROR 2013 (HY000): Lost connection to MySQL server during query
[14 Jul 2011 18:25] Sveta Smirnova
Thank you for the report.

Please send us the code of your UDF.
[15 Jul 2011 13:00] Salman Toor
File-to-blob conversion code.

Attachment: manageBLOB_mysql.cc (application/octet-stream, text), 20.82 KiB.

[15 Jul 2011 13:01] Salman Toor
Hi again,

Thanks for your response. The attachment contains the source code of the UDF. I have removed some of the functionality to keep things simple, but I have verified that the problem can be reproduced with this code.

How to run the code:
I am assuming that you have built the library file and placed it in the plugin directory.
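For illustration, registering such a UDF uses the standard CREATE FUNCTION syntax; the shared-library name below is only an assumption based on the attached source file name:

mysql> CREATE FUNCTION file_to_blob RETURNS STRING SONAME 'manageBLOB_mysql.so';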

Command-1:
mysql> select file_to_blob("/tmp/10000_9000_D.txt", "/tmp/D.dat");

The first argument is the file that contains all the data. You can download the file from the following link:
http://user.it.uu.se/~salto690/10000_9000_D.txt

The second argument is used for another purpose that is not relevant for reproducing the problem, so you can pass an empty string.

This command returns a blob, and it works fine.
 
Command-2:
mysql> update aTriples set value=file_to_blob("/tmp/10000_9000_D.txt", "/tmp/D.dat") where taskId=2;

You can use the UPDATE statement to fill the blob field with the returned value. This command returns the following error message:
ERROR 2013 (HY000): Lost connection to MySQL server during query

I really want to solve this problem and would be very thankful for any suggestions.

Regards..
Salman.
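For context, the standard MySQL UDF interface for a function like this is an init/main/deinit triple, and the returned buffer must stay valid until deinit. The sketch below is purely illustrative and is not the attached manageBLOB_mysql.cc; the second argument and most error handling are omitted.

---------------------------
#include <mysql.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* init: validate arguments and prepare a slot for the buffer */
extern "C" my_bool file_to_blob_init(UDF_INIT *initid, UDF_ARGS *args,
                                     char *message)
{
    if (args->arg_count != 2 || args->arg_type[0] != STRING_RESULT)
    {
        strcpy(message, "usage: file_to_blob(<data file>, <aux file>)");
        return 1;                      /* refuse: aborts the statement */
    }
    initid->ptr = NULL;                /* will hold the file contents */
    initid->maybe_null = 1;
    return 0;
}

/* main: read the whole file into memory and return it as one value */
extern "C" char *file_to_blob(UDF_INIT *initid, UDF_ARGS *args,
                              char *result, unsigned long *length,
                              char *is_null, char *error)
{
    FILE *f = fopen(args->args[0], "rb");
    if (!f) { *is_null = 1; return NULL; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    /* The buffer must outlive this call, so keep it in initid->ptr
       and release it in deinit. For a 700 MB file this is a 700 MB
       allocation inside mysqld. */
    initid->ptr = (char *) malloc((size_t) size);
    if (initid->ptr == NULL ||
        fread(initid->ptr, 1, (size_t) size, f) != (size_t) size)
    {
        fclose(f);
        *error = 1;
        return NULL;
    }
    fclose(f);

    *length = (unsigned long) size;
    return initid->ptr;
}

/* deinit: free the buffer allocated above */
extern "C" void file_to_blob_deinit(UDF_INIT *initid)
{
    free(initid->ptr);
}
---------------------------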
[15 Jul 2011 14:40] Salman Toor
Hi again, 

Another interesting finding: I dumped the binary data (that I want to store in the blob field) to a simple flat file and then used the load_file() function, and that gives the same result.

mysql> update aTriples set value=load_file("/tmp/D.dat") where taskId=2;
ERROR 2013 (HY000): Lost connection to MySQL server during query

The size of the D.dat file is 720 MB. Maybe something else is not correct...

It seems like I need to tune some parameters?

I really need some help. 

Thanks in advance. 

Regards..
Salman.
[18 Jul 2011 19:55] Sveta Smirnova
Thank you for the data provided.

The last case can simply be the result of a small max_allowed_packet. Please check whether mysqld is up and running after you run this command. Check the error log file and send us your configuration file as well.
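For reference, the effective limit can be checked from the client with a standard command:

mysql> SHOW VARIABLES LIKE 'max_allowed_packet';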
[18 Jul 2011 22:12] Salman Toor
The following is the configuration on my machine:

[sztoor@sal6 ~]$ cat /etc/my.cnf 
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1

# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
symbolic-links=0
key-buffer-size=900M
max_allowed_packet = 1G
wait_timeout = 8000

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[client]
max_allowed_packet = 1G

And the log messages I have already posted.

Regards..
Salman.
[21 Jul 2011 14:22] Salman Toor
Hi again, 

I think I have found the problem. Since I am using longblobs, when the process requires a very large amount of memory, MySQL crashes and gets restarted.

Previously I could not manage to get a query response of more than 200 MB. Then I read on the official MySQL site

http://dev.mysql.com/doc/refman/5.5/en/packet-too-large.html

that there might be problems when handling large binary objects, and it suggests setting the memory limit using ulimit. I did that, and then I managed to handle around 500 MB of data. I have also set the following parameters in my my.cnf file:

---------------------------
[mysqld]
max_allowed_packet = 1G
innodb_log_file_size = 1G
key-buffer-size=900M
wait_timeout = 8000

net_read_timeout = 1000
net_write_timeout = 1000
connect_timeout = 100000
query_cache_limit = 4294967295

[client]
max_allowed_packet = 1G
----------------------------

I am not sure whether I need all these parameters. When I could not find any solution, I just started increasing the variable limits to higher values. I think

max_allowed_packet
innodb_log_file_size
key-buffer-size 
wait_timeout

are required, but I am not sure about the others.

I have a machine with 2 GB of RAM. Now I have started my server using the following command:

# ulimit -m 500000 && sudo mysqld_safe&

I increased the limit to handle the large data and it is working.
But I still need to handle even bigger data, so I increased the limit to 1 GB:

# ulimit -m 1000000 && sudo mysqld_safe&

But then, after working fine for some more cases, I started to get the same error (2013) again. This time I suspected that maybe I am running out of memory. I checked that with:

# free -t -m 

In my test, the free memory was initially around 1 GB; when I started the query, the free memory started to decrease, and when only 30 MB was left the mysql daemon just restarted.
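That observation matches a rough back-of-envelope estimate: to store a 720 MB value through UPDATE, the server transiently holds more than one copy of the data (for example, the statement buffer plus the new column value). Assuming at least two copies:

2 x 720 MB = 1440 MB, plus the configured buffers, on a 2 GB machine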

Now I think the problem is solved, or at least I understand how things work, and it is time to use a machine with more memory.

Can anybody please tell me whether I am thinking along the right lines, or whether there are other things I should look at?

Looking forward to some response.

Regards..
Salman.
[27 Jul 2011 22:50] Sveta Smirnova
Thank you for the feedback.

You are thinking in the right way, and this is actually not a MySQL bug, so I am closing it as such.

You should periodically run FLUSH TABLES to free memory and prevent issues after a few runs of your UDF.