Bug #89183 | InnoDB: Assertion failure in thread 5416 in file btr0pcur.cc line 452 | |
---|---|---|---
Submitted: | 11 Jan 2018 9:56 | Modified: | 26 Jan 2018 13:39
Reporter: | O. Altun | Email Updates: |
Status: | Not a Bug | Impact on me: |
Category: | MySQL Server: InnoDB storage engine | Severity: | S1 (Critical)
Version: | 5.7.20-21 x64 | OS: | Windows (10 x64)
Assigned to: | | CPU Architecture: | Any
[11 Jan 2018 9:56]
O. Altun
[16 Jan 2018 21:40]
O. Altun
Upgrading to 5.7.21 didn't solve the problem.
[17 Jan 2018 6:36]
MySQL Verification Team
Hi,

Please show us the table structures for the tables involved. Also, please see if adding this to my.ini helps:

    [mysqld]
    internal-tmp-disk-storage-engine=MYISAM
[17 Jan 2018 12:21]
O. Altun
This is the configuration file (my.ini):

    [client]
    port=3306

    [mysql]
    no-beep
    default-character-set=utf8

    [mysqld]
    port=3306
    datadir="D:\ProgramData\MySQL\MySQL Server 5.7\Data"
    character-set-server=utf8
    default-storage-engine=INNODB
    sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    log-output=FILE
    general-log=0
    general_log_file="ALTUN-NB.log"
    slow-query-log=1
    slow_query_log_file="ALTUN-NB-slow.log"
    long_query_time=10
    log-error="ALTUN-NB.err"
    server-id=1
    secure-file-priv="D:\ProgramData\MySQL\MySQL Server 5.7\Uploads"
    max_connections=200
    query_cache_size=0
    table_open_cache=2000
    tmp_table_size=16M
    thread_cache_size=10
    myisam_max_sort_file_size=100G
    myisam_sort_buffer_size=8M
    key_buffer_size=400M
    read_buffer_size=1M
    read_rnd_buffer_size=0
    innodb_flush_log_at_trx_commit=1
    innodb_log_buffer_size=8M
    innodb_buffer_pool_size=400M
    innodb_log_file_size=48M
    innodb_thread_concurrency=9
    innodb_autoextend_increment=64
    innodb_buffer_pool_instances=8
    innodb_concurrency_tickets=5000
    innodb_old_blocks_time=1000
    innodb_open_files=300
    innodb_stats_on_metadata=0
    innodb_file_per_table=1
    innodb_checksum_algorithm=0
    back_log=80
    flush_time=0
    join_buffer_size=256K
    max_allowed_packet=200M
    max_connect_errors=100
    open_files_limit=4161
    query_cache_type=0
    sort_buffer_size=1M
    table_definition_cache=1400
    binlog_row_event_max_size=8K
    sync_master_info=10000
    sync_relay_log=10000
    sync_relay_log_info=10000
    innodb_flush_method=normal
    internal-tmp-disk-storage-engine=MYISAM

    [mysqldump]
    max_allowed_packet=200M

Unfortunately, the option "internal-tmp-disk-storage-engine=MYISAM" didn't help. I have asked for permission to provide you with the table structure; that will be a post hidden from the public.
[25 Jan 2018 16:25]
MySQL Verification Team
Hi! We require not only the table structures, but also the table contents and the operation that leads to the assert. In order to verify the bug, we need to be able to repeat it; otherwise, we cannot verify it. Also note that this kind of assert can happen due to hardware and OS problems. Unless you use ECC RAM modules (2-bit error detection, 1-bit correction) and high-quality external storage, such as an SSD, these kinds of asserts are to be expected. That last paragraph is irrelevant if you provide a test case that always leads to the same assert.
[26 Jan 2018 9:10]
O. Altun
I will try to provide you with the data. I have an internal SSD and normal (non-ECC) RAM, but I have never had a problem with the application. Could my DB be corrupted somehow? I had a Windows crash and had to reinstall everything, but I could still use my MySQL data, as I had it on another partition. I also assume the workaround wouldn't work if the data is really corrupted.
[26 Jan 2018 12:54]
O. Altun
I found what the issue is. It is, as I assumed, corrupted data. A SELECT on the table "document" led to the crash; row 294 (and possibly rows above it) was corrupted. XML files are stored in the DB as BLOBs. See the attached log file. I dropped the schema and created a new one. Please qualify and close the bug ticket.
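For anyone hitting a similar assert from corrupted pages, the MySQL documentation describes starting the server with `innodb_force_recovery` so the surviving data can be dumped before dropping the schema. A minimal sketch of the my.ini change (the value 1 is an assumption; the docs advise starting at the lowest level and only raising it if the dump still crashes):

```ini
[mysqld]
# innodb_force_recovery levels range from 1 to 6.
# Start at 1; levels above 3 can permanently corrupt data,
# so use them only to salvage a final mysqldump.
innodb_force_recovery=1
```

With the server running in this mode, dump the affected schema with mysqldump, then remove the option, restart normally, and reload the dump into a freshly created schema.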
[26 Jan 2018 13:39]
MySQL Verification Team
Thank you very much for your feedback. We have now learned of another possible cause of that family of asserts: corrupted data, particularly BLOB data, can cause this. Thank you.