Bug #18771 | QB access violation on large result sets | |
---|---|---|---|
Submitted: | 4 Apr 2006 13:16 | Modified: | 6 Nov 2006 10:21 |
Reporter: | Andreas Götz | Email Updates: | |
Status: | Closed | Impact on me: | |
Category: | MySQL Query Browser | Severity: | S2 (Serious) |
Version: | 1.0.20/1.2.4rc | OS: | Windows (WinXP SP2, Win2kSP4) |
Assigned to: | Mike Lischke | CPU Architecture: | Any |
Tags: | Result Set |
[4 Apr 2006 13:16]
Andreas Götz
[4 Apr 2006 14:21]
MySQL Verification Team
Verified as described. I used 138,000 records in a table:

mysql> select count(*) from bug18771;
+----------+
| count(*) |
+----------+
|   138000 |
+----------+
1 row in set (0.00 sec)

mysql> show create table bug18771\G
*************************** 1. row ***************************
       Table: bug18771
Create Table: CREATE TABLE `bug18771` (
  `id` bigint(20) NOT NULL auto_increment,
  `c2` varchar(100) default NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

I then issued the query "SELECT * FROM bug18771", and the error appears in the bottom toolbar after 130,000 records are displayed.
[4 Apr 2006 14:28]
Andreas Götz
I've found that my query was faulty: it was missing one join parameter, which is what created the large result set: ... and cf.value like '80%'. However, the crash is still unexpected behaviour.
[29 Jun 2006 10:47]
Mike Lischke
Thank you for your bug report. This issue has been committed to our source repository of that product and will be incorporated into the next release. If necessary, you can access the source repository and build the latest available version, including the bug fix. More information about accessing the source trees is available at http://www.mysql.com/doc/en/Installing_source_tree.html
[4 Aug 2006 23:30]
Morgan Tocker
This did not appear to fix the issue, re-opening bug.
[8 Sep 2006 7:45]
Mike Lischke
Well, sorry, but I cannot reproduce this bug. It would be very helpful if you could use commonly available data, e.g. the Sakila schema, to demonstrate the problem. I used SELECT * FROM payment, rental; (using the Sakila schema), which produces a very large result set. I let QB retrieve more than one million records from that result set without any problem; it used ~500 MB of RAM at that point. I did not touch QB during retrieval, except to stop the query after I got 1M rows. Mike
[11 Oct 2006 1:05]
MySQL Verification Team
QB Crashing on Windows 2003 Server
Attachment: qb-crashing.PNG (image/png, text), 77.20 KiB.
[11 Oct 2006 1:10]
MySQL Verification Team
Still crashing with Shane's table definition (I attached a screenshot):

mysql> CREATE TABLE `bug18771` (
    ->   `id` bigint(20) NOT NULL auto_increment,
    ->   `c2` varchar(100) default NULL,
    ->   PRIMARY KEY (`id`)
    -> ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.05 sec)

mysql> insert into bug18771 (c2) values (repeat('a',100));
Query OK, 1 row affected (0.00 sec)

mysql> insert into bug18771 (c2) select c2 from bug18771;
Query OK, 1 row affected (0.00 sec)
Records: 1  Duplicates: 0  Warnings: 0

<cut>

mysql> insert into bug18771 (c2) select c2 from bug18771;
Query OK, 4194304 rows affected (1 min 43.19 sec)
Records: 4194304  Duplicates: 0  Warnings: 0

mysql> select count(*) from bug18771;
+----------+
| count(*) |
+----------+
|  8388608 |
+----------+
1 row in set (0.41 sec)

mysql>
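(Side note on the reproduction data: each "INSERT ... SELECT" pass above doubles the row count, so starting from one seed row, the 8,388,608 rows shown correspond to exactly 23 doubling passes. A quick sketch, purely illustrative and not part of the report:)

```python
# Each "INSERT INTO bug18771 (c2) SELECT c2 FROM bug18771" doubles the table.
# Starting from 1 seed row, count the passes needed to reach 8,388,608 rows.
rows = 1
passes = 0
while rows < 8_388_608:
    rows *= 2      # one INSERT ... SELECT pass doubles the row count
    passes += 1

print(passes, rows)  # 23 passes yield exactly 2**23 = 8,388,608 rows
```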
[2 Nov 2006 14:32]
Mike Lischke
I tried again and loaded 4 million records into QB. It used ~1 GB of RAM but still did not crash. However, I have 1 GB of swap space configured, so the error might only appear on machines with much less memory available. So I would like to know:

1) How much RAM do you have? How much does QB use when it shows the error message?
2) Can you continue to work with QB once the error message has appeared?

I'm going to implement a check for available memory and show an error box with clear text indicating exhausted memory. However, this does not solve the original problem: the attempt to load more data than can be kept in memory. And since we don't have client-side cursors, there is no way around caching the result set in memory.
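The check Mike describes, stopping a fetch with a clear error instead of exhausting memory and crashing, could look roughly like the following. This is a hypothetical Python sketch, not the actual Query Browser code; the names `MEMORY_LIMIT_BYTES`, `MemoryExhausted`, and `fetch_with_watcher` are all invented for illustration:

```python
# Hypothetical sketch of a "memory watcher": cache result-set rows in memory,
# but abort the query with a clear error once a configured limit is exceeded,
# instead of letting the client run out of memory and crash.

MEMORY_LIMIT_BYTES = 512 * 1024 * 1024  # assumed 512 MB cap (invented value)


class MemoryExhausted(Exception):
    """Raised when the cached result set would exceed the memory limit."""


def fetch_with_watcher(rows, row_size_of):
    """Cache rows from an iterable, stopping cleanly if memory runs out.

    rows        -- iterable yielding result-set rows
    row_size_of -- callable estimating the in-memory size of one row
    """
    cache = []
    used = 0
    for row in rows:
        used += row_size_of(row)
        if used > MEMORY_LIMIT_BYTES:
            # Stop the query with an explicit message rather than crashing.
            raise MemoryExhausted(
                "result set exceeds %d bytes; query stopped after %d rows"
                % (MEMORY_LIMIT_BYTES, len(cache)))
        cache.append(row)
    return cache
```

A small result set is returned whole, while an oversized one raises `MemoryExhausted` partway through, which mirrors the "stop the query and show an error box" behavior described above.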
[6 Nov 2006 10:22]
Mike Lischke
The memory watcher is now implemented; it stops a query if it uses too much memory. The code is committed and will be included in the next release. I'm closing this report because I think the change pretty much fixes the issue. If you still see trouble, don't hesitate to reopen this report.