Bug #74785: Bad memory usage on large tables
Submitted: 11 Nov 2014 10:16    Modified: 11 Nov 2014 14:12
Reporter: Brett Pit
Status: Verified
Category: MySQL Workbench: SQL Editor    Severity: S5 (Performance)
Version: 6.2.3    OS: Microsoft Windows (7)
Assigned to:    CPU Architecture: Any
Tags: Memory

[11 Nov 2014 10:16] Brett Pit
Description:
Opening a large table:
- 214,549 rows, 27 columns
- 20 MB data
- 80 MB index
- utf8_general_ci collation

freezes the editor for over 2 minutes and consumes 1.7 GB of RAM.

How to repeat:
Open a large table with right click -> select rows (unlimited records).
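The reported numbers (a 20 MB table consuming 1.7 GB of client RAM) suggest heavy per-cell overhead on the client side. A rough back-of-the-envelope sketch, purely illustrative and not a measurement of Workbench itself, shows how boxing every cell of this table as a separate heap object already accounts for hundreds of megabytes before any grid-widget state is added:

```python
import sys

# Table size from the report: 214,549 rows x 27 columns.
rows, cols = 214_549, 27
cells = rows * cols  # ~5.8 million individual values

# Overhead if each cell value is boxed as a short Python string
# (a stand-in for any per-cell heap object in a GUI result grid;
# the figure is illustrative, not a Workbench measurement).
per_cell = sys.getsizeof("example")  # ~56 bytes on 64-bit CPython
overhead = cells * per_cell

print(f"{cells:,} cells")
print(f"~{overhead / 1e6:.0f} MB of object overhead alone")
```

Even this lower bound dwarfs the 20 MB of raw row data, which is consistent with the gap between on-disk size and observed client memory usage.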
[11 Nov 2014 14:12] Miguel Solorzano
Thank you for the bug report.
[29 Jan 2015 20:20] BJ Quinn
I posted the following comment to Bug #74557, which I believe is the same bug as this one.  I've copied my comment here for reference, in case someone wants to mark this as a duplicate of the other bug.  For clarification, I do not believe that SSH tunnels have anything to do with it, as suggested in the other bug -- this happens whether or not you're going over an SSH tunnel.

Copied comment follows --

I can add that I'm having this problem as well.  I have a MySQL server instance on my local network.  I've not set up an SSH tunnel to communicate with the server, just the normal default connection over port 3306.  So I don't think the problem is limited to large record sets over an SSH tunnel to a server on a remote network.  It happens with large record sets under any circumstances.

The problem is not exhibited in 6.1.7, although 6.1.7 exhibits Bug #72214 (Filter Rows: non matching rows are displayed with NULL rows), which is sufficiently problematic that I have to go back to 6.0.9.

The difference between 6.0.9/6.1.7 and 6.2.4 (or, really, 6.2.x) is dramatic.  I tested a table that is 50MB on disk on the server itself and simply did a "select * from tablename" without the 1000 row result limit.  There are 78,334 records in this table.

6.0.9 and 6.1.7 were both 6 seconds from query execution to full display in the query result window.  They both used roughly 60MB of RAM for the entire MySQL Workbench instance while viewing the query result, after briefly hitting 110MB during query execution, which is reasonable given that the table is 50MB and that MySQL Workbench was using 50MB-55MB of RAM before I even ran the query.

6.2.4 took over 3 minutes to accomplish the same query against the same server on the same machine.  The CPU and I/O on the server are effectively idle.  On the client, MySQL Workbench utilizes 100% of one core throughout the wait.  RAM usage climbed steadily to 1.35GB and stayed there after query execution.  Worse yet, if I ran the same query again, RAM usage went from 1.35GB to 2.35GB, whether I closed the tab and reran the query or just ran the same query in the same SQL query tab.  The older versions exhibited no such memory leak with repeated runs of the same query in the same query window (and/or after closing the original query window and opening a new one).  The 32-bit machine I tried it on took over 5 minutes before Workbench simply crashed, reporting that the MySQL client had run out of memory.  The same machine had no issues with the older versions.

I have tried all these tests against multiple servers on different networks from different client machines.  Clients are all Windows 7, servers are all different versions of MySQL 5.1.x, 5.5.x, and 5.6.x on RHEL.  The results are the same whether I have a MySQL Workbench window open that I've been using for a while or for clean restarts of MySQL Workbench.
[16 Apr 2015 15:01] BJ Quinn
I can confirm that this appears to be fixed in 6.3.2 RC.  It may in fact be even faster than 6.1.7 and earlier, though I haven't done any direct comparisons.  23k records in a fairly wide table load nearly instantaneously now.