Bug #42262 advisor recommendation regarding Excessive Disk Temporary Table needs improving
Submitted: 22 Jan 2009 9:08 Modified: 6 Mar 2009 20:27
Reporter: Simon Mudd (OCA) Email Updates:
Status: Verified Impact on me:
None 
Category:MySQL Enterprise Monitor: Advisors/Rules Severity:S3 (Non-critical)
Version:2.0.1.7125 OS:Any
Assigned to: Assigned Account CPU Architecture:Any
Tags: windmill

[22 Jan 2009 9:08] Simon Mudd
Description:
We received the notification of Excessive Disk Temporary Table usage v1.8.

The recommendation is to increase the configured amount by 4MB. This box has 32GB of RAM, so it seems better for the advice to express the increase as a percentage of the total memory allocated to mysqld. If this can be calculated, that is fine.

How to repeat:
N/A

Suggested fix:
Note: the advisors are a bit confusing. Most of the rules catch a problem or issue, and you are then given advice related to that issue. However, this separation is not clearly shown in the popups.

For this particular issue you probably need something like:

Issue: Many queries which use temporary tables are creating temporary tables that are too large to store in memory. This can have an adverse performance impact. The amount of memory allocated for in-memory temporary tables can be increased, and if this issue persists it seems reasonable to increase this memory allocation within mysqld. Normally, however, it is best to first check whether the adverse usage of temporary tables is caused by missing indexes or inefficient queries.

Advice:

To determine which queries may be sub-optimal you can use the query analyzer....

If you find that the queries are as optimal as you can make them, then you need to increase the memory allocated for temporary tables until this warning goes away or until you reach a size which you consider sufficient. When configuring the amount of memory to be used for temporary tables, note that two different variables must be adjusted. These are .....
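Assuming the two variables meant here are tmp_table_size and max_heap_table_size (MySQL caps in-memory temporary tables at the smaller of the two), a sketch of the adjustment might look like:

```sql
-- Assumption: the two variables referred to above are tmp_table_size
-- and max_heap_table_size; the server uses the smaller of the two as
-- the effective in-memory limit, so both should be raised together.
SET GLOBAL tmp_table_size      = 64 * 1024 * 1024;  -- 64MB, illustrative value only
SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;
```

The 64MB figure is illustrative only; the right value depends on workload and on the total memory available to mysqld.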

This problem can be monitored by checking the relative changes of the status variables Created_tmp_disk_tables and Created_tmp_tables. The ratio of the former to the latter should be low.
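On versions where information_schema.GLOBAL_STATUS is available (5.1 and later), a sketch of computing that ratio directly (the acceptable threshold is a judgment call, not given here):

```sql
-- Share of implicitly created temporary tables that went to disk;
-- a high value indicates frequent spills to disk.
SELECT (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
         WHERE VARIABLE_NAME = 'Created_tmp_disk_tables')
     / (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
         WHERE VARIABLE_NAME = 'Created_tmp_tables') AS disk_tmp_ratio;
```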

I know that wording this type of advice is difficult, but it does seem it would help to show a better explanation of the problem and then of how to resolve it. An explanation of how the issue is measured is also extremely useful; the current advice (and formulas) are often not intuitive.
[7 Dec 2010 12:21] Mark Leith
Ultimately this requires knowing what value will actually make a difference, not increasing by some percentage or in some stepwise fashion.

To do that, we need to know the largest size any on-disk temporary table has ever reached, to be able to say with confidence what maximum to set to satisfy all temporary tables. Even then, one still has to weigh concurrency (each thread may create a table of that size) against overall memory.
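The concurrency concern above can be sanity-checked with a rough worst-case estimate, assuming every connection creates one in-memory temporary table at the configured limit:

```sql
-- Rough upper bound only: every connection holding one in-memory
-- temporary table at the configured per-table limit.
SELECT @@global.tmp_table_size * @@global.max_connections
       / (1024 * 1024 * 1024) AS worst_case_tmp_mem_gb;
```

In practice actual usage is far below this bound, but it shows why a blanket "increase by 4MB" recommendation cannot account for concurrency.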

Finding the maximum number of writes to any temporary table will be much easier in 5.5 with PERFORMANCE_SCHEMA; we should add a data collection - something like "max_disk_temporary_table_size" - that can better advise about this.