Bug #44625 Triggers should not be loaded on SELECTs
Submitted: 3 May 2009 9:59     Modified: 22 Aug 22:26
Reporter: Domas Mituzas        Status: Closed
Category: MySQL Server: Stored Routines    Severity: S3 (Non-critical)
Version: 5.0, 5.1              OS: Any
Assigned to:                   CPU Architecture: Any

[3 May 2009 9:59] Domas Mituzas
Description:
Nowadays triggers are parsed at each table open, even for SELECTs, and can even be reparsed multiple times within a single query. Example call path, hit 2000 times by a single query that opens a single table (multiple times):

             mysqld`MYSQLparse(void*)
              mysqld`parse_sql(THD*, Parser_state*, Object_creation_ctx*)+0x7b
              mysqld`Table_triggers_list::check_n_load(THD*, char const*, char const*, st_table*, bool)+0xb3b
              mysqld`open_unireg_entry(THD*, st_table*, TABLE_LIST*, char const*, char*, unsigned int, st_mem_root*, unsigned int)+0xea9
              mysqld`open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0xc4f
              mysqld`open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x303
              mysqld`open_and_lock_tables_derived(THD*, TABLE_LIST*, bool)+0xa5
              mysqld`execute_sqlcom_select(THD*, TABLE_LIST*)+0xf8
              mysqld`mysql_execute_command(THD*)+0xb28
              mysqld`mysql_parse(THD*, char const*, unsigned int, char const**)+0x1a9
              mysqld`dispatch_command(enum_server_command, THD*, char*, unsigned int)+0xdb9
              mysqld`do_command(THD*)+0x2d3
              mysqld`handle_one_connection+0xfc7
              libSystem.B.dylib`_pthread_start+0x141
              libSystem.B.dylib`thread_start+0x22

For now I'm not tagging it as 'Performance', as it contributes to huge memory leaks when the same table is reopened in a UNION. 

How to repeat:
Create any trigger on a table; it will be reparsed at every re-open of the table.
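
For reference, a minimal repro sketch (the table and trigger names here are just illustrative):

CREATE TABLE t1 (id INT PRIMARY KEY, val INT);
CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET NEW.val = NEW.val;
-- With a small table_open_cache (e.g. 5) and several concurrent connections,
-- t1 keeps getting evicted and re-opened; every re-open re-parses the
-- trigger body even though only SELECTs are running.
SELECT * FROM t1;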

Suggested fix:
1. don't parse triggers on simple SELECT queries
2. cache parsed triggers
[3 May 2009 10:08] Domas Mituzas
I'm bumping up some triage values simply because this affects any SELECT query that uses triggers.
[5 May 2009 10:09] Domas Mituzas
To properly understand this bug one has to review how the table cache works these days.
Table cache entries are no longer unique - multiple uses of a table in a single query, or use of the same table from multiple threads, will consume multiple entries in the table cache (making the cache more prone to evictions).

If a table is in the 'table_open_cache', its triggers are cached along with it (though it probably makes much more sense to do that in the table_definition_cache). 

One of the major problems here is that LOCK_open is held during the process of loading and parsing the triggers - and as that is a relatively slow operation, it entirely breaks scalability at this point.

This makes the issue hard to see in simple benchmarks (as usually everyone makes sure no table_open_cache eviction is happening there), but in the real world it will hit far more often than one would expect (simply because open table cache pressure depends on connections * active table count). 
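
A quick way to tell whether such evictions are happening is to watch the Opened_tables status counter (a sketch; on 5.0 the cache size variable is table_cache rather than table_open_cache):

SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
-- If Opened_tables keeps growing under a steady workload, tables are being
-- evicted and re-opened, and any triggers on them are re-parsed each time.
SHOW GLOBAL STATUS LIKE 'Opened_tables';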

So, to illustrate an edge case (which magnifies the problem seen in the real world), I ran the following sysbench run against a server with table_open_cache=5 (on a relatively powerful machine):

/sysbench --test=oltp --mysql-db=test --oltp-range-size=10 --oltp-read-only=on --oltp-table-size=1000000 --db-driver=mysql --mysql-socket=/tmp/mysql.sock --num-threads=1 --db-ps-mode=disable --num-threads=32 run

With a trigger on the table, the results were appalling: 

OLTP test statistics:
    queries performed:
        read:                            140000
        write:                           0
        other:                           20000
        total:                           160000
    transactions:                        10000  (28.33 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 140000 (396.66 per sec.)
    other operations:                    20000  (56.67 per sec.)

Test execution summary:
    total time:                          352.9488s
    total number of events:              10000
    total time taken by event execution: 11288.4222
    per-request statistics:
         min:                                  1.36ms
         avg:                               1128.84ms
         max:                               1594.90ms
         approx.  95 percentile:            1252.80ms

Threads fairness:
    events (avg/stddev):           312.5000/3.48
    execution time (avg/stddev):   352.7632/0.17

After dropping the trigger, the results were much better:

OLTP test statistics:
    queries performed:
        read:                            140000
        write:                           0
        other:                           20000
        total:                           160000
    transactions:                        10000  (841.64 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 140000 (11782.94 per sec.)
    other operations:                    20000  (1683.28 per sec.)

Test execution summary:
    total time:                          11.8816s
    total number of events:              10000
    total time taken by event execution: 379.8607
    per-request statistics:
         min:                                  1.37ms
         avg:                                 37.99ms
         max:                                235.82ms
         approx.  95 percentile:             110.26ms

Threads fairness:
    events (avg/stddev):           312.5000/28.29
    execution time (avg/stddev):   11.8706/0.01

So, given that I got a 30x performance difference in edge-case testing, I think the following recommendations should be taken seriously:

- Load triggers for table_definition_cache, not table_open_cache 
or
- Don't load triggers on SELECT

and also important:
- do not hold global locks when parsing triggers...

I'm setting W1->W3 (as it is possible to increase the table open cache, at additional cost).
[3 Aug 2009 17:37] Konstantin Osipov
Trigger definition is part of the cached TABLE. Each time a new table is loaded into the table cache, we create bodies for all defined triggers. 
This will not lead to any performance hit as long as your table cache is large enough.
[3 Aug 2009 17:38] Konstantin Osipov
The workaround is to increase the size of the table cache; that will reduce the performance hit, since with a large enough cache the table is loaded into it only once.
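
For example (a sketch only; 2048 is an arbitrary illustrative value, and on 5.0 the variable is named table_cache):

-- Raise the table cache size at runtime so hot tables stay cached
SET GLOBAL table_open_cache = 2048;
-- Or make it permanent by setting table_open_cache = 2048 in my.cnf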
[3 Aug 2009 17:40] Konstantin Osipov
Re Risk/Effort - this is working as designed; the table cache doesn't distinguish read-only TABLE objects from objects used for updates.
[3 Aug 2009 17:58] Valeriy Kravchuk
I am not sure how the workaround suggested above will apply to MySQL on Windows. Check http://dev.mysql.com/doc/refman/5.1/en/limits-windows.html...
[3 Aug 2009 18:03] Konstantin Osipov
The Windows open files limit is irrelevant: if you can't open that many files, you can't use that many tables anyway, whether you use them by means of the table cache or not.
Besides, the Windows open files limit is fixed in 5.4.
[3 Aug 2009 18:19] Konstantin Osipov
Come to think of it, we can load trigger bodies on demand, in sp_cache_routines_and_add_tables_for_triggers().

This will also let us solve Bug#23022.

Will check with Dmitri when he's back from vacation.
[4 Aug 2009 7:23] Domas Mituzas
the "can't use if can't cache" doesn't really make sense - you can use as many tables as you want, just reopening will be much much more intensive, and LOCK_open hotspots will be even more apparent
[4 Aug 2009 7:46] Konstantin Osipov
Domas, every open table goes through the table cache, even if the total number of used tables at the moment exceeds the size of the table cache. You can't use a table bypassing the table cache.
[9 Feb 2023 22:48] Dmitry Lenev
Hello!

It is the year 2023 now and the current MySQL version is 8.0.32. The part of the problem related to scalability/loading of triggers under LOCK_open is no longer relevant, as loading of triggers now happens outside of LOCK_open.

The problem of trigger-parsing overhead when a new TABLE object is constructed for SELECT queries should, in my opinion, be mostly visible in corner cases, when the Table Cache can't satisfy most of the queries for some reason. 

However, parsed trigger objects occupying too much space in the Table Cache is still a real issue.

Therefore we are contributing a patch which should address the last two problems to some extent.

The patch makes it possible to avoid caching fully-loaded/parsed trigger bodies in the Table Cache for read-only statements.
    
This is done by implementing lazy full loading/parsing of trigger bodies:
we don't parse and finalize their load by default, but only do so when a
data-changing operation arrives. Once such an operation completes, we
return the TABLE instance and its associated triggers to the Table Cache.
Later we prefer reusing TABLE instances with fully-loaded triggers for
data-change operations and TABLE instances without trigger bodies for
read-only operations.
    
To ensure that a spike in data-change workload won't fill up the Table Cache
with TABLE objects with fully-loaded triggers, a soft limit on the number of
such objects in the cache was introduced. If this limit is exceeded, unused
TABLE objects with fully-loaded/parsed triggers are evicted from the cache
in LRU fashion.
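
As a rough way to check whether a server is in that Table Cache-bound corner case, the standard table cache status counters can be used (a sketch; counter names as in 8.0):

SHOW GLOBAL STATUS LIKE 'Table_open_cache_%';
-- Table_open_cache_misses and Table_open_cache_overflows growing much faster
-- than Table_open_cache_hits means new TABLE objects (and, without this
-- patch, their fully parsed triggers) are constructed over and over.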
[9 Feb 2023 22:49] Dmitry Lenev
Patch that avoids loading triggers for SELECT statements, against the 8.0.32 tree.

(*) I confirm the code being submitted is offered under the terms of the OCA, and that I am authorized to contribute it.

Contribution: ps-7963-8433-bug44625-contrib.patch (text/x-patch), 117.41 KiB.

[22 Aug 22:26] Jon Stephens
Fixed in MySQL 9.1 by WL#16455.

Closed.
[22 Aug 22:29] Jon Stephens
Our thanks to Dmitry Lenev for the contribution.