Bug #50422 Purge allows gradually falling behind by ignoring (LIMIT X == rows purged)
Submitted: 18 Jan 2010 16:40 Modified: 29 Jun 2010 11:27
Reporter: Mark Leith Email Updates:
Status: Closed Impact on me:
Category:MySQL Enterprise Monitor: Server Severity:S2 (Serious)
Version: OS:Any
Assigned to: Darren Oldag CPU Architecture:Any
Tags: purge, windmill

[18 Jan 2010 16:40] Mark Leith
If you generate more than 10000 rows per minute (the default purge rate set by data.purgesize and data.purgeinterval), then we silently allow the purging to gradually fall behind, without making the user aware.

How to repeat:
Generate more than 10000 rows per minute (such as running ~150-200 agents). 

Suggested fix:
1) Continually delete until rows = 0
2) Log/alert more aggressively?
[18 Jan 2010 16:48] Simon Mudd
Suggestion: Alert on the dashboard. The Tomcat log is not really a good place for this... (or send out an email).
You NEED to catch our attention on this (DBAs/sysadmins).
[20 Jan 2010 17:16] Enterprise Tools JIRA Robot
Eric Herman writes: 
Rather than waiting the full delay cycle before doing an additional purge, we will purge as many chunks as needed at the time the purge is called.
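A minimal sketch of the fix described above: instead of deleting one LIMIT-sized chunk per purge cycle, keep deleting chunks until a pass removes fewer rows than the limit, meaning no expired rows remain. The function names and the in-memory stand-in for the DELETE ... LIMIT statement are hypothetical, not from the MEM source.

```python
def purge_chunk(rows, cutoff, limit):
    """Delete up to `limit` rows older than `cutoff`; return the count deleted.

    Hypothetical stand-in for: DELETE FROM data WHERE ts < :cutoff LIMIT :limit
    """
    expired = [r for r in rows if r < cutoff][:limit]
    for r in expired:
        rows.remove(r)
    return len(expired)


def purge_all_expired(rows, cutoff, limit=10000):
    """Repeat chunked deletes until a chunk comes back short.

    A short chunk (deleted < limit) means all expired rows are gone, so the
    purge can no longer fall behind a high inflow rate between cycles.
    """
    total = 0
    while True:
        deleted = purge_chunk(rows, cutoff, limit)
        total += deleted
        if deleted < limit:
            return total
```

Keeping the per-statement LIMIT while looping preserves the original intent of the chunking (short transactions, bounded lock time) but removes the per-cycle cap that let the backlog grow.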
[20 Jan 2010 17:31] Enterprise Tools JIRA Robot
Keith Russell writes: 
Patch installed in versions =>
[29 Jun 2010 11:27] MC Brown
A note has been added to the 2.1.1 and 2.2.0 changelogs: 

        When purging old data, the purging process could fail to
        remove all of the data if the inflow of new information was
        very high. Purging now removes all outdated information at
        each execution.