Bug #18707 Query cache: support limit 0,30... limit 30,30 etc
Submitted: 1 Apr 2006 11:08
Modified: 7 Jul 2017 9:14
Reporter: Jeff Armstrong
Status: Won't fix
Category: MySQL Server: Query Cache
Severity: S4 (Feature request)
Version: 4
OS: Linux (Linux)
Assigned to:
CPU Architecture: Any

[1 Apr 2006 11:08] Jeff Armstrong
Description:
If my interpretation of the docs is correct, and I run

  select * from mytable  where something='complex'
  order by 1,2,3 
  limit 0,30

and then I run

  select * from mytable  where something='complex'
  order by 1,2,3 
  limit 30,30

then the second query is not served from the query cache?

How to repeat:
Hmmm - this is a feature request, so there is no bug to repeat.

Suggested fix:
For ordered, limited queries, does the cache hold the complete result set, and would it be possible to serve the result from the cache, applying the new limit?

A large number of websites would benefit from a tweak like this - too many application developers are coding this on the web server.
[24 Apr 2006 9:47] Valeriy Kravchuk
Thank you for a reasonable feature request.
[29 Sep 2006 17:19] Liviu Balan
This is a very important feature. Our company has to gather lots of data from huge tables, and for memory reasons it has to fetch everything in batches. Because the tables are so large, every LIMIT query takes around 20 seconds, so executing 1000 queries like that takes a very long time. This feature could improve that a lot.
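
A rough sketch of that batch pattern, with hypothetical table and column names; each batch is an independent LIMIT query, so none of them can reuse the work of the previous one through the query cache:

SELECT id, payload FROM huge_table ORDER BY id LIMIT 0, 1000;     -- batch 1
SELECT id, payload FROM huge_table ORDER BY id LIMIT 1000, 1000;  -- batch 2
-- ... and so on for roughly 1000 batches, each paying the full cost again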
[24 Aug 2008 9:30] Bryce Nesbitt
Agreed! You can easily verify that a LIMIT variant of a query is not served from the cache (assuming the query cache is enabled, i.e. query_cache_size > 0):

SHOW STATUS LIKE '%qcache%';                 -- note Qcache_hits and Qcache_inserts
SELECT foo FROM expensive;                   -- result is stored in the cache
SHOW STATUS LIKE '%qcache%';                 -- Qcache_inserts has increased
SELECT foo FROM expensive LIMIT 5 OFFSET 5;  -- different SQL text, so not a cache hit
SHOW STATUS LIKE '%qcache%';                 -- Qcache_hits has not increased

In my case, subsetting the cached result would be much, much faster than rerunning the query.
[12 Nov 2008 12:04] James Day
The query cache holds the exact results that are sent to the client, in the low-level format used for sending them. When it gets a query it does a very fast check to see whether the SQL text is identical to a cached one (even one extra space makes it different), and if the text is identical it sends the original results.

That very simple identical-text comparison is part of why the query cache is useful. If it also had to parse the SQL and optimise the query to some degree, the extra work would make it no longer so fast.
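
A minimal illustration of that exact-text check, assuming the query cache is enabled and a hypothetical table t:

SELECT * FROM t WHERE id > 10;          -- first run: result is stored in the cache
SELECT * FROM t WHERE id > 10;          -- byte-identical text: served from the cache
SELECT * FROM t WHERE  id > 10;         -- one extra space: different text, not a hit
SELECT * FROM t WHERE id > 10 LIMIT 5;  -- different text again: cached separately, if at all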

When it gets a LIMIT query the server doesn't produce all of the results. In the ideal case it uses an index to go straight to the correct start position, reads the next n rows and returns just those. So the query cache has nothing it could use to serve different rows, because it is never given the extra rows, and there is no reason for it to store the 100k rows that would match the query without LIMIT when they aren't needed.

For web applications, a "permanent" work table can be regenerated every 30 seconds with a name that includes the time, and paging can be done through that. Applications keep using the table for the time at which they started paging, and after a few hours the old ones can be dropped automatically. It's a solution to both the slow query and the slow paging problems.
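
A minimal sketch of that approach, with hypothetical table and column names and the time embedded in the table name:

-- rebuilt every ~30 seconds by a background job
CREATE TABLE results_20081112_1204 AS
  SELECT * FROM mytable WHERE something = 'complex';

-- every page is then a cheap read against the snapshot the client started with
SELECT * FROM results_20081112_1204 ORDER BY 1, 2, 3 LIMIT 0, 30;
SELECT * FROM results_20081112_1204 ORDER BY 1, 2, 3 LIMIT 30, 30;

-- snapshots older than a few hours can be dropped
DROP TABLE results_20081112_1204;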

The request to do this automatically seems better suited to MySQL Proxy, and even more so to MySQL Proxy combined with memcached, so that memcached could store the full results. Have a read of the MySQL Proxy documentation and look at what that project is doing to see whether it covers anything that is interesting for you.
[7 Jul 2017 9:14] Erlend Dahl
MySQL will no longer invest in the query cache, see:

http://mysqlserverteam.com/mysql-8-0-retiring-support-for-the-query-cache/