Bug #105934 suggestion: time out support for incremental generation of results
Submitted: 19 Dec 2021 21:11 Modified: 13 Jan 2022 16:07
Reporter: Don Cohen
Status: Not a Bug
Category: MySQL Server Severity: S4 (Feature request)
Version: OS: Any
Assigned to: CPU Architecture: Any

[19 Dec 2021 21:11] Don Cohen
This is a suggestion for both server and clients (such as the mysql command line client) - I'm afraid it requires changes to both.
The problem is that it's very frustrating to (1) not know how long a request is going to take and (2) when you get impatient and abort out of the request, not see any of the results that might have been found.
What I'd like is something like
 select ... limit 20 timelimit 10
which would mean report as many results (up to 20) as can be found in 10 seconds
And even better than that would be if the results could be reported AS THEY'RE found, rather than all at the end.  I recognize that for clients such as the mysql command line client, the formatting of the output depends on having all of the results before anything is printed, but I'd be happy to forgo the nice formatting in return for seeing results as they're found.
I suggest that this would also be a useful facility for cursors.
Right now the closest thing I can see to this is that you could write a program that sets the fetch size to 1 and iterates through a result set, showing rows as they are found.  This is clearly inefficient in some cases.  What I'd suggest is that additional parameters be used to control how results are incrementally reported: 
1. a limit - meaning whenever you have that many rows to report, report them without waiting any longer
2. a timelimit - meaning that when the oldest unreported row is that old, then report any rows waiting to be reported without waiting any longer
This would allow a client to always be able to find all the rows that were found as of some short time ago (by setting the timelimit to a few seconds).
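The two triggers above (a row-count limit and a timelimit on the oldest unreported row) can be sketched as a client-side batching policy. This is only a hypothetical illustration in Python; `incremental_report` and its parameters are invented names, and the timelimit is checked as each row arrives rather than asynchronously:

```python
import time

def incremental_report(rows, limit, timelimit, clock=time.monotonic):
    """Yield batches of rows from the iterator `rows`, flushing a batch
    as soon as either `limit` rows are buffered or the oldest buffered
    row has been waiting at least `timelimit` seconds.

    Note: a real implementation would also need a timeout on the row
    fetch itself; here the time trigger only fires when a row arrives.
    """
    batch = []
    oldest = None  # arrival time of the first row in the current batch
    for row in rows:
        if oldest is None:
            oldest = clock()
        batch.append(row)
        if len(batch) >= limit or clock() - oldest >= timelimit:
            yield batch
            batch, oldest = [], None
    if batch:  # flush whatever remains when the source is exhausted
        yield batch
```

With a fast source only the row-count trigger fires; with a slow source the timelimit flushes whatever has accumulated.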

If such a facility already exists, even in some other database system, I'd like to hear about it.  I'm about to try to implement it in my own, so I have some idea of what the difficulties are.  But I think it would be very useful.

How to repeat:
It's only a suggestion.

Suggested fix:
already included above
[11 Jan 2022 17:13] MySQL Verification Team

Thank you for your feature request.

However, your feature already exists.

You have to use the _use result_ mode (`mysql_use_result()` in the C API) instead of the default _store result_ mode (`mysql_store_result()`). Then you will get rows as they are found.

Regarding the time limit, that is what the `KILL` command is for (the variant that kills the statement, not the connection).

Setting time limit in a query would be very dangerous in many situations.

Not a bug.
[11 Jan 2022 18:03] Don Cohen
Is this something in the C API?  A web search shows something about it in PHP mysqli.  Is there some way to get the mysql command line client to use it?
Or any other interactive program where I enter a select statement and then get the response delivered?  Or is this only available in certain APIs?
Where should I look for documentation to see which libraries or APIs support it and how to use it?

Also, what would be dangerous about the time limit?  And how would setting a timelimit necessarily be different from using kill?
[12 Jan 2022 20:25] Don Cohen
So far I have been unable to demonstrate that this works in php.
sample code:
  $uresult = $mysqli->query($query, MYSQLI_USE_RESULT);
  if ($uresult) {
      while ($row = $uresult->fetch_assoc()) {
          echo date('c') . $row['focal'] . PHP_EOL;
      }
  }
What I want to see is different time stamps for different rows, or even
better, just see them appear at different times.  Instead they all appear
at the same time with the same timestamp.
This is a query of form select ... (select ...), (select ...) from ...
where the time it takes with limit x is proportional to x (about .5 sec 
per row), and explain of the query shows
| id | select_type        | table | partitions | type | possible_keys | key  | key_len | ref  | rows   | filtered | Extra                                          | 
|  1 | PRIMARY            | f     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |   9879 |   100.00 | NULL                                           | 
|  3 | DEPENDENT SUBQUERY | g     | NULL       | ALL  | day           | NULL | NULL    | NULL | 291012 |     1.11 | Range checked for each record (index map: 0x1) | 
|  2 | DEPENDENT SUBQUERY | g     | NULL       | ALL  | day           | NULL | NULL    | NULL | 291012 |     1.11 | Range checked for each record (index map: 0x1) | 
so I at least expect that the results are being generated one at a time

Perhaps I have to also use something like setFetchSize?  Is that an option in PHP?
Do you know where I can see an example of results appearing not all at once?
[13 Jan 2022 5:01] Tsubasa Tanaka
Perhaps what you need is the Statement Execution Time optimizer hint (MAX_EXECUTION_TIME)?

[13 Jan 2022 5:44] Don Cohen
The max execution time doesn't seem to have much to do with streaming, but now that I try it, I can't get that to work either.  In mysql client:
select /*+ MAX_EXECUTION_TIME(1000) */  ...
10 rows in set (4.190 sec)
[13 Jan 2022 16:07] Don Cohen
BTW if I just
set max_execution_time = 1000; 
then that does work to stop the execution, but I get no results back.
What I want is to see the results that were found before the time limit was reached.  So interrupting or killing statements etc. is only good enough if I can get the results to stream in the first place.
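The combination asked for here, streaming plus a deadline that preserves the rows found so far, can be sketched client-side once the rows actually stream. A hypothetical Python illustration (`fetch_until_deadline` is an invented name; in MySQL terms, the `break` is where you would issue `KILL QUERY`):

```python
import time

def fetch_until_deadline(rows, timelimit, clock=time.monotonic):
    """Consume an (unbuffered) row iterator until `timelimit` seconds
    have elapsed; return whatever rows arrived before the deadline."""
    deadline = clock() + timelimit
    collected = []
    for row in rows:
        collected.append(row)
        if clock() >= deadline:
            break  # deadline reached; keep the partial results
    return collected
```

The point of the feature request is exactly that this only helps if the server delivers rows incrementally; with a fully buffered result set, aborting at the deadline loses everything.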