Description:
I ran a SELECT query with table joins in the command-line client.
The query hung for a while and was then simply killed, with no error message on the prompt and nothing in the error logs either.
The statement looks like the following:
select
awb.awb_no as hawb,
link.awb_no as mawb,
.......(total 203 columns here)
from awb.n_air_waybill as awb
left join awb.n_air_waybill_sup as sup on awb.awb_no=sup.awb_no
........ (total 29 tables left join)
left join acct.n_ac_voucher_distr as voucher_distr on voucher_distr.voucher_no=voucher.voucher_no
where awb.loc_row_crt_date > '2014-04-18' and awb.awb_group ='H'
I get output when the filter is "awb.loc_row_crt_date > '2014-04-19'".
The result is "248481 rows in set (30.39 sec)".
I get no output when the filter is "awb.loc_row_crt_date > '2014-04-18'":
select ....
left join acct.n_ac_voucher_distr as voucher_distr on voucher_distr.voucher_no=voucher.voucher_no
-> where awb.loc_row_crt_date > '2014-04-18' and awb.awb_group ='H'
-> ;
Killed
A COUNT(*) with the same filter, "where awb.loc_row_crt_date > '2014-04-18' and awb.awb_group ='H'", succeeds:
+----------+
| count(*) |
+----------+
| 273550 |
+----------+
1 row in set (0.79 sec)
I did not interrupt the query (Ctrl+C) from the keyboard.
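For reference, a rough back-of-the-envelope estimate of the client-side memory needed to buffer the full result set, based on the COUNT(*) above. The 50-byte average value size is an assumption, not measured:

```shell
# 273,550 rows x 203 columns, assuming ~50 bytes per value on average
rows=273550
cols=203
avg_bytes=50
total=$(( rows * cols * avg_bytes ))
echo "$total bytes"   # 2776532500 bytes, roughly 2.6 GiB
```

A buffered result of that size could plausibly exhaust the client host's memory and trigger the Linux OOM killer, which would match the bare "Killed" message with no server-side error; if that is what happened, `dmesg` on the client host should show an out-of-memory kill entry for the mysql process.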
Here are the settings for the query cache and buffers:
mysql> show variables like '%query%';
+------------------------------+----------------------------------------------+
| Variable_name | Value |
+------------------------------+----------------------------------------------+
| binlog_rows_query_log_events | OFF |
| ft_query_expansion_limit | 20 |
| have_query_cache | YES |
| long_query_time | 10.000000 |
| query_alloc_block_size | 8192 |
| query_cache_limit | 1048576 |
| query_cache_min_res_unit | 4096 |
| query_cache_size | 1048576 |
| query_cache_type | OFF |
| query_cache_wlock_invalidate | OFF |
| query_prealloc_size | 8192 |
| slow_query_log | OFF |
| slow_query_log_file | /rdsdbdata/log/slowquery/mysql-slowquery.log |
+------------------------------+----------------------------------------------+
13 rows in set (0.00 sec)
mysql> show variables like '%buffer%';
+-------------------------------------+----------------+
| Variable_name | Value |
+-------------------------------------+----------------+
| bulk_insert_buffer_size | 8388608 |
| innodb_buffer_pool_dump_at_shutdown | OFF |
| innodb_buffer_pool_dump_now | OFF |
| innodb_buffer_pool_filename | ib_buffer_pool |
| innodb_buffer_pool_instances | 8 |
| innodb_buffer_pool_load_abort | OFF |
| innodb_buffer_pool_load_at_startup | OFF |
| innodb_buffer_pool_load_now | OFF |
| innodb_buffer_pool_size | 5705302016 |
| innodb_change_buffer_max_size | 25 |
| innodb_change_buffering | all |
| innodb_log_buffer_size | 8388608 |
| innodb_sort_buffer_size | 1048576 |
| join_buffer_size | 262144 |
| key_buffer_size | 16777216 |
| myisam_sort_buffer_size | 8388608 |
| net_buffer_length | 16384 |
| preload_buffer_size | 32768 |
| read_buffer_size | 262144 |
| read_rnd_buffer_size | 524288 |
| sort_buffer_size | 262144 |
| sql_buffer_result | OFF |
+-------------------------------------+----------------+
22 rows in set (0.00 sec)
And the buffer status.
Before the query started:
mysql> show status like '%buffer%';
+---------------------------------------+-------------+
| Variable_name | Value |
+---------------------------------------+-------------+
| Innodb_buffer_pool_dump_status | not started |
| Innodb_buffer_pool_load_status | not started |
| Innodb_buffer_pool_pages_data | 2094 |
| Innodb_buffer_pool_bytes_data | 34308096 |
| Innodb_buffer_pool_pages_dirty | 0 |
| Innodb_buffer_pool_bytes_dirty | 0 |
| Innodb_buffer_pool_pages_flushed | 137 |
| Innodb_buffer_pool_pages_free | 346039 |
| Innodb_buffer_pool_pages_misc | 91 |
| Innodb_buffer_pool_pages_total | 348224 |
| Innodb_buffer_pool_read_ahead_rnd | 0 |
| Innodb_buffer_pool_read_ahead | 0 |
| Innodb_buffer_pool_read_ahead_evicted | 0 |
| Innodb_buffer_pool_read_requests | 7600931 |
| Innodb_buffer_pool_reads | 2095 |
| Innodb_buffer_pool_wait_free | 0 |
| Innodb_buffer_pool_write_requests | 206 |
+---------------------------------------+-------------+
17 rows in set (0.00 sec)
After the query was killed:
mysql> show status like '%buffer%';
+---------------------------------------+-------------+
| Variable_name | Value |
+---------------------------------------+-------------+
| Innodb_buffer_pool_dump_status | not started |
| Innodb_buffer_pool_load_status | not started |
| Innodb_buffer_pool_pages_data | 2094 |
| Innodb_buffer_pool_bytes_data | 34308096 |
| Innodb_buffer_pool_pages_dirty | 0 |
| Innodb_buffer_pool_bytes_dirty | 0 |
| Innodb_buffer_pool_pages_flushed | 141 |
| Innodb_buffer_pool_pages_free | 346039 |
| Innodb_buffer_pool_pages_misc | 91 |
| Innodb_buffer_pool_pages_total | 348224 |
| Innodb_buffer_pool_read_ahead_rnd | 0 |
| Innodb_buffer_pool_read_ahead | 0 |
| Innodb_buffer_pool_read_ahead_evicted | 0 |
| Innodb_buffer_pool_read_requests | 8391513 |
| Innodb_buffer_pool_reads | 2095 |
| Innodb_buffer_pool_wait_free | 0 |
| Innodb_buffer_pool_write_requests | 213 |
+---------------------------------------+-------------+
17 rows in set (0.00 sec)
How to repeat:
You may need a SELECT query that returns over 300 thousand records.
Suggested fix:
I have not yet found a workable way to get the SELECT result.
I wonder whether there is any particular setting to adjust when the query result may exceed 500 thousand records.
If there is, please advise. Thanks.
I hope there is an efficient way to solve this query problem, as we may need to select over a year's worth of data, where the result will exceed 1 million rows.
Thanks a lot for any suggestions.
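One workaround that may be worth trying (a sketch, not verified against this server): start the client with --quick, which makes mysql stream rows via mysql_use_result() instead of buffering the entire result set in client memory, and redirect the output to a file. The <host> and <user> placeholders and the output filename below are illustrative, not from the original report, and the elided column/join list is abbreviated as in the statement above.

```shell
# --quick: do not cache the result set in the client; fetch and print
# rows one at a time, so memory use stays flat regardless of result size.
mysql --quick -h <host> -u <user> -p \
  -e "select awb.awb_no as hawb, link.awb_no as mawb, ... \
      from awb.n_air_waybill as awb ... \
      where awb.loc_row_crt_date > '2014-04-18' and awb.awb_group ='H'" \
  > result.tsv
```

If file access on the server host is available, SELECT ... INTO OUTFILE would similarly avoid shipping a multi-gigabyte buffered result to the client.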