Bug #44783 A SELECT using ORDER BY and LIMIT sometimes returns too many rows
Submitted: 11 May 2009 12:00 Modified: 16 Jul 2009 11:03
Reporter: Lars-Erik Bjørk
Status: Duplicate
Category:MySQL Server: Falcon storage engine Severity:S3 (Non-critical)
Version:falcon-team OS:Any
Assigned to: Assigned Account CPU Architecture:Any
Tags: F_LIMIT

[11 May 2009 12:00] Lars-Erik Bjørk
Description:
This bug is filed as a part of the investigation of "umbrella" bug#42915 

When running a modified version of falcon_nolimit_int.yy, the test sometimes fails
with Falcon returning too many rows when using LIMIT.

The modified version of falcon_nolimit_int.yy looks like this:

query:
        select | dml | dml | dml | dml | dml ;

dml:
        update ;

select:
        SELECT * FROM _table where;

where:
        |
        WHERE _field sign value ;

sign:
        < ;

insert:
        INSERT INTO _table ( _field , _field ) VALUES ( value , value ) ;

update:
        UPDATE _table AS X SET _field = value where ;

value:
        _digit | _tinyint_unsigned ;

# Use only indexed fields:

_field:
        `int_key` ;

_table:
        `E` ;

The test can be run with the following command from the gentest directory, given you have
branched the mysql-test-extra-6.0 tree:

perl runall.pl \
--basedir=<path_to_your_basedir> \
--mysqld=--loose-innodb-lock-wait-timeout=1 \
--mysqld=--table-lock-wait-timeout=1 \
--mysqld=--loose-falcon-lock-wait-timeout=1 \
--mysqld=--loose-falcon-debug-mask=2 \
--mysqld=--skip-safemalloc \
--grammar=conf/falcon_nolimit_int.yy \
--threads=1 \
--validator=Limit \
--reporters=Deadlock,ErrorLog,Backtrace,WinPackage \
--duration=1200 \
--vardir=/tmp/vardir \
--mysqld=--log-output=file \
--queries=100000 \
--engine=falcon

How to repeat:
See description
[11 May 2009 12:13] Lars-Erik Bjørk
I have added some debug printouts in IndexWalker::getValidatedRecord, and it seems that in some places we have two entries in the index for the same record. Both entries pass as valid, and because they are not adjacent in the index, they are not filtered out by getValidatedRecord.
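The filtering gap described above can be illustrated with a minimal sketch (hypothetical code, not Falcon's actual implementation): a walker that only compares each index entry with its immediate predecessor will filter adjacent duplicates but let non-adjacent duplicates through.

```python
def walk_index(entries):
    """Sketch of adjacency-based duplicate filtering.

    'entries' is the index in sorted key order; each entry is a
    (key, record_number) pair. Only duplicates that sit next to
    each other are filtered, mirroring the behavior described
    for getValidatedRecord.
    """
    results = []
    previous = None
    for entry in entries:
        if entry == previous:
            continue  # adjacent duplicate: filtered out
        results.append(entry)
        previous = entry
    return results

# Two entries for record 7, sorted to different places in the index:
index = [(1, 7), (2, 5), (3, 7)]
print(len(walk_index(index)))  # 3 -- record 7 is returned twice
```

With the duplicate entries non-adjacent, the walker returns one row too many, which matches the symptom of the LIMIT queries returning extra rows.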

The aggregated data below, about some of the pages examined, is from a search
where we expect 1000 rows but 1137 rows are returned:

Looking at page: 137 (517 entries, all key mismatches)
Looking at page: 217 (484 entries, all key mismatches)
Looking at page: 187 (only BUCKET_END)
Looking at page: 188 (1000 key mismatches, 20 'valid' entries returned)
Looking at page: 139 (960 'valid' entries returned)
Looking at page: 202 (42 key mismatches, 1 'valid' entry returned)
Looking at page: 136 (999 key mismatches, 1 ignored as duplicate)
Looking at page: 224 (509 key mismatches)
Looking at page: 225 (590 key mismatches, 1 ignored as duplicate)
Looking at page: 184 (only BUCKET_END)
Looking at page: 186 (1 key mismatch)
Looking at page: 138 (1 key mismatch, 137 'valid' entries returned)
Looking at page: 141 (1000 key mismatches, 1 ignored as duplicate)
Looking at page: 142 (only BUCKET_END)
Looking at page: 143 (193 key mismatches, 1 ignored as duplicate)
Looking at page: 147 (1100 key mismatches, 1 ignored as duplicate, 19 'valid' entries returned)
Looking at page: 144 (1 key mismatch)
Looking at page: 140 (1000 key mismatches, 1 ignored as duplicate)

<snip>

The rest of the nodes have no 'valid' entries. Most of them have a lot of key
mismatches, some of them only have BUCKET_END (empty node).

This will not be noticed during a regular index search (one not using LIMIT), because the duplicate would only cause the same bit in the bitmap to be set twice during the scan phase. I tried to test this by adding the following to IndexRootPage::scanIndex:

          ASSERT (!bitmap->isSet(number));  // Added this
          bitmap->set (number);

And this asserted during the test.
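Why the bitmap masks the duplicate can be sketched as follows (hypothetical code, not Falcon's; the function name is made up for illustration): setting the same bit twice is idempotent, so a regular scan still yields the right row count, while a LIMIT-style walk returns one row per index entry and so produces the extra row.

```python
def scan_vs_walk(record_numbers):
    """Return (bitmap_count, walk_count) for the record numbers
    produced by an index scan that may contain duplicate entries."""
    bitmap = set()
    for number in record_numbers:
        bitmap.add(number)        # setting the same bit twice is a no-op
    rows = list(record_numbers)   # a LIMIT walk returns each entry as a row
    return len(bitmap), len(rows)

# Record 7 has two non-adjacent entries in the index:
print(scan_vs_walk([7, 5, 7]))   # (2, 3): the bitmap hides the duplicate
```

This is why the added ASSERT fires: the regular scan path does encounter the duplicate, it just silently absorbs it.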

I need to find out why the index has two entries for the same record that are sorted to different places in the index ...
[16 Jul 2009 11:03] Lars-Erik Bjørk
Closed as a duplicate of the newer bug#46144, as it gives a more accurate synopsis and description of the problem.