| Bug #44783 | A SELECT using ORDER BY and LIMIT sometimes returns too many rows | | |
|---|---|---|---|
| Submitted: | 11 May 2009 12:00 | Modified: | 16 Jul 2009 11:03 |
| Reporter: | Lars-Erik Bjørk | Email Updates: | |
| Status: | Duplicate | Impact on me: | |
| Category: | MySQL Server: Falcon storage engine | Severity: | S3 (Non-critical) |
| Version: | falcon-team | OS: | Any |
| Assigned to: | Assigned Account | CPU Architecture: | Any |
| Tags: | F_LIMIT | | |
[11 May 2009 12:00]
Lars-Erik Bjørk
[11 May 2009 12:13]
Lars-Erik Bjørk
I have added some debug printouts in IndexWalker::getValidatedRecord, and it seems that in some places we have two entries in the index for the same record. Both entries pass as valid, but they are not adjacent in the index and are therefore not filtered out by getValidatedRecord. The aggregated data below, covering some of the pages examined, is from a search where we expect 1000 rows but 1137 rows are returned:

```
Looking at page: 137 (517 entries, all key mismatches)
Looking at page: 217 (484 entries, all key mismatches)
Looking at page: 187 (only BUCKET_END)
Looking at page: 188 (1000 key mismatches, 20 'valid' entries returned)
Looking at page: 139 (960 'valid' entries returned)
Looking at page: 202 (42 key mismatches, 1 'valid' entry returned)
Looking at page: 136 (999 key mismatches, 1 ignored as duplicate)
Looking at page: 224 (509 key mismatches)
Looking at page: 225 (590 key mismatches, 1 ignored as duplicate)
Looking at page: 184 (only BUCKET_END)
Looking at page: 186 (1 key mismatch)
Looking at page: 138 (1 key mismatch, 137 'valid' entries returned)
Looking at page: 141 (1000 key mismatches, 1 ignored as duplicate)
Looking at page: 142 (BUCKET_END only)
Looking at page: 143 (193 key mismatches, 1 ignored as duplicate)
Looking at page: 147 (1100 key mismatches, 1 ignored as duplicate, 19 'valid' entries returned)
Looking at page: 144 (1 key mismatch)
Looking at page: 140 (1000 key mismatches, 1 ignored as duplicate)
<snip>
```

The rest of the nodes have no 'valid' entries. Most of them have a lot of key mismatches; some of them only have BUCKET_END (an empty node).

This will not be noticed during a regular index search (one not using LIMIT), because it would only result in the same bit in the bitmap being set twice during the scan phase. I tried to test this by adding the following to IndexRootPage::scanIndex:

```
ASSERT (!bitmap->isSet(number)); // Added this
bitmap->set (number);
```

And this asserted during the test. I need to find out why the index has two entries for the same record that are sorted to different places in the index ...
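To illustrate the mechanism described above, here is a minimal, self-contained C++ sketch. It is not Falcon code; all names and values are made up for illustration. It shows why an ordered index walk that only suppresses *adjacent* duplicate record numbers over-counts under LIMIT when the same record has two non-adjacent index entries, while a bitmap-driven scan is unaffected because setting the same bit twice is idempotent.

```cpp
// Sketch only: contrasts an ordered walk with adjacent-duplicate filtering
// against a bitmap-based scan, using a hypothetical list of index entries.
#include <cstdint>
#include <cstdio>
#include <set>
#include <vector>

int main() {
    // Hypothetical index leaf contents in key order; record 7 appears twice
    // in non-adjacent positions (e.g. two index entries for the same record
    // that both pass validation).
    std::vector<int32_t> indexOrder = {3, 7, 12, 7, 21, 30};

    // LIMIT-style walk: return rows in index order, skipping a duplicate
    // only when it directly follows the previously returned record number.
    int32_t lastReturned = -1;
    int returned = 0;
    for (int32_t recordNumber : indexOrder) {
        if (recordNumber == lastReturned)
            continue;                 // adjacent duplicate, filtered
        lastReturned = recordNumber;
        ++returned;                   // non-adjacent duplicate slips through
    }
    std::printf("ordered walk returned %d rows\n", returned);       // prints 6

    // Bitmap-style scan: a duplicate entry just sets the same bit again,
    // so each record is still produced only once.
    std::set<int32_t> bitmap;
    for (int32_t recordNumber : indexOrder)
        bitmap.insert(recordNumber);  // idempotent, like bitmap->set(number)
    std::printf("bitmap scan produced %zu rows\n", bitmap.size());  // prints 5

    return 0;
}
```

Under this assumption, the added ASSERT in the scan path fires exactly when the ordered walk would return an extra row: the second insert for record 7 finds its bit already set.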
[16 Jul 2009 11:03]
Lars-Erik Bjørk
Closed as a duplicate of the newer bug#46144, as it gives a more accurate synopsis and description of the problem.