Bug #58382 Cardinality mismatch on InnoDB index
Submitted: 22 Nov 2010 13:45 Modified: 17 Jan 2013 9:05
Reporter: Jens Rantil Email Updates:
Status: Not a Bug Impact on me:
Category: MySQL Server: InnoDB Plugin storage engine Severity: S2 (Serious)
Version: 5.1.61 OS: Linux (Debian 6.0 (squeeze))
Assigned to: Matthew Lord CPU Architecture:Any
Tags: cardinality, INDEX, innodb

[22 Nov 2010 13:45] Jens Rantil
We have a table on two (master+slave) production servers with an index that is heavily used in certain SELECTs.

For roughly the past six weeks, the cardinality for this particular index has been heavily incorrect. I know that InnoDB makes 'random dives' to estimate the cardinality of an index; however, the estimate is way off and can't possibly be correct.

mysql> SHOW INDEX FROM mytable;
| Table   | Non_unique | Key_name     | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
| mytable |          1 | myindex      |            1 | mycol        | A         |      500622 |     NULL | NULL   | YES  | BTREE      |         |
15 rows in set (0.01 sec)  -- only the relevant row is shown

The index in question is "myindex".

mysql> SELECT COUNT(DISTINCT mycol) FROM mytable;
| COUNT(DISTINCT mycol) |
|                     99506 | 
1 row in set (0.29 sec)

mysql> SELECT COUNT(*) FROM mytable;
| COUNT(*) |
|   499842 | 
1 row in set (0.17 sec)

 * It seems highly unreasonable for MySQL to report a cardinality estimate five times larger than the actual cardinality.
 * ANALYZE TABLE gives a more accurate cardinality estimate, but its effect only lasts for so long.
 * Yesterday ANALYZE TABLE locked the table for over a minute, so we no longer want to run it during the day. The lock was probably due to a heavy cron job that was taking individual row locks.
 * Since the cardinality is wrong, the optimizer makes bad plan choices for SELECTs.
 * The examples above are from the master database.
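For reference, one way to make the bad plan choice visible is to compare the optimizer's default plan with a forced one. This is only a sketch using the anonymized mytable/myindex/mycol names from above; the literal value in the WHERE clause is hypothetical:

```sql
-- Plan chosen with the skewed statistics:
EXPLAIN SELECT * FROM mytable WHERE mycol = 12345;

-- Plan with the index forced; if this plan is clearly cheaper,
-- the cardinality estimate is likely what misleads the optimizer:
EXPLAIN SELECT * FROM mytable FORCE INDEX (myindex) WHERE mycol = 12345;
```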

How to repeat:
I don't know.

Suggested fix:
Don't know.
[22 Nov 2010 14:02] Miguel Solorzano
Try the 5.0.91 release version.

Thanks in advance.
[22 Nov 2010 14:09] Jens Rantil
Out of curiosity, do you have a specific fix in mind, prior to 5.0.91, that might have resolved this issue?
[22 Nov 2010 17:47] Valeriy Kravchuk
Is it 64-bit OS and 64-bit MySQL server version?
[22 Nov 2010 18:05] Jens Rantil
> Is it 64-bit OS and 64-bit MySQL server version?
Yes, to both.
[22 Nov 2010 18:17] Valeriy Kravchuk
You can be affected by (a kind of) bug #43660 (fixed since 5.0.82) then.
[23 Nov 2010 9:28] Jens Rantil
> You can be affected by (a kind of) bug #43660 (fixed since 5.0.82) then.
Probably not. The cardinality does change when I run ANALYZE TABLE manually, whereas #43660 states that the cardinality will (extremely seldom) fail to change.

I will see if we can upgrade our production servers. I'll get back to you when I have more info. Thanks so far.
[24 Dec 2010 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[2 May 2011 16:11] Jens Rantil
We have now upgraded our databases to MySQL debian package 5.1.49-3.

We almost certainly experienced this bug again yesterday. Initially, the issue was that SELECTs started running extremely slowly. EXPLAIN SELECT showed that a bigger JOIN, using (among others) the previously reported "corrupt" table, was iterating over the whole result set, i.e. not using the corrupt index we've seen before.

Since it was our master database in production, and most of our staff was on weekend leave, we did not do a thorough analysis of the problem. ANALYZE TABLE on the table with the corrupt index solved the problem immediately.

The database does not currently run the InnoDB Plugin; however, we are considering migrating to it.

Next time we experience this we will definitely save more output from "SHOW INDEXES FROM <corrupt_table>". Do you have any input on what we can do to further pinpoint what might be wrong?
[2 May 2011 16:14] Jens Rantil
Also, when we were running 5.0.67 we executed OPTIMIZE TABLE on the corrupt table at least twice. This does not seem to have fixed the issue since it came back both on 5.0.67 and on our current installation (5.1.49).
[2 Feb 2012 15:55] Jens Rantil
Today we experienced the same index issue on one of our database slaves, this time on a new (sadly, bigger) table that had never struggled before. We did some intense updates and reads about 24 hours ago that could have triggered it, but not many insertions. It is worth noting that the columns updated yesterday were not the ones used by the corrupt indexes.

Two indexes that should normally have a cardinality of around ~150000-200000 both showed a cardinality of around ~18. This was solved by an ANALYZE TABLE.

With previous tables, we were likewise able to fix indexes with ANALYZE TABLE. However, that problem kept coming back. I will report if we experience index corruption again.

Note that we have two slaves, and that the other slave does not have a corrupt index. However, it is also not making any reads whatsoever from cdr.

I'll post the output from the CLI in a separate, private, comment.
[2 Feb 2012 16:05] Jens Rantil
I asked this previously but never got an answer: do you have any input on what we can do to further pinpoint what might be wrong?
[2 Feb 2012 16:06] Jens Rantil
I should have updated this field before. We are now running the InnoDB Plugin. However, I guess the error is not necessarily located there?
[1 Jun 2012 8:28] Jens Rantil
I think we experienced the same bug earlier today. I will post the output of EXPLAIN SELECTs from our master and slave in a separate, hidden comment. Notice how the query optimizer on the master is clearly making the wrong choices. Since this happened on a production system we issued an ANALYZE TABLE immediately, which seemed to solve our issue right away. Unfortunately, we did not check exactly which index broke this time.

The table that had the broken index is a fairly big one, and we are a bit worried that we will have to issue an OPTIMIZE TABLE on it. Downtime is not an option unless we temporarily take the machine out of production, which would require a considerable amount of work.
[1 Jun 2012 11:57] Sveta Smirnova
Thank you for the feedback.

However, version 5.1.49 is very old and many bugs have been fixed since. Please either try the current version, 5.1.63, or send us a dump of all involved tables so we can repeat the problem on our side.
[2 Jul 2012 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[30 Jul 2012 16:06] Jens Rantil
We have now experienced this on one of our servers running 5.1.61-0+squeeze1. It has happened twice and has, both times, been solved by issuing "ANALYZE TABLE thetable;".

I am therefore bumping the MySQL version for this bug. Would you still like us to upgrade to 5.1.63? Searching for "index" in the changelogs of 5.1.62 and 5.1.63 does not turn up any results of relevance.
[3 Aug 2012 18:27] Sveta Smirnova
Thank you for the feedback.

To verify and then fix this, we need to be sure it is a MySQL bug. Therefore we need, with your help, to create a repeatable test case. Please provide the information I requested in my previous comment.

Also, since ANALYZE TABLE solved the problem, this may not be a bug. Please also send the output of SHOW GLOBAL VARIABLES LIKE '%innodb%'.
[7 Aug 2012 8:43] Jens Rantil
Hi Sveta,

Thank you for your feedback.

> Please send output of SHOW GLOBAL VARIABLES LIKE '%innodb%' also.

Here you go:

| Variable_name                   | Value                  |
| have_innodb                     | YES                    |
| ignore_builtin_innodb           | ON                     |
| innodb_adaptive_flushing        | ON                     |
| innodb_adaptive_hash_index      | ON                     |
| innodb_additional_mem_pool_size | 8388608                |
| innodb_autoextend_increment     | 8                      |
| innodb_autoinc_lock_mode        | 1                      |
| innodb_buffer_pool_size         | 64424509440            |
| innodb_change_buffering         | inserts                |
| innodb_checksums                | ON                     |
| innodb_commit_concurrency       | 0                      |
| innodb_concurrency_tickets      | 500                    |
| innodb_data_file_path           | ibdata1:10M:autoextend |
| innodb_data_home_dir            |                        |
| innodb_doublewrite              | ON                     |
| innodb_fast_shutdown            | 1                      |
| innodb_file_format              | Barracuda              |
| innodb_file_format_check        | Barracuda              |
| innodb_file_per_table           | ON                     |
| innodb_flush_log_at_trx_commit  | 2                      |
| innodb_flush_method             | O_DIRECT               |
| innodb_force_recovery           | 0                      |
| innodb_io_capacity              | 200                    |
| innodb_lock_wait_timeout        | 50                     |
| innodb_locks_unsafe_for_binlog  | OFF                    |
| innodb_log_buffer_size          | 8388608                |
| innodb_log_file_size            | 1073741824             |
| innodb_log_files_in_group       | 2                      |
| innodb_log_group_home_dir       | ./                     |
| innodb_max_dirty_pages_pct      | 75                     |
| innodb_max_purge_lag            | 0                      |
| innodb_mirrored_log_groups      | 1                      |
| innodb_old_blocks_pct           | 37                     |
| innodb_old_blocks_time          | 0                      |
| innodb_open_files               | 500                    |
| innodb_random_read_ahead        | OFF                    |
| innodb_read_ahead_threshold     | 56                     |
| innodb_read_io_threads          | 4                      |
| innodb_replication_delay        | 0                      |
| innodb_rollback_on_timeout      | OFF                    |
| innodb_spin_wait_delay          | 6                      |
| innodb_stats_method             | nulls_equal            |
| innodb_stats_on_metadata        | ON                     |
| innodb_stats_sample_pages       | 8                      |
| innodb_strict_mode              | ON                     |
| innodb_support_xa               | ON                     |
| innodb_sync_spin_loops          | 30                     |
| innodb_table_locks              | ON                     |
| innodb_thread_concurrency       | 20                     |
| innodb_thread_sleep_delay       | 10000                  |
| innodb_use_sys_malloc           | ON                     |
| innodb_version                  | 1.0.17                 |
| innodb_write_io_threads         | 4                      |
53 rows in set (0.00 sec)
[14 Jan 2013 22:11] Matthew Lord
You have described a scenario that is entirely plausible. Index statistics are only calculated and updated at specific times.

The cardinality is only an estimate, and it can unfortunately end up being very far off if the statistics are not updated for a long time (updates happen automatically at startup and when metadata commands such as SHOW TABLE STATUS are run). You have several options for workarounds here:
  1) Restart with innodb_stats_on_metadata=0 so that the stats are essentially persistent after the first calculation done at startup
  2) Force periodic statistics updates/recalculations by running SHOW TABLE STATUS in the database.
  3) Alter your queries to force the join order with the STRAIGHT_JOIN modifier and other index/optimizer hints (FORCE INDEX, IGNORE INDEX, etc.)
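As a rough sketch of these workarounds (mytable/myindex/mycol are the anonymized names from earlier in this report; othertable and its join condition are hypothetical):

```sql
-- Option 1: stop recalculating stats on metadata commands
-- (set innodb_stats_on_metadata = 0 in my.cnf, or dynamically):
SET GLOBAL innodb_stats_on_metadata = 0;

-- Option 2: force a periodic statistics refresh, e.g. from a cron job
-- (this triggers a stats update when innodb_stats_on_metadata is ON):
SHOW TABLE STATUS LIKE 'mytable';

-- Option 3: pin the join order and index choice in the query itself:
SELECT STRAIGHT_JOIN t1.*
FROM mytable t1 FORCE INDEX (myindex)
JOIN othertable t2 ON t2.id = t1.mycol;
```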

The foundational, underlying issue here is resolved in MySQL 5.6 with the addition of persistent and visible statistics for InnoDB tables, which, together with --innodb_stats_auto_recalc, removes the need to ever run a manual ANALYZE TABLE.
[17 Jan 2013 9:05] Jens Rantil

Thank you for your feedback.

> Two indexes that normally should have cardinality around ~150000-200000 both showed cardinality around ~18.

Just double-checking: so you would also say that it is plausible for the cardinality to go from ~150000 to ~18?
[9 Jan 2014 10:03] Arnaud Adant
Hi Jens,

This problem should be fixed in 5.6. You can specify parameters per table.

alter table t stats_persistent = 1, stats_sample_pages = 1000;

-- each page is 16 KB unless you changed the default page size, so 1000 pages = a 16 MB sample
-- 100 pages could be enough

analyze table t;

This should give more accurate stats. Please note that Bug #67351 ("Cardinality samples of partitioned tables sometimes widely off") still affects cardinality on partitioned tables: the cardinality will be multiplied by the number of partitions ...

Let us know if it fixed your issue.