Bug #31314 Strange impact of ndb_cache_check_time on cluster auto_increment performance
Submitted: 1 Oct 2007 12:47
Modified: 14 Feb 2010 8:11
Reporter: Hartmut Holzgraefe
Status: No Feedback
Category: MySQL Cluster: Cluster (NDB) storage engine
Severity: S3 (Non-critical)
Version: mysql-5.1
OS: Any
Assigned to: Assigned Account
CPU Architecture: Any
Tags: 5.1.21

[1 Oct 2007 12:47] Hartmut Holzgraefe
Description:
Inserting rows into a cluster table with an auto_increment column is about 50% slower than inserting into a similar table without auto_increment, where the ids are counted up and written directly into the INSERT statements. The 50% difference is within expected limits, as retrieving the next auto_increment value from the cluster requires an extra network round trip.

With the query cache enabled and ndb_cache_check_time set to a small non-zero value, the auto_increment inserts take substantially longer, though. For example, inserting 5000 rows without auto_increment takes ~3 s, and 5000 auto_increment rows take about 5 s with ndb_cache_check_time at 0 (the default); with non-zero settings (ndb_cache_check_time in ms, total time) the time increases as follows:

10000 5.14 sec
 1000 5.84 sec
  500 7.03 sec
  250 6.38 sec
  200 17.13 sec
  150 29.92 sec
  100 56.77 sec
   75 81.04 sec

This is especially strange because only INSERT statements are issued, which should not hit the query cache at all.
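The per-insert cost implied by these numbers can be worked out directly. A small sketch over the figures above; the ~5 s baseline for ndb_cache_check_time = 0 is taken from the description:

```python
# Timings reported above for 5000 auto_increment inserts:
# ndb_cache_check_time (ms) -> total wall time (seconds).
timings = {
    10000: 5.14,
    1000: 5.84,
    500: 7.03,
    250: 6.38,
    200: 17.13,
    150: 29.92,
    100: 56.77,
    75: 81.04,
}

ROWS = 5000
BASELINE = 5.0  # approx. seconds with ndb_cache_check_time = 0 (the default)

for check_time, total in sorted(timings.items(), reverse=True):
    per_row_ms = total / ROWS * 1000
    overhead_ms = (total - BASELINE) / ROWS * 1000
    print(f"check_time={check_time:5d} ms: "
          f"{per_row_ms:6.2f} ms/insert ({overhead_ms:+.2f} ms overhead)")
```

At ndb_cache_check_time = 100 this comes out to roughly 11 ms per insert, more than ten times the baseline of ~1 ms per insert, which is what makes the slowdown so surprising for statements that should never touch the query cache.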

How to repeat:
Try the following with query cache enabled and different ndb_cache_check_time settings:

DROP TABLE IF EXISTS t1;
DROP TABLE IF EXISTS t2;
DROP PROCEDURE IF EXISTS t1_insert_loop;
DROP PROCEDURE IF EXISTS t2_insert_loop;

CREATE TABLE `t1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`dummy` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=ndbcluster;

CREATE TABLE `t2` (
`id` int(11) NOT NULL,
`dummy` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=ndbcluster;

DELIMITER //

CREATE PROCEDURE t1_insert_loop ()
BEGIN
  DECLARE v1 INT DEFAULT 0;
  WHILE v1 < 5000 DO
    INSERT INTO t1 (dummy) VALUES (9);
    SET v1 = v1 + 1;
  END WHILE;
END //

CREATE PROCEDURE t2_insert_loop ()
BEGIN
  DECLARE v1 INT DEFAULT 0;
  WHILE v1 < 5000 DO
    INSERT INTO t2 (id, dummy) VALUES (v1, 9);
    SET v1 = v1 + 1;
  END WHILE;
END //

DELIMITER ;

call t1_insert_loop();
call t2_insert_loop();
[1 Oct 2007 13:13] Hartmut Holzgraefe
A few more observations:

After running the tests I see the following query cache status:

mysql> show status like "q%";
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      | 1        | 
| Qcache_free_memory      | 33545304 | 
| Qcache_hits             | 0        | 
| Qcache_inserts          | 0        | 
| Qcache_lowmem_prunes    | 0        | 
| Qcache_not_cached       | 0        | 
| Qcache_queries_in_cache | 0        | 
| Qcache_total_blocks     | 1        | 
| Questions               | 30022    | 
+-------------------------+----------+
9 rows in set (0.01 sec)

Compared to the status before running the test, only the
Questions value changed, by ~30k.

There were only two procedure calls, 10k INSERTs, and 10k SETs,
though, so I have no idea how this adds up to ~30k Questions.

The only substantial amount shown in SHOW STATUS LIKE "Com%" is

| Com_insert               | 10000 | 

which is the expected result; all other Com_* values are < 10
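A quick tally of the statements the repro actually issues makes the gap concrete (a sketch; the per-iteration statement counts are taken from the loop bodies above, and ~30k is the reported change in Questions):

```python
# Statements issued by the repro script: two CALLs, plus one INSERT
# and one SET per loop iteration, 5000 iterations in each procedure.
calls = 2
inserts = 2 * 5000   # matches the reported Com_insert = 10000
sets = 2 * 5000
counted = calls + inserts + sets

questions_delta = 30000  # approximate observed change in Questions
unexplained = questions_delta - counted
print(f"accounted for: {counted}, observed: ~{questions_delta}, "
      f"unexplained: ~{unexplained}")
```

The unexplained remainder is roughly one extra Question per loop iteration; the report itself does not say where these come from.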
[22 Apr 2009 14:44] Jonathan Miller
This was also reported in http://bugs.mysql.com/bug.php?id=42729
[14 Jan 2010 8:11] Martin Skold
Please verify that this is not a bug: the cost of maintaining the query
cache is independent of the operations, since it is done by a separate
monitoring thread.
[15 Feb 2010 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".