Bug #34348 | Data Node should not crash in the event of table full | | |
---|---|---|---
Submitted: | 6 Feb 2008 12:51 | Modified: | 12 Jan 2010 22:37 |
Reporter: | Premraj Nallasivampillai | Email Updates: | |
Status: | Closed | Impact on me: | |
Category: | MySQL Cluster: Cluster (NDB) storage engine | Severity: | S2 (Serious) |
Version: | mysql-5.1-telco-6.3 | OS: | Any |
Assigned to: | Pekka Nousiainen | CPU Architecture: | Any |
Tags: | mysql-5.1-telco-6.* |
[6 Feb 2008 12:51]
Premraj Nallasivampillai
[6 Feb 2008 13:30]
Tomas Ulin
Pekka, we should REF with a (new) error code instead of an ndbrequire.
[7 Oct 2008 12:58]
Lukasz Osipiuk
We encountered the same error recently:

Time: Friday 3 October 2008 - 17:37:18
Status: Temporary error, restart node
Message: Array index out of range (Internal error, programming error or missing error message, please report a bug)
Error: 2304
Error data: dbacc/DbaccMain.cpp
Error object: DBACC (Line: 5272) 0x0000000e
Program: /usr/sbin/ndbd
Pid: 19419
Trace: /var/lib/mysql-cluster/ndb_10_trace.log.1
Version: mysql-5.1.24 ndb-6.3.16-RC
***EOM***

MySQL version: "MySQL distrib mysql-5.1.24 ndb-6.3.16-RC, for debian-linux-gnu (x86_64)"

I am not sure that I understand the description of the bug correctly. Is there a way to tune MySQL Cluster to avoid this behaviour? What do you mean by "more rows than allowed in the partition"? We are using MySQL Cluster with all data stored in memory. Most of the data and index memory is free all the time, so I doubt that we ran out of it. We encountered this error simultaneously on two redundant nodes, so our cluster stopped functioning. Is there a way to avoid this situation in the future (some configuration changes)? Or is it just a bug in MySQL itself which needs to be fixed? If the latter, I'd love to provide any additional information you need to fix it. Just tell me :)
[13 Nov 2008 12:38]
Tomas Ulin
So setting MAX_ROWS to a high number when creating the table will avoid hitting this bug.
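The workaround can be sketched as follows. The table definition and the MAX_ROWS value are illustrative, but MAX_ROWS itself is a documented CREATE TABLE option that NDB uses as a sizing hint when it allocates table fragments, so declaring a generous expected row count up front leaves headroom in the per-fragment hash directory:

```sql
-- Illustrative table; the high MAX_ROWS value hints to NDB that it
-- should size the table's fragments for that many rows, avoiding the
-- per-fragment hash directory limit that triggers this crash.
CREATE TABLE t1 (
    id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
    payload VARBINARY(255)
) ENGINE=NDBCLUSTER
  MAX_ROWS=100000000;
```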
[17 Mar 2009 15:30]
Jonas Oreland
note duplicate: http://bugs.mysql.com/bug.php?id=43705
[29 Mar 2009 10:34]
Jonas Oreland
Comment on the bug fix: there are really two things that should be fixed:

1) ACC should allow for more than 64K pages per fragment (i.e. use DynArr256 instead of the "home-grown" dyn-arr-256).
2) ACC should, in the case of out-of-memory in DynArr, return an error code instead of crashing.

Comment on triage:
- W4: the work-around is easy and relatively well known.
- I4: not a lot of users (at least currently) hit the problem.
- R/E: based on fixing both of the issues.
[21 Dec 2009 15:36]
Bugs System
A patch for this bug has been committed. After review, it may be pushed to the relevant source trees for release in the next version. You can access the patch from:

http://lists.mysql.com/commits/95269

3067 Pekka Nousiainen 2009-12-21
bug#34348 01_acc.diff

In ACC, hash expand fails if the fragment's top-level directory is full (256 entries). Instead of crashing, set a flag on the fragment and return an error on any insert. The flag is cleared when a hash shrink takes place.
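The control-flow change the commit message describes can be sketched in C++ as below. This is a minimal illustration of the pattern (flag on full, refuse inserts with error 633, clear on shrink), not the actual DBACC source; `Fragment`, `tryExpand`, `tryInsert`, and `shrink` are hypothetical names, and the 256-entry limit and error code 633 are taken from the bug report itself.

```cpp
#include <cassert>

// Constants taken from the bug report; everything else is illustrative.
constexpr unsigned MAX_TOP_DIR = 256;       // top-level directory entries per fragment
constexpr int ERR_HASH_INDEX_FULL = 633;    // "hash index has reached maximum possible size"

struct Fragment {
  unsigned topDirUsed = 0;  // top-level directory slots currently in use
  bool dirFull = false;     // set when hash expand can no longer grow the directory
};

// Hash expand: instead of hitting an ndbrequire (crash) when the
// top-level directory is full, just set the flag and give up.
void tryExpand(Fragment& frag) {
  if (frag.topDirUsed >= MAX_TOP_DIR) {
    frag.dirFull = true;
    return;
  }
  frag.topDirUsed++;
}

// Insert: while the fragment is flagged full, refuse with error 633
// rather than letting the node go down.
int tryInsert(const Fragment& frag) {
  return frag.dirFull ? ERR_HASH_INDEX_FULL : 0;
}

// Hash shrink: frees directory space, so the flag is cleared and
// inserts are allowed again.
void shrink(Fragment& frag) {
  if (frag.topDirUsed > 0)
    frag.topDirUsed--;
  frag.dirFull = false;
}
```

The key point is that the failure is now sticky but recoverable: inserts fail gracefully while the directory is full, and deleting rows (which eventually triggers a shrink) restores normal operation.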
[12 Jan 2010 15:55]
Jon Stephens
Can't document fix for this bug until it has actually been pushed. Setting to Patch Pending.
[12 Jan 2010 16:27]
Pekka Nousiainen
push to 6.2 is in http://lists.mysql.com/commits/95308
[12 Jan 2010 22:37]
Jon Stephens
Discussed with Pekka on IRC, verified push and versions in which fix appears. Fix documented in the NDB 6.2.19, 6.3.1, and 7.0.11 changelogs, as follows: Trying to insert more rows than would fit into an NDB table caused data nodes to crash. Now in such situations, the insert fails gracefully with error 633, "Table fragment hash index has reached maximum possible size". Closed.
[12 Jan 2010 22:48]
Jon Stephens
NDB 6.3 version in the previous comment should have been 6.3.31.