| Bug #75762 | Table is full error on sql nodes and subsequent crash of entire cluster | | |
|---|---|---|---|
| Submitted: | 4 Feb 2015 12:20 | Modified: | 3 Jun 2015 15:09 |
| Reporter: | Marco Sperandio | Email Updates: | |
| Status: | Not a Bug | Impact on me: | |
| Category: | MySQL Cluster: Cluster (NDB) storage engine | Severity: | S1 (Critical) |
| Version: | 7.3.7 | OS: | Linux (RHEL 5.4 64bit) |
| Assigned to: | MySQL Verification Team | CPU Architecture: | Any |
[4 Feb 2015 12:20]
Marco Sperandio
[4 Feb 2015 16:36]
Marco Sperandio
Assuming that the error log is accurate:
Error: 2341
Error data: DbaccMain.cpp
Error object: DBACC (Line: 4774) 0x00000002
Opening DbaccMain.cpp from the 7.3.7 source:
/* --------------------------------------------------------------------------------- */
/* ALLOC_OVERFLOW_PAGE */
/* DESCRIPTION: */
/* --------------------------------------------------------------------------------- */
void Dbacc::allocOverflowPage(Signal* signal)
{
  tresult = 0;
  if (cfreepages.isEmpty())
  {
    jam();
    zpagesize_error("Dbacc::allocOverflowPage");
    tresult = ZPAGESIZE_ERROR;
    return;
  }//if
  seizePage(signal);
  /* This ndbrequire is line 4774 in the 7.3.7 source (see the follow-up
     comment below); it fails when seizePage() leaves an error code in
     tresult, i.e. when a page could not be seized. */
  ndbrequire(tresult <= ZLIMIT_OF_ERROR);
  {
    LocalContainerPageList sparselist(*this, fragrecptr.p->sparsepages);
    sparselist.addLast(spPageptr);
  }
  iopPageptr = spPageptr;
  initOverpage(signal);
}//Dbacc::allocOverflowPage()
==============================
Reading the source code, it seems we hit a limit related to free pages. But again, our checks show no trace of IndexMemory or DataMemory overallocation; am I missing something?
Still investigating.
Regards,
Marco
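For reference, a minimal sketch of one way such a check can be made from any SQL node, assuming MySQL Cluster 7.1 or later: the ndbinfo.memoryusage table reports per-data-node DataMemory and IndexMemory consumption. The pct_used alias below is added purely for illustration:

-- Reports used vs. total bytes of DataMemory and IndexMemory per data node.
SELECT node_id,
       memory_type,
       used,
       total,
       ROUND(100 * used / total, 1) AS pct_used
FROM ndbinfo.memoryusage;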
[4 Feb 2015 16:38]
Marco Sperandio
I forgot to say that line 4774 is:
ndbrequire(tresult <= ZLIMIT_OF_ERROR);
M.
[3 Jun 2015 15:08]
MySQL Verification Team
Hi, "table is full" error can happen from 2 reasons 1. you do not have enough free allocated ram (datamemory or indexmemory) 2. you do not have enough fragments for that table (too many rows) Solving [1] is obvious, you need more ram. In your case I'm not sure this is the reason for these errors. It could be as when you go over 70% of memory usage you can have, on a system that does large and/or long transactions, reported free space that's not actually free (yet) so you hit a limit not being aware of it. Solving [2] is not obvious but it's rather simple. Using alter table.. MAX_ROWS=.. you increase number of fragments for a table and overcome this problem. In order to check how MAX_ROWS actually works please consult documentation or if in need of assistance open a ticket with MySQL Support This behavior is a bug but a designed solution so I'm closing this request as not a bug. kind regards Bogdan Kecman
