Bug #24703 | Available space on tablespace depends on NoOfReplicas | | |
---|---|---|---
Submitted: | 29 Nov 2006 21:36 | Modified: | 8 Jan 2007 10:57 |
Reporter: | Serge Kozlov | Email Updates: | |
Status: | No Feedback | Impact on me: | |
Category: | MySQL Cluster: Disk Data | Severity: | S3 (Non-critical) |
Version: | 5.1.14-bk | OS: | Linux (Linux FC4) |
Assigned to: | Assigned Account | CPU Architecture: | Any |
[29 Nov 2006 21:36]
Serge Kozlov
[4 Dec 2006 9:32]
Jonas Oreland
Hi. This is not a bug but a consequence of extent-based allocation. The default extent size is 1M, and no smaller unit can be allocated from the tablespace. Please verify that the discrepancy you observed is expected.
[7 Dec 2006 20:46]
Serge Kozlov
I've changed the test conditions so that they now match Jonas's comment, but the idea is still the same: create the same ndb_dd table under different NoOfReplicas and data node counts, and insert data until the error 'Table is full' is returned.

create logfile group lg1 add undofile 'undofile.dat' initial_size 20M undo_buffer_size 1M engine=ndb;
create tablespace ts1 add datafile 'datafile.dat' use logfile group lg1 initial_size 1M engine=ndb;
create table t1 (a int not null primary key, b varchar(8000)) tablespace ts1 storage disk engine=ndb;

If the cluster has NoOfReplicas=2 and 2 ndbd processes, 128 rows can be inserted into table t1.
If the cluster has NoOfReplicas=4 and 4 ndbd processes, 112 rows can be inserted into table t1.
If the cluster has NoOfReplicas=1 and 4 ndbd processes, 512 rows can be inserted into table t1.

The question is: why do the results differ?
[8 Dec 2006 10:57]
Jonas Oreland
As I said, space is allocated in chunks of extent size, and each fragment replica allocates a full extent. Different settings for the number of nodes and replicas create tables with different numbers of fragment replicas. Please compute and document how the space will be allocated in your example, and I will correct you where you got it wrong.
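The accounting Jonas asks for can be sketched numerically. The following is a minimal illustration only, assuming that the number of table fragments defaults to the number of data nodes and that each fragment replica pins at least one full 1M extent; the fragment-count rule and the function names are assumptions for illustration, not details confirmed in this report, and actual partitioning defaults may vary by version:

```python
# Hedged sketch of minimum disk-data allocation per table.
# Assumptions (not from the report): default fragment count == number of
# data nodes; each fragment replica allocates at least one full extent.

EXTENT_SIZE_MB = 1  # default extent size mentioned in the report


def min_allocation_mb(num_data_nodes: int, no_of_replicas: int) -> int:
    """Lower bound on tablespace consumed by one table:
    fragments * replicas * extent size."""
    fragments = num_data_nodes  # assumed default partitioning
    fragment_replicas = fragments * no_of_replicas
    return fragment_replicas * EXTENT_SIZE_MB


# The three configurations from the report:
for nodes, replicas in [(2, 2), (4, 4), (4, 1)]:
    print(f"{nodes} ndbd, NoOfReplicas={replicas}: "
          f"at least {min_allocation_mb(nodes, replicas)} MB allocated")
```

Under these assumptions, more fragment replicas means more extents consumed by overhead for the same fixed-size datafile, leaving a smaller share of usable space per fragment, which is consistent with the direction of the observed row counts (fewer rows fit as replicas increase).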
[9 Jan 2007 0:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".