Description:
In the function 'open_table_from_share' there is code that asks the handler about properties of a table before that table has been opened, similar to reading from a closed file handle.
This leads to unnecessarily hard error handling, since we get "failed to open" errors when calling 'handler::get_no_parts', or rather from one of the functions that calls it.
Pseudocode:
No table t1 in table def cache!
No connection to cluster! 
SELECT * FROM t1;
 open_unireg_entry("t1")
   get_table_share_with_create("t1")
     get_table_share("t1")
        alloc_share  <-- Share is now allocated
        open_table_def("t1.frm")
          open_binary_frm("t1.frm")
       open_table_from_share(t1)
         fix_parts_func
            ndb->get_no_parts()
               /* ^ Too early! Fails and we get
                  error=4 => Incorrect information in file.
                  If done after the open, it would only need to
                  look in the NdbDictionary for this info, i.e.
                  no risk that it fails */
            int ha_err= ha_open(ndb, "t1.frm")
            /* This is where it should fail. If ha_open returns
               ha_err (the real storage engine error) we can
               handle it!
            */
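
For clarity, a rough sketch (C++-flavored pseudocode, simplified names,
not the actual server code) of the call order argued for above:

  int open_table_from_share_sketch(...)
  {
    int ha_err;
    /* Open the table in the storage engine first, so a real engine
       error such as "cluster not connected" is reported here */
    if ((ha_err= ha_open(ndb, "t1.frm")))
      return ha_err;              /* propagate the real engine error */
    /* Only then set up the partition functions; the NDB handler can
       now answer get_no_parts() from data it cached during the open */
    fix_parts_func();
    return 0;
  }
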
How to repeat:
Create an NDB table and save away its .frm file. Then put this in bug.test:
copy_file $MYSQL_TEST_DIR/std_data/t1.frm $MYSQLTEST_VARDIR/master-data/test/t1.frm;
error 1296;
select * from test.t1;
and also create a bug-master.opt containing:
--ndbcluster
See the faulty error message "Incorrect key file for table %s; try to repair it"
Check the trace file
Suggested fix:
Move the setup of partition functions to after the table has been opened in the handler.
Fix 'ha_ndbcluster::get_no_parts' to just return the fragment count from the cached table information that it fetched during open.
Also change the function signature of handler::get_no_parts to _not_ include the table name. Since it works with an already opened table, that is superfluous information and just confusing.
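
A minimal sketch of the fixed function, assuming the handler keeps the
NdbDictionary::Table object fetched during open in a member (called
m_table here; the actual member and NDB API call may differ):

  /* Note: the table name argument is gone from the signature */
  int ha_ndbcluster::get_no_parts(uint *no_parts)
  {
    DBUG_ENTER("ha_ndbcluster::get_no_parts");
    /* Table is already open: return the cached fragment count,
       no dictionary round-trip and no risk of "failed to open" */
    *no_parts= m_table->getFragmentCount();
    DBUG_RETURN(0);
  }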
  
 
 