Bug #52109 Cluster/J assumes every table uses NDB storage engine
Submitted: 16 Mar 2010 18:29    Modified: 9 Jan 2015 16:41
Reporter: Todd Farmer (OCA)
Status: Closed
Category: MySQL Cluster: Cluster/J    Severity: S2 (Serious)
Version: 7.1.2    OS: Windows (XP)
Assigned to:    CPU Architecture: Any

[16 Mar 2010 18:29] Todd Farmer
Description:
Using a custom build provided by dev (the public beta does not start per BUG#52106), it appears that Cluster/J assumes that all tables use the NDB storage engine.  As a result, Cluster/J incorrectly attempts to route persistence operations to the NDB API even when the associated table uses InnoDB or some other storage engine.  The end result is an exception reporting that the table does not exist.

How to repeat:
Create a table with a non-NDB storage engine, then run an OpenJPA-based application against it.
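
For reference, a minimal sketch of this repro path, assuming Connector/J on the classpath and a hypothetical persistence unit named "clusterjpa-unit" configured for ClusterJPA; the table and entity names are made up. The mapped table is created with ENGINE=InnoDB via plain JDBC, and the persist is then (per the description above) routed to the NDB API and fails as if the table did not exist:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Id;
    import javax.persistence.Persistence;
    import javax.persistence.Table;

    // Hypothetical entity mapped to a table that is NOT stored in NDB.
    @Entity
    @Table(name = "towns")
    class Town {
        @Id
        int id;
        String name;
    }

    public class NonNdbTableRepro {
        public static void main(String[] args) throws Exception {
            // Create the mapped table with a non-NDB storage engine.
            Class.forName("com.mysql.jdbc.Driver"); // load Connector/J (pre-JDBC-4 style)
            Connection c = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
            Statement s = c.createStatement();
            s.executeUpdate("CREATE TABLE IF NOT EXISTS towns "
                    + "(id INT PRIMARY KEY, name VARCHAR(100)) ENGINE=InnoDB");
            s.close();
            c.close();

            // "clusterjpa-unit" is a placeholder persistence-unit name,
            // assumed to be configured for ClusterJPA (OpenJPA over Cluster/J).
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("clusterjpa-unit");
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();
            Town t = new Town();
            t.id = 1;
            t.name = "Uppsala";
            // Per this report, the operation is routed to the NDB API even
            // though the table is InnoDB, and fails as if the table did not exist.
            em.persist(t);
            em.getTransaction().commit();
            em.close();
            emf.close();
        }
    }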

Suggested fix:
I don't really know.  It seems that using JDBC would create two transaction contexts, while the user is only working with one.  I don't know how that could possibly map, or how it is being done today when "complex queries" are routed to JDBC.

If this is a known limitation of the product, it is not adequately documented.  One might infer from the mention of the lack of support for VIEWs that only NDB-based tables may be used, but it is not stated explicitly:

http://dev.mysql.com/doc/ndbapi/en/mccj-issues.html
[16 Mar 2010 21:29] Jon Stephens
I suspect that this is a docs issue, but I'm setting dev/lead to Craig/B. Ocklin and asking Craig to confirm this.

If my suspicion is true, please change Category/Assignee/Lead to Docs/me/Stefan, and I'll try to take it from there.

Thanks!
[16 Mar 2010 23:50] Craig Russell
True. Clusterjpa assumes all persistent classes are mapped to ndb tables. This is a doc issue, but it may also be possible (I need to take a closer look) for the code to confirm that the tables are indeed ndb tables. So I won't immediately reassign the bug...
[17 Mar 2010 3:36] Todd Farmer
I would be fine with such a limitation, if it were documented.  Supporting other storage engines would be great, but I can't see how that will be successful in the context of a single user transaction, where NDB tables are managed through the NDB API, while non-NDB tables get routed through a discrete transaction via JDBC.
[22 Mar 2010 20:43] Craig Russell
There are several separate issues here:

1. In clusterj, we simply refuse to operate if the table is not stored in the cluster. If the mapped table does not exist, the user will get a localized message [ERR_Get_NdbTable:Failure getting NdbTable for class {0}, table {1}.].

2. In clusterjpa, the user can have OpenJPA automatically create tables that don't exist; with the property openjpa.DBDictionary=TableType=ndb, the table will automatically be created in ndb (see the configuration sketch after this list).

3. If the table exists and is suitable for clusterjpa but is defined with a storage engine other than ndb, the clusterjpa code can detect this. The question is what to do: 

3a. throw an exception because the user clearly wanted clusterjpa to handle this table (the openjpa.ndb.failOnJDBCPath flag might be put into service for this)
or,
3b. let the normal jdbc path service requests for the table.
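
Regarding item 2, a hedged sketch of how such a configuration might be supplied programmatically. The DBDictionary property is taken verbatim from the comment above; the persistence-unit name, connection settings, and the openjpa.BrokerFactory and SynchronizeMappings entries are assumptions for illustration, not taken from this report:

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class NdbTableTypeConfig {
        public static void main(String[] args) {
            Map<String, String> props = new HashMap<String, String>();
            // Assumed setting that selects the ClusterJPA (ndb) back end.
            props.put("openjpa.BrokerFactory", "ndb");
            // Ask OpenJPA to create missing tables from the mappings.
            props.put("openjpa.jdbc.SynchronizeMappings", "buildSchema");
            // Property from the comment above: create missing tables in ndb.
            props.put("openjpa.DBDictionary", "TableType=ndb");
            // Hypothetical connection settings for the JDBC side of ClusterJPA.
            props.put("openjpa.ConnectionURL", "jdbc:mysql://localhost:3306/test");
            props.put("openjpa.ConnectionDriverName", "com.mysql.jdbc.Driver");

            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("clusterjpa-unit", props);
            // ... use emf as usual; tables created by OpenJPA should now be ndb tables.
            emf.close();
        }
    }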

3a is straightforward and is a valid fix for this issue. 

3b is problematic and would need help from the MySQL Cluster, MySQL Server, and Connector/J teams to implement as a feature. If the user has some tables in cluster and some in another storage engine, the assumption is that atomic transactions spanning both are guaranteed, and this cannot be done without coordination between the two transaction contexts. 

So at this point the best solution is to throw an exception (3a); that limitation can be revisited in the future.
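
For illustration, a minimal sketch of the kind of check that 3a implies, assuming a plain JDBC connection to the same server is available; the helper name and the exception type are made up here and are not the actual clusterjpa code:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class EngineCheck {

        // Throws if the mapped table exists but is not stored in NDB.
        // Illustrative only; the real code would raise its own localized
        // exception (cf. ERR_Get_NdbTable) rather than IllegalStateException.
        public static void verifyNdbEngine(Connection conn, String schema, String table)
                throws SQLException {
            String sql = "SELECT ENGINE FROM INFORMATION_SCHEMA.TABLES "
                    + "WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?";
            PreparedStatement ps = conn.prepareStatement(sql);
            try {
                ps.setString(1, schema);
                ps.setString(2, table);
                ResultSet rs = ps.executeQuery();
                if (rs.next()) {
                    String engine = rs.getString("ENGINE");
                    if (!"ndbcluster".equalsIgnoreCase(engine)) {
                        throw new IllegalStateException("Table " + schema + "." + table
                                + " uses ENGINE=" + engine
                                + "; verify that the table is defined with ENGINE=NDB.");
                    }
                }
                // If the table does not exist at all, the existing
                // ERR_Get_NdbTable path already reports the failure.
            } finally {
                ps.close();
            }
        }
    }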
[20 Sep 2010 21:42] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/118646

328 Craig L Russell	2010-09-20
      Bug#52109 Add to message ERR_Get_NdbTable: Verify that the table is defined with ENGINE=NDB.