Bug #29573 SQL Server DTS ODBC 3.51 Driver Out of Memory Error
Submitted: 5 Jul 2007 12:54    Modified: 31 May 2013 7:12
Reporter: Terry Anthony
Status: Closed
Category: Connector / ODBC    Severity: S3 (Non-critical)
Version: 3.51    OS: Windows (Dell 6850, Windows Server 2003 Enterprise, 32 GB RAM, 4-way)
Assigned to:    CPU Architecture: Any

[5 Jul 2007 12:54] Terry Anthony
Description:
The MySQL database table has over 21 million rows.  SQL Server DTS is used to insert data into the table nightly.  The connection to the MySQL database returns an out-of-memory error.  The transformation in the DTS package is a select query that uses parameters to determine how much data to transfer between SQL Server and MySQL.  The issue is intermittent: it does not occur every night.  I have turned on tracing in the ODBC driver, but I don't know what I am looking for.  Below is the error message I receive from SQL Server.  I have the trace file if that would help.

Daily 0007 - Replicate to OLR --Level1 --Child 005(Replicate to OLR OSPROSQLRPT04 - OnlineReports - Export_Five - 000:02:11): Executed as user: PASSCORP\svc_sql. ...   DTSRun:  Executing...   DTSRun OnStart:  DTSStep_DTSDynamicPropertiesTask_1   DTSRun OnFinish:  DTSStep_DTSDynamicPropertiesTask_1   DTSRun OnStart:  DTSStep_DTSExecuteSQLTask_4   DTSRun OnFinish:  DTSStep_DTSExecuteSQLTask_4   DTSRun OnStart:  DTSStep_DTSDataPumpTask_1   DTSRun OnProgress:  DTSStep_DTSDataPumpTask_1; 583 Rows have been transformed or copied.; PercentComplete = 0; ProgressCount = 583   DTSRun OnFinish:  DTSStep_DTSDataPumpTask_1   DTSRun OnStart:  DTSStep_DTSExecuteSQLTask_3   DTSRun OnFinish:  DTSStep_DTSExecuteSQLTask_3   DTSRun OnStart:  DTSStep_DTSExecuteSQLTask_1   DTSRun OnFinish:  DTSStep_DTSExecuteSQLTask_1   DTSRun OnStart:  DTSStep_DTSExecuteSQL
Task_2   DTSRun OnFinish:  DTSStep_DTSExecuteSQLTask_2   DTSRun OnStart:  DTSStep_DTSDataPumpTask_10   DTSRun OnError:  DTSStep_DTSDataPumpTask_10, Error = -2147217871 (80040E31)      Error string:  [MySQL][ODBC 3.51 Driver][mysqld-5.0.4-beta-nt]MySQL clien...  Process Exit Code 1.  The step failed.

How to repeat:
Create a table in MySQL with 10 columns.

Load the table with 21 million rows.

Create a SQL Server DTS package to insert rows into the table.

The failure occurs once nightly.  The server needs to be extremely busy, with 27 GB of the 32 GB of memory allocated to the SQL Server instance.  A minimal ODBC read that reproduces the memory pressure outside DTS is sketched below.
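The following C/ODBC fragment is only a sketch of the failing read outside DTS: the DSN name mysql_olr, the login, and the table name big_table are hypothetical placeholders, not values from this report. With the driver's default full result-set caching, the entire table is buffered client-side before the first row is returned, which is where the memory goes.

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;
    long rows = 0;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* mysql_olr, user, and secret are placeholders for the real DSN/login */
    SQLConnect(dbc, (SQLCHAR *)"mysql_olr", SQL_NTS,
               (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"secret", SQL_NTS);
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* With the driver's default settings, the entire multi-million-row
       result set is materialized in client memory before rows are
       handed back to the application. */
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT * FROM big_table", SQL_NTS);
    while (SQL_SUCCEEDED(SQLFetch(stmt)))
        rows++;
    printf("fetched %ld rows\n", rows);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}

Run against a table of this size on an already memory-starved server, the client-side buffering alone is enough to trigger the out-of-memory error.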
[5 Jul 2007 13:41] MySQL Verification Team
Thank you for the bug report. Are you using the rather old server version
mysqld-5.0.4-beta-nt? Could you please test with the latest released server
version? Note that to upgrade you need to read the instructions in our Manual
on how to upgrade from that older version. Thanks in advance.
[5 Aug 2007 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[1 Apr 2008 8:18] Johannes Kingma
Tested with MySQL ODBC drivers 3.51 and 5.1 and MSSQL 2000.

A DTS package that transfers data to a large, remote table on a MySQL (5) database either takes an extremely long time or fails with a memory error.
With MySQL Administrator it is clear why this happens: a select * from `catalog`.`large table` is performed, and if the table is large enough the query fails. It is unclear why a select * is performed prior to the insert.

A similar DTS package sending data to another MSSQL database works without a problem.
[31 May 2013 7:12] Bogdan Degtyariov
Thank you for your bug report. This issue has already been fixed in the release version of that product, which you can download at

  http://www.mysql.com/downloads/

Due to limitations of the MySQL network protocol, the result of a SELECT query has to be read completely; not a single row can be skipped or discarded once the query is executed.
There are three possibilities:

1. Read the entire (huge) result set into memory. This is the fastest way and
   is set by default; however, it might consume a lot of memory.
2. Read records one by one: set NO_CACHE=1 in the connection string or enable
   the "Don't cache result of forward-only cursor" option.
3. Read data in chunks (PREFETCH=number_of_rows_to_prefetch). Recommended for
   large tables and queries that return all table rows. This impacts server
   performance because the SELECT query is re-run with LIMIT each time a new
   set of rows is to be fetched (see the connection sketch after this list).
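As an illustration of option 3, here is a minimal C/ODBC connection sketch. The host, database, credentials, and the prefetch size of 10000 are placeholder assumptions, not values from this report; PREFETCH and NO_CACHE are the connection-string options described above.

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLCHAR out[1024];
    SQLSMALLINT outlen;
    SQLRETURN rc;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* Option 3: fetch in chunks of 10000 rows instead of buffering the
       whole result set client-side (option 1, the default). For option 2
       replace PREFETCH=10000 with NO_CACHE=1. Host, database, and login
       below are placeholders. */
    rc = SQLDriverConnect(dbc, NULL,
             (SQLCHAR *)"DRIVER={MySQL ODBC 3.51 Driver};SERVER=myhost;"
                        "DATABASE=mydb;UID=user;PWD=secret;PREFETCH=10000;",
             SQL_NTS, out, (SQLSMALLINT)sizeof(out), &outlen,
             SQL_DRIVER_NOPROMPT);

    if (SQL_SUCCEEDED(rc))
        SQLDisconnect(dbc);
    else
        fprintf(stderr, "connection failed\n");

    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}

With PREFETCH set, the driver issues the SELECT with LIMIT behind the scenes, so the client never holds the full 21-million-row table in memory; the trade-off, as noted above, is the extra server work of repeated bounded reads.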