Bug #29573 | SQL Server DTS ODBC 3.51 Driver Out of Memory Error | ||
---|---|---|---|
Submitted: | 5 Jul 2007 12:54 | Modified: | 31 May 2013 7:12 |
Reporter: | Terry Anthony | Email Updates: | |
Status: | Closed | Impact on me: | |
Category: | Connector / ODBC | Severity: | S3 (Non-critical) |
Version: | 3.51 | OS: | Windows (Dell 6850 2003 server enterprise 32 GB ram, 4 way) |
Assigned to: | | CPU Architecture: | Any
[5 Jul 2007 12:54]
Terry Anthony
[5 Jul 2007 13:41]
MySQL Verification Team
Thank you for the bug report. Are you using the rather old server version mysqld-5.0.4-beta-nt? Could you please test with the latest released server version? Note that to upgrade you need to read the instructions in our Manual on how to upgrade from that older version. Thanks in advance.
[5 Aug 2007 23:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[1 Apr 2008 8:18]
Johannes Kingma
Tested with MySQL ODBC driver 3.51 and 5.1 and MSSQL 2000. A DTS package that transfers data to a large, remote table on a MySQL (5) database either takes an extremely long time or fails with an out-of-memory error. With MySQL Administrator it is clear why this is: a SELECT * FROM `catalog`.`large table` is performed. If the table is large enough, the query fails. It is unclear why a SELECT * is performed prior to the insert. A similar DTS package sending to another MSSQL database works without a problem.
[31 May 2013 7:12]
Bogdan Degtyariov
Thank you for your bug report. This issue has already been fixed in the release version of that product, which you can download at http://www.mysql.com/downloads/

Due to limitations of the MySQL network protocol, the result of a SELECT query has to be read completely; not a single row can be skipped or discarded once the query is executed. There are three possibilities:

1. Read the entire (huge) result set into memory. This is the fastest way and is the default, but it might consume a lot of memory.
2. Read records one by one: set NO_CACHE=1 in the connection string or enable "Don't cache result of forward-only cursor".
3. Read data in chunks (PREFETCH=number_of_rows_to_prefetch). Recommended for large tables and queries that return all table rows. This impacts server performance because the SELECT query is run with LIMIT each time a new set of rows is to be fetched.
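For illustration, here is a minimal sketch of how the three strategies above could be expressed as Connector/ODBC connection strings, assuming Python with pyodbc as the ODBC client. The NO_CACHE and PREFETCH options come from the comment above; the driver name, host, credentials, database, and table are hypothetical placeholders.

```python
# Sketch only: illustrates the three result-set strategies described above
# via the Connector/ODBC connection-string options NO_CACHE and PREFETCH.
# Host, credentials, database, and table names are placeholders.
import pyodbc

BASE = (
    "DRIVER={MySQL ODBC 3.51 Driver};"
    "SERVER=myhost;DATABASE=mydb;UID=myuser;PWD=mypassword;"
)

# 1. Default: the whole result set is buffered in client memory
#    (fastest, but can exhaust memory on very large tables).
conn_default = pyodbc.connect(BASE)

# 2. NO_CACHE=1: rows are streamed one by one ("Don't cache result of
#    forward-only cursor"), keeping client memory usage flat.
conn_nocache = pyodbc.connect(BASE + "NO_CACHE=1;")

# 3. PREFETCH=N: the driver fetches N rows at a time by re-running the
#    SELECT with LIMIT; memory stays bounded at the cost of extra server work.
conn_prefetch = pyodbc.connect(BASE + "PREFETCH=10000;")

for conn in (conn_default, conn_nocache, conn_prefetch):
    cur = conn.cursor()
    cur.execute("SELECT * FROM `catalog`.`large_table`")
    for row in cur:
        pass  # process each row here
    conn.close()
```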