Bug #35179 Incorrect usage of Queue<T> causes an intermittent error
Submitted: 10 Mar 2008 12:42 Modified: 10 Mar 2008 19:31
Reporter: Alex OKhrimenko Email Updates:
Status: Not a Bug   Impact on me: None
Category: Connector / NET   Severity: S1 (Critical)
Version: 5.0.8.1-5.1.5   OS: Windows
Assigned to:   CPU Architecture: Any
Tags: AddNew, ProcedureCache

[10 Mar 2008 12:42] Alex OKhrimenko
Description:
Our company develops community portals and uses MySQL and MySQL Connector/NET 5.1.5.
Every 7-10 days we hit a problem in the MySQL Connector library that lasts 3-5 minutes; during that time our site does not work at all.
The following error arises:
--------------------------------------------------------------------------------
System.ArgumentException: Source array was not long enough. Check srcIndex and length, and the array's lower bounds.
Generated: Fri, 07 Mar 2008 09:02:33 GMT
System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ArgumentException: Source array was not long enough. Check srcIndex and length, and the array's lower bounds.
   at System.Array.Copy(Array sourceArray, Int32 sourceIndex, Array destinationArray, Int32 destinationIndex, Int32 length, Boolean reliable)
   at System.Collections.Generic.Queue`1.SetCapacity(Int32 capacity)
   at System.Collections.Generic.Queue`1.Enqueue(T item)
   at MySql.Data.MySqlClient.ProcedureCache.AddNew(MySqlConnection connection, String spName)
   at MySql.Data.MySqlClient.ProcedureCache.GetProcedure(MySqlConnection conn, String spName)
   at MySql.Data.MySqlClient.StoredProcedure.GetParameters(String procName)
   at MySql.Data.MySqlClient.StoredProcedure.Resolve()
   at MySql.Data.MySqlClient.MySqlCommand.ExecuteReader(CommandBehavior behavior)
   at MySql.Data.MySqlClient.MySqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
   at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
   at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
   at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
   at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)
   at Seek4.Core.Dal.SqlEngine.LoadDataSetInternal(IDbCommand cmd, DataSet dataSet, String[] tableNames)
   at Seek4.Core.Dal.SqlEngine.LoadDataSet(IDbCommand cmd, DataSet dataSet, String[] tableNames)
-----------------------------------------------------------------------
A small investigation showed that the cause of the error is incorrect usage of Queue<T>, which is not a thread-safe collection.
Here is the code that causes the error:

        private DataSet AddNew(MySqlConnection connection, string spName)
        {
            DataSet procData = GetProcData(connection, spName);
            if (maxSize > 0)
            {
                int hash = spName.GetHashCode();
                lock (procHash.SyncRoot)
                {
                    if (procHash.Keys.Count >= maxSize)
                        TrimHash();
                    if (!procHash.ContainsKey(hash))
                    {
                        procHash[hash] = procData;
                        // hashQueue should be locked everywhere it is accessed
                        // Queue<T> is not thread-safe
                        hashQueue.Enqueue(hash);
                    }
                }
            }
            return procData;
        }

The AddNew method works with both the Hashtable and the Queue<T>, but only the Hashtable's lock is taken; the Queue<T> must be protected by a lock on every path that accesses it as well. This is the cause of the error.
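
To make the point concrete, here is a minimal sketch (hypothetical names, not the actual connector code) of the pattern described above: one path enqueues while holding the Hashtable's lock, another path dequeues without taking any lock, and the two still race on the queue's internal array.

using System.Collections;
using System.Collections.Generic;

class CacheSketch
{
    private readonly Hashtable procHash = new Hashtable();
    private readonly Queue<int> hashQueue = new Queue<int>();

    // Path 1: enqueues while holding the Hashtable's lock.
    public void Add(int hash, object data)
    {
        lock (procHash.SyncRoot)
        {
            procHash[hash] = data;
            hashQueue.Enqueue(hash);            // protected only by procHash.SyncRoot
        }
    }

    // Path 2: dequeues WITHOUT taking the same lock.
    // Even though Add is locked, this call races with Enqueue above,
    // because Queue<T> itself is not thread-safe.
    public void Trim()
    {
        if (hashQueue.Count > 0)                // unsynchronized read
        {
            int oldest = hashQueue.Dequeue();   // unsynchronized Dequeue
            procHash.Remove(oldest);
        }
    }
}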

How to repeat:
The bug is intermittent, so it is difficult to reproduce inside the MySQL Connector library itself.
We can see the bug with a simple example:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class Worker
{
    public void Start()
    {
        queue = new Queue<string>();

        Thread[] threads = new Thread[maxThreads];
        for (int i = 0; i < threads.Length; i++)
            threads[i] = new Thread(PopulateQueue);

        Array.ForEach(threads, delegate(Thread t) { t.Start(); });
        Array.ForEach(threads, delegate(Thread t) { t.Join(); });
        
        Debug.Assert(queue.Count == maxThreads * maxIterations);
    }

    void PopulateQueue()
    {
        for (int i = 0; i < maxIterations; i++)
        {
            queue.Enqueue("foo");
        }
    }

    // volatile only guarantees visibility of the reference;
    // it does not make Queue<T>'s operations thread-safe
    volatile Queue<string> queue;

    const int maxThreads = 5;
    const int maxIterations = 1000000;
}

This code ran successfully at least a dozen times, then suddenly blew up with the following exception: 
System.ArgumentException was unhandled
  Message="Destination array was not long enough. 
           Check srcIndex and length, and the array's lower bounds."
  Source="mscorlib"
  ParamName=""
  StackTrace:
       at System.Array.Copy ...
       at System.Collections.Generic.Queue`1.SetCapacity ...
       at System.Collections.Generic.Queue`1.Enqueue ...
       at Worker.PopulateQueue ...
       ...
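
For comparison, serializing every Enqueue on the queue makes the assertion in the example hold. A minimal sketch, assuming only PopulateQueue changes:

    void PopulateQueue()
    {
        for (int i = 0; i < maxIterations; i++)
        {
            // Queue<T> instance members are not guaranteed to be thread-safe,
            // so all writers take the same lock.
            lock (queue)
            {
                queue.Enqueue("foo");
            }
        }
    }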

Suggested fix:
There are several ways to fix this:
1) Use a synchronized (thread-safe) wrapper for the Hashtable:
public ProcedureCache(int size)
{
  ...
  procHash = Hashtable.Synchronized(new Hashtable(maxSize));
  ...
}

Then, in the AddNew method, it is the Queue<T> rather than the Hashtable that needs to be locked:
        
private DataSet AddNew(MySqlConnection connection, string spName)
{
 DataSet procData = GetProcData(connection, spName);
 if (maxSize > 0)
 {
  int hash = spName.GetHashCode();
  if (procHash.Keys.Count >= maxSize)
   TrimHash();
  if (!procHash.ContainsKey(hash))
  {
   procHash[hash] = procData;
   //hashQueue is locked
   lock (hashQueue)
   {
    hashQueue.Enqueue(hash);
   }
  }
 }
 return procData;
}

2) Alternatively, keep the existing lock on the Hashtable and add a nested lock on the Queue<T> in the AddNew method:
private DataSet AddNew(MySqlConnection connection, string spName)
{
 DataSet procData = GetProcData(connection, spName);
 if (maxSize > 0)
 {
  int hash = spName.GetHashCode();
  lock (procHash.SyncRoot)
  {
   if (procHash.Keys.Count >= maxSize)
    TrimHash();
   if (!procHash.ContainsKey(hash))
   {
    procHash[hash] = procData;
    //add lock for hashQueue
    lock (hashQueue)
    {
     hashQueue.Enqueue(hash);
    }
   }
  }
 }
 return procData;
}
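
A variation on fix 2 (a sketch only, not taken from the connector sources; cacheLock is a hypothetical new field) is to guard both collections with a single dedicated lock object instead of nesting two locks. Every other method that touches procHash or hashQueue, such as TrimHash, would have to take the same lock.

private readonly object cacheLock = new object();

private DataSet AddNew(MySqlConnection connection, string spName)
{
    DataSet procData = GetProcData(connection, spName);
    if (maxSize > 0)
    {
        int hash = spName.GetHashCode();
        lock (cacheLock)   // one lock covers both collections
        {
            if (procHash.Keys.Count >= maxSize)
                TrimHash();
            if (!procHash.ContainsKey(hash))
            {
                procHash[hash] = procData;
                hashQueue.Enqueue(hash);
            }
        }
    }
    return procData;
}
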
[10 Mar 2008 19:31] Reggie Burnett
I don't see this as a bug, and I think it might be at least a partial duplicate of bug #34338. We moved the code that trims the hash inside the lock for the 5.1.6 release. This means that our queue will not be accessed at all unless it is inside a lock, which should be OK.