Bug #2323 Table caching problem? data doesn't update
Submitted: 8 Jan 2004 9:20 Modified: 26 Feb 2004 13:46
Reporter: [ name withheld ] Email Updates:
Status: No Feedback Impact on me:
Category:MySQL Server Severity:S1 (Critical)
Version:4.0.15 OS:MacOS (OSX 10.3.2)
Assigned to: CPU Architecture:Any

[8 Jan 2004 9:20] [ name withheld ]
Tables don't appear to be updating; this looks to me like a cache update problem, but I'm not the
expert! In this example, the problem is 'fixed' by flushing tables. In other examples, putting the
FLUSH TABLES command in a script failed to work, and I actually had to shut down the MySQL server and
start it again to get it to see the changes. I am running on a single-user machine, and I believe there is
only one process running when this problem occurs. I am using Aaron Faby's MySQL build
(www.serverlogistics.com), which I think is a completely 'standard' build.

Here's some code output that nicely illustrates the problem: 

/* Note that query returns 0 results */
SELECT Rec_Id, "00", county_cd
FROM PURDB.PURPermutations2 WHERE Rec_Id="PC100"
GROUP BY Rec_Id, county_cd;

| Rec_Id | 00 | county_cd |
Number of Results: 0

FLUSH TABLES;  <------------------- NOTE FLUSH TABLES HERE
RESULT: 0 Rows affected.

/* SAME query as before gets results now */
SELECT Rec_Id, "00", county_cd
FROM PURDB.PURPermutations2 WHERE Rec_Id="PC100"
GROUP BY Rec_Id, county_cd;

| Rec_Id | 00 | county_cd |
| PC100  | 00 | 24        |
Number of Results: 1
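Since the behavior disappears after FLUSH TABLES, one thing worth checking is whether the query cache is involved. This is a sketch on my part, not something confirmed in the report; the statements are standard MySQL 4.0 syntax, and the query is the failing one from above:

```sql
-- See whether the query cache is enabled and how heavily it is used
SHOW VARIABLES LIKE 'query_cache%';
SHOW STATUS LIKE 'Qcache%';

-- Re-run the failing query while bypassing the query cache
SELECT SQL_NO_CACHE Rec_Id, "00", county_cd
FROM PURDB.PURPermutations2 WHERE Rec_Id="PC100"
GROUP BY Rec_Id, county_cd;

-- Clear only the query cache (much narrower than FLUSH TABLES)
RESET QUERY CACHE;
```

If the SQL_NO_CACHE run returns the row while the plain run does not, the query cache is the culprit; if both behave the same, the problem is elsewhere (e.g. table corruption).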

How to repeat:
See code above. Here's the conf file in case you need it; this is running on a G4 PowerBook with 1 GB
of RAM:

# SHO Edited 101503
# note, mysql won't read this file if the world has write privileges
# my tables are all MyISAM tables
# references
# http://www.mysql.com/doc/en/SHOW_VARIABLES.html

# The following options will be passed to all MySQL clients
#password	= your_password
port		= 3306
socket		= /tmp/mysql.sock

# The MySQL server variables are here. Note that the square brackets indicate the program or
# process, and the isamchk etc. at the bottom are NOT related to the mysqld variables.

# ••• General settings unrelated to memory allocation •••
log-slow-queries = /Library/MySQL/var/mySQLSlowQueries.log
long_query_time = 10
# allow upper and lower case names
lower_case_table_names = 0
port		= 3306
socket		= /tmp/mysql.sock

# set word length lower to catch ddt

# ••• shared by all threads •••
# Used for searching indexes
# huge=8

# see http://www.mysql.com/doc/en/Query_Cache_Configuration.html

#for dev
# query_cache_size=4M NOT NEEDED FOR DEV
# need to add query_cache_limit, currently at 1M; most queries are 8k or less, little of cache is
# being used

#••• per thread, max threads I've seen: 8 •••
# 16 threads x 24M = 384M

#used for all sorts, huge=2M

#for dev

# for sequential reads (no sort), 128k default
#for dev
# read_buffer_size=8M

# for out of order reads (group and order by), 256k default

#••• General or applies to certain situations only •••
# Used for index creation http://www.mysql.com/doc/en/SHOW_VARIABLES.html

# tmp tables over this size go to disk 32 M is default
#for dev
# tmp_table_size=64M
# Sun AM: still seeing too many tmp-to-disk tables; SHOW STATUS LIKE 'created_tmp%'

# used for full joins only; join_buffer_size = 128k default, could be common
#for dev
# join_buffer_size=8M

# default is 8M per thread but rare

# only look at the first x chars when sorting, default is 1024

set-variable	= max_allowed_packet=16M


set-variable	= key_buffer=256M
set-variable	= sort_buffer=256M
set-variable	= read_buffer=2M
set-variable	= write_buffer=2M


# Joins without keys: join_buffer_size

# Indexing: myisam_sort_buffer_size

# Using indexes: key_buffer_size

# General Queries: max_heap_table_size,read_buffer

# Order by: record_rnd_buffer,sort_buffer

# Inserts, bulk_insert_buffer_size,key_buffer_size
# http://www.mysql.com/doc/en/Insert_speed.html

# join_buffer_size (p249)
# used for joins without keys 

# key_buffer_size (p249)
# SHOW STATUS LIKE 'Key_read%'; if the Key_reads/Key_read_requests ratio is over 1%, increase
# recommend 25% of system memory; Sun AM: .002

# max_heap_table_size  16M max table size in memory for complex queries p251

# max_join_size, set low on dev and high on deploy

# myisam_max_extra_sort_file_size p252, unclear how to use

#myisam_sort_buffer_size p253 used when rebuilding keys for sorting key values
# recommend 25% of ram

#read_buffer p254 (used to be record_buffer) http://www.mysql.com/doc/en/
#recommend size = size of record*typical select number
# allocated per thread 

#read_rnd_buffer_size (was record_rnd_buffer) p254 used for read-ahead caching, used for ORDER BY
# allocated per thread, 128k default; NOT settable?

#sort_buffer p255 used for group by and order by
#recommend size = 2M
# allocated per thread 2M default

#tmp_table_size p257 this is the value ABOVE which the system will switch to disk-based temp tables
#recommend size = 32M

#bulk_insert_buffer_size p257 affects LOAD DATA, inserts
#recommend size = 8M set to 64M

#query_cache_size p258 used to cache repeated queries, possibly only useful on deploy

# note log format mentioned on pg265
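The key-cache tuning rule mentioned in the comments above (grow key_buffer if the miss ratio passes 1%) can be checked with a single standard statement; the 1% threshold is the config file's own rule of thumb, not an official recommendation:

```sql
-- Key_reads / Key_read_requests gives the key cache miss ratio;
-- per the note above, a ratio over ~1% suggests raising key_buffer
SHOW STATUS LIKE 'Key_read%';
```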
[8 Jan 2004 11:29] Dean Ellis
We would need a table and data to go along with the query in order to test it...  Please attach a script to create and populate the table with enough data to demonstrate the issue.
[17 Jan 2004 12:16] MySQL Verification Team
Thanks for writing to us.

In order to be able to repeat the problem we need your table.

Please upload it tarred and gzipped (or just zipped) to the above "Files" tab.

Thanks in advance.
[26 Jan 2004 13:46] [ name withheld ]

The problem only shows up when you push a lot of data at it, and my datasets are too complex to send
you in full. That said, I think I have a description that will let you reproduce the problem. The
scripts that cause the problem are below; the symptom is that the table 'RankTable' gets corrupted.
I think the sequence that leads to the problem is
a) creating a blank table with indexes
b) importing a lot of data into that table

The workaround, I think, is to repair the table after each import.

Hope this helps.

DROP TABLE IF EXISTS Symptoms.RankTable;
CREATE TABLE Symptoms.RankTable (
	Rec_Id	 			char(12),
	Type				char(80),
	Value1 				char(255),
	Value2 				char(255),
	Value3				char(255),
	Value4				char(255),
	Value5				TEXT,
	Points				int(8),
	pKey int not null auto_increment primary key,
	INDEX (Rec_Id),
	INDEX TypeVal1Rec (Type,Value1,Rec_Id),
	INDEX Val1TypeRec (Value1,Type,Rec_Id),
	INDEX Val2TypeRec (Value2,Type,Rec_Id),
	INDEX Val3TypeRec (Value3,Type,Rec_Id),
	INDEX Val4TypeRec (Value4,Type,Rec_Id),
	FULLTEXT Value5 (Value5)
);

DELETE FROM Symptoms.RankTable WHERE Type="Reg";
INSERT INTO Symptoms.RankTable (Rec_Id, Type, Value1, Points) 
SELECT  CCConsolidated.Rec_Id, "Reg", CCConsolidated.Country, SymptomPoints
FROM Regulatory.CCConsolidated 
INNER JOIN Regulatory.Countries ON CCConsolidated.Country= Countries.Country 
INNER JOIN General.CountryList ON CCConsolidated.Country= CountryList.CountryShort 
LEFT JOIN PANChem.Chemicals ON CCConsolidated.Rec_Id = Chemicals.Rec_Id
WHERE Data_Status LIKE "%registration%" and Key_CountryChem!="Reglist" ;

DELETE FROM Symptoms.RankTable WHERE Type="Reg" AND Value1="United States";
INSERT INTO Symptoms.RankTable (Rec_Id, Type, Value1, Points) 
SELECT  Rec_Id, "Reg", "United States", "30"
FROM PANChem.Chemicals WHERE EPA_Reg="Yes";
REPAIR TABLE Symptoms.RankTable;
/* NOTE: the Reglist feature might be droppable if CCConsolidated can be cleaned up. */

INSERT INTO Symptoms.RankTable (Rec_Id, Type, Value1, Value2, Value4, Points)
SELECT NCFAPUseFull.Rec_Id, Type, "United States", State, PSC AS Site_Code,
Round(10*Log10(300*Sum(ACTrt97)/Sum(Ac97))) AS points
	FROM PURdb.NCFAPUseFull INNER JOIN Symptoms.SiteRollups ON
	State != "California"

DELETE FROM Symptoms.RankTable WHERE Type LIKE ('Use%') AND Points<1;
DELETE FROM Symptoms.RankTable WHERE Points IS NULL;
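The workaround described above (repairing the table after each bulk import) might be scripted like this. This is a sketch, not from the original report: CHECK TABLE and REPAIR TABLE are standard MySQL statements for MyISAM tables, and only the table name comes from the report:

```sql
-- After each bulk import into RankTable: verify the MyISAM table,
-- repair it if corrupted, and close/reopen all tables so later
-- queries see fresh data
CHECK TABLE Symptoms.RankTable;
REPAIR TABLE Symptoms.RankTable;
FLUSH TABLES;
```

Running CHECK TABLE first and logging its output would also tell the verification team whether the corruption is reproducible, even without the full dataset.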
[14 Feb 2005 22:54] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".