Bug #34312 mysqld spontaneously jumps to high cpu (100%) usage
Submitted: 5 Feb 2008 15:39 Modified: 5 Sep 2012 3:33
Reporter: Brian Blood
Status: Closed
Category: MySQL Server: General    Severity: S2 (Serious)
Version: 5.0.67    OS: MacOS (10.4 - 64bit)
Assigned to: Heikki Tuuri    CPU Architecture: Any
Tags: 100%, cpu, high, restart

[5 Feb 2008 15:39] Brian Blood
Description:
10.4.11 - Xserve G5 2.0Ghz-DP - 4GB RAM
using the 64-bit version of mysqld

mysqld runs "normally" for a while, with expected peaks and troughs of activity.

At some point, mysqld seems to have some thread go into runaway mode: mysqld will sit at 100% CPU even with near-zero query activity in the dead of night.

SHOW PROCESSLIST shows all threads as idle

This graph from our Cacti setup shows the load average of the machine for a 4-5 week period:
http://www3.macserve.net/pics/lindbergh_loadavg.png

The trough in the middle is when I restarted mysqld:
080118 10:13:58  mysqld ended
080118 10:14:00  mysqld started

Operationally, mysqld runs without fault.
I would just like my cpu cycles back.

Primary clients connecting are Exim, Courier, and PHP5, through /tmp/mysql.sock.

How to repeat:
Let it run for a few days.

Suggested fix:
I can temporarily get mysqld back into nominal CPU utilization by stopping and starting mysqld.
[5 Feb 2008 18:08] Valeriy Kravchuk
Thank you for the problem report.

Do you use InnoDB? Can you try installing a newer version, 5.0.51a, and check whether this problem is repeatable with it?
[5 Feb 2008 18:28] Brian Blood
InnoDB is activated, but not used.

One thing I did consider is that the InnoDB tablespace IS placed inside the datadir:

datadir  = /var/mysql/data

innodb_data_home_dir            = /var/mysql/data/ibdata
innodb_log_group_home_dir       = /var/mysql/data/ibdata/logs
innodb_log_arch_dir             = /var/mysql/data/ibdata/logs
innodb_data_file_path           = ibdata1:128M:autoextend

I guess I should move ibdata up a directory into just /var/mysql

---

I will try moving the InnoDB tablespace first and, if the problem comes back, then try upgrading to 5.0.51a.
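
For reference, the relocated layout I have in mind would look roughly like this in my.cnf (the exact paths are just my sketch, not a tested recommendation):

datadir                   = /var/mysql/data

# InnoDB files moved out from under the datadir
innodb_data_home_dir      = /var/mysql/ibdata
innodb_log_group_home_dir = /var/mysql/ibdata/logs
innodb_log_arch_dir       = /var/mysql/ibdata/logs
innodb_data_file_path     = ibdata1:128M:autoextend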
[6 Feb 2008 18:58] Valeriy Kravchuk
Please inform us of any results.
[12 Feb 2008 23:58] Brian Blood
Almost a week after moving the InnoDB tablespace, and no spike.
[13 Feb 2008 8:48] Susanne Ebrecht
Brian,

You never used InnoDB, and changing the directory solved your problem.

Do I understand this right?
[13 Feb 2008 22:05] Brian Blood
Correct. One other possibly related item: I sometimes use phpMyAdmin to access the server, and it would show the ibdata directory as a "database". As such, PMA likely does a "USE ibdata" and then a "SHOW TABLES" from ibdata.

Not sure if those would merely produce errors or what....
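
My guess at what PMA issues against that stray directory (not verified, just a sketch):

-- a directory under datadir shows up as a "database", so this presumably succeeds
USE ibdata;
-- presumably returns an empty set, since there are no .frm files in ibdata
SHOW TABLES;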

Thanks
[15 Feb 2008 14:05] Heikki Tuuri
It is unlikely InnoDB caused the 100 % CPU usage if you did not use InnoDB.

If you do use InnoDB, and observe 100 % CPU usage, then please print SHOW INNODB STATUS\G, so that we can see what InnoDB is doing.
[22 Feb 2008 3:22] Brian Blood
At approximately 19:00 Dallas time tonight, the CPU usage for mysqld has jumped into its high state. This is after running for 2 weeks with normal utilization patterns.

No changes to configuration/OS within the past 2 weeks other than moving the ibdata directory.
[22 Feb 2008 3:27] Brian Blood
I did perform a SHOW INNODB STATUS and it showed practically zero activity.
No active threads, obviously, and:

=====================================
080221 21:14:44 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 54 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 4, signal count 4
Mutex spin waits 0, rounds 20, OS waits 1
RW-shared spins 4, OS waits 2; RW-excl spins 1, OS waits 1
------------
TRANSACTIONS
------------
Trx id counter 0 2816
Purge done for trx's n:o < 0 0 undo n:o < 0 0
History list length 0
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, OS thread id 65890304
MySQL thread id 2208114, query id 32769483 localhost root
SHOW INNODB STATUS
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
25 OS file reads, 3 OS file writes, 3 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2,
0 inserts, 0 merged recs, 0 merges
Hash table size 17393, used cells 0, node heap has 0 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 0 43664
Log flushed up to   0 43664
Last checkpoint at  0 43664
0 pending log writes, 0 pending chkp writes
8 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 21322859; in additional pool allocated 676352
Buffer pool size   512
Free buffers       493
Database pages     19
Modified db pages  0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 19, created 0, written 0
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread id 50491392, state: waiting for server activity
Number of rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
[21 May 2008 18:25] Kevin Metcalf
I have also observed this issue on a dual G5 server with 4 GB of RAM running OS 10.4.  The server ran rock solid for at least a year, but when I upgraded to the 64-bit version of MySQL, I began to notice issues like this.

On my multi-processor box, /usr/bin/sar shows the usage as "50%" because it's only one processor at full capacity.  When both processors do this, the system grinds to a halt and must be restarted.  I have written a simple monitoring script that logs processor usage every 60 seconds, and when I notice it's heavily used, I use:

[root@host ~]$ /usr/local/mysql/bin/mysqladmin -p shutdown
[root@host ~]$ /usr/local/mysql/bin/safe_mysqld

to return the system to a more stable configuration.
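
Roughly, the check looks like this (a simplified sketch, not my exact script; the 90% threshold, log path, and unattended-restart details are placeholders):

#!/bin/sh
# sum the %CPU of all mysqld processes and log it
CPU=`ps -axo pcpu,comm | awk '$2 ~ /(^|\/)mysqld$/ {sum += $1} END {printf "%d\n", sum+0}'`
echo "`date` mysqld cpu=${CPU}%" >> /var/log/mysqld-cpu.log
# if it looks pegged, bounce the server
if [ "${CPU:-0}" -gt 90 ]; then
    echo "`date` mysqld over threshold, restarting" >> /var/log/mysqld-cpu.log
    /usr/local/mysql/bin/mysqladmin -p shutdown    # -p prompts; unattended use would need credentials in ~/.my.cnf
    /usr/local/mysql/bin/safe_mysqld &
fi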

Should I consider moving back to the 32bit version of MySQL to observe whether this is more stable?  Has anyone else observed this phenomenon?
[23 May 2008 19:36] Brian Blood
I've now seen this condition occur on two other Xserve G4s, both running the 32-bit build of 5.0.51a on 10.4.
So I would surmise it is not a 64/32-bit issue.
[21 Jul 2008 18:00] Brian Blood
Even with the latest 5.0.51b binary distribution, the high-CPU condition still occurs spontaneously.
[17 Sep 2008 20:29] Mark Lundquist
I have been seeing what appears to be the same problem on an Intel Core Duo Xserve, also running OS X Server 10.4.11.  I am not using InnoDB.

I caught a little ktrace on the process while it was spinning its wheels:

 22652 mysqld   CALL  sigwait(0xb0438f1c,0xb0438f18)
 22652 mysqld   RET   sigwait 0
 22652 mysqld   CALL  pthread_sigmask(0x3,0x4fe568,0xb0438eac)
 22652 mysqld   RET   pthread_sigmask 0
 22652 mysqld   CALL  setitimer(0,0xb0438e60,0xb0438e50)
 22652 mysqld   RET   setitimer 0
 22652 mysqld   CALL  pthread_sigmask(0x3,0xb0438eac,0)
 22652 mysqld   RET   pthread_sigmask 0
 22652 mysqld   CALL  sigwait(0xb0438f1c,0xb0438f18)
 22652 mysqld   RET   sigwait 0
 22652 mysqld   CALL  pthread_sigmask(0x3,0x4fe568,0xb0438eac)
 22652 mysqld   RET   pthread_sigmask 0
 22652 mysqld   CALL  setitimer(0,0xb0438e60,0xb0438e50)
 22652 mysqld   RET   setitimer 0
 22652 mysqld   CALL  pthread_sigmask(0x3,0xb0438eac,0)
 22652 mysqld   RET   pthread_sigmask 0

Maybe this will help somebody figure out the bug...
[22 Oct 2008 17:48] Brian Blood
mysqld sample

Attachment: mysqld-gonewild-20081022.txt (text/plain), 8.56 KiB.

[22 Oct 2008 17:51] Brian Blood
Standard 32-bit mysql.com binary running on an Xserve G4.

Saw that mysqld had entered the high-CPU condition; ran sample; uploaded.
Two items stand out with high CPU time attached to them:

- select->fcntl in the very first thread

- a couple of other threads that seem not to be doing anything, where they are calling semaphore_wait_signal_trap
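
For the record, this is roughly how the sample and the ktrace were captured (from memory; durations and flags may not be exactly what I used):

ps ax | grep [m]ysqld                             # note mysqld's pid
sample <mysqld-pid> 10 -file mysqld-sample.txt    # 10-second sample of the busy process
ktrace -p <mysqld-pid>                            # start tracing its syscalls to ktrace.out
sleep 5
ktrace -C                                         # stop tracing
kdump -f ktrace.out | less                        # decode the trace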
[22 Oct 2008 17:59] Brian Blood
Also did a ktrace and found calls similar to Mark's:

   293 mysqld   CALL  pthread_sigmask(0x3,0xf0030d58,0)
   293 mysqld   RET   pthread_sigmask 0
   293 mysqld   CALL  sigwait(0xf0030dd8,0xf0030ddc)
   293 mysqld   RET   sigwait 0
   293 mysqld   CALL  pthread_sigmask(0x3,0x5a8de0,0xf0030d58)
   293 mysqld   RET   pthread_sigmask 0
   293 mysqld   CALL  setitimer(0,0xf0030ce8,0xf0030cf8)
   293 mysqld   RET   setitimer 0
   293 mysqld   CALL  pthread_sigmask(0x3,0xf0030d58,0)
   293 mysqld   RET   pthread_sigmask 0
   293 mysqld   CALL  sigwait(0xf0030dd8,0xf0030ddc)
   293 mysqld   RET   sigwait 0
   293 mysqld   CALL  pthread_sigmask(0x3,0x5a8de0,0xf0030d58)
   293 mysqld   RET   pthread_sigmask 0
   293 mysqld   CALL  setitimer(0,0xf0030ce8,0xf0030cf8)
   293 mysqld   RET   setitimer 0

My amateur eye says that something is setting a timer for ZERO time and/or waiting for ZERO time. Perhaps that might be a good place to start looking.
Thank you
[30 Mar 2009 15:36] Josh Alley
I'm also having this issue: Xserve G4 with 64-bit MySQL 5.  I've tried all previous suggestions; no fixes yet.  I am using InnoDB, but the InnoDB status is quite calm.  Nothing is queued, InnoDB or otherwise; it just eats up a whole processor.
[30 Mar 2009 16:12] Brian Blood
Xserve G4s do not support 64-bit operation.
Use the 32 bit version.
[4 Sep 2012 23:13] MySQL Verification Team
Is this bug repeatable? Please test the latest version and comment on the results. Thanks.
[5 Sep 2012 3:33] Brian Blood
We've moved on from using Apple PowerPC-based servers.

I would say this bug report is now irrelevant.