A single user creating nodes continuously results in a PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available().

To simulate the problem, create nodes (e.g. articles) continuously. Typically on my machine I get the exception after creating 44 nodes.

My machine is a WAMP stack on Windows 7 with 3 GB of memory and an i5 processor.

Steps to reproduce:
1. Log in as admin.
2. Create a shortcut to node/add/article and call the shortcut "Article".
3. Run the attached Selenium test (which just creates nodes with the text "hello").

After creating 44 nodes, the above exception occurs.

I've marked this as critical because Apache has to be restarted for the site to start working again.

Files:
#160: drupal-too_many_connections-930876-160.patch, 756 bytes, lunk_rat
#157: site-throwing-error2.png, 14.7 KB, imkashif
#157: site-throwing-error.png, 15.13 KB, imkashif
#142: too_many_connections_in-lock_may_be_available-at_reactos.org_.png, 9.95 KB, klonos
#124: Selection_127.png, 31.52 KB, girishmuraly
#76: too-many-connections-930876-74.patch, 1.18 KB, sukr_s (PASSED: [[SimpleTest]]: [MySQL] 33,986 pass(es).)
#74: 930876-too-many-connections.patch, 1.17 KB, sukr_s (FAILED: [[SimpleTest]]: [MySQL] Unable to apply patch 930876-too-many-connections.patch. This may be a -p0 (old style) patch, which is no longer supported by the testbots.)
#61: variables_lock.patch, 1.04 KB, catch (PASSED: [[SimpleTest]]: [MySQL] 39,092 pass(es).)
#59: variables_lock.patch, 1021 bytes, catch (FAILED: [[SimpleTest]]: [MySQL] Invalid PHP syntax in includes/bootstrap.inc.)
#35: Drupal 7.2 Amazon phpinfo().pdf, 145.53 KB, NiklasBr
#6: phpinfo.pdf, 146.14 KB, sukr_s
selenium_testcase.txt, 28.96 KB, sukr_s

Comments

Version: 7.0-alpha7 » 7.x-dev
Priority: Critical » Normal

I don't think this is critical, because AFAIK we do not claim to support Selenium tests. This is not a release blocker.

I'm setting the version to 7.x-dev, because if anything should be fixed, it will be fixed in HEAD. Can you check if the problem also occurs in the latest HEAD version?

Priority: Normal » Critical

Just to ensure we are on the same page: I started getting this error while using a D7 site. To bring some predictability to the error, the Selenium script was written. You can manually create 44+ nodes one after the other and the problem will still occur (without using the Selenium script).

I'm marking it (again) as critical because the site is not usable afterwards until Apache is restarted. If you believe this is not critical, please change it back and I won't change the status after that.

I'm moving this out of the critical queue because I followed the steps in the original post and can't reproduce: the Selenium test runs all the way to the end without any errors.

Priority: Critical » Normal

Cannot reproduce either, with a SimpleTest that repeatedly creates nodes.

The connections to MySQL are properly closed, so the Threads_connected status variable stays low (4 in my case).

Can you please report the exact versions of the components in your install (Apache, PHP and MySQL)?

Attachment (new): 146.14 KB

I've attached the output of phpinfo(). Kindly let me know if you need any other information or if I can help in any other way.

I got the same error. I thought it was Drupal 7, but I realized my partition for MySQL was at 100%. You should take a look at your available space: df -h

I've seen this too, but I don't think Drupal is responsible; probably a misconfiguration somewhere.

I got the error when I tried to import a CSV file into nodes in Drupal 7 with the Feeds module. There are only 79 records in the CSV file. Is there a better way to move content from an old system to Drupal 7?

I'm getting this sometimes too. Not sure why it happens, but I suspect it's a DB configuration issue on my hosting. I contacted them and I think they recreated my database; I didn't see the message for a while, but it's showing up again lately. Any help will be appreciated.

Fresh installation?

I cannot reproduce it (LAMP).
I'm running Selenium and devel generate at the same time and the CPU hits 100%.

I'm getting this error because of another module. Problem is, now that I've got it, I can't connect to my site via web interface or drush. I'm not sure how to reset the lock.

Restart your MySQL server; the locks will be reset.

I'm still having this issue. It occurs randomly, which makes it difficult to reproduce and identify the cause. Here's the MySQL configuration I got by printing phpinfo(). Any help will be appreciated.

mysql

MySQL Support enabled
Active Persistent Links 1
Active Links 1
Client API version 5.0.91
MYSQL_MODULE_TYPE external
MYSQL_SOCKET /var/lib/mysql/mysql.sock
MYSQL_INCLUDE -I/usr/include/mysql
MYSQL_LIBS -L/usr/lib -lmysqlclient

Directive Local Value Master Value
mysql.allow_persistent On On
mysql.connect_timeout 60 60
mysql.default_host no value no value
mysql.default_password no value no value
mysql.default_port no value no value
mysql.default_socket no value no value
mysql.default_user no value no value
mysql.max_links Unlimited Unlimited
mysql.max_persistent Unlimited Unlimited
mysql.trace_mode Off Off

mysqli

MysqlI Support enabled
Client API library version 5.0.91
Client API header version 5.0.90
MYSQLI_SOCKET /var/lib/mysql/mysql.sock

Directive Local Value Master Value
mysqli.default_host no value no value
mysqli.default_port 3306 3306
mysqli.default_pw no value no value
mysqli.default_socket no value no value
mysqli.default_user no value no value
mysqli.max_links Unlimited Unlimited
mysqli.reconnect Off Off

I also am now getting this issue.
This is on a brand new Drupal 7 Site install.
I had just installed CKEditor, and when I pressed the "Save changes" button after enabling CKEditor on the modules page, this started happening.
Now the site is hosed and can't be accessed at all.

So, I just ignored the site for an hour - then it started working again.
Something is definitely weird here.

I'm seeing the error. It happened when I was importing "books" (nodes are created programmatically) in a kind of bulk operation using a module developed by me. Things work OK under Windows but collapse under Debian, outputting this error. It could be a configuration situation, but the fact that it brings Drupal down makes it very important!
In contrast to others, I didn't have to restart Apache.

I've just started seeing this error. I hadn't done anything to the site before it started bringing up the error. Can anyone help?

The site is working again, five minutes after I received the error. Can I talk to my host about this? What should I be asking them to do?

I got this too on a fairly clean install of D7. Restarted Apache and could connect again. I can reproduce it by simply opening the "Edit menu" link for the main menu in a new tab. Every new tab creates a new DB connection that seems to just stay there. I can manually kill them through MySQL Workbench, but obviously this isn't a real solution. Anything I can provide to help track down this issue?

Correction. Viewing any page on the site results in a new database connection.

subscribing

@Babalu Would you provide more info on the circumstances under which you got the error?
(Just subscribing doesn't help much.)

@ #23

Same error today after upgrading Apache Solr to the latest version (beta7). The update ran OK, and after a while I used Taxonomy Manager to arrange taxonomies.

The error came when I tried to run cron and flush the cache:

PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (line 165 of /datas/goodmorning/www/includes/lock.inc).

If MySQL connections are not closed properly, then you might have an error somewhere in the code that leads the script to end prematurely.

In standard cases, connections are properly closed.

Look at the detailed error log of your server, or activate on-page error reporting if you're on a development site. You might find something useful, such as a forgotten dsm() somewhere with the Devel module not being activated.

The error might not be displayed, and the site may look as usual, but the PHP process is not ended properly, leaving MySQL connections open.

@ #25 True. Display Suite generates the error (I reported it in the DS queue):

[Thu May 26 15:45:25 2011] [warn] mod_fcgid: stderr: PHP Fatal error: Call to undefined function dsm() in /datas/goodmorning/www/sites/all/modules/ds/modules/ds_search/ds_search.module on line 465
[Thu May 26 15:47:06 2011] [warn] mod_fcgid: stderr: PHP Parse error: syntax error, unexpected '<' in /datas/goodmorning/www/sites/all/modules/views/plugins/views_plugin_argument_default_php.inc(49) : eval()'d code on line 1

Got it with user 1 after upgrading to 7.2; it went away by itself again.

Version: 7.x-dev » 7.2

Getting the same error when working with views. Happened twice now. Had to log a support ticket with my hosting service in order to get the site back up again.

Version: 7.2 » 7.x-dev

@DrupalMike Please leave the issue pointing to dev.

Ideally, all bugs should be filed against -dev as all code changes are only applied to -dev.

...from How to make an issue report

I doubt there's any bug here at all. It's probably exactly what it sounds like: too many connections to the database, compared to the number it's configured to support.

Check the number of threads/processes/clients your webapp (not your webserver -- php, not apache) uses (maximum) vs. the max_connections configuration in MySQL (or the equivalent in other DBs). The former should never be able to go higher than the latter.
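As a back-of-the-envelope version of that check (all numbers below are hypothetical examples, not recommendations): each Apache/PHP worker can hold one MySQL connection, so max_connections needs to cover the worker count across every site sharing the database server, plus some headroom.

```shell
# Hypothetical sizing sketch: every PHP worker may hold one MySQL
# connection, so max_connections must cover all workers plus headroom.
MAXCLIENTS=150   # Apache MaxClients / php-fpm pm.max_children (assumed value)
SITES=2          # Drupal sites sharing this MySQL server (assumed)
HEADROOM=10      # spare connections for cron, drush, backups
echo "max_connections should be at least $(( MAXCLIENTS * SITES + HEADROOM ))"
```

If the computed number exceeds what MySQL is configured for, hitting the [1040] error under load is expected behavior rather than a Drupal bug.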

@Lee, the same DB configuration on the same machine has worked for D6 without any errors for 2 years now; how is it that in D7 alone the configuration is not correct? It means that D7 has issues in connection handling, or D7 opens far too many connections compared to D6. In either case it has to be corrected.

Yep, I take it back, actually. On a Drupal that whitescreens on /user (I think due to the git versions of the D7 OAuth and Twitter modules being installed), I can reproduce this by reloading a /user page more than max_connections times, after just reloading MySQL beforehand. That seems to confirm what erik2e said in comment #25.

Subscribing

So, two sites that I host on Amazon AWS have begun reporting errors like this all the time, and I'm having a great deal of trouble troubleshooting this:

PDOException: SQLSTATE[HY000] [1040] Too many connections in lock_may_be_available() (line 164 of /domain.tld/includes/lock.inc).

Attachment (new): 145.53 KB

Attaching a printed phpinfo() file.

Has this been solved? I'm sick of that error.

@WolfSoul: I tried to reproduce the error with 7.2 and I did not get the error anymore. Which version are you using? If it's too frequent, you may want to consider increasing the priority.

I'm using 7.4 and my site has been crashing all week, but I also posted in the forum.

Version: 7.x-dev » 7.4

Version: 7.4 » 7.x-dev

Every case of this so far was a follow-up issue caused by some other bug. Check your error logs for any SQL errors and similar things.

Also, don't change the version; patches are always committed against the -dev branch and bugs must always be tested and verified against that version.

OK, sorry about the -dev thing. I'll see if I can get some logs (I'm on shared hosting, so no server access).
I suppose this either means some module caused it or I'm on my own to figure it out.

subscribing

Same here - after installing Media Gallery and uploading a few hundred pictures.

Still a problem. Subscribing

Looks like I will go back to Drupal 6 until I can find a version that does not end up causing this error.

My shared hosting helpdesk also suggested downgrade to 6.22 since "Drupal is the most carelessly written core since use resources very much. It is nonsense to use 800 thread connection to database for a ~1000 visit site."

Anyway, HostGator says the same, so it looks like shared hosting services allow only 25 thread connections.
http://support.hostgator.com/articles/specialized-help/technical/max-use...

What is the solution? Go back to 6.22? A VPS?

@vizualweb good piece of info.

I have two D7 (7.7) sites hosted by the same hoster. The first site works fine. The other site worked fine for over 6 weeks until it broke down totally with "[...] Too many connections". Other sites on the same DB server were also down.

Now, 30 minutes later, the MySQL connection is OK, and faster than before.

What I did:
1. I wrote a message to my hoster and asked them if they could identify the problem.
2. In the meantime I deactivated some subdomains not in use.
3. I uninstalled the latest installed module, "Hierarchical Select", which was not in use.
4. Ran update.php.

I hope the MySQL state is now persistent. When I learn more I will post it here.

Since I upgraded from D6 and created a few nodes, this error comes up rather often. Restarting Apache usually works.

Subscribing.

Very annoying! Hope the fix will come soon. Not a really good idea to launch a Drupal 7 site at the moment.
subscribing

I got the same error today on my testing server. Each page load gave me a new connection to the database, and it did not drop the old one.
This doesn't happen on my live server.

Little extra information:

I'm creating a new view, so perhaps the issue lies within the Views module. I'll post more info as soon as I know more.

To resolve this issue, a simple restart of MySQL is sufficient (in my case).

You're right about it not dropping the old database connections, donniewiko. It looks like the database jams up sometimes, and each click on a link creates a new process, resulting in an overload: "too many connections". I'm getting this error on a regular basis when developing my site... The issue resolves itself after 2-5 minutes.

@donniewiko: have you tried using the Pressflow version of D7 (http://pressflow.org/)? Do you have the same problem?

No, I have not; I had never heard of it, but I will check it out. Thanks for the link.

It seems the problem fixed itself. Very odd. I haven't changed anything.

Same here... I deleted a bunch of stuff in the database with the Devel variable editor. Haven't had it since, even when clicking like a madman on a bunch of stuff.

subscribing. same issue here

@sukr_s,

Thanks for the info; I never knew PF7 had been released. I checked it out and compared it with D7; there is no big diff. It adds PRESSFLOW_SETTINGS / Pressflow Smart Start.

Sounds like there's nothing worth adding to PF7 at the moment, or it's still too young.

Hmmm... forget my comment #55. It just happened when clearing the caches.

Version: 7.x-dev » 8.x-dev
Component: node system » base system
Priority: Normal » Major
Status: Active » Needs review
Issue tags: +Performance
Attachment (new): 1021 bytes
FAILED: [[SimpleTest]]: [MySQL] Invalid PHP syntax in includes/bootstrap.inc.

Pressflow 7 does not have any performance improvements compared to core yet, nor is it being actively developed as far as I can see. There were some performance issues in Drupal 6 that were only fixed in Pressflow (since Drupal 6 could not break backwards compatibility, require MySQL and PHP 5.2 etc.), but every one of those fixes is in Drupal 7 core now in one way or another.

There are only a few places in core where lock_may_be_available() is called, the earliest one is variable_init(), so please try this patch and report back if it works for you or not.

Status: Needs review » Needs work

The last submitted patch, variables_lock.patch, failed testing.

Status: Needs work » Needs review
Attachment (new): 1.04 KB
PASSED: [[SimpleTest]]: [MySQL] 39,092 pass(es).

Here's one with no syntax error.

If I'm right about this, then the proper fix for this issue is #987768: Optimize variable caching. That patch is more complex, but while it has a single test failure it should be fine to test.

If you don't know how to apply patches, see http://drupal.org/patch/apply

It would also be helpful to get some debug information from people who are running into this.

You can make this change to includes/lock.inc:

diff --git a/includes/lock.inc b/includes/lock.inc
index 7dd8db3..b45a9d5 100644
--- a/includes/lock.inc
+++ b/includes/lock.inc
@@ -164,6 +164,7 @@ function lock_acquire($name, $timeout = 30.0) {
  *   TRUE if there is no lock or it was removed, FALSE otherwise.
  */
function lock_may_be_available($name) {
+  file_put_contents('/tmp/lock.txt', print_r($name, 1) . "\n", FILE_APPEND);
   $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchObject();
   if (!$lock) {
     return TRUE;

You may need to change /tmp to a directory you have read and write access to on the server, for example your Drupal files directory or similar. You'll need to download the file, check it has something in it, then upload here.
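Once that debug line has collected some data, a quick way to see which lock names show up most often is a sort/count pipeline; the sample file contents below are made up purely for illustration:

```shell
# Write a made-up sample of what /tmp/lock.txt might look like,
# then count which lock names are checked most often.
printf 'variable_init\nmenu_rebuild\nvariable_init\n' > /tmp/lock.txt
sort /tmp/lock.txt | uniq -c | sort -rn   # most frequent lock name first
```

Whichever name dominates points at the code path that is checking the lock on nearly every request.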

Status: Needs review » Postponed (maintainer needs more info)

Needs more info from reporters, all patches here are for diagnosis only.

catch, thanks for looking into this. I'll add the code to includes/lock.inc. Luckily I get the error less often than before (odd), so it might be a while before I get some data.

Just paste the code at the beginning, or what? I'm getting this:
Parse error: syntax error, unexpected T_DEC in includes/lock.inc on line 64

You should use diff/patch, for both, but:

function lock_may_be_available($name) {
   file_put_contents('/tmp/lock.txt', print_r($name, 1) . "\n", FILE_APPEND);
   $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchObject();

This is what the top of lock_may_be_available() should look like once that diff is applied.

We've had this problem on our dev server twice and restarting Apache has been an effective short-term solution both times. Obviously, we need a proper solution before moving to production.

We've only been using Drupal for a month, so none of us is really a Drupal core or Drupal documentation/community expert. As near as we can tell from our own experience and from the issue thread, Drupal is not expiring database connections in a timely manner. Since most/all HTTP requests result in a new DB connection and the default MySQL wait_timeout setting is 8 hours, the max_connections limit is reached quickly. (On our dev server max_connections was set to 100.) This seems like the conclusion reached by many people, but can anyone from the Drupal team confirm that this is actually the cause of the problem? I notice the "issue's overall values" indicate this is an issue for 8.x. Does that mean we're going to have to wait for 8.x to get a solution?

The stopgap fix we've employed on our dev server has been to adjust wait_timeout to 2 minutes and max_connections to 256.
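For anyone wanting to replicate that stopgap, it corresponds to something like this in my.cnf (a sketch using the values mentioned in this comment, not general recommendations):

```ini
[mysqld]
max_connections = 256
wait_timeout    = 120    # seconds; MySQL's default is 28800 (8 hours)
```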

Comments #59 to #66 seemed a bit cryptic to me. How does the patch relate to the code? How do those relate to issue #987768? Do these relate to the Pressflow distro only?

Here's our stack:
** Cent OS v5.6 (Linux 2.6.18)
** Apache 2.2.19
** MySQL 5.0.77
** Drupal 7.7 (Standard, no special distro)

Please let me know if I can provide any additional information to help with this issue.

Thanks!

I'm not sure it's a Drupal problem, because I've been getting this error quite a few times now. I have 3 Drupal websites on a shared account; one of them is on version 7.8 and the others on 7.6, I think. They all showed the error at the same time, and all three websites fixed themselves after a few minutes.

subscribed

I have the same problem. I use D7.8 for my website. Please, does anyone have the solution? Keep me informed.

Same problem here, we are using Drupal 7. Very difficult to troubleshoot. Subscribing, though I notice this is tagged as D8 here...but the vast majority are reporting issues on D7?

I'm using Drupal 7.9 on a shared hosting.
Got the same error after an hour of simultaneous content creation on the same website with my comrade:
PDOException: SQLSTATE[HY000] [1040] Too many connections in lock_may_be_available() (line 167 of /home/user/data/www/mmh6.info/includes/lock.inc).

I went to log into the database through phpMyAdmin to look for the lock semaphore but I've got a similar error:
#1040 - Too many connections

So clearly it is a shared hosting limitation.
Is Drupal 7 greedier for hosting resources? Well, yes...
But is it more awesome than previous releases and most CMSes? You betcha!
I will contact my hosting support to resolve the issue and will keep you posted.

P.S. The solution follows in #75

You can try this statement:

SET GLOBAL max_connections = 2000;

Run it in your MySQL.

I had the same problem, and this worked for me.

P.S. I used phpMyAdmin.

Attachment (new): 1.17 KB
FAILED: [[SimpleTest]]: [MySQL] Unable to apply patch 930876-too-many-connections.patch. This may be a -p0 (old style) patch, which is no longer supported by the testbots.

There is one possible place which could cause SQL to fire when not needed. I've fixed it in the patch for 7.x. Test it and let me know if it solves the problem; I will port it to 8.x.

Well, in my case above (#72) the solution came from a ticket to my hoster.
They came back to me with a rather prompt reply that another shared user was hogging all the connections with massive queries.
They put the server back online, froze that user's account, and reported having a 1000-connection limit per user, which that user still managed to exceed. They also promised that such downtime would be a one-time deal, which is true so far.
So it wasn't me, and it wasn't Drupal - just some anonymous shared user's fault.

P.S. I use shared hosting Multisite3 at Hvosting.ua (not an ad on my part, just a piece of friendly feedback to anyone from Ukraine/exUSSR).

Status: Postponed (maintainer needs more info) » Needs review
Attachment (new): 1.18 KB
PASSED: [[SimpleTest]]: [MySQL] 33,986 pass(es).

In lock_may_be_available() the semaphore is deleted but not released from the global $locks array. This is the patch from comment #74 ported to 8.x.

On a variable cache miss, the first query executed on the page is in the lock system, so if that page is the one adding "too many connections", it'll show an error from the lock code. However, this doesn't mean the lock system is responsible, just that it happens to be the first SQL query executed in those cases.

lock_may_be_available() should not unset the locks global afaik, I also think it's unlikely at best that this is connected to the bug report.

@catch: both lock_release() and lock_release_all() delete the semaphore and unset the global array. Wouldn't it lead the system to an inconsistent state (probably handled elsewhere) if the semaphore is deleted but the global $locks array is not unset (probably causing unnecessary SQL to be fired)?

Also, for my understanding: if it's okay for lock_may_be_available() to delete the semaphore, why shouldn't the function unset the global $locks array?

Status: Needs review » Active

lock_may_be_available() is designed to delete an (expired) semaphore held by a previous request. It's not impossible, but totally unlikely, that it will delete a semaphore held by this request.

Anyway, back to the issue at hand: all the reports here seem to point to some kind of hosting issue. Maybe the PDO version used there doesn't properly close the MySQL connections?

#843114: DatabaseConnection::__construct() and DatabaseConnection_mysql::__construct() leaks $this (Too many connections) is about PDO leaking connections. I have no idea whether that is related to this issue, though.

I got this after clicking again (and possibly even again after that) on the "Log out" link after the first click did not seem to have any effect. Could it be something related to timing, and inadequate protection from re-clicking a button that is not responding quickly enough? Again, for me, it cleared up by itself after 5 or 10 minutes wait.

This error can also be caused by a general database failure. On my hoster (Bluehost), for unknown reasons, but possibly after removing another Drupal installation through SimpleScripts, the MySQL database was missing some tables. The semaphore table was indeed missing and there were only 34 tables left. After restoring a backup everything was back to normal (102 tables).

Status: Active » Postponed (maintainer needs more info)

I'm marking this as needs more info. If you're getting this error, please let us know the value of max_connections from my.cnf.
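If you can't read my.cnf directly (on shared hosting, for instance), the running values can be queried from any MySQL client instead; a sketch:

```sql
-- Configured ceiling, and the peak usage since the server last started
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';
```

If Max_used_connections has reached max_connections, the limit has actually been hit at some point.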

I'm also frequently getting this error, at least once per day that I am actively working on Drupal (hard to tell if it occurs on the days that I am not!).

It seems to fix itself eventually but it's a significant frustration when it happens because I'm trying to get things done on my site and this issue causes the site to suddenly go offline for a few hours.

My shared hosting support has several tickets open but has so far been unable to understand or resolve the cause of the problem.

phpMyAdmin's Variables tab shows:
max_connections: 100
max_user_connections: 25

I'm also having this on my D7.12 site. It happens almost daily and takes all my Drupal 7 sites down. Only rebooting the server seems to help for a while. Restarting MySQL or Apache does not help.

Ubuntu server 10.04LTS
Apache 2.2.17
MySQL 5.1.34
PHP 5.2.14

From /etc/my.cnf:
[mysqld]
local-infile=0
max_allowed_packet=64M
max_connections = 300

I have this error several times per day while I am editing content. My web-host won't allow me to reboot their server :)

Max Connections = 500
I don't have admin rights to adjust this value.

PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (line 167 of /websites/###/#########/#####/public_html/includes/lock.inc).

Drupal 7.14

The database is not on localhost, and they (123-reg) recommend no caching for Drupal.

If you need more info let me know.

We see this error every once in a while. There is no value for max_connections in my.cnf, so I believe we are using the default of 151.

All I did was create a new user with administrative privileges.

Version: 8.x-dev » 7.14

I am getting this error occasionally as well. I am also hosting my site at 123-reg; might it be a problem with their (overloaded) servers? I have times when my site won't load because the server is too busy, etc. I guess the "you get what you pay for" sort of thing applies with web hosting, but still, I would love it if there were a way I could sort this out before I get the chance to switch hosts.

I just encountered the same error, and the site was back after 30 minutes or so.
This is a high-priority issue.

Same error; it happened as follows:
I went to admin/reports/updates and checked manually.
Previously I had disabled Devel.
While the updates were running, I went to uninstall Devel, and then the error happened.

*Sigh*

I am getting the same error on an EC2 micro instance. Could it be I am hitting my memory limits or something? I have been doing Drupal development for over 5 years and have never experienced this type of error, ever. It's extremely frustrating and annoying because I sold our company on using Drupal over Django, and now every time this happens they think Drupal is a busted pile of crap (which I know it's not). I would like to get this resolved as soon as possible, as we need to get this site up and running; I went with Drupal 7 instead of 6 because I figured it would be stable after being out for over a year.

Let me know what type of information you need and I will get it to you.

Current specs:
Amazon EC2 micro instance
Ubuntu 12.04 server (the one included in the quick setup via the EC2 wizard)
My phpinfo output here: https://gist.github.com/3055471
MySQL version: mysql Ver 14.14 Distrib 5.5.24, for debian-linux-gnu (x86_64) using readline 6.2
Apache version: Apache/2.2.22 (Ubuntu)

I also installed everything via apt-get and standard packages.

I have the same issue on Amazon EC2 micro instance.

I have reduced "wait_timeout", "interactive_timeout" and "lock_wait_timeout" to 3 minutes, 3 minutes and 10 minutes respectively, and it seems to solve the problem. I will report back if I see the problem again.

In reference to #94:
I upgraded to a small instance (went from 500 MB of RAM to 1500 MB) and have not run into the problem again. Before upgrading I remotely monitored with 'htop' and noticed the RAM was getting pegged. I also noticed that swap isn't showing up, which makes me think it was running out of memory and just locking up. I have gotten something similar before with Drupal, called the white screen of death, but it typically didn't require a complete restart of the Apache/MySQL services.

I just got this error a couple of minutes ago. I don't know what happened; it occurred when I clicked on the "People" tab.

I did have the site open in a few windows and browsers (IE, Firefox) at the same time. And I had CiviCRM installed with Drupal as well. One of our staff might have been editing the webpage at the time the error appeared.

I have D7.14, MySQL 5.1.61, PHP 5.3.10, and a PHP memory limit of 128 MB. The CiviCRM DB and Drupal DB are separate. The CiviCRM DB size is only 2.7 MB, and the Drupal DB size is about 8 MB. If anybody knows how to prevent this from happening again, please let me know. Greatly appreciated!

A follow-up on this error on my site: I was told this error happened because a WordPress DB on the server caused it. The admin solved the problem, and I was told it is not my site's problem and there are no worries for the future :)

It happened to me on shared hosting. The DB wasn't accessible at the time because of a resource limit, so I think it was not Drupal's fault.

We too were experiencing this problem while developing a fairly big project. This could also be a result of a large number of PHP fatal errors caused by one or more modules, as suggested here:
http://help.getpantheon.com/pantheon/topics/too_many_connections-agyju#r...
For us the problem arose after implementing several new custom-made modules.
We've tried several settings for max_connections, ranging from 100 to 300.

UPDATE: the patch in #74 seems to work for our case (for now).

Same problem here. It occurred once on our virtual server, but I also learned it occurred on another of our servers a few weeks ago (I love good communication between departments). It seems to have occurred when all the developers working on the system had gone home, so I doubt it was anything they had done. The one person who found the issue is not a developer and has limited access; I don't think she did anything more than update content on a page.

I have run into this issue with Bluehost as well. Thinking about what modules I have installed that might cause connections to open and not close, I have disabled Migrate, WordPress Migration and other related modules. I am working on a transfer of WP to D; not sure if the same issue mentioned in #99 is the culprit.
Will follow up with results.

It happened to me on shared hosting with HostGator, but my site was back after 20 minutes.

I received this error after my Amazon EC2 small server instance crashed. I switched to a medium instance and have not had any problems since. I'll probably eventually have two instances: one for MySQL and the other for Apache2 and PHP.

subscribe

Hi, folks. The patch at #843114-50: DatabaseConnection::__construct() and DatabaseConnection_mysql::__construct() leaks $this (Too many connections) was just committed, and we suspect it might have a bearing on this issue. If someone having this problem has some PHP chops and would be willing to backport it to D7 for testing, that'd be muchly appreciated!

subscribe (I'm getting this problem on D7 too)

@#108: use the Follow button at the top right of the page.

Just happened again (after about 4-5 weeks of working fine).

I just went through all the logs. In the access_log I found another clue (still no solution).
On our system, at :01:01 after each hour, there is an entry in the access log from ::1 for an automatic cron run. The only other time I see ::1 (local) accessing the system is around the time we had the issue. During this time frame I found roughly 37 other accesses by ::1 (in a 25-minute period), but this time with the comments:
::1 - - [16/Oct/2012:15:23:02 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (CentOS) (internal dummy connection)"
::1 - - [16/Oct/2012:15:23:03 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (CentOS) (internal dummy connection)"
::1 - - [16/Oct/2012:15:23:09 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (CentOS) (internal dummy connection)"
::1 - - [16/Oct/2012:15:23:11 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (CentOS) (internal dummy connection)"

These entries are not all sequential, but are spread out between when we noticed the error and when we restarted the server.

My current assumption is that Apache is opening these connections to keep child processes from being terminated (see: http://user-agent-string.info/list-of-ua/browser-detail?browser=Apache%2...) but that the connections are then not closed, thus resulting in "too many connections." This would explain why restarting Apache clears up the issue.

What the actual child process(es) are, and whether they are causing the issue or are just a symptom of it, is not clear to me. We run multiple sites on our server, but if I can narrow down which site(s) are causing this, we might be able to determine whether it is a bug in a specific module used on that site.

Any other ideas?

The issue mentioned by webchick has been ported and is RTBC. So if you are experiencing this issue, please apply that patch and check if things improve. Also make sure that you watch the output of "show processlist" and check how many open connections there are, check the user/database they're connected to and try to track it down to a specific site if you have multiple ones.

That said, this is the error that happens when there are no more connections and there are like a million reasons for that, this is just how this issue then manifests itself, not the cause for it. We can't "fix" it.

If you find this page while googling "PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available()", save yourself some time and firstly confirm that your mysql database instance is operational and accessible. My shared hosting provider's mysql server was down, but this was the error I saw at my site. They restarted the server, and all was well again. No need to run patches, etc.

Status:Postponed (maintainer needs more info)» Needs review

#61: variables_lock.patch queued for re-testing.

Status:Needs review» Postponed (maintainer needs more info)

Version:7.14» 7.19

Hi all,
after upgrading to Drupal 7.19 I'm having serious problems which cause my site
http://www.marketingaziende.it
to become unusable.
In particular, I am seeing two kinds of PDOException:
PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available()

and a second one:
'user XXXXXX has more than max_user_connections'.

This never happened before the upgrade. I'm wondering if the new version 7.19 relies on threads more than previous versions did. However, since caching is enabled and each script closes its connection to the DB as soon as it is no longer required, I really don't know what to do.

Here are some SQL configuration variables:

max allowed packet 1.048.576
max connect errors 60.000
max connections 200
max user connections 4
max delayed threads 20
max error count 64
max heap table size 16.777.216
max insert delayed threads 20
max join size 18.446.744.073.709.552.000
max length for sort data 1.024
max prepared stmt count 16.382
max relay log size 0
max seeks for key 18.446.744.073.709.552.000
max sort length 1.024
max sp recursion depth 0
max tmp tables 32

Thanks in advance for your help
Luciano

Hi,

same problem here with 7.19 installation.
Site worked perfect for several weeks and now suddenly the problem appeared...

Regarding the last two comments, I am not sure why anything would have changed with 7.19, but in any case, have you tested the patch at #843114: DatabaseConnection::__construct() and DatabaseConnection_mysql::__construct() leaks $this (Too many connections) as described above?

Note that it's already been committed to 7.x-dev and will be in the next bug fix release of Drupal 7 too (i.e., most likely will be in Drupal 7.21 when it comes out).

Version:7.19» 7.21
Status:Postponed (maintainer needs more info)» Needs work

Subscribing; getting this error right now. I changed the status to "needs work" since users here have given the maintainer more than enough information: two years' worth.

PDOException: SQLSTATE[HY000] [1040] Too many connections in lock_may_be_available() (line 167 of /home/mst3k/mst3k.tv/includes/lock.inc).

Status:Needs work» Postponed (maintainer needs more info)

I don't see anyone reporting that they tested the patch in #843114: DatabaseConnection::__construct() and DatabaseConnection_mysql::__construct() leaks $this (Too many connections) yet.

(That did not make it into 7.21 by the way, but it's still in 7.x-dev and the committed patch can also be applied to Drupal 7.21.)

See also #111.

Ask your domain provider's tech team.

I had the same problem only on the server, not on my local machine, for the same site. So I asked them why I was getting this error (PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available()) only on their server.

The support person replied: "There was a problem with your account. I have corrected it, so now you won't get this message again."

I used to sort out this kind of problem by restarting MySQL via the command line, or, if you are on cPanel, by opening phpMyAdmin and your database. Your site will work fine after that.

Check your webserver's error log! I ran into this error, too, from one day to the other. Trying to increase mysql connections, restarting mysql and other actions didn't lead to success.

In my case there was a third party module trying to call functions provided by a not loaded php extension. Disabling this module solved the case.

I need help on where to put the try/catch PDOException statement so that my site can be safer. I thought maybe inside the settings file. Kindly help.

This was actually almost the same thing for me as for drubb: there was a page that included PHP code that didn't work with Drupal. I had to check the Nginx logs to see where it was coming from; removing the page fixed it. So please check your logs and see where the error is coming from.


I just saw this exception appear on api.drupal.org. Is it possible for the drupal.org webmasters in this thread to try it on api.drupal.org?

For fetching logs if that may be helpful, the timestamp when I got this error was
14:25 BST on 26 Sept 2013.

I just upgraded from 7.22 to 7.23 and I keep getting this error!

It's very annoying!!!

Not sure what more information anyone requires but I'm happy to help.

I hadn't experienced this until I started using 7.23.. hope that much helps.

Version:7.21» 7.23
Priority:Major» Critical

I can't do anything now. This is on one of my production servers so, for me, this is critical!

It's times like this that I seriously consider switching to a new framework such as WordPress.
I just don't have time to develop a site from scratch... although it might be worth finding the time!

IMHO Drupal has been getting buggier and buggier over the past year or so.

I'm done with my quick rant.. for now.

What is the solution for this?

I've hit the same issue. For the moment, I've been able to minimize the occurrences of this error by increasing the max_connections MySQL config variable, but that should not be needed: Drupal should be able to wait until a MySQL thread is free.

Well, I think I've found the culprit. variable_initialize() (https://api.drupal.org/api/drupal/includes%21bootstrap.inc/function/vari...) calls lock_acquire(), which in turn calls lock_may_be_available(). I think the problem is that lock_acquire(), under heavy DB load (and high wait times), will repeatedly query for an available lock, eventually exhausting the connection limit. The solution would probably be tuning the timeouts in lock_wait(), or maybe rewriting the do loop in lock_acquire(). I still have to look at it in depth, but that's clearly the problem.

Do each of those calls actually open up a new database connection, though? I would have assumed they all share the same connection, but if they don't that definitely sounds like a problem...

This PDOException now continuously occurs on d.o, so if there is a performance issue here I'd say this should be a higher priority. The site is now next to unusable.

Another thing that can fix this issue:
Check your free diskspace!

albertczyk, any update from you on this? Sounds like you found the cause of the problem, but how about a fix?

Thanks

Also, variable_initialize() recursively calls itself until it gets a lock, waiting before each recursive call via lock_wait(). Is this recursion a problem?
/**
 * Loads the persistent variable table.
 *
 * The variable table is composed of values that have been saved in the table
 * with variable_set() as well as those explicitly specified in the
 * configuration file.
 */
function variable_initialize($conf = array()) {
  // NOTE: caching the variables improves performance by 20% when serving
  // cached pages.
  if ($cached = cache_get('variables', 'cache_bootstrap')) {
    $variables = $cached->data;
  }
  else {
    // Cache miss. Avoid a stampede.
    $name = 'variable_init';
    if (!lock_acquire($name, 1)) {
      // Another request is building the variable cache.
      // Wait, then re-run this function.
      lock_wait($name);
      return variable_initialize($conf);
    }
    else {
      // Proceed with variable rebuild.
      $variables = array_map('unserialize', db_query('SELECT name, value FROM {variable}')->fetchAllKeyed());
      cache_set('variables', $variables, 'cache_bootstrap');
      lock_release($name);
    }
  }

  foreach ($conf as $name => $value) {
    $variables[$name] = $value;
  }

  return $variables;
}

Looking at the code for lock_acquire(), its do loop calls lock_may_be_available() at most twice.

As such, your statement
"I think the problem is in lock_acquire() which under heavy DB load (and high wait time) will try repeatedly to query for an available lock, eventually exhausting the connection limit." is not correct, since lock_acquire() calls lock_may_be_available() at most twice.

But variable_initialize() potentially calls lock_acquire() an unlimited number of times due to the recursion, so you had the right idea, but the wrong location for the looping (as far as I can tell). So, should variable_initialize() be changed so it doesn't get into an endless loop? Or should the wait in its call to lock_wait() be increased?

Thanks

Sorry, one more correction. lock_acquire() executes its do loop at most two times, but calls lock_may_be_available() in only one of those two passes.
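One way to avoid the unbounded recursion would be to cap the number of retries and fall back to rebuilding the data directly once the cap is hit. The sketch below only illustrates that retry pattern in plain PHP; it is not Drupal code, and initialize_with_limit(), $try_lock and $build_cache are hypothetical stand-ins for variable_initialize(), lock_acquire() and the variable-cache rebuild:

```php
<?php
// Sketch of a bounded retry loop replacing unbounded recursion.
// $try_lock and $build_cache are hypothetical stand-ins for
// lock_acquire() and the variable-cache rebuild.
function initialize_with_limit($try_lock, $build_cache, $max_retries = 5) {
  for ($attempt = 0; $attempt < $max_retries; $attempt++) {
    if ($try_lock()) {
      return $build_cache();
    }
    // In Drupal this is where lock_wait($name) would sleep.
  }
  // Give up waiting for the lock and rebuild the data ourselves
  // rather than recursing (and reconnecting) forever.
  return $build_cache();
}

// Simulate a lock that only becomes available on the third attempt.
$calls = 0;
$try_lock = function () use (&$calls) {
  $calls++;
  return $calls >= 3;
};
$build_cache = function () {
  return array('site_name' => 'example');
};

$variables = initialize_with_limit($try_lock, $build_cache);
print $variables['site_name'] . "\n";
```

The point is simply that a hard cap turns "wait, then re-run this function" into a loop that is guaranteed to terminate, at the cost of an occasional redundant rebuild under contention.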

Ok, how about this?

We are all seeing this error in lock_may_be_available(), line 167 (for Drupal 7.23).

The line is

<?php
  $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();
?>

There is no exception handling for this line in the lock_may_be_available() function.

Normally, if lock_may_be_available() is unable to find an expired lock it is able to reuse, it returns FALSE.

So why don't we just wrap the above line in a try/catch, which will catch the case where there are too many connections to the DB and the query cannot execute? If an exception happens, we return FALSE. Problem solved?

Proposal for start of lock_may_be_available:

<?php
function lock_may_be_available($name) {
  try {
    $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();
  }
  catch (PDOException $e) {
    watchdog('PDO', 'Failed to query semaphore table for available lock: @message', array('@message' => $e->getMessage()), WATCHDOG_NOTICE);
    return FALSE;
  }
?>

I've also implemented this solution (slightly modified), and it effectively suppresses the error:

try {
  $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();
}
catch (PDOException $e) {
  file_put_contents('/tmp/lock.log', "Exception in lock_may_be_available!!\n", FILE_APPEND);
  return FALSE;
}
if (!$lock) {
  return TRUE;
}
$expire = (float) $lock['expire'];
$now = microtime(TRUE);

However, I wouldn't add a watchdog call there, because it will generate a lot more DB operations, and since we got into the exception handler precisely because we couldn't get a free connection, the watchdog call would make the problem worse.

Anyway, this seems to work for me under heavy load, but it's just a hack that masks the error. The real fix would be to make the lock mechanism smart enough to avoid avalanche effects like this. I'm still looking into it, but I haven't been able to devote much time to it lately.
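For what it's worth, a common way to avoid that kind of avalanche is exponential backoff with jitter, so that waiting requests do not all retry on the same schedule. This standalone sketch just computes such a delay schedule; the base delay, cap, and function name are invented for illustration and bear no relation to Drupal's actual lock_wait() implementation:

```php
<?php
// Sketch: exponential backoff with jitter for lock polling.
// Returns the sequence of delays (in milliseconds) a waiter would sleep
// between successive lock checks.
function backoff_delays($max_attempts, $base_ms = 25, $cap_ms = 1000) {
  $delays = array();
  for ($i = 0; $i < $max_attempts; $i++) {
    // Double the delay on each attempt, capped so no waiter sleeps too long.
    $delay = min($cap_ms, $base_ms * (1 << $i));
    // Add up to 50% random jitter to spread the retries apart.
    $delay += mt_rand(0, (int) ($delay / 2));
    $delays[] = $delay;
  }
  return $delays;
}

// Base delays grow roughly 25, 50, 100, 200, 400, 800 ms plus jitter,
// instead of every waiter polling the semaphore table in lockstep.
print implode(', ', backoff_delays(6)) . "\n";
```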

I implemented @albertczyk's solution in #140 in Drupal 7.23 and am waiting to see how it affects the error.

I saw this when visiting http://www.reactos.org/news-archive today:

too many connections in lock_may_be_available() @ ReactOS.org

It went away once I waited and refreshed after a minute or so, but my point is that it affects valuable organizations that selected Drupal as their platform over other open-source CMSes. This hurts our reputation.

I realize that my comment doesn't help solve the issue itself, but from what I can see it dates back to 2010! I'm sure that if the problem persists someone from ReactOS will eventually show up in this issue, but perhaps it would be a good idea to contact them ahead of time.

...just saying.

I also have this error, even on 7.24.

GREAT

@News4u: if you want to stay updated on this issue, you can use the Follow button in the top right block. Otherwise, it's helpful to only comment with relevant information/questions, because off-topic comments create distraction in the issue queue. Thanks.

I am new -- this is my first time posting -- and normally I can figure out from reading through the forums the solution to my issues.

This one has me stumped though -- it just started happening to me (same error as listed above) today:

Error

The website encountered an unexpected error. Please try again later.
Error messagePDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (line 167 of /home/heather/public/reference/public/includes/lock.inc).

As I am developing the site, there should be no one connecting to it except me, so I am not sure why I would be running out of anything?

Please excuse my ignorance... like I said, I can normally figure stuff out, but I am now getting this error about every 5 - 7 clicks of the mouse (so, once it clears out, I may be able to click to admin, click to content types, click to the content type I want to edit, click edit and then get the error again) and my googling led me here, but I'm not really sure I understand how to resolve this. My site is in good shape otherwise, so I don't want to randomly start trying things and mess something up!

Can you tell me in layman's terms what I can do to resolve this? I am developing this website for about 20,000 users to start, so I am a little worried that just me working on it is causing connection issues.

Thanks so much and again I am sorry if I didn't ask the right way or if I missed something above that answered the question!

Thanks
Heather

Heather, what version of Drupal are you running?

I am running 7.26.

@hfortune
Ok, I was experiencing this when I was using 7.23. From reading the above messages, I had hoped that upgrading to 7.26 would have fixed the issue. Please see the fix suggested in #140; it seems to take care of it. You will need to edit includes/lock.inc, find the function lock_may_be_available(), and apply the changes shown in #140, wrapping this line
$lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();
in a try/catch.
That should make the problem go away.

Thank you hanksterr7!

Today I had 3 sites go down with this error.

They are still down while I wait for an answer to a support ticket with my host.

I could see on their server news page that they updated PHP last night from 5.2 to 5.3; I don't know if that information is worth anything.

I really suspect this has something to do with the difference between PHP 5.2 and PHP 5.3.

My host changed PHP from 5.2 to 5.3 and restarted the servers at 0800 today. I noticed my sites being down at 0930, and they have been down since.

I can't get them back up since I don't have Reload (Flush) privileges on the server.

But I'm not well versed enough to figure out which change is responsible by looking at the PHP differences page.

acoustika, a PHP version is definitely not the root of the problem. My Drupal 6 site ran on PHP 5.3 for a year without any problems, but right after upgrading to Drupal 7 I constantly see this error.

Hello drupal forums and community!

After years of content-managing a Drupal site, I'm teaching myself development and just encountered my first major glitch:

PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (entity module)

I have two live sites installed on my server in D6, and I'm developing a site on D7, which is how I caused the problem: immediately after installing the Entity API and Entity Reference modules in D7 and setting up a reference between two content types, all three of my Drupal sites went offline (or at least timed out).

After a delay, I managed to get into cpanel and delete these modules. All three sites came back online and are now working ok. The D6 sites are running fine. The D7 site is very sluggish. Not sure what to do?

I'm not sure if this is useful, or even related, but I've run reports on my databases and found these errors:

sitename_primary.drupal_cache_menu] warning: 1 client is using or hasn't closed the table properly

sitename_primary.drupal_cache_page] warning: 1 client is using or hasn't closed the table properly

This issue also occurs if you open multiple tabs of the admin and flush the cache simultaneously. I am going to try the fix in #140 and report if the issue is resolved.


2 different errors in different files at different times; they go away on page refresh. I am the only developer, I just started developing the site on D7, and the site is in maintenance mode. No other users. It's on shared hosting.

Version:7.23» 7.x-dev
Issue tags:-Performance+DrupalWTF

I removed the Performance tag because this error completely shuts down production sites for indeterminate duration, intermittently and without warning. Mine stayed down until I hacked core code, which was at least several hours of downtime, possibly a day or two. My concern is that this issue will be overlooked in the queue if maintainers think it's only about performance. The bug might be in performance related code, but this error is much more severe than a performance issue.

#140 worked for me. I added some more extensive logging and found that lock_may_be_available() isn't called on most page loads, even after flushing all cache. I added the exact same logging code elsewhere to verify that it works. After the error was resolved, I was never able to reproduce it, and I couldn't even get it to call that function again.

As others have noted, the problem of too many connections can be caused by many things, even other sites on a shared host. But there seems to be some incorrect logic in the locking, and no error handling to recover from this fairly common condition. In my case, it got stuck in that state permanently, showing users nothing but a cryptic error, until I added the try/catch.

#140 worked for me as well. I agree that this error can have multiple causes; if you are on AWS, like I am, the chances of that are high. I am nevertheless continuing to monitor the site using http://monitor.us.


Here is #140 in patch form. Hope it helps someone.