A single user creating nodes continuously results in a "PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available()" exception.

To simulate the problem, create nodes (e.g. articles) continuously. Typically, on my machine I get the exception after creating about 44 nodes.

My machine is a WAMP stack on Windows 7 with 3 GB of memory and an i5 processor.

Steps to reproduce:
1. Log in as admin.
2. Create a shortcut to node/add/article and call the shortcut "Article".
3. Run the attached Selenium test (which just creates nodes with the text "hello").

After creating around 44 nodes, the above exception occurs.

I've marked this as critical because Apache has to be restarted for the site to start working again.

Comments

marcvangend’s picture

Version: 7.0-alpha7 » 7.x-dev
Priority: Critical » Normal

I don't think this is critical, because AFAIK we do not claim to support Selenium tests. This is not a release blocker.

I'm setting the version to 7.x-dev, because if anything should be fixed, it will be fixed in HEAD. Can you check if the problem also occurs in the latest HEAD version?

sukr_s’s picture

Priority: Normal » Critical

Just to ensure we are on the same page: I started getting this error when using a D7 site. To bring some predictability to the error, the Selenium script was written. You can manually create 44+ nodes one after the other and the problem will still occur (without using the Selenium script).

I am marking it (again) as critical because the site is not usable after this until Apache is restarted. If you believe this is not critical, please change it back and I won't change the status after that.

Anonymous’s picture

I'm moving this out of the critical queue because I followed the steps in the original post and can't reproduce it. The Selenium test runs all the way to the end without any errors.

Anonymous’s picture

Priority: Critical » Normal
Damien Tournoud’s picture

Cannot reproduce either, with a SimpleTest that repeatedly creates nodes.

The connections to MySQL are properly closed, so the Threads_connected status variable stays low (4 in my case).

Can you please report the exact versions of the components in your install (Apache, PHP and MySQL)?

sukr_s’s picture

FileSize
146.14 KB

I've attached the output of phpinfo(). Kindly let me know if you need any other information or if I can help in any other way.

sabesneit’s picture

I got the same error. I thought it was Drupal 7, but I realized the partition holding MySQL was at 100%. You should take a look at your available space with df -h.

Berdir’s picture

I've seen this too, but I don't think Drupal is responsible; it's probably a misconfiguration somewhere.

nelp’s picture

I got the error when I tried to import a CSV file into nodes in Drupal 7 with the Feeds module. There are only 79 records in the CSV file. Is there a better way to migrate content from an old system to Drupal 7?

Ericmaster’s picture

I'm getting this sometimes too. I'm not sure why it happens, but I suspect it's a DB configuration issue on my hosting. I contacted them and I think they recreated my database; I didn't see the message for a while, but it's showing up again lately. Any help will be appreciated.

droplet’s picture

Fresh installation?

I cannot reproduce it (LAMP).
I'm running Selenium and devel generate at the same time, and the CPU hits 100%.

Anonymous’s picture

I'm getting this error because of another module. The problem is, now that I've got it, I can't connect to my site via the web interface or drush. I'm not sure how to reset the lock.

sukr_s’s picture

Restart your MySQL server; the locks will be reset.
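
If restarting MySQL isn't an option (on shared hosting, for example) but you can still reach the database through drush or a small script, clearing the semaphore table has the same effect on Drupal's locks. A minimal sketch, assuming a bootstrapped Drupal 7 site:

// Clears all Drupal locks by emptying the {semaphore} table.
// Only do this when no other request is legitimately holding a lock
// (for example a long-running cron or cache rebuild).
db_truncate('semaphore')->execute();

From the command line, drush sql-query "TRUNCATE semaphore" does the same thing, table prefix permitting.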

Ericmaster’s picture

I'm still having this issue. It occurs randomly, making it difficult to reproduce and identify the cause. Here's the MySQL configuration I got by printing phpinfo(). Any help will be appreciated.

mysql

MySQL Support enabled
Active Persistent Links 1
Active Links 1
Client API version 5.0.91
MYSQL_MODULE_TYPE external
MYSQL_SOCKET /var/lib/mysql/mysql.sock
MYSQL_INCLUDE -I/usr/include/mysql
MYSQL_LIBS -L/usr/lib -lmysqlclient

Directive Local Value Master Value
mysql.allow_persistent On On
mysql.connect_timeout 60 60
mysql.default_host no value no value
mysql.default_password no value no value
mysql.default_port no value no value
mysql.default_socket no value no value
mysql.default_user no value no value
mysql.max_links Unlimited Unlimited
mysql.max_persistent Unlimited Unlimited
mysql.trace_mode Off Off

mysqli

MysqlI Support enabled
Client API library version 5.0.91
Client API header version 5.0.90
MYSQLI_SOCKET /var/lib/mysql/mysql.sock

Directive Local Value Master Value
mysqli.default_host no value no value
mysqli.default_port 3306 3306
mysqli.default_pw no value no value
mysqli.default_socket no value no value
mysqli.default_user no value no value
mysqli.max_links Unlimited Unlimited
mysqli.reconnect Off Off

CasaDelGato’s picture

I am also now getting this issue.
This is on a brand new Drupal 7 site install.
I had just installed CKEditor, and when I pressed the "Save changes" button after enabling CKEditor on the modules page, this started happening.
Now the site is hosed and can't be accessed at all.

CasaDelGato’s picture

So, I just ignored the site for an hour - then it started working again.
Something is definitely weird here.

neoglez’s picture

I'm seeing the error. It happened when I was importing "books" (nodes created programmatically) in a kind of bulk operation using a module I developed. Things work OK under Windows but collapse under Debian, outputting this error. It could be a configuration issue, but the fact that it brings Drupal down makes it very important!
In contrast to others, I didn't have to restart Apache.

dubj1m’s picture

I've just started seeing this error - I hadn't done anything to the site before it started bringing up the error - can anyone help?

dubj1m’s picture

The site is working again, five minutes after I received the error. Can I talk to my host about this? What should I be asking them to do?

unclejustin’s picture

I got this too on a fairly clean install of D7. I restarted Apache and could connect again. I can reproduce it by simply opening the "Edit menu" link for the main menu in a new tab. Every new tab creates a new DB connection that seems to just stay there. I can manually kill them through MySQL Workbench, but obviously this isn't a real solution. Anything I can provide to help track down this issue?

unclejustin’s picture

Correction. Viewing any page on the site results in a new database connection.

Babalu’s picture

subscribing

neoglez’s picture

@Babalu Would you provide more info about the circumstances under which you got the error?
(Just subscribing doesn't help much.)

cybernostra’s picture

@ #23

Same error today after upgrading Apache Solr to the latest version (beta7). The update ran OK, and after a while I used Taxonomy Manager to arrange taxonomies.

The error comes when I try to run cron and flush the cache:

PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (line 165 of /datas/goodmorning/www/includes/lock.inc).

erik2e’s picture

If MySQL connections are not closed properly, you might have an error somewhere in the code that causes the script to end prematurely.

In standard cases, connections are properly closed.

Look at the detailed error log of your server, or activate on-page error reporting if you're on a development site. You might find something useful, such as a forgotten dsm() call somewhere while the Devel module is not enabled.

The error might not be displayed, so the site looks as usual, but the PHP process does not end properly, leaving MySQL connections open.
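
To illustrate the kind of leftover debug call erik2e is describing: a stray dsm() in a custom module is a fatal error whenever the Devel module is disabled, which kills the request part-way through. A minimal, purely illustrative guard (the variable name here is made up):

// Hypothetical leftover debug statement in a custom module.
// dsm() only exists while the Devel module is enabled, so calling it
// unconditionally produces "Call to undefined function dsm()".
$debug_data = array('example' => 'value');
if (function_exists('dsm')) {
  dsm($debug_data);
}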

cybernostra’s picture

@ #25 True. The Display Suite module generates the error (I reported it in the DS queue):

[Thu May 26 15:45:25 2011] [warn] mod_fcgid: stderr: PHP Fatal error: Call to undefined function dsm() in /datas/goodmorning/www/sites/all/modules/ds/modules/ds_search/ds_search.module on line 465
[Thu May 26 15:47:06 2011] [warn] mod_fcgid: stderr: PHP Parse error: syntax error, unexpected '<' in /datas/goodmorning/www/sites/all/modules/views/plugins/views_plugin_argument_default_php.inc(49) : eval()'d code on line 1

ytsurk’s picture

Got it with user 1 after upgrading to 7.2.
It went away by itself again.

DrupalMike’s picture

Version: 7.x-dev » 7.2

Getting the same error when working with Views. It has happened twice now. I had to log a support ticket with my hosting service in order to get the site back up again.

neoglez’s picture

Version: 7.2 » 7.x-dev

@DrupalMike Please leave the issue pointing to dev.

Ideally, all bugs should be filed against -dev as all code changes are only applied to -dev.

...from How to make an issue report

Lee B’s picture

I doubt there's any bug here at all. It's probably exactly what it sounds like: too many connections to the database, compared to the number it's configured to support.

Check the maximum number of threads/processes/clients your webapp (not your webserver -- PHP, not Apache) uses vs. the max_connections configuration in MySQL (or the equivalent in other DBs). The former should never be able to go higher than the latter.
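
If it helps to check those two numbers from inside Drupal rather than from a MySQL console, a rough sketch like the following should work (run it via drush php-eval or a small bootstrapped script; Threads_connected and max_connections are standard MySQL status/system variables):

// Compare the current number of open MySQL connections with the configured limit.
$threads = db_query("SHOW STATUS LIKE 'Threads_connected'")->fetchAssoc();
$limit = db_query("SHOW VARIABLES LIKE 'max_connections'")->fetchAssoc();
print 'Threads_connected: ' . $threads['Value'] . ' / max_connections: ' . $limit['Value'] . "\n";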

sukr_s’s picture

@Lee, the same DB configuration on the same machine has worked for D6 without any errors for two years now, so how is it that the configuration is incorrect for D7 alone? It means that D7 either has issues in connection handling or opens far too many connections compared to D6. In either case it has to be corrected.

Lee B’s picture

Yep, I take it back actually. On a Drupal site that whitescreens on /user (I think due to the git versions of the D7 OAuth and Twitter modules being installed), I can reproduce this by reloading the /user page more than max_connections times, after just reloading MySQL beforehand. That seems to confirm what erik2e said in comment #25.

NiklasBr’s picture

Subscribing

NiklasBr’s picture

So, two sites that I host on Amazon AWS have begun reporting errors like this all the time and I have a great deal of trouble troubleshooting this:

PDOException: SQLSTATE[HY000] [1040] Too many connections in lock_may_be_available() (line 164 of /domain.tld/includes/lock.inc).

NiklasBr’s picture

Attaching a printed phpinfo() file.

WolfSoul’s picture

Has this been solved? I'm sick of that error.

sukr_s’s picture

@WolfSoul: I tried to reproduce the error with 7.2 and did not get it anymore. Which version are you using? If it happens too frequently, you may want to consider increasing the priority.

WolfSoul’s picture

I'm using 7.4 and my site has been crashing all week. I also posted in the forum.

WolfSoul’s picture

Version: 7.x-dev » 7.4
Berdir’s picture

Version: 7.4 » 7.x-dev

Every case of this so far has been a follow-up issue caused by some other bug. Check your error logs for any SQL errors and similar things.

Also, don't change the version; patches are always committed against the -dev branch, and bugs must always be tested and verified against that version.

WolfSoul’s picture

OK, sorry about the -dev thing. I'll see if I can get some logs (I'm on shared hosting, so no server access).
I suppose this either means some module caused it or I'm on my own to figure it out.

TripX’s picture

Subscribing.

Same here, after installing Media Gallery and uploading a few hundred pictures.

Adam S’s picture

Still a problem. Subscribing

WolfSoul’s picture

Looks like I will go back to Drupal 6 until I can find a version that does not end up causing this error.

vizualweb’s picture

My shared hosting helpdesk also suggested downgrading to 6.22, since (in their words) "Drupal is the most carelessly written core since use resources very much. It is nonsense to use 800 thread connection to database for a ~1000 visit site."

Anyway, HostGator says the same, so it looks like shared hosting services allow only 25 thread connections.
http://support.hostgator.com/articles/specialized-help/technical/max-use...

What is the solution? Go back to 6.22? VPS?

neoglez’s picture

@vizualweb good piece of info.

suldan’s picture

I have two D7 (7.7) sites hosted by the same host. The first site works fine. The other site worked fine for over 6 weeks until it broke down completely with "[...] Too many connections". Other sites on the same DB server were also down.

Now, 30 minutes later, the MySQL connection is OK, and faster than before.

What I did:
1. I wrote a message to my host and asked them if they could identify the problem.
2. In the meantime I deactivated some subdomains not in use.
3. I uninstalled the most recently installed module, Hierarchical Select, which was not in use.
4. Ran update.php.

I hope the MySQL state now stays healthy. When I learn more I will post it here.

d.holmen@gmail.com’s picture

Since I upgraded from D6 and created a few nodes, this error comes up rather often. Restarting Apache usually works.

Subscribing.

makeapage’s picture

Very annoying! I hope the fix will come soon. Not a really good idea to launch a Drupal 7 site at the moment.
Subscribing.

donniewiko’s picture

I got the same error today on my testing server. Each page load gave me a new connection to the database, and it did not drop the old one.
This doesn't happen on my live server.

A little extra information:

I'm creating a new view, so perhaps the issue lies within the Views module. I'll post more info as soon as I know more.

To resolve the issue, a simple restart of MySQL is sufficient (in my case).

makeapage’s picture

You're right about the old database connections not being dropped, donniewiko. It looks like the database jams up sometimes, and each click on a link creates a new process, resulting in an overload: "too many connections". I'm getting this error on a regular basis when developing my site. The issue resolves itself after 2-5 minutes.

sukr_s’s picture

@donniewiko: have you tried using the Pressflow version of D7 (http://pressflow.org/)? Do you have the same problem?

donniewiko’s picture

No, I have not; I had never heard of it, but I will check it out. Thanks for the link.

donniewiko’s picture

It seems the problem fixed itself. Very odd. I haven't changed anything.

makeapage’s picture

Same here. I deleted a bunch of stuff in the database with the Devel variable editor and haven't had it since, even when clicking like a madman on a bunch of things.

omnyx’s picture

Subscribing. Same issue here.

droplet’s picture

@sukr_s,

Thanks for the info; I never knew PF7 had been released. I checked it out and compared it with D7: there is no big difference, it just adds PRESSFLOW_SETTINGS / Pressflow Smart Start.

It sounds like there is nothing worth adding from PF7 at the moment, or it is still too young.

makeapage’s picture

Hmm, forget about my comment #55. It just happened when clearing the caches.

catch’s picture

Version: 7.x-dev » 8.x-dev
Component: node system » base system
Priority: Normal » Major
Status: Active » Needs review
Issue tags: +Performance
FileSize
1021 bytes

Pressflow 7 does not have any performance improvements compared to core yet, nor is it being actively developed as far as I can see. There were some performance issues in Drupal 6 that were only fixed in Pressflow (since Drupal 6 could not break backwards compatibility, require MySQL and PHP 5.2 etc.), but every one of those fixes is in Drupal 7 core now in one way or another.

There are only a few places in core where lock_may_be_available() is called, the earliest one is variable_init(), so please try this patch and report back if it works for you or not.

Status: Needs review » Needs work

The last submitted patch, variables_lock.patch, failed testing.

catch’s picture

Status: Needs work » Needs review
FileSize
1.04 KB

Here's one with no syntax error.

catch’s picture

If I'm right about this, then the proper fix for this issue is #987768: [PP-1] Optimize variable caching. That patch is more complex, but although it has a single test failure it should be fine to test.

If you don't know how to apply patches, see http://drupal.org/patch/apply

It would also be helpful to get some debug information from people who are running into this.

You can make this change to includes/lock.inc:

diff --git a/includes/lock.inc b/includes/lock.inc
index 7dd8db3..b45a9d5 100644
--- a/includes/lock.inc
+++ b/includes/lock.inc
@@ -164,6 +164,7 @@ function lock_acquire($name, $timeout = 30.0) {
  *   TRUE if there is no lock or it was removed, FALSE otherwise.
  */
 function lock_may_be_available($name) {
+  file_put_contents('/tmp/lock.txt', print_r($name, 1) . "\n", FILE_APPEND);
   $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();
   if (!$lock) {
     return TRUE;

You may need to change /tmp to a directory you have read and write access to on the server, for example your Drupal files directory or similar. You'll need to download the file, check that it has something in it, then upload it here.

catch’s picture

Status: Needs review » Postponed (maintainer needs more info)

Needs more info from reporters, all patches here are for diagnosis only.

makeapage’s picture

catch, thanks for looking into this. I'll add the code to includes/lock.inc. Luckily I get the error less often than before (odd), so it might be a while before I get some data.

makeapage’s picture

Do I just paste the code at the beginning, or what? I'm getting this:
Parse error: syntax error, unexpected T_DEC in includes/lock.inc on line 64

catch’s picture

You should use diff/patch, for both, but:

 function lock_may_be_available($name) {
   file_put_contents('/tmp/lock.txt', print_r($name, 1) . "\n", FILE_APPEND);
   $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();

This is what the top of lock_may_be_available() should look like once that diff is applied.

jamesTbaker’s picture

We've had this problem on our dev server twice and restarting Apache has been an effective short-term solution both times. Obviously, we need a proper solution before moving to production.

We've only been using Drupal for a month, so none of us is really a Drupal core or Drupal documentation/community expert. As near as we can tell from our own experience and from this issue thread, Drupal is not expiring database connections in a timely manner. Since most or all HTTP requests result in a new DB connection and the default MySQL wait_timeout setting is 8 hours, the max_connections limit is reached quickly. (On our dev server max_connections was set to 100.) This seems to be the conclusion reached by many people, but can anyone from the Drupal team confirm that this is actually the cause of the problem? I notice the "issue's overall values" indicate this is an issue for 8.x. Does that mean we're going to have to wait for 8.x to get a solution?

The stopgap fix we've employed on our dev server has been to adjust wait_timeout to 2 minutes and max_connections to 256.
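
For anyone who wants to try that kind of stopgap without editing my.cnf, roughly the same thing can be done at runtime. This is only a sketch: the values are the ones from the previous paragraph, SET GLOBAL requires the SUPER privilege, and the change is lost on the next MySQL restart:

// Check the current values first.
foreach (array('wait_timeout', 'max_connections') as $variable) {
  $row = db_query("SHOW VARIABLES LIKE '" . $variable . "'")->fetchAssoc();
  print $row['Variable_name'] . ' = ' . $row['Value'] . "\n";
}
// Apply the stopgap values mentioned above (requires SUPER).
db_query('SET GLOBAL wait_timeout = 120');
db_query('SET GLOBAL max_connections = 256');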

Comments #59 to #66 seemed a bit cryptic to me. How does the patch relate to the code? How do those relate to issue #987768? Do these relate to the Pressflow distro only?

Here's our stack:
** CentOS v5.6 (Linux 2.6.18)
** Apache 2.2.19
** MySQL 5.0.77
** Drupal 7.7 (Standard, no special distro)

Please let me know if I can provide any additional information to help with this issue.

Thanks!

hate90’s picture

I'm not sure if it's a Drupal problem, because I've been getting this error quite a few times now. I have 3 Drupal websites on a shared account; one of them is on version 7.8 and the others on 7.6, I think. All three websites showed the error at the same time and fixed themselves after a few minutes.

Aaronwa’s picture

subscribed

ali_rahmi’s picture

I have the same problem. I use D7.8 for my website. Please, does anyone have a solution? Keep me informed.

bob.hinrichs’s picture

Same problem here; we are using Drupal 7. Very difficult to troubleshoot. Subscribing, though I notice this is tagged as D8 here, yet the vast majority are reporting issues on D7?

Ace Cooper’s picture

I'm using Drupal 7.9 on a shared hosting.
Got the same error after an hour of simultaneous content creation on the same website with my comrade:
PDOException: SQLSTATE[HY000] [1040] Too many connections in lock_may_be_available() (line 167 of /home/user/data/www/mmh6.info/includes/lock.inc).

I went to log into the database through phpMyAdmin to look for the lock semaphore, but I got a similar error:
#1040 - Too many connections

So clearly it is a shared hosting limitation.
Is Drupal 7 greedier for hosting resources? Well, yes...
But is it more awesome than previous releases and most CMSes? You betcha!
I will contact my hosting support to resolve the issue and will keep you posted.

P.S. The solution follows in #75

DR_AKM’s picture

You can try this statement:

SET GLOBAL max_connections = 2000;

Run it in your MySQL server.

I had the same problem, and this worked for me.

P.S. I used phpMyAdmin.

sukr_s’s picture

There is one possible place which could cause SQL to be fired when it is not needed. I've fixed it in the attached patch for 7.x. Test it and let me know if it solves the problem, and I will port it to 8.x.

Ace Cooper’s picture

Well, in my case above (#72) the solution came from a ticket to my host.
They came back with a rather prompt reply: another shared user was hogging all the connections with massive queries.
They put the server back online, froze that user's account, and reported having a 1000 user connection limit, which that user still managed to exhaust. They also promised that such downtime would be a one-time deal, which has been true so far.
So it wasn't me, it wasn't Drupal - just some anonymous shared user's fault.

P.S. I use shared hosting Multisite3 at Hvosting.ua (not an ad on my part, just a piece of friendly feedback to anyone from Ukraine/exUSSR).

sukr_s’s picture

Status: Postponed (maintainer needs more info) » Needs review
FileSize
1.18 KB

In lock_may_be_available() the semaphore is deleted but not released from the global $locks array. This is the patch from comment #74, ported to 8.x.

catch’s picture

On a variable cache miss, the first query executed on the page is in the lock system, so if that page is the one adding "too many connections", it'll show an error from the lock code. However, this doesn't mean the lock system is responsible, just that it happens to be the first SQL query executed in those cases.

lock_may_be_available() should not unset the $locks global AFAIK, and I also think it's unlikely at best that this is connected to the bug report.

sukr_s’s picture

@catch: both lock_release() and lock_release_all() delete the semaphore and unset the global array. Wouldn't it leave the system in an inconsistent state (probably handled elsewhere) if the semaphore is deleted but the global $locks array is not unset (probably causing unnecessary SQL queries to be fired)?

Also, for my understanding: if it's okay for lock_may_be_available() to delete the semaphore, why shouldn't the function also unset the global $locks array?

Damien Tournoud’s picture

Status: Needs review » Active

lock_may_be_available() is designed to delete an (expired) semaphore held by a previous request. It's not impossible, but highly unlikely, that it will delete a semaphore held by the current request.

Anyway, back to the issue at hand: all the reports here seem to point to some kind of hosting issue. Maybe the PDO version used there doesn't properly close the MySQL connections?

c960657’s picture

#843114: DatabaseConnection::__construct() and DatabaseConnection_mysql::__construct() leaks $this (Too many connections) is about PDO leaking connections. I have no idea whether that is related to this issue, though.

Jens Dill’s picture

I got this after clicking again (and possibly even again after that) on the "Log out" link after the first click did not seem to have any effect. Could it be something related to timing, and inadequate protection from re-clicking a button that is not responding quickly enough? Again, for me, it cleared up by itself after a 5 or 10 minute wait.

mmatiaschek’s picture

This error can also be caused by a general database failure. On my host (Bluehost), for unknown reasons but possibly after removing another Drupal installation through SimpleScripts, the MySQL database was missing some tables. The semaphore table was indeed missing and there were only 34 tables left. After restoring a backup, everything was back to normal (102 tables).
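
If you suspect the same thing, a quick check from a bootstrapped Drupal 7 script (db_table_exists() is a core Database API function) would be:

// Confirm that the semaphore table actually exists before blaming the lock code.
if (!db_table_exists('semaphore')) {
  print "The semaphore table is missing - restore it from a backup or reinstall the schema.\n";
}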

catch’s picture

Status: Active » Postponed (maintainer needs more info)

I'm marking this as needs more info. If you're getting this error, please let us know the value of max_connections from my.cnf.

Smenzo’s picture

I'm also frequently getting this error, once per day (at least once per day that I am actively working on Drupal, hard to tell if it occurs on the days that I am not actively using Drupal!)

It seems to fix itself eventually but it's a significant frustration when it happens because I'm trying to get things done on my site and this issue causes the site to suddenly go offline for a few hours.

My shared hosting support has several tickets open but has so far been unable to understand or resolve the cause of the problem.

phpMyAdmin Variables tab shows
max connections 100
max user connections 25

Abilnet’s picture

I'm also having this on my D7.12 site. It happens almost daily and takes all my Drupal 7 sites down. Only rebooting the server seems to help for a while; restarting MySQL or Apache does not.

Ubuntu server 10.04LTS
Apache 2.2.17
MySQL 5.1.34
Php 5.2.14

[mysqld]
local-infile=0
max_allowed_packet=64M
max_connections = 300
/etc/my.cnf
pharscape’s picture

I have this error several times per day while I am editing content. My web-host won't allow me to reboot their server :)

Max Connections = 500
I don't have admin rights to adjust this value.

PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (line 167 of /websites/###/#########/#####/public_html/includes/lock.inc).

Drupal 7.14

The database is not on localhost, and they (123-reg) recommend no caching for Drupal.

If you need more info let me know.

byrond’s picture

We see this error every once in a while. There is no value for max_connections in my.cnf, so I believe we are using the default of 151.

maclore’s picture

All I did was create a new user with administrative privileges, and that was all.

pingu2k4’s picture

Version: 8.x-dev » 7.14

I am getting this error occasionally as well. I am also hosting my site at 123-reg; might it be a problem with their (overloaded) servers? There are times when my site won't load because the server is too busy. I guess "you get what you pay for" applies with web hosting, but still, I would love it if there was a way I could sort this out before I get the chance to switch hosts.

ah0’s picture

I just encountered the same error and the site was back after 30 min or so.
This is a high priority issue.

gausarts’s picture

Same error; it happened as follows:
I went to admin/reports/updates and checked manually.
Previously, I had disabled Devel.
While the update check was running, I went to uninstall Devel, and then the error happened.

xamox’s picture

*Sigh*

I am getting the same error on an EC2 micro instance. Could it be that I am hitting memory limits or something? I have been doing Drupal development for over 5 years and have never experienced this type of error. It's extremely frustrating because I sold our company on using Drupal over Django, and now every time this happens they think Drupal is a busted pile of crap (which I know it's not). I would like to get this resolved as soon as possible, as we need to get this site up and running, and I went with Drupal 7 instead of 6 because I figured it would be stable after being out for over a year.

Let me know what kind of information you need and I will get it to you.

Current specs:
Amazon EC2 micro instance
Ubuntu 12.04 server (the one included in the quick setup via the EC2 wizard)
My phpinfo output here: https://gist.github.com/3055471
MySQL version: mysql Ver 14.14 Distrib 5.5.24, for debian-linux-gnu (x86_64) using readline 6.2
Apache version: Server version: Apache/2.2.22 (Ubuntu)

I also installed everything via apt-get and standard packages.

danithaca’s picture

I have the same issue on an Amazon EC2 micro instance.

I have reduced "wait_timeout", "interactive_timeout" and "lock_wait_timeout" to 3 minutes, 3 minutes and 10 minutes respectively, and it seems to solve the problem. I will report back if I see the problem again.

xamox’s picture

In reference to #94:
I upgraded to a small instance (went from 500 MB of RAM to 1500 MB) and have not run into the problem again. Before upgrading, I remotely monitored with htop and noticed the RAM was getting pegged. I also noticed that swap wasn't showing up, which makes me think it was running out of memory and just locking up. I have seen something similar before with Drupal, the white screen of death, but that typically didn't require a complete restart of the Apache/MySQL services.

ouyang1512’s picture

I just got this error a couple of minutes ago. I do not know what happened; it occurred when I clicked on the "People" tab.

I did have the site open in a few windows and browsers (IE, Firefox) at the same time, and I have CiviCRM installed with Drupal as well. One of our staff might have been editing the webpage at the time the error appeared.

I have D7.14, MySQL 5.1.61, PHP 5.3.10, and a PHP memory limit of 128 MB. The CiviCRM DB and Drupal DB are separate; the CiviCRM DB is only 2.7 MB and the Drupal DB is about 8 MB. If anybody knows how to prevent this from happening again, please let me know. Greatly appreciated!

ouyang1512’s picture

A follow-up on this error on my site: I was told the error happened because a WordPress DB on the server caused it. The admin solved the problem, and I was told it is not my site's problem and there is nothing to worry about in the future :)

Vocabulo’s picture

It happened to me on shared hosting. The DB wasn't accessible at the time because of a resource limit, so I think it was not Drupal's fault.

realnerd’s picture

We too were experiencing this problem while developing a fairly big project. This could also be the result of a large number of PHP fatal errors caused by one or more modules, as suggested here:
http://help.getpantheon.com/pantheon/topics/too_many_connections-agyju#r...
For us the problem arose after implementing several new custom-made modules.
We've tried several settings for max_connections, ranging from 100 to 300.

UPDATE: the patch in #74 seems to work for our case (for now).

remlingj’s picture

Same problem here. It occurred once on our virtual server, and I also learned it occurred on another of our servers a few weeks ago (I love good communication between departments). It seems to have occurred when all the developers working on the system had gone home, so I doubt it was anything they had done. The one person who found the issue is not a developer and has limited access, nor do I think she did anything more than update content on a page.

mvahlsing’s picture

I have run into this issue with Bluehost as well. Thinking about which installed modules might cause connections to open and not close, I have disabled Migrate, WordPress Migration and other related modules. I am working on a transfer from WP to Drupal; I'm not sure if the issue mentioned in #99 is the culprit.
I will follow up with results.

astra’s picture

It happened to me on shared hosting with HostGator, but my site was back after 20 minutes.

mobonobomo’s picture

I had received this error after my Amazon EC2 small server instance crashed. Switched to a medium instance and have not had any problems since. I'll probably eventually have two instances, one for MySQL and the other for Apache2 and PHP.

densolis’s picture

subscribe

webchick’s picture

Hi, folks. The patch at #843114-50: DatabaseConnection::__construct() and DatabaseConnection_mysql::__construct() leaks $this (Too many connections) was just committed, and we suspect it might have a bearing on this issue. If someone having this problem has some PHP chops and would be willing to backport it to D7 for testing, that'd be muchly appreciated!

dan_aka_jack’s picture

subscribe (I'm getting this problem on D7 too)

podarok’s picture

#108: use the Follow button at the top right of the page.

remlingj’s picture

Just happened again (after about 4-5 weeks of working fine).

I just went through all the logs. In the access_log I found another clue (still no solution).
On our system, at :01:01 after each hour, there is an entry in the access log from ::1 for an automatic cron run. The only other time I see ::1 (local) accessing the system is around the time we had the issue. During this time frame I found roughly 37 other accesses by ::1 (in a 25-minute period), but this time with these entries:
::1 - - [16/Oct/2012:15:23:02 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (CentOS) (internal dummy connection)"
::1 - - [16/Oct/2012:15:23:03 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (CentOS) (internal dummy connection)"
::1 - - [16/Oct/2012:15:23:09 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (CentOS) (internal dummy connection)"
::1 - - [16/Oct/2012:15:23:11 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (CentOS) (internal dummy connection)"

These entries are not all sequential, but are spread out between when we noticed the error and when we restarted the server.

My current assumption is that Apache is opening connections to keep child processes from being terminated (see: http://user-agent-string.info/list-of-ua/browser-detail?browser=Apache%2...) but that these connections are then not closed, thus resulting in "too many connections." This seems to explain why restarting Apache seems to clear up the issue.

What the actual child process(es) are, and whether they are causing the issue or are just a symptom of it, is not clear to me. We run multiple sites on our server, but if I can narrow down which site(s) are causing this, we might be able to determine whether it is a bug in a specific module used on that site.

Any other ideas?

Berdir’s picture

The issue mentioned by webchick has been ported and is RTBC. So if you are experiencing this issue, please apply that patch and check whether things improve. Also make sure that you watch the output of SHOW PROCESSLIST and check how many open connections there are, check the user/database they're connected to, and try to track it down to a specific site if you have multiple ones.

That said, this is the error that happens when there are no more connections available, and there are a million possible reasons for that. This is just how the problem manifests itself, not the cause of it. We can't "fix" it here.
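
A quick way to do what is suggested above from inside Drupal, grouping the output of SHOW PROCESSLIST by user and database so you can spot which site is holding the connections (a rough sketch; the property names follow MySQL's column capitalisation):

// Count open MySQL connections per user/database.
$counts = array();
foreach (db_query('SHOW FULL PROCESSLIST') as $row) {
  $key = $row->User . '@' . (empty($row->db) ? '(none)' : $row->db);
  $counts[$key] = isset($counts[$key]) ? $counts[$key] + 1 : 1;
}
arsort($counts);
foreach ($counts as $key => $count) {
  print $count . '  ' . $key . "\n";
}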

Screenack’s picture

If you find this page while googling "PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available()", save yourself some time and first confirm that your MySQL database instance is operational and accessible. My shared hosting provider's MySQL server was down, but this was the error I saw on my site. They restarted the server and all was well again. No need to run patches, etc.

dansboy’s picture

Status: Postponed (maintainer needs more info) » Needs review

#61: variables_lock.patch queued for re-testing.

webchick’s picture

Status: Needs review » Postponed (maintainer needs more info)
lucianomallamaci’s picture

Version: 7.14 » 7.19

Hi all,
After upgrading to Drupal 7.19 I'm having serious problems which cause my site
http://www.marketingaziende.it
to become unusable.
In particular I get two kinds of PDOException:
PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available()

and a second one:
'user XXXXXX has more than max user connections'.

This never happened before upgrading. I'm wondering if the new version 7.19 relies on connections more than previous versions. However, since caching is enabled and each script closes its connection to the DB as soon as it is no longer required, I really don't know what to do.

Here are some SQL configuration variables:

max allowed packet 1.048.576
max connect errors 60.000
max connections 200
max user connections 4
max delayed threads 20
max error count 64
max heap table size 16.777.216
max insert delayed threads 20
max join size 18.446.744.073.709.552.000
max length for sort data 1.024
max prepared stmt count 16.382
max relay log size 0
max seeks for key 18.446.744.073.709.552.000
max sort length 1.024
max sp recursion depth 0
max tmp tables 32

Thanks in advance for your help
Luciano

maxplus’s picture

Hi,

Same problem here with a 7.19 installation.
The site worked perfectly for several weeks and now suddenly the problem has appeared...

David_Rothstein’s picture

Regarding the last two comments, I am not sure why anything would have changed with 7.19, but in any case, have you tested the patch at #843114: DatabaseConnection::__construct() and DatabaseConnection_mysql::__construct() leaks $this (Too many connections) as described above?

Note that it's already been committed to 7.x-dev and will be in the next bug fix release of Drupal 7 too (i.e., most likely will be in Drupal 7.21 when it comes out).

spencerthayer’s picture

Version: 7.19 » 7.21
Status: Postponed (maintainer needs more info) » Needs work

Subscribing. Getting this error right now. I changed the status to "needs work" since users here have given the maintainer more than enough information: two years' worth.

PDOException: SQLSTATE[HY000] [1040] Too many connections in lock_may_be_available() (line 167 of /home/mst3k/mst3k.tv/includes/lock.inc).

David_Rothstein’s picture

Status: Needs work » Postponed (maintainer needs more info)

I don't see anyone reporting that they tested the patch in #843114: DatabaseConnection::__construct() and DatabaseConnection_mysql::__construct() leaks $this (Too many connections) yet.

(That did not make it into 7.21 by the way, but it's still in 7.x-dev and the committed patch can also be applied to Drupal 7.21.)

See also #111.

Manish Pandya’s picture

Ask your hosting provider's tech team.

I had the same problem only on the server, not on my local machine, for the same site. So I asked them why I was getting this error (PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available()) only on their server.

The reply was: "There was a problem with your account, I have corrected it so now you won't get this message again."

I used to sort out this kind of problem by restarting MySQL via the command line or, if you are on cPanel, by opening phpMyAdmin and your database. Your site will work fine after that.

drubb’s picture

Check your webserver's error log! I ran into this error too, from one day to the next. Trying to increase MySQL connections, restarting MySQL and other actions didn't lead to success.

In my case there was a third-party module trying to call functions provided by a PHP extension that was not loaded. Disabling this module solved the case.

News4u’s picture

I need help on where to put the try/catch PDOException statement so that my site can be safer. I thought maybe inside the settings file?
Kindly help.

eync’s picture

This was almost the same thing for me as for drubb: there was a page that included PHP code that didn't work with Drupal. I had to check the Nginx logs to see where it was coming from; removing the page fixed it. So please check your logs and see where it's coming from.

girishmuraly’s picture

FileSize
31.52 KB

I just saw this exception appear on api.drupal.org. Is it possible for the drupal.org webmasters in this thread to look into it on api.drupal.org?

In case it helps with fetching logs, the timestamp when I got this error was
14:25 BST on 26 September 2013.

clayball’s picture

I just upgraded from 7.22 to 7.23 and I keep getting this error!

It's very annoying!

I'm not sure what more information anyone requires, but I'm happy to help.

I hadn't experienced this until I started using 7.23; I hope that much helps.

clayball’s picture

Version: 7.21 » 7.23
Priority: Major » Critical

I can't do anything now. This is on one of my production servers so, for me, this is critical!

clayball’s picture

It's times like this that I seriously consider switching to another platform such as WordPress.
I just don't have time to develop a site from scratch, although it might be worth finding the time!

IMHO Drupal has been getting buggier and buggier over the past year or so.

I'm done with my quick rant... for now.

gumdal’s picture

What is the solution for this?

albertczyk’s picture

I've hit the same issue. For the moment, I've been able to minimize the occurrences of this error by increasing the max_connections MySQL config variable, but that should not be needed. Drupal should be able to wait until a MySQL thread is free.

albertczyk’s picture

Well, I think I've found the culprit. variable_initialize() (https://api.drupal.org/api/drupal/includes%21bootstrap.inc/function/vari...) calls lock_acquire(), which in turn calls lock_may_be_available(). I think the problem is in lock_acquire(), which under heavy DB load (and high wait times) will repeatedly try to query for an available lock, eventually exhausting the connection limit. The solution would probably be tuning the timeouts in lock_wait(), or maybe rewriting the do loop in lock_acquire(). I still have to look at it in depth, but that's clearly the problem.

David_Rothstein’s picture

Does each of those calls actually open up a new database connection, though? I would have assumed they all share the same connection, but if they don't, that definitely sounds like a problem...

a.ross’s picture

This PDOException now continuously occurs on d.o, so if there is a performance issue here I'd say this should be a higher priority. The site is now next to unusable.

jobjol’s picture

Another thing that can fix this issue:
Check your free disk space!

hanksterr7’s picture

albertczyk, any update from you on this? It sounds like you found the cause of the problem, but how about a fix?

Thanks

hanksterr7’s picture

Also, variable_initialize() calls itself recursively until it gets a lock, waiting before each recursive call by calling lock_wait(). Is this recursion a problem?
/**
 * Loads the persistent variable table.
 *
 * The variable table is composed of values that have been saved in the table
 * with variable_set() as well as those explicitly specified in the
 * configuration file.
 */
function variable_initialize($conf = array()) {
  // NOTE: caching the variables improves performance by 20% when serving
  // cached pages.
  if ($cached = cache_get('variables', 'cache_bootstrap')) {
    $variables = $cached->data;
  }
  else {
    // Cache miss. Avoid a stampede.
    $name = 'variable_init';
    if (!lock_acquire($name, 1)) {
      // Another request is building the variable cache.
      // Wait, then re-run this function.
      lock_wait($name);
      return variable_initialize($conf);
    }
    else {
      // Proceed with variable rebuild.
      $variables = array_map('unserialize', db_query('SELECT name, value FROM {variable}')->fetchAllKeyed());
      cache_set('variables', $variables, 'cache_bootstrap');
      lock_release($name);
    }
  }

  foreach ($conf as $name => $value) {
    $variables[$name] = $value;
  }

  return $variables;
}

hanksterr7’s picture

Looking at the code for lock_acquire(), its do loop is calling lock_may_be_available() at most only two times.

As such, your statement:
"I think the problem is in lock_acquire() which under heavy DB load (and high wait time) will try repeatedly to query for an available lock, eventually exhausting the connection limit." is not correct, since lock_acquire() only calls lock_may_be_available() at most two times.

But variable_initialize() is calling lock_acquire() a potentially unlimited number of times due to the recursion, so you had the right idea, but the wrong location for the looping (as far as I can tell). So, should variable_initialize() be changed so it doesn't get into an endless loop? Or should the wait caused by its call to lock_wait() be increased?

Thanks

hanksterr7’s picture

Sorry, one more correction. lock_acquire() executes its do loop at most two times, but calls lock_may_be_available() in only one of those two passes.

hanksterr7’s picture

Ok, how about this?

We are all seeing this error at lock_may_be_available(), line 167 (for Drupal 7.23).

The line is

  $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();

There is no exception handling for this line in the lock_may_be_available() function.

Normally, if lock_may_be_available() is unable to find an expired lock it is able to reuse, it returns FALSE.

So why don't we just wrap the above line in a try/catch, which will catch the case where there are too many connections to the DB and the query cannot execute? If an exception happens, we return FALSE. Problem solved?

hanksterr7’s picture

Proposal for the start of lock_may_be_available():

function lock_may_be_available($name) {
  try {
    $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();
  }
  catch (PDOException $e) {
    watchdog('PDO', 'Failed to query the semaphore table for an available lock: @message', array('@message' => $e->getMessage()), WATCHDOG_NOTICE);
    return FALSE;
  }

albertczyk’s picture

I've also implemented this solution (slightly modified) and it effectively suppresses the error:

  try {
    $lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();
  }
  catch (PDOException $e) {
    file_put_contents('/tmp/lock.log', "Exception in lock_may_be_available!!\n", FILE_APPEND);
    return FALSE;
  }
  if (!$lock) {
    return TRUE;
  }
  $expire = (float) $lock['expire'];
  $now = microtime(TRUE);

However, I wouldn't add a watchdog call there, because it will generate more DB operations, and since we got into the exception code because we weren't able to get a free connection, the watchdog call would make the problem worse.

Anyway, this seems to work for me under heavy load, but it's just a hack that masks the error. This should be fixed by making the lock mechanism smart enough to avoid avalanche effects like this. I'm still looking into it, but I haven't been able to devote much time to it lately.

xolotl’s picture

I implemented @albertczyk's solution in #140 in Drupal 7.23 and am waiting to see how it affects the error.

klonos’s picture

I saw this when visiting http://www.reactos.org/news-archive today:

too many connections in lock_may_be_available() @ ReactOS.org

It went away once I waited and refreshed after a minute or so, but my point is that it affects valuable organizations that selected Drupal as their platform over other open-source CMSes. This hurts our reputation.

I realize that my comment doesn't help in solving the issue itself, but from what I see it dates back to 2010! I'm sure that if the problem persists, someone from ReactOS will eventually show up in this issue, but perhaps it would be a good idea to contact them proactively.

...just saying.

clashar’s picture

I also have this error, even on 7.24.

News4u’s picture

GREAT

ar-jan’s picture

@News4u: if you want to stay updated on this issue, you can use the Follow button in the top right block. Otherwise, it's helpful to only comment with relevant information/questions, because off-topic comments create distraction in the issue queue. Thanks.

hfortune’s picture

I am new -- this is my first time posting -- and normally I can figure out from reading through the forums the solution to my issues.

This one has me stumped though -- it just started happening to me (same error as listed above) today:

Error

The website encountered an unexpected error. Please try again later.
Error messagePDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (line 167 of /home/heather/public/reference/public/includes/lock.inc).

As I am developing the site, there should be no one connecting to it except me, so I am not sure why I would be running out of anything?

Please excuse my ignorance... like I said, I can normally figure stuff out, but I am now getting this error about every 5 - 7 clicks of the mouse (so, once it clears out, I may be able to click to admin, click to content types, click to the content type I want to edit, click edit and then get the error again) and my googling led me here, but I'm not really sure I understand how to resolve this. My site is in good shape otherwise, so I don't want to randomly start trying things and mess something up!

Can you tell me in layman's terms what I can do to resolve this? I am developing this website for about 20,000 users to start, so I am a little worried that just me working on it is causing connection issues.

Thanks so much and again I am sorry if I didn't ask the right way or if I missed something above that answered the question!

Thanks
Heather

hanksterr7’s picture

Heather, what version of Drupal are you running?

hfortune’s picture

I am running 7.26.

hanksterr7’s picture

@hfortune
OK, I was experiencing this when using 7.23. From reading the above messages, I had hoped that upgrading to 7.26 would have fixed the issue. Please see the fix suggested in #140; it seems to take care of it. You will need to edit includes/lock.inc and find the function lock_may_be_available(). Apply the changes shown in #140, wrapping this line
$lock = db_query('SELECT expire, value FROM {semaphore} WHERE name = :name', array(':name' => $name))->fetchAssoc();
in a try/catch.
That should make the problem go away.

hfortune’s picture

Thank you hanksterr7!

acoustika’s picture

Today I had 3 sites go down with this error.

They are still down, and I am waiting for an answer to a support ticket to my host.

I could see on their server news page that they updated PHP last night from 5.2 to 5.3; I don't know if that information is worth anything.

acoustika’s picture

I would really propose that this has something to do with the difference between PHP 5.2 and PHP 5.3.

My host changed PHP from 5.2 to 5.3 and restarted the servers at 08:00 today. I noticed my sites being down at 09:30, and they have been down since.

I can't get them back up since I don't have RELOAD (flush) privileges on the server.

But I am not well versed enough to figure out which difference is responsible from looking at the PHP changes page.

RedRat’s picture

acoustika, the PHP version is definitely not the root of the problem. My Drupal 6 site worked on PHP 5.3 for a year without any problems, but right after upgrading to Drupal 7 I constantly see this error.

spkrs’s picture

Hello drupal forums and community!

After years of content-managing a Drupal site, I'm teaching myself development and just encountered my first major glitch:

PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (Entity module)

I have two live sites installed on my server in D6, and I'm developing a site on D7, which is how I caused the problem: immediately after installing the Entity API and Entity Reference modules in D7 and setting up a reference between two content types, all three of my Drupal sites went offline (or at least timed out).

After a delay, I managed to get into cPanel and delete these modules. All three sites came back online and are now working OK. The D6 sites are running fine, but the D7 site is very sluggish. Not sure what to do?

spkrs’s picture

I'm not sure if this is useful, or even related, but I've run reports on my databases and found these errors:

sitename_primary.drupal_cache_menu] warning: 1 client is using or hasn't closed the table properly

sitename_primary.drupal_cache_page] warning: 1 client is using or hasn't closed the table properly

barryayers’s picture

This issue also occurs if you open multiple admin tabs and flush the cache simultaneously. I am going to try the fix in #140 and report whether the issue is resolved.

imkashif’s picture

Two different errors in different files at different times. They're gone when the page is refreshed. I am the only developer; I just started developing the site on D7 and the site is in maintenance mode. No other users. It's on shared hosting.

galundin’s picture

Version: 7.23 » 7.x-dev
Issue tags: -Performance +DrupalWTF

I removed the Performance tag because this error completely shuts down production sites for indeterminate duration, intermittently and without warning. Mine stayed down until I hacked core code, which was at least several hours of downtime, possibly a day or two. My concern is that this issue will be overlooked in the queue if maintainers think it's only about performance. The bug might be in performance related code, but this error is much more severe than a performance issue.

#140 worked for me. I added some more extensive logging and found that lock_may_be_available() isn't called on most page loads, even after flushing all cache. I added the exact same logging code elsewhere to verify that it works. After the error was resolved, I was never able to reproduce it, and I couldn't even get it to call that function again.

As others have noted, the problem of too many connections can be caused by many things, even other sites on a shared host. But there seems to be some incorrect logic in the locking, and no error handling to recover from this fairly common condition. In my case, it got stuck in that state permanently, showing users nothing but a cryptic error, until I added the try/catch.

barryayers’s picture

#140 worked for me as well. I agree that it could be due to multiple causes. If you are on AWS, like I am, the chances of that are high. I am nevertheless continuing to monitor the site using http://monitor.us.

lunk rat’s picture

Here is #140 in patch form. Hope it helps someone.

jkwilson’s picture

We started seeing this problem crop up more and more, and after quite a bit of research and work, it seems that the actual issue was a busy site overloading MySQL, not any Drupal issue.

We have a somewhat older instance of MySQL (5.0.95) that (as I found out) had never had any profiling or tuning done. Very little had been changed from the defaults. These were all helpful in diagnosing the problem:

Turning on MySQL's slow query log
https://drupal.org/node/2601
https://drupal.org/node/51263
High Performance MySQL

The biggest culprit on our site turned out to be Views. We were overloading it with some complex querying without much caching.

In our case, a simple patch like #140 did NOT help. In fact, it caused the site to hang. It sounds like it's helpful for others; there may be a different root cause in play, though.

tapas_talukder’s picture

I have got the same error.

I don't think it's a Drupal error.

After restarting my MySQL service, the error was resolved.

AstaC’s picture

I have this error on my site now. I have version 7.31. This is a completely new issue for me after having my site in Drupal for years. I used the patch in #140 but cannot say whether it helps or not; my page is up one day and down the next. I have a test site on the same server, in an older version of Drupal, and there is no trouble on that site at all. I have no idea what to do. Has anyone here found a solution? I DO think it is a Drupal problem.
I did nothing when this problem started; I just tried to open my site.

Edit: When searching, it seems this is an old problem that has not yet been solved. My site is almost unusable and there seems to be no solution for this issue. Is the solution to change CMS? What have the people who have written in this thread done?

RaulMuroc’s picture

+1

AstaC’s picture

As far as I can see this issue is set to be critical; when searching the Drupal site and Google I can see that many have problems with it. Still, it is postponed here. My site is completely useless because of this problem, and I am no computer wizard and cannot write any code to solve it.
Is anyone at all working on this problem? I would really like to know. I am starting up a new site for my dog club and I was going to build it in Drupal, but if this issue has no solution I will make another choice. In that case I will also start to convert my own site; I cannot sit and wait for a solution that isn't coming.
So could someone tell me whether there is anything going on or whether I ought to change CMS? If I can manage to do that in the few moments every now and then that my site is working.
I am on a shared host, I am the only user, and my site has been running for almost ten years in Drupal. I have never had this problem before, so I do not think it is a host problem. I have always stayed with Drupal because turning to the Drupal site has always helped me solve my problems, but it seems this problem is too difficult. Three years since it was first reported and still no solution; it seems to have started with Drupal 7.

markspall’s picture

Me too, on my local Windows machine. Running 7.30.

I've been experimenting with the Eventbrite modules which required locale module and Eventbrite Field module. Have uninstalled all these modules now. Not sure if they're the culprits in some way but we'll see.

markspall’s picture

BTW - To release the lock just restart MySQL - just in case people are stuck with a broken site.

NiklasBr’s picture

This error crops up now and again; the last time it happened was when changing CCK fields. The PHP version does not seem to matter, I have seen this issue on PHP 5.2, 5.3, 5.4, and most recently on 5.5.9.

/etc/mysql/my.cnf contains these lines:

key_buffer		= 16M
max_allowed_packet	= 16M
thread_stack		= 192K
thread_cache_size       = 8
max_connections        = 100

For what it's worth, this is AFAIK the default MySQL configuration for LAMP on Ubuntu 14.04 LTS.
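
A quick way to check whether you are actually bumping into that limit is to compare the configured maximum with the live counters (standard MySQL status variables, available on every MySQL version mentioned in this thread):

SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Max_used_connections';

If Max_used_connections is sitting at or near max_connections, something is holding connections open or the limit is simply too low for the traffic.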

AstaC’s picture

My site is down all the time now. I am on a shared host and have had discussions with them; they have tried to help me, but we have not been able to solve anything. I changed the password to the database, just in case...
After almost ten years with Drupal, I have now started a divorce process. Since I am no computer wizard I have no way to fix this on my own.

a.ross’s picture

AstaC, what is your hosting provider? Could you paste the output of phpinfo() (admin/reports/status/php) here? It has happened to us in the past that hosting providers lowered some performance parameters of MySQL and otherwise broke configuration in a backward incompatible manner. If you cannot pinpoint the problem, you cannot blindly blame Drupal.

AstaC’s picture

My hosting provider is Surftown.se but I can not provide you with the other information you want since my site is not reachable.

a.ross’s picture

You could also place a text file in your webroot via FTP called phpinfo.php with the following content:

<?php
phpinfo();

Then access the URL from your browser (based on your domain of course): http://example.com/phpinfo.php

AstaC’s picture

That worked. But it is a lot of information; which parts are interesting?

a.ross’s picture

The parts under "Core" and under "MySQL" are most important.

AstaC’s picture

FileSize
164.1 KB

OK this is what I get there.
Link to phpinfo

a.ross’s picture

I must say those are some extremely dubious PHP configuration parameters for a shared hosting provider. Did their support department change configuration specifically for you to troubleshoot the issue? I'll explain what's wrong.

PHP max_execution_time is 50 minutes. That means PHP scripts that are run by the webserver (i.e. when someone browses your site) are only killed when they've been running endlessly for 50 minutes. If, for some reason, there is an infinite loop in the application, a user browsing your site would experience that as a page not loading for 50 minutes, and then getting an error. Typically, browsers don't even wait for 50 minutes and show the user an error before receiving a response from the server.
Also, the allowed memory (memory_limit) is extremely high at 512 MB. The combination of these two means that the doors are wide open for a DoS attack and even, if someone's FTP credentials are leaked, for losing a lot of system resources to a botnet. It could well be that due to this configuration, their servers are being overloaded and your site suffers as a consequence. I would ask them why these configuration parameters are set so high (unheard of for shared hosting), and if they cannot offer an explanation, move your site elsewhere.

AstaC’s picture

What should those values be?

a.ross’s picture

The defaults are:

max_execution_time = 30 ;seconds, so this is only half a minute.
memory_limit = 128M

See:

ar-jan’s picture

That's not to say PHP defaults are always OK to use. It depends for example on their hardware and how many clients they host per server. And a Drupal site with a bunch of modules installed may already need more than 128M memory_limit (though you can usually increase the limit on-the-fly via settings.php, so a lower default is still sane, yes).
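
For reference, that on-the-fly override is just a couple of ini_set() calls in settings.php, assuming the host allows them (the values here are examples, not recommendations):

<?php
// Example values only; settings.php already opens with <?php, so in
// practice you would just append the ini_set() lines to the end of it.
// This raises the limits for this site without touching the global php.ini.
ini_set('memory_limit', '256M');
ini_set('max_execution_time', 60);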

It sounds like the database server of your provider is overloaded. It *could* be due to a module or other problem with your site, but then your provider should be able to give you a hint (e.g., the MySQL slow query log could help you pinpoint a problematic module on your site). It could also be their server configuration. Either way, if they cannot help you, I would consider trying another hosting provider rather than ditching Drupal. It's a good idea to avoid shared hosting, unless they're really well-recommended specifically for Drupal.

a.ross’s picture

We're running a site that requires a memory limit of 512 MB, but it runs on a fast bare metal server. But even we don't use an execution limit of 50 minutes. That's an insanely high limit. Think of what a resource intensive script being run (exploited, really) by a crawler or a botnet operator would do to the server load. 50 minutes of uninterrupted processing power and memory usage (up to 512MB, at that)!

It wouldn't surprise me if their MySQL configuration is messed up as well.

AstaC’s picture

OK, thanks, I'll have a talk with them.

AstaC’s picture

I have a question. Can these values really cause that problem?

a.ross’s picture

Like I said, if they're denying everything, move to a new host.

swickham’s picture

Adding my two cents: we were having this issue with a site, and it turned out that a certain webform configuration was causing our cache_form table to fill up with entries every time a page displaying one of our webforms was loaded. The problem was that having a file field on a webform causes an entry to be added to the cache every single time that form is rendered. Since we needed the file field, we moved the webform behind a link rather than rendering it on every page; the problem has gone away and the site's performance has increased.

I realize this probably isn't what is causing it for a lot of you but since using Webform is practically standard I'd assume a few of you may have been caught by the same thing.

Here's my ticket in the Webform issue queue that explains it - https://www.drupal.org/node/2352209

decibel.places’s picture

I had so much bad traffic on my personal web site, it was going down even with mysql max_user_connections raised to 480K.

My site is also served through CloudFlare, but they were no help with the flood. Also, when a site goes down because of db issues, CloudFlare DOES NOT serve a cached version; I asked them to reconsider this policy for certain cases.

In desperation, I came up with a fix that randomizes among 4 mysql users in settings.php, and that "fixed" it, so that one user does not exceed the limit.
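
Roughly what that looks like in settings.php (the user names and count here are made up; each account needs the same grants on the same database):

<?php
// Spread connections across several MySQL accounts so that no single
// account hits its per-user connection limit. Purely a workaround.
$db_users = array('site_a', 'site_b', 'site_c', 'site_d');
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => 'mydrupaldb',
  'username' => $db_users[array_rand($db_users)],
  'password' => 'shared_password',
  'host'     => 'localhost',
  'prefix'   => '',
);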

I had already deployed Bad Behavior, Honeypot, and http:BL, which blocked enough of the bad traffic (allowing the site to load) that the attackers went away to bother someone else.

My web hosting company was not happy with my solution, since it bypasses resource limits they set. Too bad.

https://www.drupal.org/node/2323373

mikeytown2’s picture

Just a heads up as I was linked to this issue from another; there are some patches for core that should help in this situation. https://groups.drupal.org/node/415883
The one that might help a lot is this issue #1898204: Do not use InnoDB for the semaphore table, use Memory
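
The gist of that semaphore change, if you want to try it by hand before the patch lands (this assumes no table prefix; note that a MEMORY table is emptied when MySQL restarts, which is generally fine for transient locks):

ALTER TABLE semaphore ENGINE = MEMORY;

Read the linked issue first for the caveats.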

NiklasBr’s picture

Your work on this is very appreciated mikeytown2!

Andrew Schulman’s picture

It seems that this problem can be caused by many different things. In our case, it turned out to be caused by another application entirely. The application had a connection leak, causing it to use up the available connections. Then when Drupal tried to connect, it couldn't. So although Drupal presented the error message, it wasn't Drupal's fault at all.

wagafo’s picture

I just got this error with 7.37 core. Despite having all error messages and warnings turned off, the PDOException message is shown and the path to the Drupal installation on the server is exposed. I manage a lot of Drupal installations on a lot of servers and this is the first time I have seen this.

mikeytown2’s picture

Another heads up that https://www.drupal.org/project/apdqc is out and will fix most occurrences of this

wagafo’s picture

Concerning my comment #189, after setting

max_allowed_packet=64M
max_connections = 300

in my MySQL configuration I was still getting the error, but it went away after enabling caching and CSS and JavaScript compression in Drupal.
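
Those settings live under admin/config/development/performance; if you prefer to flip them in code (for example from drush php-eval), these are, as far as I can tell, the standard Drupal 7 variable names behind that form:

<?php
// Enable the anonymous page cache and CSS/JS aggregation programmatically.
variable_set('cache', 1);           // Cache pages for anonymous users.
variable_set('block_cache', 1);     // Cache blocks.
variable_set('preprocess_css', 1);  // Aggregate and compress CSS files.
variable_set('preprocess_js', 1);   // Aggregate JavaScript files.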

cilefen’s picture

Hi @mikeytown2:

  • Based on #190, and the fact that a few hundred sites are reported as using apdqc, do you think this issue can be downgraded to Major (because there is a workaround)?
  • Can Drupal 7 core be patched for this without breaking BC?
  • Is Drupal 8 likely to exhibit this issue?
mr4711’s picture

Well, I had this error many times a while ago. In my case it was just another error following a 'full' MySQL database (a limitation of the web provider; clearing the caches helped only for a short period of time). After reorganizing the database the error was gone.

catch’s picture

I moved this issue back to 7.x because the most common trigger of it, lock_may_be_available(), is no longer called anywhere near as frequently in 8.x as it was in 7.x; this is because the variables system, which was the worst offender, has been removed in 8.x.

So the issue should stay against 7.x. However I think it's more likely to be fixed by bumping one of the two related issues I'm adding here to critical and marking this duplicate.

Leagnus’s picture

It happens when one of the sites on the same server opens a huge number of sessions and doesn't close them.
It ain't necessarily Drupal.

NiklasBr’s picture

it ain't necessarily Drupal.

Of course not, the error might not always be because of Drupal. But in some cases it is indeed because of Drupal, as you can read in the description and comments on this issue.

forfunity’s picture

This error has been appearing for the last week or so. It was appearing on one of my personal servers (located in the UK); I restarted MySQL, the error went away, and I applied the patch above. I started to investigate when it began to recur, but I still have not found a solution.

The strange thing is that at work, some of our Drupal 7 sites located on a different server in the US (AWS) started having the same issue. It seems very strange that this has occurred on unrelated Drupal 7 sites on unrelated servers.

Is anyone else having the same issues?

joseph.olstad’s picture

@forfunity, I highly recommend you run mysql_tuner.pl and take its advice for server settings optimisation; once your MySQL configuration is optimized, I'm confident this problem will go away for you and you will never see it again.

leymannx’s picture

That's what solved the issue for me:

  1. Edit settings.php to comment database credentials.
  2. Truncate all cache tables, the session table and the watchdog table.
  3. Uncomment database credentials in settings.php again.

Voilà. Site back.

Note to self: Shared hosting sucks.

I truncated the tables via PHPMyAdmin, which only became accessible again after commenting the database credentials in settings.php.
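
If it helps anyone, step 2 as plain SQL (run in phpMyAdmin's SQL tab) is essentially the following, assuming a stock Drupal 7 schema with no table prefix; add any extra cache_* tables your site has (SHOW TABLES LIKE 'cache%' lists them):

TRUNCATE TABLE cache;
TRUNCATE TABLE cache_block;
TRUNCATE TABLE cache_bootstrap;
TRUNCATE TABLE cache_field;
TRUNCATE TABLE cache_filter;
TRUNCATE TABLE cache_form;
TRUNCATE TABLE cache_menu;
TRUNCATE TABLE cache_page;
TRUNCATE TABLE cache_path;
TRUNCATE TABLE sessions;
TRUNCATE TABLE watchdog;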

joseph.olstad’s picture

Category: Bug report » Support request
joseph.olstad’s picture

Component: base system » mysql db driver
Status: Postponed (maintainer needs more info) » Fixed
Issue tags: -DrupalWTF

The provided answers are sufficient.

Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.