A wonderful PHP bug.

Because we keep our process around to be nice to PHP, we eventually run out of file descriptors and become unable to launch tasks. When that happens we log the following to the task log:

proc_open(): unable to create pipe Too many open files backend.inc:237

The simple solution is to restart the queue runner, but maybe we should die or something. It's a bit nasty in any case.
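A sketch of what the "die or something" option could look like, assuming the runner launches tasks through proc_open() (the function name and messages here are made up): treat a failed proc_open() as fatal so a supervisor can restart the runner with a clean descriptor table.

```php
<?php
// Hypothetical task launcher: exit hard when proc_open() starts
// failing (typically EMFILE, "Too many open files"), instead of
// logging the same error forever. A supervisor (init, upstart,
// systemd...) is assumed to restart the daemon afterwards.
function launch_task(string $cmd): string {
    $spec = [
        0 => ['pipe', 'r'],  // stdin
        1 => ['pipe', 'w'],  // stdout
        2 => ['pipe', 'w'],  // stderr
    ];
    $proc = @proc_open($cmd, $spec, $pipes);
    if ($proc === false) {
        fwrite(STDERR, "proc_open failed, exiting so the supervisor restarts us\n");
        exit(1);
    }
    fclose($pipes[0]);
    $out = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($proc);
    return $out;
}
```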

Comments

steven jones’s picture

Sadly this PHP bug is still not fixed.

https://bugs.php.net/bug.php?id=47396

anarcat’s picture

Wait, so this happens even if the process re-execs itself? How can PHP resources (which I understand to be different from file descriptors) be inherited between system calls? That seems rather odd... It seems to me the exec should fix the problem, maybe this is happening because within an hour we run out of descriptors on busy task lists?

In this case a workaround could be #1191154: reload after N task instead of timeout... a proper solution would be to inspect the current resource ID, maybe based on the last fork or something...

See: http://www.php.net/manual/en/language.types.resource.php
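A sketch of both ideas above (thresholds and names are made up, not from the actual hosting_queued code): casting a resource to int yields its resource id, so the daemon could re-exec itself once either a task counter or the id itself gets close to the ceiling.

```php
<?php
// Made-up thresholds for the two workarounds discussed above.
const MAX_TASKS = 10000;            // "reload after N tasks"
const MAX_RESOURCE_ID = 2000000000; // stay well below 2^31

// Casting a resource to int yields its resource number, so the
// daemon can watch how far the internal counter has crept.
function should_restart(int $tasks_done, $resource): bool {
    return $tasks_done >= MAX_TASKS || (int) $resource >= MAX_RESOURCE_ID;
}

$fd = fopen('php://memory', 'r+'); // stand-in for real task work
if (should_restart(1, $fd)) {
    // Replace the process image; resource ids start fresh afterwards.
    pcntl_exec($_SERVER['argv'][0], array_slice($_SERVER['argv'], 1));
}
fclose($fd);
```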

steven jones’s picture

Well, I've not been able to confirm that this is exactly what was going on here, but I just stopped and restarted the daemon and all was well.

I've seen this on two different servers, but not on another, it seems really weird!

anarcat’s picture

I confirm this bug. I have written the following loop:

#! /usr/bin/php
<?php

echo "starting almost infinite loop to overflow PHP resources (file descriptors), press enter to continue or control-c to abort";
$stdin = fopen('php://stdin', 'r');
stream_set_blocking($stdin, TRUE);
fgets($stdin);

for ($i = -105; $i < PHP_INT_MAX; $i++) {
  $fd = fopen('/etc/fstab', 'r');
  if ($i % 10000 === 0) {
    echo "$fd\n";
  }
  if (!$fd || !fclose($fd) || $fd < 0) {
    // Parentheses matter: . and + share precedence in PHP.
    echo "failed to open file, stopping at iteration " . ($i + 105) . " fd: $fd, restarting";
    pcntl_exec($_SERVER['argv'][0]);
  }
}

It runs out of file descriptors after 2147480111 iterations on 64-bit Ubuntu Lucid. Running pcntl_exec() on the process fixes the issue, so I believe that restarting the daemon after X tasks should fix this problem.

anarcat’s picture

Notice how 2147480111 is just thousands away from 2147483648 (2^31), i.e. the point where a signed 32-bit counter overflows. Amazing that PHP doesn't bother to use 64-bit counters there...
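For reference, a quick check of the arithmetic above (assuming a 64-bit CLI build; note that `^` is bitwise XOR in PHP, so the power must be written `2 ** 31`):

```php
<?php
// In PHP, ^ is bitwise XOR; ** is exponentiation (PHP >= 5.6).
// On a 64-bit build PHP_INT_MAX is 2^63 - 1, so the 2^31 ceiling
// observed above must come from an internal 32-bit resource counter,
// not from the language's integer type.
var_dump(2 ^ 31);           // XOR, not a power: int(29)
var_dump(2 ** 31);          // int(2147483648)
var_dump((2 ** 31) - 3537); // int(2147480111), where fopen() started failing
var_dump(PHP_INT_MAX >= 2 ** 31); // bool(true) on 64-bit builds
```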

j0nathan’s picture

Title: PHP runs out of file descriptors » PHP runs out of file descriptors (Too many open files)
anarcat’s picture

workaround: restart the hosting queue runner, of course.

anarcat’s picture

Issue summary: View changes

interestingly enough: exec() is not necessarily *supposed* to fix that bug, unless file descriptors are marked as "close on exec" (FD_CLOEXEC) with fcntl:

http://stackoverflow.com/questions/1643304/how-to-set-close-on-exec-by-d...

also, to clarify here: the issue is not necessarily that the file descriptors are *actually* open! as well documented by the first link steven posted above, it's that resource ids are not reused, even after the underlying descriptor is closed. the site has since been deleted; an archive is available here:

https://web.archive.org/web/20140719023355/http://gnuvince.wordpress.com...

the gist of it is this:

$fd = fopen('/etc/passwd', 'r');
echo "$fd\n";
fclose($fd);

$fd = fopen('/etc/fstab', 'r');
echo "$fd\n";
fclose($fd);

And the program’s output:

$ php fds.php
Resource id #5
Resource id #6

Here’s the equivalent Python program for comparison purposes:

f = open('/etc/passwd')
print f.fileno()
f.close()

f = open('/etc/fstab')
print f.fileno()
f.close()

And the output:

$ python fds.py
3
3

this is just beyond. but it explains why pcntl_exec() actually fixes the issue.

i am at a loss for words. get me out of the PHP nightmare please.

ergonlogic’s picture

Status: Active » Closed (won't fix)

Nothing we can do about this here. The hosting_queued now restarts itself after a while, but that still sucks. I've re-implemented the queue daemon in Python here: https://github.com/GetValkyrie/skynet