Server: Xen VPS running the latest stable BOA release on Debian Squeeze, with Percona.

I was attempting to upload some video files today using CKEditor/CKFinder and got to a file that was over 100 MB. When I went to upload it, I got an I/O error and complaints about the file size being too large. After a bit of reading, it appeared to be both an Nginx and a PHP limitation.

I adjusted client_max_body_size in /var/aegir/config/nginx.conf, which seemed OK, then proceeded to change post_max_size and upload_max_filesize in /opt/etc/php.ini.
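For reference, the Nginx side of that change is a single directive (the 200m value here is illustrative; set it to at least the largest file you expect to upload):

```nginx
# /var/aegir/config/nginx.conf
# Requests with a body larger than this are rejected outright with
# "413 Request Entity Too Large" before PHP ever sees them.
client_max_body_size 200m;
```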

I then restarted Nginx and PHP-FPM and I was away again; the files were uploading in CKFinder.
The problem is, the upload has been sitting on "Uploaded 100% Processing..." for about an hour now, which is unreasonable. I tried it again and hit the same problem when I left it overnight. So something else is wrong.

Is there something I'm missing when it comes to uploading large video (*.flv) files using CKEditor/CKFinder in BOA, or is this not BOA-related at all?

Comments

omega8cc’s picture

You probably also need to raise the timeout limits:

1. max_execution_time: hardcoded to 300 seconds, no matter how much RAM is available.

2. max_input_time: hardcoded to 300 seconds, no matter how much RAM is available.

When you change these to higher values, you also need to modify the PHP-FPM config file /opt/etc/php-fpm.conf.
See the <value name="request_terminate_timeout">300s</value> setting there.
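As a sketch, raising the PHP-side limits together might look like this (3600s is just an example value; pick something matched to your upload sizes and bandwidth):

```ini
; /opt/etc/php.ini -- both PHP-side timeouts raised in step
max_execution_time = 3600
max_input_time = 3600
```

and in /opt/etc/php-fpm.conf, the matching <value name="request_terminate_timeout">3600s</value>. If the PHP-FPM limit stays lower than the PHP limits, PHP-FPM will still kill the worker mid-request regardless of what php.ini says.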

Note: Barracuda will *not* overwrite your changes to php.ini or php-fpm.conf on the next upgrade if you touch the empty control file:

$ touch /opt/etc/custom.php.ini

omega8cc’s picture

Project: Octopus » Barracuda
Category: feature » support

This belongs to Barracuda queue.

snlnz’s picture

Priceless! thanks for the tip and sorry for the queue mix up. Sometimes it's difficult to know which issue belongs where! :)

Jackinloadup’s picture

This is only partially related to the original question, but does BOA use the Nginx upload module? If not, are there any plans to, or any reasons not to?

The Nginx upload module I'm talking about is listed here:
http://wiki.nginx.org/3rdPartyModules
and documented here:
http://www.grid.net.ru/nginx/upload.en.html

The module seems to work like this: Nginx catches the file upload, handles saving it to disk (e.g. a /tmp directory), and doesn't tell PHP about it until it's completed, so PHP doesn't have to handle the file stream during the upload. This reduces memory and processor usage, as well as allowing for larger uploads.
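For the curious, a minimal configuration for that module might look like the sketch below. This is purely illustrative (the location names and paths are assumptions), and BOA does not ship this module:

```nginx
# Hypothetical setup for the grid.net.ru upload module (not part of BOA)
location /upload {
    upload_pass   /upload-handler;     # internal location that receives the POST
    upload_store  /tmp/nginx_uploads;  # Nginx itself writes the file here
    upload_max_file_size 200m;

    # Hand the backend the on-disk path instead of the file body
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";
}
```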

I have not implemented this myself, so this is only information I have read.

Thank you for the wonderful BOA system!

omega8cc’s picture

Status: Active » Fixed

@Jackinloadup - we don't use this module. Instead, we use the upload progress module, and I don't think it is possible to use both of them; upload progress is probably the more useful and expected feature. Feel free to open a separate feature request if you find a way to use both.

Status: Fixed » Closed (fixed)

Automatically closed -- issue fixed for 2 weeks with no activity.

snlnz’s picture

Priority: Normal » Critical
Status: Closed (fixed) » Active

I don't mean to re-open a fixed issue, but I'm experiencing a new problem that the details above don't seem to fix.

On a site with an Image field enabled on a content type, I try to upload images on the front end and get the error:
An unrecoverable error occurred. The uploaded file likely exceeded the maximum file size (100 MB) that this server supports.

If I disable the "Nginx Upload Progress" module, then I just get:
The file could not be uploaded.

I've changed the timeout settings from 300s to 3000s, and the default upload limits have been increased significantly since this issue was first posted, so it's neither of those two issues.

Nothing in the logs and nothing in the nginx/php-fpm configuration explains why or what is stopping the image uploads.
Any ideas?

omega8cc’s picture

Priority: Critical » Normal
Status: Active » Closed (fixed)

Please open a new issue, but it can't be critical, imo :)

Also, make sure you have read existing related issues in the queue.

snlnz’s picture

Following up on this issue seemed to make the most sense.

Is there any documentation available for increasing server settings on a per-site basis?

I am considering putting some additional config in local.settings.php:
max_execution_time
max_input_time

Being able to notch these up bit by bit without affecting the whole server would be ideal.

While trying to do a batch import, I'm getting random errors and incomplete imports on BOA, but it works fine on my local Apache server with the above settings at 3000, so I'm assuming this is the problem.

What I don't want is to increase the global settings and overload the server and start crashing other sites.
Any suggestions?

Thanks in advance.

omega8cc’s picture

While you could override some PHP limits per site using local.settings.php, it will not help much. Besides the limits in the system-level php.ini or site-level local.settings.php files, there are also system-level limits in the PHP-FPM configuration files, and those will kill the php-fpm process anyway. To get around this, you still need to raise the limits the way the BOND script does. There is no way to define them per site only, so messing with local.settings.php alone is useless.

omega8cc’s picture

[EDIT] In fact, you could still keep the default limits in the system-level php.ini files, override them in the site-specific local.settings.php, and also raise the request_terminate_timeout value in the PHP-FPM configuration files - that may work.
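A sketch of that per-site override, assuming a standard Aegir-style local.settings.php (3600 is an illustrative value):

```php
<?php
// sites/example.com/local.settings.php -- per-site override (illustrative).
// max_execution_time is PHP_INI_ALL, so it can be raised at runtime.
ini_set('max_execution_time', 3600);
```

Note that max_input_time, upload_max_filesize and post_max_size are PHP_INI_PERDIR directives, so ini_set() at runtime has no effect on them; those still have to be raised in php.ini, and request_terminate_timeout in /opt/etc/php-fpm.conf must be raised to match, or PHP-FPM will kill the worker first.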

snlnz’s picture

Thanks for your reply. I was under the impression that the BOND script had been deprecated?

How high a timeout limit could we set before it becomes dangerous, without testing a notch at a time until things start breaking? It would be good to know in advance, prior to killing the server or interfering with other sites. :)

omega8cc’s picture

The BOND script has been updated recently and does its job brilliantly again! :)

See: http://drupalcode.org/project/barracuda.git/commit/0acc8e1 (just note that it is now designed to work with BOA HEAD and upcoming BOA-2.0.4)

You must determine what limits are high enough for your use case and set them just a bit over your requirements.

snlnz’s picture

I might hold off for 2.0.4 before I run BOND, then. Sounds promising!
We have been eagerly awaiting the new release! :):)

mattman’s picture

Version: » 6.x-2.x-dev

At the time of writing, this issue was closed. However, I wanted to add my personal setup/findings to make things clear for anyone else wanting to tweak Barracuda for uploading larger files like video files.

If one of the Omega devs could verify my information, that would be great. Uploading files larger than 100M is working for me. However, because I'm uploading through HTTPS, which Barracuda proxies, I don't get to see the nice progress bar from upload progress - that's the only thing left for me to fix if I can.

I've given the execution time more than I personally need because I have a 25G up pipe. However, I'm assuming that many connections may not have that kind of upload speed, and therefore a full hour of time might be needed. Feedback on this would be helpful!
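To put rough numbers on that (a back-of-the-envelope sketch, not anything BOA-specific): the wall-clock time an upload needs is simply file size divided by uplink bandwidth, and the timeout has to cover at least that:

```python
def upload_seconds(size_mb: float, uplink_mbps: float) -> float:
    """Seconds to push size_mb megabytes over an uplink of uplink_mbps megabits/s."""
    return size_mb * 8 / uplink_mbps

# A 200 MB file:
print(upload_seconds(200, 100))  # fast uplink: 16.0 seconds
print(upload_seconds(200, 1))    # slow 1 Mbit/s uplink: 1600.0 seconds (~27 minutes)
```

So a 3600s limit comfortably covers a 200 MB upload even on a 1 Mbit/s connection, while the default 300s would cut it off.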

Here are the changes I make to upload files larger than 100M.

# Increase the upload limits and times

vim /var/aegir/config/nginx.conf

client_max_body_size 200m;

vim /opt/local/etc/php53.ini

; 1 hour execution/input time
max_execution_time = 3600
max_input_time = 3600

; Increase max upload size - should match client_max_body_size
post_max_size = 200M
upload_max_filesize = 200M


vim /opt/etc/php-fpm.conf

<value name="request_terminate_timeout">3600s</value>

Of course, if you want these changes to persist, you have to modify your .barracuda.cnf so it does not overwrite the conf files on updates.

Chipie’s picture

@mattman: Thank you for posting your configuration. I had been looking for a solution to this problem for a long time.