The server is the latest stable BOA release on a Xen VPS, running Debian Squeeze with Percona.
I was attempting to upload some video files today using CKEditor/CKFinder and got to a file that was over 100MB. When I went to upload it, I got an I/O error and complaints about the file size being too large. After a bit of reading, it appeared to be both an Nginx and a PHP limitation.
I adjusted client_max_body_size in /var/aegir/config/nginx.conf, which seemed OK, then changed post_max_size and upload_max_filesize in /opt/etc/php.ini.
I then restarted Nginx and PHP-FPM and was away again: the files were uploading in CKFinder.
The problem is, it has been sitting on "Uploaded 100% Processing..." for about an hour now, which is unreasonable. I tried again and hit the same problem, even when left overnight. So something else is wrong.
Is there something I'm missing when it comes to uploading large video (*.flv) files using CKEditor/CKFinder in BOA, or is this not BOA-related at all?
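For reference, these are the two layers of limits involved; both Nginx and PHP have to allow the size, or the smaller one wins. A sketch of the relevant lines (512M is only an example value; the paths are the ones from my BOA install):

```
# /var/aegir/config/nginx.conf (http block) - Nginx's request body cap:
client_max_body_size 512m;

# /opt/etc/php.ini - PHP's caps; post_max_size should be >= upload_max_filesize:
post_max_size = 512M
upload_max_filesize = 512M
```

After changing either file, Nginx and PHP-FPM need a restart to pick the values up.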
Comments
Comment #1
omega8cc commented

You also need to raise the timeout limits, probably:

1. max_execution_time - it is hardcoded to 300 seconds, no matter how much RAM is available.
2. max_input_time - it is hardcoded to 300 seconds, no matter how much RAM is available.

When changed to a higher value, you also need to modify the php-fpm config file /opt/etc/php-fpm.conf. See:

<value name="request_terminate_timeout">300s</value>

there.

Note: Barracuda will *not* overwrite your changes made to php.ini or php-fpm.conf on the next upgrade if you touch the empty control file:

$ touch /opt/etc/custom.php.ini

Comment #2
omega8cc commented

This belongs in the Barracuda queue.
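To make the advice from comment #1 concrete, the two files change like this (600 is purely an illustrative value, not a recommendation; pick what your uploads actually need):

```
; /opt/etc/php.ini - raise both script timeouts together:
max_execution_time = 600
max_input_time = 600

; /opt/etc/php-fpm.conf must match, or php-fpm kills the request first:
<value name="request_terminate_timeout">600s</value>
```

Then touch the /opt/etc/custom.php.ini control file so Barracuda keeps the changes on upgrade, and restart PHP-FPM.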
Comment #3
snlnz commented

Priceless! Thanks for the tip, and sorry for the queue mix-up. Sometimes it's difficult to know which issue belongs where! :)
Comment #4
Jackinloadup commented

This is only partially related to the original question, but does BOA use the Nginx upload module? If not, are there any plans to, or any reasons not to?
The Nginx upload module I'm talking about is listed here
http://wiki.nginx.org/3rdPartyModules
and links here
http://www.grid.net.ru/nginx/upload.en.html
The module seems to work like so: Nginx catches the file upload and handles saving it to disk (/tmp/dir), and doesn't tell PHP about it until it's completed, so PHP doesn't have to handle the file stream during the upload, thus reducing memory and processor usage as well as allowing larger uploads.
I have not implemented this, so this is only information that I have read.
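For anyone curious what that wiring looks like, here is an untested sketch based on the module's documentation (the /upload and /backend locations and the store path are made-up names):

```nginx
location /upload {
    # Nginx writes the request body to disk itself...
    upload_store /tmp/nginx_uploads;
    upload_store_access user:rw;
    # ...then passes only the saved file's name and path to the backend:
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";
    upload_pass /backend;
}
```

The backend then reads the file from disk instead of receiving the upload stream.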
Thank you for the wonderful BOA system!
Comment #5
omega8cc commented

@Jackinloadup - we don't use this module. Instead, we use the upload progress module, and I think it is not possible to use both of them, while upload progress is probably the much more useful and expected feature. Feel free to open a separate feature request if you find a way to use both of them.
Comment #7
snlnz commented

I don't mean to re-open a fixed issue, but I'm experiencing a new issue that the above details don't seem to fix.
On a site with an Image field enabled on a content type, I try to upload images in the front end and get the error:
An unrecoverable error occurred. The uploaded file likely exceeded the maximum file size (100 MB) that this server supports.

If I disable the "Nginx Upload Progress" module then I just get:

The file could not be uploaded.

I've changed the timeout settings from 300s to 3000s, and the upload limits have been increased significantly by default since this issue was first posted, so it's neither of those two issues.
Nothing in the logs, and nothing in the Nginx or PHP-FPM configuration, explains why, or what is stopping the image uploads.
Any ideas?
Comment #8
omega8cc commented

Please open a new issue, but it can't be critical, imo :)
Also, make sure you have read existing related issues in the queue.
Comment #9
snlnz commented

Following up on this issue seemed to make the most sense.

Is there any documentation available for increasing server settings on a per-site basis?
I am considering putting some additional config in local.settings.php:
max_execution_time
max_input_time
Being able to notch these up bit by bit without affecting the whole server would be ideal.
While trying to do a batch import, I'm getting errors and randomly incomplete imports on BOA, but it works fine on my local Apache server with the above settings at 3000, so I'm assuming this is the problem.
What I don't want is to increase the global settings and overload the server and start crashing other sites.
Any suggestions?
Thanks in advance.
Comment #10
omega8cc commented

While you could override some PHP limits using local.settings.php, so per site, it will not help much: besides the limits in the system-level php.ini or site-level local.settings.php files, there are also system-level limits in the php-fpm configuration files, and those will kill the php-fpm process anyway. To get rid of this, you will still need to raise the limits the way the BOND script does. There is no workaround to define them per site only, so messing with local.settings.php is useless.

Comment #11

omega8cc commented

[EDIT] In fact, you could still keep the default limits in the system-level php.ini files and override them in the site-specific local.settings.php, plus raise the request_terminate_timeout value in the php-fpm configuration files - it may work.

Comment #12
snlnz commented

Thanks for your reply. I was under the impression the BOND script had been deprecated?
How high a timeout limit could we set before it becomes dangerous, without testing a notch at a time until things start breaking? It would be good to know in advance, prior to killing the server or interfering with other sites. :)
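If anyone tries the per-site route from comment #11, the override would presumably live in the site's local.settings.php, something like this (the site path and values are illustrative, and request_terminate_timeout still has to be raised in the php-fpm config separately):

```php
<?php
// sites/example.com/local.settings.php (hypothetical site, example values)
// These two directives are PHP_INI_ALL, so a runtime ini_set() works per site:
ini_set('max_execution_time', 600);
ini_set('max_input_time', 600);
// upload_max_filesize and post_max_size are PHP_INI_PERDIR and cannot
// be raised from here - they stay in the system-level php.ini.
```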
Comment #13
omega8cc commented

The BOND script has been updated recently and does its mission brilliantly again! :)
See: http://drupalcode.org/project/barracuda.git/commit/0acc8e1 (just note that it is now designed to work with BOA HEAD and upcoming BOA-2.0.4)
You must determine what limits are high enough for your use case and set them just a bit over your requirements.
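One way to pick "just a bit over your requirements": estimate the worst-case upload time from file size and client upstream speed, then add headroom. A quick sketch (all numbers invented):

```shell
# Worst case: a 500 MB file over a 1 MB/s client uplink.
FILE_MB=500
UPLINK_MB_PER_S=1
UPLOAD_SECS=$(( FILE_MB / UPLINK_MB_PER_S ))   # 500 seconds just to transfer

# Add ~50% headroom for slow disks and post-processing, and use the result
# as a floor for max_execution_time / max_input_time / request_terminate_timeout.
TIMEOUT=$(( UPLOAD_SECS * 3 / 2 ))
echo "$TIMEOUT"
```

This prints 750 for the numbers above, so something like 800s would be a defensible setting for that workload without going global-scale high.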
Comment #14
snlnz commented

I might hold off for 2.0.4 before I run BOND, then. Sounds promising!
We have been eagerly awaiting the new release! :):)
Comment #15
mattman commented

At the time of my writing this, this issue was closed. However, I wanted to add my personal setup/findings to make things clear for anyone else wanting to tweak Barracuda for uploading larger files, like video files.
If I can get one of the Omega devs to verify my information, that would be great. Uploading files larger than 100M is working for me. However, because I'm uploading through HTTPS, which Barracuda proxies, I don't get to see the nice progress bar from upload progress - that's the only thing left for me to fix/get working, if I can.
I've given the execution time more than I personally need because I have a 25G up pipe. However, I'm assuming that many connections may not have that kind of up speed, and therefore a full minute of time might be needed. Feedback on this would be helpful as well!
Here are the changes I make to upload files larger than 100M.
Of course, if you want these changes to persist, you have to modify your .barracuda.cnf so it does not overwrite the conf files on updates.
Comment #16
Chipie commented

@mattman: Thank you for posting your configuration. I had been looking for a solution to this problem for a long time.