Please add support for thousands of files. On some Unix filesystems there is a limit of roughly 32K nodes per directory, so you can't store more files than that in one place. Allow some sort of limit setting beyond which the module automatically creates subdirectories and uploads files there.
So if /files is the file upload directory and 10000 is the given limit:
/files/10000 should have the first 10000 files
/files/20000 should have the next (10001-20000) files
/files/30000 should have the next (20001-30000) files
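The blocked layout above is easy to compute: the target subdirectory is the upper bound of the block the file's sequence number falls into. A minimal Python sketch (not Drupal code; `block_dir` and the 1-based file index are assumptions for illustration):

```python
def block_dir(file_index: int, limit: int = 10000) -> str:
    """Return the upload subdirectory for the Nth file (1-based).

    Files 1..limit map to /files/<limit>, files limit+1..2*limit
    map to /files/<2*limit>, and so on.
    """
    block = ((file_index - 1) // limit + 1) * limit
    return f"/files/{block}"
```

For example, `block_dir(1)` and `block_dir(10000)` both land in `/files/10000`, while `block_dir(10001)` lands in `/files/20000`.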
Comments
Comment #1
ajayg CreditAttribution: ajayg commented
If no limit is specified, it should just upload into the /files directory (if that is the chosen path), to stay compatible with the current behavior.
Comment #2
quicksketch commented
You can do something similar to this already: just use a token for the date or user in your file path, like "images/[year]-[month]-[day]". As long as your site is up for less than 87 years, you shouldn't hit the sub-directory problem. As far as I know, the 32K file-system limit is actually for *sub-directories* under a parent directory, not for the total number of files in a directory, so this might not even be a problem to begin with.
Comment #3
quicksketch commented
From http://en.wikipedia.org/wiki/Ext2:
So by creating subdirectories you may actually create a problem where none exists. I'd suggest a date-based directory structure anyway, just for clarity, if you're going to have such a huge number of files. If you insist on a blocked structure based on file count, you can write a module that provides a "[file-block]" token following the pattern you describe.
Comment #5
ajayg CreditAttribution: ajayg commented
I am trying to recall from memory, since I could not find a reference, that approaching 32K files in a directory used to be a performance issue. Even on the page you mentioned, here is a quote.
After some more searching, it seems ext3 (which is now the default on many Linux distributions) has better performance, and ext4 now has a 64K subdirectory limit.
Your suggestion of using tokens is the simplest and most flexible way to try different limits to suit your needs.
Has anybody tried this approach with a huge number of files? I am thinking this would be a crucial decision for content-intensive sites, like newspaper sites, which may need to store ever-growing articles and associated images.
Comment #6
quicksketchClosing after lack of activity.