Status:
Closed (fixed)
Project:
Update Status
Version:
5.x-2.0
Component:
Code
Priority:
Normal
Category:
Bug report
Assigned:
Reporter:
Created:
11 Jan 2008 at 02:06 UTC
Updated:
1 Feb 2008 at 01:21 UTC
Upon updating my Drupal installation to Drupal 5.6, the Administer -> Logs -> Available Updates page immediately claimed that 5.6 was insecure and recommended that I download Drupal 5.5 and earlier. Drupal 5.6 did not appear on the list of suggested installs on that page.
This may be one of those problems that goes away quickly by itself, once some database record updates itself after the new release. If that happens, perhaps we may regard this as a request to fix the new-version-release procedure to prevent such a delay in the future?
| Comment | File | Size | Author |
|---|---|---|---|
| #12 | 208687_update_status_info_ctime.patch | 3 KB | dww |
| #9 | 208687_update_status_info_mtime.patch | 2.68 KB | dww |
| #3 | 2008-01-10_1827.png | 31.19 KB | matt@antinomia |
Comments
Comment #1
mike booth commented:
Okay, problem apparently fixed as of thirty seconds ago.
Why the delay? Perhaps the reason is not important enough to care about and we can close this issue right now.
Comment #2
drutube commented:
I'm getting this too, only it's showing all the updates back to 5.1 as available. My assumption is that it thinks the site is back at 5.0 .. I don't know.
Comment #3
matt@antinomia commented:
Here's a screenshot for reference...
Comment #4
dww commented:
Yup, merlinofchaos and I just had a chance to clearly diagnose this problem in IRC with some folks that were seeing this. Here's the deal:
- your sites have the stale information about available updates from the last time your cron job checked (sometime within the last 24 hours).
- update_status now sees that your core version is 5.6
- when it's comparing the version you have with the data about available releases, it's confused, since it doesn't see your release anywhere in the available update data.
so, it warns you about every security release it saw, and tells you to upgrade to the latest release it knows about (5.5).
if you "check manually", all will be solved.
however, this is lame. update_status can do better. a few options:
A) it can compare the mtimes on the .info files. If any .info files have a newer date than the timestamp it last checked for available updates, it should automatically refresh.
B) if it can't find your existing release at all, it should automatically refresh and start over on the available update comparisons.
C) maybe both.
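Option A could be sketched roughly as follows. This is a hedged illustration against the Drupal 5 API, not the actual patch: the helper name `update_status_refresh_needed()`, the variable name `'update_status_last'`, and the `'info file'` key are assumptions for illustration.

```php
<?php
// Sketch of option A: force a refresh when any .info file is newer than
// the last successful fetch of available update data.
// NOTE: the function name, the 'update_status_last' variable, and the
// 'info file' array key are illustrative assumptions.
function update_status_refresh_needed($projects) {
  // Timestamp of the last check for available update data.
  $last_check = variable_get('update_status_last', 0);
  foreach ($projects as $project) {
    // filemtime() returns the file's last-modification timestamp.
    if (filemtime($project['info file']) > $last_check) {
      return TRUE;
    }
  }
  return FALSE;
}
?>
```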
Comment #5
dww commented:
(sorry about blasting the title -- it was an unfinished thought and i forgot to fix it when i submitted)
Comment #6
mike booth commented:
"if you 'check manually', all will be solved."
I could have sworn that I had tried that, several times, before I chose to risk embarrassing myself by putting up this issue.
But it's hard to say, since my personal blog doesn't constitute a nice consistent test bed for a bug like this. Perhaps I was looking at cached pages, or something.
Thanks for your help!
Comment #7
dww@mechfish: you might also have fallen prey to http://drupal.org/node/184418, which also complicates this problem.
Comment #8
Leeteq commented:
Subscribing.
Comment #9
dww commented:
This should do it, but I have a few open questions about this patch. First, let me explain what it's doing:
- In update_status_get_projects() we now cache the $projects array since there are now code paths where we'd hit that multiple times in the same page load. That's probably worth doing, regardless of the rest of this patch, so no worries here.
- In update_status_get_projects(), when we're parsing the .info files, we also call filemtime() and stash the file's modification time in '_info_file_mtime' in each info array. Doing it here saves us the grief of finding all the .info files again, which is an expensive operation (system_get_files_database(), etc).
- In update_status_get_available(), before we consider the cache, we first call update_status_get_projects() and iterate through all the project .info files, and make sure that none of them have an '_info_file_mtime' that's greater than the last time we checked for available update data. If we find a .info file newer than our available update data, we force a refresh of the data.
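The flow described above might look something like this. The function names come from the comment itself, but the bodies here are an illustrative sketch, not the committed patch; `update_status_refresh()` and the `'update_status_last'` variable are assumptions.

```php
<?php
// Sketch of the patch's flow (illustrative, not the actual diff).

// 1) While parsing each .info file in update_status_get_projects(),
//    stash the file's modification time alongside the parsed data:
$info = drupal_parse_info_file($file->filename);
$info['_info_file_mtime'] = filemtime($file->filename);

// 2) At the top of update_status_get_available(), before trusting the
//    cached data, compare every stored mtime against the last check:
$last_check = variable_get('update_status_last', 0);
foreach (update_status_get_projects() as $project) {
  if ($project['info']['_info_file_mtime'] > $last_check) {
    // A .info file is newer than our cached data: force a re-fetch.
    update_status_refresh();
    break;
  }
}
?>
```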
So, here are the open questions/problems:
A) If update_status_get_available() is invoked such that it's explicitly being asked not to refresh data, should we do all this mtime checking? The patch says 'yes'. There are only 2 places where we are explicit about not wanting to refresh: hook_requirements() (where we wanted to just print "no data" instead of forcing the refresh) and cvs_deploy.module (more on this below). However, in the case of a newer .info file, hook_requirements() is exactly where we most need to either do this forced refresh, or to just invalidate the cache and say we have no data at all. When merlinofchaos and I were discussing this bug in IRC, he was leaning towards not asking the admin to have to click their way out of this situation. And, there's another solution to the cvs_deploy case. So, for now, the patch always refreshes the available update data if we find a newer .info file.
B) Should we print a message when we hit this case? Does the admin even really need to know that update_status is being smart like this? All they care about is that they get accurate available update reports. Seems like noise (and would further break the D6 string freeze) if we added a message here.
C) Should we worry, inside update_status_get_available(), about infinite recursion? update_status_get_available() now calls update_status_get_projects(), which in turn invokes the hook to alter the .info files. Well, cvs_deploy's implementation of that hook used to call update_status_get_available() if it found a release with a version of "HEAD", so that it could try to resolve HEAD into whatever the current version string is of the release node pointing to HEAD. So, with *just* this patch, and an existing version of cvs_deploy installed, you kill your site with infinite recursion... yay. ;) There are three possible solutions to this:
C.1) cvs_deploy could be patched to just call cache_get() directly, instead of update_status_get_available(). This works fine, and is already done. This is probably the simplest, and least complicated, so it's therefore probably best.
C.2) Try to be fancy inside update_status_get_available() about preventing infinite recursion with a little "static $been_here" flag. Even if we did this, we'd still need to patch cvs_deploy to just get the data from cache, anyway, since it would end up with the empty array of available update data (thanks to the infinite recursion bail-out), and wouldn't be able to resolve HEAD into real versions, anyway. :(
C.3) Instead of calling update_status_get_projects() inside update_status_get_available(), find the mtimes on the .info files directly (would require another call to system_get_files_database(), etc). This seems crappy all the way around, since it makes the common case more expensive and complicated to let cvs_deploy be more careless.
So, the patch currently uses option C.1, and I'll just need to patch cvs_deploy. However, I was planning to patch both, anyway, and make 5.x-2.1 update_status require a new 5.x-1.1 version of cvs_deploy, to make [TODO - mtime issue link] work, anyway.
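Option C.1 amounts to something like the following inside cvs_deploy's hook implementation. This is a hedged sketch using Drupal 5's cache API; the cache ID `'update_status_info'` is an assumption for illustration.

```php
<?php
// Sketch of option C.1: read the cached available-update data directly,
// avoiding the call back into update_status_get_available() that caused
// the infinite recursion. The cache ID here is an assumption.
if ($cache = cache_get('update_status_info', 'cache')) {
  $available = unserialize($cache->data);
  // ... try to resolve a "HEAD" version string against $available ...
}
// If there's no cached data, cvs_deploy simply can't resolve HEAD on
// this page load, which is an acceptable degradation.
?>
```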
Comment #10
dww commented:
Note: option B from comment #4 is sort of a can of worms, which is why I chose to go with option A. By the time we're comparing available updates with what we have currently, it's way too late to try to refetch, etc. I suppose we could add some logic that if we don't find the currently installed version anywhere in the available update data, we mark the project in error, but that'd be a separate patch.
Comment #11
dww commented:
FYI: see http://drupal.org/node/209242 about fixing this in update.module in D6 core.
Comment #12
dww commented:
Better version of the patch that uses filectime() instead of filemtime(). See http://drupal.org/node/209242#comment-694389 for more about why.
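For context (the full reasoning is at the linked comment): filemtime() reports a file's last-modification time, which archive tools like tar typically preserve from when the release was packaged, so a freshly extracted .info file can carry an old mtime and defeat the check. filectime() reports the inode change time, which is set when the file is actually written to the local disk:

```php
<?php
// Hypothetical illustration: after extracting a release tarball, the two
// timestamps can differ. The path below is an example, not from the patch.
$file = './modules/update_status/update_status.info';
echo filemtime($file);  // may be the time the release was packaged
echo filectime($file);  // the time the file landed on this disk
?>
```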
Comment #13
dww commented:
Yay, #209242 landed in core. Given that this is nearly identical code, that I've tested it heavily, and that it seems no one else is going to review or test this, I just committed it to the DRUPAL-5--2 branch.
Ideally, some people would help get us closer to a 5.x-2.1 official release by testing the 5.x-2.x-dev version for a little while (I just cheated and kicked the packaging script to rebuild the tarball for that release after my commit). There are only a small handful of other changes that I think need to happen before we ship 5.x-2.1, but testing now would be welcome. Also, if you're really interested in beating on this code, you can setup your own test site for serving the release history files and change all sorts of stuff to see how things behave using the instructions I wrote over at http://drupal.org/node/210984.
Comment #14
dww commented:
Note: cvs_deploy users are safe if they update cvs_deploy to the end of the DRUPAL-5 branch again. See http://drupal.org/node/211232.
Comment #15
Christefano-oldaccount commented:
For me this problem persisted until the cache tables were manually cleared (using drush). Is it a good idea to roll out an update in the update_status.install to do this?
Comment #16
dww commented:
@christefano: The point of this patch is that the new code will automatically flush the cache if it notices newer .info files than the last check for available updates. So no, I don't think we need an update in update_status.install. As soon as someone installs the forthcoming 5.x-2.1 version of update_status, the new code will fire in hook_requirements(), notice that update_status.info is newer than the last check for available updates, and will invalidate the cache and refresh the data.
In D6 core, there's a hook to declare tables that should be cleared automatically when running update.php, and update.module now does so: http://drupal.org/node/210141. There's no such hook for D5 update.php, but now that this patch has landed, I don't really see any need for it.
Comment #17
Christefano-oldaccount commented:
Thanks for the explanation, dww. It's topnotch -- as is your work -- and makes sense even to someone pulling a 30-hour day.
Comment #18
Anonymous (not verified) commented:
Automatically closed -- issue fixed for two weeks with no activity.