The Media module has no valid tests that can run on the testbot (because the bot cannot handle versioned dependencies correctly), and it has recently started throwing errors on all of our patches, even though we only use the bot to answer "does this patch apply or not?"

Example: http://qa.drupal.org/pifr/test/330128

Comments

I was just looking at that test because, a few minutes ago, it was one of the first to run on a PHP 5.3.5 testbot. Had it already failed like this before this afternoon, or could it be related to the PHP 5.3.5 change? (I guess I should run it on a regular testbot, and then I'll know.)

Update: I ran it on 654 (not yet PHP 5.3.5) and it came out with the same result, so I expect this is a new regression.

Functionality changed in PIFR 6.x-2.12, with the commit at http://drupalcode.org/project/project_issue_file_review.git/commitdiff/8...

The goal was to detect cases where tests were expected but no results were returned. This modified the logic such that, if test files are found, we expect simpletest to detect tests; if it doesn't, we throw an error.

Unfortunately, in this case, we've got a legitimate scenario where there *are* test files, but no *valid* tests (due to the file_entity dependency).
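The heuristic described above, and why Media trips it, might be sketched roughly like this (in Python rather than PIFR's actual PHP; the function and argument names are hypothetical, not from the real codebase):

```python
def should_flag_missing_tests(test_files, detected_tests):
    """Sketch of the 6.x-2.11 heuristic: if the patch ships test files
    but simpletest detects no tests, flag an error."""
    return bool(test_files) and not detected_tests

# The human-error case the change was designed to catch:
# test files present, simpletest silently finds nothing.
print(should_flag_missing_tests(["foo.test"], []))              # True

# A normal module whose tests are detected: no error.
print(should_flag_missing_tests(["foo.test"], ["FooTestCase"])) # False

# Media's legitimate scenario looks identical to the error case:
# test files exist, but none are *valid* (file_entity dependency),
# so nothing is detected and the heuristic fires anyway.
print(should_flag_missing_tests(["media.test"], []))            # True (false positive)
```

The sketch makes the limitation obvious: from the bot's side, "invalid tests" and "tests lost through human error" are indistinguishable, since both reduce to "test files present, zero tests detected."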

I don't suppose it's enough to say that if you see "run-tests.sh reported no tests found", the testbot has at least proceeded far enough that you can assume the patch applied correctly, eh?

Except then it auto-marks everything as needs work. I can disable testing for Media module, but I don't want to unless it's a last resort.

@jthorson, you mean 6.x-2.11 in #2 , right? That's what's deployed.

Seems like everybody liked it the way it was...

Oooops ... yeah, 6.x-2.11. And the "needs work" is definitely not optimal.

If we can figure out a way to make everyone happy, the 'fix for the fix' will be in 6.x-2.12. ;)

As we discussed in IRC, we have seen broken things get committed to core through human error too many times. This change catches those cases and makes them very clear, so rolling it back is something I would like to avoid.

We are getting into territory that would be better handled by the new system, and it's hard to decide how far to take things now. If we can come up with a simple fix for the code that tries to detect whether tests should be expected, that would be great; it seems like we just need to take a look.