This information is from the code coverage report (see http://coverage.cwgordon.com/coverage).

We need to test:

1) Cache flushing.
2) Minimum cache lifetimes.
3) Cache clearing with minimum cache lifetimes.
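
As a rough illustration of item 1), a minimal flush test could look something like the sketch below. This is hypothetical code, assuming the Drupal 7 cache API with the default cache_lifetime of 0; the class name and cache ID are made up:

  class CacheFlushSketchCase extends DrupalWebTestCase {
    public static function getInfo() {
      return array(
        'name' => 'Cache flush sketch',
        'description' => 'Illustrative example only: a temporary item should be gone after a full wipe.',
        'group' => 'Cache',
      );
    }

    function testFlushTemporaryItem() {
      // Store a temporary item and confirm it can be read back.
      cache_set('sketch_cid', 'some value', 'cache', CACHE_TEMPORARY);
      $this->assertTrue(cache_get('sketch_cid', 'cache') !== FALSE, 'Item exists before the wipe.');
      // Wipe the temporary items in the bin; with cache_lifetime = 0 this
      // removes them immediately.
      cache_clear_all(NULL, 'cache');
      $this->assertFalse(cache_get('sketch_cid', 'cache'), 'Item is gone after the wipe.');
    }
  }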

Comments

cwgordon7’s picture

Assigned: cwgordon7 » Unassigned
Category: task » bug
Priority: Normal » Critical

Bumping to critical on behalf of chx.

cwgordon7’s picture

Bumping to critical on behalf of chx.

R.Muilwijk’s picture

I'm making tests for the expiry part

R.Muilwijk’s picture

File: 14.17 KB (new)

Here is my first attempt at the tests. Please comment on how to improve them. While writing these tests I found two bugs that make three assertions fail.

For two of the failures, an easy patch is provided:
http://drupal.org/node/277440

For the remaining one, we have to decide what to do:
http://drupal.org/node/277448

R.Muilwijk’s picture

Status: Active » Needs review
R.Muilwijk’s picture

Assigned: Unassigned » R.Muilwijk
dries’s picture

Status: Needs review » Fixed

I've made some minor changes to this patch and committed it to CVS HEAD. It might still be worth a review from someone else.

R.Muilwijk’s picture

Hmm, Dries, you should know I didn't have much real experience with testing until Drupal... so I'd really like some people to give feedback on the tests.

catch’s picture

Title: Tests needed: cache.inc » cache.inc test failures
Status: Fixed » Needs review
File: 6.92 KB (new)

There was a discussion on whether tests known to fail should be committed to core at http://groups.drupal.org/node/12051. Fairly inconclusive, but the general view was that they should stay in the issue queue until the bug is fixed, or need some special handling to make it clear what's going on. Since there's no special handling for known test failures in place, it's nearly impossible for anyone to keep track of what's supposed to pass or fail - which means a lot of people aren't running all tests when reviewing patches, and when they do, they have to ask one of about three people which are known failures and which aren't.

So here's a revert for that hunk - which could then be attached to http://drupal.org/node/277448 to demonstrate the bug.

pwolanin’s picture

Status: Needs review » Reviewed & tested by the community

yes please - much better to have the bug fix and the test in the same issue

dries’s picture

I'm not sure. It isn't necessarily a bad thing to see that something is broken. It actually encourages us to go and fix it. Why would we start hiding important bugs?

catch’s picture

Dries, that's what I thought too, but I was persuaded - at least the way things are set up at the moment.

There are two immediate wins for testing in terms of development:

1. Knowing if your own code broke something
2. testing.drupal.org helping to stop anyone's code breaking stuff.

For #1, developers need to know that the testing framework is reliable - this should mean 100% passes all the time unless they broke something.

At the moment, there are many reasons why a test might fail:

* Because there's an existing bug in core
* Because there's a bug in a test
* Because there's a bug in the test framework
* Because my particular development environment runs PHP as CGI, doesn't have clean URLs, has a bad cURL version
* Because I've CVS upped and Dries just committed some broken code
* Because I've CVS upped and Dries just committed a test which exposes some broken code
* Because I've written some code which exposes a deficiency in the test framework
* Because I've written some code which exposes a latent bug in a test (bad variable_set in a test which wasn't causing failures before - i.e. recent comment module patch)
* Because I've written some code which requires a test to be updated (string change or whatever)
* Because I've written some broken code and this is causing test failures

Anyone trying to write patches for Drupal core - especially new developers or people new to testing - needs to know that 99% of the time, fails are down to the latter two reasons.

Otherwise, I can be writing a patch, cvs up, and suddenly poll module's got 7 failures, or cache.inc's got 1 failure - and I have no way to know why it's broken. Even if I know where to look for core commit messages and how to apply patches, I'll have to go through each recent commit individually, find the patches, revert them one by one and run all tests to compare whether it broke or not (as I did for poll a couple of days ago to find out it was the content generator header which ought to have nothing to do with poll module). If they don't know where cvs logs and the issue queue are, or that we have a policy of committing tests that fail (and no-one knows if we do or not at the moment), they have no way to know at all. None of the failing tests have any kind of pointer back to the issues they relate to - one new failure (poll.test) had no issue associated with it until I spent about an hour tracking down the commit that triggered it.

Additionally, as I understand it, it'll be impossible to run testing.drupal.org unless core has 100% passes - and testing.drupal.org is essential for filtering out patches which break tests. There are over 350 patches at CNR or RTBC in the queue, many of which don't even apply any more. We have no way of knowing this without someone manually downloading the patch and trying it themselves - and there's a massive shortage of people who review patches. That's before we even get to checking whether they break tests or not. testing.drupal.org definitely can't know why tests fail - so it won't be able to accurately report back to the issue queue.

Since we have an issue tracker, with a list of critical bugs, this ought to be enough encouragement to fix something. If we could have a special group for 'known issues' - write test cases for those, and ensure the testing module (and testing.drupal.org) doesn't run them as part of 'run all tests' - then that might be a good option - especially if they also prominently point back to related issues. But that doesn't exist yet, so at the moment, unless you spend all day on the issue queue, watching commit logs and running all tests, passes and fails appear seemingly at random, and it's going to be very hard for people to 'embrace testing' when it appears so unpredictable.

boombatower’s picture

Agree with catch about trying to get/maintain 100% pass.

Not sure if we want a separate "group" of tests for known issues, as that would create a rather odd facet of the testing framework. Having tests that are intended not to pass just seems odd. Rather, I would vote for keeping the tests in the issue queue with the bug. Anyone interested in fixing the bug can apply the test patch and work until it passes. That also works for t.d.o, which will apply the test when it checks any patches for that bug.

The point of the tests is to ensure bugs are not introduced, and to do that they need to pass. Although having failing tests points out bugs, that isn't their purpose.

</mytwocents>

dries’s picture

Status: Reviewed & tested by the community » Needs work

Sorry, I think we should optimize for making Drupal work, rather than for making the tests pass.

catch’s picture

Status: Needs work » Postponed

Alright then, postponed until http://drupal.org/node/277448 is fixed

pwolanin’s picture

Status: Postponed » Needs work

@Dries - I have to agree strongly with catch on this - we need to normally have all tests pass so the existing tests can meaningfully and reliably detect regression and so that developers accept them as a normal part of development.

webchick’s picture

Rant mode: Engaged. ;)

Paraphrased from http://webchick.net/itch-of-the-week/fix-testing-crisis ... go read that for the whole long-winded thing:

Our #1 priority is to fix the existing tests so that they all pass. Without this, our testing framework is basically useless. If a developer new to testing can't trust that what's there now is working, how can they possibly know whether the code they just wrote broke because they did something wrong, or because something was broken before they got there? Every time there's a bug in the testing framework, and every time an existing core test fails to run, this serves to completely destroy developers' confidence. If we're lucky, these people will go work on other things for a little while, and then check back periodically to see if things have been sorted out yet. If we're unlucky, they start to develop animosity and resentment about the very idea of testing, and then start to distance themselves from doing development and encourage others to do the same. Either way, the end result is stalled development on D7, and fewer people to spread around the work of writing further tests, because now not only do you have to grok testing, you also have to be up on the current state of passing tests. You also don't know if your cache-related code broke something new or if it's ok to disregard the result because cache tests are known not to pass.

Additionally, this is a pre-requisite for the #1 (and realistically, only) way to keep the existing tests passing: testing.drupal.org.

When I originally wrote that post on May 24, we had more than half of our tests failing. A month later, on June 24th, we were down to 3. Joy, enthusiasm, and other such things ensued. Now, I see we're suddenly back up to 26. :( This makes me tear my hair out in chunks. And I already have too little of it as it is! ;)

We already have a means for indicating that things are broken and must be fixed before release. It's called category: bug report, priority: critical. Put the tests that knowingly fail inside the bug reports marked thus, and they'll get rolled in with the patches that fix them, so our tests stay at 100% passing at all times.

I've just been informed by chx that the awesome testing party has been accepted as a session at Drupalcon Szeged. There's two ways this could go:

#1: All tests pass. People who attend write their tests, instantly discover if they've introduced a bug, and are able to fix it. Testing experts remain on-hand for trickier questions related to testing, or for giving newbies extra guidance. You know, the stuff testing experts are meant to do. :)

#2: Some tests pass, some tests don't. People who attend write their tests, but are mystified whether the failures they get are because of something they did or something that's already broken, and there's no way for them to know without flagging down a testing expert. Testing experts then become inundated with requests to look at each test and see if it's because of a known failure or not. Mentoring doesn't happen, because all of our time is spent on assessment.

I know which one I'd prefer. ;)

catch’s picture

Status: Needs work » Needs review

While I covered most of my views on this in #12, I should note that this particular failing test is also based on an as yet undecided behaviour of the menu cache - see R.Muilwijk's initial posts in this issue, along with #277448: Cache system isn't doing CACHE_TEMPORARY like documented and #277448: Cache system isn't doing CACHE_TEMPORARY like documented - until the former is resolved, we don't even know if the test is testing desired behaviour or not yet (since the latter patch doesn't cause the test to pass - so both the code and the test might be wrong).

edit: there are also almost 1,000 bugs in the queue against Drupal 6 and 7 (with and without patches) - I'd rather see the tests for those in their respective issues than in core. Tests are great for preventing regressions, but as webchick pointed out, not a replacement for the issue queue.

boombatower’s picture

Once again, I agree with catch and webchick.

cwgordon7’s picture

I do not believe that new failing tests need to be committed in order to expose the bug - the bug has already been exposed by the test. Let's fix the bug and then commit the test, in order to uphold the image of our testing framework as reliable.

So, basically I agree with catch, pwolanin, webchick, and boombatower that this should be reverted in the absence of any other immediate way to make the tests pass. The proper procedure should be: 1) Submit a patch containing the test that fails. 2) Open a core issue to fix the cause of the test failures. 3) Cross-link between the issues. 4) Get the underlying fix committed. 5) Get the test itself committed.

webchick’s picture

Simpler workflow:

1) Submit patch containing test that fails.
2) Roll test into patch that fixes bug.
3) Commit fix and test at the same time.

cwgordon7’s picture

Status: Needs review » Needs work

That works too :)

Unfortunately, this reversion no longer applies, so cnw until it's rerolled...

cwgordon7’s picture

Status: Needs work » Needs review
File: 6.84 KB (new)

Rerolled.

boombatower’s picture

Status: Needs review » Reviewed & tested by the community

Applies, passes. Beautiful.

dries’s picture

Status: Reviewed & tested by the community » Fixed

I still disagree with this direction but I've committed the patch on popular demand.

R.Muilwijk’s picture

Status: Fixed » Needs work

I hope you guys now make the same effort to get these tests working right and committed again.

catch’s picture

Thanks Dries, this'll make it a lot easier to keep track of regressions. R.Muilwijk - do you think the test should be moved over to #277448: Cache system isn't doing CACHE_TEMPORARY like documented ?

R.Muilwijk’s picture

It's fine by me... though then I don't understand the reason for all the 'Tests needed: xxx' issues.

catch’s picture

There are two kinds of "Tests needed: xxx" issues.

1. Issues for completely untested bits of core discovered via the code coverage report. These were opened up en masse so we can track what needs to be done easily, and point people to one of them if they say "I want to write a test, where do I start?". The assumption is that the tests for these issues will pass in 99% of cases.

2. Tests for bugs which have already been fixed, but for which we want posthumous tests for the bug so it doesn't come back to life. These tests also ought to pass once written since a bug fix already went in.

What we're missing is a policy decision from Dries as to whether we commit tests which fail due to bugs in core without the associated bug fix - this issue is as close to one as we've got at the moment. Ideally, failing tests from 1. get their own critical issue for the bug itself, or get attached to an existing bug report that doesn't have a test yet, so we can track the bug and the test together.

pwolanin’s picture

@catch - I think the ideal (and perhaps informal policy) should be that the test and bugfix should be part of the same issue - so that we don't add failing tests and can keep the test and patch in sync.

boombatower’s picture

@pwolanin: That's been my hope.

BioALIEN’s picture

Fully agree with #30. Anything less would make it very confusing for us all.

R.Muilwijk’s picture

@catch: so, if I'm not mistaken: at the moment, when you write a test for one of the 'Tests needed: xxx' series and discover a bug, you post the working tests there, and for the one that uncovers a bug you open a new issue with the fix and the test in the same patch.

catch’s picture

Title: cache.inc test failures » Tests for cache.inc
Status: Needs work » Postponed

R.Muilwijk: yep, I'd add 'search for possible duplicates' before posting the new issue to that ;) but yeah I reckon that's the best workflow. Bit of a cross-post, but #280972: Add a new project issue status option: active (reproducible) would help those issues get more notice than just 'active' if there's no obvious patch.

I'm going to mark this postponed pending the other issue - then we can close it when the test + fix get in from there.

chx’s picture

Title: Tests for cache.inc » Tests for cache.inc need expected fails
Assigned: R.Muilwijk » chx
Status: Postponed » Needs review
File: 15.34 KB (new)

So Barry Jaspan whined about how it sucks that we can't commit tests that have known failures. #9 above says "Since there's no special handling for known test failures in place" -- well, now we have one. See $this->expectedFail()->assertCacheRemoved(t('Temporary cache with lifetime does not exists after wipe.')); -- easy! We can add expected fails which still let a test pass, and later, like during code freeze, we can focus on them and fix them. Just don't ship with expectedFail in place. That's one grep to run on the code, or a select on the simpletest table for some utility module.
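
For illustration, here is a minimal sketch of how such a chainable expectedFail() marker could be wired into a test case. This is a hypothetical reconstruction, not the code from the attached patch; the class name, flag, and assertion helper are invented, and it assumes the Drupal 7 SimpleTest and cache APIs:

  class CacheExpectedFailSketchCase extends DrupalWebTestCase {
    protected $expectFail = FALSE;

    // Mark the next assertion as a known failure; return $this so calls can
    // be chained, e.g. $this->expectedFail()->assertCacheRemoved('...').
    protected function expectedFail() {
      $this->expectFail = TRUE;
      return $this;
    }

    // Example assertion helper that consumes the flag. 'sketch_cid' is a
    // placeholder cache ID for the example.
    protected function assertCacheRemoved($message) {
      $removed = (cache_get('sketch_cid', 'cache') === FALSE);
      if ($this->expectFail) {
        $this->expectFail = FALSE;
        // Report a known failure as a (warning-level) pass instead of a fail.
        return $this->assertFalse($removed, $message . ' (expected fail)');
      }
      return $this->assertTrue($removed, $message);
    }
  }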

boombatower’s picture

I don't really see the usefulness. Having tests in issues avoids the extra bloat and maintains a simple workflow.

chx’s picture

File: 15.53 KB (new)

I added doxygen and changed the expected fail icon to a warning. But it's still a pass, so green. As it's so much easier to write a test which reveals a bug than to actually fix the bug, this is a very useful feature IMO. And I hope Barry will chime in to protect his feature request :)

cwgordon7’s picture

Boombatower - that was my initial thought as well, but the more I thought about this, the more sense it made. It's an easy way for us to distinguish assertions that actually fail in core from the assertions that you may have broken with your patch; it also means that tests that expose bugs in core can be committed while still maintaining the 100% test pass rate. However, in contrast to simply attaching the tests to the issues that fix the core bugs, we can now commit tests to core, where everyone will be aware of them (as they are marked 'expected fail') but not be confused by them (as these fails are 'expected', and not your fault).

I don't understand your complaint about extra bloat. What extra bloat? This patch? It is only a few extra lines of code. If you are referring to the tests themselves that are now getting committed, isn't that just what we want, many many more tests?

I disagree that the current workflow is any simpler than the proposed one here. "Attach failing tests to issues where the fails are corrected and don't commit them to core" is no simpler and, I would argue, much more unintuitive than "Just mark the tests that fail due to core bugs as an expected failure."

I am very much in favor of this patch.

bjaspan’s picture

The ability to declare expected failures is a standard property of mature testing systems. It's basically essential.

For the record, an expected failure that *passes* must be considered a full failure. Suppose that a bug exists. Someone writes a test and marks it as an expected failure. Later, someone fixes the bug. If the expected-failure test continues to pass, it is likely that no one will ever remove the "expected failure" marker. Now, if the bug is re-introduced, the test will continue to pass because it is still an expected failure. Instead, if the expected failure is reported as failing when the underlying test actually passes, someone will remove the expected failure indicator, and if the bug is re-introduced the test will correctly fail.
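
To make that rule concrete, here is a tiny hypothetical helper (the function name and return strings are invented, not from any patch) expressing how a marked assertion should be reported:

  // Combine the raw assertion result with the expected-fail marker into the
  // reported outcome.
  function _resolve_expected_fail($assertion_passed, $marked_expected_fail) {
    if (!$marked_expected_fail) {
      return $assertion_passed ? 'pass' : 'fail';
    }
    // Marked as an expected failure: still failing is tolerated (the suite
    // stays green), but passing must be reported as a failure so the stale
    // marker gets removed.
    return $assertion_passed ? 'fail' : 'expected fail';
  }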

I'll try to review the patch now.

bjaspan’s picture

FYI, I've found a few bugs in the expected failures patch. Also, at Karoly's suggestion, I will be re-posting the expected failures patch as a separate issue instead of tying it to the cache.inc tests. But I'm not done yet. I'll post that URL here when it exists.

bjaspan’s picture

Status: Needs review » Needs work

I've posted my new version of the expected failures patch at http://drupal.org/node/301005. The version in this issue's patch should be removed.

chx’s picture

Assigned: chx » Unassigned
Status: Needs work » Postponed

Moved the expected fail functionality to http://drupal.org/node/301048.

bjaspan’s picture

marcvangend’s picture

Component: tests » ajax system

Is there still something to do here for D7, or can this issue be closed?

rfay’s picture

Component: ajax system » other
damien tournoud’s picture

Title: Tests for cache.inc need expected fails » Fix the minimum cache lifetime feature
Status: Postponed » Needs review
File: 7.71 KB (new)

Here is the patch rerolled with just the failed tests, and not the whole surrounding expected fail feature. There are failures in there that we badly need to fix.

Status: Needs review » Needs work

The last submitted patch, 276267-cache-fail.patch, failed testing.

Letharion’s picture

Status: Needs work » Needs review

#46: 276267-cache-fail.patch queued for re-testing.

dstuart’s picture

I've tested this patch locally and can confirm the failures:

Temporary cache without lifetime valid after user cache expiration. Other cache.test 413 CacheExpiryCase->testTemporaryLifetime() Fail
Temporary cache with lifetime does not exists after wipe. Other cache.test 417 CacheExpiryCase->testTemporaryLifetime() Fail
Unit timestamp cache without lifetime deleted after expiration and wipe. Other cache.test 487 CacheExpiryCase->testUnixTimestampNoLifetime() Fail
Unit timestamp cache with lifetime deleted after expiration and wipe. Other cache.test 513 CacheExpiryCase->testUnixTimestampLifetime() Fail

Status: Needs review » Needs work

The last submitted patch, 276267-cache-fail.patch, failed testing.

bjaspan’s picture

I am looking into this. I am starting by trying to reverse-engineer the meaning of the cache_lifetime and cache_flush_$bin variables, $user->cache, and how all of them are used in deciding when to clear the cache. The fact that so far I'm very confused is not a good sign.

bjaspan’s picture

Here is what I have so far. I am not yet able to document the semantics of cache_flush_$bin because I do not understand them:

  /**
   * This class uses state stored externally:
   *
   * - variable cache_flush_$bin: If non-zero, the REQUEST_TIME
   *   at which a general cache clear for $bin was last requested. This
   *   variable is reset to 0 each time a TODO rather confusing set of
   *   conditions occur.
   * - variable cache_lifetime: The global minimum cache lifetime.
   *   CACHE_TEMPORARY items are not wiped until they are at least this old.
   * - $user->cache: The REQUEST_TIME at which a general cache clear for ANY
   *   bin was last requested during a page request for $user. If
   *   cache_lifetime is non-zero, $user->cache overrides the minimum cache
   *   lifetime by simulating that cache entries created before $user->cache
   *   (i.e.: before the most recent cache clear for any bin, even a different
   *   bin than the cache entry being read) do not exist.
   */

I now have a session to attend.

bjaspan’s picture

Okay, here's what I have now. I think this is correct and consistent:

  /**
   * This class uses the following state stored externally:
   *
   * - variable cache_lifetime: The global minimum cache lifetime, in
   *   seconds. When set, normal attempts to clear CACHE_TEMPORARY items (via
   *   clear(NULL) or garbageCollection()) are skipped until cache_lifetime has
   *   passed since the previous call to clear(NULL) for a bin. Note that this
   *   is subtly different from "the minimum lifetime for every item in the
   *   cache" unless the cache clear occurs every second. An item might
   *   have expired eons ago but will not be cleared until clear(NULL) or
   *   garbage collection is called twice at least cache_lifetime apart. This
   *   behavior means the whole cache system depends on the fact that
   *   clear(NULL) is called frequently.
   * - variable cache_flush_$bin: If non-zero, the REQUEST_TIME
   *   at which the current cache clear cycle for $bin started. When
   *   cache_lifetime is non-zero, purging of expired CACHE_TEMPORARY items
   *   during read-time garbage collection and clear(NULL) is skipped until
   *   cache_lifetime seconds have passed since the cache_flush_$bin cycle
   *   began.
   * - $user->cache: The REQUEST_TIME at which a clear(NULL) for ANY
   *   bin was last requested during a page request for $user. If
   *   cache_lifetime is non-zero, $user->cache overrides the minimum cache
   *   lifetime by simulating that cache entries created before $user->cache
   *   (i.e.: before the most recent cache clear for any bin, even a different
   *   bin than the cache entry being read) do not exist. This allows a user
   *   to see content changes caused by their own page requests faster than
   *   the minimum cache lifetime allows.
   */

bjaspan’s picture

Now that I understand what I documented above, the test failures make sense to me. It turns out that the tests are broken, but I also think there is a bug in cache.inc. Here's what I've found:

  protected function setupLifetime($time) {
    variable_set('cache_lifetime', $time);
    variable_set('cache_flush', 0);
  }

The cache_flush variable is now cache_flush_$bin.

+    $user->cache = isset($user->cache) ? $user->cache + 2 : time() + 2;

All the cache expiration logic in cache.inc is now based on REQUEST_TIME, not time(). Thus, all the logic in cache.test that uses time() and sleep() to affect caching behavior is wrong, and is what is causing at least some, if not most, of the failures. If we want to do time-based testing, we need to change the helper functions at the top of the file to use drupalGet() instead of calling the API directly.
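
As a minimal illustration of the mismatch (hypothetical snippet, not part of any patch; the cache ID is made up), within a single test-runner request REQUEST_TIME is a constant set at bootstrap, so sleep() cannot push an item past its expiration as far as cache.inc is concerned:

  // Set an item that expires one second from now, by REQUEST_TIME.
  cache_set('sketch_cid', 'value', 'cache', REQUEST_TIME + 1);
  // Wall-clock time passes...
  sleep(2);
  // ...but REQUEST_TIME has not changed in this request, so expiration checks
  // based on REQUEST_TIME still treat the item as fresh, even though time()
  // has moved past the expire timestamp.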

Alternatively, we need to decide that using REQUEST_TIME in cache.inc is actually wrong. Consider:

1. In a long-running process, I call cache_set() with a timestamp expiration time of 60 seconds.
2. The process runs for another hour.
3. cache_get() continues to serve the cache entry because REQUEST_TIME has not changed.

I know David Strauss is looking to making a long-running Drupal daemon, and I am interested in that as well (gotta get away from bootstrapping every page request!). Caching clearing will have to depend on clock time, not REQUEST_TIME, for that. However, D7 does not support long-running daemons, so this is not a D7 issue.

+    $this->assertCacheExists(t('Unit timestamp cache data without lifetime exists before wipe.'));

s/Unit/Unix/. This whole test case is one big cut-and-paste disaster.

Here is the actual bug I might have found in cache.inc. This is from DrupalDatabaseCache::garbageCollection():

    if ($cache_flush && ($cache_flush + variable_get('cache_lifetime', 0) <= REQUEST_TIME)) {
      // Reset the variable immediately to prevent a meltdown in heavy load situations.
      variable_set('cache_flush_' . $this->bin, 0);
      // Time to flush old cache data
      db_delete($this->bin)
        ->condition('expire', CACHE_PERMANENT, '<>')
        ->condition('expire', $cache_flush, '<=')
        ->execute();
    }

When the minimum lifetime has passed, cache items expiring before $cache_flush are deleted. This means cache items expiring before the FIRST call to cache_clear_all() are deleted.

This is from DrupalDatabaseCache::clear():

        elseif (REQUEST_TIME > ($cache_flush + variable_get('cache_lifetime', 0))) {
          // Clear the cache for everyone, cache_lifetime seconds have
          // passed since the first request to clear the cache.
          db_delete($this->bin)
            ->condition('expire', CACHE_PERMANENT, '<>')
            ->condition('expire', REQUEST_TIME, '<')
            ->execute();
          variable_set('cache_flush_' . $this->bin, 0);
        }

When the minimum lifetime has passed, cache items expiring before REQUEST_TIME are deleted. This means cache items expiring before the SECOND call to cache_clear_all() are deleted.

cache.inc should probably decide what to compare expiration time against and be consistent, unless there is a good reason for the difference in which case it should be documented.

I am not yet taking on rewriting the expiration tests for cache.inc; someone else probably can with this info. But if no one else does, at this point I know how to, so I can.

drunken monkey’s picture

Status: Needs work » Needs review

I re-implemented the tests using the information in #54. Three of them still fail (locally), but since I don't really know anything about the cache system (even after reading Barry's comments) I can't tell whether the patches or the cache system are broken. Just taking some coding off of people more versed in the cache system. ;)

drunken monkey’s picture

File: 11.58 KB (new)

Actually attaching the patch helps, of course …

Status: Needs review » Needs work

The last submitted patch, cache-fail.patch, failed testing.

moshe weitzman’s picture

Priority: Critical » Normal

After much discussion, what we have here are some proposed tests that don't work yet due to a normal priority bug. Downgrading.

olli’s picture

Status: Needs work » Needs review

#56: cache-fail.patch queued for re-testing.

Status: Needs review » Needs work

The last submitted patch, cache-fail.patch, failed testing.

  • Dries committed 53c395c on 8.3.x
    - Patch #276267 by R.Muilwijk: wrote tests for the caching API.  They...
  • Dries committed 7881dd1 on 8.3.x
    - Patch #276267 by cwgordon7, boombatower, catch, et al: remove failing...

  • Dries committed 53c395c on 8.4.x
    - Patch #276267 by R.Muilwijk: wrote tests for the caching API.  They...
  • Dries committed 7881dd1 on 8.4.x
    - Patch #276267 by cwgordon7, boombatower, catch, et al: remove failing...

  • Dries committed 53c395c on 9.1.x
    - Patch #276267 by R.Muilwijk: wrote tests for the caching API.  They...
  • Dries committed 7881dd1 on 9.1.x
    - Patch #276267 by cwgordon7, boombatower, catch, et al: remove failing...

Status: Needs work » Closed (outdated)

Automatically closed because Drupal 7 security and bugfix support has ended as of 5 January 2025. If the issue verifiably applies to later versions, please reopen with details and update the version.