I am +1 to implementing that as a compressing serializer decorator.

As for the concern that it is not worth compressing small payloads and that a decision needs to be made somewhere:

- The serializer can do:

public function encode($data) {
  $data = $this->serializer->encode($data);
  if (strlen($data) > $this->threshold) {
    $data = '1' . gzcompress($data, $this->compression_level);
  }
  else {
    $data = '0' . $data;
  }

  return $data;
}

public function decode($data) {
  $encoded = substr($data, 0, 1);
  $data = substr($data, 1);

  if ($encoded === "1") {
    $data = gzuncompress($data);
  }

  return $this->serializer->decode($data);
}

and that's it already.

Of course in a next step we could also abstract out a CompressorInterface, but that's not needed for a simple first start.
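As a concrete illustration, a minimal stand-alone sketch of such a decorator could look like the following. All class and method names here are illustrative assumptions, not core API; the prefix-byte scheme is the one described above.

```php
// Hypothetical interface and classes for illustration only.
interface SerializerInterface {
  public function encode($data): string;
  public function decode(string $raw);
}

// Trivial inner serializer based on PHP's native serialize().
class PhpSerializer implements SerializerInterface {
  public function encode($data): string {
    return serialize($data);
  }
  public function decode(string $raw) {
    return unserialize($raw);
  }
}

// Decorator: compresses payloads above a threshold, prefixing '1' for
// compressed and '0' for plain data so decode() can tell them apart.
class CompressingSerializer implements SerializerInterface {
  public function __construct(
    private SerializerInterface $serializer,
    private int $threshold = 1024,
    private int $compressionLevel = 9,
  ) {}

  public function encode($data): string {
    $raw = $this->serializer->encode($data);
    if (strlen($raw) > $this->threshold) {
      return '1' . gzcompress($raw, $this->compressionLevel);
    }
    return '0' . $raw;
  }

  public function decode(string $raw) {
    $flag = $raw[0];
    $payload = substr($raw, 1);
    if ($flag === '1') {
      $payload = gzuncompress($payload);
    }
    return $this->serializer->decode($payload);
  }
}
```

Because the decorator implements the same interface as the inner serializer, it can be swapped in wherever the plain serializer is used today.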

----

Originally I got this idea when I bumped into a shared hosting situation that had max_allowed_packet set to a value of around 2MB, while the contents of the serialized admin menu turned out to be just a bit larger than 2MB. The hoster refused to change this value, so I had to come up with a different solution: compress the data before storing it.

I did some tests on my local machine and it turns out that it is also faster for almost every size of cache data: writing to the cache is faster starting at around 100 to 500 KB, and reading the cache is about the same speed for (uncompressed) sizes of 5 KB and faster for larger sizes. This was a setup where the database is on the same machine as the web server. I did not test a situation where the database is on a different server, but expect it to be even faster there.

Advantages:
- Performance.
- Less database storage.
- Less network traffic.
- Less chance of running into the dreaded "MySQL Max Allowed Packet Size Exceeded" message.

Disadvantages:
- Slightly more complex cache internals.

Comment  File                  Size        Author
#27      1281408-27.patch      4 KB        Anonymous (not verified)
#11      1281408-10.patch      2.51 KB     fietserwin
#10      1281408-10.patch      2.59 KB     fietserwin
#1       compress-cache.patch  1004 bytes  fietserwin

Comments

fietserwin’s picture

Status: Active » Needs review
File: compress-cache.patch (1004 bytes, new)

Please find attached a patch that implements this, using:
- The field 'serialized' to also indicate whether the data has been compressed or not.
- A variable to set a threshold above which data is compressed (or to disable compression completely by setting it to an extremely high value).
- The gzcompress() and gzuncompress() functions, as they are the easiest to use and most widely available. Their availability is tested.
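Reusing the 'serialized' field to carry the compression flag amounts to treating it as a small bit field. A sketch of that idea (the constant names here are assumptions, not the patch's actual ones; a later reroll in this thread uses DatabaseBackend::CACHE_COMPRESSED):

```php
// Illustrative flag bits stored in the 'serialized' column.
const CACHE_SERIALIZED = 0x01;
const CACHE_COMPRESSED = 0x02;

// Combine the two booleans into one stored integer.
function cache_pack_flags(bool $serialized, bool $compressed): int {
  return ($serialized ? CACHE_SERIALIZED : 0)
    | ($compressed ? CACHE_COMPRESSED : 0);
}

// On read, test the individual bits.
function cache_is_compressed(int $flags): bool {
  return ($flags & CACHE_COMPRESSED) !== 0;
}
```

Existing rows with 0 or 1 in the column keep their old meaning, which is why no hook_update_N() is needed.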

Questions:
- Should the field be renamed to something like encoding?
- Should the variable be accessible via the UI?

thehong’s picture

Status: Needs review » Needs work

This feature should allow the user to disable it by putting $config['cache_compress'] = FALSE in settings.php.
A hook_update_N() is needed to flush the existing cache.

fietserwin’s picture

The hook_update_N() is not needed. Because I extended the usage of the field 'serialized', the code knows whether a stored entry is compressed or not. Existing entries will be handled correctly and will be compressed when they are refreshed (and are larger than the given threshold value).

I'm not sure about using $config. I use a variable 'cache_compress_threshold', which, if set to PHP_INT_MAX, disables compression. So this can be done with one lookup instead of two. That leaves the question whether we want to expose this variable in the UI. Going through the core code, I see lots of variables without UI exposure: locale_cache_length, taxonomy_maintain_index_table, cache_default_class, etc.

We can wait for the variables API, hoping it will be added to core, so we can describe all these variables, or we can add this variable to the current performance page.

Setting to "needs review" again, not to ignore your remark, but to attract more remarks on this subject.

fietserwin’s picture

Status: Needs work » Needs review
xjm’s picture

Status: Needs review » Needs work

Thanks for the patch! Interesting proposal. Here's some feedback on the patch itself:

+++ b/includes/cache.inc
@@ -401,7 +401,14 @@ class DrupalDatabaseCache implements DrupalCacheInterface {
+        // @todo: log message?

This could be a lot of messages, so let's remove this @todo.

Also, I love bitwise logic as much as the next gal, but let's rework the patch to be a little more legible. At the least let's add some inline comments explaining what's going on. :)

thehong’s picture

Compressing data means more processing, so I think we still need to allow users to disable this feature. The compression level should also be configurable.

wojtha’s picture

Nice patch, but I agree with @thehong. This should be optional, configurable (might be only via settings.php) and disabled by default.

+      if (!function_exists('gzuncompress')) {
+        // @todo: log message?
+        return FALSE;
+      }

You can check this in hook_requirements(). E.g. if (variable_get('cache_compress', 0) && !function_exists('gzuncompress')) { ...
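A hook_requirements() implementation along the lines @wojtha suggests might look like this. The module name, requirement key, and messages are hypothetical; the variable name follows the thread:

```php
/**
 * Implements hook_requirements().
 *
 * Hypothetical sketch: warns when cache compression is enabled but the
 * zlib functions needed to decompress cached data are unavailable.
 */
function mymodule_requirements($phase) {
  $requirements = array();
  if ($phase === 'runtime'
      && variable_get('cache_compress', 0)
      && !function_exists('gzuncompress')) {
    $requirements['cache_compress'] = array(
      'title' => 'Cache compression',
      'value' => 'zlib extension missing',
      'description' => 'cache_compress is enabled but gzuncompress() is unavailable, so compressed cache entries cannot be read.',
      'severity' => REQUIREMENT_ERROR,
    );
  }
  return $requirements;
}
```

This surfaces the misconfiguration once on the status report page instead of logging a message per failed cache read.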

claudiu.cristea’s picture

Interesting feature.

"Performance" is a pro? I think this needs more benchmarking.

thehong’s picture

Note that we have the /admin/config/development/performance form, and the site cache is not always DrupalDatabaseCache.

fietserwin’s picture

File: 1281408-10.patch (2.59 KB, new)

Here is a next version of the patch.

Changes:
- Move to /core reroll.
- Changed documentation of involved methods to comply with the new standards.
- Cleaner code.
- Use of constants in the bit logic. I hope it is now self-explanatory (#5).
- Removed todo: "fail" silently by returning FALSE ("there is no usable cached version" of the data you are requesting) (#5).
- Changed default for variable cache_compress_threshold to PHP_INT_MAX, so the default is now off (#6, #7).

Notes:
- This setting is configurable via settings.php (and, in the future, via the variable module?) (#7, #9).
- Compression level = 9: the idea of cached data is write once, read many, and in such cases 9 is always best (#6).
- It is implemented in the database cache class itself, so cache classes with other back ends are not touched by this change. As this is an implementation detail, this seems the correct place. Moreover, other cache back ends may not even have a metadata field like 'serialized'.

Unsure about:
- Using a form to change this highly technical variable.
- hook_requirements(): should we do that, and if so, in which .install file (I guess system.install)? Note that we are talking about a situation that will only occur after some sort of migration, and cache data should never be migrated.
- We could extend expire() to delete compressed items if the system does not support gzuncompress(). But that is the same situation, and the data will be overwritten directly after a failed lookup anyway.

fietserwin’s picture

Status: Needs work » Needs review
File: 1281408-10.patch (2.51 KB, new)

The previous patch seems wrong; here is a new try.

jared_sprague’s picture

We developed a module that does this without having to change core. Our cache_form table was growing by gigabytes per day, and our solution was to compress the cache data going into cache_form.

The beautiful thing about Drupal's caching design is that it's object oriented and easy to extend with different implementations. Because of this, I think this doesn't belong in core but in an extension of the default implementation. That's exactly what we did.

You can find the module here: http://drupal.org/sandbox/jared_sprague/1545886

This module is more flexible than the above patch because you can specify the specific bins you want to add compression to, unlike putting this in core, where it's compress-all-or-nothing.

Here are the features:
- Extends default database cache class, doesn't change core
- Can configure which cache bins you want to compress via settings.php
- Can configure the gzip level via settings.php
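In settings.php terms, the configuration the module describes might look like the following. These variable names are hypothetical illustrations, not the module's confirmed API; check the module's documentation for the real names:

```php
// settings.php -- hypothetical variable names for illustration only.
// Which cache bins get compressed:
$conf['compress_cache_bins'] = array('cache_form');
// gzip compression level (1 = fastest, 9 = smallest):
$conf['compress_cache_level'] = 6;
```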

Currently this module is being run on the Red Hat Knowledgebase here: https://access.redhat.com/knowledge

Just compressing cache_form reduced our binary log file sizes by 80%. After it has been production-tested for a few weeks, we're going to apply to get it promoted to a full project.

fietserwin’s picture

Good to read. Can you specify a threshold size below which you don't gzip? In my experience (performance-based) that is more important than the gzip level, as with caching you are basically in a write once, read many situation where 9 is always best. But cache_form may be the exception to the write once, read many rule, so making the level configurable does no harm.

If your project gets promoted, I think this issue can be closed as won't fix.

xjm’s picture

Issue tags: +needs profiling

anavarre queued 11: 1281408-10.patch for re-testing.

Status: Needs review » Needs work

The last submitted patch, 11: 1281408-10.patch, failed testing.

Version: 8.0.x-dev » 8.1.x-dev

Drupal 8.0.6 was released on April 6 and is the final bugfix release for the Drupal 8.0.x series. Drupal 8.0.x will not receive any further development aside from security fixes. Drupal 8.1.0-rc1 is now available and sites should prepare to update to 8.1.0.

Bug reports should be targeted against the 8.1.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.2.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

Version: 8.1.x-dev » 8.2.x-dev

Drupal 8.1.9 was released on September 7 and is the final bugfix release for the Drupal 8.1.x series. Drupal 8.1.x will not receive any further development aside from security fixes. Drupal 8.2.0-rc1 is now available and sites should prepare to upgrade to 8.2.0.

Bug reports should be targeted against the 8.2.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.3.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

Version: 8.2.x-dev » 8.3.x-dev

Drupal 8.2.6 was released on February 1, 2017 and is the final full bugfix release for the Drupal 8.2.x series. Drupal 8.2.x will not receive any further development aside from critical and security fixes. Sites should prepare to update to 8.3.0 on April 5, 2017. (Drupal 8.3.0-alpha1 is available for testing.)

Bug reports should be targeted against the 8.3.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.4.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

cburschka’s picture

Issue summary: View changes

Similar functionality would be good to have in the keyValueExpirable (and possibly even keyValue) storage, because the former stores serialized form data which can become very big when editing nodes with paragraphs.

Adding a compression step to the PhpSerializer service (maybe optional, with a configurable size threshold) would fix this problem everywhere serialized data is stored in the database.

ndobromirov’s picture

I would go with a compression service generally available in core (even if a fake one with no compression).
On top of that, have decorators for each subsystem that needs compression.
This way existing code should not need to change; everything is just tweaks to the container service definitions and plumbing.

Personally, I think that only cache entries should be stored in a compressed manner. Raw data should be kept raw for many reasons (maintainability, portability, durability). Drupal (8) is very thorough about what to cache (almost everything), so the origin data store is rarely the bottleneck when caches are warm.
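The compression-service idea, including the "fake one" used as a no-op default, can be sketched in plain PHP. The interface and class names are illustrative assumptions, not core services:

```php
// Hypothetical compression service contract.
interface CompressorInterface {
  public function compress(string $data): string;
  public function uncompress(string $data): string;
}

// The "fake" compressor: a pass-through, usable as the default service
// so that existing behavior is unchanged until a real one is swapped in.
class NullCompressor implements CompressorInterface {
  public function compress(string $data): string {
    return $data;
  }
  public function uncompress(string $data): string {
    return $data;
  }
}

// A real implementation backed by zlib's gzip format.
class GzipCompressor implements CompressorInterface {
  public function __construct(private int $level = 9) {}

  public function compress(string $data): string {
    return gzencode($data, $this->level);
  }

  public function uncompress(string $data): string {
    $out = gzdecode($data);
    if ($out === FALSE) {
      throw new \RuntimeException('Invalid gzip data.');
    }
    return $out;
  }
}
```

Per-subsystem decorators would then depend on CompressorInterface, and swapping NullCompressor for GzipCompressor becomes a container-definition tweak rather than a code change.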

Version: 8.3.x-dev » 8.4.x-dev

Drupal 8.3.6 was released on August 2, 2017 and is the final full bugfix release for the Drupal 8.3.x series. Drupal 8.3.x will not receive any further development aside from critical and security fixes. Sites should prepare to update to 8.4.0 on October 4, 2017. (Drupal 8.4.0-alpha1 is available for testing.)

Bug reports should be targeted against the 8.4.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.5.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

geek-merlin’s picture

Crosslinking working code and profiling:
Redis #2826332-7: Option to compress data

mikeytown2’s picture

Note that in D7 the https://www.drupal.org/project/apdqc module can do this with the default cache schema.

Per bin and global: you can choose to compress just serialized data or all data, and set the compression level. It uses the compressed data only if it is smaller (sometimes compression will bloat the size). Below is the relevant code from the module.

function apdqc_inflate_unserialize(&$cache) {
  $converted = FALSE;
  if (is_array($cache)) {
    $cache = (object) $cache;
    $converted = TRUE;
  }
  // If the data is compressed, uncompress it.
  if ($cache->serialized > 1) {
    $inflate = @gzinflate($cache->data);
    if ($inflate !== FALSE && $cache->data !== FALSE) {
      $cache->data = $inflate;
    }
    $cache->serialized -= 2;
  }
  // If the data is permanent or not subject to a minimum cache lifetime,
  // unserialize and return the cached data.
  if ($cache->serialized) {
    $data = @unserialize($cache->data);
    if ($cache->data === 'b:0;' || $data !== FALSE) {
      $cache->data = $data;
    }
  }
  if ($converted) {
    $cache = (array) $cache;
  }
}


  /**
   * Constructs a DrupalDatabaseCache object.
   *
   * @param string $bin
   *   The cache bin for which the object is created.
   */
  public function __construct($bin) {
...
    $this->compressed = $this->apdqcCacheCompressBin($bin);
    if ($this->compressed) {
      $this->compression_level = $this->apdqcCacheCompressionLevelBin($bin);
    }
...
  }

  /**
   * Checks if this cache bin is compressed or not.
   *
   * @param string $bin
   *   The cache bin name.
   *
   * @return bool
   *   TRUE if this cache bin is compressed.
   */
  protected function apdqcCacheCompressBin($bin) {
    $compress = variable_get('apdqc_cache_compress_' . $bin);
    if (!isset($compress)) {
      $compress = variable_get('apdqc_cache_default_compress', APDQC_CACHE_DEFAULT_COMPRESS);
    }
    return $compress;
  }

  /**
   * Checks if this cache bin is compressed or not.
   *
   * @param string $bin
   *   The cache bin name.
   *
   * @return bool
   *   TRUE if this cache bin is compressed.
   */
  protected function apdqcCacheCompressionLevelBin($bin) {
    $compress = variable_get('apdqc_cache_compression_level_' . $bin);
    if (!isset($compress)) {
      $compress = variable_get('apdqc_cache_default_compression_level', APDQC_CACHE_DEFAULT_COMPRESSION_LEVEL);
    }
    return $compress;
  }

  /**
   * Implements DrupalCacheInterface::set().
   */
  public function set($cid, $data, $expire = CACHE_PERMANENT) {
...
    $fields = array(
      'serialized' => 0,
      'created' => REQUEST_TIME,
      'expire' => $expire,
    );
    if (!is_string($data)) {
      $fields['data'] = serialize($data);
      $fields['serialized'] = 1;
    }
    else {
      $fields['data'] = $data;
      $fields['serialized'] = 0;
    }
    if ($this->compressed > 1
      || ($this->compressed && $fields['serialized'])
    ) {
      $deflate = gzdeflate($fields['data'], $this->compression_level);
      if (strlen($deflate) < strlen($fields['data'])) {
        $fields['data'] = $deflate;
        $fields['serialized'] += 2;
      }
    }
...
  }

Version: 8.4.x-dev » 8.5.x-dev

Drupal 8.4.4 was released on January 3, 2018 and is the final full bugfix release for the Drupal 8.4.x series. Drupal 8.4.x will not receive any further development aside from critical and security fixes. Sites should prepare to update to 8.5.0 on March 7, 2018. (Drupal 8.5.0-alpha1 is available for testing.)

Bug reports should be targeted against the 8.5.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.6.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

Andre-B’s picture

This issue is gold and really looks like something that could help us a lot. I did some investigating, and it seems there are currently two approaches being worked on to get something like this into core:

1) Change the database cache backend.
That is this issue, but the patch did not get much love in the past. Related: https://www.drupal.org/project/drupal/issues/2886405

2) Use a serializer service in the database cache backend, in order to be able to replace that serializer service with one that adds compression.

related issues:
https://www.drupal.org/project/drupal/issues/839444
https://www.drupal.org/project/igbinary
https://www.drupal.org/project/drupal/issues/2886405

I sincerely believe 2) is the way to go for core. But to get compression done quickly and without too many side effects, I am going to either reroll this patch or create a different database backend that extends the current one and adds compression.

Anonymous’s picture

File: 1281408-27.patch (4 KB, new)

By the way, here is the reroll.

+++ b/core/lib/Drupal/Core/Cache/DatabaseBackend.php
@@ -234,15 +250,21 @@ protected function doSetMultiple(array $items) {
+      if (strlen($fields['data']) > 1000 && function_exists('gzcompress')) {

1000 is not an optimal value, of course. I picked it at random to check how this would affect the number of APCu entries (spoiler: no effect, but the speed really improved).

PS:
For unprepared users who want to try the patch on production and then roll back: do not forget to clear the cache table in the database to restore the site. ;)

mile23’s picture

Version: 8.5.x-dev » 8.6.x-dev

Feature requests should be against 8.6.x. Patch still applies.

Andre-B’s picture

Here's a contrib module prototype (Different Cache Backend implementation): https://github.com/AndreBaumeier/module-compressed_cache

After some further testing with real world data this will become a proper module. Feedback is always appreciated.

module namespace: https://www.drupal.org/project/compressed_cache

nithinkolekar’s picture

Original issue summary

....shared hosting situation that had max_allowed_packet set to a value of around 2MB and the contents of the serialized admin menu turned out to be just a bit larger than 2MB. The hoster refused to change this value, so I had to come up with a different solution: compress the data before storing it.

Does that situation still exist, and is compressing the cache in the DB still needed?
I mean, with current SSDs (most hosting providers offer them), higher network bandwidth, etc., could we store it in a flat file (cache only), avoiding the DB hit completely?

ndobromirov’s picture

  1. +++ b/core/lib/Drupal/Core/Cache/DatabaseBackend.php
    @@ -162,8 +172,14 @@ protected function prepareItem($cache, $allow_invalid) {
    +      if (!function_exists('gzuncompress')) {
    +        // We cannot decompress the compressed cache data, ignore it.
    +        return FALSE;
    +      }
    +      $cache->data = gzuncompress($cache->data);
    

    Have this check in the constructor and prepare a NULL/FALSE callback as a placeholder in case gzuncompress() is missing.

    This way we will always call the callback and spare the function_exists() call.

  2. +++ b/core/modules/rest/tests/src/Functional/EntityResource/EntityResourceTestBase.php
    @@ -503,6 +504,9 @@ public function testGet() {
    +      if (($cache_item->serialized & DatabaseBackend::CACHE_COMPRESSED) !== 0) {
    +        $cache_item->data = gzuncompress($cache_item->data);
    +      }
    

    What if the function is missing...

  3. +++ b/core/lib/Drupal/Core/Cache/DatabaseBackend.php
    @@ -234,15 +250,21 @@ protected function doSetMultiple(array $items) {
    +      if (strlen($fields['data']) > 1000 && function_exists('gzcompress')) {
    

    Have this as a configurable thing. Avoid magic constants in code.

Andre-B’s picture

Does that situation still exist, and is compressing the cache in the DB still needed?
I mean, with current SSDs (most hosting providers offer them), higher network bandwidth, etc., could we store it in a flat file (cache only), avoiding the DB hit completely?

these are still true:

Advantages:
- Performance.
- Less database storage.
- Less network traffic.

Having something like Boost for Drupal 8 might be worth looking at for some, though.

Anonymous’s picture

#29: Thanks for this!

#31: You are right, but it is just a PoC. The need for changes in EntityResourceTestBase only demonstrates the BC break. I'm guessing the approach with the new backend that @Andre-B describes in #26.2 is more promising.

Andre-B’s picture

Have this check on the constructor and prepare a NULL / FALSE callback as a place-holder in case gzuncompress is missing

function_exists() calls are not cached. It is probably better to check that once in the constructor and store a boolean value to consult within cache set/get; there are plenty of cache calls happening.

Andre-B’s picture

some "benchmark" values:

on a page with 100k+ nodes, prefetching the entire page took the same amount of time with the alternative cache backend of compressed_cache module as with the standard database backend. Application itself has a HA-Setup with database server and web server being on two different servers. Server A has db + web, Server B has db + web, one of them is flagged as master server for all requests coming in. master-master replication happening.

So that's a good find: it's not slower. But here's what's more important in my opinion:

bin                       default backend (total MB)  compressed backend (total MB)  reduction
cache_data                 4385.48                      532.48                        87.9%
cache_dynamic_page_cache  12419.56                     4486.42                        63.9%
cache_page                20641.09                     5701.08                        72.4%
cache_render               7026.89                     3110.80                        55.7%
ndobromirov’s picture

Issue tags: +scalability

Based on the results above I am adding the scalability tag.
This will reduce data storage and RAM requirements for the DB on any D8 installation.

fabianx’s picture

Just my 2c:

Compression is always useful when there is more CPU time available than DB time, e.g. when scalability is DB-bound rather than CPU-bound.

D8 was for a long time CPU-bound, and the DB played only an insignificant role. This has changed thanks to PHP 7.2.

That said:

This should probably be based upon #839444: Make serializer customizable for Cache\DatabaseBackend instead of hardcoded ...

kim.pepper’s picture

You can also turn on mysql compression per table:

alter table cache_page ROW_FORMAT=COMPRESSED;

I haven't tested this, but I assume MySQL would be more efficient at it than PHP?

This would solve the disk usage on the db instance; however, it doesn't reduce the size of the payload between PHP and the db.

Andre-B’s picture

#38
This moves the burden of compression and decompression onto the database server, which is harder to scale than web servers.

fietserwin’s picture

And it doesn't solve the original reason I came up with this idea :) , but for those running into storage limits it might be a simple solution.

Version: 8.6.x-dev » 8.7.x-dev

Drupal 8.6.0-alpha1 will be released the week of July 16, 2018, which means new developments and disruptive changes should now be targeted against the 8.7.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

fabianx’s picture

Title: Compress cache data before storing it in the database » Add a compressing serializer decorator
Category: Feature request » Task
Priority: Normal » Major
Issue summary: View changes
Issue tags: -needs profiling
Parent issue: » #3014511: Increase performance and scalability of the cache subsystem

I am +1 to implementing that as a compressing serializer decorator.

As for the concern that it is not worth compressing small payloads and that a decision needs to be made somewhere:

- The serializer can do:

public function encode($data) {
  $data = $this->serializer->encode($data);
  if (strlen($data) > $this->threshold) {
    $data = '1' . gzcompress($data, $this->compression_level);
  }
  else {
    $data = '0' . $data;
  }

  return $data;
}

public function decode($data) {
  $encoded = substr($data, 0, 1);
  $data = substr($data, 1);

  if ($encoded === "1") {
    $data = gzuncompress($data);
  }

  return $this->serializer->decode($data);
}

and that's it already.

Of course in a next step we could also abstract out a CompressorInterface, but that's not needed for a simple first start.

fietserwin’s picture

Thanks for rephrasing the request and solution in a more D8 way.

It seems that gzip data should always start with the bytes \x1f\x8b\x08 (see e.g. https://en.wikipedia.org/wiki/Gzip), so I could leave out the '1' and '0' and just check for these bytes. However, gzcompress() does not add this header and footer data, so we should use gzencode() and gzdecode() (which happen to coincide with our own encode()/decode() methods :)).

public function encode($data) {
  $data = $this->serializer->encode($data);
  if (strlen($data) >= $this->threshold) {
    $data = gzencode($data, $this->compression_level);
  }
  return $data;
}

public function decode($data) {
  if (substr($data, 0, 3) === "\x1f\x8b\x08") {
    $decoded = gzdecode($data);
    if ($decoded !== FALSE) {
      $data = $decoded;
    }
  }
  return $this->serializer->decode($data);
}

This way, you can install it and start using it without clearing the cache, while also saving some string operations in the process.

Note: this might interfere with higher-level code that already compressed the data before storing it in the cache, but such code probably did not use gzencode(), just gzdeflate() or gzcompress().
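The header sniffing above works because gzencode() output always begins with the gzip member header (per RFC 1952: ID1 = 0x1f, ID2 = 0x8b, CM = 8 for deflate), while gzcompress() emits the headerless zlib format. A quick demonstration:

```php
$compressed = gzencode('payload', 9);
// gzip output starts with the three-byte member header.
// Note the double quotes: in PHP, single-quoted '\x1f' is a literal
// four-character string, not the byte 0x1f.
$is_gzip = substr($compressed, 0, 3) === "\x1f\x8b\x08";
// gzcompress() produces zlib (RFC 1950) data with a different header,
// so it would not be detected by this check.
$zlib = gzcompress('payload', 9);
$zlib_looks_like_gzip = substr($zlib, 0, 3) === "\x1f\x8b\x08";
```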

geek-merlin’s picture

#42: Totally agree. We have had exactly this in the redis module for some time now: #2826332: Option to compress data

Version: 8.7.x-dev » 8.8.x-dev

Drupal 8.7.0-alpha1 will be released the week of March 11, 2019, which means new developments and disruptive changes should now be targeted against the 8.8.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

Version: 8.8.x-dev » 8.9.x-dev

Drupal 8.8.0-alpha1 will be released the week of October 14th, 2019, which means new developments and disruptive changes should now be targeted against the 8.9.x-dev branch. (Any changes to 8.9.x will also be committed to 9.0.x in preparation for Drupal 9’s release, but some changes like significant feature additions will be deferred to 9.1.x.). For more information see the Drupal 8 and 9 minor version schedule and the Allowed changes during the Drupal 8 and 9 release cycles.

Version: 8.9.x-dev » 9.1.x-dev

Drupal 8.9.0-beta1 was released on March 20, 2020. 8.9.x is the final, long-term support (LTS) minor release of Drupal 8, which means new developments and disruptive changes should now be targeted against the 9.1.x-dev branch. For more information see the Drupal 8 and 9 minor version schedule and the Allowed changes during the Drupal 8 and 9 release cycles.

Version: 9.1.x-dev » 9.2.x-dev

Drupal 9.1.0-alpha1 will be released the week of October 19, 2020, which means new developments and disruptive changes should now be targeted for the 9.2.x-dev branch. For more information see the Drupal 9 minor version schedule and the Allowed changes during the Drupal 9 release cycle.

Version: 9.2.x-dev » 9.3.x-dev

Drupal 9.2.0-alpha1 will be released the week of May 3, 2021, which means new developments and disruptive changes should now be targeted for the 9.3.x-dev branch. For more information see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

Version: 9.3.x-dev » 9.4.x-dev

Drupal 9.3.0-rc1 was released on November 26, 2021, which means new developments and disruptive changes should now be targeted for the 9.4.x-dev branch. For more information see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

Version: 9.4.x-dev » 9.5.x-dev

Drupal 9.4.0-alpha1 was released on May 6, 2022, which means new developments and disruptive changes should now be targeted for the 9.5.x-dev branch. For more information see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

Version: 9.5.x-dev » 10.1.x-dev

Drupal 9.5.0-beta2 and Drupal 10.0.0-beta2 were released on September 29, 2022, which means new developments and disruptive changes should now be targeted for the 10.1.x-dev branch. For more information see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

Version: 10.1.x-dev » 11.x-dev

Drupal core is moving towards using a “main” branch. As an interim step, a new 11.x branch has been opened, as Drupal.org infrastructure cannot currently fully support a branch named main. New developments and disruptive changes should now be targeted for the 11.x branch, which currently accepts only minor-version allowed changes. For more information, see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

andypost’s picture

Version: 11.x-dev » main

Drupal core is now using the main branch as the primary development branch. New developments and disruptive changes should now be targeted to the main branch.

Read more in the announcement.