Problem/Motivation

When normalizing collection requests we should be able to leverage the cached data from the normalization of the individual entities.

Proposed resolution

If a resource (entity) has already been normalized, reuse that normalization during the normalization process. Incrementally expand the cache entry for a resource (an entity), just like Render Cache already does (see #16).

See #108 for a more detailed explanation of the performance improvements; the table below is a summary.

Scenario   | Resp (avg) | Resp (max) | Resp (min)
No Cache   | 459 ms     | 817 ms     | 436 ms
With Cache | 191 ms     | 925 ms     | 174 ms

Remaining tasks

None.

User interface changes

N/A

API changes

None.

Data model changes

None.

Release notes snippet

JSON:API now caches resource (entity) normalizations, resulting in a typical 2⨉ speed-up. The improved performance is especially noticeable on responses with many resources (entities), typically when using lots of includes. Notably, using different combinations of sparse fieldsets does not cause different cache entries; all requests for the same resource use the same cache entry, regardless of the sparse fieldset requested.

Comment | File | Size | Author
#195 | 2819335-195.patch | 25.79 KB | Wim Leers
#195 | interdiff.txt | 1.41 KB | Wim Leers
#194 | 2819335-194.patch | 25.79 KB | Wim Leers
#194 | interdiff.txt | 1.29 KB | Wim Leers
#189 | descoped.txt | 54.15 KB | Wim Leers
#189 | 2819335-189.patch | 25.8 KB | Wim Leers
#188 | BetterTimeline.png | 420.6 KB | itsekhmistro
#187 | SerializeField.png | 159.89 KB | itsekhmistro
#187 | Timeline.png | 462.34 KB | itsekhmistro
#180 | 2819335--cache-normalizations--180.patch | 77.81 KB | e0ipso
#180 | 2819335--interdiff--177-180.txt | 1.37 KB | e0ipso
#177 | 2819335--interdiff--170-177.txt | 10.22 KB | e0ipso
#177 | 2819335--cache-normalizations--177.patch | 77.8 KB | e0ipso
#170 | 2819335--interdiff--165-170.txt | 10.1 KB | e0ipso
#170 | 2819335--cache-normalizations--170.patch | 79.04 KB | e0ipso
#169 | 2819335--cache-normalizations--168.patch | 77.49 KB | e0ipso
#169 | 2819335--interdiff--165-168.txt | 10.02 KB | e0ipso
#165 | 2819335--interdiff--158-165.txt | 58.5 KB | e0ipso
#165 | 2819335--cache-normalizations--165.patch | 69.73 KB | e0ipso
#158 | 2819335-144.patch | 15.06 KB | e0ipso
#151 | drupal-2819335-151-COMBINED_POC.patch | 43.01 KB | kristiaanvandeneynde
#144 | 2819335-144.patch | 15.06 KB | gabesullice
#144 | interdiff.txt | 4.08 KB | gabesullice
#138 | 2819335--cache-normalizations--137.patch | 14.85 KB | e0ipso
#125 | 2819335--interdiff--119-125.txt | 1.28 KB | e0ipso
#125 | 2819335--cache-normalizations--125.patch | 14.52 KB | e0ipso
#121 | 2819335--cache-normalizations--119.patch | 15.8 KB | e0ipso
#109 | 2819335--interdiff--105-109.txt | 13.71 KB | e0ipso
#109 | 2819335--cache-normalizations--109.patch | 15.59 KB | e0ipso
#108 | ab-with-cache--108.txt | 1.27 KB | e0ipso
#108 | ab-no-cache--108.txt | 1.27 KB | e0ipso
#108 | 2019-02-21 06-12-15.png | 180.16 KB | e0ipso
#105 | 2819335--interdiff--102-105.txt | 2.17 KB | e0ipso
#105 | 2819335--cache-normalizations--105.patch | 9.85 KB | e0ipso
#102 | 2819335--interdiff--98-102.txt | 7.94 KB | e0ipso
#102 | 2819335--cache-normalizations--102.patch | 12.02 KB | e0ipso
#98 | 2819335--cache-normalizations--98.patch | 12.43 KB | e0ipso
#81 | 2819335-80-experiment.patch | 22.41 KB | gabesullice
#81 | interdiff.txt | 20.17 KB | gabesullice
#78 | 2819335-78.patch | 19.15 KB | gabesullice
#78 | interdiff.txt | 5.81 KB | gabesullice
#75 | 2819335-75.patch | 19.49 KB | Wim Leers
#75 | interdiff.txt | 811 bytes | Wim Leers
#73 | 2819335-73.patch | 19.54 KB | Wim Leers
#73 | interdiff.txt | 2.43 KB | Wim Leers
#70 | 2819335-70.patch | 18.89 KB | Wim Leers
#70 | interdiff.txt | 15.25 KB | Wim Leers
#54 | 2819335-54.patch | 15.17 KB | jibran
#47 | 2819335-46.patch | 15.5 KB | Wim Leers
#47 | interdiff.txt | 4.45 KB | Wim Leers
#45 | 2819335-45.patch | 15.44 KB | Wim Leers
#45 | interdiff.txt | 1.3 KB | Wim Leers
#42 | 2819335-42.patch | 14.18 KB | Wim Leers
#42 | interdiff.txt | 3.18 KB | Wim Leers
#41 | 2819335-41.patch | 14.19 KB | Wim Leers
#41 | interdiff.txt | 1.21 KB | Wim Leers
#40 | 2819335-40.patch | 13.01 KB | Wim Leers
#40 | interdiff.txt | 804 bytes | Wim Leers
#39 | 2819335-39.patch | 12.25 KB | Wim Leers
#39 | interdiff.txt | 8.33 KB | Wim Leers
#29 | 2819335-28.patch | 8.98 KB | Wim Leers
#29 | interdiff.txt | 1001 bytes | Wim Leers
#27 | 2819335-27.patch | 8.98 KB | Wim Leers
#27 | interdiff.txt | 2.98 KB | Wim Leers
#25 | 2819335-25.patch | 8.32 KB | Wim Leers
#25 | interdiff.txt | 5.37 KB | Wim Leers
#23 | 2819335-23.patch | 5.52 KB | Wim Leers
#6 | custom-normalizing.png | 87.88 KB | steven.wichers

Comments

e0ipso created an issue. See original summary.

e0ipso’s picture

Status: Active » Postponed

After a conversation with @Wim Leers we decided to postpone this. Views can do this because it's incredibly complex: that complexity already includes the layers of abstraction needed to leverage such caching.

We don't have those layers of abstraction, and it's not worth adding them for this.

Wim Leers’s picture

Status: Postponed » Closed (outdated)

I think it's safe to close this — nobody has complained about JSON API performance. Let's see if it's an actual problem first — we can still open an issue then.

btully’s picture

We're seeing LOTS of performance issues with the normalization process for collections of 50 nodes. Leveraging a cache for entities that have already been normalized seems like a big win, especially since normalization seems to be a big bottleneck in our experience.

steven.wichers’s picture

Note that it's down to this code

      $jsonapi_doc_object = $serializer->normalize($data, $format, [
        'request' => $request,
        'resource_type' => $request->get('resource_type'),
      ]);

Which calls

  public function normalize($data, $format = NULL, array $context = []) {
    if ($this->selfSupportsNormalization($data, $format)) {
      return parent::normalize($data, $format, $context);
    }
    if ($this->fallbackNormalizer->supportsNormalization($data, $format)) {
      return $this->fallbackNormalizer->normalize($data, $format, $context);
    }
    return parent::normalize($data, $format, $context);
  }

and is ultimately responsible for nearly a million function calls split between it and child functions.

This is with what I would consider a fairly simple and typical content type. Only core fields are on the content type. There are entity reference (taxonomy term) fields but with minimal references. No nested fields. If I override renderResponseBody and do my own normalization the response times drop from 10-20s~ to 0.5-2s~.

I think many people likely have this problem but it goes unnoticed because of lax requirements or caching after the first request. In our case we have a strict performance target and caching is either minimal or unavailable, so this problem is brought front and center. I haven't dug into the normalization system, but my brainstorming of possible changes is below:

* Implement what this ticket discussed.
* Statically cache which fields support normalization, so there's only one check per request instead of checking over and over. The idea is to reduce needless, nested function calls (see the sketch after this list).
* Like the above, but couldn't this be done by entity_type:field_type instead of the individual field?
* Add the ability to normalize sparse fieldsets instead of always normalizing the entire node (runs counter to caching discussed here).
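For illustration only, here's a rough sketch of the second idea (the class and variable names are hypothetical, not existing jsonapi code): memoize the supportsNormalization() answer per data class, so the nested normalizer chain is only consulted once per data type per request instead of once per field value.

use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

/**
 * Hypothetical decorator: remembers which (class, format) pairs support
 * normalization, so the repeated supportsNormalization() calls collapse to
 * one check per data type.
 */
class MemoizingNormalizer implements NormalizerInterface {

  /** @var \Symfony\Component\Serializer\Normalizer\NormalizerInterface */
  protected $inner;

  /** @var bool[] Keyed by "class:format". */
  protected $supported = [];

  public function __construct(NormalizerInterface $inner) {
    $this->inner = $inner;
  }

  public function supportsNormalization($data, $format = NULL) {
    $key = (is_object($data) ? get_class($data) : gettype($data)) . ':' . $format;
    if (!isset($this->supported[$key])) {
      $this->supported[$key] = $this->inner->supportsNormalization($data, $format);
    }
    return $this->supported[$key];
  }

  public function normalize($data, $format = NULL, array $context = []) {
    return $this->inner->normalize($data, $format, $context);
  }

}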

steven.wichers’s picture

FileSize
87.88 KB

I've implemented a custom normalizer for the specific endpoints we need to be highly performant. The difference is fairly drastic.

Process for the results was:

  • Clear cache
  • Load endpoint that uses text-based searching and filters
  • Wait for it to fully load
  • Reload

Between the original and custom test, I flipped a toggle that lets me enable/disable the custom normalization on the fly.

Custom normalization comparison

steven.wichers’s picture

e0ipso’s picture

I concur with @Wim Leers; I don't think that is a widespread problem. However, I'm curious to learn about your patch. Would you be willing to post it here?

If I override renderResponseBody and do my own normalization the response times drop from 10-20s~ to 0.5-2s~.

Wim Leers’s picture

Status: Closed (outdated) » Postponed (maintainer needs more info)
Issue tags: +API-First Initiative, +Performance

Great research, @btully and @steven.wichers, thank you very much for posting it! 🙏 👏

#6 shows a 3.5-fold improvement. That's wonderful! But it's curious that the response size is different. Why is that? It seems like some things are no longer being normalized that should be normalized…

Agreed with @e0ipso that it'd be great to see this patch posted here.

Finally: can you share:

  • the JSON API URL used (which would allow us to see the query complexity)
  • the amount of content (entities) being queried?
  • how many includes on average per returned entity? (Otherwise I don't see how you could reach >15 KB response sizes unless the queried entity type has very many fields.)

steven.wichers’s picture

re e0ipso: I don't have a patch to share because I didn't make modifications to the jsonapi module itself. I decorated the jsonapi.resource_response.subscriber service to add my own functionality. In my class I check whether the request is for the specific endpoint whose performance we need to guarantee, and then call a custom document builder. My normalization grabs the fields from the entity and pulls the value depending on the field type. Ex:

...snip...
      case 'text':
      case 'text_long':
      case 'text_with_summary':
        if (!empty($value[0]['value'])) {
          $format = !empty($value[0]['format']) ? $value[0]['format'] : 'full_html';
          $result = [
            'processed' => \check_markup($value[0]['value'], $format),
          ];
        }
        break;

      case 'entity_reference':
        $result = isset($value[0]['target_id']) ? intval($value[0]['target_id']) : NULL;
        break;
...snip...

It's "generic" but definitely only tested and validated against our specific usage. I might be able to pull out some of the custom client information to share the rest in a gist, but I am sure there are different field types or other scenarios not being accounted for.

I haven't fixed any bug in jsonapi, only worked around it.

re Wim:

I cannot share the endpoint.

The response size is different because my normalization is tailored specifically to the project. For example, we don't need the original value of formatted text fields: the normalized value in my implementation only includes the "processed" key instead of both the original and the processed value. The other big difference is that we don't need the vocabulary added as a relationship, so it's not brought in. I believe the OOTB normalization adds that on its own. I've done a diff of the before/after to validate, and outside of those two things it's the same.

Number of entities being queried against: 39,076
Number of entities returned in a typical response: 50
Number of includes: will vary; let's say 4 on average, but with overlap (so the number of included items is lower).

However, the number of includes is not what bumps the response size up. For example, if I go to /jsonapi/node/{content type} with no filters/includes, the response size is ~350 KB. If I go to /jsonapi/node/{content type}?page[limit]=1, the response size is ~6 KB. I don't see how someone wouldn't reach >15 KB with any sort of body field being included.

I should also point out that even with liberal usage of sparse fieldsets the response times were dog slow. I believe that's because everything is still being normalized regardless of what is included in the response.

Wim Leers’s picture

I made a typo, I meant to write 150 KB 😅

steven.wichers’s picture

150KB makes more sense.

There are 18 fields on the content type, not including title and body.

9 are simple text fields/values (short/small values)
1 is a boolean
1 is a date
4 are formatted text (short/small values)
3 are multivalue text fields (short/small values, 0-3 values)

If I request everything but the body field then the response size is 70k. If I include the body then the response size is 250k+ (depends on article length).

Article length can vary and ranges from ~1 KB to 145 KB at the extreme. Out of 40k articles, only a few are near the extreme end. The average article length is 4,476 characters, which means 4.5 KB × 50 = 225 KB, plus 70 KB of other fields/overhead, for a rough estimated average response of 295 KB.

But that's not indicative of a real request we're making. We are using sparse fieldsets to restrict the returned data down to a few fields. We are including the taxonomy terms, but restricting them down to just their labels. If I limit the results to just one item with no includes but still including the taxonomy fields, it's 3KB. If we pretend that is a consistent size then we're still back at 150kb for 50 items.

I believe, however, that the entire set of entities (node + includes) still gets normalized regardless of whether we're requesting just the title or the entire thing. If we pretend I have a 1 MB body field, I am pretty sure that is getting normalized even if I don't want it in the response. I can only assume that means on a site with complex content (referenced nodes) this problem is greatly magnified.

Wim Leers’s picture

If you apply #2997600: Resolve included resources prior to normalization, should we consider this issue moot/solved, or is it still relevant?

Wim Leers’s picture

gabesullice’s picture

Version: 8.x-1.x-dev » 8.x-2.x-dev
Status: Postponed (maintainer needs more info) » Active

should we consider this issue moot/solved, or is it still relevant?

@e0ipso alluded to this in our API-First call. This is still relevant as a next-step in the overall performance epic. It's roughly mentioned in #2996637-8: [META] Improve collection and include performance.

Where I'd like to see this issue go is for EntityNormalizer to check a persistent cache for a normalized entity first.

I'm not sure if cache entries should be varied by sparse fields or not. @e0ipso seemed to think "no" and would like to normalize and cache all fields, then strip unwanted fields at runtime.

I think it depends on how the cache is filled.

If the cache is only filled on-demand, then I think it should vary by field set.

OTOH, a queue worker could normalize all entities on insert/update, warming the cache ahead of time. In that case, I would say it should not vary by field set.

Wim Leers’s picture

Issue tags: +D8 cacheability

I'd be happy to take this on. Not right now though, we have more important issues to tackle first. This is very similar to what I've had to do for Drupal core's render cache and Dynamic Page Cache.

gabesullice’s picture

e0ipso’s picture

I'm not sure if cache entries should be varied by sparse fields or not.

I think we should not vary per field set, because not varying increases the cache HIT ratio, even if there is an initial penalty to normalize all fields. This cache becomes important when other layers MISS; in those situations field set configurations will not be homogeneous. In other words, these cache entries will be useful for cross-pollination between unrelated requests that happen to touch the same entity.

Wim Leers’s picture

I think #18 is saying that a cached JSON:API resource should always include all fields. Then the same cache entry can be used for both the case of no sparse fieldset being specified as well as any sparse fieldset, because all that needs to happen is some key-value pairs need to be removed.

Correct?

e0ipso’s picture

Correct?

Correct.

Wim Leers’s picture

Thanks for confirming — I agree with you! 👍

Wim Leers’s picture

Wim Leers’s picture

Title: [FEATURE] Leverage partial caches during the normalization process » Resource (entity) normalization should use partial caching
Status: Active » Needs review
Issue tags: +needs profiling
Related issues: +#3022584: Consolidate and simplify NormalizerValue objects: introduce CacheableNormalization
FileSize
5.52 KB

#3022584: Consolidate and simplify NormalizerValue objects: introduce CacheableNormalization landed! Now we can use that infrastructure that it introduces.

My hunch in #22 was correct: the below PoC patch was written in ~30 minutes.

Let's test this: ab -c 1 -n 100 -H "Accept: application/vnd.api+json" http://d8/jsonapi/node/article with only 2 small articles. Dynamic Page Cache & Page Cache disabled.

HEAD
Concurrency Level:      1
Time taken for tests:   10.784 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      684200 bytes
HTML transferred:       595700 bytes
Requests per second:    9.27 [#/sec] (mean)
Time per request:       107.842 [ms] (mean)
Time per request:       107.842 [ms] (mean, across all concurrent requests)
Transfer rate:          61.96 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   100  108   7.5    106     154
Waiting:      100  108   7.5    106     154
Total:        100  108   7.5    106     155
With patch
Concurrency Level:      1
Time taken for tests:   8.632 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      684201 bytes
HTML transferred:       595700 bytes
Requests per second:    11.59 [#/sec] (mean)
Time per request:       86.318 [ms] (mean)
Time per request:       86.318 [ms] (mean, across all concurrent requests)
Transfer rate:          77.41 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    82   86   3.9     85     109
Waiting:       82   86   3.9     85     109
Total:         83   86   3.9     85     109

… and of course the benefit increases linearly; this is the same but with 4 articles:

HEAD
Concurrency Level:      1
Time taken for tests:   11.737 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      1216000 bytes
HTML transferred:       1128300 bytes
Requests per second:    8.52 [#/sec] (mean)
Time per request:       117.370 [ms] (mean)
Time per request:       117.370 [ms] (mean, across all concurrent requests)
Transfer rate:          101.18 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   111  117   5.7    116     149
Waiting:      106  113   5.6    112     144
Total:        111  117   5.7    116     149
With patch
Concurrency Level:      1
Time taken for tests:   8.758 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      1216000 bytes
HTML transferred:       1128300 bytes
Requests per second:    11.42 [#/sec] (mean)
Time per request:       87.583 [ms] (mean)
Time per request:       87.583 [ms] (mean, across all concurrent requests)
Transfer rate:          135.59 [Kbytes/sec] received

10 ms extra in HEAD (so about 5 ms per entity), but only 1 ms extra with patch. Without digging into details (which we should before committing this), this seems to be reducing the normalization cost per entity by 80% for warm caches.


This patch's caching is not impacted by using sparse fieldsets: one and the same cache item is used for both the absence of sparse fieldsets and any combination of sparse fieldsets. This means some work still needs to be done per entity, but that's a fine trade-off: for zero work, there's Dynamic Page Cache. But that only works for the specific URL that was requested, whereas the partial caching introduced here works for any URL where that entity appears.

Finally, thanks to #2997600: Resolve included resources prior to normalization, this also automatically works for includes. :)

Status: Needs review » Needs work

The last submitted patch, 23: 2819335-23.patch, failed testing. View results
- codesniffer_fixes.patch Interdiff of automated coding standards fixes only.

Wim Leers’s picture

Status: Needs work » Needs review
FileSize
5.37 KB
8.32 KB
  1. Fix CS violations.
  2. Inject cache backend.

Status: Needs review » Needs work

The last submitted patch, 25: 2819335-25.patch, failed testing. View results

Wim Leers’s picture

Status: Needs work » Needs review
FileSize
2.98 KB
8.98 KB

Clean-up.

Status: Needs review » Needs work

The last submitted patch, 27: 2819335-27.patch, failed testing. View results

Wim Leers’s picture

Status: Needs work » Needs review
FileSize
1001 bytes
8.98 KB

Oops :)

A sign it's getting late. More tomorrow!

One of the failing tests seems to be related to \Drupal\jsonapi\LabelOnlyEntity, and one seems to be failing due to a different order of the key/value pairs under relationships, but the actual values are identical.

Status: Needs review » Needs work

The last submitted patch, 29: 2819335-28.patch, failed testing. View results

gabesullice’s picture

What concerns me about this patch is that cold caches will be made slower when sparse field sets are involved. Why?

Because in order for this to work, when an entity is not in the cache, we need to normalize every field. In another issue, we showed that skipping complex fields (like a large node body field) can provide huge gains on cold-cache requests.

***I think it would be wrong to assume that most real-world requests that miss PC or DPC will still have a significant number of cached entity normalizations, so we should probably be more concerned about cold cache normalizations.***

This patch doesn't currently handle the fact that sparse field sets need to be ignored on cache miss, but when it does get to that, we must not forget that scenario.

What I propose as a solution is to actually skip setting a cache value if a sparse fieldset is requested. In that case, we should create a normalization queue and insert that entity as a queue item instead of setting it in cache. On Kernel::TERMINATE, we can then process that queue and normalize it with no skipped fields.
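A tiny sketch of what that could look like (the queue name and function are invented for illustration; the queue API calls themselves are standard Drupal):

/**
 * Hypothetical: when a sparse fieldset was requested, defer the
 * "normalize all fields" work to a queue processed after the response.
 */
function jsonapi_queue_full_normalization(\Drupal\Core\Queue\QueueFactory $queue_factory, \Drupal\Core\Entity\EntityInterface $entity) {
  // A queue worker plugin (not shown) would reload the entity, normalize
  // every field, and write the result into the normalization cache bin.
  $queue_factory->get('jsonapi_full_normalization')->createItem([
    'entity_type' => $entity->getEntityTypeId(),
    'uuid' => $entity->uuid(),
  ]);
}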

Aside: with this feature, a nice contrib module could be made to preemptively do entity normalizations for entities that haven't even been requested.

gabesullice’s picture

+++ b/src/Normalizer/EntityNormalizer.php
@@ -109,22 +122,59 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
-    foreach ($this->getFields($entity, $bundle, $resource_type) as $field_name => $field) {
-      $in_sparse_fieldset = in_array($field_name, $field_names);
-      // Omit fields not listed in sparse fieldsets.
-      if (!$in_sparse_fieldset) {
-        continue;
-      }
+    foreach ($this->getFields($entity, $resource_type->getBundle(), $resource_type) as $field_name => $field) {

This is the optimization we lose on cold caches.

Wim Leers’s picture

Indeed, that is something we lose in the current patch. I should've mentioned that in #23.

What I propose as a solution is to actually skip setting a cache value if a sparse fieldset is requested.

+1

In that case, we should create a normalization queue and insert that entity as a queue item instead of setting it in cache. […]

Is it really worth that extra complexity? We could choose to simply only create a cache entry when no sparse fieldset is specified. That means that if an entity is only ever requested with a sparse fieldset, we never create a cache item. But the benefit in that case is already lower.

I'll do some benchmarking/profiling tomorrow to check whether the more complete yet more complex approach you propose is worth it.

gabesullice’s picture

Is it really worth that extra complexity?

I thought of this too and I agree it is a lot of extra complexity. I worry that without it, there won't actually be a benefit though. I guess benchmarks are what we really need (of course, it will be hard to simulate real-world usage).

We could choose to simply only create a cache entry when no sparse fieldset is specified. That means that if an entity is only ever requested with a sparse fieldset, we never create a cache item. But the benefit in that case is already lower.

I'll do some benchmarking/profiling tomorrow to check whether the more complete yet more complex approach you propose is worth it.

It's probably also worth considering caching per sparse field set... we should ask ourselves: how diverse are field sets in practice? For example, how often do we believe that terms are actually fetched with anything more than ?fields[taxonomy_term--tags]=name? How variable are field sets? Perhaps there are only 2 or 3 variations (how many View Modes do most content types have?).


Finally, I think we need an answer to this before we commit something:

By what mechanism will this cache be warmed so that it will be impactful when [Dynamic ]Page Cache is cold?

The only scenario that seems plausible to me is collections that are serving a Views-like component with "exposed filters". Am I missing another?

e0ipso’s picture

This is the optimization we lose on cold caches.

I am convinced that this is an acceptable trade-off. The extra work on cold caches will pay off in subsequent requests that:

  1. Include that entity
  2. Request that entity with a different sparse fieldset combination

Additionally (and probably most importantly) this will allow the creation of jsonapi_boost which can cache warm the API. This would not be possible if we vary the cache entry by the fieldset combination or skip caching on the presence of sparse fieldsets.


We could, alternatively, cache field (and link) normalizations.

Wim Leers’s picture

I am convinced that this is an acceptable trade-off. The extra work on cold caches will pay off in subsequent requests

This was my thinking too. :)

We could, alternatively, cache field (and link) normalizations.

This would lead to a lot of I/O, easily hundreds or even thousands of cache retrievals per request. I don't think that's wise.


Thanks to the feedback from both of you, I realized there is a middle ground to have the best of both 😄

It will be slightly more complex than the current patch, but far less so than the queue approach (which also involves redoing work that was already done), would not make cold caches slower (i.e. zero regression compared to HEAD's sparse field set optimization), and still only needs one cache item per entity instead of one per sparse field set combination per entity.

It's an approach similar to what I already worked on for Drupal core's RenderCache: on a cold cache, compute exactly what was requested. For fields that are omitted, generate a FALSE entry (meaning: "this was not yet computed") and cache that. Subsequent requests with the same field set will not have to do any computations. But requests with a different field set will run into some fields that are FALSE. Those we then compute (still saving quite a bit of time, because we only need to normalize a few fields instead of all) and … we then update the cache item.

In other words: we grow the cache item for a given resource (entity). If for a given resource only 2 fields are ever requested, then only those 2 are ever normalized (and cached). If for another resource sometimes 3 fields are requested, sometimes 17, and sometimes all, then they all end up being normalized and cached, eventually.
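A minimal sketch of that idea, stripped of the RenderCache plumbing the actual patch uses (function and variable names are hypothetical):

/**
 * Hypothetical helper: reuse cached field normalizations, compute only the
 * requested fields that are still FALSE/missing, and grow the cache item.
 */
function get_or_grow_cached_fields(\Drupal\Core\Cache\CacheBackendInterface $cache, $cid, array $requested_fields, callable $normalize_field) {
  $cached = $cache->get($cid);
  $fields = $cached ? $cached->data : [];

  $updated = FALSE;
  foreach ($requested_fields as $field_name) {
    // FALSE (or absence) means "not normalized yet"; anything else is reused.
    if (!isset($fields[$field_name]) || $fields[$field_name] === FALSE) {
      $fields[$field_name] = $normalize_field($field_name);
      $updated = TRUE;
    }
  }

  if ($updated) {
    // Grow the cache item so later requests with other sparse fieldsets can
    // reuse the work done here.
    $cache->set($cid, $fields);
  }

  // Only return what this request asked for.
  return array_intersect_key($fields, array_flip($requested_fields));
}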

Thoughts? :)

e0ipso’s picture

This would lead to a lot of I/O, easily hundreds or even thousands of cache retrievals per request. I don't think that's wise.

I agree. Just a passing thought.


I am not convinced that the increase of complexity pays off. Would you consider implementing the less complex version of this, and make the evolvable cache entry feature a follow-up that we can discuss separately?

Wim Leers’s picture

Would you consider implementing the less complex version of this, and make the evolvable cache entry feature a follow-up that we can discuss separately?

That was my intent all along, ever since I posted that first patch in #23. But … that does mean that we'd technically regress in the meantime: it'd remove the performance optimization that #3011099: Only serialize sparse_fieldset fields in \Drupal\jsonapi\Normalizer\EntityNormalizer::normalize() introduced!

So what I propose is that I first get the current patch (#23–#29) green.

And then, still in this issue, I post an iteration that implements #36. We can then see how we feel about it. Because I think (and am hopeful) that the added complexity you're "not convinced … pays off" may end up not being very complex at all. But we'll have to see.

If that works out, we'd not regress performance for any case. :)

Wim Leers’s picture

So, let's first make #23–#29 green. In the overly simplistic approach I've implemented so far, I'm not yet varying by bubbled cache contexts … which is of course essential. Otherwise this would serve all users the results that the first user to access this entity via JSON:API got. That's of course a massive security hole.

This is complex only because #2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way hasn't happened yet. But that doesn't need to stop us. Dynamic Page Cache uses this exact same approach (see \Drupal\dynamic_page_cache\EventSubscriber\DynamicPageCacheSubscriber). In fact, I've copy/pasted the code from there and adjusted to this use case. The complexity is contained: it lives only in those 3 methods that this is adding.

Wim Leers’s picture

Fix the ::testPatchIndividual() failures.

Wim Leers’s picture

I know for a fact that \Drupal\Tests\jsonapi\Functional\UserTest::testGetMailFieldOnlyVisibleToOwner() will still fail like it did in #29. However … that's caused by this core bug, for which I filed a patch: #3026264: UserAccessControlHandler::checkFieldAccess() denies access to mail field without varying by the ‘user’ cache context 😬

Wim Leers’s picture

Yay, #39's testing completed, and it resulted in a big step forward! Sadly, CS violations make the result harder to see. NodeTest and others are down to a single failing test, as expected: ::testPatchIndividual(). And that's what #40 is fixing.

Let's fix CS violations so that we can see the results more clearly.

The last submitted patch, 40: 2819335-40.patch, failed testing. View results
- codesniffer_fixes.patch Interdiff of automated coding standards fixes only.

The last submitted patch, 41: 2819335-41.patch, failed testing. View results
- codesniffer_fixes.patch Interdiff of automated coding standards fixes only.

Wim Leers’s picture

I think this patch might be green :)

The last submitted patch, 42: 2819335-42.patch, failed testing. View results

Wim Leers’s picture

And it is green! 🎉


So let's build on top of this now: let's see what implementing #36 looks like. (#36 would address the concerns about deteriorating performance in the cold cache scenario.)

This was even simpler than I thought it would be! The first hunk in the interdiff is also just bringing back the original code from HEAD, reducing the overall set of changes.

gabesullice’s picture

Issue summary: View changes

By what mechanism will this cache be warmed so that it will be impactful when [Dynamic ]Page Cache is cold?

The only scenario that seems plausible to me is collections that are serving a Views-like component with "exposed filters". Am I missing another?

I think @e0ipso responded to this indirectly with:

  1. Include that entity
  2. Request that entity with a different sparse fieldset combination

But how/when does that actually happen on a real site? Let's say there's this request: /jsonapi/node/article?include=field_tags&fields[taxonomy_term--tags]=name. There's no size limit to includes, so you're potentially adding an enormous cost to that request if the articles are referencing a lot of tags. Further, in that scenario, how often is someone then going to directly request a tag entity individually or in a collection with more fields so that the cache actually pays off? My gut tells me that will actually be fairly infrequent and it would be better to let that request be slower.

To be clear, I'm not arguing against partial caching; I really want to see it land. I just need to be sure we don't regress the 80% case.

Additionally (and probably most importantly) this will allow the creation of jsonapi_boost which can cache warm the API. This would not be possible if we vary the cache entry by the fieldset combination or skip caching on the presence of sparse fieldsets.

I'm glad we're on the same page :D

Aside: with this feature, a nice contrib module could be made to preemptively do entity normalizations for entities that haven't even been requested. — #31


  1. +++ b/jsonapi.services.yml
    @@ -66,7 +66,7 @@ services:
    +    arguments: ['@jsonapi.link_manager', '@jsonapi.resource_type.repository', '@entity_type.manager', '@entity_field.manager', '@plugin.manager.field.field_type', '@render_cache']
    
    @@ -145,6 +145,12 @@ services:
    +  cache.jsonapi_normalizations:
    +    class: Drupal\Core\Cache\CacheBackendInterface
    +    tags:
    +      - { name: cache.bin }
    +    factory: cache_factory:get
    +    arguments: [jsonapi_normalizations]
    
    +++ b/src/Normalizer/EntityNormalizer.php
    @@ -138,12 +203,94 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
    +  protected static function generateLookupRenderArray(ResourceType $resource_type, EntityInterface $entity) {
    +    return [
    +      '#cache' => [
    +        'keys' => [$resource_type->getTypeName(), $entity->uuid()],
    +        'bin' => 'jsonapi_normalizations',
    +      ],
    +    ];
    ...
    +    $base = static::generateLookupRenderArray($resource_type, $entity);
    +    $data_as_render_array = $base + [
    +      // The data we actually care about.
    +      '#data' => $normalization_parts,
    +      // Tell RenderCache to cache the #data property: the data we actually care
    +      // about.
    +      '#cache_properties' => ['#data'],
    +      // These exist only to fulfill the requirements of the RenderCache, which
    +      // is designed to work with render arrays only. We don't care about these.
    +      '#markup' => '',
    +      '#attached' => '',
    +    ];
    

    Using the render cache and render arrays seems a little smelly to me. What are we buying? Would it be simpler in the end to not reuse this and have less magic and more domain-specific code? (I'm not at all familiar with this stuff, maybe I'm wildly off the mark).

  2. +++ b/src/Normalizer/EntityNormalizer.php
    @@ -118,11 +130,64 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
    +   *   An array with two key-value pairs:
    +   *   - 'base': array, the base normalization of the entity, that does not
    +   *             depend on which sparse fieldset was requested.
    +   *   - 'fields': CacheableNormalization for all requested fields.
    

    This array feels a little inelegant; could we move the "base" normalization out of this method? Then rename it to getFieldNormalizations.

Finally, I did a lot of work in #2997600: Resolve included resources prior to normalization to get rid of n+1 queries; this brings back a DB round trip per entity (albeit to save time in normalization). Could we make this work with CacheBackendInterface::getMultiple()? I'm thinking we might if we have a normalizer for the EntityCollection object.

Wim Leers’s picture

#48: I see that you responded to much of the discussion in #31 through #35.

But you did not yet respond to #36/#37/#38, which was implemented in #47. That completely addresses that entire conversation!


  1. It is smelly. But again, once #2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way lands, all that weirdness evaporates. We have no choice but to do this. If we don't do this, then we have to reimplement all the logic in \Drupal\Core\Render\RenderCache. That is extremely risky and a huge burden.
    See #39. We need to do this, because you accessing a certain entity might result in the normalization varying by user.permissions, but for me it might vary by user. The RenderCache service takes care of all the underlying cache redirection logic that is necessary for that to work correctly, and for it to gradually "learn" the different potential variations.
    Yes, there are tiny bits of render arrays in here, but that's literally just a transport layer.

Could we make this work with CacheBackendInterface::getMultiple()? I'm thinking we might if we have a normalizer for the EntityCollection object.

That kinda requires leaky abstractions: it means that EntityNormalizer won't be self-contained anymore; we'll need something that knows which entities will be normalized, and that prefetches those cache entries before normalization starts. I quickly PoC'd this, and it looks like something like this in \Drupal\jsonapi\Controller\EntityResource::getCollection() would work:

diff --git a/src/Controller/EntityResource.php b/src/Controller/EntityResource.php
index 77674e3..286fe38 100644
--- a/src/Controller/EntityResource.php
+++ b/src/Controller/EntityResource.php
@@ -354,6 +354,12 @@ class EntityResource {
     $entity_collection = new EntityCollection($collection_data);
     $entity_collection->setHasNextPage($has_next_page);
 
+    // Prewarm cache to avoid many distinct cache GETs.
+    $normalization_cids_to_prefetch = array_map(function (EntityInterface $entity) {
+      return $entity->getEntityTypeId() . '--' . $entity->bundle() . ':' . $entity->uuid();
+    }, $entity_collection->toArray());
+    \Drupal::cache('jsonapi_normalizations')->getMultiple($normalization_cids_to_prefetch);
+
     // Calculate all the results and pass them to the EntityCollectionInterface.
     $count_query_cacheability = new CacheableMetadata();
     if ($resource_type->includeCount()) {

at least if we make the cache.jsonapi_normalizations cache bin use a chained cache backend that first caches in memory and, if it's not in there, checks the database, like in this patch: #3018287-62: ResourceTypeRepository computes ResourceType value objects on *every request*.

gabesullice’s picture

But you did not yet respond to #36/#37/#38, which was implemented in #47. That completely addresses that entire conversation!

I misread #38; I thought you were agreeing to postpone to a follow-up and regress temporarily. I agree that my concerns about regression go away with your solution in #47.


That kinda requires leaky abstractions: it means that EntityNormalizer won't be self-contained anymore; we'll need something that knows about which entities will be normalized but before normalization starts to prefetch those cache entries.

I suggested an EntityCollection normalizer to keep the abstraction fairly contained. This could fetch all the cache entries for all its entities then only normalize entities that are either incomplete or have no entry. I think a helper like getSparseEntityNormalization(CacheableNormalization $entity, array $fields): FALSE|CacheableNormalization (FALSE when the given normalization does not have all necessary fields) would make this fairly clean. It makes sense that an EntityCollection normalizer would have the "smarts" to lazily normalize its children.
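Roughly, and with invented names, that collection-level idea might look like the following sketch — one getMultiple() for all entities, and per-entity normalization only for missing or incomplete cache entries:

/**
 * Hypothetical sketch of an EntityCollection-level cache lookup.
 */
function normalize_collection_with_cache(\Drupal\Core\Cache\CacheBackendInterface $cache, array $entities_by_uuid, array $requested_fields, callable $normalize_entity) {
  // One cache ID per entity; a single getMultiple() replaces N get() calls.
  $cids = [];
  foreach ($entities_by_uuid as $uuid => $entity) {
    $cids[$uuid] = 'jsonapi_normalizations:' . $uuid;
  }
  $lookup = array_values($cids);
  $hits = $cache->getMultiple($lookup);

  $normalizations = [];
  foreach ($entities_by_uuid as $uuid => $entity) {
    $cached = isset($hits[$cids[$uuid]]) ? $hits[$cids[$uuid]]->data : NULL;
    // "Complete" means every requested field is present in the cached entry.
    if (is_array($cached) && !array_diff($requested_fields, array_keys($cached))) {
      $normalizations[$uuid] = array_intersect_key($cached, array_flip($requested_fields));
    }
    else {
      // Incomplete or missing: fall back to normalizing this entity.
      $normalizations[$uuid] = $normalize_entity($entity, $requested_fields);
    }
  }
  return $normalizations;
}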

I quickly PoC'd this, and looks like something [using a chained backend]

I was imagining something like this too, if we didn't want to do the above.

Wim Leers’s picture

Just talked to @gabesullice. We agreed that if profiling shows that this is still a net speed-up, we can just stick to the approach of the current patch in #47: we can keep doing individual cache ::get()s and leave optimizing that into a single massive ::getMultiple() for a follow-up.

I was thinking that @jibran might be interested in doing that, since he reported #2996637: [META] Improve collection and include performance which led to #3011099: Only serialize sparse_fieldset fields in \Drupal\jsonapi\Normalizer\EntityNormalizer::normalize(), based on a real-world scenario/workload! Would be perfect to validate this patch using that same scenario/workload. So pinged him on Twitter: https://twitter.com/wimleers/status/1085653956280299526.

ndobromirov’s picture

Will test this one in the upcoming days depending on free time.

Wim Leers’s picture

🎉

gabesullice’s picture

<musing>

Going back to the idea of n+1 queries...

If/when #3015438: Wrap entity objects in a ResourceObject which carries a ResourceType lands, I wonder if we could extend ResourceObject into a NormalizedResourceObject? Then, instead of loading entities in EntityResource::getCollection and IncludeResolver::resolve, we first try to load instances of NormalizedResourceObjects. Doing that would avoid first loading entities, then again loading their normalized values (in which case a loaded entity just isn't needed).

The challenge would be in how to handle includes across includes and translations...

ResourceObject could have a getRelationship($field) method added to it, which would return a list of resource identifiers. The NormalizedResourceObject could derive those directly from the normalized value and the existing class could derive them from an entity reference.

Translations I guess would just be an extra column in a custom table where we stored the NormalizedResourceObjects.

Maybe this would never work because of entity access...

</musing>

jibran’s picture

To start the benchmarking, both page_cache and dynamic_page_cache were uninstalled.

Without Patch

Simple node collection endpoint jsonapi/node/node_type (cold cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   11.771 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      1508200 bytes
HTML transferred:       1407100 bytes
Requests per second:    8.50 [#/sec] (mean)
Time per request:       117.712 [ms] (mean)
Time per request:       117.712 [ms] (mean, across all concurrent requests)
Transfer rate:          125.12 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    61  118 487.3     67    4941
Waiting:       60  114 455.2     66    4619
Total:         61  118 487.3     67    4941

Percentage of the requests served within a certain time (ms)
  50%     67
  66%     69
  75%     70
  80%     70
  90%     76
  95%     80
  98%    182
  99%   4941
 100%   4941 (longest request)
-----------------------------------------------------------------------

Simple node collection endpoint jsonapi/node/node_type (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   6.526 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      1508200 bytes
HTML transferred:       1407100 bytes
Requests per second:    15.32 [#/sec] (mean)
Time per request:       65.262 [ms] (mean)
Time per request:       65.262 [ms] (mean, across all concurrent requests)
Transfer rate:          225.68 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    61   65   6.3     64     117
Waiting:       60   64   6.3     63     117
Total:         61   65   6.3     64     117

Percentage of the requests served within a certain time (ms)
  50%     64
  66%     65
  75%     66
  80%     67
  90%     68
  95%     73
  98%     82
  99%    117
 100%    117 (longest request)
-----------------------------------------------------------------------

With Patch

Simple node collection endpoint jsonapi/node/node_type (cold cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   12.009 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      1508200 bytes
HTML transferred:       1407100 bytes
Requests per second:    8.33 [#/sec] (mean)
Time per request:       120.088 [ms] (mean)
Time per request:       120.088 [ms] (mean, across all concurrent requests)
Transfer rate:          122.65 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    65  120 503.5     68    5104
Waiting:       64  117 476.9     67    4836
Total:         65  120 503.5     68    5104

Percentage of the requests served within a certain time (ms)
  50%     68
  66%     69
  75%     70
  80%     70
  90%     72
  95%     76
  98%    184
  99%   5104
 100%   5104 (longest request)
-----------------------------------------------------------------------

Simple node collection endpoint jsonapi/node/node_type (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   7.043 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      1508200 bytes
HTML transferred:       1407100 bytes
Requests per second:    14.20 [#/sec] (mean)
Time per request:       70.431 [ms] (mean)
Time per request:       70.431 [ms] (mean, across all concurrent requests)
Transfer rate:          209.12 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    65   70   5.7     69     110
Waiting:       64   70   5.7     68     109
Total:         65   70   5.7     69     110

Percentage of the requests served within a certain time (ms)
  50%     69
  66%     70
  75%     71
  80%     72
  90%     76
  95%     81
  98%     89
  99%    110
 100%    110 (longest request)
-----------------------------------------------------------------------

jibran’s picture

Issue tags: -needs profiling

Without Patch

8 fields, 2 computed, 2 filters, title sort, Pager 0-50 (cold cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   43.483 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5653100 bytes
HTML transferred:       5546400 bytes
Requests per second:    2.30 [#/sec] (mean)
Time per request:       434.829 [ms] (mean)
Time per request:       434.829 [ms] (mean, across all concurrent requests)
Transfer rate:          126.96 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   366  435 545.8    377    5836
Waiting:      364  428 506.9    374    5444
Total:        366  435 545.8    377    5836

Percentage of the requests served within a certain time (ms)
  50%    377
  66%    380
  75%    381
  80%    384
  90%    387
  95%    418
  98%    472
  99%   5836
 100%   5836 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 0-50 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   38.215 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5653100 bytes
HTML transferred:       5546400 bytes
Requests per second:    2.62 [#/sec] (mean)
Time per request:       382.146 [ms] (mean)
Time per request:       382.146 [ms] (mean, across all concurrent requests)
Transfer rate:          144.46 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   368  382  10.2    380     415
Waiting:      366  380  10.2    377     412
Total:        369  382  10.2    380     415

Percentage of the requests served within a certain time (ms)
  50%    380
  66%    384
  75%    388
  80%    392
  90%    396
  95%    401
  98%    414
  99%    415
 100%    415 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 50-100 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   40.856 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5756400 bytes
HTML transferred:       5650200 bytes
Requests per second:    2.45 [#/sec] (mean)
Time per request:       408.559 [ms] (mean)
Time per request:       408.559 [ms] (mean, across all concurrent requests)
Transfer rate:          137.59 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   340  408 261.2    357    2648
Waiting:      338  406 261.0    354    2644
Total:        340  409 261.2    357    2648

Percentage of the requests served within a certain time (ms)
  50%    357
  66%    372
  75%    383
  80%    388
  90%    417
  95%    451
  98%   1412
  99%   2648
 100%   2648 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 50-100 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   38.131 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5756400 bytes
HTML transferred:       5650200 bytes
Requests per second:    2.62 [#/sec] (mean)
Time per request:       381.311 [ms] (mean)
Time per request:       381.311 [ms] (mean, across all concurrent requests)
Transfer rate:          147.43 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   338  381  15.1    380     458
Waiting:      336  379  15.0    377     456
Total:        338  381  15.1    380     458

Percentage of the requests served within a certain time (ms)
  50%    380
  66%    383
  75%    387
  80%    388
  90%    397
  95%    410
  98%    425
  99%    458
 100%    458 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 100-150 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   40.387 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5812500 bytes
HTML transferred:       5706200 bytes
Requests per second:    2.48 [#/sec] (mean)
Time per request:       403.867 [ms] (mean)
Time per request:       403.867 [ms] (mean, across all concurrent requests)
Transfer rate:          140.55 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   346  404 219.6    380    2574
Waiting:      344  401 219.6    378    2571
Total:        346  404 219.6    380    2574

Percentage of the requests served within a certain time (ms)
  50%    380
  66%    384
  75%    387
  80%    389
  90%    398
  95%    419
  98%    441
  99%   2574
 100%   2574 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 100-150 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   38.230 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5812500 bytes
HTML transferred:       5706200 bytes
Requests per second:    2.62 [#/sec] (mean)
Time per request:       382.301 [ms] (mean)
Time per request:       382.301 [ms] (mean, across all concurrent requests)
Transfer rate:          148.48 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   367  382  12.9    380     463
Waiting:      365  380  12.7    378     459
Total:        367  382  12.9    380     463

Percentage of the requests served within a certain time (ms)
  50%    380
  66%    384
  75%    385
  80%    388
  90%    397
  95%    405
  98%    414
  99%    463
 100%    463 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 150-182 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   28.366 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3712800 bytes
HTML transferred:       3622600 bytes
Requests per second:    3.53 [#/sec] (mean)
Time per request:       283.660 [ms] (mean)
Time per request:       283.660 [ms] (mean, across all concurrent requests)
Transfer rate:          127.82 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   248  284 152.0    267    1785
Waiting:      247  282 152.0    264    1784
Total:        248  284 152.0    267    1785

Percentage of the requests served within a certain time (ms)
  50%    267
  66%    271
  75%    273
  80%    275
  90%    283
  95%    293
  98%    297
  99%   1785
 100%   1785 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 150-182 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   26.648 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3712800 bytes
HTML transferred:       3622600 bytes
Requests per second:    3.75 [#/sec] (mean)
Time per request:       266.478 [ms] (mean)
Time per request:       266.478 [ms] (mean, across all concurrent requests)
Transfer rate:          136.06 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   241  266   8.0    265     290
Waiting:      239  265   8.0    263     289
Total:        241  266   8.0    265     290

Percentage of the requests served within a certain time (ms)
  50%    265
  66%    269
  75%    271
  80%    272
  90%    279
  95%    284
  98%    288
  99%    290
 100%    290 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 200-250 [Empty response] (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   5.558 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      257700 bytes
HTML transferred:       179400 bytes
Requests per second:    17.99 [#/sec] (mean)
Time per request:       55.582 [ms] (mean)
Time per request:       55.582 [ms] (mean, across all concurrent requests)
Transfer rate:          45.28 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    49   55   6.8     54     112
Waiting:       49   55   6.8     54     112
Total:         49   56   6.8     54     112

Percentage of the requests served within a certain time (ms)
  50%     54
  66%     56
  75%     57
  80%     58
  90%     60
  95%     62
  98%     69
  99%    112
 100%    112 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 200-250 [Empty response] (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   5.497 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      257700 bytes
HTML transferred:       179400 bytes
Requests per second:    18.19 [#/sec] (mean)
Time per request:       54.966 [ms] (mean)
Time per request:       54.966 [ms] (mean, across all concurrent requests)
Transfer rate:          45.78 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    49   55   4.8     54      69
Waiting:       49   55   4.8     54      69
Total:         49   55   4.8     54      69

Percentage of the requests served within a certain time (ms)
  50%     54
  66%     55
  75%     57
  80%     58
  90%     63
  95%     66
  98%     68
  99%     69
 100%     69 (longest request)
-----------------------------------------------------------------------

With patch

8 fields, 2 computed, 2 filters, title sort, Pager 0-50 (cold cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   34.150 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5653100 bytes
HTML transferred:       5546400 bytes
Requests per second:    2.93 [#/sec] (mean)
Time per request:       341.503 [ms] (mean)
Time per request:       341.503 [ms] (mean, across all concurrent requests)
Transfer rate:          161.66 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    78  341 1715.3    123   17263
Waiting:       77  320 1666.8    105   16764
Total:         78  341 1715.3    123   17264

Percentage of the requests served within a certain time (ms)
  50%    123
  66%    179
  75%    193
  80%    214
  90%    325
  95%    641
  98%    861
  99%  17264
 100%  17264 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 0-50 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   8.242 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5653100 bytes
HTML transferred:       5546400 bytes
Requests per second:    12.13 [#/sec] (mean)
Time per request:       82.423 [ms] (mean)
Time per request:       82.423 [ms] (mean, across all concurrent requests)
Transfer rate:          669.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    77   82  16.8     80     247
Waiting:       76   81   9.1     79     166
Total:         77   82  16.8     80     247

Percentage of the requests served within a certain time (ms)
  50%     80
  66%     81
  75%     82
  80%     82
  90%     83
  95%     86
  98%     95
  99%    247
 100%    247 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 50-100 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   15.658 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5756400 bytes
HTML transferred:       5650200 bytes
Requests per second:    6.39 [#/sec] (mean)
Time per request:       156.583 [ms] (mean)
Time per request:       156.583 [ms] (mean, across all concurrent requests)
Transfer rate:          359.01 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    77  157 737.5     81    7457
Waiting:       77  155 730.7     80    7388
Total:         78  157 737.5     81    7457

Percentage of the requests served within a certain time (ms)
  50%     81
  66%     82
  75%     83
  80%     83
  90%     87
  95%     90
  98%    172
  99%   7457
 100%   7457 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 50-100 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   8.324 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5756400 bytes
HTML transferred:       5650200 bytes
Requests per second:    12.01 [#/sec] (mean)
Time per request:       83.242 [ms] (mean)
Time per request:       83.242 [ms] (mean, across all concurrent requests)
Transfer rate:          675.32 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    78   83   9.3     82     169
Waiting:       77   82   9.3     81     168
Total:         78   83   9.3     82     169

Percentage of the requests served within a certain time (ms)
  50%     82
  66%     83
  75%     84
  80%     85
  90%     88
  95%     90
  98%     98
  99%    169
 100%    169 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 100-150 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   15.908 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5812500 bytes
HTML transferred:       5706200 bytes
Requests per second:    6.29 [#/sec] (mean)
Time per request:       159.080 [ms] (mean)
Time per request:       159.080 [ms] (mean, across all concurrent requests)
Transfer rate:          356.82 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    79  159 704.2     84    7128
Waiting:       78  158 704.1     83    7125
Total:         79  159 704.2     84    7128

Percentage of the requests served within a certain time (ms)
  50%     84
  66%     87
  75%     92
  80%     93
  90%    100
  95%    106
  98%    273
  99%   7128
 100%   7128 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 100-150 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   8.193 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5812500 bytes
HTML transferred:       5706200 bytes
Requests per second:    12.21 [#/sec] (mean)
Time per request:       81.933 [ms] (mean)
Time per request:       81.933 [ms] (mean, across all concurrent requests)
Transfer rate:          692.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    77   82   3.5     81     103
Waiting:       77   81   3.5     80     102
Total:         77   82   3.5     81     103

Percentage of the requests served within a certain time (ms)
  50%     81
  66%     82
  75%     82
  80%     83
  90%     86
  95%     88
  98%     95
  99%    103
 100%    103 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 150-182 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   11.574 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3712800 bytes
HTML transferred:       3622600 bytes
Requests per second:    8.64 [#/sec] (mean)
Time per request:       115.736 [ms] (mean)
Time per request:       115.736 [ms] (mean, across all concurrent requests)
Transfer rate:          313.28 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    67  116 447.3     70    4544
Waiting:       66  115 447.2     69    4542
Total:         67  116 447.3     70    4544

Percentage of the requests served within a certain time (ms)
  50%     70
  66%     71
  75%     71
  80%     72
  90%     73
  95%     78
  98%    100
  99%   4544
 100%   4544 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 150-182 (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   7.181 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3712800 bytes
HTML transferred:       3622600 bytes
Requests per second:    13.93 [#/sec] (mean)
Time per request:       71.807 [ms] (mean)
Time per request:       71.807 [ms] (mean, across all concurrent requests)
Transfer rate:          504.93 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    67   72   6.9     70     125
Waiting:       66   71   6.9     70     124
Total:         67   72   6.9     70     125

Percentage of the requests served within a certain time (ms)
  50%     70
  66%     71
  75%     72
  80%     72
  90%     74
  95%     80
  98%    102
  99%    125
 100%    125 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 200-250 [Empty response] (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   5.304 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      257700 bytes
HTML transferred:       179400 bytes
Requests per second:    18.85 [#/sec] (mean)
Time per request:       53.036 [ms] (mean)
Time per request:       53.036 [ms] (mean, across all concurrent requests)
Transfer rate:          47.45 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    49   53   6.8     52     118
Waiting:       49   53   6.8     52     118
Total:         49   53   6.8     52     118

Percentage of the requests served within a certain time (ms)
  50%     52
  66%     53
  75%     53
  80%     53
  90%     54
  95%     55
  98%     61
  99%    118
 100%    118 (longest request)
-----------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 200-250 [Empty response] (warm cache)

-----------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   5.262 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      257700 bytes
HTML transferred:       179400 bytes
Requests per second:    19.01 [#/sec] (mean)
Time per request:       52.616 [ms] (mean)
Time per request:       52.616 [ms] (mean, across all concurrent requests)
Transfer rate:          47.83 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    49   53   2.5     52      67
Waiting:       49   52   2.5     52      67
Total:         49   53   2.5     52      67

Percentage of the requests served within a certain time (ms)
  50%     52
  66%     53
  75%     53
  80%     54
  90%     56
  95%     56
  98%     61
  99%     67
 100%     67 (longest request)
-----------------------------------------------------------------------
Wim Leers’s picture

Thanks, @jibran! Could you update your comments to indicate what the results look like without this patch? Without that information, it's impossible to interpret your results.

Or am I just overlooking something?

jibran’s picture

Oh! All of these stats are with the patch. I can get the without-patch stats as well. :)

jibran’s picture

Added without patch stats to both comments.

Wim Leers’s picture

What is "warm cache" when the patch is not applied? With Page Cache & Dynamic Page Cache turned off, I don't know what this refers to.

The "with patch" results versus "without patch" results in #57 are quite encouraging, but without an answer to the above, I don't dare interpret these numbers or draw conclusions.

Thanks again for taking the time to test this so thoroughly! 🙏

jibran’s picture

What is "warm cache" when the patch is not applied?

Maybe I used the wrong terminology. Here is the profiling scenario:

Without patch

  • Uninstalled page cache and dynamic page cache modules.
  • Updated JSON:API and JSON:API_EXTRA to latest dev version.
  • Cleared Drupal Cache using drush cr
  • Ran the ab command. I called this cold cache.
  • Every ab command I ran after that has written warm cache against it.

With patch

  • Uninstalled page cache and dynamic page cache modules.
  • Updated JSON:API and JSON:API_EXTRA to latest dev version.
  • Applied the patch
  • Cleared Drupal Cache using drush cr
  • Ran the ab command. I called this cold cache.
  • Every ab command I ran after that has written warm cache against it.

I hope this clears up the confusion.

Wim Leers’s picture

I think when you wrote

Every ab command I ran after that has written cold cache against it.

you actually meant

Every ab command I ran after that has written warm cache against it.

Correct?

(Don't worry, I know it was stupidly late over there!)

jibran’s picture

*facepalm* Yes, you are right. I have updated the comment to show that.

Wim Leers’s picture

Note that all the cold cache cases are really still 99 "warm cache" responses: only the first request is an actual cold cache request. You can see this by looking at the very last number in each benchmark result. (In the first one it's 4941, for example: 5 seconds for the first request.) That first request is doing things like priming entity caches. Perhaps you are even starting without a built container?

Observations:

  • In #56's /jsonapi/node_type/node_type example, the longest request goes up from 4941 to 5104 ms. That's 3% longer. That seems within the margin of error when so much I/O is going on, TBH.
  • Let's look at #56's warm cache cases. The median is 64 ms response time in HEAD, with 6.3 ms standard deviation. With the patch, it goes up to a median of 69, but with a lower sd of 5.7 ms. That means the median went up one whole sd. That seems problematic. But not enough data to draw conclusions.
  • #57's cold cache cases go up from 5836, 2648, 2574 and 1785 to 17264, 7457, 7128 and 4544. Those are very noticeable increases for the cold cache cases! About threefold! 😱 This is bad! 👎
  • When looking at #57's warm cache cases, I see medians (and sds) of 380 (10.2), 380 (15.1), 380 (12.9), 265 (8.0) for HEAD and 80 (16.8), 82 (9.3), 81 (3.5), 70 (6.9). About a fourfold increase in throughput! 🎉 This is great! 👍
  • Note that I omitted #57's "empty" case: that went unchanged both in the cold case (from 112 to 118) and the warm case (from 69 to 67). This suggests that the numbers obtained above are indeed consequences of caching normalizations of entities, because in the empty case, there is nothing to cache, hence there is no extra work, and in those cases the numbers remained unchanged. In other words: the lack of change here lends credibility to the significant changes elsewhere.

Conclusion

  • 4x better performance with warm caches (requests 2–N).
  • 3x worse performance with cold caches (request 1)
  • This means the getMultiple() concern raised by @gabesullice in #48 is a non-issue.
  • I personally think this is a fine trade-off. @e0ipso does too — see #35. Cache warming is common practice for dynamic web sites. Typical requests can be trivially prewarmed. And note that even a single request can pre-warm many others, as long as they normalize the same entities.

Thoughts, @jibran, @gabesullice, @e0ipso?

e0ipso’s picture

#65: it's awesome to get those numbers to validate our guesses. I'm still OK with the trade-offs.

I have an early prototype for a cache warmer module for JSON:API already :-P

Wim Leers’s picture

Assigned: Unassigned » gabesullice

I have an early prototype for a cache warmer module for JSON:API already :-P

🎉 😃

@gabesullice, thoughts?

It'd be cool to keep up the "release once every 2 weeks" rhythm. We could tag 2.2 this Monday, February 4. This performance boost could be an enticing reason for people to update from JSON:API 2.1 to 2.2 :)

ndobromirov’s picture

The numbers seem great: a 4x speed-up for warm data :D.

I have problems testing this, as it conflicts with another patch I am using, #3018287: ResourceTypeRepository computes ResourceType value objects on *every request*. I will likely be rebasing one on top of the other next week, so just ping me about which one will go in first so I can rebase on it.

gabesullice’s picture

First I'm HUGELY excited by the potential gains we're seeing.

However, I'm still fairly skeptical of the results. It feels a bit like we're robbing Peter to pay Paul.

  • 4x better performance with warm caches (requests 2–N).
  • 3x worse performance with cold caches (request 1)

The reality is that the "4x better performance" is a moot point if DPC or PC is involved, which in almost all cases it will be. Thus this change is mostly just slowing down uncached responses dramatically.

What if, now that we have a PoC and validation that this can be quite fast, the "JSON:API Boost" module just decorates the entity normalizer itself with this caching behavior?

If we could be sure that most requests (not cached by DPC) already have their entities pre-normalized, that would make me much less worried about this. However, I'm not convinced that will be the case.

I'd also like to see tests without computed fields. They are still not the norm and we might only be looking at the "best case scenario".

Wim Leers’s picture

Assigned: gabesullice » Unassigned
FileSize
15.25 KB
18.89 KB

Mea culpa

The reality is that the "4x better performance" is a moot point if DPC or PC is involved, which in almost all cases it will be. Thus this change is mostly just slowing down uncached responses dramatically.

You're right. I presented the information far too simply and painted too rosy a picture. I should have added far more nuance. I let myself get carried away by the numbers: I wanted to get an analysis out the door, and I shouldn't have. I should've been more thorough.

Mea culpa. 😔

Realistic scenario

The numbers in #56 and #57 are for something that's easy to measure. To properly measure this, we'd need to test with a realistic recorded workload, i.e. with a certain sequence of requests. That's very, very difficult to obtain.

I'd also like to see tests without computed fields. They are still not the norm and we might only be looking at the "best case scenario".

You're right, but @jibran's real-world use case just happens to contain that.

Still, we shouldn't make general conclusions based on one specific case.

@ndobromirov, I'd love to see your test results! 😀

This is not about DPC

That being said…

Of course you're right that repeating the exact same request would already be accelerated by DPC (Dynamic Page Cache). And then it'd be much faster still. So, yes, this patch would not speed up requests 2–N to a URL X1.

However, this does speed up cold cache cases to other URLs than the one that was just stored in DPC (currently URL X1). That is, the first request to URLs X2–N would result in cache misses for DPC, but insofar as the set of entities they return/normalize intersects with that for URL X1, they'll be accelerated.

You would see that 4x increase only if they intersect 100% (in other words: if responses for URLs X2–N are a strict subset of the response for URL X1). That's what you're getting at in:

If we could be sure that most entities in a request (not cached by DPC) already has its entities pre-normalized, that would make me much less worried about this. However, I'm not convinced that will be the case.

(An interesting consequence is that with a single well-targeted and super expensive "cache warming request", you could accelerate lots of other requests.)

Writing to cache post-response

What I propose as a solution is to actually skip setting a cache value if a sparse fieldset is requested. In that case, we should create a normalization queue and insert that entity as a queue item instead of setting it in cache. On Kernel::TERMINATE, we can then process that queue and normalize it with no skipped fields.

It's time we try this. I initially reacted skeptically to this, but back then I didn't yet know how complex or simple the patch would end up being. It's currently quite simple. Let's see how much more complex it becomes once we do this.

So, here it is. Again quite late in the day, definitely not with my top concentration, but this should at least be fairly close 🤞
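
Roughly, the delayed-write part looks like this (a simplified sketch of the idea, not the literal patch code; the method names are illustrative):

<?php

namespace Drupal\jsonapi\EventSubscriber;

use Drupal\Core\Cache\Cache;
use Drupal\Core\Cache\CacheBackendInterface;
use Drupal\Core\Entity\EntityInterface;
use Drupal\jsonapi\ResourceType\ResourceType;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Sketch: queues entity normalizations, writes them after the response is sent.
 */
class EntityNormalizationCacher implements EventSubscriberInterface {

  /**
   * The cache backend for entity normalizations.
   *
   * @var \Drupal\Core\Cache\CacheBackendInterface
   */
  protected $cache;

  /**
   * Normalizations queued for writing, keyed by cache ID.
   *
   * @var array
   */
  protected $toCache = [];

  public function __construct(CacheBackendInterface $cache) {
    $this->cache = $cache;
  }

  /**
   * Queues a normalization for this entity; nothing is written yet.
   */
  public function saveOnTerminate(ResourceType $resource_type, EntityInterface $entity, array $normalization_parts) {
    $cid = implode(':', [$resource_type->getTypeName(), $entity->uuid()]);
    $this->toCache[$cid] = $normalization_parts;
  }

  /**
   * Writes every queued normalization once the response has been sent.
   */
  public function onTerminate() {
    foreach ($this->toCache as $cid => $parts) {
      // The real patch also stores cacheability metadata (tags, contexts)
      // alongside the data; omitted here for brevity.
      $this->cache->set($cid, $parts, Cache::PERMANENT);
    }
    $this->toCache = [];
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    return [KernelEvents::TERMINATE => 'onTerminate'];
  }

}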

Status: Needs review » Needs work

The last submitted patch, 70: 2819335-70.patch, failed testing. View results

Wim Leers’s picture

Great, #70 has only 3 failures, 2 of which are in a kernel/unit test and 1 is for a specific edge case for the User entity. That means this generally works! :)

(We'd still want to add explicit test coverage of course.)

@jibran: I know it's a lot to ask but … it'd be super helpful if you could rerun your tests. You'd only need to redo the "after" tests if everything else in the environment has remained the same.

Wim Leers’s picture

Status: Needs work » Needs review
FileSize
2.43 KB
19.54 KB

Hah, turns out that the one change I made last-minute, while cleaning up and after manual testing (adding public: false), essentially made this ineffective 😭 Private services can't be event subscribers.

EDIT: it'd be cool if Symfony actually threw an error for this during container compilation, instead of silently allowing it to "work" and then not behave as expected.

(This is exactly why we need more explicit tests of course.)

Status: Needs review » Needs work

The last submitted patch, 73: 2819335-73.patch, failed testing. View results
- codesniffer_fixes.patch Interdiff of automated coding standards fixes only.

Wim Leers’s picture

Status: Needs work » Needs review
FileSize
811 bytes
19.49 KB

CS fix.

gabesullice’s picture

Just read the new patch with the cacher. That was a stroke of brilliance. I did not think about just delaying the write until after the request. I figured we'd just be saying "oh, and normalize this again later", effectively normalizing everything twice. 👏

It's also awesome that the patch is still under 20k!

+++ b/src/EventSubscriber/EntityNormalizationCacher.php
@@ -0,0 +1,167 @@
+        'keys' => [$resource_type->getTypeName(), $entity->uuid()],

I know it's early, but let's not forget that this will need to vary by translations and revisions. (🤔 wouldn't it be great if EntityInterface had a standard way of fingerprinting/ID'ing every entity variant, like a per-variant UUID).

Wim Leers’s picture

I know it's early, but let's not forget that this will need to vary on translations and revisions. (🤔 wouldn't it be great if EntityInterface had a standard way of fingerprinting/ID'ing every entity variant, like a per-variant UUID).

Hehe :) And yes, that will be a concern. For revisions, it already is. Although for revisions we could make this simple: only cache the default revision. For translations, we’ll need more nuance.

Do I sense a “RTBC if tests”? 😏🤓

gabesullice’s picture

Hehe :) And yes, that will be a concern. For revisions, it already is. Although for revisions we could make this simple: only cache the default revision. For translations, we’ll need more nuance.

Translations already have some accidental support. I don't think it's a good idea to break that.

I'm fine with only caching the default revision. Although, would it really be all that difficult to have a complete solution?

Do I sense a “RTBC if tests”?

Almost there!


  1. +++ b/src/EventSubscriber/EntityNormalizationCacher.php
    @@ -0,0 +1,167 @@
    + * Caches partial entity normalizations after the response has been sent.
    

    Might it be worth generalizing this to not be specific to entity normalizations in this issue? It's a really nice pattern that we may find useful in other issues. e.g., we could consider it in #3018287: ResourceTypeRepository computes ResourceType value objects on *every request*

  2. +++ b/src/EventSubscriber/EntityNormalizationCacher.php
    @@ -0,0 +1,167 @@
    +   * Reads a partial normalization from cache.
    

    ✅ nit: they're not "partial" entity normalizations. They're actually complete normalizations.

  3. +++ b/src/EventSubscriber/EntityNormalizationCacher.php
    @@ -0,0 +1,167 @@
    +  public function get(ResourceType $resource_type, EntityInterface $entity) {
    

    I'd love to have #3015438: Wrap entity objects in a ResourceObject which carries a ResourceType so that this could just take a single ResourceObject as its argument. Not a blocker, but we'll want to reroll this if it lands first.

  4. +++ b/src/EventSubscriber/EntityNormalizationCacher.php
    @@ -0,0 +1,167 @@
    +   * Adds a partial normalization to be cached after the response has been sent.
    

    ✅ "partial" again.

    It's only "partial" insofar as it's part of a larger response document. But I think that knowledge doesn't need to be documented/communicated on this class.

  5. +++ b/src/EventSubscriber/EntityNormalizationCacher.php
    @@ -0,0 +1,167 @@
    +   * @param array $normalization_parts
    +   *   The normalization parts to cache.
    ...
    +    assert(array_keys($normalization_parts) === ['base', 'fields']);
    
    +++ b/src/Normalizer/EntityNormalizer.php
    @@ -118,11 +120,64 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
    +   * @return array
    +   *   An array with two key-value pairs:
    +   *   - 'base': array, the base normalization of the entity, that does not
    +   *             depend on which sparse fieldset was requested.
    +   *   - 'fields': CacheableNormalization for all requested fields.
    

    A domain object would be nice here. I'm not a fan of magic array keys. I'm going to experiment to see if there's a way to use a CacheableNormalization object. (See the sketch after this list for how these 'base'/'fields' parts are meant to fit together.)

  6. +++ b/src/Normalizer/EntityNormalizer.php
    @@ -118,11 +120,64 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
    +      // Check if some of the currently requested sparse fieldset has not yet
    

    ✅ s/has/have

  7. +++ b/src/Normalizer/EntityNormalizer.php
    @@ -118,11 +120,64 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
    +   * @param string[] $field_names
    +   *   The field names to normalize (the sparse fieldset, if any).
    ...
    +   *   The entity to partially normalize.
    

    ✅ Ah, finally I see. It's "partial" because of sparse fieldsets.

  8. +++ b/src/Normalizer/EntityNormalizer.php
    @@ -118,11 +120,64 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
    +   *   Format the normalization result will be encoded as.
    

    ✅ "The format in which the normalization will be encoded."

gabesullice’s picture

+++ b/src/Normalizer/EntityNormalizer.php
@@ -90,13 +89,16 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
+    $this->cacher = $cacher;

$this->cacher needs to be defined as a class property.

gabesullice’s picture

Edit: patches below.

Per #78.1/5 I did a little playing around.

  1. I made the API of the cacher just have `get/set` methods. This makes it look more like a traditional cache backend. I love the idea of this "delayed write" thing and think it'd be interesting to consider proposing a new cache backend to core that does this (once the cache redirection thing is generalized, ofc).
  2. I separated out the event subscription behavior into its own class and I made a public persist method available to it. This will make it easier for "JSON:API Boost" to write cached normalizations and then persist them.
  3. Finally, I got rid of the ['base' => ['...'], 'fields' => ['...']] array and made this use just a CacheableNormalization object. I think the end result is a little more elegant (and could eventually play better with the idea of a more generic cache backend).
gabesullice’s picture

🙈

This is just an experiment for now. I'm curious what everyone thinks. The high-level approach is the same. I just reorganized and renamed things.

@jibran, we talked about benchmarking the "latest" patch on the API-first call today. Please don't use this one; instead, use #78 🙏

Status: Needs review » Needs work

The last submitted patch, 81: 2819335-80-experiment.patch, failed testing. View results

Wim Leers’s picture

#80 & #81:

interesting to consider proposing a new cache backend to core that does this (once the cache redirection thing is generalized, ofc).

We already have quite a few things in core doing things like that, such as \Drupal\Core\EventSubscriber\RequestCloseSubscriber.


  1. +++ b/jsonapi.services.yml
    @@ -66,12 +66,16 @@ services:
    +    public: false
    ...
           - { name: event_subscriber }
    

    😀 This is making the same mistake I made in #73: private event subscribers are not possible.

  2. +++ b/src/EventSubscriber/EntityNormalizationCacheEventSubscriber.php
    @@ -0,0 +1,53 @@
    +final class EntityNormalizationCacheEventSubscriber implements EventSubscriberInterface {
    +
    +  /**
    +   * The entity normalization cache.
    +   *
    +   * @var \Drupal\jsonapi\Cache\EntityNormalizationCache
    +   */
    +  protected $cache;
    +
    

    🤔 This means we're adding two new services. I don't see how this is simpler or better than what I had? Just more moving parts.

    How would it help a "JSON:API Boost" contrib module? The description @e0ipso gave in #35 is about cache warming, which means it needs to somehow determine which entities to normalize or which subrequests to execute prior to real requests occurring. That is the key complexity, not this.

    If we want a super nimble event subscriber, then let's move the cache logic back into the normalizer, so that the event subscriber only calls a "persist()" or "writeCache()" method.

  3. +++ b/src/Normalizer/EntityNormalizer.php
    @@ -159,38 +160,44 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
    +    $entity_normalization = $normalization->getNormalization();
    

    🤓 This is correct, but reads very weird.

  4. +++ b/src/Normalizer/EntityNormalizer.php
    @@ -159,38 +160,44 @@ class EntityNormalizer extends NormalizerBase implements DenormalizerInterface {
    +    $missing_fields = array_diff($field_names, array_merge($existing_attributes, $existing_relationships));
    

    👎 Unless I'm missing something … this does not account for the fact that some requested fields may simply be inaccessible; for those, this work would be redone on every single request, because this is now introspecting the concrete, complete entity normalization rather than the per-field normalization results, which might have been a CacheableOmission. (The contrast is sketched just below.)
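
To make the difference concrete (the first line is taken from the quoted #81 hunk; the second follows the earlier patch's approach, with the shape of the cached data simplified):

<?php

// #81's approach: diff against what ended up in the final normalized document.
// An inaccessible field never appears in the attributes/relationships output,
// so it keeps showing up as "missing" and is re-normalized on every request.
$missing_fields = array_diff(
  $field_names,
  array_merge($existing_attributes, $existing_relationships)
);

// The earlier approach: diff against the cached per-field normalization
// results. An access-denied field is cached as a CacheableOmission, so it
// still counts as "already normalized" and is not redone.
$missing_fields = array_diff($field_names, array_keys($cached['fields']));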

Wim Leers’s picture

#78:

Translations already have some accidental support. I don't think it's a good idea to break that.

Agreed. For translations it's simple though: vary by languages:language_content. We don't have a cache context for "active revision" like we do for "active translation".
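
A sketch of what that could look like (not part of the current patch; the variable names are hypothetical, and the patch currently keys only on resource type name and entity UUID):

<?php

use Drupal\Core\Cache\CacheableMetadata;

// Hypothetical: include the active content language in the cache ID so a
// German and an English normalization of the same entity get separate
// entries, and record the matching cache context on the cacheability.
$langcode = $entity->language()->getId();
$cid = implode(':', [
  $resource_type->getTypeName(),
  $entity->uuid(),
  $langcode,
]);

$cacheability = CacheableMetadata::createFromObject($entity)
  ->addCacheContexts(['languages:language_content']);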

Although, would it really by all that difficult to have a complete solution?

Maybe not, but I don't think it's worth delaying this issue/patch by even a minute for that; it's expanding scope. But honestly, yes, I do think it's difficult. Responding to your points:

  1. This cannot be applied to #3018287: ResourceTypeRepository computes ResourceType value objects on *every request* because ResourceTypeRepository must traverse all type information in order for the computed ResourceType objects to be correct. Consequently, I don't see how we could gradually grow the cache to be complete. It's all or nothing.
    Seems like you noticed this in point 7, but still removed all "partial" mentions?
  2. I … am confused by that statement. Per \Drupal\jsonapi\Normalizer\EntityNormalizer::normalize(), the whole point is that we only normalize those fields that are requested for the current sparse fieldset. Which means it is partial. What am I missing? I guess you mean that anything that does get cached is "complete for its purpose"? But it's not necessarily the complete entity normalization, as in for all fields.
  3. That'd be nice indeed :)
  4. See point 2.
  5. I'm not a fan of it either, but as long as it's limited to deeply internal use in one class (okay, now two, but they're tightly coupled, they're "friend classes" in C/C++ terminology), then it's IMHO fine.
jibran’s picture

To start the benchmarking, the page_cache and dynamic_page_cache modules were both uninstalled and xdebug was disabled.
Benchmarked against the current JSON:API HEAD. The cache was cleared using drush cr and /user was visited to rebuild the container.

Without patch

8 fields, 2 computed, 2 filters, title sort, Pager 0-49 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   42.513 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5653100 bytes
HTML transferred:       5546400 bytes
Requests per second:    2.35 [#/sec] (mean)
Time per request:       425.127 [ms] (mean)
Time per request:       425.127 [ms] (mean, across all concurrent requests)
Transfer rate:          129.86 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   364  425 354.5    384    3926
Waiting:      361  421 334.5    381    3723
Total:        364  425 354.5    384    3926

Percentage of the requests served within a certain time (ms)
  50%    384
  66%    391
  75%    397
  80%    402
  90%    440
  95%    455
  98%    480
  99%   3926
 100%   3926 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 0-49 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   40.145 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5653100 bytes
HTML transferred:       5546400 bytes
Requests per second:    2.49 [#/sec] (mean)
Time per request:       401.445 [ms] (mean)
Time per request:       401.445 [ms] (mean, across all concurrent requests)
Transfer rate:          137.52 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   388  401   8.6    400     432
Waiting:      385  399   8.6    397     429
Total:        388  401   8.6    400     433

Percentage of the requests served within a certain time (ms)
  50%    400
  66%    403
  75%    407
  80%    408
  90%    412
  95%    416
  98%    431
  99%    433
 100%    433 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 50-99 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   42.397 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5756400 bytes
HTML transferred:       5650200 bytes
Requests per second:    2.36 [#/sec] (mean)
Time per request:       423.971 [ms] (mean)
Time per request:       423.971 [ms] (mean, across all concurrent requests)
Transfer rate:          132.59 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   366  424 270.3    378    2770
Waiting:      363  421 270.2    376    2767
Total:        366  424 270.4    378    2770

Percentage of the requests served within a certain time (ms)
  50%    378
  66%    381
  75%    384
  80%    388
  90%    409
  95%    434
  98%   1496
  99%   2770
 100%   2770 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 50-99 (2nd run)

-----------------------------------------------------------------------------

Concurrency Level:      1
Time taken for tests:   41.430 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5756400 bytes
HTML transferred:       5650200 bytes
Requests per second:    2.41 [#/sec] (mean)
Time per request:       414.299 [ms] (mean)
Time per request:       414.299 [ms] (mean, across all concurrent requests)
Transfer rate:          135.69 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   369  414 167.4    384    1574
Waiting:      367  412 166.7    381    1567
Total:        369  414 167.4    384    1575

Percentage of the requests served within a certain time (ms)
  50%    384
  66%    388
  75%    392
  80%    400
  90%    423
  95%    442
  98%   1557
  99%   1575
 100%   1575 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 100-149 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   42.159 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5812500 bytes
HTML transferred:       5706200 bytes
Requests per second:    2.37 [#/sec] (mean)
Time per request:       421.593 [ms] (mean)
Time per request:       421.593 [ms] (mean, across all concurrent requests)
Transfer rate:          134.64 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   363  421 228.9    392    2670
Waiting:      361  419 228.9    389    2668
Total:        363  422 228.9    392    2670

Percentage of the requests served within a certain time (ms)
  50%    392
  66%    411
  75%    417
  80%    423
  90%    440
  95%    451
  98%    511
  99%   2670
 100%   2670 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 100-149 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   38.516 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5812500 bytes
HTML transferred:       5706200 bytes
Requests per second:    2.60 [#/sec] (mean)
Time per request:       385.156 [ms] (mean)
Time per request:       385.156 [ms] (mean, across all concurrent requests)
Transfer rate:          147.38 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   368  385  12.6    382     434
Waiting:      365  383  12.5    380     430
Total:        368  385  12.6    383     434

Percentage of the requests served within a certain time (ms)
  50%    383
  66%    389
  75%    391
  80%    395
  90%    401
  95%    412
  98%    419
  99%    434
 100%    434 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 150-182 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   27.880 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3712800 bytes
HTML transferred:       3622600 bytes
Requests per second:    3.59 [#/sec] (mean)
Time per request:       278.800 [ms] (mean)
Time per request:       278.800 [ms] (mean, across all concurrent requests)
Transfer rate:          130.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   250  279 141.7    262    1678
Waiting:      249  277 141.7    261    1677
Total:        250  279 141.7    262    1678

Percentage of the requests served within a certain time (ms)
  50%    262
  66%    268
  75%    272
  80%    274
  90%    279
  95%    288
  98%    292
  99%   1678
 100%   1678 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 150-182 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   28.034 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3712800 bytes
HTML transferred:       3622600 bytes
Requests per second:    3.57 [#/sec] (mean)
Time per request:       280.342 [ms] (mean)
Time per request:       280.342 [ms] (mean, across all concurrent requests)
Transfer rate:          129.33 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   248  280  21.9    277     391
Waiting:      246  278  21.8    275     390
Total:        248  280  21.9    277     391

Percentage of the requests served within a certain time (ms)
  50%    277
  66%    286
  75%    294
  80%    298
  90%    308
  95%    314
  98%    339
  99%    391
 100%    391 (longest request)
-----------------------------------------------------------------------------

Without patch

8 fields, 2 filters, title sort, Pager 0-49 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   19.486 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5007600 bytes
HTML transferred:       4900900 bytes
Requests per second:    5.13 [#/sec] (mean)
Time per request:       194.858 [ms] (mean)
Time per request:       194.858 [ms] (mean, across all concurrent requests)
Transfer rate:          250.96 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   146  195 296.4    163    3126
Waiting:      145  188 242.3    161    2583
Total:        146  195 296.4    163    3126

Percentage of the requests served within a certain time (ms)
  50%    163
  66%    168
  75%    173
  80%    175
  90%    181
  95%    184
  98%    257
  99%   3126
 100%   3126 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 filters, title sort, Pager 0-49 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   15.964 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5007600 bytes
HTML transferred:       4900900 bytes
Requests per second:    6.26 [#/sec] (mean)
Time per request:       159.644 [ms] (mean)
Time per request:       159.644 [ms] (mean, across all concurrent requests)
Transfer rate:          306.32 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   151  160   6.9    158     187
Waiting:      150  158   6.8    156     186
Total:        151  160   6.9    158     187

Percentage of the requests served within a certain time (ms)
  50%    158
  66%    160
  75%    162
  80%    164
  90%    169
  95%    173
  98%    182
  99%    187
 100%    187 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 filters, title sort, Pager 150-182 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   13.671 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3297800 bytes
HTML transferred:       3207600 bytes
Requests per second:    7.31 [#/sec] (mean)
Time per request:       136.714 [ms] (mean)
Time per request:       136.714 [ms] (mean, across all concurrent requests)
Transfer rate:          235.57 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   114  137 161.9    119    1739
Waiting:      113  136 161.9    118    1738
Total:        114  137 162.0    119    1739

Percentage of the requests served within a certain time (ms)
  50%    119
  66%    120
  75%    122
  80%    123
  90%    128
  95%    135
  98%    142
  99%   1739
 100%   1739 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 filters, title sort, Pager 150-182 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   12.252 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3297800 bytes
HTML transferred:       3207600 bytes
Requests per second:    8.16 [#/sec] (mean)
Time per request:       122.516 [ms] (mean)
Time per request:       122.516 [ms] (mean, across all concurrent requests)
Transfer rate:          262.86 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   114  122   6.0    121     153
Waiting:      112  121   6.0    120     151
Total:        114  122   6.0    121     153

Percentage of the requests served within a certain time (ms)
  50%    121
  66%    123
  75%    125
  80%    127
  90%    131
  95%    135
  98%    137
  99%    153
 100%    153 (longest request)
-----------------------------------------------------------------------------


With patch

Every 1st request of the 1st run primes the normalization cache.
8 fields, 2 computed, 2 filters, title sort, Pager 0-49 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   17.651 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5653100 bytes
HTML transferred:       5546400 bytes
Requests per second:    5.67 [#/sec] (mean)
Time per request:       176.512 [ms] (mean)
Time per request:       176.512 [ms] (mean, across all concurrent requests)
Transfer rate:          312.76 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    86  176 823.6     93    8330
Waiting:       85  128 346.8     92    3560
Total:         86  176 823.6     93    8330

Percentage of the requests served within a certain time (ms)
  50%     93
  66%     94
  75%     96
  80%     96
  90%    101
  95%    102
  98%    140
  99%   8330
 100%   8330 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 0-49 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   9.251 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5653100 bytes
HTML transferred:       5546400 bytes
Requests per second:    10.81 [#/sec] (mean)
Time per request:       92.505 [ms] (mean)
Time per request:       92.505 [ms] (mean, across all concurrent requests)
Transfer rate:          596.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    87   92   3.8     91     103
Waiting:       86   91   3.8     90     102
Total:         87   92   3.8     91     103

Percentage of the requests served within a certain time (ms)
  50%     91
  66%     93
  75%     94
  80%     95
  90%    100
  95%    102
  98%    103
  99%    103
 100%    103 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 50-99 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   15.717 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5756400 bytes
HTML transferred:       5650200 bytes
Requests per second:    6.36 [#/sec] (mean)
Time per request:       157.172 [ms] (mean)
Time per request:       157.172 [ms] (mean, across all concurrent requests)
Transfer rate:          357.66 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    85  157 644.6     92    6538
Waiting:       85  115 237.6     91    2467
Total:         85  157 644.6     92    6538

Percentage of the requests served within a certain time (ms)
  50%     92
  66%     93
  75%     94
  80%     96
  90%     99
  95%    104
  98%    123
  99%   6538
 100%   6538 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 50-99 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   8.918 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5756400 bytes
HTML transferred:       5650200 bytes
Requests per second:    11.21 [#/sec] (mean)
Time per request:       89.179 [ms] (mean)
Time per request:       89.179 [ms] (mean, across all concurrent requests)
Transfer rate:          630.36 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    83   89   5.2     88     110
Waiting:       82   88   5.2     87     109
Total:         83   89   5.2     88     110

Percentage of the requests served within a certain time (ms)
  50%     88
  66%     90
  75%     90
  80%     91
  90%     96
  95%    102
  98%    105
  99%    110
 100%    110 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 100-149 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   16.343 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5812500 bytes
HTML transferred:       5706200 bytes
Requests per second:    6.12 [#/sec] (mean)
Time per request:       163.434 [ms] (mean)
Time per request:       163.434 [ms] (mean, across all concurrent requests)
Transfer rate:          347.31 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    85  163 698.4     92    7077
Waiting:       84  118 256.7     91    2658
Total:         85  163 698.4     92    7077

Percentage of the requests served within a certain time (ms)
  50%     92
  66%     94
  75%     95
  80%     96
  90%    100
  95%    105
  98%    175
  99%   7077
 100%   7077 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 100-149 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   9.242 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5812500 bytes
HTML transferred:       5706200 bytes
Requests per second:    10.82 [#/sec] (mean)
Time per request:       92.423 [ms] (mean)
Time per request:       92.423 [ms] (mean, across all concurrent requests)
Transfer rate:          614.16 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    85   92   5.9     91     131
Waiting:       84   91   5.9     91     130
Total:         85   92   5.9     91     131

Percentage of the requests served within a certain time (ms)
  50%     91
  66%     92
  75%     93
  80%     94
  90%     99
  95%    103
  98%    110
  99%    131
 100%    131 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 150-182 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   12.309 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3712800 bytes
HTML transferred:       3622600 bytes
Requests per second:    8.12 [#/sec] (mean)
Time per request:       123.091 [ms] (mean)
Time per request:       123.091 [ms] (mean, across all concurrent requests)
Transfer rate:          294.56 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    72  123 448.7     78    4564
Waiting:       71   97 196.2     77    2038
Total:         72  123 448.7     78    4565

Percentage of the requests served within a certain time (ms)
  50%     78
  66%     80
  75%     81
  80%     81
  90%     84
  95%     88
  98%     99
  99%   4565
 100%   4565 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 computed, 2 filters, title sort, Pager 150-182 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   7.944 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3712800 bytes
HTML transferred:       3622600 bytes
Requests per second:    12.59 [#/sec] (mean)
Time per request:       79.439 [ms] (mean)
Time per request:       79.439 [ms] (mean, across all concurrent requests)
Transfer rate:          456.43 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    72   79   6.0     78     106
Waiting:       71   78   5.9     77     105
Total:         72   79   6.0     78     107

Percentage of the requests served within a certain time (ms)
  50%     78
  66%     80
  75%     82
  80%     82
  90%     88
  95%     93
  98%     98
  99%    107
 100%    107 (longest request)
-----------------------------------------------------------------------------

With patch

Every 1st request of the 1st run primes the normalization cache.
8 fields, 2 filters, title sort, Pager 0-49 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   17.417 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5007600 bytes
HTML transferred:       4900900 bytes
Requests per second:    5.74 [#/sec] (mean)
Time per request:       174.171 [ms] (mean)
Time per request:       174.171 [ms] (mean, across all concurrent requests)
Transfer rate:          280.77 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    83  174 844.5     88    8534
Waiting:       82  123 340.0     87    3488
Total:         83  174 844.5     88    8534

Percentage of the requests served within a certain time (ms)
  50%     88
  66%     90
  75%     91
  80%     92
  90%     95
  95%     98
  98%    136
  99%   8534
 100%   8534 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 filters, title sort, Pager 0-49 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   9.132 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      5007600 bytes
HTML transferred:       4900900 bytes
Requests per second:    10.95 [#/sec] (mean)
Time per request:       91.324 [ms] (mean)
Time per request:       91.324 [ms] (mean, across all concurrent requests)
Transfer rate:          535.48 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    83   91   8.4     90     161
Waiting:       82   90   8.4     89     160
Total:         83   91   8.4     90     161

Percentage of the requests served within a certain time (ms)
  50%     90
  66%     92
  75%     93
  80%     94
  90%     97
  95%     99
  98%    108
  99%    161
 100%    161 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 filters, title sort, Pager 150-182 (1st run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   11.808 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3297800 bytes
HTML transferred:       3207600 bytes
Requests per second:    8.47 [#/sec] (mean)
Time per request:       118.081 [ms] (mean)
Time per request:       118.081 [ms] (mean, across all concurrent requests)
Transfer rate:          272.74 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    70  118 411.2     76    4189
Waiting:       69   90 143.6     75    1511
Total:         70  118 411.2     76    4189

Percentage of the requests served within a certain time (ms)
  50%     76
  66%     77
  75%     80
  80%     80
  90%     81
  95%     85
  98%    109
  99%   4189
 100%   4189 (longest request)
-----------------------------------------------------------------------------

8 fields, 2 filters, title sort, Pager 150-182 (2nd run)

-----------------------------------------------------------------------------
Concurrency Level:      1
Time taken for tests:   7.626 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      3297800 bytes
HTML transferred:       3207600 bytes
Requests per second:    13.11 [#/sec] (mean)
Time per request:       76.255 [ms] (mean)
Time per request:       76.255 [ms] (mean, across all concurrent requests)
Transfer rate:          422.33 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    70   76   4.3     75      91
Waiting:       69   75   4.2     74      90
Total:         70   76   4.3     76      91

Percentage of the requests served within a certain time (ms)
  50%     76
  66%     77
  75%     78
  80%     79
  90%     82
  95%     85
  98%     90
  99%     91
 100%     91 (longest request)
-----------------------------------------------------------------------------
e0ipso’s picture

Fantastic! Those are the numbers we were hoping for. Let's see if we find a moment to digest these numbers into a TL;DR.

Thanks a lot @jibran!

gabesullice’s picture

I compiled and analyzed the stats from #85: https://docs.google.com/spreadsheets/d/18fDV_r9eUJFIzLN85PY6SBDRiVVAEpX1...

I focused primarily on computed fields for what's below (I don't think they were that big of a factor).

With a 100% warmed normalization cache: 75% faster
Without a warmed cache: 2.4 times slower

That 2.4 number really confuses me! The results should only be marginally slower than the pre-patch request, right? Just the cost of 50 DB reads. We know that they can't be that slow because the post-patch, warmed results are quite fast. What gives?

kim.pepper’s picture

According to https://symfony.com/doc/current/components/http_kernel.html#the-kernel-t...

Internally, the HttpKernel makes use of the fastcgi_finish_request PHP function. This means that at the moment, only the PHP FPM server API is able to send a response to the client while the server's PHP process still performs some tasks. With all other server APIs, listeners to kernel.terminate are still executed, but the response is not sent to the client until they are all completed.

So you may be waiting for the full php process to terminate and send a response?
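
As a minimal sketch of that deferral pattern (illustrative class and service names, not the code proposed here): a service queues the writes while the response is built and flushes them in a KernelEvents::TERMINATE listener, which under PHP-FPM runs after fastcgi_finish_request() has already sent the response.

<?php

namespace Drupal\example\EventSubscriber;

use Drupal\Core\Cache\CacheBackendInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Illustrative only: queues cache writes and flushes them on terminate.
 */
class DeferredCacheWriter implements EventSubscriberInterface {

  /**
   * Values queued for writing, keyed by cache ID.
   *
   * @var array
   */
  protected $queue = [];

  /**
   * The cache backend to write to.
   *
   * @var \Drupal\Core\Cache\CacheBackendInterface
   */
  protected $cache;

  public function __construct(CacheBackendInterface $cache) {
    $this->cache = $cache;
  }

  /**
   * Queues a value instead of writing it while building the response.
   */
  public function saveLater($cid, $data) {
    $this->queue[$cid] = $data;
  }

  /**
   * Writes the queued values once the kernel terminates.
   */
  public function onTerminate() {
    foreach ($this->queue as $cid => $data) {
      $this->cache->set($cid, $data);
    }
    $this->queue = [];
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    return [KernelEvents::TERMINATE => 'onTerminate'];
  }

}

On non-FPM SAPIs the same listener still runs, but the client keeps waiting until it finishes, which is the behavior the quoted Symfony docs describe.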

jibran’s picture

Yeah, I don't have FastCGI (PHP-FPM) on my setup.

ndobromirov’s picture

#87 - the google sheet is private.

#88 - even if he is not using the optimal server setup, users will see this as normal and just marginally slower, but the resources and time spent on the server side are going to be the same. So for scalability purposes it's correctly reported as 2.4× slower on cold caches... :(

Wim Leers’s picture

#88: Gah, that's right. I can't believe that's still not fixed :( @kim.pepper++

Alright, I think that means we should postpone this issue for now. We now know it can be done. Projects that need this now can apply the #78 patch. We can't do anything about PHP infrastructure blocking us.

gabesullice’s picture

Sorry @ndobromirov! I thought I clicked "anyone with the link" but it was "anyone in the organization with the link". Everyone should have access now.

Wim Leers’s picture

#3026264: UserAccessControlHandler::checkFieldAccess() denies access to mail field without varying by the ‘user’ cache context is fixed — that means at least that work-around won't be necessary anymore here 🙂

e0ipso’s picture

Alright, I think that means we should postpone this issue for now.

Do we want to? In #89 @jibran mentioned that the good numbers above don't take advantage of that (deferred cache writes via fastcgi_finish_request). I would assume this is a nice-to-have, not a blocker.

Also, code in #78 looks good to me.

gabesullice’s picture

@e0ipso, when writes aren't actually postponed until after the response is sent, they add so much time to the response that it quickly eats away the entire advantage of cached normalizations.

I did some back-of-the-envelope math, and even if just 4 entities are uncached in a response with 50 entities, you break even (it won't be faster or slower). However, any more than that ratio and it gets slow, fast.
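
(To make that back-of-the-envelope reasoning explicit in rough symbolic terms, not measured data: with N resources in the response, U of them uncached, a blocking cache-write cost of roughly w per entity, and a per-entity saving of roughly s on a cache hit, the cached approach only wins while U × w < (N − U) × s. With the per-entity write costs implied by the deltas in #89, U does not have to grow much before that inequality flips.)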

I really think we should just decorate the entity normalizer with the caching service created here in JSON:API Boost; that way Boost is entirely self-contained.

ndobromirov’s picture

I really think we should just decorate the entity normalizer with the caching service created here in JSON:API Boost; that way Boost is entirely self-contained.

+1

e0ipso’s picture

I did some back of the envelope math and even if just 4 entities are uncached in a response with 50 entities, you break even (it won't be faster or slower). However, any more that ratio and it gets slow, fast.

I think the key here is back of the envelope. I don't see why we should trust that guess over @jibran's proof. As you can see in #89, that scenario is factored into the results posted above.

I think that deferred cache writing is a nice improvement, but not a blocker.

e0ipso’s picture

This re-rolls #78 and changes the direction slightly. This patch was another casualty of #3015438: Wrap entity objects in a ResourceObject which carries a ResourceType.

It still keeps all the supporting code for caching the normalizations. However, it will not cache anything, because the normalization cacher follows a null-object pattern. JSON:API Boost will decorate the NullNormalizationCacher to do the deferred caching, either on terminate or using a queue. In other words, the EntityNormalizationCacher service from #78 has been moved to JSON:API Boost.
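
To illustrate the null-object approach in rough terms (this is a sketch with illustrative method names, not the exact code in the patch):

<?php

/**
 * Sketch of the contract the cacher exposes (method names are illustrative).
 */
interface NormalizationCacherInterface {

  /**
   * Reads a previously cached normalization, or returns NULL on a miss.
   */
  public function get($key);

  /**
   * Registers a normalization to be persisted later (or not at all).
   */
  public function saveLater($key, array $normalization_parts);

}

/**
 * Sketch of the null implementation that would ship by default.
 */
class NullNormalizationCacher implements NormalizationCacherInterface {

  public function get($key) {
    // Always a cache miss: this variant does no caching.
    return NULL;
  }

  public function saveLater($key, array $normalization_parts) {
    // Intentionally a no-op; a decorator (e.g. JSON:API Boost) can queue the
    // write for kernel.terminate or push it to a queue instead.
  }

}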

e0ipso’s picture

Status: Needs work » Needs review

I have not tested the patch in #98; it's probably broken in several ways, but it's worth sharing for discussion.

e0ipso’s picture

After a quick glance I noticed:

  1. +++ b/jsonapi.services.yml
    @@ -39,8 +39,14 @@ services:
    +  jsonapi.entity_normalization_cacher:
    

    This should be renamed to jsonapi.null_normalization_cacher:

  2. +++ b/jsonapi.services.yml
    @@ -115,6 +121,12 @@ services:
    +  cache.jsonapi_normalizations:
    +    class: Drupal\Core\Cache\CacheBackendInterface
    +    tags:
    +      - { name: cache.bin }
    +    factory: cache_factory:get
    +    arguments: [jsonapi_normalizations]
    

    This needs to be moved to JSON:API Boost.

Status: Needs review » Needs work

The last submitted patch, 98: 2819335--cache-normalizations--98.patch, failed testing. View results

e0ipso’s picture

Status: Needs work » Needs review
FileSize
12.02 KB
7.94 KB

A bunch of misc fixes for #98.

Status: Needs review » Needs work

The last submitted patch, 102: 2819335--cache-normalizations--102.patch, failed testing. View results

e0ipso’s picture

Status: Needs work » Needs review

FWIW these are the changes to JSON:API Boost to support cache persistence https://cgit.drupalcode.org/jsonapi_boost/diff/?h=2819335--cache-normali...


An alternative to the NullNormalizationCacher could be MemoryNormalizationCacher. That may create less friction to put into core.

e0ipso’s picture

Wim Leers’s picture

#97: That's fair. Thanks for reviving this :) I gave up on this issue in #91, at least for now, because I've got so many other things going on. But thanks to you, I gave this a fresh look. See below.


#89: The two very first ab results indicate that the numbers are still kinda difficult to interpret because the resource type repository cache hasn't been warmed yet — that still needs to happen during that first run. I should've advised to visit /jsonapi first, not /user. That'd make the numbers easier to compare. That's why we still see 100% 3926 (longest request) in the first run.

#97: Four thoughts:

  1. Drupal core has render caching turned on by default, and it has the same trade-off. 😃 In fact, it's worse (more expensive), because it does not defer the writing. At least in this case, streaming JSON parsers can read the entire response long before the cache writes complete. It's just that there is a significant period between the final response byte and the closing of the HTTP connection.
  2. Doing post-response writing here is an extra incentive for more web servers to support this :)
  3. @gabesullice was concerned that the previous test was skewed by the presence of so many computed fields. The 8 fields, 2 filters, title sort tests in #89 address this because they don't include any computed fields. They show not a 4x speed-up, but a ~1.8x speed-up: 8 fields, 2 filters, title sort, Pager 0-49 (2nd run) has a median of 158 ms in HEAD and 90 ms with the patch. That's also what @gabesullice concluded in #87.
  4. @jibran's test run shows the following:
    8 fields, 2 filters, title sort, Pager 0-49 (1st run) without the patch takes 3126 ms, and 8534 ms with the patch. Δ=5408 ms. The only difference is caching 50 normalized entities. It seems insane, but this appears to indicate that it takes ~100 ms per entity to write to the cache. Something seems wrong on @jibran's end, because that seems pretty much impossible. If it is right, though, then it's indeed ridiculously expensive to do this. That'd be proof to not do this. If I'm reading this right, then @gabesullice's analysis in #87 is not nearly strong enough in saying "no" to this 😃 But it really looks like something is off with @jibran's tests.
    Let's try to verify this: look at 8 fields, 2 filters, title sort, Pager 150-182 (1st run). Without the patch: 1739 ms, with the patch: 4189 ms. Δ=2450 ms. This is caching 30 normalized entities at ~80 ms per entity. Not 100 ms like above, but still in the same ballpark. So yes, @jibran's results are frustratingly high. I really wonder if something is wrong with his development environment…

#98: what's the change in direction? Interdiff would be very welcome.

#105:

+++ b/src/EventSubscriber/NullNormalizationCacher.php
@@ -0,0 +1,39 @@
+/**
+ * Caches entity normalizations after the response has been sent.
+ *
+ * This service uses a null pattern approach, which will do nothing but will
+ * allow other modules to decorate this service.
+ *
+ * @internal
+ * @see \Drupal\jsonapi\Normalizer\ResourceObjectNormalizer::getNormalization()
+ */
+class NullNormalizationCacher implements EventSubscriberInterface, NormalizationCacherInterface {

It will be hard to claim this is not a public PHP API.

Wim Leers’s picture

Issue tags: +Needs benchmarks

Given the numbers I discovered in @jibran's latest test run, I think it actually may be totally feasible to do this in JSON:API itself.

I think somebody else should reproduce @jibran's results for 8 fields, 2 filters, title sort, Pager 0-49. Adding half a dozen fields to article isn't hard, and then generating 50 random pieces of content with Drush also isn't too hard.

Heck, we could just import more realistic content by using the Aggregator module to pull in 50 pieces of content after configuring a bunch of fields with complex default values.

@e0ipso: would you have time for that?

e0ipso’s picture

This is what I did.

Scenarios

I have #105 applied. That doesn't do any caching at all, but it allows JSON:API Boost to decorate the Normalization Cacher to do it. The two scenarios are:

  1. JSON:API Boost disabled. Meaning that there is no caching.
  2. JSON:API Boost enabled. Meaning that normalizations are being cached.

Set Up

I created a new content type called Load. I configured it to have 9 fields, all multivalue except for the boolean. Note that in a real situation content types are likely to have more fields than that. However, I decided that if the performance boost is good enough for this case, it will be even better in real life.

Both page cache and dynamic page cache are disabled.

Benchmark

This is the benchmark script I run:

#! /usr/bin/env bash

# Rebuild all caches so each run starts from a cold cache.
drush cr -y > /dev/null
# Warm the resource type repository cache before measuring.
curl 'http://vanilla-drupal.local/jsonapi' -I > /dev/null
# 100 sequential requests against the collection being benchmarked.
ab -c 1 -n 100 -H 'Accept: application/vnd.api+json' 'http://vanilla-drupal.local/jsonapi/node/load'

Results

Scenario Resp (avrg) Resp (max) Resp (min)
No Cache 459 ms 817 ms 436 ms
With Cache 191 ms 925 ms 174 ms

You can see the massive improvement from 459 ms to 191 ms. The price to pay was to make the request with cold caches slightly slower, from 817 ms to 925 ms. Note that in a real scenario we won't be writing the caches for all 50 in a single request, but we'll have some already in the cache.

The ab outputs are linked here for completeness.

Additional Observations

Adding filtering did not change the outcome; it only made the responses slower overall, but the improvement remained the same.

Conclusion

I think we should go for it! I'm OK with the approach in #105, but I think we should move the cache persistence back to JSON:API.

e0ipso’s picture

This patch will do the persistence in JSON:API rather than JSON:API Boost.

The only thing left for JSON:API Boost is to decorate the normalization cacher and have the saveLater functionality optionally operate based on a queue (if php-fpm is not available). That and all the cache prefetching logic.

Status: Needs review » Needs work

The last submitted patch, 109: 2819335--cache-normalizations--109.patch, failed testing. View results

gabesullice’s picture

Whoa, those numbers are far less concerning! That makes me hopeful again!

I'd still like to discover why @jibran's results were so different. If the difference is just that you're using PHP-FPM and he's not (or something like that), then we're still stuck. I have my fingers crossed that that is not the case! 🤞

I like #109! Just a few things:

  1. +++ b/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,168 @@
    +  public function saveLater(ResourceType $resource_type, ResourceObject $object, array $normalization_parts) {
    ...
    +  protected function set(ResourceType $resource_type, ResourceObject $object, array $normalization_parts) {
    ...
    +  protected static function generateLookupRenderArray(ResourceType $resource_type, ResourceObject $object) {
    

    It looks like you missed a super tiny thing in your reroll. I mentioned this in passing in #78.3—there's no need to pass $resource_type here. The ResourceObject already contains it.

  2. +++ b/src/Normalizer/ResourceObjectNormalizer.php
    @@ -20,6 +22,13 @@ class ResourceObjectNormalizer extends NormalizerBase {
    +  private $cacher;
    

    This would be the first private property in JSON:API. I'm not opposed. Just pointing it out. Since we're doing that, should we mark the new classes final? The JSON:API Boost module doesn't look like it needs to extend anything.

  3. +++ b/jsonapi.services.yml
    @@ -39,8 +39,16 @@ services:
       serializer.normalizer.resource_object.jsonapi:
         class: Drupal\jsonapi\Normalizer\ResourceObjectNormalizer
    +    calls:
    +      - ['setCacher', ['@jsonapi.normalization_cacher']]
    ...
    +  jsonapi.normalization_cacher:
    +    class: Drupal\jsonapi\EventSubscriber\ResourceObjectNormalizationCacher
    +    calls:
    +      - ['setRenderCache', ['@render_cache']]
    

    Nit: let's use standard injection since we're not decorating anything.


I'm still concerned about #76. Do we have a test that proves that we're not breaking revisions with this? Since the cache doesn't vary by revision, I think we are.

I just landed a getVersionIdentifier method on ResourceObject in #2992836: Provide links to resource versions (entity revisions). That would help I think.

e0ipso’s picture

#111

Nit: let's use standard injection since we're not decorating anything.

I don't have a problem humoring you. That said, I'm curious about your preference here and its relationship to decoration.


This would be the first private property in JSON:API

This slipped through my fingers. I'll revert to protected and we can deal with these in #3007431: Make more things `final`.


  1. Nice catch. Will do.
gabesullice’s picture

I don't have a problem humoring you, if that's the case, I'm curious about your preference here...

It's mostly a nit. I'll refrain from inventing a passionate and well-reasoned argument ex post facto :P It's just a twitch I had while reading the diff—as I was reading I thought, "this thing is not like all the others" (in terms of the existing codebase).

and the relationship to decoration.

I think setter injection works well when one is decorating a class using extension. Using that pattern means the underlying class can change its constructor without the extending class being affected. This is when/where I see setter injection used most often.
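
As a rough illustration of that point (class names here are hypothetical, not the ones in the patch): a decorator built by extending the base class only depends on the setter, so the base class can change its constructor without breaking the decorator.

<?php

use Drupal\Core\Render\RenderCacheInterface;
use Psr\Log\LoggerInterface;

// Hypothetical base service whose constructor may evolve over time.
class ExampleNormalizationCacher {

  protected $logger;
  protected $renderCache;

  public function __construct(LoggerInterface $logger) {
    // Constructor dependencies can change in later releases...
    $this->logger = $logger;
  }

  // ...while collaborators injected through setters stay stable.
  public function setRenderCache(RenderCacheInterface $render_cache) {
    $this->renderCache = $render_cache;
  }

}

// A decorating subclass only relies on the setter, so constructor changes in
// the parent do not ripple into it.
class BoostedNormalizationCacher extends ExampleNormalizationCacher {

  public function setRenderCache(RenderCacheInterface $render_cache) {
    parent::setRenderCache($render_cache);
    // Add prefetching or queueing behavior here.
  }

}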

Wim Leers’s picture

@jibran just mentioned in the API-First call that indeed there were some performance problems on his test site independent of JSON:API. It was related to custom access control logic apparently — I'll let him comment with more details.

jibran’s picture

I really wonder if something is wrong with his development environment…

This observation was striking to me, so after doing a performance sweep of the site we fixed some major issues. These issues were not directly affecting the JSON:API response, but the site was running extra MySQL queries on each request, which was indeed affecting the response time.

ndobromirov’s picture

#109 will need a re-roll. It is not applying cleanly on the latest dev.
Any guidelines on how to benchmark this would be appreciated.

ndobromirov’s picture

Ping (1)...

e0ipso’s picture

This is mostly blocked on getting specific test coverage for revisions and translations.

jibran’s picture

e0ipso’s picture

Status: Needs work » Needs review

I did this re-roll on the plane. It was harder than I thought.

e0ipso’s picture

e0ipso’s picture

Issue tags: -Needs reroll

Status: Needs review » Needs work

The last submitted patch, 121: 2819335--cache-normalizations--119.patch, failed testing. View results

jibran’s picture

I did this re-roll on the plane. It was harder than I thought.

Thanks for the reroll. It was indeed hard to reroll, but it is looking good.

e0ipso’s picture

e0ipso’s picture

Status: Needs work » Needs review
e0ipso’s picture

Issue tags: +Needs tests

Adding the tag; we need explicit tests for:

  1. Translation cache poisoning
  2. Revision cache poisoning

Status: Needs review » Needs work

The last submitted patch, 125: 2819335--cache-normalizations--125.patch, failed testing. View results

jibran’s picture

This change took so much of my time to figure out 🤦‍♀️.

This patch is becoming more complex by the minute, but nonetheless it's great work.

So we are replacing this

+++ b/src/Normalizer/ResourceObjectNormalizer.php
@@ -49,23 +67,89 @@ class ResourceObjectNormalizer extends NormalizerBase {
-    $normalizer_values = [];
-    foreach ($fields as $field_name => $field) {
-      $in_sparse_fieldset = in_array($field_name, $field_names);
-      // Omit fields not listed in sparse fieldsets.
-      if (!$in_sparse_fieldset) {
-        continue;
-      }
-      $normalizer_values[$field_name] = $this->serializeField($field, $context, $format);
-    }

With this

+++ b/src/Normalizer/ResourceObjectNormalizer.php
@@ -49,23 +67,89 @@ class ResourceObjectNormalizer extends NormalizerBase {
+    $non_cached_requested_fields = array_intersect_key($non_cached_fields, array_flip($field_names));
+    foreach ($non_cached_requested_fields as $field_name => $field) {
+      $normalizer_values['fields'][$field_name] = $this->serializeField($field, $context, $format);
+    }

Just making sure we are not regressing from older performance improvements.

e0ipso’s picture

#129 @jibran

This patch is becoming complex by the minute but nonetheless great work.

How are you perceiving that complexity increase? This just refactors for readability's sake.

So we are replacing this […] Just making sure we are not regressing from older performance improvements.

The only thing it does is avoid the continue statement to simplify the foreach. We do that by removing, beforehand, the items that would have hit the continue.

Performance should be untouched.
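
A stripped-down comparison of the two shapes, with made-up field data, just to show the refactor is behavior-preserving for the sparse fieldset filtering:

<?php

$fields = ['title' => 'a', 'status' => 'b', 'body' => 'c'];
$field_names = ['title', 'body'];

// Old shape: skip non-requested fields inside the loop with continue.
$old = [];
foreach ($fields as $field_name => $field) {
  if (!in_array($field_name, $field_names)) {
    continue;
  }
  $old[$field_name] = $field;
}

// New shape: drop non-requested fields up front, then loop without continue.
$requested = array_intersect_key($fields, array_flip($field_names));
$new = [];
foreach ($requested as $field_name => $field) {
  $new[$field_name] = $field;
}

var_dump($old === $new); // bool(true): same keys, values, and order.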

kristiaanvandeneynde’s picture

Instead of misusing/abusing the render cache again, please have a look at the progress here, more specifically the changes to DynamicPageCacheSubscriber: #2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way. The issue is nearly done (fixing test failures) and would be a huge DX boost for people needing caches like you do here.

If you like what you see, maybe we could collaborate on finishing the other issue first and using the variation cache here?

e0ipso’s picture

#131 that would be fantastic @kristiaanvandeneynde. I did not know about that issue. I can see this playing out in two different ways. We land this, then when the other lands, we refactor. Or if #2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way lands first, we can refactor this before merging.

gabesullice’s picture

+1 to #132 and keeping the two issues independent. That said, I've bumped #2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way in my "things I want to see done" list and will try to help push that forward soon. Thanks for bringing it back up @kristiaanvandeneynde!

kristiaanvandeneynde’s picture

Thanks, hopefully the other one lands first because of all the test failures I had to fix there :D Would be great to have either of you as reviewers once (almost) all of the tests go green.

e0ipso’s picture

Project: JSON:API » Drupal core
Version: 8.x-2.x-dev » 8.8.x-dev
Component: Code » jsonapi.module

Moving to the core issue queue.

e0ipso’s picture

Version: 8.8.x-dev » 8.7.x-dev

This is not a new feature, it's an implementation detail. We can release this in a patch release of 8.7.x. Updating the version accordingly.

e0ipso’s picture

Issue tags: +Needs reroll

We need to change the relative paths for the patch to apply and be testable.

e0ipso’s picture

Status: Needs work » Needs review
FileSize
14.85 KB

I don't usually submit patches to core (so I'm a bit ignorant). My local environment uses the drupal-composer project, so after applying the patch I had to account for the repo split.

I used this command. Let's see if it applies:

git diff 8.7.x --src-prefix a/core/ --dst-prefix b/core/ > 2819335--cache-normalizations--137.patch

I'm using 8.7.x because, since this is a performance improvement, there is no need to wait for 8.8.

e0ipso’s picture

Yay! It came back green.

gabesullice’s picture

Assigned: Unassigned » gabesullice
Status: Needs review » Needs work

Assigning to myself to fix the nits and add a test for revisions.


  1. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,178 @@
    + * @internal
    + * @see \Drupal\jsonapi\Normalizer\ResourceObjectNormalizer::getNormalization()
    

    This should get the same default message as all other internal classes have now that JSON:API is in core.

  2. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,178 @@
    +   * The things to cache after the response has been sent.
    

    Nit: "The resource objects"

  3. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,178 @@
    +  public function saveLater(ResourceObject $object, array $normalization_parts) {
    +    $resource_type = $object->getResourceType();
    +    $key = $resource_type->getTypeName() . ':' . $object->getId();
    +    $this->toCache[$key] = [$object, $normalization_parts];
    

    I'm still concerned about this + revisions. I'll write a test for this myself to convince myself it's fine or find anything we've forgotten.

  4. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,178 @@
    +   * @todo Refactor/remove once https://www.drupal.org/node/2551419 lands.
    

    Good todo.

  5. +++ b/core/modules/jsonapi/src/Normalizer/ResourceObjectNormalizer.php
    @@ -49,23 +67,89 @@ class ResourceObjectNormalizer extends NormalizerBase {
    +    $entity_normalizaton = array_filter(array_merge(
    

    Nit: s/entity/resource_object also normalization.

  6. +++ b/core/modules/jsonapi/src/Normalizer/ResourceObjectNormalizer.php
    @@ -49,23 +67,89 @@ class ResourceObjectNormalizer extends NormalizerBase {
    +    // Add links if missing.
    +    $normalizer_values['base']['links'] = isset($normalizer_values['base']['links'])
    +      ? $normalizer_values['base']['links']
    +      : $this->serializer
    +        ->normalize($object->getLinks(), $format, $context)
    +        ->omitIfEmpty();
    

    Can we normalize links outside of this method so they're not part of the cached normalization? My hopes for hypermedia controls involve more dynamic links that can be less cacheable than fields are.

    I'd be happy to open a follow-up issue to explore caching links too, but I think it'll delay this issue if we discuss it here.

gabesullice’s picture

Issue tags: -Needs reroll
e0ipso’s picture

I'm still concerned about this + revisions. I'll write a test for this myself to convince myself it's fine or find anything we've forgotten.

Yes, I'm hesitant about this part as well.

e0ipso’s picture

Can we normalize links outside of this method so they're not part of the cached normalization? My hopes for hypermedia controls involve more dynamic links that can be less cacheable than fields are.

Hard to say… I'm leaning towards excluding them and caching links separately if they prove to be a resource hog. However I remember @ndobromirov identifying the link generation as an issue when using multiple levels of paragraphs.

@gabesullice what if we exclude them from normalization now and create a spike to figure out the impact of link generation?

gabesullice’s picture

Status: Needs work » Needs review
FileSize
4.08 KB
15.06 KB

This patch addresses my remaining concerns.

Re: #140.3 and #142 (i.e. per-revision caching): I verified that we already have tests that would have surfaced any issues in ResourceTestBase::testRevisions. What was happening is that the render cache automatically detects, and then varies by, the url.query_args:resourceVersion cache context, which we set on all responses for revisionable entity types. Using this context, it knows to redirect the cache lookup accordingly. However, this black magic felt a bit unreliable to me, especially because of the imminent #2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way. Therefore, I made this behavior more explicit by directly adding the cache context to our cached value. That resolves all my concerns re: caching + revisions.
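
Conceptually (this is only an illustrative fragment, not the literal hunk from the patch), "directly adding the cache context" means the metadata stored with the cached normalization explicitly varies by the resource version query argument, so a request for ?resourceVersion=... can never be served the default revision's cache entry:

<?php

use Drupal\Core\Cache\CacheableMetadata;

// The patch references this string via the
// ResourceVersionRouteEnhancer::CACHE_CONTEXT constant.
$metadata = (new CacheableMetadata())
  ->addCacheContexts(['url.query_args:resourceVersion']);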

Re: #143: Yes. This was my thought exactly. I've removed link generation from the cache process for now. I've also created an issue for further research: #3054441: [no patch] Investigate caching of resource object links

gabesullice’s picture

Assigned: gabesullice » Unassigned

From my perspective, this is RTBC. I'll let someone else confirm that status though.

e0ipso’s picture

+++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
@@ -122,6 +128,7 @@ protected function set(ResourceObject $object, array $normalization_parts) {
+      ->addCacheContexts([ResourceVersionRouteEnhancer::CACHE_CONTEXT])

Ah! We did miss something wrt revisions. Did you manually test that everything is working in revisions then?

gabesullice’s picture

Ah! We did miss something wrt revisions. Did you manually test that everything is working in revisions then?

Yes, I verified by manually testing and by reading our tests to ensure they already cover my concerns. We didn't really miss anything; it was more that it was already working by coincidence. See #144:

render cache automatically detects and then varies by the url.query_args:resourceVersion cache context which we set on all responses for revisions entity types.... I made this behavior more explicit by directly adding the cache context to our cached value.

So, I didn't really have to do anything. I just did this to feel safer when/if our implementation changes.

e0ipso’s picture

Status: Needs review » Reviewed & tested by the community

All things check out now!

kristiaanvandeneynde’s picture

#2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way is fully green now. Would it be of interest to create a combined patch or do we just wait and see which one gets committed first?

e0ipso’s picture

@kristiaanvandeneynde that's awesome! I hope more people can start leveraging that service.

I think it does make sense to have both options: #144 and a combined patch. Would you be open to creating it?

kristiaanvandeneynde’s picture

I really hope I don't break your RTBC status with this :D

This patch contains 100% of your patch and only the variation cache + variation cache unit test from my patch. Because that's all we care about here: Does your work function with my work powering its cache?

I do have one question: Why not use dependency injection on ResourceObjectNormalizationCacher? Using a setter like that seems weird.

Edit:

+++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
@@ -0,0 +1,148 @@
+use Drupal\Core\Render\RenderCacheInterface;

Forgot to remove this.

e0ipso’s picture

Thanks @kristiaanvandeneynde! So impressive that we got a green patch that is so big at the first try. This speaks volumes about the right abstractions.


About the order of commits: I just noticed that this one is for 8.7.x and the other one is for 8.8.x. That means that we'll need these issues to go in separately, if we keep the branch targets as is.

e0ipso’s picture

Issue summary: View changes
e0ipso’s picture

e0ipso’s picture

Note to reviewers: please review #144. #151 is here only as a proof of concept and validation of the abstractions created elsewhere.

kristiaanvandeneynde’s picture

@e0ipso The patch is fully BC, so I'm actually hoping to get it into 8.7.x. I posted it against 8.8.x so it works there and can then be backported if it doesn't apply cleanly to 8.7.x (which it currently does).

Status: Reviewed & tested by the community » Needs work

The last submitted patch, 151: drupal-2819335-151-COMBINED_POC.patch, failed testing. View results

e0ipso’s picture

Version: 8.7.x-dev » 8.8.x-dev
Status: Needs work » Reviewed & tested by the community
FileSize
15.06 KB

Re-uploading the patch in #144 and changing the target version to 8.8.x after talking to @berdir in Slack.

Status: Reviewed & tested by the community » Needs work

The last submitted patch, 158: 2819335-144.patch, failed testing. View results

e0ipso’s picture

I am convinced that the failures in Drupal\Tests\media_library\FunctionalJavascript\MediaLibraryTest are unrelated to the patch.

yogeshmpawar’s picture

Status: Needs work » Reviewed & tested by the community

Setting back to RTBC!
The unrelated test failure is not happening anymore.

larowlan’s picture

Status: Reviewed & tested by the community » Needs work
  1. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,185 @@
    + * @see https://www.drupal.org/project/jsonapi/issues/3032787
    

    should this move to core now?

  2. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,185 @@
    +    else {
    

    nit, we don't need the else here

  3. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,185 @@
    +    assert(array_keys($normalization_parts) === ['base', 'fields']);
    

should we be doing this assert in ::saveLater? I.e. is it too late once we're in terminate?

  4. +++ b/core/modules/jsonapi/src/Normalizer/ResourceObjectNormalizer.php
    @@ -24,6 +25,13 @@ class ResourceObjectNormalizer extends NormalizerBase {
    +  private $cacher;
    

    Should we document why this is private (Which isn't our normal approach)?

  5. +++ b/core/modules/jsonapi/src/Normalizer/ResourceObjectNormalizer.php
    @@ -49,23 +67,85 @@ public function normalize($object, $format = NULL, array $context = []) {
    +  private function getNormalization(array $field_names, ResourceObject $object, $format = NULL, array $context = []) {
    ...
    +  private static function buildEmptyNormalization(ResourceObject $object) {
    

    same here re private?

  6. +++ b/core/modules/jsonapi/tests/src/Functional/ResourceTestBase.php
    @@ -377,6 +377,9 @@ protected function getData() {
    +    // Don't use cached normalizations in tests.
    +    $this->container->get('cache.jsonapi_normalizations')->deleteAll();
    

so I'm not seeing any new tests here covering the fact that cache entries build up with each field as it's requested? Am I missing something?

Also, I'm not sold on saveLater being an accurate name given that the deferral only happens on PHP-FPM, so perhaps we should be documenting that in the patch.

e0ipso’s picture

  1. Yes.
  2. Good point.
  3. I'll look into it.
  4. I'll make it protected. Force of habit.
  5. Same as above.
  6. There are no specific tests because the extensive existing test coverage already exercises those code paths. Do you feel specific tests are needed?

I think saveLater is accurate. When you call saveLater, the actual save happens later in the request cycle. That is regardless of whether it's on the user thread or not. However, I have no attachment to the name. I'll change it to saveOnTerminate. Will that work?

ndobromirov’s picture

+1 for saveOnTerminate

e0ipso’s picture

Status: Needs work » Needs review
FileSize
69.73 KB
58.5 KB

I applied the changes proposed in #163.

Status: Needs review » Needs work

The last submitted patch, 165: 2819335--cache-normalizations--165.patch, failed testing. View results

e0ipso’s picture

Status: Needs work » Reviewed & tested by the community

Back to RTBC after addressing @larowlan's feedback.

larowlan’s picture

Status: Reviewed & tested by the community » Needs review
  1. +++ b/core/modules/jsonapi/tests/src/Functional/NodeTest.php
    @@ -350,6 +352,45 @@ class NodeTest extends ResourceTestBase {
    +    $url = Url::fromRoute(sprintf('jsonapi.%s.individual', static::$resourceTypeName), ['entity' => $uuid]);
    +    /* $url = $this->entity->toUrl('jsonapi'); */
    

    we don't normally comment lines out?

  2. +++ b/core/modules/jsonapi/tests/src/Functional/NodeTest.php
    @@ -350,6 +352,45 @@ class NodeTest extends ResourceTestBase {
    +    $this->assertCount(2, $cache['#data']['fields']);
    

    should we also assert that the second key is title?
    ❤️ this test

e0ipso’s picture

Status: Needs review » Reviewed & tested by the community
FileSize
10.02 KB
77.49 KB
  1. I changed that occurrence and the others already present in other testing files.
  2. Good point. I did a small refactor to ensure that doesn't happen again should the testing become more intensive.

Moving back to RTBC for @larowlan (is this the procedure?)

e0ipso’s picture

Yeah… I forgot --src-prefix a/core/ --dst-prefix b/core/ in the patch in #169. Let's try again.

e0ipso’s picture

Wim Leers’s picture

Status: Reviewed & tested by the community » Needs work

I apparently last commented here more than 3 months ago, in #114! Been too busy with media embedding to have time for this — sorry!


#115 🤓

#131: I am so looking forward to reviewing that issue/patch! I'm very sorry I haven't gotten to it yet! I agree with @e0ipso in #132 and @gabesullice in #133 though that we should land this separately, and then when #2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way lands we can refactor the code that this issue added. That'll be extra validation too :) Or, if #2551419: Abstract RenderCache into a separate service that is capable of cache redirects in a non-render array-specific way ends up landing first, then we'll just refactor this. The point is that neither should be blocking.

#140.3 + #142: agreed with those concerns. @gabesullice addressed those concerns in #144 fortunately :)

#151: 🤘🚀


  1. +++ b/core/modules/jsonapi/src/Access/EntityAccessChecker.php
    @@ -31,7 +31,7 @@ use Symfony\Component\Routing\RouterInterface;
    - * @see https://www.drupal.org/project/jsonapi/issues/3032787
    + * @see https://www.drupal.org/node/3032787
    

    I don't think we should be changing all these existing docs in this patch. That's definitely out-of-scope.

    Let's create a "minor task" issue for that, to keep this patch tight in scope.

    This seems to be the primary reason this patch grew from 15K in #158 to 79K now.

  2. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,173 @@
    +  public function setRenderCache(RenderCacheInterface $render_cache) {
    +    $this->renderCache = $render_cache;
    +  }
    +
    

    Why does this use a setter? That question was asked by @kristiaanvandeneynde in #151 too but was never answered AFAICT.

    Note that the setter on ResourceObjectNormalizer is fine, that was explained by @gabesullice in #113. It's only this setter that's odd.

  3. --- /dev/null
    +++ b/core/modules/jsonapi/src/Normalizer/NormalizerBase.php.orig
    

    This is a git conflict resolution leftover file :)

  4. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -0,0 +1,185 @@
    +   * @todo Refactor/remove once https://www.drupal.org/node/2551419 lands.
    

    This applies to not just this method, but the entire class. Because for example ::generateLookupRenderArray() would then also disappear!

  5. +++ b/core/modules/jsonapi/src/Normalizer/ResourceObjectNormalizer.php
    @@ -49,23 +67,85 @@ public function normalize($object, $format = NULL, array $context = []) {
    +      $normalization_parts['base'],
    ...
    +   *   - 'base': array, the base normalization of the entity, that does not
    ...
    +   *   - 'fields': CacheableNormalization for all requested fields.
    ...
    +      'base' => [
    ...
    +      'fields' => [],
    

    Should we make 'base' a constant on this class? Same for 'fields'.

    For example: RESOURCE_CACHE_SUBSET_BASE and RESOURCE_CACHE_SUBSET_FIELDS. That also allows us to more clearly document this.

e0ipso’s picture

#172:

  1. I don't disagree with you. However that was a committer request, meaning that he didn't think it impacts reviewability.
  2. No particular reason. I don't think this is a problem. Do you have contradictory experience?
  3. Good catch! Will remove.
  4. 👍🏽 I'll move to the class docblock.
  5. 👍🏽
Wim Leers’s picture

  1. But @larowlan in #162.1 was only talking about the one new link, not about all links 😀
  2. It's just unlike just about every other service in Drupal core. The rule of thumb is: constructor injection unless it's a decorator, then use setter injection.
e0ipso’s picture

  1. I didn't see any complaint in #168. Also, I don't like the idea of only changing that one link. I'd rather not block this issue on building consensus on this. WYDT?
  2. … ok I will change this to constructor injection. I don't think that's a good policy, but let's remove friction to get this moving forward.
Wim Leers’s picture

#175.1: I think it's trivial enough that any future git archeology won't be significantly complicated by this, there'll just be more noise to dig through. So I can live with that staying in.

e0ipso’s picture

Status: Needs work » Needs review
FileSize
77.8 KB
10.22 KB

Applied the feedback in #172.

Wim Leers’s picture

Status: Needs review » Reviewed & tested by the community
Issue tags: -Needs tests
  1. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -14,9 +14,31 @@ use Symfony\Component\HttpKernel\KernelEvents;
    +   * sparse field sets. This typically includes the resource type, and the
    

    Übernit: s/resource type/resource type name/

  2. +++ b/core/modules/jsonapi/src/EventSubscriber/ResourceObjectNormalizationCacher.php
    @@ -14,9 +14,31 @@ use Symfony\Component\HttpKernel\KernelEvents;
    +   * incrementally built across different requests of the same resource object.
    

    Übernit: s/of/for/ — I think 🤔, because I'm not a native speaker.

  3. +++ b/core/modules/jsonapi/src/Normalizer/ResourceObjectNormalizer.php
    @@ -112,14 +113,16 @@ class ResourceObjectNormalizer extends NormalizerBase {
    +    $fields = &$normalizer_values[ResourceObjectNormalizationCacher::RESOURCE_CACHE_SUBSET_FIELDS];
    

    Why the assignment by reference?

    Ah, because you use it for a more terse assignment statement on line 121! This is fine then.

The two übernits can be fixed on commit.

kristiaanvandeneynde’s picture

#178.2 Correct

You could say "two requests of the same type (i.e.: GET, POST, ...)" but when you talk about what you're requesting, it's "requests for the same resource object".

e0ipso’s picture

itsekhmistro’s picture

Hello all,

We're seeing LOTS of performance issues with the normalization & serialization process in JSON:API.
According to profiling (Blackfire.io) results, it takes 90% of all wall time on CPU.

For example (a profiled request to a taxonomy vocabulary):

Wall Time     1.23s
I/O Wait     67.5ms
CPU Time      1.16s
Memory       25.3MB
Network         n/a     n/a     n/a
SQL          63.4ms   134rq

For heavier content (nodes of a type with a lot of fields) it's even worse:

Wall Time     6.56s
I/O Wait      276ms
CPU Time      6.29s
Memory       38.2MB
Network         n/a     n/a     n/a
SQL           264ms  1694rq

The thing is that the JSON:API module spends 90% of the time on the normalization & serialization process,
and it does this every time (unless there is a render cache entry for it).

@Wim Leers, have you considered an option to radically change the approach on WHEN to do the resource normalization?
I mean putting the normalized content into a cache (or a static index / DB table) on entity create/update.

It looks perfectly aligned with the idea of "Resource (entity) normalization should use partial caching" presented in this issue.

What do you think?

ndobromirov’s picture

#181 you might be experiencing #3017291: \Symfony\Component\Serializer\Serializer::getNormalizer() is slow, which is not JSON:API but a Symfony issue that affects D8 core. You can go through the thread and find workarounds that you can use now. In the near future this should be resolved (hopefully in 8.8).

Render caching as defined in this issue will help your second case, as the whole normalized data structure will be render-cached, including related entities.

The first one will be inevitably slow (depending on the complexity of your payloads).

If you can share more granular details (a Blackfire result or similar) we could look into the issues as well.

itsekhmistro’s picture

@ndobromirov, Nikolay, you are right about JSON:API and Symfony.

But
>The first one will be inevitably slow (depending on the complexity of your payloads).

Why do we need to accept "The first one will be inevitably slow".
We can avoid it if the normalized presentation is created once the entity is created and updated once the entity is updated.
It shouldn't be flushed on Clear caches either.

We can't accept being slow on high load to JsonAPI.

e0ipso’s picture

#181 this is the use case this module is aiming to fix. Did you have a chance to profile with the patch applied and without it for comparison?

#183 please take a look at JSON:API Boost to address that concern. In the meantime, we need to get this patch merged for that module to work.

Wim Leers’s picture

#181:

Thanks for taking the time to report back! Can you please provide more background information to interpret the numbers you're sharing? For example, the only way that it's possible to be getting a 1.23 second response time for requesting a single taxonomy is if it's being run on an ancient server. So can you please share for both examples:

  1. the request URL?
  2. how many entities are being returned?
  3. how many fields each of the bundles/entity types in the response has?

Thanks!

have you considered an option to radically change the approach on WHEN to do the resource normalization - I mean to put into a cache ( or static index - DB table ) the normalized content on entity create/update?

I'm glad to see you thinking along! However, that would result in significantly more complex code for pretty much equally fast responses. Doing this would be a clear example of premature optimization. The current patch already offers a huge performance improvement, and the benefit is greater for more complex scenarios.

It'd be very helpful if you could apply the patch and report back the improved performance you observe. :) Like @e0ipso asks in #184.

#183:

We can avoid it if the normalized presentation is created once the entity is created and updated once the entity is updated.

You're making a big assumption here: that cache capacity is unbounded. Or at least that the cache capacity is sufficient to cache every single stored entity. This may be true for your use case, but definitely is not true for everyone.


I think this is still ready to be committed.

larowlan’s picture

Status: Reviewed & tested by the community » Needs work

Sorry for the confusion here, this one dropped off my radar

But @larowlan in #162.1 was only talking about the one new link, not about all link 😀

Wim is correct, I was only interested in the one new link, the old ones are out of scope here and would make a good novice issue.

itsekhmistro’s picture

FileSize
462.34 KB
159.89 KB

#185:

Can you please provide more background information to interpret the numbers you're sharing? For example, the only way that it's possible to be getting a 1.23 second response time for requesting a single taxonomy is if it's being run on an ancient server

Sure,
To give a better overview of the situation and context, I attached the profiling results below.
The test is executed on a MacBook Pro (2018) in a Docker-based environment.

To demonstrate the issue, I have selected a simple vocabulary with 3 fields.
Content is translatable. There are 10 languages.
There are 2 patches applied to JSONAPI from:
* ResourceTypeRepository computes ResourceType value objects on *every request*
* (this issue) Resource (entity) normalization should use partial caching

1. the request URL?
2. how many entities are being returned?
3. how many fields each of bundles/entity types in the response have?

1) URL: http://.../taxonomy_term/methods
2) there are 6 Entities in the result ( 6 is all available terms in the vocabulary)
3) there are 3 fields with types:
* Text (plain)
* Text (plain, long)
* Boolean

TEST STEPS

1. Preparation
* drush cr
* Open homepage
* Open JSON API for .../node/basic
// The preparation gives a fresh start and then warms up Drupal caches.
// The goal of the preparation steps is to give clean and reproducible results.
2. Do test:
* blackfire --samples 1 curl http://.../taxonomy_term/methods

Test result:
Wall time 855ms
Serializer::Normalize 490 ms = 57.3% of Wall time
Only 6 terms with 3 fields (2 text fields and 1 boolean field).

PS.
Important,
in the given test scenario there is no improvement from the patch in this issue.
Here, the test scenario was selected on purpose.
However, there is a significant improvement from the patch when used in other scenarios.

>You're making a big assumption here: that cache capacity is unbounded. Or at least that the cache capacity is sufficient to cache every single stored entity. This may be true for your use case, but definitely is not true for everyone.

It's always about some tradeoffs.
In my case, we have applied the approach I described above: the rendered cache for an entity is saved when the entity is created/updated.
It mitigates the issue of the slow "Serializer::normalize" on requests to JSON:API,
but the cost is still there on entity create/update.

--
I think the patch is ready to be committed.

itsekhmistro’s picture

FileSize
420.6 KB

In addition to #187:

Wim Leers’s picture

Status: Needs work » Reviewed & tested by the community
FileSize
25.8 KB
54.15 KB

#186: Okay, done! Back to RTBC. Descoped hunks are in descoped.txt, to make it easy for @e0ipso to create a new issue to fix that everywhere else :)

#187:

in the given test scenario there is no improvement from the patch in this issue.
Here, the test scenario was selected on purpose.
However, there is a significant improvement from the patch when used in other scenarios.

Can you please create a new issue for this? I have never seen a ~850 ms response time for 6 entities in JSON:API. I didn't see that on the old 2014 MacBook Pro I used until a few months ago, nor do I see it on the 2018 MacBook Pro I've had for a few months now. I'd love to fix this problem for you, but this issue is not the right place. Like you say, this patch is already significantly improving several scenarios :)

I think the patch is ready to be committed.

I'm glad we agree :)

Wim Leers’s picture

Priority: Normal » Major

Also priority Normal is quite the understatement!

jibran’s picture

Another nit.

+++ b/core/modules/jsonapi/src/Normalizer/ResourceObjectNormalizer.php
@@ -49,23 +67,92 @@ public function normalize($object, $format = NULL, array $context = []) {
+    $entity_normalizaton = array_filter(array_merge(
+      $normalization_parts[ResourceObjectNormalizationCacher::RESOURCE_CACHE_SUBSET_BASE],
+      [
+        'attributes' => CacheableNormalization::aggregate($attributes)->omitIfEmpty(),
+        'relationships' => CacheableNormalization::aggregate($relationships)->omitIfEmpty(),
+      ]
+    ));

We don't need array_merge here. We can just add both arrays.

> For associative arrays the only difference is when using + the array doesn’t get overwritten but with array_merge it does.

$array1['foo'] = 'Bar 1';
$array2['foo'] = 'Bar 2';

print_r($array1 + $array2); // Array ( [foo] => Bar 1 )
print_r($array2 + $array1); // Array ( [foo] => Bar 2 )
print_r(array_merge($array1, $array2)); // Array ( [foo] => Bar 2 )
print_r(array_merge($array2, $array1)); // Array ( [foo] => Bar 1 )

In this case, 'attributes' and 'relationships' will not exist in $normalization_parts[ResourceObjectNormalizationCacher::RESOURCE_CACHE_SUBSET_BASE] so it can just be +.

jibran’s picture

It is a feature request, but it is an under-the-hood change, so I'm not sure whether we need a change record. I think we do need a release notes section in the IS.

e0ipso’s picture

@jibran I find myself avoiding the + operator for arrays altogether; I personally find it confusing. We certainly can use it here, but I'm not sure I see a compelling reason to do so. Is there any benefit of one over the other?

Wim Leers’s picture

Issue summary: View changes
FileSize
1.29 KB
25.79 KB

Issue summary expanded like @jibran requested in #192. Also improved it overall :)

RE: + vs array_merge(): good catch. I don't feel strongly either way. I think + is slightly preferable because it conveys that we know for a fact that the sets of keys are different and therefore + is safe to use. For the casual reader, they'd have to look up RESOURCE_CACHE_SUBSET_BASE and determine which keys are associated with it to understand what this is doing. So, made the change. But honestly, it doesn't matter. We're clearly down to übernits.

Wim Leers’s picture

Made a mistake in #194, although surprisingly tests still passed locally 🤔

Also spotted a typo in surrounding LoC, fixing that too.

  • larowlan committed 6033d39 on 8.8.x
    Issue #2819335 by Wim Leers, e0ipso, gabesullice, jibran,...
larowlan’s picture

Status: Reviewed & tested by the community » Fixed
Issue tags: +8.8.0 release notes

Committed 6033d39 and pushed to 8.8.x. Thanks! 🎉

Thanks everyone for all the hard work here, this is a big win.

Wim Leers’s picture

Version: 8.8.x-dev » 8.7.x-dev
Status: Fixed » Reviewed & tested by the community

#195 also applies cleanly and passes tests on 8.7. It'd be super valuable for all Drupal 8.7-powered JSON:API instances out there to get this improvement in the next 8.7 patch release rather than at the end of the year. It does not cause API changes.

e0ipso’s picture

It'd be super valuable for all Drupal 8.7-powered JSON:API instances

Big +1 from me.

jibran’s picture

It'd be super valuable for all Drupal 8.7-powered JSON:API instances

I second that.

catch’s picture

I'm not really very keen on backporting this - it's adding two new services and new classes etc., and projects running JSON:API can probably also apply the patch via composer.

Wim Leers’s picture

Version: 8.7.x-dev » 8.8.x-dev
Status: Reviewed & tested by the community » Fixed

That's unfortunate, but fair.

Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.

gabesullice’s picture