
SOLVED

Recommendation on Dispatcher Cache Configuration: TTL vs Flush


Level 8

Hello,

In my project, we have been using the classic Dispatcher stat-file-based invalidation from the start. This worked fine until we added a lot of content fragments, custom APIs, etc. Now we need custom Dispatcher flush logic to invalidate API paths, or pages that use queries to fetch data from content fragments. And with more and more sites going live, it keeps getting more complex.
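For context, the kind of custom flush logic described here usually boils down to issuing extra invalidation requests against the Dispatcher's invalidation endpoint for each affected path. A minimal sketch (the hostname and content paths are placeholders, and on AEMaaCS the flush is normally triggered by the built-in replication agents rather than by hand):

```
# Hypothetical example: after a content fragment is republished, explicitly
# invalidate a dependent API path on the Dispatcher (host/path are placeholders)
curl -X POST "http://dispatcher.example.com/dispatcher/invalidate.cache" \
     -H "CQ-Action: Activate" \
     -H "CQ-Handle: /content/mysite/api/products" \
     -H "Content-Length: 0"
```

The complexity Daniel describes comes from having to compute the right set of `CQ-Handle` paths whenever a referenced fragment changes.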

Looking into this, I stumbled upon an AdaptTo presentation that recommends switching completely to TTL-based invalidation. So there are two options:

  • Option 1: Stick to default/stat file-based invalidation
    • Pros: Content is cached longer (until it is changed or republished), and new content is immediately available for preview on the AEMaaCS publisher
    • Cons: Requires complex invalidation logic for APIs and content fragments
  • Option 2: Switch to TTL-based invalidation
    • Pros: We can have the same cache rules/headers defined for both Dispatcher and CDN, and no complex invalidation logic is required for APIs and content fragments
    • Cons: Content would effectively be cached twice as long, since the TTL applies to both the Dispatcher and the CDN, and new content would NOT be immediately available for preview on the AEMaaCS publisher
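For reference, option 2 hinges on the Dispatcher honoring the publisher's cache headers. A minimal sketch of the relevant farm setting (the docroot path is a placeholder):

```
# dispatcher.any farm fragment (sketch; docroot is a placeholder)
/cache
  {
  /docroot "/mnt/var/www/html"
  # With TTL enabled, the Dispatcher respects Cache-Control/Expires response
  # headers and serves a cached file only until it has expired
  /enableTTL "1"
  }
```

With `/enableTTL "1"`, the same `Cache-Control` header from the publisher drives both the Dispatcher and the CDN, which is exactly why the effective cache time can double in option 2.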

I always root for simplicity, so the second option looks interesting. But I wanted to get some input from the community. Have you tried this? What is your experience? What do you think?

 

Thanks in advance,

Daniel


1 Accepted Solution


Correct answer by
Employee Advisor

You can have both at the same time

 

This is what we do with a customer. They have a lot of content that does not change frequently, for which classic invalidation is well suited. But they also have sections of content that need to expire very frequently, and for those we have a TTL configured. That's the basic principle.
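A hedged sketch of how such a split might look in the publish-side Apache configuration (the paths are hypothetical): long-lived content carries no TTL header and relies on stat-file invalidation, while the frequently-expiring sections get an explicit short max-age that both the Dispatcher (with `/enableTTL "1"`) and the CDN will honor:

```
# vhost fragment (sketch; paths are placeholders)
<LocationMatch "^/content/mysite/(api|stock)/">
    # Short TTL: Dispatcher and CDN both expire these entries after 5 minutes
    Header set Cache-Control "max-age=300"
</LocationMatch>
```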

 

Built on top of that is a series of SSI statements, which are evaluated on the publish tier. While the (rarely changing) pages themselves are mostly static, they contain SSI statements to include dynamic snippets (which use the TTL-based approach). That works very well, especially because the total number of snippets is comparatively low and they are fast to render, so expiring them very frequently does not impose much load on the system.
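To illustrate the SSI pattern (the file names are made up): a mostly static page includes a dynamic snippet that is cached separately with its own short TTL, and Apache's mod_include (`Options +Includes`) stitches them together on delivery:

```
<!-- Mostly static page, cached until republished (stat-file invalidation). -->
<!-- The snippet below lives in a separate, short-TTL cached file and is    -->
<!-- resolved by mod_include each time the page is served.                  -->
<div class="ticker">
  <!--#include virtual="/content/mysite/snippets/stock-ticker.html" -->
</div>
```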

 

This works very reliably; we have not had problems with this approach yet.

 


3 Replies


Community Advisor

Hi @daniel-strmecki 
I have not tried TTL-based Dispatcher caching, but we evaluated it for our use case and found that we can't enable TTL-based cache invalidation for certain paths, so there are trade-offs.

We stick to not caching APIs and the like at the Dispatcher, but cache them at the CDN using a TTL.

So it's one hit per request within a timeframe. :P



Arun Patidar
