
Level 10

12/31/24


 

AEMaaCS Caching Strategy: TTL vs Invalidation

by @daniel-strmecki

 

Overview

Caching is a critical component of the performance and scalability of web applications. For Adobe Experience Manager as a Cloud Service (AEMaaCS), understanding the nuances of its caching mechanisms is essential to optimize system efficiency as well as the author and end-user experience. Two primary caching strategies to consider are Time-to-Live (TTL) and API-based cache invalidation.

In AEMaaCS, caching occurs at multiple layers to ensure optimal performance and scalability. The Dispatcher acts as a caching and load-balancing layer: it caches responses from the AEM Publish instance, reducing load and response times. The CDN caches content closer to end users, leveraging a distributed network of servers to minimize latency. In some setups, organizations choose to deploy their own CDN in front of the Adobe-provided Fastly CDN, which introduces a double-CDN architecture.

In this article, we explain the caching techniques used on all layers, with and without a custom CDN, and recommend best practices to follow.

 

Key points:

  • Cache Layers
    • In AEMaaCS, there are at least three cache layers to consider: Dispatcher, CDN, and browser.
    • The cache invalidation logic needs to be aligned across layers to ensure content consistency.
  • TTL-based Approach
    • TTL assigns a specific duration for cached content to remain valid.
    • Use long TTL for content that remains static over time like images, style sheets, and JavaScript files.
    • Use shorter TTLs for dynamic content like HTML pages or API responses.
    • After the TTL expires, the content is considered stale and is refreshed upon the next request.
    • This method is straightforward to implement and easy to maintain.
    • It may serve outdated content if the underlying data changes before the TTL expires.
  • Cache Invalidation
    • Explicitly remove or update cached content when the source data changes.
    • Leverage invalidation to update only the specific resources that change, avoiding unnecessary cache disruptions.
    • Works OOTB with the Dispatcher cache, but requires custom implementation for the CDN cache.
    • Ensures users receive the most current content.
    • Harder to implement and maintain compared to TTLs.
  • Recommendation
    • An effective caching strategy in AEMaaCS is a balance between TTL and custom invalidation.
    • Use the automatic cache flush/invalidation already implemented between AEM and Dispatcher.
    • Enable TTL at the Dispatcher level for more control; note that it has no effect unless AEM sends the TTL headers (see the header sketch after this list).
    • Managing the CDN cache is much easier using TTL-based invalidation.
    • Custom-developed cache invalidation via the CDN purge API can be applied selectively, only where it is really needed (a purge sketch also follows below).
    • Be careful if you introduce custom content-referencing logic (for example, using queries).
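
To make the TTL recommendation concrete, below is a minimal sketch of how TTL headers could be sent from AEM so that the Dispatcher (with TTL enabled) and the CDN can honor them. The filter class, package, paths, and max-age values are illustrative assumptions, not part of the original article.

```java
package com.example.caching; // hypothetical package

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.osgi.service.component.annotations.Component;

/**
 * Hypothetical Sling request filter that sets TTL headers on publish responses,
 * so both the Dispatcher (if TTL caching is enabled) and the CDN can honor them.
 */
@Component(
        service = Filter.class,
        property = {
                "sling.filter.scope=REQUEST",
                "service.ranking:Integer=100"
        })
public class CacheControlFilter implements Filter {

    // Example TTLs (assumptions): short for HTML pages, long for versioned static assets.
    private static final String HTML_CACHE_CONTROL = "max-age=300, stale-while-revalidate=60";
    private static final String STATIC_CACHE_CONTROL = "max-age=31536000, immutable";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (request instanceof SlingHttpServletRequest && response instanceof SlingHttpServletResponse) {
            SlingHttpServletRequest slingRequest = (SlingHttpServletRequest) request;
            SlingHttpServletResponse slingResponse = (SlingHttpServletResponse) response;
            String path = slingRequest.getRequestPathInfo().getResourcePath();

            if (path.startsWith("/etc.clientlibs/")) {
                // Static, versioned client libraries: long TTL.
                slingResponse.setHeader("Cache-Control", STATIC_CACHE_CONTROL);
            } else if (path.startsWith("/content/")) {
                // Dynamic HTML content: short TTL.
                slingResponse.setHeader("Cache-Control", HTML_CACHE_CONTROL);
            }
        }
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) { /* no-op */ }

    @Override
    public void destroy() { /* no-op */ }
}
```

With headers like these in place, the Dispatcher TTL feature and the CDN simply respect whatever max-age AEM emits, which keeps the layers aligned.

For the selective purge mentioned above, the sketch below assumes the AEMaaCS-managed CDN accepts PURGE requests authenticated with a purge key configured in the CDN configuration, using the X-AEM-Purge-Key and X-AEM-Purge headers; verify the exact contract against the current Adobe documentation before relying on it.

```java
package com.example.caching; // hypothetical package

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Minimal sketch of purging a single path from the AEMaaCS-managed CDN cache.
 * Header names and the purge-key setup are assumptions to verify against Adobe's docs.
 */
public class CdnPurgeClient {

    private final HttpClient httpClient = HttpClient.newHttpClient();

    public int purge(String publicUrl, String purgeKey) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(publicUrl))
                .method("PURGE", HttpRequest.BodyPublishers.noBody())
                .header("X-AEM-Purge-Key", purgeKey) // assumption: purge key configured in the CDN config
                .header("X-AEM-Purge", "soft")       // assumption: soft purge marks content stale instead of deleting it
                .build();
        HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode();
    }
}
```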

 

Full Article

Read the full article at https://meticulous.digital/blog/f/aemaacs-caching-strategy-ttl-vs-invalidation to find out more.


Q&A

Please use this thread to ask questions relating to this article.

4 Comments


Community Advisor

1/1/25

Awesome article, @daniel-strmecki. Curious to know your take if there is also an in-memory cache in the mix, like Caffeine or Google Guava.

 

Thanks

Narendra


Level 10

1/1/25

Hi Narendra,

Thanks! Regarding libraries for in-memory caching, I am personally not using any of them. All you need to achieve an in-memory cache in Java is a HashMap and a proper cleanup mechanism. Therefore, I prefer not to add extra dependencies to my project if they are not really needed. You can choose to use Caffeine or Guava if you want; of the two, I think Caffeine is more performant and focused on one thing only: in-memory caching.

That being said, my team also uses in-memory caching to optimize slow queries or APIs. One thing to be careful about is, again, flushing/invalidating the in-memory cache properly when data changes. For example, if you are caching query results for content fragments, you again need either of the following (a minimal sketch follows the list):

  • a proper in-memory cache cleanup mechanism when new content fragments get published
  • TTL-based in-memory cache cleanup if that is acceptable for the use case (this requires some understanding on the editors' side)
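
For illustration, here is a minimal sketch of such a TTL-based in-memory cache built on a plain ConcurrentHashMap; the class and method names are just examples, not something from the article or a specific library.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/**
 * Minimal TTL-based in-memory cache, in the spirit of
 * "a HashMap plus a proper cleanup mechanism".
 */
public class SimpleTtlCache<K, V> {

    private static final class Entry<T> {
        final T value;
        final Instant expiresAt;

        Entry(T value, Instant expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final Duration ttl;

    public SimpleTtlCache(Duration ttl) {
        this.ttl = ttl;
    }

    /** Returns the cached value, or loads and caches it if missing or expired. */
    public V get(K key, Supplier<V> loader) {
        Entry<V> entry = entries.get(key);
        if (entry == null || Instant.now().isAfter(entry.expiresAt)) {
            V value = loader.get();
            entries.put(key, new Entry<>(value, Instant.now().plus(ttl)));
            return value;
        }
        return entry.value;
    }

    /** Explicit invalidation hook, e.g. called when new content fragments are published. */
    public void invalidate(K key) {
        entries.remove(key);
    }
}
```

An event listener on content fragment publication could call invalidate(...) to cover the first option, while the ttl constructor argument covers the second.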

 

Hope this helps,

Daniel


Level 7

2/2/25

Thanks for the detailed explanation, @daniel-strmecki! You mentioned using TTL for static content and cache invalidation for dynamic content. In cases where both strategies are needed, how do you recommend ensuring content consistency between layers, especially when dealing with a double-CDN setup?


Level 10

2/2/25

Hi @AmitVishwakarma, when mixing both approaches, ensuring consistency and the team's understanding of the solution indeed becomes a challenge. I know many teams mix and match, but personally, I don't like it. Unless there are strong requirements for custom invalidation, I would go with the TTL-based approach because it is simple to explain to anyone, does not require customizations, and is supported by all layers. However, there will be exceptions where content needs to be available ASAP. In those cases, we can still separate that content with SSI/ESI and use a no-cache TTL, or perhaps fetch it dynamically in the frontend.