Hello,
In my project, we have been using classic Dispatcher stat-level invalidation from the start. This worked fine until we added a lot of content fragments, custom APIs, etc. Now we need custom Dispatcher flush logic to invalidate API paths and pages that use queries to fetch data from content fragments. And with more and more sites going live, it is getting increasingly complex.
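(For context, the kind of custom flush logic meant here could look roughly like the sketch below: a small utility that POSTs an invalidation request to the Dispatcher flush endpoint for every path affected by a change. This is only an illustration, not our actual code; the host name, endpoint and path list are placeholders.)

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

/**
 * Minimal sketch of a custom flush: for every path affected by a change
 * (e.g. pages or API endpoints backed by a content fragment), POST an
 * invalidation request to the Dispatcher flush endpoint.
 */
public class CustomDispatcherFlush {

    // placeholder host; in practice this comes from configuration
    private static final String FLUSH_ENDPOINT =
            "http://dispatcher.example.com/dispatcher/invalidate.cache";

    private final HttpClient client = HttpClient.newHttpClient();

    public void flush(List<String> affectedPaths) throws Exception {
        for (String path : affectedPaths) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(FLUSH_ENDPOINT))
                    // CQ-Action/CQ-Handle mimic what the standard flush agent sends
                    .header("CQ-Action", "Activate")
                    .header("CQ-Handle", path)
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                // in a real setup this would be logged and retried
                System.err.println("Flush failed for " + path + ": " + response.statusCode());
            }
        }
    }
}
```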
Looking into this, I stumbled upon an adaptTo() presentation that recommends switching completely to TTL-based invalidation. So two options are available: keep extending the custom flush logic on top of stat-based invalidation, or switch completely to TTL-based invalidation.
I am always cheering for simplicity, so the second option looks interesting. But I wanted to get some input from the community. Have you tried this? What is your experience? What do you think?
Thanks in advance,
Daniel
Hi @daniel-strmecki
I have not tried TTL-based Dispatcher caching, but we evaluated it for our use case and identified that we cannot enable TTL-based cache invalidation for certain paths, so there are trade-offs.
We stick to not caching APIs and similar things at the Dispatcher, but cache them at the CDN using a TTL.
So only one request per resource hits the back end within a given timeframe. :P
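For illustration, here is a minimal, hypothetical sketch of how such a CDN-side TTL could be driven from AEM: a Sling request filter that adds a Cache-Control max-age to API responses, which a CDN (or the Dispatcher with /enableTTL) uses to expire the entries on its own. The path prefix and the 300-second TTL are made-up example values; keeping these paths out of the Dispatcher cache would still be handled in the farm rules.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.osgi.service.component.annotations.Component;

/**
 * Sketch of a Sling request filter that adds a short Cache-Control max-age
 * to API responses, so the CDN expires them on its own instead of relying
 * on flush requests from the publish tier.
 */
@Component(
        service = Filter.class,
        property = {
                "sling.filter.scope=request",
                "service.ranking:Integer=1000"
        })
public class ApiCacheControlFilter implements Filter {

    private static final String API_PREFIX = "/content/example/api"; // placeholder
    private static final int TTL_SECONDS = 300; // example TTL

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (request instanceof HttpServletRequest && response instanceof HttpServletResponse) {
            HttpServletRequest httpRequest = (HttpServletRequest) request;
            HttpServletResponse httpResponse = (HttpServletResponse) response;
            if (httpRequest.getRequestURI().startsWith(API_PREFIX)) {
                // the CDN caches the response for TTL_SECONDS; only one request
                // per TTL window reaches the back end
                httpResponse.setHeader("Cache-Control", "max-age=" + TTL_SECONDS);
            }
        }
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) { /* nothing to initialize */ }

    @Override
    public void destroy() { /* nothing to clean up */ }
}
```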
Thanks @arunpatidar. I would very much appreciate your POV on this, @Jörg_Hoh.
You can have both at the same time
This is what we do with a customer. They have a lot of content that does not change frequently, for which the classic invalidation is well suited. But they also have sections of content that need to expire very frequently, and for these we have a TTL configured. That's the basic principle.
Built on top of that is a series of SSI statements, which are evaluated on the publish side. And while the (rarely changing) pages themselves are mostly static, they contain SSI statements to include dynamic snippets (which use the TTL-based approach). That works very well, especially because the total number of snippets is comparatively low and they are fast to render, so expiring them very frequently does not impose much load on the system.
It works very reliably; we have not had problems with that approach yet.
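To make the snippet part of that setup a bit more concrete, here is a hedged sketch (not Jörg's actual implementation) of the page-side half: a Sling servlet that only emits an SSI include directive instead of rendering the dynamic content inline. Apache mod_include then resolves the directive on delivery, so the cached page stays static while the snippet path itself can be cached with a short TTL. The resource type and snippet path are invented examples.

```java
import java.io.IOException;
import javax.servlet.Servlet;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.osgi.service.component.annotations.Component;

/**
 * Sketch of a component renderer that delegates the dynamic part to SSI:
 * instead of rendering the content inline, it writes an include directive
 * that Apache resolves when the (otherwise static, stat-invalidated) page
 * is delivered.
 */
@Component(
        service = Servlet.class,
        property = {
                "sling.servlet.resourceTypes=example/components/dynamic-snippet", // hypothetical
                "sling.servlet.extensions=html"
        })
public class SnippetIncludeServlet extends SlingSafeMethodsServlet {

    @Override
    protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response)
            throws IOException {
        response.setContentType("text/html");
        // the snippet itself is rendered by a separate request to this path,
        // which the Dispatcher/CDN caches with a short TTL
        response.getWriter().write(
                "<!--#include virtual=\"/content/example/snippets/teaser.html\" -->");
    }
}
```

The snippet resource (here /content/example/snippets/teaser.html) would then be served with a short Cache-Control max-age, for example via a filter like the one sketched earlier in the thread, so it expires frequently without touching the surrounding page.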