When there is a change in the header or footer, all of the site's pages get invalidated in the dispatcher. This increases the load on the publish instances, because the dispatcher has to fetch every page again to serve new requests. What is the best way to avoid this sudden load on the publish instances? Do you use re-fetching flush agents? Are there any known issues or problems with using re-fetching agents?
https://stackoverflow.com/questions/40358559/why-do-we-use-refetching-dispatcher-flush-agents-in-aem
In my view, the re-fetch feature works fine in some cases, depending on your dispatcher cache design.
If the stat file level is zero, or you flush a huge number of pages during business hours, this feature can backfire and put the system under load. It makes sense to use it outside business hours with a limited volume of content flushes.
You may also want to restrict it with "Resource Only" to cut down the volume of flushes and re-fetching.
For your use case, it would be better to cache the header/footer as separate fragments, so there is no need to flush the entire site just for two shared fragments.
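As a sketch of the fragment idea, assuming the header and footer are rendered at their own URLs (all paths, globs, and the stat file level below are illustrative, not from the original thread), the dispatcher can cache them as ordinary documents so that a change to the header invalidates only the fragment files, not every page:

```
# Hypothetical excerpt from dispatcher.any -- paths and values are illustrative.
/cache
  {
  /docroot "/var/www/html"
  /rules
    {
    # Cache everything, including the shared fragments rendered at
    # their own URLs (e.g. /content/site/fragments/header.html).
    /0000 { /glob "*" /type "allow" }
    }
  # A deeper stat file level scopes auto-invalidation to a subtree,
  # so one page activation does not touch unrelated sections.
  /statfileslevel "3"
  }
```

Pages would then pull the fragment in via an include mechanism (Sling Dynamic Include, an AJAX call, or an SSI/ESI-style include) rather than baking the header markup into every cached page.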
I'm afraid re-fetch will not provide relief in your case, as it is aimed at refreshing *flushed* rather than *invalidated* content, and what you describe is a problem of excessive invalidation.
The only solution is to find ways to limit the scope of auto-invalidation. The suggestion to split page caching into fragments, separating aggregated content from page-exclusive content, seems to be the right answer.
Sling Dynamic Include can help here; dynamically loading menus and footers with JavaScript is an even better solution.
Either approach avoids the duplicated work of generating the menu on every page (usually the most CPU-heavy part of a page), and with that the website may withstand large-scope invalidations.
Alternatively, you might place the shared header/menu fragment at the root path of the website.
For example, if your stat level means you invalidate everything under /en/us, you can put the shared fragments under the language root, render them using selectors (e.g. /en/us.menu.html), and mark them as flushable in the /invalidate section of the dispatcher configuration (allow *.menu.html).
With that in place, you could consider increasing the stat file level without impacting your website's cache consistency.
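A minimal sketch of the /invalidate section described above, assuming the menu fragment is rendered with a `.menu.html` selector (the globs are illustrative):

```
# Hypothetical /invalidate section of dispatcher.any.
/invalidate
  {
  # Deny by default, then auto-invalidate ordinary HTML pages
  # via the stat files.
  /0000 { /glob "*" /type "deny" }
  /0001 { /glob "*.html" /type "allow" }
  # Explicitly cover the shared menu fragments (e.g. /en/us.menu.html)
  # so they are refreshed when flushed, even as the stat file level
  # is raised for regular pages.
  /0002 { /glob "*.menu.html" /type "allow" }
  }
```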
Alternatively, you might consider using the ACS AEM Commons dispatcher flush rules (https://adobe-consulting-services.github.io/acs-aem-commons/features/dispatcher-flush-rules/index.ht...)
and setting up rules that map website pages to the header fragment paths that aggregate them.
If your internal inter-page data dependencies are too complex to manage this way, you should consider switching to a TTL-based invalidation strategy.
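If you do move to TTL-based invalidation, Dispatcher 4.1.11 and later support it via the /enableTTL option; a hedged sketch, assuming the publish tier sends Cache-Control or Expires headers on its responses:

```
# Hypothetical excerpt from dispatcher.any -- docroot is illustrative.
/cache
  {
  /docroot "/var/www/html"
  # With TTL enabled, the dispatcher honours Cache-Control / Expires
  # response headers and re-fetches a cached document once it expires,
  # instead of relying solely on stat-file invalidation.
  /enableTTL "1"
  }
```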