Hi All,
Could you please confirm which approach is best for clearing the CDN cache in a multi-legged architecture:
1 Author - 2 Publish - 2 Dispatchers - 1 CDN
1. Create a CDN flush agent on the Author and flush the CDN cache on replication.
2. Create a CDN flush agent on one of the publish instances and trigger the CDN cache flush from there.
3. Set up a notify agent on one of the dispatchers and trigger the CDN flush via an API call from the dispatcher.
All of the above approaches have some limitations, so I want to understand the industry practice and the most favored approach for this architecture. If there is any other approach, do let me know.
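For context, whichever of the three triggers we end up using, the flush itself boils down to a purge/invalidate call against the CDN API. Below is a minimal sketch of that call; the endpoint, token, and paths are placeholders rather than our real configuration, and the actual payload and authentication differ per CDN (Akamai, for instance, uses its own EdgeGrid authentication).

```python
# Minimal sketch of a CDN purge call; the endpoint, token and paths are
# placeholders. Real CDN APIs (Akamai, Fastly, CloudFront, ...) differ in
# authentication and payload format.
import requests

CDN_PURGE_ENDPOINT = "https://cdn-api.example.com/purge"   # placeholder
CDN_API_TOKEN = "replace-with-a-real-token"                # placeholder

def purge_paths(paths):
    """Ask the CDN to invalidate the given URL paths."""
    response = requests.post(
        CDN_PURGE_ENDPOINT,
        json={"objects": paths},
        headers={"Authorization": f"Bearer {CDN_API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(purge_paths(["/content/mysite/en/home.html"]))
```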
Thanks,
Rajeev
Dear Rajeev,
The best answer I have seen on this in years is this:
"
You cannot control the order in which the replications happen; they run asynchronously and can be blocked or delayed for various reasons (e.g. a publish instance might be down for a restart).
If you want to enforce that the Akamai invalidation agent is triggered only after the other replications have happened, you have to run all activations through a workflow and use synchronous replication to replicate to the publish instances; in the next step you can invoke the Akamai invalidation agent. But imagine the case where a publish instance is down; then your complete replication is blocked!
I typically solve this problem by not invalidating Akamai at all, but by using TTL-based expiration on Akamai. You will deliver outdated content, but delivering 10-minute-old content is typically not a problem.
Jörg
"
Regards,
Peter
Peter,
On a site with frequent content changes, the TTL needs to be low so that content keeps refreshing at short intervals. But a low TTL is a problem on a high-volume site, where content becomes stale too quickly and requests go back to the dispatcher to fetch fresh content.
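To put rough numbers on that tradeoff (the figures below are purely illustrative assumptions, not measurements from our site):

```python
# Back-of-the-envelope origin-offload estimate; all numbers are illustrative
# assumptions. With a TTL of T seconds, each cached page goes back to the
# dispatcher at most once per T seconds per edge location.
requests_per_sec = 500    # assumed total traffic at the CDN
distinct_pages = 2000     # assumed number of cacheable pages
edge_locations = 50       # assumed number of CDN edges serving the site

for ttl_seconds in (60, 300, 600):
    # Worst case: every page is refreshed once per TTL window at every edge,
    # but the origin can never see more than the total incoming traffic.
    origin_rps = min(requests_per_sec, distinct_pages * edge_locations / ttl_seconds)
    offload = 1 - origin_rps / requests_per_sec
    print(f"TTL {ttl_seconds:>3}s -> ~{origin_rps:,.0f} req/s back to the "
          f"dispatcher (offload ~{offload:.0%})")
```

Even with these rough assumptions, cutting the TTL from 10 minutes to 1 minute is the difference between the dispatcher seeing roughly a third of the traffic and effectively all of it, which is exactly my concern.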