
Impact on Dispatcher cache when 10k pages are deleted


Level 4

Hi,
As part of optimizing authoring activity, the business is looking to deactivate and delete 10k pages on the prod author instance using a Groovy script.
We think this might impact cache flushing on the Dispatcher, since 10k cache flush requests will be created.
Is there any suggestion on how to better handle this?
Thanks,


3 Replies


Community Advisor

Hi @ashish_mishra1 ,

You can try to implement a custom flushing strategy. For a large volume of deactivations, avoid a full cache flush.
Instead, use the CQ-Path header in the replication (flush) agent's configuration to specify the exact paths that need to be invalidated.
This targets only the affected content and minimizes the impact on the rest of the cache.
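For reference, a single targeted invalidation request sent to the Dispatcher looks roughly like this (the host and page path below are placeholders; these are the same headers the flush agent sends):

    curl -X POST \
      -H "CQ-Action: Deactivate" \
      -H "CQ-Handle: /content/my-site/en/page-to-remove" \
      -H "Content-Length: 0" \
      -H "Content-Type: application/octet-stream" \
      http://dispatcher-host/dispatcher/invalidate.cache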

Adjust statfileslevel: configure the /statfileslevel property in the dispatcher.any file.
This setting determines down to which level of the content tree .stat files are created and checked.
With an appropriately high level, a flush only touches the .stat files along the invalidated path, so unrelated parts of the cache stay valid and each flush request has less impact on the Dispatcher.
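As an illustration, the relevant part of dispatcher.any could look like this (the level value is only an example and should match your content depth):

    /cache
      {
      /docroot "/var/www/html"
      /statfileslevel "3"
      /invalidate
        {
        /0000 { /glob "*" /type "allow" }
        }
      }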

 

Hope it helps!

-Tarun


Level 10

You could split the operation into batches: instead of processing all 10,000 pages at once, break the operation into smaller batches (e.g., 100 or 500 pages per batch). This reduces the number of simultaneous cache flush requests and minimizes the risk of overwhelming the dispatcher or the author instance. Besides, you can add delays, i.e. introduce a small pause between batches so the system can process cache flushes and other background tasks.
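A minimal sketch of that idea for the AEM Groovy Console - the paths, batch size and delay are placeholders, and it assumes the console's standard resourceResolver/session/getService bindings:

    import com.day.cq.replication.ReplicationActionType
    import com.day.cq.replication.Replicator
    import com.day.cq.wcm.api.PageManager

    // Normally built from a query; hard-coded here only as an example
    def pagePaths = ["/content/my-site/en/old-page-1", "/content/my-site/en/old-page-2"]

    def replicator = getService(Replicator)
    def pageManager = resourceResolver.adaptTo(PageManager)

    int batchSize = 200      // pages per batch - tune after testing on stage
    long delayMs  = 30000    // pause between batches (30 s)

    pagePaths.collate(batchSize).eachWithIndex { batch, i ->
        batch.each { path ->
            // Deactivate first so the Dispatcher gets a flush event for this path
            replicator.replicate(session, ReplicationActionType.DEACTIVATE, path)
            def page = pageManager.getPage(path)
            if (page) {
                // shallow = false (remove child pages too), autoSave = false (save per batch)
                pageManager.delete(page, false, false)
            }
        }
        session.save()
        println "Finished batch ${i + 1} (${batch.size()} pages)"
        sleep(delayMs)   // let flush requests and background tasks catch up
    }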

Points of attention:

  • It is crucial to run the script during off-peak hours to minimise the impact on end-users and reduce the load on the dispatcher.
  • Keep an eye on system performance (CPU, memory, response times) during the script execution. If the system shows signs of stress, pause the script and resume later.
  • Before running the script in production, test it in a staging environment with a similar number of pages. This helps you gauge the impact and fine-tune batch sizes and delays.


Community Advisor

Hi @ashish_mishra1,
 

Divide and conquer - break the pages into batches (say 500) and schedule them during off hours.
This should help spread out the flush events at the Dispatcher.

Besides the stat file strategy mentioned by @TarunKumar, you can also use a CDN purge. If you have a CDN (Akamai, CloudFront, etc.) in front of the Dispatcher, you can flush the cache at folder level. Targeting a handful of high-level purges is better than issuing 10k small invalidations.
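Purely as an illustration, assuming a self-managed Fastly service with URL purge and surrogate keys enabled (the managed Fastly in AEM as a Cloud Service works differently, and Akamai/CloudFront have their own purge APIs) - the host, token and key below are placeholders:

    # Purge a single URL
    curl -X PURGE https://www.example.com/content/my-site/en/old-section/

    # Purge everything tagged with a surrogate key (e.g. one key per content folder)
    curl -X POST \
      -H "Fastly-Key: $FASTLY_API_TOKEN" \
      https://api.fastly.com/service/$SERVICE_ID/purge/old-section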
Are you on the OOTB Fastly CDN in AEM as a Cloud Service, or do you have some other CDN in your setup?

 

Hope this helps!

 

Best Regards,

Rohan Garg