
This conversation has been locked due to inactivity. Please create a new post.

Solved

Datastore Size Increasing at an alarming rate


Level 1

Hi guys,

I hope someone can guide me in the right direction. Our datastore is growing by roughly 400 GB a day and we have no clue what might be wrong. Our online compaction, datastore garbage collection, and all other maintenance tasks are running fine without any errors.

I just want to check: is there any tracer or debug log that can be set up to investigate what is eating up space in the datastore? So far I have been unable to find any answers.

 

Thanks

Vishal K

2 Accepted Solutions


Correct answer by
Community Advisor

Hi @vishalk63643534 

 

I would suggest generating thread and heap dumps and analyzing them. Also, plan for an online or offline revision cleanup; offline revision cleanup gives good results in terms of size reduction.

 

Kindly refer to the following posts, which will help you analyze the issue:

 

https://experienceleague.adobe.com/docs/experience-cloud-kcs/kbarticles/KA-17496.html?lang=en

https://blogs.perficient.com/2021/03/08/managing-aem-repository-size-growth/ 
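On a TarMK (SegmentStore) instance, offline revision cleanup is typically run with the oak-run tool while AEM is stopped. A minimal sketch of the sequence; the jar version and `crx-quickstart` paths are placeholders, and checkpoint handling varies by Oak version, so verify against the documentation for your release:

```shell
# Stop the AEM instance first; offline compaction needs exclusive
# access to the segment store.

# 1. List checkpoints, then remove unreferenced ones that would
#    otherwise pin old revisions and limit the space reclaimed.
java -jar oak-run-1.x.jar checkpoints crx-quickstart/repository/segmentstore
java -jar oak-run-1.x.jar checkpoints crx-quickstart/repository/segmentstore rm-unreferenced

# 2. Run offline compaction to rewrite the segment store without
#    the old revisions.
java -Xmx8g -jar oak-run-1.x.jar compact crx-quickstart/repository/segmentstore

# 3. Restart AEM and compare repository size before and after.
```

Note that revision cleanup compacts the node store (segmentstore); reclaiming space in the *datastore* itself additionally requires datastore garbage collection after the stale references are gone.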



Correct answer by
Employee Advisor

I would approach this problem like this:

 

* identify all the AEM instances which access this datastore

* look for "CommitStats" messages in the logs to see whether there are large write activities on any of these instances. If the datastore grows by 400 GB a day, there must be massive write activity.

* If you don't find any of these messages or if you want to dig deeper in these write activities, turn on JCR Write Tracing (see https://cqdump.joerghoh.de/2016/05/24/what-is-writing-to-my-oak-repository/)

 

* And finally, check whether some admin process (that is: not an AEM instance) is going crazy. Adding 400 GB a day is 15-20 GB per hour, which translates into at least 200-300 megabytes of growth per minute. That's a lot.
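The back-of-the-envelope numbers in the last bullet check out; a quick sanity check of the arithmetic:

```python
# Break 400 GB/day of growth down into per-hour and per-minute rates,
# matching the 15-20 GB/hour and 200-300 MB/minute estimates above.
gb_per_day = 400
gb_per_hour = gb_per_day / 24                   # hours in a day
mb_per_minute = gb_per_day * 1024 / (24 * 60)   # 1024 MB per GB, minutes per day

print(round(gb_per_hour, 1), round(mb_per_minute))  # prints 16.7 284
```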

