Level 3
February 5, 2019
Solved

DiskSpace takes too long to respond (http 504)

  • February 5, 2019
  • 3 replies
  • 1415 views

Hi All,


I am new to AEM, and I need to check the total disk space of our AEM instance. When I open /etc/reports/diskusage.html, the page returns HTTP 504 because it takes too long to respond. Even if I restrict it to /etc/reports/diskusage.html?path=/content/dam/, it throws the same error. Could anyone help me with this problem? Are there any other options for seeing the total disk space consumed by our AEM instance? We are running AEM 6.1. Thank you!

Best answer by joerghoh (see the accepted solution below).

3 replies

Level 3
February 5, 2019

smacdonald2008, can you help me with this? Thank you!

joerghoh
Adobe Employee
Accepted solution
February 5, 2019

This report takes a long time because it traverses the entire repository and calculates the size of every node and property. Depending on the amount of content and the repository architecture (TarMK, S3 datastore, MongoDB, ...) this can take a significant amount of time, so I would never start it at "/". Even then it will never report an accurate number of megabytes or gigabytes consumed on disk, mostly because it does not understand the underlying storage architecture. For example, if you have a datastore, it does not take the deduplication effect of the datastore into account, and it does not account for indexes either.
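
Since the 504 usually comes from a proxy or dispatcher in front of AEM timing out, one workaround is to call the instance directly, scope the report to a small subtree and allow a generous client-side read timeout. A minimal sketch in Python, assuming the instance is reachable on localhost:4502 with admin credentials (host, credentials and the example subtree are all assumptions, adjust them to your setup):

```python
import requests  # third-party: pip install requests

# Assumed host/port and credentials -- replace with your own.
AEM_HOST = "http://localhost:4502"
AUTH = ("admin", "admin")

# Scope the report to a subtree instead of "/" to keep the traversal small.
report_url = f"{AEM_HOST}/etc/reports/diskusage.html"
params = {"path": "/content/dam/we-retail"}  # hypothetical subtree, pick your own

# Generous read timeout, since the report can still take minutes on large repositories.
response = requests.get(report_url, params=params, auth=AUTH, timeout=(10, 600))
response.raise_for_status()
print(response.text[:2000])  # the report comes back as an HTML page
```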

To get the relevant number for disk consumption (as a total value), you should look at the filesystem. It cannot tell you in which area of the repository the largest pieces are stored; it only gives you the overall number. For that deeper level of granularity you have to use the disk usage report.
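
A minimal sketch of that filesystem check, assuming the default crx-quickstart/repository layout of a local installation (the path is an assumption, adjust it to where your instance is installed):

```python
import os

# Assumed default repository location of a local AEM installation -- adjust as needed.
REPO_DIR = "/opt/aem/crx-quickstart/repository"

def directory_size(root):
    """Sum the on-disk size of every regular file below `root`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                total += os.path.getsize(path)
            except OSError:
                pass  # file removed or unreadable while walking
    return total

size_bytes = directory_size(REPO_DIR)
print(f"{REPO_DIR}: {size_bytes / (1024 ** 3):.2f} GiB")
```

On the server itself, du -sh on the same directory gives the equivalent number; unlike the report, this figure includes the segment store, the datastore and the indexes.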

Jörg

smacdonald2008
Level 10
February 5, 2019

Very detailed answer, Joerg!