Hi,
I am using ACS Commons Reports to pull a report of pages that were deactivated or modified long ago, pages past their offTime, and redirected pages.
While trying to run the report with the above conditions, I am facing this error.
Error: Exception executing report: javax.el.ELException: Error reading 'resultsList' on type com.adobe.acs.commons.reports.api.ResultsPage Error reading 'resultsList' on type com.adobe.acs.commons.reports.api.ResultsPage
Query: SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}') AND (s.[cq:lastModified] < CAST("2022-01-01T00:00:00.000Z" AS DATE) OR (s.[cq:lastReplicationAction] = "Deactivate" AND s.[cq:lastModified] < CAST("2022-01-01T00:00:00.000Z" AS DATE)) OR offTime < CAST("2023-12-01T00:00:00.000Z" AS DATE) OR s.[redirectTarget] IS NOT NULL )
This works for a small content locale but not for a huge number of pages. Can someone help me understand why I am facing this error?
Hi,
It looks like the query is the issue. Can you try running the query directly in CRX/DE or in the Query Performance Tool? If it fails there, you will most likely need an index. Please check this: https://experienceleague.adobe.com/en/docs/experience-manager-cloud-service/content/operations/query...
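For example, since the query filters cq:PageContent nodes by cq:lastModified and cq:lastReplicationAction, an Oak Lucene property index along these lines may help. This is only a sketch: the index node name and the included properties are illustrative and must match the properties your actual query uses.

```
/oak:index/cqPageContentLucene
  - jcr:primaryType = "oak:QueryIndexDefinition"
  - type = "lucene"
  - async = "async"
  - compatVersion = 2
  + indexRules
    + cq:PageContent
      + properties
        + lastModified
          - name = "cq:lastModified"
          - propertyIndex = true
          - ordered = true
        + lastReplicationAction
          - name = "cq:lastReplicationAction"
          - propertyIndex = true
```

After adding the index, reindex and re-check the query in the Query Performance Tool to confirm it is no longer traversing.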
Hope this helps
Hi @Nimma05 ,
The issue you're encountering with the ACS Commons report appears to be related to the size and complexity of the query, especially when it runs against a large number of pages. The javax.el.ELException indicates an error while evaluating a Java Expression Language expression, often caused by null values, very large result sets, or performance bottlenecks.
Here's a step-by-step approach to troubleshooting and potentially resolving the issue:
1. Ensure that your query is optimized and not overly complex. Queries with many conditions can be slow and resource-intensive, especially on large data sets.
2. Break the query into smaller parts and execute them incrementally to pinpoint which part is causing the problem.
3. Use paging and limits to handle large result sets efficiently; this prevents memory overload and improves performance.
4. Simplify the query conditions where possible. For example, instead of many OR conditions, structure the query to minimize the number of conditions checked.
5. Ensure that none of the queried fields have unexpected null values that could cause the expression language evaluation to fail.
6. Ensure that the ACS Commons configuration is set up correctly and tuned for performance; you might need to adjust settings related to query execution or result handling.
7. Enable detailed logging to capture more information about the error; this can reveal what is causing the ELException.
Here’s an example of breaking down the query and handling potential issues incrementally:
Step 1: Base Query
SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}')
Step 2: Add Last Modified Condition
SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}') AND s.[cq:lastModified] < CAST("2022-01-01T00:00:00.000Z" AS DATE)
Step 3: Add Deactivated Pages Condition
SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}') AND (s.[cq:lastModified] < CAST("2022-01-01T00:00:00.000Z" AS DATE) OR s.[cq:lastReplicationAction] = "Deactivate")
Step 4: Add Off Time Condition
SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}') AND (s.[cq:lastModified] < CAST("2022-01-01T00:00:00.000Z" AS DATE) OR s.[cq:lastReplicationAction] = "Deactivate" OR offTime < CAST("2023-12-01T00:00:00.000Z" AS DATE))
Step 5: Add Redirected Pages Condition
SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}') AND (s.[cq:lastModified] < CAST("2022-01-01T00:00:00.000Z" AS DATE) OR s.[cq:lastReplicationAction] = "Deactivate" OR offTime < CAST("2023-12-01T00:00:00.000Z" AS DATE) OR s.[redirectTarget] IS NOT NULL)
Using paging and limits can help manage large result sets:
SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}')
AND (s.[cq:lastModified] < CAST("2022-01-01T00:00:00.000Z" AS DATE)
OR (s.[cq:lastReplicationAction] = "Deactivate" AND s.[cq:lastModified] < CAST("2022-01-01T00:00:00.000Z" AS DATE))
OR offTime < CAST("2023-12-01T00:00:00.000Z" AS DATE)
OR s.[redirectTarget] IS NOT NULL)
ORDER BY s.[jcr:created]
LIMIT 1000 OFFSET 0
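The LIMIT/OFFSET pattern above can be sketched in plain Java, with an in-memory list standing in for the query result. This is only an illustration of the paging loop (no AEM dependencies; the class and method names are made up for the example):

```java
import java.util.ArrayList;
import java.util.List;

class PagingSketch {

    // Return one "page" of results: the items in [offset, offset + limit)
    static List<String> fetchPage(List<String> all, int offset, int limit) {
        if (offset >= all.size()) {
            return new ArrayList<>(); // past the end: empty page
        }
        int end = Math.min(offset + limit, all.size());
        return new ArrayList<>(all.subList(offset, end));
    }

    // Walk every page until an empty page signals the end of the result set
    static List<String> fetchAll(List<String> all, int pageSize) {
        List<String> collected = new ArrayList<>();
        int page = 0;
        while (true) {
            List<String> hits = fetchPage(all, page * pageSize, pageSize);
            if (hits.isEmpty()) {
                break;
            }
            collected.addAll(hits);
            page++;
        }
        return collected;
    }
}
```

The same stop condition (an empty page) works when the pages come from a real repository query instead of a list.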
Enable detailed logging for the ACS Commons and JCR queries. In AEM, you can increase the log level for specific classes to gather more information:
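For instance, a Sling logging OSGi factory configuration along these lines routes the relevant packages to a dedicated log file at debug level (the factory PID and property names are from org.apache.sling.commons.log; the file name and logger list are illustrative):

```
# Factory PID: org.apache.sling.commons.log.LogManager.factory.config
org.apache.sling.commons.log.level="debug"
org.apache.sling.commons.log.file="logs/query-debug.log"
org.apache.sling.commons.log.names=["com.adobe.acs.commons.reports","org.apache.jackrabbit.oak.query"]
```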
This will help you capture detailed logs and identify where the issue might be occurring.
By following these steps, you should be able to isolate the cause of the ELException and optimize your query for better performance and reliability
Hi @HrishikeshKa,
Thank you for your detailed explanation.
Even with the basic query I am getting the error.
SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}')
In the logs I could see the below error:
org.apache.jackrabbit.oak.query.RuntimeNodeTraversalException: The query read or traversed more than 500000 nodes. To avoid affecting other tasks, processing was stopped.
So, for the time being I increased the limit to 700,000 (7 lakh) in the OSGi configuration to get the report, and after that change the issue was resolved. Is it fine if I change the limit?
I am running this query on about 19k pages.
Thanks
Hi @Nimma05 ,
Increasing the limit to 700,000 nodes may solve the issue for now, but it is not a recommended solution. Querying a large number of nodes can have a significant impact on the performance of the system.
You may want to consider optimizing your query to reduce the number of nodes being queried. You can try adding additional filters to your query to narrow down the results. For example, you can filter by node type or property values.
If optimizing the query is not possible, you may want to consider using pagination to limit the number of nodes returned in each query. This can help reduce the impact on system performance.
It is also important to note that increasing the limit may not be a permanent solution. As the amount of data in the system grows, you may need to increase the limit again in the future.
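For context, the limit being raised here is Oak's query read limit, set via the QueryEngineSettings OSGi configuration. A sketch of that configuration (PID and property names as in Oak; values illustrative):

```
# PID: org.apache.jackrabbit.oak.query.QueryEngineSettingsService
queryLimitReads=700000      # stop queries that read/traverse more nodes than this
queryLimitInMemory=500000   # cap on entries held in memory (e.g. for ordering)
queryFailTraversal=false    # if true, fail traversing queries immediately
```

Raising queryLimitReads affects every query on the instance, which is why an index that avoids traversal is the preferred fix.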
Hi @HrishikeshKa ,
I am already filtering the nodes by [cq:PageContent]; the total is 19k pages, out of which I am filtering based on my conditions. Can you suggest how to use pagination in an ACS Commons report?
My query: SELECT * FROM [cq:PageContent] AS s WHERE ISDESCENDANTNODE(s, '{{path}}') AND ((s.[cq:lastModified] < CAST("2019-01-01T00:00:00.000Z" AS DATE)) OR (s.[cq:lastReplicationAction] = "Deactivate" AND s.[cq:lastReplicated] < CAST("2022-01-01T00:00:00.000Z" AS DATE)) OR offTime < CAST("2023-12-01T00:00:00.000Z" AS DATE) )
Thanks
Hi @Nimma05 ,
To handle pagination for large queries in ACS AEM Commons, you can use the pagination feature provided by ACS Commons Reporting. Here’s how you can incorporate pagination into your query and handle the large number of nodes efficiently:
Ensure that ACS Commons is installed in your AEM instance. ACS Commons provides the Query Packager and Reports functionalities which you can use for creating paginated reports.
The ACS Commons Reports provide an option to handle pagination. You can configure it by setting the appropriate properties in your query builder.
Here’s how you can incorporate pagination into your existing query using ACS Commons:
You can modify your query to include pagination parameters like limit and offset. Here’s an example of how you can achieve this:
import com.day.cq.search.PredicateGroup;
import com.day.cq.search.Query;
import com.day.cq.search.QueryBuilder;
import com.day.cq.search.result.Hit;
import com.day.cq.search.result.SearchResult;
import org.apache.sling.api.resource.ResourceResolver;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import java.util.HashMap;
import java.util.Map;
public class PaginatedQueryExample {
    private static final int PAGE_SIZE = 100; // Number of nodes per page
    public void executePaginatedQuery(ResourceResolver resourceResolver, String path, int pageNumber)
            throws RepositoryException {
        QueryBuilder queryBuilder = resourceResolver.adaptTo(QueryBuilder.class);
        Session session = resourceResolver.adaptTo(Session.class);
        Map<String, String> queryMap = new HashMap<>();
        queryMap.put("path", path);
        queryMap.put("type", "cq:PageContent");
        // Query Builder ANDs top-level predicates, so the OR-ed conditions go into one group
        queryMap.put("group.p.or", "true");
        // Condition 1: last modified before 2019-01-01 (the property predicate has no
        // "less_than" operation, so date comparisons use the daterange predicate)
        queryMap.put("group.1_daterange.property", "cq:lastModified");
        queryMap.put("group.1_daterange.upperBound", "2019-01-01T00:00:00.000Z");
        // Condition 2: deactivated AND last replicated before 2022-01-01
        queryMap.put("group.2_group.property", "cq:lastReplicationAction");
        queryMap.put("group.2_group.property.value", "Deactivate");
        queryMap.put("group.2_group.daterange.property", "cq:lastReplicated");
        queryMap.put("group.2_group.daterange.upperBound", "2022-01-01T00:00:00.000Z");
        // Condition 3: offTime before 2023-12-01
        queryMap.put("group.3_daterange.property", "offTime");
        queryMap.put("group.3_daterange.upperBound", "2023-12-01T00:00:00.000Z");
        // Pagination
        queryMap.put("p.limit", String.valueOf(PAGE_SIZE));
        queryMap.put("p.offset", String.valueOf((pageNumber - 1) * PAGE_SIZE));
        Query query = queryBuilder.createQuery(PredicateGroup.create(queryMap), session);
        SearchResult result = query.getResult();
        // Process the results (getPath() throws RepositoryException, so no lambda here)
        for (Hit hit : result.getHits()) {
            System.out.println(hit.getPath()); // Handle each hit
        }
    }
}
If you prefer to use the ACS Commons Report Builder for pagination, here’s how you can set it up:
1. Create a custom report configuration.
2. Configure the query parameters.
3. Run the report.
Here’s an example of how you can handle pagination in a loop to process all pages:
public void processAllPages(ResourceResolver resourceResolver, String path) throws RepositoryException {
    int pageNumber = 1;
    while (true) {
        SearchResult result = executePaginatedQuery(resourceResolver, path, pageNumber);
        if (result.getHits().isEmpty()) {
            break; // An empty page means everything has been processed
        }
        for (Hit hit : result.getHits()) {
            System.out.println(hit.getPath()); // Handle each hit
        }
        pageNumber++;
    }
}
private SearchResult executePaginatedQuery(ResourceResolver resourceResolver, String path, int pageNumber) {
    QueryBuilder queryBuilder = resourceResolver.adaptTo(QueryBuilder.class);
    Session session = resourceResolver.adaptTo(Session.class);
    Map<String, String> queryMap = new HashMap<>();
    queryMap.put("path", path);
    queryMap.put("type", "cq:PageContent");
    // Same OR-ed conditions as before, expressed as a Query Builder group
    queryMap.put("group.p.or", "true");
    queryMap.put("group.1_daterange.property", "cq:lastModified");
    queryMap.put("group.1_daterange.upperBound", "2019-01-01T00:00:00.000Z");
    queryMap.put("group.2_group.property", "cq:lastReplicationAction");
    queryMap.put("group.2_group.property.value", "Deactivate");
    queryMap.put("group.2_group.daterange.property", "cq:lastReplicated");
    queryMap.put("group.2_group.daterange.upperBound", "2022-01-01T00:00:00.000Z");
    queryMap.put("group.3_daterange.property", "offTime");
    queryMap.put("group.3_daterange.upperBound", "2023-12-01T00:00:00.000Z");
    queryMap.put("p.limit", String.valueOf(PAGE_SIZE));
    queryMap.put("p.offset", String.valueOf((pageNumber - 1) * PAGE_SIZE));
    Query query = queryBuilder.createQuery(PredicateGroup.create(queryMap), session);
    return query.getResult();
}
By implementing pagination, you can efficiently handle large datasets and ensure your system's performance is not compromised.
@Nimma05 Did you find the suggestions from users helpful? Please let us know if you require more information. Otherwise, please mark the answer as correct for posterity. If you've discovered a solution yourself, we would appreciate it if you could share it with the community. Thank you!