I have come across this legacy code on a project and I am in the process of refactoring it. I am wondering whether I should use the QueryBuilder API to find all pages on which I need to perform an operation, or recursively loop over a Page and its children to perform said operation.
The current implementation recursively loops over a page and its children.
private void method(final Page page, final String data) {
    if (pageUtil.pageMatchesCondition(page)) {
        final Resource contentResource = page.getContentResource();
        if (pagePropertiesService != null) {
            pagePropertiesService.updatePageProperties(contentResource, data);
        }
    }
    page.listChildren().forEachRemaining(childPage -> method(childPage, data));
}
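As an aside on the recursive version: if the page tree can be very deep, the same depth-first traversal can be written with an explicit stack, which avoids any stack-overflow risk. Here is a minimal, AEM-free sketch of the idea; the `Node` class is a hypothetical stand-in for `Page`/`listChildren()`, and the predicate stands in for `pageMatchesCondition`:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Predicate;

public class IterativeTraversal {

    // Hypothetical stand-in for com.day.cq.wcm.api.Page
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        Node add(Node child) { children.add(child); return this; }
    }

    /** Collects the names of all matching nodes, depth-first, without recursion. */
    static List<String> collectMatching(Node root, Predicate<Node> condition) {
        List<String> matches = new ArrayList<>();
        Deque<Node> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Node current = stack.pop();
            if (condition.test(current)) {
                matches.add(current.name);
            }
            // Push children so they are visited after the current node
            for (Node child : current.children) {
                stack.push(child);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        Node root = new Node("root")
                .add(new Node("a").add(new Node("a1")))
                .add(new Node("b"));
        System.out.println(collectMatching(root, n -> n.name.startsWith("a")));
        // prints "[a, a1]"
    }
}
```

In the AEM version, the `Deque` would hold `Page` objects and the matched action would be the `pagePropertiesService` call, but the traversal shape is the same.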
During local development and debugging, the QueryBuilder API seemed much faster.
But I read a post from @Jörg_Hoh saying that the iterator is usually faster, at https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/find-first-level-child-usi.... For my current use case I am not entirely sure which approach is best.
It really depends on the criteria: if you search a large subtree and you can formulate the criteria for the pages/assets you want to process as a query, go for the query.
If, on the other hand, you need to process (almost) every page/asset in a subtree, iterating can be faster.
But you are doing the right thing by testing your specific case. Just make sure that the content structure is reasonably realistic and not too artificial.
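To act on that "measure it" advice, each variant can be wrapped in a `Runnable` and timed. Below is a minimal, AEM-free timing helper as a sketch; the commented-out calls are placeholders for your own recursive and QueryBuilder implementations, and real measurements should also account for JVM warm-up and repository caches:

```java
import java.util.concurrent.TimeUnit;

public class SimpleTimer {

    /** Runs the task several times and returns the best wall-clock time in milliseconds. */
    static long bestOfMillis(Runnable task, int runs) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            task.run();
            best = Math.min(best, System.nanoTime() - start);
        }
        return TimeUnit.NANOSECONDS.toMillis(best);
    }

    public static void main(String[] args) {
        long iteration = bestOfMillis(() -> { /* call the recursive variant here */ }, 5);
        long query = bestOfMillis(() -> { /* call the QueryBuilder variant here */ }, 5);
        System.out.println("iteration: " + iteration + " ms, query: " + query + " ms");
    }
}
```

Taking the best of several runs reduces noise from one-off pauses, but it is still a rough comparison, not a rigorous benchmark.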
Hi @jeremylanssiers ,
Choosing the better approach depends on a few factors, such as the specifics of your use case, performance considerations, and maintainability.
Here's a comparison of both approaches:
Recursive Method
Pros:
Simplicity: Straightforward and easy to understand, especially for small hierarchies.
Direct Control: You have direct control over the recursion and can easily modify the logic for each page.
Cons:
Performance: Recursive methods can be inefficient and resource-intensive for large hierarchies.
Scalability: It might not scale well with very large page trees due to potential stack overflow issues and processing time.
QueryBuilder API
Pros:
Performance: More efficient for searching large sets of pages, as it leverages AEM’s optimized search capabilities.
Scalability: Scales better with large content structures due to efficient querying.
Flexibility: Allows for more complex and flexible search criteria.
Cons:
Complexity: Requires understanding of AEM’s QueryBuilder API, which might be more complex for those unfamiliar with it.
Initial Setup: Setting up the query parameters and executing the query involves more steps compared to the direct recursive method.
For Small Hierarchies: If you’re working with a small number of pages, the recursive method is simple and easy to implement.
For Large Hierarchies: If you’re dealing with a large number of pages and need to ensure better performance and scalability, the QueryBuilder API method is the better choice.
Also, I tried to recreate the same function using the QueryBuilder API for you. This code might help you.
Happy Coding,
Aditya
import java.util.HashMap;
import java.util.Map;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import com.day.cq.search.PredicateGroup;
import com.day.cq.search.Query;
import com.day.cq.search.QueryBuilder;
import com.day.cq.search.result.SearchResult;

public class PageUpdater {

    private final ResourceResolver resourceResolver;
    private final QueryBuilder queryBuilder;
    private final PagePropertiesService pagePropertiesService;

    public PageUpdater(ResourceResolver resourceResolver, QueryBuilder queryBuilder,
            PagePropertiesService pagePropertiesService) {
        this.resourceResolver = resourceResolver;
        this.queryBuilder = queryBuilder;
        this.pagePropertiesService = pagePropertiesService;
    }

    public void updatePageProperties(String path, String data) {
        // Find all cq:Page nodes under 'path' whose jcr:content has myConditionProperty=true
        Map<String, String> queryMap = new HashMap<>();
        queryMap.put("path", path);
        queryMap.put("type", "cq:Page");
        queryMap.put("property", "jcr:content/myConditionProperty");
        queryMap.put("property.value", "true");
        queryMap.put("p.limit", "-1"); // return all hits, not only the default first 10

        Query query = queryBuilder.createQuery(PredicateGroup.create(queryMap),
                resourceResolver.adaptTo(Session.class));
        SearchResult result = query.getResult();

        result.getHits().forEach(hit -> {
            try {
                Resource contentResource = hit.getResource().getChild("jcr:content");
                if (contentResource != null && pagePropertiesService != null) {
                    pagePropertiesService.updatePageProperties(contentResource, data);
                }
            } catch (RepositoryException e) {
                e.printStackTrace(); // prefer a proper logger in real code
            }
        });
    }
}
Thanks for the advice!
Hi @jeremylanssiers
Maybe you can try the Page API to list all the child pages.
Below are the important points to consider:
1. A QueryBuilder query is internally converted to an XPath query first.
2. The query engine then executes the query, considering both Oak indexes and a traversal approach to return the results.
3. For each Oak index (and for traversal), it estimates a cost.
4. The query engine chooses whichever approach has the lowest cost.
5. So, on traversal vs. query: I would always go for the query, because if traversal happens to be faster, the query engine will determine that and use it anyway.
6. The main way to improve your query performance is creating Oak indexes whose cost is minimal.
7. Use https://experienceleague.adobe.com/en/docs/experience-manager-cloud-service/content/operations/query... to check how you can improve and debug your query.
8. You can debug your query to check the cost and update Oak indexes to reduce it.
9. I have myself used Oak indexes in many projects to achieve a cost of 1.0, and the query performance was exceptionally high.
Let me know if you have any doubts.
Thanks,
Nupur
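To make points 6 and 9 above concrete: an Oak property index covering the property queried in the code earlier in this thread could look roughly like the following. This is an illustrative sketch, not a drop-in definition; the index name, `declaringNodeTypes`, and the property name `myConditionProperty` are assumptions taken from the example code, and on AEM as a Cloud Service custom indexes must be defined as Lucene indexes instead of classic property indexes.

```xml
<!-- /oak:index/myConditionProperty/.content.xml (illustrative names and paths) -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="oak:QueryIndexDefinition"
    type="property"
    propertyNames="{Name}[myConditionProperty]"
    declaringNodeTypes="{Name}[cq:PageContent]"
    reindex="{Boolean}true"/>
```

With an index like this in place, the "Explain Query" tool mentioned in the linked documentation can confirm that the query plan picks the index rather than falling back to traversal.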