Query builder error handling



Hi,

I am trying to do a full-text search using Query Builder in AEM. The requirement is to run a full-text search over a large number of nodes. If the number of traversed nodes exceeds the LimitReads setting, I catch the exception in code and run the full-text search on the child nodes of that search path.

But after catching the exception, when I try a full-text search on one of the child node paths, I get the same error, even though there are not many nodes under that child node.

I could avoid the error by changing the search path in the first place or by increasing LimitReads, but the requirement is to handle the exception and search under the child nodes.

Please suggest a solution.





Your question is not clear (at least to me).

What is limitread?

Are you referring to some OOTB exception?

Can you try to rephrase your question, preferably with a code snippet / example? You may get a better answer in that case.




Hi,

Sorry if I was not clear with my question before.

LimitReads is the attribute name under JMX Console -> Query Engine Settings. This attribute sets the limit on the number of nodes a query can traverse in one go.
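To illustrate what that limit does, here is a minimal self-contained sketch of a per-query read budget: a counter is incremented for every node traversed and an exception is thrown once the configured limit is passed. This is a toy model of the behavior, not Oak's actual implementation; the class and exception names are hypothetical.

```java
import java.util.List;

class ReadLimitSketch {

    // Hypothetical stand-in for the exception the query engine raises.
    static class LimitExceededException extends RuntimeException {
        LimitExceededException(String msg) { super(msg); }
    }

    // Counts nodes "traversed" and fails once the configured limit is exceeded.
    static int traverse(List<String> nodes, long limitReads) {
        int read = 0;
        for (String node : nodes) {
            read++;
            if (read > limitReads) {
                throw new LimitExceededException(
                    "The query read or traversed more than " + limitReads + " nodes");
            }
        }
        return read;
    }
}
```

With a limit of 5, traversing 2 nodes succeeds; with a limit of 2, traversing 3 nodes fails on the third read.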

Let me put some sample code here for reference.

private void processAndPrepareResult(List<String> searchPaths, String uniqueId, String usageLimit, MyDto myDto) throws RepositoryException {
    try {
        if (CollectionUtils.isNotEmpty(searchPaths)) {
            Iterator<String> searchPathIterator = searchPaths.iterator();
            while (searchPathIterator.hasNext()) {
                String searchPath = searchPathIterator.next();
                SearchResult result = prepareResults(usageLimit, searchPath, uniqueId);
                processMyDto(result, myDto);
            }
        }
    } catch (SupportException ex) {
        LOG.debug("Thrown exception caught in processAndPrepareResult(): {}", ex);
        // the failing search path is carried in the exception message
        String errorPath = ex.getMessage();
        List<String> childNodePaths = getChildNodePaths(errorPath);
        // retry the search one level down, on the children of the failing path
        processAndPrepareResult(childNodePaths, uniqueId, usageLimit, myDto);
    }
}



private SearchResult prepareResults(String usageLimit, String searchPath, String uniqueId) throws SupportException {
    try {
        SearchResult result = null;
        int usageLimitNumber = Integer.parseInt(usageLimit);
        Map<String, String> params = new HashMap<>();
        params.put("path", searchPath);
        params.put("type", "cq:Page");
        params.put("fulltext", "\"[d:" + uniqueId + "]\"");
        params.put("fulltext.relPath", "jcr:content");
        params.put("p.limit", String.valueOf(usageLimitNumber + 1));
        if (builder != null) {
            Query query = builder.createQuery(PredicateGroup.create(params), session);
            result = query.getResult(); // UnsupportedOperationException is thrown here when the node search limit is exceeded
        }
        return result;
    } catch (UnsupportedOperationException e) {
        // pass the failing search path up to the caller
        throw new SupportException(searchPath);
    }
}



Let me rephrase my question here with an example scenario.

  • In the first run of the query, let's assume the search path is "/content". It throws an error because of the number of nodes inside the "content" node.
  • In the code I catch that exception and find the child nodes under "/content".
  • Let's assume I got "/content/childNode" as a child node, and inside "/content/childNode" there are very few child nodes.
  • So in the second run, the query searches under "/content/childNode", which has only a few child nodes, so it should work fine and return the search result.
  • But I am facing a challenge at this point: it is throwing the same error for "/content/childNode", which it should not, as that node has only a few further child nodes.
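The fallback strategy described in the steps above can be sketched as a self-contained simulation (the paths, node counts, and limit are hypothetical, and plain Java maps stand in for the repository and for Query Builder): a query that reads too many nodes fails, and the search is retried on the children of the failing path.

```java
import java.util.*;

class FallbackSketch {
    static final long LIMIT_READS = 10;

    // Hypothetical repository: path -> number of nodes a query under it reads.
    static final Map<String, Integer> NODE_COUNTS = new HashMap<>();
    static final Map<String, List<String>> CHILDREN = new HashMap<>();
    static {
        NODE_COUNTS.put("/content", 100);          // too big: query fails
        NODE_COUNTS.put("/content/childNode", 3);  // small: query succeeds
        CHILDREN.put("/content", Arrays.asList("/content/childNode"));
        CHILDREN.put("/content/childNode", Collections.<String>emptyList());
    }

    // Stands in for query.getResult(): fails when the path reads too many nodes.
    static int runQuery(String path) {
        int reads = NODE_COUNTS.getOrDefault(path, 0);
        if (reads > LIMIT_READS) {
            throw new UnsupportedOperationException(path);
        }
        return reads;
    }

    // Catch the failure and retry on the children of the failing path,
    // mirroring the intent of processAndPrepareResult().
    static int searchWithFallback(String path) {
        try {
            return runQuery(path);
        } catch (UnsupportedOperationException ex) {
            int total = 0;
            List<String> children =
                CHILDREN.getOrDefault(ex.getMessage(), Collections.<String>emptyList());
            for (String child : children) {
                total += searchWithFallback(child); // recurse into the children, not the failing path
            }
            return total;
        }
    }
}
```

In this toy model, searching "/content" fails and falls back to "/content/childNode", which succeeds. One thing worth double-checking in the real code is that the recursive call actually receives the child paths of the failing node rather than the original search path list; re-running the original list would reproduce the same error.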

Please let me know if I am missing something here or if there is something wrong with the code.

Thank you in advance.




Hi edubey,

I have added some sample code for reference on top of the original question. Let me know if you need more clarification or have something useful for me.





Based on what I understand, when the count exceeds LimitReads, you are catching the exception and then trying to read the sub-children of those nodes.

Not sure, but just a thought: if LimitReads has already been exceeded, then we will probably not be able to read anything further, which may be why this is happening.
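One way to picture that hypothesis is a read budget tracked per session rather than per query: once the first (large) query exhausts the budget, even a second, small query fails. This is only a toy model of the suggested explanation, with hypothetical names and numbers, not Oak's actual accounting.

```java
// Toy model: a read counter shared across queries and never reset,
// illustrating why a small follow-up query could still fail.
class SessionBudgetSketch {
    static final long LIMIT_READS = 10;
    private long readsSoFar = 0; // deliberately never reset between queries

    long query(long nodesToRead) {
        readsSoFar += nodesToRead;
        if (readsSoFar > LIMIT_READS) {
            throw new UnsupportedOperationException(
                "read limit exceeded after " + readsSoFar + " reads");
        }
        return nodesToRead;
    }
}
```

In this model a first query reading 100 nodes fails, and a follow-up query reading only 3 nodes fails as well, because the shared counter already sits above the limit.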