Hi,
I have a scenario where millions of users are stored under /home/users in AEM. When I fire a query against this instance, getSize() returns -1 (perhaps because the users number in the millions), but when I run the same query on my local instance I get the exact total of matching nodes, e.g. 12 or 15. I need the exact total result count for the query even when the result set contains millions of nodes.
Can anyone suggest what's wrong here and how we can handle this condition?
Thanks,
Sameer
Hi,
The "problem" is that the JCR query implementation is lazy when it comes to returning results. Experience has shown that in many cases people are not interested in the full result set, but only in the first X items. That's why the query implementation does not compute all results immediately when you start reading the result set; instead it computes the result in chunks, so it does not know the full result set even while you are reading from it. One reason is that the internal result set must first be filtered through the ACLs: the raw result set (for example, the one provided by Lucene) is never returned directly to you as a user of this API, it must be filtered first. So until the query implementation has read the entire result set and performed all necessary checks, it simply does not know how many results there will be. And computing that takes time, especially if you have millions of results (does such a query even make sense?).
The easiest way to force the query implementation to compute the full result set is to add an "order by" clause, because then it has to run through all the result items and order them appropriately. In that case the size is reported correctly.
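As a minimal sketch of this approach (the helper class and method names here are hypothetical, not an AEM API; only the JCR-SQL2 usage in the comments reflects the standard javax.jcr API), you could append an ORDER BY clause before executing the query:

```java
// Hypothetical helper: appends an ORDER BY clause to a JCR-SQL2 statement
// if one is not already present, so the query engine must traverse the
// full result set and NodeIterator.getSize() reports a real count.
public class QuerySizeHelper {

    static String withOrderBy(String stmt, String orderExpr) {
        String trimmed = stmt.trim();
        // Leave the statement alone if it already orders its results.
        if (trimmed.toLowerCase().contains("order by")) {
            return trimmed;
        }
        return trimmed + " ORDER BY " + orderExpr;
    }

    public static void main(String[] args) {
        String stmt = "SELECT * FROM [rep:User] AS u"
                + " WHERE ISDESCENDANTNODE([/home/users])";
        System.out.println(withOrderBy(stmt, "u.[jcr:created]"));
        // In AEM code you would then run (sketch, not verified here):
        //   Query q = session.getWorkspace().getQueryManager()
        //       .createQuery(withOrderBy(stmt, "u.[jcr:created]"), Query.JCR_SQL2);
        //   long size = q.execute().getNodes().getSize(); // no longer -1
    }
}
```

Note that forcing the ordering makes the engine do the full (expensive) traversal up front, so expect the query itself to become slower on millions of nodes.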
kind regards,
Jörg
What API are you using to query? Can you provide the query statement? It's hard to see what may be wrong without seeing the query statement you are using.
Sameer, please see this, or elaborate a little more on your issue with the query details and some logs.
Hi Scott,
Thanks for replying. Below is the query I am running from code (the string literals inside the IN clauses use single quotes, as JCR-SQL2 requires):
String stmt = "SELECT * FROM [rep:User] AS p"
        + " WHERE ISDESCENDANTNODE([/home/users/member_center])"
        + " AND p.[cq:lastModified] <= CAST('2015-12-22T12:00:00.009+05:30' AS DATE)"
        + " AND p.[profile/memberStatus] IN ('active_member', 'new_member', 'stale_member', 'trial_member')"
        + " AND p.[profile/userType] IN ('B2C', 'B2B')";
Query query = adminSession.getWorkspace().getQueryManager().createQuery(stmt, Query.JCR_SQL2);
QueryResult results = query.execute();
NodeIterator nodeIterator = results.getNodes();
LOGGER.info("nodeIterator size: " + nodeIterator.getSize());
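If getSize() keeps returning -1, one guaranteed (but expensive) fallback is to count the results by iterating. Since javax.jcr.NodeIterator extends java.util.Iterator, a generic counting helper works on query results too; this is a sketch (the class and method names are mine, not an AEM API), and it consumes the iterator and touches every result, which is costly with millions of nodes:

```java
import java.util.Arrays;
import java.util.Iterator;

public class ResultCounter {

    // Counts the remaining items in any Iterator. javax.jcr.NodeIterator
    // extends java.util.Iterator, so this also works on query results
    // when getSize() returns -1. Note: the iterator is exhausted after
    // this call, so count BEFORE processing, or re-execute the query.
    static long count(Iterator<?> it) {
        long n = 0;
        while (it.hasNext()) {
            it.next();
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // Demonstrated on a plain list iterator; the same call would
        // accept the NodeIterator from results.getNodes().
        System.out.println(count(Arrays.asList("a", "b", "c").iterator()));
    }
}
```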
Thanks,
Sameer
Hi Jörg Hoh,
I will try the ORDER BY approach you suggested. I had read about it on a few other blogs, and in some places it was said not to be foolproof, but I will still give it a try.
Thanks,
Sameer