
Getting an out-of-memory exception after starting the bulk reindex option provided at /etc/acs-commons/oak-index-manager.html


Level 3

Our dev author instance has very little RAM (8 GB).

We were not aware that bulk reindexing is a RAM-intensive operation. We wanted to fix some search issues, since we had not run a bulk reindex after upgrading from 5.6.1 to 6.1.

But after we started the bulk reindex, it does not stop, because there are some async indexes. The author instance runs out of memory and we need to restart it.

We get this error:

03.05.2016 12:47:43.729 *ERROR* [pool-6-thread-1] org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@2a1cf280 : Java heap space

java.lang.OutOfMemoryError: Java heap space

Is there any way to stop this bulk reindexing on the async index nodes?

Even after restarting the author instance, the bulk reindex on the async Oak indexes does not stop.

I am getting the following info in the logs:

03.05.2016 13:55:17.360 *INFO* [pool-7-thread-5] org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext Loaded default Tika Config from classpath bundle://96.0:1/org/apache/jackrabbit/oak/plugins/index/lucene/tika-config.xml

 

03.05.2016 13:55:17.435 *INFO* [pool-7-thread-5] org.apache.jackrabbit.oak.plugins.index.IndexUpdate Reindexing will be performed for following indexes: [/oak:index/cqTagLucene, /oak:index/counter, /oak:index/workflowDataLucene, /oak:index/authorizables, /oak:index/damAssetLucene, /oak:index/ntBaseLucene, /oak:index/lucene, /oak:index/cqPageLucene]

4 Replies


Employee Advisor

Hi,

What version of Oak are you using? If you already have 1.2.8 or newer, you could try the approach described in OAK-3505 [1]. But the easiest way would be to temporarily add more RAM and more heap to overcome this situation. A second approach could be to rebuild your DEV instance from scratch, deleting the old repository.
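As a minimal sketch of the "more heap" suggestion: on a default quickstart install, the start script reads JVM options from `CQ_JVM_OPTS`. The path, heap size, and exact options below are illustrative and should be adapted to your host (on an 8 GB box, leave headroom for the OS):

```shell
# In crx-quickstart/bin/start: temporarily raise the heap for the reindex.
# Values are illustrative; -XX:MaxPermSize applies only to Java 7 and older.
CQ_JVM_OPTS='-server -Xmx6g -XX:MaxPermSize=512M -Djava.awt.headless=true'
```

Remember to revert the setting once the reindex completes, since a heap this large starves the rest of an 8 GB machine.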

Jörg

 

[1] https://issues.apache.org/jira/browse/OAK-3505


Level 7

Hi,

Jörg's reply should address your issue. Kindly mark this thread as solved if you have already got your answer.

Thanks

Tuhin


Level 3

@Tuhin,

I need some more information on this issue.

I wanted to implement the solution specified in OAK-3505 [1], which says:

The following system properties can be set (based on changes done in this issue):

  1. oak.indexUpdate.failOnMissingIndexProvider - Set it to true so as to change the default behaviour to fail the commit if the editor is missing. Safe to set until we change the behaviour in OAK-3642
  2. oak.indexUpdate.ignoreReindexFlags - This is only to be set for recovery, where a reindex on a property index can hold the system for long in a cluster.

But I don't know where to set the above properties in AEM 6.1.
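These are JVM system properties, so one place to set them (assuming a quickstart install started via the `crx-quickstart/bin/start` script) is to append `-D` flags to `CQ_JVM_OPTS`. The flag names below are taken from the OAK-3505 issue text quoted above; whether your Oak version honours them depends on it being 1.2.8 or newer:

```shell
# In crx-quickstart/bin/start: pass the OAK-3505 flags as JVM system properties.
# Sketch only; verify the flags against your Oak version before relying on them.
CQ_JVM_OPTS="${CQ_JVM_OPTS} \
  -Doak.indexUpdate.failOnMissingIndexProvider=true \
  -Doak.indexUpdate.ignoreReindexFlags=true"
```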

But to stop the async index jobs from consuming memory, I have set the reindex flag of all async indexes to false. After doing this, reindexing on the async indexes has stopped and I am no longer getting the out-of-memory error. But I wanted to know whether this is the right approach, and what happens if I want to run the async indexing job again. The system properties above appear to be a cleaner solution, since I could toggle reindexing with just the second property, so details on where to set them would be very useful to me.
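If you later do want to re-run reindexing for a single async index, one possible way (a sketch, not an officially endorsed procedure) is to flip its `reindex` flag back to true, for example via the Sling POST servlet; the host, port, credentials, and index name below are placeholders for your own setup:

```shell
# Hypothetical example: re-trigger reindexing of one async index by setting
# its reindex flag to true via the Sling POST servlet. The next run of the
# async indexing job should then pick it up. Adjust all values to your instance.
curl -u admin:admin \
  -F 'reindex=true' -F 'reindex@TypeHint=Boolean' \
  http://localhost:4502/oak:index/damAssetLucene
```

Doing this for one index at a time keeps the memory footprint smaller than reindexing all async indexes at once.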