
SOLVED

Rogue process

Level 4

I have a development sandbox for a client where they have added a lot of images and projects and then deleted them. There is some process that keeps running an inefficient query against assets that are no longer in the DAM. The warning about the query is:

17.11.2015 09:30:10.133 *WARN* [pool-6-thread-2] org.apache.jackrabbit.oak.spi.query.Cursors$TraversingCursor Traversed 39000 nodes with filter Filter(query=select [jcr:path], [jcr:score], * from [nt:unstructured] as a where name(a) = 'metadata' and [itemNumber] = 'S-10572' and isdescendantnode(a, '/content/dam/images') /* xpath: /jcr:root/content/dam/images//element(*, nt:unstructured)[fn:name() = 'metadata' and @itemNumber = 'S-10572'] */, path=/content/dam/images//*, property=[itemNumber=[S-10572]]); consider creating an index or changing the query

I've tried adding an Oak index on itemNumber, but it doesn't change the warning. The odd part for me is that all of the workflows have been terminated, the replication queues are all empty, and I can't find what is still making these queries. The warning is logged every 1000 nodes traversed until it reaches the end of the nodes, then it switches to a new item number and does the same thing over and over. The images were imported and deleted over a month ago, but these queries still keep running. The AEM instance has also been stopped, reindexed, and restarted a few times since the images were deleted.
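
(For reference, a standard Oak property index on itemNumber looks roughly like the following node under /oak:index. The node name here is illustrative; only the itemNumber property name is taken from the warning above.)

    /oak:index/itemNumber
        - jcr:primaryType = oak:QueryIndexDefinition
        - type = "property"
        - propertyNames = [itemNumber]   (Name, multi-valued)
        - reindex = true                 (Boolean)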

What could be causing this?

4 Replies

Level 10

Sounds like cf/assetfinder, a custom search console, or an old job executing the query. To obtain the stack trace of the code path that triggers this log, use the following log config [1]

[1]   {0,date,dd.MM.yyyy HH:mm:ss.SSS} *{4}* [{2}] {3} {5} %caller{10}
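
(To expand on that: this is a Sling logging message pattern, and the trailing %caller{10} tells Logback to print the top ten stack frames of whatever code issued the log call, which is what exposes the culprit. One place it can be applied is an "Apache Sling Logging Logger Configuration" in the OSGi console at /system/console/configMgr, scoped to the logger that emits the warning. Sketch below; the logger name and log file are illustrative assumptions, and the pattern is the one quoted above.)

    Factory PID:      org.apache.sling.commons.log.LogManager.factory.config
    Log Level:        warn
    Log File:         logs/query-trace.log
    Logger:           org.apache.jackrabbit.oak.spi.query
    Message Pattern:  {0,date,dd.MM.yyyy HH:mm:ss.SSS} *{4}* [{2}] {3} {5} %caller{10}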

Correct answer by
Level 4

Sham HC wrote...

Sounds like cf/assetfinder, a custom search console, or an old job executing the query. To obtain the stack trace of the code path that triggers this log, use the following log config [1]

[1]   {0,date,dd.MM.yyyy HH:mm:ss.SSS} *{4}* [{2}] {3} {5} %caller{10}

 

All of the jobs have been purged. I can run similar queries in the CRX/DE query tool and they return fairly quickly. All of the replication queues are empty.
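
(For anyone wanting to reproduce this, the XPath from the warning above can be pasted into the CRX/DE query tool as-is:

    /jcr:root/content/dam/images//element(*, nt:unstructured)[fn:name() = 'metadata' and @itemNumber = 'S-10572']

Run interactively it comes back quickly, which is what makes the constant background traversal so puzzling.)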

Where do I use that formatted string? And where would I look for the cf/assetfinder?

Level 4

It now looks like it is the Adobe Scene7 image server trying to process images that were uploaded and then deleted before the image server could create the thumbnails. Now I need to find out where the data is stored for the image server to process the images. It looks like it doesn't give up on failed images that have been deleted but keeps retrying them forever. Because the client dumps a huge number of images, can't wait for them to process, and deletes them, they keep building up an ever larger backlog of images that will never process.

Level 4

Just an FYI update.

The process was a scheduled job that I didn't know about. After finding out what it was and what it was doing, I was able to create an Oak index that cut the processing time from almost a day down to just a few minutes. The easiest way to stop it entirely is to not put the file it processes in the expected folder. No file, no work.
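
(If anyone hits something similar: one quick way to confirm the new index is actually being picked up, rather than the query still traversing, is to prefix the statement from the warning with explain in the CRX/DE query tool, e.g.

    explain select [jcr:path], [jcr:score], * from [nt:unstructured] as a
    where name(a) = 'metadata' and [itemNumber] = 'S-10572'
    and isdescendantnode(a, '/content/dam/images')

The reported plan should mention the property index rather than a traversal. The explain prefix is standard Oak behaviour, not anything specific to this setup.)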