I have a long-running script that loops over multiple country pages, their language roots, and hundreds of pages below each language root. It runs on a schedule overnight, but it can also be run manually during the day for one language root at a time.
I would like to transform this into a small script that spawns Jobs for each country, each language and perhaps even each page.
But I am worried I might end up with too many Sling Jobs, making matters worse. How many Sling Jobs is too many? What would be a good approach for this kind of scenario?
As already mentioned by Arun, the system resources can be a guiding principle. But for the granularity, the overall system resources do not play that big a role, because you always restrict the number of concurrent executions of these jobs through the queue configuration.
I would consider these factors when designing the granularity (see the sketch below):
* Error handling and retries: a failed job is retried as a whole, so smaller jobs mean less work is repeated on a retry.
* Observability: with one job per language root, you can see in the job console which parts of an overnight run succeeded, failed, or are still pending.
* Overhead: every job is persisted in the repository, so at page granularity the job-management overhead can start to outweigh the actual work.
* Manual runs: you already process a single language root during the day, so one job per language root maps naturally to that use case.
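A minimal sketch of the per-language-root variant, assuming a custom topic com/example/pageprocessing and a dispatcher component (all names here are hypothetical, not from an existing API):

import java.util.List;
import java.util.Map;

import org.apache.sling.event.jobs.JobManager;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical dispatcher: instead of one long-running script, it enqueues
// one Sling Job per language root and lets the job queue do the throttling.
@Component(service = PageProcessingDispatcher.class)
public class PageProcessingDispatcher {

    // Custom job topic; any topic under your own namespace works.
    static final String TOPIC = "com/example/pageprocessing";

    @Reference
    private JobManager jobManager;

    public void dispatch(List<String> languageRoots) {
        for (String root : languageRoots) {
            // Each job carries only the path it is responsible for; the
            // consumer resolves the pages below that root itself.
            Map<String, Object> props = Map.of("rootPath", root);
            jobManager.addJob(TOPIC, props);
        }
    }
}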
There is no fixed number of active Sling Jobs in Adobe Experience Manager (AEM) that suits all scenarios; the optimal number depends on factors such as system resources, job complexity, and performance requirements.
Here are some considerations to help you determine the optimal number of active Sling Jobs:
System Resources: Consider the available system resources, including CPU, memory, and disk I/O. If the system is resource-constrained, running too many concurrent jobs might lead to performance issues.
Job Complexity: The complexity of the jobs being executed is an important factor. Some jobs may be computationally intensive or involve resource-intensive operations. Consider the nature of the jobs and how they impact system resources.
Job Queues: Sling Jobs are often processed through Job Queues in AEM. Each queue has a maximum parallel setting that limits the number of jobs processed concurrently. Adjust the maximum parallel setting based on system performance and requirements.
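As an illustration, the maximum parallel setting is controlled through a factory configuration for the PID org.apache.sling.event.jobs.QueueConfiguration. A sketch for the hypothetical topic above (queue name, topic, and values are assumptions, not taken from this thread):

{
  "queue.name": "page-processing-queue",
  "queue.topics": ["com/example/pageprocessing"],
  "queue.type": "UNORDERED",
  "queue.maxparallel": 2,
  "queue.retries": 3,
  "queue.retrydelay": 2000
}

With queue.maxparallel capped like this, even hundreds of queued jobs only ever occupy two threads at a time, which is why the sheer number of jobs matters less than the queue configuration.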
I think you should start with a smaller number of jobs and rely on JCR queries to filter the pages, performing operations only on those pages.
We had a similar requirement where we had to check all pages for a link type and adjust those links via a Sling Job.
We had only one Sling Job for three language roots; it runs a query, returns only the impacted subset of pages, and performs the adjustment.
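A minimal sketch of such a consumer, reusing the hypothetical topic from above; the service user mapping, the linkType property, and the query are illustrative placeholders, not the actual implementation described here:

import java.util.Iterator;
import java.util.Map;

import javax.jcr.query.Query;

import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.apache.sling.event.jobs.Job;
import org.apache.sling.event.jobs.consumer.JobConsumer;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    service = JobConsumer.class,
    property = { JobConsumer.PROPERTY_TOPICS + "=com/example/pageprocessing" })
public class PageProcessingConsumer implements JobConsumer {

    @Reference
    private ResourceResolverFactory resolverFactory;

    @Override
    public JobResult process(Job job) {
        String rootPath = job.getProperty("rootPath", String.class);
        // "pageprocessing-service" is a hypothetical service user mapping.
        Map<String, Object> auth =
                Map.of(ResourceResolverFactory.SUBSERVICE, "pageprocessing-service");
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(auth)) {
            // Illustrative JCR-SQL2 query: only pages below the given root
            // that actually carry the property to be adjusted are returned.
            String query = "SELECT * FROM [cq:PageContent] AS page "
                    + "WHERE ISDESCENDANTNODE(page, '" + rootPath + "') "
                    + "AND page.[linkType] = 'legacy'";
            Iterator<Resource> hits = resolver.findResources(query, Query.JCR_SQL2);
            while (hits.hasNext()) {
                adjustLinks(hits.next());
            }
            resolver.commit();
            return JobResult.OK;
        } catch (Exception e) {
            // FAILED lets the queue retry the job according to its configuration.
            return JobResult.FAILED;
        }
    }

    private void adjustLinks(Resource pageContent) {
        // ... adjust the link properties on the page content here ...
    }
}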