tibormolnar
Level 4
March 27, 2025
Solved

Best Practice for processing many bundles

  • 1 reply
  • 1404 views

Hi All,

This is a rather generic question about how best to handle a high number of search results.

The Workfront Search module in Fusion gives you the option to set a limit on the number of search results returned, but what if I need all matching records and the number can be high (several hundred)? For example, I need to pull a list of all Assignments in a project and process each of them. For a large and complex project with many tasks, this list can be extensive.

How can I make sure that all matching records are returned and processed, without risking hitting the 40-minute runtime limit, etc.?

Is there a way to obtain and process them in batches?

Any ideas are appreciated.

Thanks,

Tibor

1 reply

Sven-iX
Community Advisor
Accepted solution
March 27, 2025

Hi @tibormolnar, here is what I've used before:

  • First get a count of items, then fetch batches of up to 2000 records each until you exhaust all items
  • Do the processing in a second scenario that is called from the first one, once per batch. This "multi-threads" your scenario.

 

Example: 

  1. customAPI module with the query, but instead of the "search" action use "count", which gives you the number of matching records
  2. setVar modules to calculate the batches - you can go up to 2000 records per batch or anything smaller
    1. set the batch size
    2. calculate the number of batches = ceil( count / batchSize ), rounding up so a partial final batch is included
  3. Repeater module with the number of steps set to the number of batches
  4. setVar to define start (the first item of the batch) = ( the "i" from the repeater - 1 ) * batchSize
  5. send these parameters to the "worker" scenario, ideally passing along the query from the count module (see the sketch after this list)
  6. in the worker, do the Search based on the query, start, and batchSize
  7. process the found items
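To make the arithmetic concrete, here is a minimal sketch in Python against Workfront's REST API. In Fusion these steps are modules rather than code, and the host, API version, and apiKey parameter below are assumptions to adjust for your instance:

```python
import math
import requests

# Assumed Workfront host and API version; adjust to your instance
API = "https://yourcompany.my.workfront.com/attask/api/v19.0"
AUTH = {"apiKey": "<your API key>"}

BATCH_SIZE = 2000                      # step 2.1: Workfront's maximum $$LIMIT
query = {"projectID": "<project ID>"}  # the same query for both count and search

# Step 1: use the "count" action instead of "search"
count = requests.get(f"{API}/assgn/count", params={**query, **AUTH}).json()["data"]["count"]

# Step 2.2: round up so a partial final batch is not lost
batches = math.ceil(count / BATCH_SIZE)

# Steps 3-4: the repeater's "i" runs 1..batches; start offsets are zero-based
for i in range(1, batches + 1):
    start = (i - 1) * BATCH_SIZE
    # Step 5 would hand {query, start, BATCH_SIZE} to the worker scenario.
    # Steps 6-7, worker side: search one batch and process the records
    params = {**query, **AUTH, "$$FIRST": start, "$$LIMIT": BATCH_SIZE}
    items = requests.get(f"{API}/assgn/search", params=params).json()["data"]
    print(f"batch {i}: {len(items)} assignments")
```

For example, a count of 4100 with a batch size of 2000 yields ceil(4100 / 2000) = 3 batches, starting at records 0, 2000, and 4000.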
viovi
Level 4
June 20, 2025

@sven-ix, do you have an example of such a batch split setup?


 
 
tibormolnar
Level 4
July 8, 2025

Ok, I figured this out and was able to split it into 3 batches:

 

So, another question is how to pass them one at a time to another scenario for processing?

For example, pass batch 1 to another scenario; once it has finished processing there, pass batch 2, process it, then batch 3, and so on.

Any ideas?

 


Hi @viovi,

I haven't tried this myself yet, but I assume the idea here is that the 1st scenario, the one that creates the batches, does not wait for the 2nd scenario to finish processing the 1st batch before sending it the 2nd batch. If it did wait, its total runtime would be just as long as if it were doing the whole (unsplit) processing itself.

Instead, the 1st scenario would create the N batches, trigger the 2nd scenario N times, and then end. The 2nd scenario would then run N times, potentially partly in parallel.

 

As for how to pass the batches between the 1st and 2nd scenario, read this article:

https://experienceleague.adobe.com/en/docs/workfront-fusion/using/references/apps-and-their-modules/universal-connectors/webhooks-updated#supported-incoming-data-formats

Basically, your 1st scenario would send an HTTP request to the URL of the webhook that is the 1st module in your 2nd scenario. The data can be passed in different ways; considering that your data set is large, you should probably use JSON.
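As an illustration, here is a minimal sketch of that hand-off, assuming a hypothetical custom-webhook URL for the 2nd scenario and hypothetical field names in the payload:

```python
import requests

# Hypothetical URL of the custom webhook that starts the 2nd (worker) scenario
WORKER_WEBHOOK = "https://hook.app.workfrontfusion.com/abc123"

# One JSON payload per batch: the query plus this batch's paging window
payload = {
    "projectID": "<project ID>",  # the query from the count module
    "start": 2000,                # first record of this batch
    "limit": 2000,                # batch size
}

# The POST returns as soon as Fusion accepts the webhook, not when the worker
# scenario finishes, so the 1st scenario can fire all N batches and then end.
resp = requests.post(WORKER_WEBHOOK, json=payload)
print(resp.status_code, resp.text)  # typically 200 "Accepted"
```

Because each call returns as soon as the webhook is accepted, the worker runs can overlap, which is what keeps the 1st scenario well under the runtime limit.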

 

I hope this helps,

Tibor