Need guidance on Adobe I/O Journaling API | Community
Solved

Need guidance on Adobe I/O Journaling API

  • April 15, 2025
  • 1 reply
  • 555 views

Hi Team,

 

We are using Adobe I/O Events to trigger download events, then fetching those events via the Journaling API endpoint, and finally pushing them to third-party applications.

 

When we fetch events from the journaling queue, we get at most 3 events per GET request.

The documentation [1] states: "depending on the traffic of the events associated with your registration, the number of events returned in a single response batch varies: a batch of events contains at least one event (if you are not already at the end of the journal), but there is no pre-defined upper limit." We want to fetch more events per batch, so we tried the `limit` parameter, but that did not help us get most of the events in one request either.

 

Now my questions are:

1. Why do journaling batches contain just 2-3 events at a time? I can't see the full queue of events with a Postman GET request.

2. How can I pull the maximum number of events from the journaling queue?

3. How does this journaling work?

4. What is the risk of returning all events at once from the journaling queue?

 

[1] https://events-va6.adobe.io/events/organizations/21791/integrations/669203/792eb21e-c34f-4a34-938a-b7507468cde9 

 

Thanks,

SD

Best answer by sarav_prakash

1 reply

sarav_prakash
Community Advisor
Accepted solution
April 15, 2025

Hi @sdusane, these are valid questions when using the journaling queue for the first time. I wrote an article on how we leveraged journaling, with code examples at each layer: https://medium.com/@bsaravanaprakash/building-content-supply-chain-using-workfront-aio-journaling-appbuilder-runtime-actions-finally-4d7a6ca0fbf2

 

So let's break it down:

  1. Why do journaling batches have just 2-3 events at a time? Why can't I see the full queue of events with a Postman GET request? - In an event-driven architecture we don't treat the journal like a plain REST API. We don't write a single servlet or Postman request to read all events and process them immediately; instead we build listeners/subscribers. The listener wakes up whenever events are available in the journal. On AWS that's a Lambda, on Azure a Function; here in Adobe AIO it's a Runtime action. You write a non-web action, subscribe to the journaling queue, and process the events.
  2. How can I pull the maximum number of events from the journaling queue? - Use an RxJS observable. Per Step 3 in my article, you would write a function like this (imports added for completeness):

```javascript
const sdk = require('@adobe/aio-lib-events');
const { of } = require('rxjs');
const { filter, concatMap } = require('rxjs/operators');

async function fetchEventsUsingSDK(state, since = '') {
  logger.info('Checking events after index: ' + since);
  const client = await sdk.init(imsOrgId, clientId, oauthToken);
  const baseURL = `https://api.adobe.io/events/organizations/${consumerOrgId}/integrations/${credId}/${registrationId}`;
  const journalOptions = since ? { since } : undefined;
  const journalObservable = await client.getEventsObservableFromJournal(
    baseURL,
    journalOptions
  );

  journalObservable
    .pipe(
      // skip entries without a position or payload
      filter((evt) => evt.position && evt.event.data),
      // process events strictly one at a time
      concatMap((evt) =>
        of(evt).pipe(concatMap((event) => processEvent(event).then(() => event)))
      )
    )
    .subscribe(
      async (x) => await saveEventIndexToState(state, x), // onNext: persist last-read position
      (e) => logger.error('onError: ' + e.message),       // onError
      () => logger.log('onCompleted')                     // onComplete
    );
}
```

     This will fetch all the events while the action is awake.
  3. How does this journaling work? See the detailed documentation. The idea is that it behaves like an Apache Kafka queue: first-in, first-out. Different Adobe products or custom event publishers push events into the queue, the journal stores events for up to 7 days, and different subscribers can listen and consume them. The same event can be consumed by multiple subscribers.
  4. What is the risk of returning all events at once from the journaling queue? There is no risk in reading out and processing all events at the same time; the journaling queue doesn't restrict you. The problem is usually on the listener/subscriber side. Say you are reading an event, downloading an asset, and writing into some third-party DB: the listening end has capacity limits. You might not be allowed to download more than a certain number of assets, or to write too many records in parallel, so you need to throttle and process events one by one. Even the simplest operation, logging the event, has its boundary; you might not be allowed to log a million events into your Splunk at the same time. So it's not a risk of the journal queue; it's a risk of the subscriber processing the events.

Hope this clarifies. 

SDusane (Author)
Level 3
April 16, 2025

Hi @sarav_prakash ,

 

Thanks for the detailed answers; I will work on the recommendations.

 

Regards,

SD