Hello,
I have a workflow running every 15 minutes that exports the latest data from broadLogEventHisto to an external service for tracking sends.
The query is set up as follows:
query: broadLogEventHisto
@eventDate > @lastAuditTS
AND @eventDate < SubMinutes(GetDate(), 30)
So, the query gathers all records between @lastAuditTS (which is equal to the most recent eventDate value from the previous run) and 30 minutes before the current time, and it runs every 15 minutes. Generally, we can expect the time windows to look like the following:
1:00pm run - selects broadLogEventHisto records from 12:15pm-12:30pm
1:15pm run - selects broadLogEventHisto records from 12:30pm-12:45pm
1:30pm run - selects broadLogEventHisto records from 12:45pm-1:00pm
The issue arises when we combine all of these datasets to get sums of transactional messages sent. Generally, we are finding that this export does not capture all transactional message sends. Does broadLogEventHisto behave in a way that would prevent some sends from being captured by this query?
E.g., if a send has an eventDate of 12:15 pm but is not populated into the table until 1:30pm, then it wouldn't be captured by this query.
Hi @SkylerQu2 ,
Your current setup exports data every 15 minutes, but some records might be logged late, meaning they fall outside your query’s time range and never get captured.
Here are some fixes that worked for me:
1. Use @lastModified Instead of @eventDate
Rather than relying on when the event occurred, track when the record was actually inserted or updated in the schema. This will ensure that even delayed records are picked up (see the sketch after this list).
2. Schedule an Hourly Reconciliation Check
In addition to your regular 15-minute exports, run a backup workflow every hour that queries for any records that were logged late (within the past 1-2 hours). Export these missing records as well.
This way, no data slips through the cracks.
3. Track and Compare Exported Records
Keep a log of how many records are processed in each run and store the exported count in an audit table.
Compare it against expected volumes.
If the count is lower than usual, trigger a retry to fetch data from the last 2 hours.
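For point 1, a minimal sketch of the condition, written in the same shorthand as your original post; it assumes @lastAuditTS is changed to store the most recent @lastModified value from the previous run rather than the most recent eventDate:

query: broadLogEventHisto
@lastModified > @lastAuditTS
AND @lastModified < SubMinutes(GetDate(), 30)

Because @lastModified changes again whenever a record is updated (status changes, retries, and so on), the same broadlog can be selected more than once, so the external service would need to deduplicate or upsert on the record key.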
Hope this helps!
Regards,
Abhinav
Hi @abhinav99, thank you for your answer.
1) Unfortunately, if we use @lastModified, it results in duplicate records being picked up. Is there another value we can use, like a 'creation date' value?
Thanks,
Skyler
Hi @SkylerQu2 ,
For this case you can use an Incremental Query.
An incremental query lets you periodically select a target based on a criterion, while excluding the people already targeted for this criterion.
Incremental Query:
query: broadLogEventHisto
@eventDate on or after DaysAgo(1)
The above condition fetches records created in the last day. Because this is an incremental query, even if you schedule it to run every 15 minutes, it will only return records that have not already been targeted by a previous run. So, as per your example, even though the record with an eventDate of 12:15 pm is only populated into the table at 1:30 pm, it will still come out in either the 1:30 pm run or the following 1:45 pm run.
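If records can occasionally land in the table more than a day after their eventDate, the same incremental pattern should still work with a wider lookback; a sketch, assuming a few days is the worst-case lag you need to cover:

query: broadLogEventHisto
@eventDate on or after DaysAgo(5)

Because the incremental query excludes records it has already targeted, the wider window does not re-export the same records on every run.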
Hello @SkylerQu2, the best way would be to pick the data based on the lastModified date and then, in the external service, update the data based on the primary key.
Just pulling in the latest data won't work, as the delivery status can change and you wouldn't have the updated information in the external service. Keep in mind that the delivery logs can take up to 5 days to reach their final state, depending on retries, bounces, and delivery validation.
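To make that concrete, a minimal sketch in the same shorthand as the original question; it assumes @lastAuditTS now holds the most recent @lastModified value from the previous run, and that the export also carries the broadlog primary key (e.g. @id) and @status so the external service can update-or-insert on that key:

query: broadLogEventHisto
@lastModified > @lastAuditTS

Keyed on the broadlog id, the rows that @lastModified re-selects after retries, bounces, or delivery validation simply overwrite the earlier copy in the external service instead of piling up as duplicates.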