Hi,
We are using AEMaaCS and have implemented GraphQL APIs using Content Fragments. The issue we are facing is that when we update a Content Fragment and publish the change, it is reflected on the author endpoint, but it takes a very long time to show up on the publish endpoint. We publish the CF models and the CF instances, yet the changes never appear immediately on the publish endpoint.
Can you please suggest what steps need to be taken to solve this issue? Do we need to make any specific change on the dispatcher to achieve this?
I am looking for some guidance/steps to resolve this issue.
Thanks,
Hi @tnik
Regarding the delay in Content Fragment (CF) updates on the publish endpoint in AEM as a Cloud Service (AEMaaCS), there are a few steps you can take to investigate and potentially resolve the issue:
1. Check Replication Status: Verify that the replication of the updated Content Fragments is successful. In AEMaaCS, the replication process is responsible for propagating changes from the author environment to the publish environment. You can check the replication status by navigating to the Replication Status page (`/libs/granite/operations/content/diagnosistools/replicationstatus.html`) in the AEM author environment. Ensure that the Content Fragments you updated are successfully replicated to the publish environment.
2. Cache Invalidation: If replication is successful but the changes are not reflected immediately on the publish endpoint, it could be due to caching. By default, AEM uses a caching mechanism to improve performance, which can delay updates from appearing on the publish endpoint. To address this, you can configure cache invalidation rules on the dispatcher so that updated Content Fragments are not served from the cache; you can define specific invalidation rules for the URLs associated with your Content Fragments. A quick way to confirm whether caching (rather than replication) is causing the delay is sketched after this list.
3. Dispatcher Configuration: Review your dispatcher configuration to ensure that it is properly configured for AEMaaCS. The dispatcher plays a crucial role in caching and serving content in AEM. Make sure that the dispatcher is configured to communicate with the AEMaaCS environment and that the cache invalidation rules are correctly set up.
4. Adobe Support: If the above steps do not resolve the issue, it is recommended to reach out to Adobe Support for further assistance. They can help investigate the specific configuration and environment details to identify any potential issues or provide guidance on resolving the delay in Content Fragment updates.
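To quickly check whether the delay comes from caching rather than replication (point 2 above), here is a minimal diagnostic sketch in TypeScript (Node 18+). The publish host, persisted-query path, and the `nc` parameter name are assumptions for illustration; replace them with your own values. It fetches the same persisted query twice from the publish tier, once normally and once with a cache-busting parameter, and compares the two responses:

```typescript
// Diagnostic sketch: compare a cached vs. a cache-busted GraphQL response
// on the publish tier. Host and persisted-query path are placeholders.
const PUBLISH_HOST = "https://publish-p12345-e67890.adobeaemcloud.com";
const PERSISTED_QUERY = "/graphql/execute.json/my-project/article-by-path";

async function fetchQuery(cacheBust: boolean): Promise<string> {
  // A unique query parameter changes the cache key on the dispatcher/CDN,
  // so the request is served fresh from the publish instance.
  const url = PUBLISH_HOST + PERSISTED_QUERY + (cacheBust ? `?nc=${Date.now()}` : "");
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.text();
}

async function main(): Promise<void> {
  const cached = await fetchQuery(false);
  const fresh = await fetchQuery(true);
  if (cached === fresh) {
    console.log("Responses match: no stale cache in front of publish.");
  } else {
    console.log("Responses differ: replication worked, the delay is cache TTL.");
  }
}

main().catch(console.error);
```

If only the cache-busted request returns the latest content, replication to publish has already happened and the remaining delay is purely the dispatcher/CDN TTL.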
Thanks.
Hi @aanchal-sikka,
Are you referring to reducing the below time param? I have reduced it to 30 seconds, so will the query response be refreshed within 30 seconds if the underlying CF is updated?
This should work. Let the previous TTL expire, then publish the CF and try the query after 30 seconds.
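For reference, here is a minimal sketch (TypeScript on Node) of how a persisted query's cache-control could be updated via the API so the cached response expires after 30 seconds. The host, project name, query name, credentials, and query body are placeholders, and the exact payload shape should be verified against the AEM GraphQL documentation:

```typescript
// Sketch: update a persisted query so its cached response expires after
// 30 seconds. Host, project, query name, and credentials are assumptions.
const AUTHOR_HOST = "https://author-p12345-e67890.adobeaemcloud.com";
const QUERY_PATH = "/graphql/persist.json/my-project/article-by-path";

async function setCacheControl(): Promise<void> {
  const body = {
    // The GraphQL query itself is re-sent when updating the persisted query.
    query: "{ articleList { items { _path title } } }",
    // Hedged: per the AEM GraphQL documentation, cache-control parameters
    // such as max-age can be supplied alongside the query text.
    "cache-control": { "max-age": 30 },
  };

  const response = await fetch(AUTHOR_HOST + QUERY_PATH, {
    method: "PUT",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Basic " + Buffer.from("admin:admin").toString("base64"),
    },
    body: JSON.stringify(body),
  });
  console.log(`Persisted query update status: ${response.status}`);
}

setCacheControl().catch(console.error);
```

Depending on your setup, the same TTL can also be managed from the GraphiQL IDE when saving the persisted query.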
Hi @aanchal-sikka,
I updated the setting, but the query response is not getting updated 30 seconds after the CF update and publish. Do we need to do anything on the dispatcher side?
If I add the no-cache param to the endpoint, e.g. "nc=50", then I see the updated response, so the cache is not getting flushed as per the time set in the GQL header param above.
Thanks
The GraphQL response is cached on both the CDN and the dispatcher (TTL based). That is why you notice the updates a little later on publish.
The AEM GraphQL API allows you to update the default cache-control parameters of your queries in order to improve performance/cache refresh. Please refer to:
Alternatively, if you want to forcefully flush the cache, please refer to https://experience-aem.blogspot.com/2023/01/aem-cloud-service-invalidate-dispatcher-purge-fastly-cdn...
As a workaround, we are using a nocache param in the client call to bypass the cache at runtime. This is working as expected with a significant amount of data, and we are not facing any performance issues at the moment.
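For anyone else hitting this, a minimal sketch of that workaround in TypeScript: a small wrapper that appends a changing nocache parameter to every persisted-query call so the dispatcher/CDN cache key never matches. The host, path, and parameter name are assumptions:

```typescript
// Sketch of the nocache workaround: every request gets a unique "nc" value,
// so it bypasses the dispatcher/CDN cache and hits publish directly.
// Host and persisted-query path are placeholders.
const PUBLISH_HOST = "https://publish-p12345-e67890.adobeaemcloud.com";

async function executePersistedQuery(queryPath: string): Promise<unknown> {
  const url = `${PUBLISH_HOST}${queryPath}?nc=${Date.now()}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`GraphQL request failed: ${response.status}`);
  }
  return response.json();
}

// Example usage with an assumed persisted query.
executePersistedQuery("/graphql/execute.json/my-project/article-by-path")
  .then((data) => console.log(data))
  .catch(console.error);
```

Note that this bypasses the cache on every call, so all traffic is served directly by the publish instances; keep an eye on load if request volume grows.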
@tnik Did you find the suggestions from users helpful? Please let us know if more information is required. Otherwise, please mark the answer as correct for posterity. If you have found the solution yourself, please share it with the community.
Sure @kautuk_sahni. I will mark it as correct once resolved. The issue still exists. Thanks