Hi @pronic45,
IMO, you won’t be able to use the OOTB replication agents directly to push to S3 - they’re really meant for AEM-to-AEM communication. There isn’t a native S3 transport handler that you can just configure.
That said, there are a couple of ways I’d approach it depending on how tightly you want to integrate with AEM’s replication framework:
- Custom TransportHandler -> you could extend the replication agent mechanism by writing a custom transport that uses the AWS SDK (or the S3 REST API) to drop content into your S3 bucket. This way you still get replication queues and retries, and you can limit it by path (say only /content/dam/myproject). There’s a rough skeleton of this right after the list.
- Workflow step triggered by activation -> probably the simpler option IMO. You add a workflow process (or event handler) that listens to replication/activation events and then pushes the asset or JSON extract to S3. This is clean if you only care about specific directories - sketched a bit further down.
- Other routes -> Sling Distribution or ACS Commons could also be extended, but in practice the two options above are what I’ve seen used most often.
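To make option 1 concrete, here’s a minimal skeleton of a custom TransportHandler. It assumes the AWS SDK for Java v2 is embedded in your bundle, and the s3:// transport URI scheme, class name, and key layout are all made up for illustration - only the TransportHandler interface itself is the real AEM extension point:

```java
import java.io.InputStream;

import org.osgi.service.component.annotations.Component;

import com.day.cq.replication.AgentConfig;
import com.day.cq.replication.ReplicationActionType;
import com.day.cq.replication.ReplicationException;
import com.day.cq.replication.ReplicationResult;
import com.day.cq.replication.ReplicationTransaction;
import com.day.cq.replication.TransportContext;
import com.day.cq.replication.TransportHandler;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

/**
 * Claimed by any replication agent whose transport URI starts with the
 * (made-up) s3:// scheme, e.g. s3://my-aem-mirror-bucket. Because this
 * plugs into the agent itself, you keep queues, retries, and agent logs.
 */
@Component(service = TransportHandler.class)
public class S3TransportHandler implements TransportHandler {

    // Real code would build this in @Activate, close it in @Deactivate,
    // and read region/credentials from OSGi config, not the default chain.
    private final S3Client s3 = S3Client.create();

    @Override
    public boolean canHandle(AgentConfig config) {
        String uri = config == null ? null : config.getTransportURI();
        return uri != null && uri.startsWith("s3://");
    }

    @Override
    public ReplicationResult deliver(TransportContext ctx, ReplicationTransaction tx)
            throws ReplicationException {
        if (tx.getAction().getType() != ReplicationActionType.ACTIVATE) {
            return ReplicationResult.OK; // ignore deactivate/delete in this sketch
        }
        String bucket = ctx.getConfig().getTransportURI().substring("s3://".length());
        String key = tx.getAction().getPath().substring(1); // mirror the repo path
        try (InputStream in = tx.getContent().getInputStream()) {
            // Note: what this stream contains depends on the serialization type
            // configured on the agent (default durbo vs. a custom content builder).
            s3.putObject(PutObjectRequest.builder().bucket(bucket).key(key).build(),
                    RequestBody.fromInputStream(in, tx.getContent().getContentLength()));
            return ReplicationResult.OK;
        } catch (Exception e) {
            // Anything non-OK keeps the item in the agent's queue for retry.
            return new ReplicationResult(false, 0, e.getMessage());
        }
    }
}
```

The nice part is the error path: returning a non-OK ReplicationResult leaves the item queued, which is exactly the retry semantics you don’t get with a fire-and-forget script.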
Personally, I’d lean toward the workflow step if you just need certain folders/files to go to S3, since it’s easier to maintain. If you need proper queuing/retry semantics, then implementing a custom TransportHandler for replication is more robust.
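For that route, here’s a rough sketch. I’ve written it as a plain OSGi EventHandler on the replication event topic rather than a full WorkflowProcess, but the same body works inside a workflow step fired by a launcher. The watched path, bucket name, and the "s3-push" service user mapping are placeholders you’d adapt:

```java
import java.io.InputStream;
import java.util.Collections;
import java.util.Map;

import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

import com.day.cq.dam.api.Asset;
import com.day.cq.dam.api.Rendition;
import com.day.cq.replication.ReplicationAction;
import com.day.cq.replication.ReplicationActionType;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

@Component(service = EventHandler.class,
        property = EventConstants.EVENT_TOPIC + "=" + ReplicationAction.EVENT_TOPIC)
public class S3PushEventHandler implements EventHandler {

    private static final String WATCHED_PATH = "/content/dam/myproject"; // placeholder
    private static final String BUCKET = "my-aem-mirror-bucket";         // placeholder

    @Reference
    private ResourceResolverFactory resolverFactory;

    // Real code: build in @Activate, close in @Deactivate, configure via OSGi.
    private final S3Client s3 = S3Client.create();

    @Override
    public void handleEvent(Event event) {
        ReplicationAction action = ReplicationAction.fromEvent(event);
        if (action == null
                || action.getType() != ReplicationActionType.ACTIVATE
                || !action.getPath().startsWith(WATCHED_PATH)) {
            return; // only react to activations below the watched folder
        }
        // "s3-push" must be mapped to a service user with read access.
        Map<String, Object> auth =
                Collections.singletonMap(ResourceResolverFactory.SUBSERVICE, "s3-push");
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(auth)) {
            Resource resource = resolver.getResource(action.getPath());
            Asset asset = resource == null ? null : resource.adaptTo(Asset.class);
            Rendition original = asset == null ? null : asset.getOriginal();
            if (original == null) {
                return; // not a DAM asset; extend here for pages/JSON extracts
            }
            try (InputStream in = original.getStream()) {
                // Object key mirrors the repository path (minus the leading slash).
                s3.putObject(PutObjectRequest.builder()
                                .bucket(BUCKET)
                                .key(action.getPath().substring(1))
                                .contentType(asset.getMimeType())
                                .build(),
                        RequestBody.fromInputStream(in, original.getSize()));
            }
        } catch (Exception e) {
            // Log it - unlike a replication queue, event handlers won't retry for you.
        }
    }
}
```

One trade-off to be aware of: if the S3 call fails here, nothing re-queues it, which is why I’d only pick this approach when the occasional manual re-activation is acceptable.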
Key things to watch for: large binary handling (multipart uploads), IAM permissions (scope them to the bucket), and making sure you filter the right paths so you don’t flood S3 with content you never meant to sync.
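On the large-binary point: rather than a single putObject, the v2 SDK’s S3TransferManager (backed by the CRT-based async client) will do multipart uploads and parallel part transfers for you. Bucket, key, and file path below are placeholders:

```java
import java.nio.file.Paths;

import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;

public class MultipartUploadSketch {
    public static void main(String[] args) {
        // The transfer manager splits big files into parts and uploads in parallel.
        try (S3TransferManager tm = S3TransferManager.create()) {
            UploadFileRequest request = UploadFileRequest.builder()
                    .putObjectRequest(b -> b.bucket("my-aem-mirror-bucket")
                                            .key("content/dam/myproject/large-video.mp4"))
                    .source(Paths.get("/tmp/large-video.mp4"))
                    .build();
            FileUpload upload = tm.uploadFile(request);
            upload.completionFuture().join(); // blocks until all parts complete
        }
    }
}
```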
So, short answer: not OOTB, but yes - with a little customization you can definitely have a push mechanism from Author -> S3 without resorting to a side-loaded script.
Santosh Sai

