
SOLVED

Programmatically create replication agents and activate each site via its own exclusive agent


Level 7

Hi,

In our deployment, we are using CQ 5.6 to create web sites dynamically. At the moment we have 10-20 web sites, but we expect to grow to 100-1000 sites soon.

One issue that has been worrying me is that during activation, the single replication agent sometimes gets stuck, and then no changes go through at all.

Is there a way to dynamically/programmatically create one replication agent per web site and activate changes to each site via its own replication agent?

If so, could you please provide me with links to relevant documentation, API docs, and code samples?

Thanks in advance. 


5 Replies


Correct answer by
Level 10

Yes, it's possible. The steps are:

1) Create a reference replication agent and disable it. Make sure "Ignore default" is checked. (This reference agent is required because the agent password is stored encrypted.)

2) When you create a new site, create a new agent by copying the reference agent, apply the required changes, and enable it.

3) When replicating, use an agent filter so that only the agent created in step 2 is used: http://helpx.adobe.com/experience-manager/kb/CQ5ReplicateToSpecificAgents.html
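A minimal Java sketch of steps 2 and 3, assuming a disabled reference agent at /etc/replication/agents.author/template-agent (the template name, the agent naming scheme, and the helper class are illustrative assumptions, not part of the original answer; this must run inside an AEM instance where a Replicator service and a JCR Session are available):

```java
import javax.jcr.Node;
import javax.jcr.Session;

import com.day.cq.replication.Agent;
import com.day.cq.replication.AgentFilter;
import com.day.cq.replication.ReplicationActionType;
import com.day.cq.replication.ReplicationOptions;
import com.day.cq.replication.Replicator;

public class PerSiteReplication {

    private static final String AGENTS_ROOT = "/etc/replication/agents.author/";

    // Step 2: clone the disabled reference agent for a new site and enable it.
    // "template-agent" is an assumed name for the reference agent.
    public static void createAgentForSite(Session session, String agentId,
                                          String transportUri) throws Exception {
        session.getWorkspace().copy(AGENTS_ROOT + "template-agent",
                                    AGENTS_ROOT + agentId);
        Node config = session.getNode(AGENTS_ROOT + agentId + "/jcr:content");
        config.setProperty("transportUri", transportUri);
        config.setProperty("enabled", true);
        session.save();
    }

    // Step 3: replicate a path through one specific agent only,
    // using an AgentFilter in the ReplicationOptions.
    public static void activateVia(Replicator replicator, Session session,
                                   final String agentId, String path) throws Exception {
        ReplicationOptions opts = new ReplicationOptions();
        opts.setFilter(new AgentFilter() {
            public boolean isIncluded(Agent agent) {
                return agentId.equals(agent.getId());
            }
        });
        replicator.replicate(session, ReplicationActionType.ACTIVATE, path, opts);
    }
}
```

Because the template agent is copied, its encrypted password comes along unchanged, which is why the reference agent is needed in the first place.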


Level 8

One word of warning: creating separate replication agents for each site may not actually solve your problem. The most common reason replication queues get stuck is a problem on the publish server, not a queue problem. If your publish server is slow or unable to persist content for some reason, that is often unrelated to the content being published. There are times when the queue gets stuck specifically because of the content being published, or, more rarely, because of an actual problem with the queue itself, but in my experience those cases are less common.

In addition, you will want to consider the impact of maintaining all those site-specific replication agents and queues. Most infrastructures have multiple publish servers and one replication agent per publish server. If you have 1000 sites, that could mean 2000 replication agents. Having to monitor 2000 queues for issues would be a significant effort, as would, say, changing the hostname of a publish server or changing its admin password. You could script those sorts of changes, but they would still be significant.


Level 7

Hi Sham,

Could you please explain how password being encrypted is related?

UPDATE: Just noticed that "Ignore Default" actually prevents the agent from being picked up for default replication.

Also, is this solution reliable? In particular, what would happen when the agent is enabled and a user opens the tree activation page (/etc/replication/treeactivation.html) to activate a path? Wouldn't that path also be pushed to this programmatically enabled agent?

Here is the relevant treeactivation (POST.jsp) code:

if (doActivate) {
    if (!dryRun) {
        try {
            replicator.replicate(session, ReplicationActionType.ACTIVATE, res.getPath());
        } catch (ReplicationException e) {
            out.printf("<div class=\"error\">"
                    + xssAPI.encodeForHTML(i18n.get("Error during processing: {0}", null, e.toString()))
                    + "</div>");
            log.error("Error during tree activation of " + res.getPath(), e);
        }
    }
    aCount++;
}

Since this code does not use any agent filter whatsoever, I am not sure your solution would work here?

Thanks.


Level 7

orotas wrote...

One word of warning: creating separate replication agents for each site may not actually solve your problem. The most common reason replication queues get stuck is a problem on the publish server, not a queue problem. If your publish server is slow or unable to persist content for some reason, that is often unrelated to the content being published. There are times when the queue gets stuck specifically because of the content being published, or, more rarely, because of an actual problem with the queue itself, but in my experience those cases are less common.

In addition, you will want to consider the impact of maintaining all those site-specific replication agents and queues. Most infrastructures have multiple publish servers and one replication agent per publish server. If you have 1000 sites, that could mean 2000 replication agents. Having to monitor 2000 queues for issues would be a significant effort, as would, say, changing the hostname of a publish server or changing its admin password. You could script those sorts of changes, but they would still be significant.

 

Hi Orotas,

The issue with the stuck replication queue, at least for us, was due to a bug that required a hotfix to be installed. We are still not 100% sure the issue is fixed, but so far we haven't seen a stuck queue in our tests after applying that hotfix. We have another replication issue at the moment, caused by a ClassCastException thrown from a built-in CQ5 class (I can't remember which class right now).

Regarding one queue per site, I agree with you. In practice we might just create 10-20 queues and use an algorithm to distribute the replication of sites across those queues. Adobe is also investigating this issue for us, and if they can fix it, we might ditch this approach altogether.
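To sketch what such a distribution algorithm could look like (the class name, the "shared-agent-" naming scheme, and the pool size are all illustrative assumptions): hash the site path onto a fixed pool of shared agents, so the same site always lands on the same queue without any lookup table.

```java
// Sketch: deterministically map a site path to one of a fixed pool of
// shared replication agents.
public class AgentPool {

    // Returns the id of the shared agent responsible for the given site.
    // floorMod keeps the index non-negative even when hashCode() is negative.
    static String agentForSite(String sitePath, int poolSize) {
        return "shared-agent-" + Math.floorMod(sitePath.hashCode(), poolSize);
    }

    public static void main(String[] args) {
        // Each site is always routed to the same queue out of a pool of 16.
        for (String site : new String[]{"/content/site-a", "/content/site-b"}) {
            System.out.println(site + " -> " + agentForSite(site, 16));
        }
    }
}
```

The trade-off of plain hashing is that changing the pool size remaps most sites to different queues; that is harmless here because any agent can replicate any site, and the mapping only balances load.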

Thanks.


Level 10

LinearGradient wrote...

Hi Sham,

Could you please explain how password being encrypted is related?

Thanks.

 

The password of a replication agent is stored encrypted, and you need to use Granite CryptoSupport to generate the encrypted value. To avoid that step when the agents share the same password, it is recommended to keep a master copy of the agent so that you can simply clone it.
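If you do need to set the transport password programmatically instead of cloning a master agent, a sketch of the CryptoSupport step could look like this (the helper class and method are assumptions; obtaining the CryptoSupport service, e.g. via an OSGi reference, is left out, and this only runs inside AEM):

```java
import com.adobe.granite.crypto.CryptoSupport;

// Sketch: encrypt a plain-text password before writing it into a new
// agent's transportPassword property.
public class PasswordHelper {

    public static String encryptForAgent(CryptoSupport crypto, String plainPassword)
            throws Exception {
        // protect() returns the encrypted form the agent configuration expects;
        // skip values that are already encrypted.
        return crypto.isProtected(plainPassword)
                ? plainPassword
                : crypto.protect(plainPassword);
    }
}
```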