Hi
We're doing some testing to enable load balancing in the dispatcher. Currently we tie each dispatcher to a single publish instance, but this isn't great for scalability (we still end up with multiple dispatcher/publisher pairs and do the balancing in an appliance). We're now trying load balancing within the dispatchers, with all of them talking to the same set of publishers. However, performance tests indicate that load balancing this way is slower.
Initially I had some issues getting the dispatchers to spread load evenly, but writing better statistics rules seems to have helped. We're still seeing a noticeable increase in page response times, though. Does anybody have experience of this? The only thing left to assume is that the decision about which renderer to use is causing the increase. It doesn't seem to be something trivial like round robin, and there doesn't seem to be much of a way to tune it. Any advice would be greatly appreciated.
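For context, the kind of farm setup we're experimenting with looks roughly like this (a sketch only; the hostnames and category globs here are placeholders, not our real config):

```
/farms
  {
  /website
    {
    # Multiple renders in one farm: the dispatcher balances across these
    /renders
      {
      /publish1 { /hostname "publish1.example.com" /port "4503" }
      /publish2 { /hostname "publish2.example.com" /port "4503" }
      }
    # Statistics categories feed the "fastest renderer" decision;
    # finer-grained globs gave us a more even spread
    /statistics
      {
      /categories
        {
        /html   { /glob "*.html" }
        /others { /glob "*" }
        }
      }
    }
  }
```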
Cheers
Mikey
Mike,
It is not recommended to have an architecture where a single dispatcher talks to multiple publishers. It means that multiple publishers will have flush agents pointing at the same dispatcher. Think of a situation where the same file gets flushed/invalidated several times within a matter of seconds because one publisher got the activation a second later than the other. The easiest approach is a 1:1 ratio, with an LB in front of the dispatchers that balances across them, instead of the dispatchers doing the load balancing.
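As a sketch of that recommended shape (assuming, purely as an example, nginx as the LB; hostnames are placeholders), the LB fans requests out to the dispatchers, and each dispatcher keeps a single render:

```nginx
# Hypothetical LB config: balance across the dispatchers,
# each of which talks to exactly one publish instance.
upstream dispatchers {
    least_conn;                      # route to the least-busy dispatcher
    server disp1.example.com:80;
    server disp2.example.com:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://dispatchers;
        proxy_set_header Host $host;
    }
}
```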
Hope this helps.
Thanks for the reply. I'm interested, though: is flushing the only reason it's not recommended? It seems not many people are using the load-balancing aspect of the dispatcher, and I'm wondering why. It doesn't seem very flexible in terms of config.
Please have a look at https://cqdump.wordpress.com/2015/01/12/connecting-dispatchers-and-publishers/ written by Joerg Hoh
Thanks, very useful. It seems I've solved most of the management issues Joerg mentions by having elastic scripts that monitor service discovery of publishers via DNS and automagically add/remove them as they come and go. I think it still remains unanswered why not to use load balancing in the dispatcher. Even Joerg suggests using an intermediate load balancer between dispatcher and publisher, though.
Is load balancing really bad in the dispatcher?
For an LB to work properly, it must be the only component routing to the servers. Only then can it gather stats and route requests optimally. In the m:n scenario, for example, disp1 routes to AEM1 while disp2 also tries to route to AEM1, which is already handling a request. disp2 has no way of understanding why AEM1 is busy, because it doesn't know disp1 is also routing to the same AEM1; it just records that AEM1 is busy. If there were no disp1, disp2 would know that AEM1 is busy for x seconds because of a particular type of request, and could then route optimally, since it would know everything about the servers it routes to as the only LB in front of them.
Hence in the m:n scenario you would rather have a single LB in front of all the dispatchers connecting to the AEM servers. That makes for a simpler configuration as well: you can manage the list of AEM servers in the LB itself and don't have to update every dispatcher. Also, m:n is generally only seen to be worthwhile when there are more than four publish servers; below that it adds too much complexity.
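To illustrate the stats-blindness point with a toy example (pure illustration, not dispatcher internals): two dispatchers that each keep their own independent round-robin state can repeatedly pick the same publisher at the same moment, while a single shared LB never doubles up within a pair of simultaneous requests.

```python
from itertools import cycle

backends = ["AEM1", "AEM2"]

# m:n scenario: two dispatchers, each with its own round-robin state,
# neither aware of the other's routing decisions.
disp1 = cycle(backends)
disp2 = cycle(backends)

# Requests arrive at both dispatchers at roughly the same time;
# both counters start in phase, so every pair hits the same backend.
mn_pairs = [(next(disp1), next(disp2)) for _ in range(4)]

# 1 LB in front: a single shared round-robin sees every request,
# so each simultaneous pair is spread across both backends.
shared = cycle(backends)
lb_pairs = [(next(shared), next(shared)) for _ in range(4)]

collisions_mn = sum(a == b for a, b in mn_pairs)
collisions_lb = sum(a == b for a, b in lb_pairs)
print(collisions_mn, collisions_lb)  # 4 0
```

In reality the dispatchers' statistics will eventually drift out of phase, but the underlying problem stands: each dispatcher can only reason from its own partial view of backend load.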