Hello,
I have a couple questions about how the Dispatcher performs load balancing and penalizes unavailable renders.
Hopefully someone here will have useful insight.
The documentation for the dispatcher's "statistics" and "unavailablePenalty" settings is a little ambiguous about how the load balancing is actually performed.
Here's the documentation: https://docs.adobe.com/docs/en/dispatcher/disp-config.html#par_168_66_0011
Is there any documentation that details what information is kept in the "statistics" and how that information relates to the prioritization of renders for load balancing? Is the algorithm documented anywhere?
The issue is that without knowing how the Dispatcher performs the load balancing, it is difficult to configure and tune it. Specifically, the "unavailablePenalty" property is of interest to us. The documentation simply states that the unavailablePenalty property "sets the time (in tenths of a second) that is applied to the render statistics when a connection to the render fails. Dispatcher adds the time to the statistics category that matches the requested URI."
This does not help us determine what a meaningful value for this property would be. To properly tune this setting we would need details of how the Dispatcher uses these values. For example, I understand that a value of 10 would mean "1 second"; however, I have no idea how 1 second will impact load balancing.
The documentation says that the dispatcher maintains statistics about request response times for the purpose of load balancing. Are normal response times in any way related to the penalties? For example, if we have a normal response time of 2 seconds for a render, does a penalty (applied when the render does not respond) of 0.5 seconds actually penalize the render MORE than a valid response of 2 seconds would?
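For context, here's roughly the relevant part of our farm configuration. This is only a sketch modeled on the sample dispatcher.any in the documentation; the category globs and the penalty value are placeholders, and the farm-level placement of /unavailablePenalty is my assumption, so correct me if that's wrong:

/statistics
  {
  /categories
    {
    /search { /glob "*search.html" }
    /html { /glob "*.html" }
    /others { /glob "*" }
    }
  }
# value is in tenths of a second, so "5" would be the 0.5-second penalty from my example above
/unavailablePenalty "5"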
Thank you for your insight,
Kyle
Hi Kyle,
I've been working with the dispatcher for a very long time, and I've never seen the dispatcher being used as a load balancer for these reasons:
So I don't have a use for this feature.
kind regards,
Jörg
[1] https://cqdump.wordpress.com/2015/01/12/connecting-dispatchers-and-publishers/
The Dispatcher has been Adobe's/CQ's standard load balancer. Not sure why you say you've never seen it as an LB @Jörg_Hoh
It's right at the front of the documentation and is what Adobe has recommended from the start, beginning with CQ5:
https://experienceleague.adobe.com/en/docs/experience-manager-dispatcher/using/dispatcher
I have never seen the dispatcher being used as a load balancer, nor am I aware of anyone using it in this role.
In 2011 someone in a project of mine tested the dispatcher as a load balancer and connected it to 6 publish instances. It distributed the requests in a very uneven way (the first publish got 40% of the requests, the last one 4%), which basically confirmed my assumption. The dispatcher also lacks any of the sophisticated features I would expect from a load balancer, starting with observability and not ending with the lack of working sticky connections.
I have read your suggestion of only using a 1:1 ratio.
Unfortunately, the 1:1 setup, although simpler, has a significant drawback that is an issue for our use case. We have hundreds of thousands of cacheable assets, and our caches are cleared quite frequently due to frequent activations.
In a 1:1 setup, each dispatcher (and each cache) has only one render powering it. To fully repopulate its cache, each dispatcher would have to request every cacheable resource at least once, which means each publish instance (render) would have to actually render every single asset at least once; with four dispatcher/publish pairs, for example, every asset gets rendered four times in total, once per cache. When multiple publishers back a single dispatcher/cache, repopulating the cache is much cheaper: the first request for a resource is cached and re-used for subsequent requests regardless of which publisher rendered it. In a 1:1 setup that cached copy is only valid on the dispatcher that served it; if the same request hits a different dispatcher, that dispatcher does not have it cached and a publisher has to render it again.
So, knowing that the dispatcher significantly outperforms a publisher if and only if the requested resource exists in the dispatcher's cache, it makes sense to maximize the cache hit ratio. One way of doing this is to reduce the number of caches (dispatchers) but put more rendering power behind each cache. This is all a moot point if the caches rarely get cleared and are easy to repopulate, but it becomes a significant problem when they are cleared frequently and are difficult to fully repopulate before being cleared again.
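To be concrete about the topology I'm describing, it's just the standard multi-render farm, i.e. a /renders section with more than one publish instance behind a single dispatcher/cache (the hostnames and ports here are placeholders):

/renders
  {
  /rend01 { /hostname "publish1.example.local" /port "4503" }
  /rend02 { /hostname "publish2.example.local" /port "4503" }
  /rend03 { /hostname "publish3.example.local" /port "4503" }
  }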
Anyway, my question is really just about this semi-documented dispatcher feature. If we stay with a multiple-publishers-per-dispatcher setup, I'm sure we could place a proper load balancer between the publishers and the dispatcher (emulating a 1:1 setup, since the dispatcher would only see the load balancer as its single render).
Putting all of that aside: the feature does currently exist in the dispatcher module, so does anyone know how it works and how to configure it, or is it just a write-off?
Hi,
my personal answer is: I wouldn't use it.
kind regards,
Jörg