Hi All,
I have set up a Docker image pointing to my publish instance.
Previously the setup was running fine: when I hit localhost:8080, it rendered content from the publish instance.
But lately I get a 502 error even though no changes have been made. Here are the logs:
[05/Jul/2022:07:45:34 +0000] "GET /content/xyz/us/en/home-page.html HTTP/1.1" 502 none [publishfarm/-] 54054ms "localhost:8080"
[Tue Jul 05 07:47:00.946021 2022] [dispatcher:warn] [pid 65:tid 140116585622328] [client 172.17.0.1:51476] Unable to connect socket to host.docker.internal:4507: Operation timed out
[Tue Jul 05 07:47:00.946227 2022] [dispatcher:warn] [pid 65:tid 140116585622328] [client 172.17.0.1:51476] Unable to connect to any backend in farm publishfarm
[Tue Jul 05 07:47:11.957133 2022] [dispatcher:warn] [pid 65:tid 140116585622328] [client 172.17.0.1:51476] Unable to connect socket to host.docker.internal:4507: Operation timed out
[Tue Jul 05 07:47:11.957233 2022] [dispatcher:warn] [pid 65:tid 140116585622328] [client 172.17.0.1:51476] Unable to connect to any backend in farm publishfarm
[Tue Jul 05 07:47:22.968211 2022] [dispatcher:warn] [pid 65:tid 140116585622328] [client 172.17.0.1:51476] Unable to connect socket to host.docker.internal:4507: Operation timed out
[Tue Jul 05 07:47:22.968280 2022] [dispatcher:warn] [pid 65:tid 140116585622328] [client 172.17.0.1:51476] Unable to connect to any backend in farm publishfarm
[Tue Jul 05 07:47:33.979010 2022] [dispatcher:warn] [pid 65:tid 140116585622328] [client 172.17.0.1:51476] Unable to connect socket to host.docker.internal:4507: Operation timed out
[Tue Jul 05 07:47:33.979121 2022] [dispatcher:warn] [pid 65:tid 140116585622328] [client 172.17.0.1:51476] Unable to connect to any backend in farm publishfarm
Please note the publish instance is rendering content correctly, so I don't think it's an issue on the AEM end.
Could you please help troubleshoot this issue?
It is clear from the error you shared that the dispatcher is not getting a response from the publish instance on port 4507 and is therefore unable to serve the content. Could you please check the below points?
I advise you to start with point 5, but ideally all the above points are steps to debug and validate the issue. Hope this works for you.
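As a quick way to validate the backend connectivity your logs complain about, here is a rough sketch you could run from inside the dispatcher container (the container name, host, and port are assumptions taken from your logs):

```shell
# check_backend: attempt a TCP connection to host:port with a timeout,
# mirroring what the dispatcher does when it opens a socket to the render.
# Exit status 0 means the port is reachable.
check_backend() {
  host="$1"; port="$2"; secs="${3:-5}"
  timeout "$secs" bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Run from inside the dispatcher container, e.g. via
#   docker exec -it <dispatcher-container> sh
if check_backend host.docker.internal 4507; then
  echo "publish reachable"
else
  echo "publish NOT reachable"
fi
```

If this intermittently prints "publish NOT reachable" while the publish instance itself is healthy, the problem is on the container networking path rather than in the dispatcher farm configuration.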
Hi Jagdeep,
Thanks for the reply. Here are the results from your checkpoints:
1. The dispatcher configuration is correct. I restarted the publish instance and the socket timeout error corrected itself. The error comes up intermittently for some reason, so the dispatcher configuration should not be the issue here.
2. No URL shortening has been done yet.
3. Yes, the issue seems to be on the Docker end, but the dispatcher configuration is validated and the watchers are also established every time.
4. I navigated to the cache path and ran `rm -rf *` to remove all the cached files. The error still persisted when the socket timed out.
5. With a new publish instance the error has not come up yet, but the intermittent error remains unexplained. My guess is that the publish instance throws this error after remaining idle for some time, but for now it's unexplained.
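To pin down the intermittent drops, one thing worth trying is polling the publish instance and logging the failures so they can be correlated with idle periods; a rough sketch (the URL and the 30-second interval are assumptions):

```shell
# probe_publish: print the HTTP status code returned by the given URL
# ("000" when the connection fails or times out).
probe_publish() {
  curl -s -o /dev/null --max-time "${2:-10}" -w '%{http_code}' "$1"
}

# Example: run from inside the dispatcher container and watch for 000s,
# which should line up with the "Unable to connect socket" warnings:
#   while sleep 30; do
#     echo "$(date -u +%T) $(probe_publish http://host.docker.internal:4507/content/xyz/us/en/home-page.html)"
#   done
```

If the 000s appear only after long quiet stretches, that would support the idle-publish theory.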
Hi @Rohan_Garg, as per your logs Docker might not be the issue; it's the dispatcher that is unable to reach the publish instance, even though it's on your local machine. So check your dispatcher configuration for the publish instance hostname and port. It looks something like this:
# The load will be balanced among these render instances
/renders
  {
  /rend01
    {
    # Hostname or IP of the render
    /hostname "127.0.0.1"
    # Port of the render
    /port "4507"
    }
  }
Read this page for more info on the dispatcher.
~Aditya.Ch
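One caveat and one extension to the snippet above. Inside a Docker container, 127.0.0.1 refers to the container itself, so a publish instance running on the host is normally reached via host.docker.internal (which matches the logs). And since the connect attempts seem to time out only intermittently, it may also help to set explicit render timeouts in the same farm; a sketch, with example values rather than recommendations:

/renders
  {
  /rend01
    {
    # Hostname the container uses to reach the Docker host
    /hostname "host.docker.internal"
    /port "4507"
    # Connection timeout in milliseconds (0 = wait indefinitely)
    /timeout "10000"
    # Time in milliseconds the response is allowed to take
    /receiveTimeout "60000"
    }
  }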
Hi Aditya,
Thanks for your reply. I have pointed the render at the localhost URL as suggested, but the issue does not seem to be at that end.
Thanks,
Rohan Garg