What is the best way to measure how long it takes to sync content from the active instance to the standby instance? Ultimately, I want the time from the start of a transfer on the active to the data being stored and finished on the standby.
Currently I am running a script that collects the send and receive logs and looks for these two particular entries:
21.09.2015 11:10:22.145 *DEBUG* [nioEventLoopGroup-3-12] org.apache.jackrabbit.oak.plugins.segment.standby.server.StandbyServerHandler sending segment d3d39a1d-cbb3-48da-a7f4-7106b058fd7d to /127.0.0.1:52954 (from the Active)
21.09.2015 11:10:22.145 *DEBUG* [nioEventLoopGroup-324-1] org.apache.jackrabbit.oak.plugins.segment.standby.codec.ReplyDecoder received segment with id d3d39a1d-cbb3-48da-a7f4-7106b058fd7d and size 5008 (from the Standby)
Then I compare the timestamps to get the latency (active send to standby receive) along with the segment size.
Looking at it, though, I don't know if it accounts for the polling interval correctly (it is currently set to 5 seconds) or if this logically achieves what I'm trying to do.
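For reference, the comparison I'm doing could be sketched like this. This is a minimal example that assumes the two DEBUG lines above are the only relevant formats; the log file names are hypothetical placeholders:

```python
import re
from datetime import datetime

# Regexes matching the two DEBUG lines shown above (segment id is a UUID).
SEND_RE = re.compile(
    r"(\d{2}\.\d{2}\.\d{4} \d{2}:\d{2}:\d{2}\.\d{3}).*"
    r"StandbyServerHandler sending segment ([0-9a-f-]{36})")
RECV_RE = re.compile(
    r"(\d{2}\.\d{2}\.\d{4} \d{2}:\d{2}:\d{2}\.\d{3}).*"
    r"ReplyDecoder received segment with id ([0-9a-f-]{36}) and size (\d+)")

TS_FMT = "%d.%m.%Y %H:%M:%S.%f"


def parse(path, regex):
    """Return {segment_id: (timestamp, extra_groups)} for matching lines."""
    out = {}
    with open(path) as f:
        for line in f:
            m = regex.search(line)
            if m:
                out[m.group(2)] = (datetime.strptime(m.group(1), TS_FMT),
                                   m.groups()[2:])
    return out


def latencies(active_log, standby_log):
    """Yield (segment_id, size_bytes, latency_seconds) for each segment
    that appears in both logs, pairing send and receive by segment id."""
    sent = parse(active_log, SEND_RE)
    received = parse(standby_log, RECV_RE)
    for seg_id, (t_sent, _) in sent.items():
        if seg_id in received:
            t_recv, extra = received[seg_id]
            yield seg_id, int(extra[0]), (t_recv - t_sent).total_seconds()


if __name__ == "__main__":
    # Hypothetical file names; point these at the actual debug logs.
    for seg_id, size, secs in latencies("active.log", "standby.log"):
        print(f"{seg_id}  {size} bytes  {secs * 1000:.1f} ms")
```

Note this measures per-segment send-to-receive latency only, not the time until the segment is fully persisted on the standby, which is part of what I'm unsure about.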
I have, but it doesn't go into much depth about the full length of time it takes to get from one instance to the other.
We can see how often it polls through the sync interval, but what I want to know is the time from the start of the transfer on the active to the time the data has been written on the standby.
Is there something in the "org.apache.jackrabbit.oak.plugins.segment" logs that I'm missing, or is there another log we can use to see when data has finished being written on the standby?