- Filter dispersion in NTP
In NTP, dispersion is a term also used to capture the variance of remote readings. If a node can synchronise with multiple clocks, it synchronises with the clock that exhibits the least variance and therefore, statistically, the least error.
Consider the observed round-trip times to source S1: 2, 100, 50, 4, 98, 44, ...
Consider the observed round-trip times to source S2: 38, 40, 42, 39, 41, 40, 38, 41, 39
Consider a reading from S1 with an RTT of 30. To be conservative, we estimate the transit delay as 15 with an error of ±15; since the variance towards S1 is very large, there is no evidence that the error is any smaller than 15. Now consider a reading from S1 with an RTT of 40. To be conservative, we estimate the transit delay as 20 with an error of ±20.
Thus, when using S1, the reading with an RTT of 30 is likely to be more accurate than the reading with an RTT of 40, due to the high variance towards S1.
Consider now a reading from S2 with an RTT of 40. We estimate the transit delay as 20, but we can safely assume a smaller error (±2), given that all past readings have fallen in the interval [38,42].
Thus, due to the smaller variance of the network path to server S2, reading from S2 is likely to induce a smaller error than reading from S1, even when the observed RTT is larger.
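The reasoning above can be sketched in a few lines of Python. This is a simplification, not the actual NTP formula: it estimates the one-way delay as RTT/2 with a worst-case error of ±RTT/2, and tightens the error bound to half the spread of past RTTs when the source's history is tightly clustered. The function name and the spread heuristic are assumptions for illustration.

```python
def estimate_error(rtt, past_rtts):
    """Conservative one-way delay estimate and error bound for a reading.

    With no other information, the one-way delay is rtt/2 with error
    +-rtt/2. If past RTTs to this source are tightly clustered, the
    bound can be tightened to half the observed spread (a simplified
    heuristic, not NTP's real dispersion computation).
    """
    delay = rtt / 2
    naive_error = rtt / 2
    spread = (max(past_rtts) - min(past_rtts)) / 2
    return delay, min(naive_error, spread)

# S1: high variance, so the bound stays at +-rtt/2
s1 = [2, 100, 50, 4, 98, 44]
print(estimate_error(30, s1))   # (15.0, 15.0)

# S2: low variance, so the bound shrinks to +-2
s2 = [38, 40, 42, 39, 41, 40, 38, 41, 39]
print(estimate_error(40, s2))   # (20.0, 2.0)
```

Running this on the example histories reproduces the numbers in the text: ±15 for S1 and ±2 for S2.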
NTP uses the ideas above in two ways:
- for a given server, it keeps the last N readings and uses the one that offers the lowest error. This procedure is described nicely here:
https://www.eecis.udel.edu/~mills/ntp/html/filter.html
- when multiple servers are available, a node can pick the server that offers the smallest variance
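Both ideas can be sketched as simple selection rules. This is a hedged simplification of NTP's behaviour: the real clock filter and selection algorithms (RFC 5905) use weighted filter dispersion rather than a plain minimum, and the function names and sample format below are assumptions for illustration.

```python
import statistics

def best_sample(samples):
    """Clock filter (simplified): among the last N (offset, rtt)
    samples from one server, pick the one with the smallest RTT,
    since its offset carries the smallest worst-case error (+-rtt/2)."""
    return min(samples, key=lambda s: s[1])

def best_server(servers):
    """Server selection (simplified): among servers, each with a
    history of RTTs, pick the one whose RTTs vary the least."""
    return min(servers, key=lambda name: statistics.pstdev(servers[name]))

# Per-server filtering: the RTT-30 sample wins, as argued above.
samples_s1 = [(0.010, 30), (0.012, 100), (0.009, 50)]
print(best_sample(samples_s1))   # (0.01, 30)

# Cross-server selection: S2's stable path beats S1's noisy one.
servers = {
    "S1": [2, 100, 50, 4, 98, 44],
    "S2": [38, 40, 42, 39, 41, 40, 38, 41, 39],
}
print(best_server(servers))      # S2
```

The first rule operates within one server's sample window; the second compares across servers, matching the two bullets above.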