once at startup. ntpd delays daemonizing until it has done the initial
time setting (or run into the timeout) in this mode, to make sure anything
started later in rc is not subject to time jumps.
this eliminates the need to run rdate -n beforehand.
with some input from & ok ryan and bob, march music from mickey
-kill the _pid flavors of imsg_create and imsg_compose, and just add pid as
an argument to those
-use imsg_create in imsg_compose instead of duplicating code
-check for datalen overflow
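taken together, a sketch of the result (not the actual imsg.c code; the
imsg_add/imsg_close helpers and the return types are assumptions here):

    int
    imsg_compose(struct imsgbuf *ibuf, int type, u_int32_t peerid,
        pid_t pid, void *data, u_int16_t datalen)
    {
            struct buf      *wbuf;

            /* refuse payloads that would overflow the u_int16_t length */
            if (datalen + IMSG_HEADER_SIZE > MAX_IMSGSIZE)
                    return (-1);

            /* build the header via imsg_create instead of duplicating it */
            if ((wbuf = imsg_create(ibuf, type, peerid, pid, datalen)) == NULL)
                    return (-1);
            if (imsg_add(wbuf, data, datalen) == -1)
                    return (-1);
            return (imsg_close(ibuf, wbuf));
    }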
asking the privileged one to do it. sends back an imsg with the
resulting addresses as a series of struct sockaddr_storage in the data
part.
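on the privileged side that boils down to something like this (a sketch;
IMSG_HOST_DNS, MAX_SERVERS_DNS and the variables around them are
assumptions, not the literal ntpd code):

    struct sockaddr_storage  ss[MAX_SERVERS_DNS];
    struct ntp_addr         *h;
    int                      cnt = 0;

    /* flatten the resolved list into an array for the imsg payload */
    for (h = hn; h != NULL && cnt < MAX_SERVERS_DNS; h = h->next)
            memcpy(&ss[cnt++], &h->ss, sizeof(ss[0]));

    /* answer the ntp process; peerid tells it which peer asked */
    imsg_compose(ibuf, IMSG_HOST_DNS, peerid, 0, ss, cnt * sizeof(ss[0]));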
this should fix all remaining issues with dns (non-)availability at
ntpd startup, be it due to named on localhost or something else.
tested by marco@ and Chris Paul <chris.paul@sentinare.com>
to resolve the hostname every 60 seconds
fixes ntpd invocations before e.g. a dialup link is established, and similar
cases. as we want ntpd to be a "fire and forget" background daemon, it should
cope with such situations.
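i.e. when resolution fails we just schedule another attempt instead of
giving up (a sketch; the state field and constant names are guesses):

    /* hostname does not resolve yet: retry in 60 seconds */
    if (host_dns(p->addr_head.name, &p->addr_head.a) <= 0) {
            p->state = STATE_DNS_TEMPFAIL;
            p->next = time(NULL) + 60;
            return;
    }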
tested by many
which, besides the head pointer for the list of course, stores the original
address as specified (i.e. as a hostname instead of resolved IPs), plus flags
and such.
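as a sketch, the wrapper could look like this (struct and field names are
guesses based on the description):

    struct ntp_addr_wrap {
            char                    *name;  /* address as given in the config */
            struct ntp_addr         *a;     /* head of the resolved list */
            u_int8_t                 flags;
    };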
1) base the interval calculation on the offset from the last reply, not
from the last peer update.
This allows us to send queries again more quickly when the local clock
diverges too much.
2) every time we form a peer update (for which we need 8 replies),
check whether we have a ready peer update for all peers that are
currently trusted, and if so, calculate the total offset and call
adjtime() (see the sketch after this list).
that means that adjtime() is no longer called at fixed intervals,
but whenever we have enough data to reliably calculate the local
clock offset.
In practice, that means we call adjtime() less often, but with
probably better data.
3) invalidate peer updates after they have been used. there is no point in
re-using them - that resulted in calling adjtime() multiple times with the
same offset, which doesn't make sense.
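a sketch of the new adjtime() path following from 2) and 3) (this assumes
a per-peer update with a validity flag and a d_to_tv() double-to-timeval
helper; maybe_adjtime() is a hypothetical name, and the real bookkeeping
in ntp.c differs in detail):

    /* hypothetical helper, called whenever a peer update is formed */
    void
    maybe_adjtime(void)
    {
            struct ntp_peer *p;
            struct timeval   tv;
            double           sum = 0.0;
            int              cnt = 0;

            /* only proceed once every trusted peer has a fresh update */
            TAILQ_FOREACH(p, &conf->ntp_peers, entry) {
                    if (p->trustlevel < TRUSTLEVEL_BADPEER)
                            continue;
                    if (!p->update.good)
                            return;         /* not enough data yet */
                    sum += p->update.offset;
                    cnt++;
            }
            if (cnt == 0)
                    return;

            /* combine the offsets and correct the local clock */
            d_to_tv(sum / cnt, &tv);
            adjtime(&tv, NULL);

            /* 3) updates are one-shot: invalidate them after use */
            TAILQ_FOREACH(p, &conf->ntp_peers, entry)
                    p->update.good = 0;
    }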
tested by many
a close-to-reality stratum, a real reference time, and a leap indicator
that signals whether the local clock is unsynchronized.
This also means that until the server feels it's synchronized, it will
tell the clients it isn't. This is normal, and correct.
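in the reply packet that amounts to roughly this (a sketch; the constants
follow the usual NTP packet layout, not necessarily server.c verbatim):

    /* leap indicator: signal alarm as long as we are not synchronized */
    if (conf->status.synced)
            reply.status = LI_NOWARNING;
    else
            reply.status = LI_ALARM;
    reply.status |= (query->status & VERSIONMASK) | MODE_SERVER;
    reply.stratum = conf->status.stratum;           /* close to reality */
    reply.reftime = d_to_lfp(conf->status.reftime); /* real reference time */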
ok henning@
we now have both a "server" and a "servers" keyword. they differ when the
hostname resolves to more than one IP: "server" picks one, "servers" expands
to all of them.
that means no longer stuffing a sockaddr_storage into ntp_peer but a pointer
to a linked list of ntp_addr structs.
in the "servers" case the list of n addresses returned by host() is expanded
into n ntp_peer structs and thus n individual peers.
in the "server" case the whole list is attached to ntp_peer, and whenever we
do not receive a reply in time we traverse the list one further, so that
hosts with both AAAA and A records are first tried with the AAAA one but
we gracefully fall back to the A one.
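the fallback itself is then tiny when a query times out (a sketch, assuming
ntp_peer keeps both the current position and the list head):

    /* no reply in time: advance to the next address, wrap at the end */
    if (p->addr->next != NULL)
            p->addr = p->addr->next;
    else
            p->addr = p->addr_head.a;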
semantics with theo; hacked up on the Montreal->Frankfurt flight.
again Air Canada surprised me, that older 767 had pretty decent seats.
fixes the "dns resolves to v4 and v6 addresses" bug found by phessler.
hacked on the Calgary->Montreal flight that proved that Air Canada _does_
have some modern aircraft with good seats
* Respond to the query with a reasonable received time (which
will help clients get better accuracy); see the sketch after this list.
* Consolidate the server response code in preparation for a
completely 'proper' response to the client.
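sketched, with the timestamp conversion helpers assumed (d_to_lfp() turning
a double into the NTP fixed-point format):

    /* stamp the reply so clients can account for our processing delay */
    reply.rectime = d_to_lfp(rectime);      /* when the query arrived */
    reply.xmttime = d_to_lfp(gettime());    /* when the reply leaves */
    reply.orgtime = query->xmttime;         /* echo the client's timestamp */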
tips and ok from henning@
struct ntp_addr, which just wraps a sockaddr_storage and a next pointer,
so that host_dns can return more than one entry.
let host_dns do exactly that: return a list of all IPs for that hostname.
adjust all callers in the grammar to cope with that.
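a sketch of both pieces (close to what getaddrinfo-based resolution
usually looks like; error handling trimmed):

    #include <sys/socket.h>
    #include <netdb.h>
    #include <stdlib.h>
    #include <string.h>

    struct ntp_addr {
            struct ntp_addr         *next;
            struct sockaddr_storage  ss;
    };

    /* resolve s; return the address count, the list itself via *hn */
    int
    host_dns(const char *s, struct ntp_addr **hn)
    {
            struct addrinfo          hints, *res0, *res;
            struct ntp_addr         *h, *hh = NULL;
            int                      cnt = 0;

            memset(&hints, 0, sizeof(hints));
            hints.ai_family = PF_UNSPEC;
            hints.ai_socktype = SOCK_DGRAM;
            if (getaddrinfo(s, NULL, &hints, &res0) != 0)
                    return (0);
            for (res = res0; res != NULL; res = res->ai_next) {
                    if ((h = calloc(1, sizeof(*h))) == NULL)
                            break;
                    memcpy(&h->ss, res->ai_addr, res->ai_addrlen);
                    h->next = hh;   /* prepend; order does not matter */
                    hh = h;
                    cnt++;
            }
            freeaddrinfo(res0);
            *hn = hh;
            return (cnt);
    }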
a lot of credit for not having supplied us with enough data within an
adjtime run interval, and get a little credit each time we get a good
reply packet. if a peer is below 20%, only send a packet occasionally to
see whether it is back. send out queries much more often between 20 and 80%
to (re-)sync quickly, and above 80% use the regular interval.
do not use peers below 60% for calculating the local clock offset.
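in code the scheme boils down to thresholds like these (a sketch; the
percentages above mapped onto a 0-10 scale, actual names and values in
ntpd.h may differ):

    #define TRUSTLEVEL_MAX          10
    #define TRUSTLEVEL_PATHETIC      2      /* 20%: peer presumed gone */
    #define TRUSTLEVEL_BADPEER       6      /* 60%: below, ignore offset */
    #define TRUSTLEVEL_AGGRESSIVE    8      /* 80%: (re-)syncing */

    if (p->trustlevel < TRUSTLEVEL_PATHETIC)
            interval = INTERVAL_QUERY_PATHETIC;     /* occasional probe */
    else if (p->trustlevel < TRUSTLEVEL_AGGRESSIVE)
            interval = INTERVAL_QUERY_AGGRESSIVE;   /* query often */
    else
            interval = INTERVAL_QUERY_NORMAL;       /* regular interval */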
designed with theo at the pho, alexander ok
from the last 8 replies received from a peer, find the one with the lowest
delay. Use that as the peer's update taken into account when calculating
the local clock's offset.
Invalidate that reply and all replies received earlier than it so that they
do not get used again.
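a sketch of that selection, assuming the last 8 replies live in a per-peer
array with a validity flag and a receive timestamp:

    int      i, best = -1;

    /* pick the valid reply with the lowest delay */
    for (i = 0; i < 8; i++) {
            if (!p->reply[i].good)
                    continue;
            if (best == -1 || p->reply[i].delay < p->reply[best].delay)
                    best = i;
    }
    if (best == -1)
            return;

    p->update = p->reply[best];     /* becomes the peer's update */

    /* invalidate the chosen reply and all earlier ones */
    for (i = 0; i < 8; i++)
            if (p->reply[i].rcvd <= p->reply[best].rcvd)
                    p->reply[i].good = 0;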