and coping with error conditions... that lets us avoid a pledge "wpath".
Putting it all together, this lets the master ntpd pledge "stdio rpath
inet settime proc id". It works like this: "rpath" to load the
certificates, "proc" to create constraint processes, "id" to chroot
and lock the constraint processes into a jail, then "inet" to open an
https session. "settime" is used by the master to manage the system
time when the ntp-speaking engine instructs the master.
with help from naddy
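As a rough illustration of that promise set, a minimal sketch of the
master's pledge call (hypothetical code, not the actual ntpd source):

#include <err.h>
#include <unistd.h>

int
main(void)
{
        /*
         * Master process promises, matching the description above:
         *   rpath   - load the constraint certificates
         *   proc    - fork the constraint processes
         *   id      - chroot/setuid the constraints into their jail
         *   inet    - open the https session
         *   settime - set the system time when the ntp engine says so
         */
        if (pledge("stdio rpath inet settime proc id", NULL) == -1)
                err(1, "pledge");

        /* ... master main loop ... */
        return (0);
}
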
nonsensical. The DNS lookups happened in the process's routing table
(usually '0'), which is very likely to have different results from the
other routing domains. If you do depend on having this behaviour,
you'll need to use pf to cross the rtable boundary.
"listen on * rtable X" is still supported.
Users of "server * rtable X" will need to switch to launching ntpd with
"route -T X exec /usr/sbin/ntpd"
OK deraadt@
This helps the ntp process to a) give a better pledge(2) and to b)
keep the promise of "saving the world again... on time" by removing
the delays that have been introduced by expensive constraint forks.
The new design offers better privsep but introduces a few more imsgs
and runs a little bit more code in the privileged parent. The
privileged code is minimal, carefully checked, and does not attempt to
"parse" any contents; the forked constraints instantly drop all
privileges and pledge to "stdio inet".
OK beck@ deraadt@
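A sketch of what "instantly drop all privileges" and the "stdio inet"
pledge can look like in a freshly forked constraint child; the user
name and call order are assumptions, not the actual ntpd code:

#include <sys/types.h>

#include <err.h>
#include <pwd.h>
#include <unistd.h>

static void
constraint_child(void)
{
        struct passwd *pw;

        if ((pw = getpwnam("_ntp")) == NULL)
                errx(1, "no _ntp user");
        if (chroot(pw->pw_dir) == -1 || chdir("/") == -1)
                err(1, "chroot");
        if (setgroups(1, &pw->pw_gid) == -1 ||
            setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) == -1 ||
            setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid) == -1)
                err(1, "cannot drop privileges");

        /* From here on: talk https, report results over the imsg pipe. */
        if (pledge("stdio inet", NULL) == -1)
                err(1, "pledge");
}
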
including fork/exec cost, it would be better if constraints were
forked from the master process, which would then tell the ntp
engine. That would increase accuracy and security.
Lots of conversations with reyk and bcook
than < for the comparison. Otherwise, if we don't do enough work
in the loop to advance the clock (for instance if the network is
down) we may end up calling poll() multiple times with no timeout,
racking up CPU time for no real reason. OK bcook@
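The pattern, with hypothetical names (next_deadline, run_due_work)
rather than ntpd's own, looks roughly like this:

#include <poll.h>
#include <time.h>

time_t run_due_work(time_t);    /* hypothetical: returns the next deadline */

static time_t
getmonotime(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (ts.tv_sec);
}

void
event_loop(struct pollfd *pfds, int nfds, time_t next_deadline)
{
        for (;;) {
                time_t now = getmonotime();
                int timeout = INFTIM;

                /*
                 * <= makes work that is due right now run in this
                 * iteration.  With <, a deadline equal to "now" is
                 * neither handled nor pushed into the future, the
                 * computed timeout is 0, and poll() returns at once;
                 * if nothing advances the coarse clock (say the
                 * network is down) that repeats forever.
                 */
                if (next_deadline <= now)
                        next_deadline = run_due_work(now);
                else
                        timeout = (next_deadline - now) * 1000;

                if (poll(pfds, nfds, timeout) == -1)
                        break;
                /* ... handle ready descriptors ... */
        }
}
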
henning@ 9 years ago because of an issue with the /dev/hotplug device
- it does not support multiple readers opening it. Nobody ever cared
enough to fix it, so it is time to send the dead code to the Attic.
OK henning@ (feeling sad about it), mpi@ and others
of being wrong, not the NTP responses, reset it and query it from all
the constraint servers all over again. This turned out to be a bit
aggressive because it could get triggered with just a few bad NTP
peers in a larger pool. To avoid constant reconnections, scale the
error margin with the number of resolved NTP peers using peer_cnt * 4.
This way a single or a few outliers in a NTP pool cannot trigger
reconnecting to the constraint servers immediately. More NTP peers,
less reason to mistrust the constraint.
Found by dtucker@
OK deraadt@
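A sketch of the scaling, using hypothetical names rather than ntpd's
real counters and helpers:

#define CONSTRAINT_ERROR_MARGIN 4

void constraint_reset_all(void);        /* hypothetical helper */

void
constraint_check_peer(int peer_disagrees, int peer_cnt)
{
        static int constraint_errors;

        if (!peer_disagrees) {
                constraint_errors = 0;
                return;
        }

        /*
         * Only once more than peer_cnt * 4 checks have failed do we
         * assume the constraint itself, not the NTP responses, is
         * wrong and query all constraint servers again.  A handful
         * of outliers in a big pool can no longer trigger this.
         */
        if (++constraint_errors > peer_cnt * CONSTRAINT_ERROR_MARGIN) {
                constraint_reset_all();
                constraint_errors = 0;
        }
}
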
addresses and try one after another until a connection succeeds -
based on the existing mechanism of "server". "constraint" previously
only tried to connect to the first returned address, aborted and
skipped the constraint on failure. Unlike "constraints"
(plural), it still only connects to one address at a time and not to
all of them at once.
Pointed out by rpe@
OK rpe@ deraadt@
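The connect side of that, sketched with plain getaddrinfo(3) code
(ntpd really does the resolving and connecting across its privsep
processes, so this only shows the shape of the loop):

#include <sys/types.h>
#include <sys/socket.h>

#include <netdb.h>
#include <string.h>
#include <unistd.h>

int
constraint_connect(const char *host, const char *port)
{
        struct addrinfo hints, *res0, *res;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res0) != 0)
                return (-1);

        /* One address at a time, moving on until a connect succeeds. */
        for (res = res0; res != NULL; res = res->ai_next) {
                fd = socket(res->ai_family, res->ai_socktype,
                    res->ai_protocol);
                if (fd == -1)
                        continue;
                if (connect(fd, res->ai_addr, res->ai_addrlen) == 0)
                        break;
                close(fd);
                fd = -1;
        }
        freeaddrinfo(res0);
        return (fd);
}
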
while here i've reformatted the page to stop kidding that -s is 4 options;
original issue kind of spotted by adam thompson, though note i am not fixing the
issue he complained about (i'll address that mail in a minute);
tls_config_insecure_noverifyname(), so that it is more accurate and keeps
in line with the distinction between DNS hostname and server name.
Requested by tedu@ during s2k15.
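For context, a minimal libtls client sketch showing where the renamed
call sits; host, port and error handling here are placeholders:

#include <tls.h>

struct tls *
open_noverifyname(const char *host, const char *port)
{
        struct tls_config *cfg;
        struct tls *ctx;

        if (tls_init() == -1 || (cfg = tls_config_new()) == NULL)
                return (NULL);

        /*
         * Skip only the check that the server name matches the
         * certificate; certificate verification itself stays on.
         */
        tls_config_insecure_noverifyname(cfg);

        if ((ctx = tls_client()) == NULL ||
            tls_configure(ctx, cfg) == -1 ||
            tls_connect(ctx, host, port) == -1) {
                if (ctx != NULL)
                        tls_free(ctx);
                tls_config_free(cfg);
                return (NULL);
        }
        tls_config_free(cfg);
        return (ctx);
}
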
no need to request it ever again. The only exception is the
escalation of failed constraint checks that might lead to
re-requesting the constraint time from all servers. Adjust the states
accordingly.
OK henning@
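Those states could be summed up roughly like this (hypothetical
identifiers, not the ones used in ntpd):

enum constraint_state {
        CSTATE_NONE,            /* never queried */
        CSTATE_QUERY_SENT,      /* request to a constraint server in flight */
        CSTATE_VALID,           /* good result kept, never requested again... */
        CSTATE_INVALID          /* ...unless escalated check failures force a
                                   fresh query to all constraint servers */
};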