I think the YB team members are probably best equipped to talk about this, but I can note that while some databases do build their own clock synchronization protocol, many prefer to let the OS handle clocks. For one thing, clock sync is surprisingly tricky to do well, so it makes sense to write a daemon that does it well once and re-use it in lots of contexts. There's also the question of hardware support: in theory, datacenter and hardware providers could do better than pure-software time synchronization by, say, offering dedicated physical links to a local atomic + GPS clock ensemble. AWS Time Sync is a step in this direction, and I wouldn't be surprised if we see more accurate clocks in the future.
There are still tons of caveats with this idea--Linux and most database software ain't realtime, for starters--but you can imagine a world in which clock errors are sufficiently bounded and infrequent that they no longer represent the most urgent threat to safety. That's ultimately a quantitative risk assessment.
My suspicion is that DB vendors like YugaByte and CockroachDB are making a strategic bet that although clocks right now are pretty terrible, they won't be that way forever. I'd like to see more rigorous measurement on this front, because while I've got plenty of anecdotes, I don't think we have a broad statistical picture of how bad typical clocks are, and whether they're improving.
As @aphyr mentioned, any NTP-like system would work. We can update the docs to mention PTP; we also work with AWS Time Sync (which uses Chrony).
In short, no: many transactional databases don't rely on clocks for safety. I'm going to speak in broad terms here--there's a lot of nuance and special cases that we can dig into, but I'd like to keep this accessible:
You can use CRDTs, and other commutative data structures, to obtain totally-available replicated objects across wide area networks. Systems like Riak do this. CRDTs can't express some types of computation safely, though! For instance, you can't enforce something like a minimum-balance constraint, ensuring that an account always contains $25 or more, if you allow both deposits and withdrawals, in a commutative system. Why? Because order matters! Deposit-then-withdraw differs from withdraw-then-deposit in its intermediate states.
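To make the ordering problem concrete, here's a toy Python sketch (my own illustrative example, with made-up numbers, not code from Riak or any CRDT library): the two operations commute on the final balance, but a $25 minimum-balance invariant has to look at the intermediate states, and those depend on order.

```python
# Toy illustration: commutative ops converge on the same final value,
# but a minimum-balance invariant cares about intermediate states.
# (Hypothetical example, not from any real CRDT implementation.)

MIN_BALANCE = 25

def apply_ops(balance, ops):
    """Apply operations in order, checking the invariant after each one."""
    for op in ops:
        balance += op
        if balance < MIN_BALANCE:
            print(f"  invariant violated: balance dipped to ${balance}")
    return balance

start = 30
deposit, withdraw = +10, -10

# Same multiset of operations, two orders: the final balances agree
# (the ops commute), but only one order keeps the balance >= $25 throughout.
print("deposit then withdraw:")
print("  final:", apply_ops(start, [deposit, withdraw]))  # 30 -> 40 -> 30, OK
print("withdraw then deposit:")
print("  final:", apply_ops(start, [withdraw, deposit]))  # 30 -> 20 -> 30, violates
```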
For order, you can use a consensus mechanism, like ZAB (ZooKeeper), Paxos (Riak SC, Cassandra LWT), or Raft (etcd, Consul), to replicate arbitrary state machines without any clock dependence at all. These systems require at least one round trip to establish consensus, and their guarantees only apply within the consensus system itself.
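Here's a rough sketch of the replicated-state-machine idea (my own toy code, not from ZooKeeper, etcd, or Consul): once consensus has fixed a total order of commands in a log, every replica applies the same log deterministically and reaches the same state, with no timestamps involved.

```python
# Minimal replicated-state-machine sketch: consensus (ZAB/Paxos/Raft) is
# assumed to have already produced an agreed, totally ordered log of
# commands; each replica just applies it deterministically. No clocks.
# (Hypothetical illustration, not any real system's code.)

def apply_log(log):
    """Apply an ordered command log to an empty key-value state machine."""
    state = {}
    for op, key, value in log:
        if op == "put":
            state[key] = value
        elif op == "delete":
            state.pop(key, None)
    return state

# The log every replica agreed on via consensus:
agreed_log = [
    ("put", "x", 1),
    ("put", "y", 2),
    ("delete", "x", None),
]

# Every replica that applies the same log reaches the same state.
replica_a = apply_log(agreed_log)
replica_b = apply_log(agreed_log)
assert replica_a == replica_b == {"y": 2}
```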
What if you have multiple consensus groups? Say, one per shard? Then you need a protocol to coordinate transactions on top of that. You can execute an atomic commit protocol for cross-shard transactions, perhaps using a consensus system. Or you can use a protocol like Calvin to obtain serializability (or stronger) across shards without relying on clocks. That's what FaunaDB does. That adds a round-trip, but if you're clever, you may only have to pay that round-trip cost between different datacenters once.
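As a bare-bones illustration of the atomic-commit option (again my own sketch, not FaunaDB's or anyone's production protocol): a coordinator pays one prepare round and one commit round across the shards it touches; each `Shard` object here stands in for an entire consensus group that would durably replicate its vote.

```python
# Bare-bones two-phase commit across shards. Each Shard stands in for a
# whole consensus group (its prepare/commit decisions would themselves be
# replicated via Paxos/Raft). Hypothetical sketch: no timeouts, recovery,
# or coordinator failover.

class Shard:
    def __init__(self, name):
        self.name = name
        self.prepared = {}   # txn_id -> writes held until commit/abort
        self.data = {}

    def prepare(self, txn_id, writes):
        # A real shard would check for conflicts and fsync its vote here.
        self.prepared[txn_id] = writes
        return True

    def commit(self, txn_id):
        self.data.update(self.prepared.pop(txn_id))

    def abort(self, txn_id):
        self.prepared.pop(txn_id, None)

def atomic_commit(txn_id, writes_by_shard):
    """Phase 1: all shards prepare. Phase 2: commit iff everyone voted yes."""
    shards = list(writes_by_shard)
    if all(s.prepare(txn_id, w) for s, w in writes_by_shard.items()):
        for s in shards:
            s.commit(txn_id)
        return "committed"
    for s in shards:
        s.abort(txn_id)
    return "aborted"

# Move $10 between accounts living on different shards:
s1, s2 = Shard("shard-1"), Shard("shard-2")
s1.data, s2.data = {"alice": 100}, {"bob": 0}
print(atomic_commit("txn-1", {s1: {"alice": 90}, s2: {"bob": 10}}))
```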
Another tactic is to exploit well-synchronized clocks to obtain consistent views across independent consensus groups. You can use this technique to (theoretically) reduce the number of round trips a transaction requires, and there are different ways to balance whether you pay increased latency on read or write transactions. Spanner, CockroachDB, and YugaByte DB all take this approach, with different tradeoffs.
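The rough shape of the clock-based trick, in the spirit of Spanner's commit-wait (a simplified sketch of the general idea, not actual Spanner, CockroachDB, or YugaByte code, and the EPSILON value is made up): if every node's clock error is bounded by ε, a writer can assign a commit timestamp and then wait out the uncertainty before acknowledging, so the timestamp is guaranteed to be in the past on every node's clock by the time anyone can observe the write.

```python
# Simplified commit-wait sketch. Assumes every node's clock error is
# bounded by EPSILON seconds; the scheme is only as safe as that bound.
# Hypothetical illustration, not real Spanner/CRDB/YB code.

import time

EPSILON = 0.007  # assumed worst-case clock uncertainty (7 ms), illustrative

def now_interval():
    """Return (earliest, latest) bounds on the true time, given EPSILON."""
    t = time.time()
    return (t - EPSILON, t + EPSILON)

def commit(write):
    """Assign a commit timestamp, then wait out the uncertainty before
    acknowledging, so the timestamp is in the past on every clock once
    the client hears about the commit."""
    _, latest = now_interval()
    commit_ts = latest
    # Commit-wait: block until we're sure commit_ts has passed everywhere.
    while now_interval()[0] < commit_ts:
        time.sleep(0.001)
    return commit_ts

ts = commit({"k": "v"})
print("commit acknowledged at timestamp", ts)
```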
Spanner is backed by custom hardware and carefully designed software to obtain tight bounds on clock error. CockroachDB and YugaByte DB leave that problem to you, the operator.
Often, a database uses a stronger replication mechanism inside a datacenter, but when it comes to replicating between datacenters, it falls back to a weaker strategy that doesn't offer the same safety invariants.
While FoundationDB uses Paxos for cluster state (like leader election), Paxos is not on the commit path for a transaction. If any process in the transaction system fails (not the storage processes), the coordinators reconfigure the cluster and replace every component of the transaction system. Transactions do not proceed during failures, but the cluster replaces the failed process within a few seconds and resumes.
(This is not meant to be a contradiction, just pointing out an important difference compared to systems that allow progress in parallel with failures.)