Hacker News

> Physical link training and negotiation

That still shouldn't take that long, though, should it? 3s sounds like some O(N^2) process is happening.

Keep in mind that this stuff is happening close to the metal, on a nowadays-unshared medium (no Ethernet hubs around any more), with negligible speed-of-light delays because the nearest switch is probably ~100ft away at most. If some high-level protocol like Steam Link can have no perceivable latency, then certainly PHY negotiation shouldn't.

My naive guess would be that the medium is speed-tested in order, first seeing if it works at 1Mbps, then 10Mbps, then 100Mbps, and finally 1Gbps; and alternating in the crossover-cable versions of those tests; satisficing with the last-achieved line rate when the next up-clocking fails.

If that's the case, then I have a feeling that modern hardware could get a bit of an advantage just from doing things in the opposite order: 1. optimistically assuming everything is set up for 1Gbps, and then, if not, ratcheting down the link-speed until the link starts working; and 2. only doing the crossover-cable tests after all the non-crossover tests fail.

You'd still have the same worst-case performance (3s) as before, but now that worst-case would be for old 1Mbps crossover cables: not a common case!
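The "ratchet down from fastest" strategy could be sketched like this (a purely illustrative toy: `probe` is a hypothetical stand-in for a PHY-level link test, and real IEEE 802.3 autonegotiation actually exchanges capability advertisements rather than trial-and-error probing):

```python
# Illustrative sketch of the "optimistic" negotiation order proposed above.
# probe(rate_mbps, crossover) is a hypothetical callable that returns True
# if the link comes up with those settings; real autonegotiation (IEEE
# 802.3 Clause 28) works differently, exchanging capability words.

RATES_MBPS = [1000, 100, 10, 1]  # fastest first

def negotiate(probe):
    # Try straight-through first, then crossover; within each pass,
    # ratchet down from 1 Gbps until the link starts working.
    for crossover in (False, True):
        for rate in RATES_MBPS:
            if probe(rate, crossover):
                return rate, crossover
    return None  # no combination brought the link up

# Example: a link that only comes up at 100 Mbps over a crossover cable.
link = negotiate(lambda rate, xover: xover and rate <= 100)
```

The worst case (an old 1 Mbps crossover cable) still probes all eight combinations, but the common modern case (1 Gbps straight-through) succeeds on the very first probe.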




There is a thing called compatibility.

Even though you don't have hubs anymore and Ethernet is no longer shared, that's only "anymore": the standard still has to respect those old modes and test for them.

BTW, even the claim that Ethernet is not a shared medium anymore is wrong. In industrial and automotive Ethernet we are back to SPE (Single Pair Ethernet) working over a shared medium, because switched Ethernet is way too expensive.


You don’t test for the medium being shared/unshared; Ethernet is just a protocol that assumes a shared medium, and does https://en.wikipedia.org/wiki/Carrier-sense_multiple_access, even when there’s no benefit to it.

The reason that Ethernet can afford to do that even in entirely switched deployments, though, is that Ethernet’s CSMA is very aggressive/optimistic, meaning that there’s almost no overhead to it in the case that there really is nothing else sharing the medium. In fact, Ethernet’s “1-persistent” CSMA is effectively designed for low contention, falling over at high [100+ TXers] contention—which is why we don’t just use Ethernet over shared-medium WANs like a cable ISP’s (pre-fibre-backhaul) coax, but instead protocols like https://en.wikipedia.org/wiki/Asynchronous_transfer_mode.

My point with bringing up the low contention of modern media wasn’t that modern devices could somehow skip CSMA sense-idle altogether; but rather that, due to the aggressive nature of Ethernet’s CSMA, Ethernet when on a low-contention or no-contention media should have basically zero sense-idle overhead, which means one less thing standing in the way of fast Ethernet PHY autonegotiation in an archetypal modern deployment; and so one less reason to privilege the hypothesis of “it’s the laws of physics making PHY autonegotiation slow” over “Ethernet controllers are doing something dumb.”
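The "1-persistent" behavior described above can be caricatured in a toy model (the `channel_idle`/`transmit`/`collided` callables are hypothetical stand-ins for the PHY; real controllers do all of this in silicon):

```python
import random

def one_persistent_csma_send(channel_idle, transmit, collided, max_attempts=16):
    """Toy model of 1-persistent CSMA/CD as used by classic Ethernet.
    The callables are hypothetical hooks standing in for the PHY."""
    for attempt in range(max_attempts):
        while not channel_idle():
            pass                       # sense continuously...
        transmit()                     # ...and send the instant the medium is idle
        if not collided():
            return True                # uncontended medium: zero extra delay
        # binary exponential backoff after a collision
        slots = random.randrange(2 ** min(attempt + 1, 10))
        for _ in range(slots):
            pass                       # wait `slots` slot times (modeled as no-ops)
    return False
```

Transmitting with probability 1 the moment the medium goes idle is exactly why an uncontended link pays essentially nothing for CSMA, and also why many stations all doing the same thing at once collide so badly.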

Here’s something to chew on: USB is also a shared-medium PHY with many layers of legacy compatibility. And yet, on every OS I know of, a USB3 analog input device (e.g. a microphone) can go from “off/unplugged” to “negotiated, registered, driver up, and transmitting data to the host, that the host has an open buffer for such that it will acknowledge and process the data within the soft-real-time window”, all with 0.5s of delay or less.

Heck, the entire Bluetooth stack plus connections to pre-paired devices can come up faster than Ethernet—and Bluetooth sits on top of USB! Bluetooth comes up fast enough that Apple bothers on its desktops to bring up the Bluetooth stack within EFI, finishing quickly enough that Bluetooth peripherals can be used to signal an interrupt to the EFI boot process within its ~1s interaction window. (We all know, meanwhile, what EFI under Ethernet control looks like: the modern server mainboard's 6+ second "IPMI autoconfig" delay.)


Comparing USB to Ethernet is difficult. Any USB device talks USB 1.1 at startup, so negotiation is basically transferring a data packet with the capabilities at a pre-defined data rate.

Ethernet, on the other hand, has to negotiate the number of wire pairs, full-duplex vs. half-duplex (which depends on the number of pairs), and the line code, e.g. Manchester vs. 4B5B.

The main difference is that in USB the host decides what to talk; in Ethernet there is no such central instance.
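That asymmetry can be sketched in miniature (hypothetical interfaces, not real USB or Ethernet APIs; just the shape of the decision):

```python
# Toy contrast of the two negotiation styles described above.

def usb_style_negotiate(host_supported, device_caps):
    """Asymmetric: the device reports its capabilities in one packet at a
    fixed, pre-defined startup rate, and the host alone picks the config."""
    usable = [c for c in device_caps if c in host_supported]
    return max(usable)  # the host's unilateral choice

def ethernet_style_negotiate(peer_a_modes, peer_b_modes):
    """Symmetric: two equal peers each advertise their modes, and the
    highest common denominator wins by priority (as in 802.3 autoneg)."""
    common = set(peer_a_modes) & set(peer_b_modes)
    return max(common) if common else None
```

With a central authority there is nothing to discover except the device's capability list; between two equal peers, both sides must advertise and resolve before either knows what to do.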


You're forgetting delays on the switch side like spanning-tree checks, ARP table population, and DHCP. But yeah, there's no reason it shouldn't be improved.


None of which are required for link-level autonegotiation, however. Those are later steps.


1 Megabit Ethernet was never a thing.


Pedantic but... 802.3e


Oh, you are right! Do any modern network cards support this?


Not a chance. The claim that Ethernet starts at 10 Mbit is basically correct. It's not like StarLAN got the kind of adoption that thicknet or thinnet got. It came out after those two standards but was 1/10th the speed, and AT&T put out StarLAN 10 just a year later.



