
> Like all coaxial connectors, it was designed to present as little change in the characteristic impedance of the feedline as possible by keeping the spacing between the center conductor connection and the outer shell as close to the feedline dimensions as possible.

The above supposedly explains why coax is better for high frequency, but the sentence isn't parsing for me. Can anyone explain it a bit more?



The characteristic impedance of a cable is a distributed property: it depends on things like the spacing between the center conductor and the shield, any other internal conductors, and what the dielectric is made of. Changes in these parameters produce local impedance changes, and when a signal propagating down the line hits an impedance change, part of it reflects, which shows up as noise and loss. The more abrupt and larger the impedance change, the stronger the reflection.

Connectors are basically a hotspot for changes in the physical dimensions of the cable as it mates with a receptacle. Higher frequencies are more sensitive to these geometry changes because their wavelengths are smaller. Most decent BNC connectors work up to a few GHz, most SMA to somewhere between 10 and 20 GHz, and N connectors to almost 30 GHz.
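
To put a number on how bad a given mismatch is: the voltage reflection coefficient at a step from impedance Z1 to Z2 is gamma = (Z2 - Z1) / (Z2 + Z1). A quick Python sketch; the 50-to-75 ohm step is just an illustrative example (say, RF coax accidentally mated to video coax), not anything from the article:

    import math

    def reflection_coefficient(z1, z2):
        # Voltage reflection coefficient at a step from impedance z1 to z2.
        return (z2 - z1) / (z2 + z1)

    def return_loss_db(gamma):
        # Return loss in dB; bigger means less reflected power.
        return -20 * math.log10(abs(gamma))

    gamma = reflection_coefficient(50, 75)   # 50 ohm cable into 75 ohm section
    print(f"gamma = {gamma:.2f}")            # 0.20 -> 20% of the voltage reflects
    print(f"return loss = {return_loss_db(gamma):.1f} dB")  # ~14.0 dB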


Also, it's better shielded than twisted pair.

It's funny: keeping everything else constant, the center conductor can move quite a bit off the exact axial center of the shield before the impedance changes very much.
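
You can sanity-check that with the closed-form impedance of an eccentric coaxial line, Z0 = (60/sqrt(er)) * acosh((D^2 + d^2 - 4c^2) / (2*D*d)), where c is the offset of the center conductor from the shield axis; c = 0 reduces to the familiar (60/sqrt(er)) * ln(D/d). A rough sketch in Python, with RG-58-ish dimensions that are my assumption, not from the thread:

    import math

    def eccentric_coax_z0(D, d, c, er=1.0):
        # Characteristic impedance (ohms) of a coax whose center conductor
        # is offset by c from the shield axis. D = shield inner diameter,
        # d = conductor diameter, same units. c = 0 gives the usual formula.
        return (60.0 / math.sqrt(er)) * math.acosh((D*D + d*d - 4*c*c) / (2*D*d))

    # Roughly RG-58-like geometry (assumed), solid PE dielectric er ~ 2.3
    D, d, er = 2.95, 0.90, 2.3   # mm
    for c in (0.0, 0.2, 0.4, 0.6):
        print(f"offset {c:.1f} mm -> Z0 = {eccentric_coax_z0(D, d, c, er):.1f} ohm")

With those numbers, a 0.2 mm offset (about 20% of the radial gap) only moves Z0 by a couple of percent, which is why ordinary manufacturing slop in the dielectric doesn't wreck the match.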

PS. Neill, not Neil



