What the article didn't mention is the possibility of building a "wired wireless" network using leaky feeder [0]. Unlike a normal coax feedline, a leaky feeder is a special coax feedline designed to leak a bit of RF along its length, thus providing wireless coverage everywhere the coax runs. It's useful in areas where antennas at fixed points struggle to provide thorough coverage, such as a building complex or a tunnel.
They mention that bandwidth is anemic at higher frequencies... so how well does this really work? And what kind of power is being pumped through those lines?
LTE is 700 to 2600 MHz, depending on the band. Wi-Fi is 2.45 (or 5 or 60) GHz. They’re all considered microwave and face similar signal integrity concerns in the same channels.
The prefix micro- in microwave is not meant to suggest a wavelength in the micrometer range. Rather, it indicates that microwaves are "small" (having shorter wavelengths), compared to the radio waves used prior to microwave technology.
Microwaves are a form of electromagnetic radiation with wavelengths ranging from about one meter to one millimeter; with frequencies between 300 MHz (1 m) and 300 GHz (1 mm).
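A quick back-of-the-envelope check of those endpoints (my own sketch, just wavelength = c/f):

    # wavelength = c / f
    c = 3e8  # speed of light, m/s
    for f_hz in (300e6, 2.45e9, 5e9, 300e9):
        print(f"{f_hz/1e9:g} GHz -> {c/f_hz*100:.1f} cm")
    # 0.3 GHz -> 100.0 cm, 2.45 GHz -> 12.2 cm, 5 GHz -> 6.0 cm, 300 GHz -> 0.1 cm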
Those are pretty clever too. I don't think I've seen one in practice (although I may have inadvertently made one with bad SMA grounding efforts :-))
I picked up a vector signal generator which has 802.11 (b/g/n variants) as one of its standard waveforms. I use its output over a bit of coax to the SDR sitting on my bench while working on the WiFi transceiver code. I have referred to that as wireless over coax (it does Bluetooth, Zigbee etc., so those work over coax too :-)).
Back in Blekko's first building space the office was on the other side of the building from the nearest cell tower so we installed one of those cell "repeaters" which was essentially an antenna in the suite connected via a bidirectional amplifier to an antenna on the roof that was pointed at the cell tower. The FCC later outlawed them but I always wish that I had kept it for those situations where cell service was hard to get.
It suffered from the fact that the antenna was not omni-directional so you really wanted it on one side of the space so that everyone in the space would be able to hit it with their phone. It was an improvement though over the AT&T wifi connected "mini cell" because it carried everyone's signal, not just the AT&T ones.
I recently installed an access point in an elevator. All that we did was put a cat6 into the umbilical cord that contains the elevator power, controls, etc.
Back in the day, almost all cell phones had a miniature coax port on them. They were "hidden" in that you probably had to pry off some nondescript part or open the battery cover up or something.
This is how we tested mobile networks. It works great. The only problem is when someone forgets to make a tight connection or attach a signal attenuator. Since the first thing you always test is emergency calls, once in a while your call ends up on the real mobile network and the fire department shows up in your parking lot. This happened about once or twice a month where I worked. They knew what our business was so they were never mad at us, but we had to pay a fine each time.
The old Nokia and Ericsson phones, as well as (I think) some Motorolas, had a rubber plug in the back just below the external antenna covering a coax port. You had to remove it when you plugged the phone into a car speakerphone system; the phone would then hook to an external antenna, or a larger antenna glued to the windshield.
It doesn't allow you to replace the signal strength indicator anymore, unfortunately (you have to check the strength yourself). Although it now supports dark mode!
If the boy who cried wolf was fined by the townsfolk, he'd be incentivized sooner to stop, and they'd still be motivated to turn up. This doesn't work for impecunious boys, who under this theory should be left to the wolves.
I've heard similar stories at my old job, except with military radios. Needless to say the other end of the line was very unhappy about interference :)
I can't believe you had to pay a fine to comply with the government's own laws. No reasonable person would expect an operation to make no mistakes here.
How big were the fines? Were they ever challenged in court?
Part of 'complying with the government's own laws' is not making bogus calls. He said himself it happened more than once a month. There's a real cost to dispatching those fire trucks.
The fine is pretty standard practice to discourage repeated nuisance calls. Resources are not infinite and if time is spent on a nuisance call when a real call is delayed, you end up with avoidable injuries and even deaths.
The possibilities here with sufficiently sophisticated software and tunable hardware (deep learning of RF propagation characteristics in a building using data from portable sensors) are fascinating! But honestly, MOCA already solves this problem. The MOCA 2.5 standard (https://en.wikipedia.org/wiki/Multimedia_over_Coax_Alliance#...) already delivers 2.5gbit total (time-multiplex) throughput which is more than enough for real-world no-compromise bidirectional gigabit ethernet on isolated runs or distributed backhaul to multiple wifi APs. I'm using this in my (115 y.o.) house now where new cable runs are impractical and the coax plant is unused.
Happy to see MoCA get a shout-out. I spent most of 2003-2015 working on the design of MoCA chips, starting before the v1.0 spec release up through v2.5. AMA
Why is MoCA still so rare and expensive? Is there a technical reason or is it just some combination of business reasons?
It seems like it's been the obvious choice for no-new-wires home networking for years but you basically can't find it at retail and have to look for it even online.
<normal disclaimer about this just being my opinion>
I think there were a couple of contributing factors:
- WiFi is still easier to install (where you can get a good signal), and built into phones/laptops/etc. so mostly all you have to do to get wireless networking is buy the router. For MoCA, you have to buy & install a box for every endpoint. Also, MoCA is limited to places you have coax, which is usually bedrooms & living rooms. If you need connectivity in the garage/attic/closet/kitchen, MoCA might require new wires anyway.
- Lack of interest from MoCA developers. The main "customer" for MoCA chips was cable set-top-box vendors (Scientific Atlanta, etc.) for "multi-room PVR" products. That's where almost all MoCA networking chips made ended up. The cable/satellite vendors needed the deterministic performance of MoCA (vs WiFi) and they were always planning on putting a box in every room with a TV - which generally already aligns with where the cable taps are. They could also pre-configure the MoCA so it wouldn't interfere with some other stuff they might want to put on the cable (Example: DirecTV put its downlink from the dish squarely in the middle of the MoCA frequency band, so you had to configure MoCA to a different channel in DirecTV houses than e.g. Verizon). The market for bare Ethernet-Coax Bridges (ECB) was always tiny in comparison.
- The chicken and egg problem. The consumer market for bare Ethernet-Coax gateways was smaller (see above) as it has to compete with both WiFi and "just run some new CAT-5" (as well as niche things like HPNA/G.hn) so it didn't get a lot of focus or advertising. In turn, this means most people have no idea that MoCA even exists, so they don't go looking for it. D.Link, Netgear, Linksys, etc. then decided that lack of demand meant it's not worth developing/advertising new/improved versions of the products, etc.
MoCA was a very targeted solution for adding IP connectivity to things which were already wired together on a Coax network, and it did a great job at that. We sold 100s of millions of chips. MoCA was never meant to be all things to all people though, and while I personally use MoCA in my home I never got my own parents to use it - they just have the one computer hooked directly to their router, and WiFi for their iPad.
For what it’s worth, you can buy the DirecTV DECA adapters for like $20 on Amazon. They’re actually just MoCA adapters, but they run at a lower frequency than regular MoCA so they don’t interfere with the DirecTV signal. They work fine by themselves.
The only downside is that they’re 100 Mbit/s.
That said, they really do give you a rock solid 100 Mbit/s.
If you need more bandwidth, I think Verizon sells a MoCA 2.5 adapter for like $60 which should give you GigE.
The 4ms of latency is a product of the MOCA protocol.
As a fully scheduled network, each packet must wait for a timeslot to send a reservation packet, wait for the schedule to be updated (map packet), and then wait for the actual scheduled time.
The reservation timeslots and MAP packets are on a fixed schedule (approximately; there are cases where the timing changes if you have a poor link), with the consequence that an unloaded MoCA network has a transit time of ~2ms going from the master node, or ~2.5ms going anywhere else (master node transmit is faster since it gets to skip the reservation step; times are averages, as the exact time depends on how the time of arrival aligns with the scheduling period). Round-trip ping times should go up by roughly 4.5ms.
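A toy breakdown of where those averages land (the split between the steps is my own guess; only the totals come from the description above):

    # Illustrative MoCA one-way transit times in ms; the split is guessed,
    # only the ~2.0/~2.5/~4.5 totals come from the post above
    reservation = 0.5   # wait for a reservation slot (the master node skips this)
    map_update  = 1.0   # wait for the updated schedule (MAP packet)
    grant_wait  = 0.5   # wait for the granted transmit slot
    airtime     = 0.5   # actual transmission + processing

    from_master = map_update + grant_wait + airtime   # ~2.0 ms
    from_node   = reservation + from_master           # ~2.5 ms
    print(from_node + from_master)                    # ~4.5 ms added to a node<->master ping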
Under ideal conditions, it is possible to get 100Mbps UDP throughput on a pair of MoCA nodes (1518 byte packets). The physical media can support up to 110Mbps (MoCA 1.0), 140Mbps (MoCA 1.1) or 450Mbps (MoCA 2.0) user throughput per channel (up to 5 channels in MoCA 2.5), but that's shared bandwidth (all traffic summed together). Throughput will fall off in bad channels (minimum 40Mbps), or if you use smaller packets (higher scheduling overhead per packet), so YMMV.
The main difference is that MoCA is designed to run over coaxial cable, and G.hn is meant for use over power lines.
They have similar use cases ("route data over wires that already exist in your homes"), but different problems (MoCA: avoid interfering with your cable/satellite/cable modem feed, G.hn: deal with the absolutely abysmal signal quality on power lines, avoid accidentally broadcasting in the FM bands,...)
From a modulation perspective, G.hn wave 2 and MoCA 2.0 are broadly similar. They both use OFDM as the main modulation and LDPC as the FEC. They both are scheduled networks (like 802.11, unlike wired ethernet) where a master node allocates transmission time slots to other nodes. The devil is in the details though and I'm not as familiar with the G.hn wave2 spec as MoCA, so the detail I can give you is limited.
You alluded to it but the other difference is MoCA is shielded, EoP turns your house wires into a ton of radiating antennas across the frequency bands. I still have no idea how the FCC even approved the damn things after seeing what they do to the RF spectrum nearby.
Correct. Coax is nice and quiet in both directions. In home power wiring is basically a giant, poorly tuned antenna.
I can tell you MOCA 1.0 can be made to work over powerline (at reduced bandwidth, if you rip out the RF frontend and run at baseband), and it can pass emissions, but it didn't work well enough (by % of households able to achieve >= target data rate) to be worth commercializing.
Yes - that's been my experience with MoCA too. I needed to add an AP in one room and running a new ethernet cable would have been a huge hassle. Luckily there was an existing TV extension into the room and a pair of MoCA adaptors works perfectly over the run. It's significantly better than Powerline options.
The other option to be aware of is G.hn on 2-wire telephone extension cable (that wasn't relevant in my case) - the adaptors for this seem to be more expensive and harder to find though than MoCA.
> The other option to be aware of is G.hn on 2-wire telephone extension cable (that wasn't relevant in my case) - the adaptors for this seem to be more expensive and harder to find though than MoCA.
If you've got (unneeded) telephone wire in your walls near where you want Ethernet, there's a good chance you have at least two pairs, which you can use for 100BASE-TX: either point to point (ugh) if the cable was run in a bus formation, as used to be common, or more usefully in more modern wiring where the phone lines run to a central location (hopefully somewhere appropriate to terminate Ethernet near, but often at the telco demarc on the outside of the home).
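A sketch of the reuse, assuming T568B colours (check your own wiring before cutting anything): 100BASE-TX only needs two of the four pairs.

    # 100BASE-TX pin/pair usage, T568B colour convention assumed
    pairs = {
        "pair 2 (white-orange / orange)": ("pin 1", "pin 2"),  # TX+ / TX-
        "pair 3 (white-green / green)":   ("pin 3", "pin 6"),  # RX+ / RX-
    }
    for pair, pins in pairs.items():
        print(pair, "->", pins)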
What hardware are you using? I looked briefly into this since I also have coax run throughout the house unused, but it was hard for me to know what to look for.
I'm running a MoCA 1.4 network at home with a mixed set of ActionTec ECB2500C and Netgear MCA1001 Ethernet-Coax bridges. They interoperated with no problems.
You can get newer versions that support MoCA 2.5, but I haven't tried them.
I was thinking about this for a minute. I don't think this competes with MoCA, which already exists and could be used instead; I suspect it fits a somewhat different use case. It allows the antennas to be installed separately from the radios, with existing coax cabling used for the distribution.
So my read is that instead of having to install power, MoCA, and various endpoints around, I can install my radios/APs centrally and distribute just the antennas wherever I have existing cabling, such as the cabling for security cameras.
Everybody hates punching holes in walls and running new cables, but this is almost certainly the wrong solution for almost every problem, outside of the weirdest of edge cases.
For starters, you're going to lose MIMO, so you've substantially downgraded your network's ability to handle multiple clients. It's also really unlikely that your weird old coax system is clean and not a rat's nest of splitters and unterminated ends (=interference).
Attaching multiple antennas to a single AP in a non-MIMO configuration is not a cost effective way of doing DAS with current technology, if it ever was. More, lower-powered APs all over the place is where you want to be spending your budget. If you want to save money on install, do PoE.
Ethernet over power line (basically DSL) has become quite good these days, you can get fairly cheap 1gbe units today and unless you have some really crappy wiring with noisy appliances they don’t have many issues.
Latency. Sadly, Ethernet over powerline (IEEE 1901) made the same unfortunate choice as WiFi and the (analog phone line) modem V.4x standards in including error correction. This is not what you want or need for TCP/IP. It's better to drop a packet than to deliver it late. IEEE 1901 tries to mitigate this by using forward error correction, but succeeds in that only to a degree.
Bandwidth is alright-ish at about estimated 2MiB/s here (single pair, about 50ft apart), but RTT of ~4ms with occasional spikes in excess of 1s (at a different location, I've seen spikes in excess of 5s). This is only barely noticeable for accessing the WWW, but unpleasant for interactive GUI work (e.g. via NX or VNC) and a deal-breaker for cluster communication protocols.
Only an idiot would design a device to assume no packet loss, unless they were _also_ allowed to design the network. If you buy such a device and put it on a janky power network, then you have only yourself to blame.
Standards decisions should never be made to accommodate designers that want to assume there's effectively no packet loss.
Some degree of error correction is totally reasonable, but your first point is irrelevant to discussion.
To a TCP implementation, what's the difference between a dropped packet and one that's delayed by hundreds of round trip times? What problems are caused by not dropping that packet?
(If all the packets are failing to go through for that long, then you have much bigger problems than the error correction.)
And has a big probability of killing your VDSL connection, (Homeplug frequencies overlap with those used by VDSL), in addition to polluting the HAM spectrum with wideband interference.
I looked into this pretty seriously about two weeks back when I realized that I'm losing about 2/3rds of my connection speed (30 Mbps at router) while sitting at my desk at home.
I backed off for now because I wasn't sure how well it would work in practice and honestly the speeds I do get are perfectly fine for what I do.
Curious if you or anyone else has recommendations on what I should be looking for / avoiding if I were to go that route and invest in some powerline ethernet equipment.
Can you actually find an Ethernet over power line adapter that delivers 1Gbps? From my experience you only get like a third of the advertised bandwidth.
The speed between the Ethernet over powerline nodes is independent of the Ethernet connection of the devices connected to them.
You can see the link speed between the nodes in the TPlink app.
They also support LAG; I have two of the 3 ports LAGged to a switch.
From what I can tell, the biggest issue with these, other than the wiring/noise issues, is if people rely on them as a switch.
I used to have 3 devices connected to them (AP, TV and Xbox); switching to LAG and a proper 1gbe TPlink switch (that ironically costs more than the homeplug) more or less solved my issues, but YMMV.
I also think that UK residential power wiring, which uses ring circuits, might be better for HomePlug use cases than other circuit types.
Oh, so you're talking about the speed the adapter tells you. In my experience those speeds are about 2x what you'd get in reality over a TCP connection (using something like iperf3).
If you have time, could you run iperf3 (brew/apt install iperf3) between two computers separated by the powerline devices?
I would say performance is so contingent on wiring quality that it cannot be reasonably assured. I tried out a TPLink system that claimed to be able to deliver 300 mb/sec and only managed 10. The fundamental problem is that you tend to want to place the units in areas with the highest potential for noise (eg near lots of electronics).
Your experience is typical even in the cleanest of AC power conditions. Powerline is hilariously oversold and regularly provides less than 25% of the advertised throughput.
The "2 gigabit" adapters top out at 470mbit when both sides are plugged into the same duplex receptacle. That number is cut in half when you move to another room, and it gets worse from there.
Every Powerline vendor, without exception, is using funny math when coming up with their advertised throughput numbers. In no situation will they ever be able to provide 50% of their stated throughput, and more typically it's closer to 10%.
Most houses built in the last decade or two use at least cat5 for phone lines. Just refit the outlet, plug in a switch where they all connect and, boom, wired network. Place a few strategically located wireless APs, adjust the signal strength, and you have great coverage all over.
Edit: An alternative is to get a wifi extender that supports 5Ghz and 2.4Ghz and (this is the important part) allows you to use one of those as the back haul. You give up one of the frequencies, but you don't halve your bandwidth.
> Most houses built in the last decade or two use at least cat5 for phone lines. Just refit the outlet, plug in a switch where they all connect and, boom, wired network. Place a few strategically located wireless APs, adjust the signal strength, and you have great coverage all over.
My parents' previous house (2011-ish) had the phone lines run over Cat5, but they still daisy-chained the jacks, so it wasn't easy to convert to ethernet use.
That's not the only place I've seen that stupidity too.
> Edit: An alternative is to get a wifi extender that supports 5Ghz and 2.4Ghz and (this is the important part) allows you to use one of those as the back haul. You give up one of the frequencies, but you don't halve your bandwidth.
The nicer "mesh" kits in some cases even have three radios per base where the third is dedicated to backhaul.
Exactly this. I did this in a rented townhouse to get excellent signal coverage throughout while keeping my equipment locked up in a small mechanical room in the basement and requiring no modifications to the house.
You know, I never considered the unterminated end problem. Could attaching terminators to all the idle coax sockets in a house improve modem performance?
MIMO has been in common use since 802.11n; it's what "3x3" spatial streams refers to.
MU-MIMO, "multi-user MIMO", is what was added in 802.11ac but never gained significant support or usage. (You'll find it in some WiFi APs, but practically no WiFi clients, I've never seen one myself.)
Defining it as a Wi-Fi cable is a bit narrow. It's just a coaxial cable between an antenna and the router. Just make sure your cable is matched to your antenna and using the correct connectors.
You'd only need an attenuator if you're connecting two WiFi adapters directly through coax. In what is discussed here, waves always travel through the air at some point, which does the attenuation.
While technically correct, using a coax cable will reduce some dominating loss factors like multipath loss, free-space path loss, and line-of-sight loss. So, in reality, your link might improve.
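For a sense of scale (my own back-of-the-envelope, using the standard free-space path loss formula):

    import math
    # Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c)
    f, c, d = 2.4e9, 3e8, 10.0   # 2.4 GHz over 10 m of air
    fspl_db = 20 * math.log10(4 * math.pi * d * f / c)
    print(f"{fspl_db:.0f} dB")   # ~60 dB, vs. only a few dB through 10 m of decent coax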
Home TV coax has a characteristic impedance of 75ohm. If you just plug consumer devices in directly you will get a pretty significant SNR loss from reflections and other impedance mismatch effects.
The mismatch loss between 50 and 75 ohm cable is 0.4 dB (or ~5%). It's not really significant unless you're doing small signal work (which wifi isn't) or if you're doing high powered work where 5% loss heats the coax (which wifi isn't).
Wouldn't there be a problem with reflection? Attenuation is not the issue here, reflecting even a fraction of a strong enough signal could potentially even be destructive for the device. It's possible though that WiFi devices, particularly those with coax ports, have protection against this.
Sure. But I would bet that most Wi-Fi access points use a 50-ohm system. (Plus, the connectors on a home tv coax cable probably won't mate with the router connectors.)
If you want to use a 75-ohm cable, you'll have to use impedance converters to convert to 50 ohms.
Bottom line: just make sure the components you're connecting in your system are properly matched to avoid a lot of reflections and power loss.
Impedance matching is overrated here. The return loss of 50 ohms into 75 ohms is 14 dB, or a VSWR of 1.5:1, which means a low reflection; in practice the return loss is even better due to the loss in the cable. Also, unlike some digital systems that are expected to operate in a controlled environment, Wi-Fi is designed to work with multipath interference, so running Wi-Fi over 75-ohm coax can already get better SNR than running it over the air. Matching is not always necessary, and you can always add it as a last step if it turns out to be necessary.
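The arithmetic behind those figures (standard transmission-line formulas, nothing WiFi-specific):

    import math
    z0, zl = 50.0, 75.0
    gamma = abs((zl - z0) / (zl + z0))               # reflection coefficient = 0.2
    return_loss = -20 * math.log10(gamma)            # ~14 dB
    vswr = (1 + gamma) / (1 - gamma)                 # 1.5 : 1
    mismatch_loss = -10 * math.log10(1 - gamma**2)   # ~0.18 dB lost per 50/75-ohm transition
    print(f"RL {return_loss:.1f} dB, VSWR {vswr:.2f}:1, mismatch {mismatch_loss:.2f} dB")

A cable has one such transition at each end, roughly 0.35 dB total, which is presumably where the ~0.4 dB figure quoted upthread comes from.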
I'm curious to know, at what frequency in a pair of wires is it considered to be EM waves travelling through it? I know that Ethernet is changing voltage (electric field) at frequencies of ~30MHz, while WiFi travelling through a pair is an EM wave at a frequency of 2.4 GHz. At what point does it change to EM?
Ethernet is only spec'd to 250MHz for CAT6. That said, coaxial installed for CATV is only rated to 1GHz (less the cable and more the ubiquitous passive taps), so neither ought to carry 2.4GHz WiFi unmodified, although with CATV I've seen it work for sufficiently relaxed definitions of working.
At frequencies greater than 0 Hz (DC) structures of two or more conductors can support transmission of TEM electromagnetic waves. At microwave frequencies and above it is more efficient to switch to waveguide (a conductive tube).
Wouldn't any wire with electric charges in motion be considered to have EM waves? I don't think there is a minimum frequency per se, at least not a relevant one.
Well to counter that point, wifi has to go through coax so that the EM waves it creates don't induce current in nearby conductors. But for a standard Ethernet (100Mbit) cable, i.e. unshielded pair, you can place it right up against metal if you like.
Got it! Ethernet uses balanced pairs, which by themselves have some form of common mode rejection, but twisting it greatly reduces the harmful effects in the case where one conductor is closer to the EM interference. Twisted balanced pair has a lot of loss at >1GHz, at which point using a confined EM space in a coaxial cable can help prevent that. Ethernet stays under 250MHz, while WiFi is >2.4GHz, hence the cable differences. As another parent mentions, all changing currents produce EM waves, it's just the losses are much more important as the frequency increases.
The following page helped me understand it as well.
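One rough rule of thumb (my own sketch): transmission-line behaviour starts to matter once the cable length is an appreciable fraction of a wavelength, and inside coax the wave travels at the velocity factor times c (roughly 0.66 for solid polyethylene dielectric, 0.8-0.85 for foam):

    c = 3e8       # m/s
    vf = 0.85     # assumed foam-dielectric coax; solid PE is ~0.66
    for f_hz in (30e6, 250e6, 2.4e9):
        print(f"{f_hz/1e6:g} MHz -> wavelength in cable ~ {c*vf/f_hz:.2f} m")
    # 30 MHz -> ~8.5 m, 250 MHz -> ~1.02 m, 2400 MHz -> ~0.11 m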
I think it would be pretty awesome to inject the WiFi RF into the coax of my home from the distribution panel. I only use the 1 line for my Comcast internet and ethernet/wifi to connect everything else. All of those other coax lines to each room are sitting entirely unused right now...
All you would need is a WiFi router that has a bunch of RG6 connectors on it. Wire in where the coax splitter/amplifier would normally live for CATV/satellite service. And then in each room where the coax feed ends up, simply screw on an antenna with matching RG6 connector.
The article mentions 802.11r for fast roaming between APs and how this approach avoids needing it. I've seen mentions of networks where the APs just forward WiFi packets and a central router does all the actual negotiation to achieve the same effect. It's too bad something like OpenWRT doesn't include it.
Ericsson has something similar for cellular signals over CAT5 called the Ericsson Dot [0]. Though I believe they have it at baseband or an intermediate frequency in the cable and then shift it up in the Dot. Disclosure: I worked at Ericsson when the first Dot was released, but never with it myself. They were really proud of it, I remember.
I'm struggling to see how this is anything but using feed lines to the antenna. We've done this for decades to attach rooftop high gain antennas to an equipment closet or if you are a HAM radio operator and want to have an outdoor antenna while your rig remains inside.
If this were node-to-node, doesn't this just fit the definition of old school ethernet over coax?
It is easier and cheaper to use 50ohm compatible WiFi when dealing with situations like trying to communicate with equipment inside an enclosure through a slip ring.
Running 2.4GHz or 5.8GHz over run-of-the-mill coaxial cable will introduce a lot of losses. It'll have to be low-loss stuff similar to LMR-400 to really be useful, I'd think. This would only be feasible if such coax already exists. Otherwise it's cheaper to run multiple access points.
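Rough numbers to back that up (approximate 2.4 GHz attenuation figures from memory of typical datasheets; check the specific cable):

    # Approximate attenuation at 2.4 GHz, dB per 100 ft (typical datasheet values)
    loss_per_100ft = {"LMR-400": 6.6, "RG-6": 10.5}
    run_ft = 50
    for cable, per100 in loss_per_100ft.items():
        print(f"{cable}: ~{per100 * run_ft / 100:.1f} dB over {run_ft} ft")
    # ~3.3 dB and ~5.2 dB respectively, before any connectors, splitters, or mismatch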
You would connect a WiFi access point to fibre from the ground. The WiFi signal is output to a coax cable running alongside the power lines, or from the ground next to the track. The WiFi is beamed into the train, and local WiFi access points spread the signal inside the train.
This is a very timely post for me. My contractor mentioned this to me last week. It caught me totally off guard and seemed counter intuitive to my knowledge of wifi connections. Pretty neat and thank you for sharing.
I think what Wifi-over-Coax is trying to get around is having to coordinate multiple access points. Wifi breaks down particularly horribly when some of the stations and APs can't hear each other. By only having 1 AP, you avoid some of that problem. (Of course, stations can still interfere with each other. So I'm not sure how useful this is in practice.)
I think the idea is that if you already have coax pulled (eg for TVs), you can just connect it to a WiFi access point instead and put an antenna on the other side.
That article is only written for experts. It'd be great if someone here who understands what it's actually about could update the article and contribute a better description for lay readers.
I wasn't aware of coax usage other than for TV. I guess I assumed that twisted-pair cables were far superior for data transmission, but maybe not?
I recently moved in to a home that has been pre-wired with Cat5 and also coax to each room. The wiring was left unterminated but I assumed the coax was just meant for TV reception. But I did discover there is a second coax cable that has been left unterminated at both ends and I wonder if this is meant for data? I really don't like not knowing but I'm unable to get hold of the people who did the wiring during lockdown!
A typical satellite receiver has two or sometimes three coax connections to the dish (or to the inhouse distribution platform). They could have built for that.
Or it's a spare pull in case the cable fails mid span. Or to be able to get cable + antenna. Or to be able to use a centralized powered splitter for two uses at one location, which would be better than using another splitter in the room, but probably not needed.
It's sticking the Wifi antennas at the end of a long cable. A "feedline" in the usual terminology.
How this interacts with beamforming and antenna diversity is... not specified.
The basic idea seems simple enough: rather than stick an AP on one side of a wall and accept a 15-20dB loss through the wall, or use two APs, cable one antenna through the wall so the AP has a clear line of sight on both sides of the wall.
That was my interpretation as well. I feel like this is another case of someone new to the industry creating a “standard” for something that, had they been around for a few decades, they would have realized already existed in a different context.
Case in point: we have had “external” uni- and omni-directional antennas, even for plain old 802.11, for decades.
Isn't this exactly reusing old technology that has existed for decades (well, over a century now) in a slightly different context? I mean, a lot of pre-existing coax is there exactly for the purpose of running a signal between an antenna and a device…
No I think it's like running a (coax, as well) cable to the television antenna on your roof instead of having rabbit ears sticking out of the back of the box.
[0] https://en.wikipedia.org/wiki/Leaky_feeder