I find this simultaneously fascinating and disturbing; once you have someone using this, they become a hitherto-impossible human-AI hybrid, where their mind is now a fusion of the two, completely unnoticed by the user.


I think the Internet warriors are trying to build their own entirely self-sufficient network independent of the state or commercial worlds, which is, as you say, tricky to do only with resources legally available to the general public. Armed forces have had these things nailed down pretty much since the invention of radio.


You could just about squeeze voice down LoRa with a really low-bandwidth codec, as really aggressive codecs can manage < 0.5 kbps. If you want to sacrifice voice quality but use standard codecs, the military MELPe codec has 600 bits/s as one of its standard modes.
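
To gauge which LoRa settings could actually carry that, here's a back-of-envelope sketch in Python. It computes the raw PHY bitrate only, ignoring preamble/header overhead and duty-cycle limits, and assumes 125 kHz bandwidth with the lightest (4/5) coding rate:

    def lora_raw_bitrate(sf, bw_hz, coding_rate=4/5):
        # LoRa carries `sf` bits per symbol at bw_hz / 2**sf symbols
        # per second, reduced by the FEC coding rate.
        return sf * (bw_hz / 2**sf) * coding_rate

    # Which spreading factors can carry MELPe's 600 bit/s mode?
    for sf in range(7, 13):
        rate = lora_raw_bitrate(sf, 125_000)
        print(f"SF{sf}: {rate:5.0f} bit/s  {'ok' if rate >= 600 else 'too slow'}")

By this estimate, only the faster spreading factors (roughly SF7 through SF10) leave headroom for 600 bit/s voice.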


And yet no such implementation has ever been seen in the wild.

Because it would likely violate the restrictions set up for the LoRa frequency bands. A normal walkie-talkie has none of those limitations, while being cheaper and more versatile.


The ISPs also have systems that keep running without mains power for as long as their batteries (and gensets) last; typically one or two days without refuelling.


The real "resilience teams" are going to be at the telcos/ISPs, and they will have dark fibre between their networks and autonomous backup power in their data centres. They will be able to do IRC, VoIP telephony, email, etc. between their networks over statically routed point-to-point IP even if BGP and the transit networks go down, so they can "black start" the Internet. (Back in my ISP days, I remember reading that there was a private telephone network just for AS operators' NOCs to talk to one another, quite independently of the PSTN.)

For anything that takes even those out (e.g. a "Big One" quake in California), you fall back to radio hams and autonomous radio links for the disaster services.


The article's author mentioned speaking with some telco people, who apparently weren't aware of any emergency resilience plan. Maybe there's some difference between EU countries and the USA on this.


Mesh radio bandwidth is pretty poor. Firstly, you have to compete with many interferers (albeit this might get better if the power goes down), including other LoRa radios. But more to the point, long-distance connections consume bandwidth and acquire delay and delay variation at every intermediate hop. It might be reasonable to use it for text messaging, but with per-hop bandwidth ranging from 0.3 kbps to 27 kbps, divided down further over shared multi-hop links, it will be impractical for anything else, except perhaps very-low-bandwidth telephony over short distances or visiting minimalist text-only websites.
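
To put rough numbers on that division, here's a sketch assuming a shared half-duplex channel where every packet must be retransmitted at each hop (real meshes do worse once adjacent hops interfere with each other):

    def end_to_end_throughput(per_hop_bps, hops):
        # Each intermediate node must repeat every packet on the shared
        # channel, so usable throughput falls roughly as 1/hops.
        return per_hop_bps / hops

    for per_hop in (300, 27_000):        # LoRa's slowest and fastest modes
        for hops in (1, 5, 10):
            bps = end_to_end_throughput(per_hop, hops)
            print(f"{per_hop:6} bit/s per hop, {hops:2} hops: {bps:7.0f} bit/s")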

It might make more sense if augmented by fixed multi-megabit point-to-point microwave radio links acting as a backbone, with LoRa functioning only as an access network.

I'd be interested to hear what experiences people have had with doing this for real.


I think the point of the article is not to use the mesh network as a replacement for the internet. The author's idea is that the mesh network would give the "resilience club" a communication channel while they work on recovering the regular internet.


I've just realised I've talked my way into the idea of creating per-city club-operated backbone networks based on something like 100 Mbit point-to-point Ethernet-over-microwave links. With tall buildings as hubs, you might actually be able to build a decent mesh, with WiFi, LoRa or both acting as access networks. You'd definitely want to throttle per-client bandwidth to prevent people from abusing your very limited long-range mesh bandwidth. None of this would be cheap; decent microwave links cost thousands, and you'd need backup solar and battery power for every part of the network.

I'd also consider using the "big ears, small mouth" technique to push up bandwidth: if a fixed link using a technology such as LoRa transmitted at the legal EIRP limit, but coupled this with a really high-gain parabolic dish (I'm thinking re-purposed satellite dishes) and a low-noise amplifier at each end on the receive side, you could get substantially higher end-to-end Eb/N0, and thus much higher bandwidth and range than would otherwise be legally possible. At first glance the necessary hardware looks quite doable, either by active RF switching between antennae, or by using a hybrid/circulator to do the necessary duplexing. I'd be interested to see if anyone has already built, or even manufactures, something like this, and what the practical and regulatory barriers to implementation are.
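
As a rough illustration of what receive-side gain buys, here's a free-space link-budget sketch; all the figures (14 dBm EIRP cap, -120 dBm sensitivity, 868 MHz) are illustrative assumptions, and real terrain and feedline losses will move them around:

    import math

    def fspl_db(distance_km, freq_mhz):
        # Free-space path loss in dB.
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    def link_margin_db(eirp_dbm, rx_gain_dbi, sensitivity_dbm,
                       distance_km, freq_mhz=868):
        # Transmit power is capped by the EIRP limit, but receive antenna
        # gain is "free": every extra dBi lands directly in the margin.
        return (eirp_dbm + rx_gain_dbi
                - fspl_db(distance_km, freq_mhz) - sensitivity_dbm)

    # 10 km path: whip (4 dBi) vs re-purposed dish (24 dBi) on receive.
    print(link_margin_db(14, 4, -120, 10))    # ~27 dB margin
    print(link_margin_db(14, 24, -120, 10))   # ~47 dB: 20 dB more to spend

The extra 20 dB can then be traded for range, a faster data rate, or reliability.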


"Big ears, small mouth" is exactly what the regulations are designed to encourage, so I don't foresee regulatory issues.

You don't even need extra hardware for the duplexing; the common SX1276 chip has separate Tx and Rx pins which are typically combined on the PCB. All you need is to route a PCB that brings 'em out separately, if that's what you want to do.

In practice it's tricky to aim two dishes at exactly the same spot, so using a single dish with a single antenna at its focus is probably quite a bit more practical. The SX1276 also has a PA control pin; invert that and you've got your LNA control signal. Or don't bother with the LNA, and simply mount the transceiver at the focus to minimize RF feedline losses. You'd give up a smidgen of performance but gain a lot of simplicity. (There would still be coax running down the boom, but it would be carrying the wifi/bluetooth signal outside the dish's aperture!)


I didn't know about the chip having separate TX and RX pins. There must be a gap in the market for rooftop-to-rooftop LoRa transceivers that don't cost a fortune. Even using something like a 24 dBi antenna would push range up by a factor of 10 relative to a simple 4 dBi antenna, or give a substantial improvement in bandwidth/reliability at the same range. For an even simpler design, you could just put a 20 dB attenuator between the transmit port and the antenna, reducing the effective forward gain to 4 dBi while keeping the full 24 dBi in the opposite direction. Proper RF engineering details are left as an exercise for the student, etc.
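
The factor of 10 falls straight out of the square-law path loss; a quick sanity check:

    # Free-space path loss grows with distance squared, so 20 dB of extra
    # antenna gain (24 dBi dish vs 4 dBi whip) buys 10**(20/20) = 10x range.
    delta_db = 24 - 4
    print(10 ** (delta_db / 20))   # 10.0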


Y'know it occurs to me: If you want separate east and west paths, the chips are so cheap, it might make more sense to just use two whole off-the-shelf boards with some software linking them together, one dedicated to Tx and the other dedicated to Rx. Same hardware, just only use half of it.


Google "cantenna", an antenna made from a Pringles can.


Please don't. We have better antennas than a 2001 article of really really dubious RF engineering.


That's true, and proprietary modulation makes the situation worse.


It's substantially harder than linear programming: it's NP-hard (feasibility is equivalent to SAT), whereas linear programming is merely polynomial-time (weakly polynomial-time with currently known algorithms).


I normally use the simplex method, though, which is fast in practice but not polynomial-time in the worst case.


You can always run a portfolio of Simplex/Barrier/PDLP and grab whichever returns something first. The latter two are usually slower but polynomial-time, so you always win.

Can't do that with SAT or ILP.
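
A minimal sketch of the portfolio pattern in Python; the solver callables are hypothetical stand-ins for whichever LP library you actually use:

    import multiprocessing as mp

    def _worker(solver, problem, results):
        # Each solver posts (name, solution) to a shared queue when done.
        results.put((solver.__name__, solver(problem)))

    def solve_portfolio(problem, solvers):
        results = mp.Queue()
        procs = [mp.Process(target=_worker, args=(s, problem, results))
                 for s in solvers]
        for p in procs:
            p.start()
        winner, solution = results.get()   # block until the fastest finishes
        for p in procs:
            p.terminate()                  # kill the stragglers
        return winner, solution

    # Usage, with hypothetical solver functions:
    #   solve_portfolio(lp, [simplex_solve, barrier_solve, pdlp_solve])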


Simplex usually runs fast, and Barrier or PDLP give polynomial-time guarantees for LP. But for SAT or ILP there's no known polynomial-time algorithm, so running a portfolio buys you no worst-case guarantee. Those problems are just tougher.


HyperCard was undoubtedly the inspiration for Visual Basic, which for quite some time dominated the bespoke UI industry in the same way web frameworks do today.


HyperCard was great, but it wasn't the inspiration for Visual Basic.

I was on the team that built Ruby (no relation to the programming language), which became the "Visual" side of Visual Basic.

Alan Cooper did the initial design of the product, via a prototype he called Tripod.

Alan had an unusual design philosophy at the time. He preferred not to look at any existing products that might have similar goals, so he could "design in a vacuum" from first principles.

I will ask him about it, but I'm almost certain that he never looked at HyperCard.


A blog post about the Tripod/Ruby/VB history: https://retool.com/visual-basic

  Cooper's solution to this problem didn't click until late 1987, when a friend at Microsoft brought him along on a sales call with an IT manager at Bank of America. The manager explained that he needed Windows to be usable by all of the bank's employees: highly technical systems administrators, semi-technical analysts, and even users entirely unfamiliar with computers, like tellers. Cooper recalls the moment of inspiration:

  In an instant, I perceived the solution to the shell design problem: it would be a shell construction set—a tool where each user would be able to construct exactly the shell that they needed for their unique mix of applications and training. Instead of me telling the users what the ideal shell was, they could design their own, personalized ideal shell.
  Thus was born Tripod, Cooper's shell construction kit.


It allows the Mac to use far less RAM to display overlapping windows, and doesn't require any extra hardware. Individual regions are refreshed independently of the rest of the screen, with occlusion, updates, and clipping managed automatically.
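
A toy version of the bookkeeping, using axis-aligned rectangles (real QuickDraw regions are far more general, but the idea is the same): each window's visible region is its own rectangle minus the rectangles of every window above it.

    def subtract(r, cut):
        # Remove rectangle `cut` from rectangle `r` (x0, y0, x1, y1),
        # returning the up-to-four rectangles that remain visible.
        x0, y0, x1, y1 = r
        cx0, cy0, cx1, cy1 = cut
        if cx0 >= x1 or cx1 <= x0 or cy0 >= y1 or cy1 <= y0:
            return [r]                                  # no overlap
        out = []
        if cy0 > y0: out.append((x0, y0, x1, cy0))      # band above
        if cy1 < y1: out.append((x0, cy1, x1, y1))      # band below
        top, bot = max(y0, cy0), min(y1, cy1)
        if cx0 > x0: out.append((x0, top, cx0, bot))    # band left
        if cx1 < x1: out.append((cx1, top, x1, bot))    # band right
        return out

    def visible_region(window, windows_above):
        region = [window]
        for above in windows_above:
            region = [p for r in region for p in subtract(r, above)]
        return region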


Yeah, it seems like the hard part of this problem isn't merely coming up with a solution that is technically correct, but one that is also efficient enough to be actually useful. Throwing specialized or more expensive hardware at a problem like this is a valid approach, but all else being equal, a lower hardware requirement is better.


I was just watching an interview with Andy Hertzfeld earlier today, and he said this was the main challenge of the Macintosh project: how to take a $10k system (the Lisa) and run it on a $3k system (the Macintosh).

He said they drew a lot of inspiration from Woz on the hardware side. Woz was well known for employing lots of little hacks to make things more efficient, and the Macintosh team had to apply the same approach to software.


So when the OS needs to refresh a portion of the screen (e.g. everything behind a top window that was closed), what happens?

My guess is it asks each application that overlapped those areas to redraw only those areas (in case the app is able to be smart about redrawing incrementally), and also clips the following redraw so that any draw operations issued by the app can be "culled". If an app isn't smart and just redraws everything, the clipping can still eliminate a lot of the draw calls.
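
As a toy sketch of that culling step, again with axis-aligned rectangles (all names hypothetical; `blit` stands in for the low-level framebuffer write):

    def intersect(a, b):
        x0, y0 = max(a[0], b[0]), max(a[1], b[1])
        x1, y1 = min(a[2], b[2]), min(a[3], b[3])
        return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

    def blit(rect):
        print("draw", rect)        # stand-in for the real framebuffer write

    def repaint(draw_calls, update_region):
        # Even if the app naively redraws everything, each draw call is
        # clipped to the invalidated region; anything wholly outside it
        # is culled before touching the framebuffer.
        for rect in draw_calls:
            for clip in update_region:
                piece = intersect(rect, clip)
                if piece:
                    blit(piece)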


They don't create contiguous surfaces, and GPUs are optimized to deal with sets of triangles that share vertices (a vertex typically being shared by four to six triangles), rather than triangles whose vertices aren't shared at all, as here.

"Watertight" is actually a stronger criterion, requiring not only a contiguous surface but one that encloses a volume without any gaps; still, "not watertight" suffices here.


Interesting, thank you!

