
>has to download some runtimes

As someone who had to develop software in an airgapped environment, I'm sending a special "fuck you" to whoever thought this was a good idea. God forbid you have the AUDACITY to not be connected to the internet at all times for any reason whatsoever.



Off topic, but this really struck a chord. I arrive at the trailhead for an afternoon of mountain biking in the mountains. I can't record the ride route because my mountain bike specific trail/ride logging app can't get an internet connection to log me in. I don't want to be logged in. I just want to record GPS over time. No maps needed. No server access needed. I guess they never thought someone might want to mountain bike in the mountains! Sorry, but that is an idiotic design decision. Unfortunately, this approach to "personal" computing has become the norm rather than an exception. People are not allowed to exist on their own.


...then use a different app. There are myriad ways to record rides with no data service; I do it all the time. Most commonly I use Strava or a dedicated Garmin bike computer.

Syncing / uploading then happens once they have signal again. Or in the case of the Garmin I copy the FIT file off by hand when plugging it into my computer.


I think that by the time you're at the top of a mountain without cell service, you're already locked in to an app that won't let you record your position.


So your solution for a software problem is buying more hardware. Companies must love you.


I don't think that was the point at all. The point was that if your app doesn't let you record your track without an internet connection, then it's on that specific app (and ultimately it's on you if you stick with such an app).

There are many apps that let you track your run/hike/ride without internet connection.


You got it; the point is that it's a one-time problem after which a different app should be chosen. There's a ton of things out there which'll record rides offline.

Personally, I think a dedicated bike computer is best, because then the phone's battery is saved for emergency uses instead of recording a ride. For long rides (8-10 hour) phones won't have enough battery to record the whole ride.


Some people like the interface/extra features of an app. I tried going from google maps to open street maps, and came back in 5 minutes flat.


I don't know if companies love him, but I know Strava is an Android app.

An Android app is software.

Not hardware.


What new hardware are you referring to?


I record GPX with OSMAnd+ running on my older and smaller Android phone. No SIM, no Bluetooth, only GPS. It goes on all day long. Then I send it to my new phone or to my computer over WiFi. If I were in the mountains I'd turn on hot spot mode on the phone to make the transfer work.

No log in, no external services.


It's exactly why I quit using fitbit 4 or 5 years ago when they made a similar change. There should be 0 reason for needing an internet connection to send data from a wristband 2 feet away to my phone using bluetooth in order to tell me how many feet I've traveled. That may have changed since then but I wouldn't know. They lost me as a customer.


You made the mistake of thinking an app's functionality is its purpose. The functionality is just a thin veneer of an excuse the devs use to track your every moment. Why would you think otherwise?

Even if it cost money to install, you are just paying for the privilege of being tracked by those folks instead of others.

#appsnotevenonce


Any good FOSS exercise monitoring apps?


Some great offline GPS (cell-free) map apps for Apple iOS are

- Sygic,

- Topo Map+,

- 2GIS

- Starwalk - Night Sky 2

- and my fav, Genius Map

All of them work without WiFi, cell coverage, or NFC/Bluetooth.


Also maps.me


Maps.me started to implement some monetisation UI bloat a while ago. I swapped to organic maps. Can’t remember if it’s a fork or by one of the old maps.me developers but there was some connection.


Is it possible to migrate saved places from MM to OM?


...and yet, we're hitting the same wall again:

Hardware is cheap, the network is reliable. Move fast and break things.

Neither of these assumptions is true, and we're going to keep having these issues by the truckload until we understand that and change our mindset.


Hamburg, Germany: the public transport services released a new app a few months ago which assumes that the request failed when the app has not finished receiving an answer to a route query after a given time span. The problem: It is normal to have a data cap that, when reached, limits the rate to 64 kbit/s. That is too slow for the answer to be fully transmitted in time...

At least the website works, even though it transmits the full site for every request...
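
To put a rough, hypothetical number on it (the ~200 KB response size is my own assumption, not something measured):

      # a 200 KB route answer at a 64 kbit/s throttled rate:
      # 200 KB * 8 = 1600 kbit; 1600 kbit / 64 kbit/s = 25 s
      echo $(( 200 * 8 / 64 ))   # prints 25 (seconds), easily past a typical 10-15 s request timeout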


Did this happen with Strava? My usual morning MTB lap has no cell service at the trailhead and it’s never been an issue. But I’ve also never found myself logged out before.


The PADI (scuba diving) app barely works without a constant internet connection; the eLearning part especially is useless once you are offline or have a shaky connection.

Scuba diving spots around the world are rarely well covered by internet, and even where they are, many people don't have a roaming or local SIM solution for it.

Guess where you want to use that damn app the most?


You can’t add a password to Bitwarden while being offline.

Discovered this when I tried to add a WiFi password before connecting to it.


Strava doesn't require an internet connection (at least on Apple watch). For maps, Trailforks supports offline mode.


Strava doesn't need a connection on Android either.


For Android users, Strava makes things easy, but OsmAnd has way more features and I like it a lot more.


I use GPSLogger, which saves my ride to a GPX file that can then be imported wherever you want.


Yeh, I hit that too! If it's the same app, their business model is to charge for offline use.

Not a great way to get new customers though, as it requires missing out on logging a ride before you even become aware of a reason to buy.


The massive 5G infrastructure push should help deal with some of you “occasionally offline” dissidents


No, if anything it will be harder to connect rural areas with 5G. 4G’s coverage is measured in miles. 5G’s coverage is measured in feet.

https://www.verizon.com/about/news/how-far-does-5g-reach

> 5G Ultra Wideband network’s signal can reach up to 1,500 feet without obstructions


You're quoting something about “5G Ultra Wideband”, which seems to be a brand name for mmWave. Yes, mmWave has very short range. But 5G isn't just mmWave. It's in many ways an evolution of LTE/4G, supporting the same frequencies and offering the same range, i.e. multiple km/miles. But it's up to carriers how they allocate their frequencies. To quote Wikipedia:

> 5G can be implemented in low-band, mid-band or high-band millimeter-wave 24 GHz up to 54 GHz. Low-band 5G uses a similar frequency range to 4G cellphones, 600–900 MHz, giving download speeds a little higher than 4G: 30–250 megabits per second (Mbit/s). Low-band cell towers have a range and coverage area similar to 4G towers.

https://en.wikipedia.org/wiki/5G#Overview

5G is _perfect_ for providing coverage in rural areas, except for the problem that 4G devices are incompatible with 5G networks. Starting 5G rollout in urban areas makes more sense because (a) 5G provides most benefit when clients are close together, and (b) because denser cells make it reasonably economical to maintain 4G coverage in parallel to 5G coverage.


That's a fair point -- that the tech is capable of supporting it. I could be wrong, but in the near term I don't recall any US carriers proposing to allocate any low-band spectrum that way.

Either way, if we're talking about "coverage" for low-bandwidth stuff like fitness trackers, it's the spectrum that matters more than anything. We can communicate thousands of miles on 1 or 2 watts of LF spectrum using technology that is nearly a century old. Don't need 5G for that, just need to use the right spectrum.


I'm very excited to hear the plan for locating 5G towers in the ocean, in remote wilderness sites, in underground facilities, the Antarctic, etc. People visit these sites and expect their tech to work fine as long as it doesn't obviously require a network connection. Of course I can't browse HN from those places, but my otherwise self-contained apps should continue to run predictably.


The higher frequencies of 5G are even more easily blocked by outdoorsy things like trees and rain.


How so? I thought 5G is mostly coming to densely populated areas, that is, areas that already have decent connectivity. Also, at least currently, I thought 5G is a developed country thing. Lots of folks are still running off 3G.


Huh, almost seems like 5G is marketing bullshit? As if the primary goal of 5G were to goad and shame consumers into upgrading a perfectly capable older phone to a new one that is “5G ready”.


I'm very excited to have gigabit download speeds so I can hit the hidden, never-described quota on my "unlimited" plan within a minute, while also having my hotspot permanently throttled to 128 kbps.


There still likely won't be towers in the mountains or backcountry.


It's really not that big of a deal. You set up a package proxy that is itself behind the airgap, and you're good. Yes, you have to put some extra effort into moving packages across that airgap when you need to add or upgrade one, but then, isn't having to do things like that kind of the whole point of an airgap?

I certainly wouldn't want to ask that the other 99% of the world's developers avoid a feature that's useful to them just to assuage my feelings of envy about the convenience they enjoy.
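
As a hedged sketch of what that can look like with .NET specifically (the feed URL here is made up; the commands are standard dotnet CLI), you point restores at the internal mirror and drop the public feed:

      # hypothetical internal mirror reachable from inside the airgap
      dotnet nuget add source https://nuget.internal.example/v3/index.json --name internal
      # the default public feed is usually registered under the name "nuget.org"
      dotnet nuget remove source nuget.org
      dotnet restore   # now resolves packages only against the internal feed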


> isn't having to do things like that kind of the whole point of an airgap?

Nope. The exe files are supposed to do their thing without internet and without extra effort, like it was for a long time.


I'd call it a "yup," if we're talking about the point of an airgap. If I don't want executables contacting the outside world without my knowledge, and one is, then the airgap (or, more likely, firewall or suchlike) preventing that exe from being able to do so is a feature.

This does mean that certain programs just won't work, or won't work without some finagling. That's also a feature. The price of control is having to control things.

Granted, most people don't want to pay that price, and prefer convenience. That's admittedly not to my own taste - cf. log4j for a good example of why - but I think I'm maybe a little weird there. I certainly don't think there's anything audacious about catering to majority tastes. Maybe just vaguely disappointing.


Computers have been able to work without the internet since I was a kid. That's how my friends and I used them, without any problem. By using the word "airgap" you're making it sound like some new, special setup that needs special steps, under which some programs will not work, etc. It's not a feature.

Saying that people prefer one over the other hides the fact that they were never given any other option. People will choose whatever default they are given, and then we can claim that everyone prefers it. Or just make the other option (which used to be the normal one) so complicated that nobody wants it anymore.


Even in the past files wouldn't magically materialize on your harddrive.


In the mini-guide above they already moved the SDK ("Software Development Kit") across the air-gap, yet creating a hello world still requires downloading even more stuff, because the .NET SDK does not actually contain enough stuff to create hello worlds, apparently?

Contrast this with e.g. Zig: the Windows download is a ~60 MB zip file which contains the entire Zig/C/C++ compiler with cross-compilation support for basically everything, and also has complete Win32 headers + libraries in it. There are even DDK headers in there, though I'd expect to do some legwork to build drivers for NT with Zig.


If you're happy to sacrifice the benefits of .net and spend time writing basic Win32 apps, that's totally a choice you can make. Or even just use .net framework 4.6 and not add any extra dependencies.

I'm not really sure what you're complaining about here. .net core is split into tiny packages - if that is hard to handle in your very special environment, you get to use special solutions to make it work.


That's beside the point I was making, which is that there are runtimes/languages which lean towards "internet required for development" or even "internet required for building", while there are also languages/runtimes which are self-contained and independent.

That being said, WinForms is also "just" a Win32 wrapper, I don't see a compelling reason why a similar wrapper wouldn't be possible in pretty much any language. .NET 4.6 is a fine choice too, especially because you're not forced to ship runtime and standard library, as Windows already has both.


I believe that's very much related. The more of the nice wrappers you provide, the more you have to decide if someone needs them all or are you providing them on demand. With .net core doing more splitting than framework and with Java going through a decade of jigsaw, I think we collectively decided we don't want to include everything upfront.

We don't even require the internet for building/development. In .net land you require internet to get the dependency you're using the first time. If you want to get them all and distribute/install before you start development, you can totally do that. It's just not the default behaviour.


Does that 60 MB include something like Qt, comparable to WPF? I'm not sure that comparison is fair.


Files used to materialize without the internet since the beginning; there's nothing magical about it. We just needed the software installer file copied to the hard drive.


It's gonna blow your mind when you realize that `dotnet publish` produces an artifact that IS machine-independent, and that you would just distribute that. Or if it somehow really bothers you, put it in a self-extracting ZIP or MSI, wow, so much better. And I don't know what golden age of the Internet you grew up on, but there's always been "apps" distributed as zips, or as more than just a single binary.

I get that you have opinions, but you seem to have entirely missed that the runtime is downloaded at build time and included in the bundle. And god forbid if you like doing everything by hand, you don't have to use Nuget and you can manage every last dep by hand, however you like (and you'll likely end up hacking something that is less usable than just setting up a private nuget server, but "opinions").


>It's really not that big of a deal. You set up a package proxy that is itself behind the airgap, and you're good.

Yes, technically easy but if their work environment is strict enough to enforce air gapped development, I imagine the bureaucratic process to accomplish such a thing to be a bit less than easy.


> set up a packaging proxy to [connect with a system outside the airgapped computer]

Do you even know what an airgapped computer is?


It’s not that kind of proxy.


My guitar tab program, which I pay for, refused to show me my library of tabs when I was supposed to play for some kids at a mountain campfire, because it couldn't verify my membership without an internet connection. I'm not a good guitar player, and my memorized repertoire is... well, not of interest to 12 year olds. :)

I wouldn't say the campfire was ruined, but my goodwill toward this product certainly was.


> I wouldn't say the campfire was ruined, but my goodwill toward this product certainly was.

Your goodwill deterioration does not matter unless you switch to a new app[1], and a) Make sure that new app can function without internet, and b) Tell your current app developers why you are switching.

So, yeah, your goodwill is irrelevant if you're still giving them money or value. [1] I assume that it's a subscription - most things are nowadays.


I don't know if it's the case anymore, but that's been the state of Windows installers for a long time. Usually the easy-to-download one was tiny and just phoned home for the real bits. And that wasn't just Microsoft's own stuff, but even things like Java and whatever.

Usually you had to dig a bit and could find an "offline installer". Sometimes an "alternate downloads" link is on the initial download page, sometimes you have to Google to find a deeper link on the vendor's site.

I always did that just to keep from needlessly reaching out to the internet X times when updating X machines.

And of course, make sure you're getting it from the vendor and not some sketchy download site.


The worst example I know of is Microsoft Office. When I run their installer/downloader, it installs the 32-bit version on my 64-bit machine. It doesn't let you choose, and by the time you realize, you've already wasted all that time and bandwidth. I had to go to some janky website that hosts the links to the official ISOs and download that instead.


Yes, I hate it when software that I thought was fully installed unexpectedly starts downloading more stuff when I run it.

What if I installed the software with the intention to run it without internet, or 5, 10 or 100 years in the future?


Apparently in those 20GB there was no place for the runtime?


The installer for the SDK is 200 MB.


I think it started with Android.

You can download all versions of Android if you want, but I doubt that you would want that.


I think the first time I encountered it was in some makefile of Chrome or perhaps V8 that automagically downloaded dependencies. It sounds nice in theory, but then I expected the tarball to contain the entire thing which caused trouble and confusion down the line.


The default for development is online now, so nowadays if you need offline you have to explicitly test for it :/


Who gets to decide what is the default?


In this case, Microsoft. It's their private garden; you are invited by their rules.

And yes, this sucks. But if you want freedom, there are other OSes (or even other dev tools) where you can have it.


This is the reason I wrote "bash-drop-network-access" [0]. I use it as part of my package building system so that downloads are only done in the "download" phase of building where I validate and cache all objects. This means I can fully verify that I can build the whole thing air-gapped and far into the future with the set of files identified by the SHA256 in each package's buildinfo.

This is important because I support each release of the distribution for up to 10 years, and have some customers who may need to build it in an air-gapped environment.

[0] https://chiselapp.com/user/rkeene/repository/bash-drop-netwo...


Very interesting. First time I heard about loading bash "builtins" from a shared library. How does this compare to LD_PRELOAD?

Personally, I just run things in network namespaces with "ip netns exec offline|wireguard $COMMAND" to restrict net access.
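
For anyone curious, a minimal sketch of that setup (run as root; the namespace name "offline" and the build script are just placeholders):

      ip netns add offline                            # fresh namespace, no interfaces except lo
      ip netns exec offline ip link set lo up         # keep localhost working
      ip netns exec offline curl https://example.com  # fails: nothing can reach the outside
      ip netns exec offline ./build.sh                # hypothetical build script, now network-free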


With LD_PRELOAD you only affect dynamically linked executables, whereas with kernel enforcement via syscall filtering every process is affected. Also, things are allowed to unset LD_PRELOAD, but not to remove the filtering.

I thought about using a network namespace, but that would make things more complicated since I would need to re-call my shell script to pick up where I left off (because it requires creating a new process). I initially tried to implement this using network namespaces, but you cannot "unshare" the current process; you must spawn a new process.

With dropnet I can do

      download
      enable -f ./dropnet.so dropnet
      configure
      build
      install
With "unshare" I would need to do more work to get to "configure()" in a new process.


Safari is reporting your TLS certificate has expired


While I strongly sympathize, in this case it specifically addresses one of the OP's main objections: why did they have to download and install many GB of stuff that they'll never need. The three options I can think of are: (1) install everything (what they objected to), (2) ask the user what things to install (they probably already had this option but didn't know what they needed), or (3) install a minimal amount and download on demand. Although it doesn't work well for you, it seems it would work well for them.


Because OP is being disingenuous. 20GB is for the _full_ install of VS 2022. The required components for what he is doing are literally half that.


Is it clear to a novice which components they will or will not need ahead of time?


What is it that you are complaining about, really? You need the latest runtime if you want to develop for the latest runtime. If that's your intention, download the latest runtime any way you like, and then install it on your target machine. If it's not, don't download it and develop for the last runtime available on your machine.


You can bundle the .NET runtime with your app. So the user doesn't need to download a runtime.


Yes, however, it will expand the runtime into C:\temp (or similar). What could go wrong? And then you find yourself in an MS-induced yak shave because you want to run two different executables. Microsoft is a never ending source of accidental complexity.

In this particular scenario, my first thought was "shoulda used golang".

I hear tell that since then (1+ yrs ago) matters have improved in the realm of MS standalone apps (well, maybe just cmd line apps).

Oh, and the exe is around 65 MB, compared to roughly 5 or 6 MB for Golang.


Indeed, that's why it's called a redistributable.


Agreed. I was very happy when it was announced in 2019 that Cargo got offline support: https://www.ncameron.org/blog/cargo-offline/

You can even prefetch popular libraries: https://crates.io/crates/cargo-prefetch
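
For anyone who hasn't tried it, the basic flow is just two commands (nothing hypothetical here beyond your own project layout):

      cargo fetch            # with network: populate the local registry/cache
      cargo build --offline  # later: build without touching the network at all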


Well, it caches the packages... So only your CI system needs internet, or your PC the first time you ever publish.

And you can likely mention it as an explicit dependency in your csproj so that you can download it on first restore.


I feel like you are misunderstanding the rules associated with an air-gapped system.


I developed with dotnet in an airgapped environment. Due to restrictions, you cannot use dozens of nuget packages. So, you create a nuget package repository in your airgapped environment. That's all it is. If you want something else, you use whatever the policy is to get a file from internet to airgapped side. When I wanted a newer version of a Nuget package, it took me 3-4 hours to get it on my workstation. But that's all.

Also, when you write something on those environments, you know users cannot install a runtime. So you get in touch with IT teams to ensure which version of runtime they are deploying. If and only if you have to update it for proper reasons, then they can deploy newer versions to the clients. For all or for a specific user base. This is how it works.

Without an actual business case or a security concern, you don't just go from one runtime to another, let's say 4.8 to 6.0. So yes, development in airgapped environments is a PITA. But it's the same with Java, Python, Perl, etc. That's not the fault of the runtime but of the development environment itself.
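
For illustration, moving a single package across the gap can be as simple as carrying the .nupkg over and registering a local folder feed (paths and the package chosen here are just examples):

      # on the airgapped side, after the file has been carried across
      mkdir -p /srv/nuget-local
      cp Newtonsoft.Json.13.0.1.nupkg /srv/nuget-local/
      dotnet nuget add source /srv/nuget-local --name airgap-feed
      dotnet restore   # the package now resolves from the folder feed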


Presumably all development frameworks require you to explicitly list your dependencies, download/restore them with internet, then snapshot and copy that to your air-gapped environment?

That's exactly what you have to do here.


Is there a rule regarding developing for a .NET framework from within such an environment?

I understand the OC's issues with the difficulties of using M$ tools with limited internet, but wonder if the "air gapped" example may be a bit extreme.

Being required to work from home while still meeting an employers' secure network policies might be more common.


I would guess because the world doesn't revolve around you? You can download the full installers and bring them over on a USB, it's a trivial operation. You can also build on a networked computer and then bring over the final file(s) to your air-gapped system.


Well, in .net 6 you have the ability to deploy a self-contained application, in a single-file manner, and even compress the binary [1]

The end result is a Golang-like, single-binary experience that runs on many platforms easily and rapidly.

Though I can work in a lot of programming languages, I miss C# the most, especially for async/await and LINQ. Rust is my second favourite, with a lot of similarities to C#.

[1]: https://docs.microsoft.com/en-us/dotnet/core/deploying/singl...
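
For reference, the publish invocation for that kind of build looks roughly like this (RID and properties as described in the linked docs; treat the exact flag set as a sketch):

      dotnet publish -c Release -r linux-x64 --self-contained true \
          -p:PublishSingleFile=true \
          -p:EnableCompressionInSingleFile=true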


Pytorch occasionally does this as well w/ model weights and it's a royal PITA


You are missing the point. ...We have imaged Black Holes galaxies away, detected Gravitational waves from the other side of the Universe, landed on the Moon.

But to this day, nobody knows what data your system sends to Microsoft as telemetry. Not the data they talk about in the 5-10 page license, but the data mentioned in the 55-page doc about what you agree to send them, the one referenced from the MS Software License...


What dev system allows you to build things without downloading required components first? None?

Like every other dev system: connect, either download offline installers for everything (they exist) or get your system running, and then you can dev offline all you like.

You don't need to "be connected to the internet at all times for any reason whatsoever". You need it once.


Man… the number of times I’ve had to debug a broken Steam game by installing the Nth .net runtime version…


For me it's zero.


Things were far more annoying in the past: in Win98, connecting a printer or any other hardware required inserting the installation CD or having a folder with all the cab files on your system, and drive space was far less abundant.


Same with CI/CD pipelines. Most developers just choose to download the same runtime each time there is a build, which is not just inefficient but also not at all guaranteed to keep working for the next 10 years.


It's a split game: you can install everything at once, like 60 gigs, and then happily work offline, but for most people it is much easier to work with the on-demand model and pull what is needed when it's needed.


This is a bit dramatic. You're a software developer, building an app which has dependencies, so of course you have to download those dependencies to build. Where else would they come from? Literally every language with a package manager does the same thing.


Being able to make a portable build of the software you are creating is such a basic feature that it's baffling you have to fetch extra data to do it. Also, nowhere in "dotnet publish -o ./publish -r win-x64" did I say "Connect to the internet and fetch megabytes of binaries".

What I miss is the old model for installing software. Give me the yearly ISO, optionally provide a service pack or two if some huge problem went under the radar in testing.


`dotnet publish` performs an implicit `dotnet restore`. So, yes, you did.

If you don't want it to download anything then you use the `dotnet publish --no-restore` flag, which is used a lot in CI/CD pipelines. If you don't have the package dependencies cached it will then simply fail.
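
A minimal sketch of that split, assuming a local package directory of your choosing:

      # once, on a machine with internet (restore for the same RID you'll publish for):
      dotnet restore -r win-x64 --packages ./nupkg-cache
      # later, fully offline; fails fast instead of reaching out if anything is missing:
      dotnet publish -c Release -r win-x64 --no-restore -o ./publish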


The opposite side of that coin is a required up-front install of every package that might ever be needed for every possible scenario... in which case people would complain (even more) about massive installs.

The internet exists, the industry has evolved, software has dependencies, and yes you have to download them (just like you had to download the SDK ISOs back in the day). But it's just one command, run it and get it over with, and after that up-front one-time pain you'll have a nice offline workflow.


I'm not OP, so interpreting: I don't think OP is asking for an up-front install of every package under the sun that might ever be needed for any kind of development. He's just asking that, out of the box, the build tools can build software with no dependencies into an executable without having to hit the Internet. And, if he has particular dependencies he needs, allow him to download them (ONCE) onto that machine, and again, he can build software into an executable without having to hit the Internet again. This doesn't seem that unreasonable a request. Every other compiler I've ever used has had this feature. It wasn't even a feature. It's just the way software has always worked.

I should be able to take my computer to a remote cabin with no Internet, and use all the software on it. The only software I'd expect to not work is software whose purpose is to access data stored on the Internet, like web browsers. I don't think this is such a crazy user expectation.


You are welcome to the philosophy that says, “the internet exists. Adapt or perish.” It may serve you well.

For many, it is not so black and white. Internet connections are spotty, slow, or expensive. In GP’s case, there is no internet.

Like I said, you are welcome to ignore those users. But your ignorance (I don’t mean that in a derogatory way) doesn’t change their situation.


> make a portable build of the software you are creating is such a basic feature

That is easily doable. However users often don't want a copy of a large runtime for each and every program they use, so it often makes sense to move common things (like DLLs, runtimes, your OS) to libraries that can be shared.

You can easily make dotnet apps in either flavor to your liking. And not every developer is going to make their apps to appeal to your needs.


We seem to have normalised the current situation as an industry, but that doesn't mean the situation is good.

In days gone by we used to have truly standard libraries and runtimes, in the sense that they came with your build tools out of the box and so were available everywhere. Host platforms similarly provided basic services universally. Documentation was often excellent and also available out of the box.

In that environment, writing "Hello, world!" meant writing one line that said do that, maybe with a little boilerplate around it depending on your language. Running a single simple command from a shell then either interpreted your program immediately or compiled it to a single self-contained executable file that you could run immediately. Introducing external dependencies was something you did carefully and rarely (by today's standards) when you had a specific need and the external resource was the best way to meet that need.

Some things about software development were better in those days. Having limited functionality in standard libraries and then relying on package managers and build tools where the norm is transitively installing numerous dependencies just to implement basic and widely useful functionality is not an improvement. The need for frameworks and scaffolding tools because otherwise you can spend several hours just writing the boilerplate and setting up your infrastructure is not an improvement.


There was a time when MS didn't understand the internet and none of their build tools depended on it.



