3. `dotnet run` and you have an (admittedly empty) hello world app
It's telling me it took 1.5s to build the app.
Want to publish the app?
`dotnet publish -o ./publish -r win-x64` which takes a bit longer as it has to download some runtimes (60 seconds max, then cached). Zip up the resulting folder and ta-da you have an application. Just unzip and run the .exe.
If you want a single-file exe (as in completely statically compiled without self-extraction) that's a bit more advanced and requires a few properties in the csproj to configure how to handle native assemblies that expect to exist on disk. If you want it to have no runtime dependencies, add `--self-contained`
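Roughly, the knobs in question look like this on the command line (they can equally go in the csproj as PropertyGroup entries; note that IncludeNativeLibrariesForSelfExtract still unpacks native libraries at run time, so a truly extraction-free build typically needs Native AOT instead):

```
# self-contained, single-file publish; native libraries that expect to exist
# on disk get bundled and unpacked at run time
dotnet publish -o ./publish -r win-x64 --self-contained true \
  -p:PublishSingleFile=true \
  -p:IncludeNativeLibrariesForSelfExtract=true
```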
As someone who had to develop software in an airgapped environment, I'm sending a special "fuck you" to whoever thought this was a good idea. God forbid you have the AUDACITY to not be connected to the internet at all times for any reason whatsoever.
Off topic, but this really struck a chord. I arrive at the trailhead for an afternoon of mountain biking in the mountains. I can't record the ride route because my mountain bike specific trail/ride logging app can't get an internet connection to log me in. I don't want to be logged in. I just want to record GPS over time. No maps needed. No server access needed. I guess they never thought someone might want to mountain bike in the mountains! Sorry, but that is an idiotic design decision. Unfortunately, this approach to "personal" computing has become the norm rather than an exception. People are not allowed to exist on their own.
...then use a different app. There are myriad ways to record rides with no data service; I do it all the time. Most commonly I use Strava or a dedicated Garmin bike computer.
Syncing / uploading then happens once they have signal again. Or in the case of the Garmin I copy the FIT file off by hand when plugging it into my computer.
I think that by the time you're at the top of a mountain without cell service, you're already locked in to an app that won't let you record your position.
I don't think that was the point at all, the point was that if your app doesn't let you record your track without internet connection, then it's on that specific app (and eventually it's on you if you stick with such an app).
There are many apps that let you track your run/hike/ride without internet connection.
You got it; the point is that it's a one-time problem after which a different app should be chosen. There's a ton of things out there which'll record rides offline.
Personally, I think a dedicated bike computer is best, because then the phone's battery is saved for emergency uses instead of recording a ride. For long rides (8-10 hours) phones won't have enough battery to record the whole ride.
I record GPX with OSMAnd+ running on my older and smaller Android phone. No SIM, no Bluetooth, only GPS. It goes on all day long. Then I send it to my new phone or to my computer over WiFi. If I were in the mountains I'd turn on hot spot mode on the phone to make the transfer work.
It's exactly why I quit using fitbit 4 or 5 years ago when they made a similar change. There should be 0 reason for needing an internet connection to send data from a wristband 2 feet away to my phone using bluetooth in order to tell me how many feet I've traveled. That may have changed since then but I wouldn't know. They lost me as a customer.
You made the mistake of thinking an app's functionality is its purpose. The functionality is just a thin veneer of an excuse the devs use to track your every moment. Why would you think otherwise?
Even if it cost money to install, you are just paying for the privilege of being tracked by those folks instead of others.
Maps.me started to implement some monetisation UI bloat a while ago. I swapped to organic maps. Can’t remember if it’s a fork or by one of the old maps.me developers but there was some connection.
Hamburg, Germany: the public transport services released a new app a few months ago which assumes that the request failed when the app has not finished receiving an answer to a route query after a given time span. The problem: It is normal to have a data cap that, when reached, limits the rate to 64 kbit/s. That is too slow for the answer to be fully transmitted in time...
At least the website works, even though it transmits the full site for every request...
Did this happen with Strava? My usual morning MTB lap has no cell service at the trailhead and it’s never been an issue. But I’ve also never found myself logged out before.
The PADI (scuba diving) app barely works without a constant internet connection; the eLearning part in particular is useless once you are offline or have a shaky connection.
Scuba diving spots around the world are rarely well covered by internet, and even when they are, many people don't have roaming or a local SIM for it.
Guess where you want to use that damn app the most?
You're quoting something about “5G Ultra Wideband”, which seems to be a brand name for mmWave. Yes, mmWave has very short range. But 5G isn't just mmWave. It's in many ways an evolution of LTE/4G, supporting the same frequencies and offering the same range, i.e. multiple km/miles. But it's up to carriers how they allocate their frequencies. To quote Wikipedia:
> 5G can be implemented in low-band, mid-band or high-band millimeter-wave 24 GHz up to 54 GHz. Low-band 5G uses a similar frequency range to 4G cellphones, 600–900 MHz, giving download speeds a little higher than 4G: 30–250 megabits per second (Mbit/s). Low-band cell towers have a range and coverage area similar to 4G towers.
5G is _perfect_ for providing coverage in rural areas, except for the problem that 4G devices are incompatible with 5G networks. Starting 5G rollout in urban areas makes more sense because (a) 5G provides most benefit when clients are close together, and (b) because denser cells make it reasonably economical to maintain 4G coverage in parallel to 5G coverage.
That's a fair point -- that the tech is capable of supporting it. I could be wrong, but in the near term I don't recall any US carriers proposing to allocate any low-band spectrum that way.
Either way, if we're talking about "coverage" for low-bandwidth stuff like fitness trackers, it's the spectrum that matters more than anything. We can communicate thousands of miles on 1 or 2 watts of LF spectrum using technology that is nearly a century old. Don't need 5G for that, just need to use the right spectrum.
I'm very excited to hear the plan for locating 5G towers in the ocean, in remote wilderness sites, in underground facilities, the Antarctic, etc. People visit these sites and expect their tech to work fine as long as it doesn't obviously require a network connection. Of course I can't browse HN from those places, but my otherwise self-contained apps should continue to run predictably.
How so? I thought 5G is mostly coming to densely populated areas, that is, areas that already have decent connectivity. Also, at least currently, I thought 5G is a developed country thing. Lots of folks are still running off 3G.
Huh, almost seems like 5G is marketing bullshit? The primary goal of 5G being to goad and shame consumers into upgrading a perfectly capable older phone to a new phone that is “5G ready”
I'm very excited to have gigabit download speeds so I can hit my hidden "unlimited" undescribed quota within a minute while also permanently having hotspot throttled to 128kbps.
It's really not that big of a deal. You set up a package proxy that is itself behind the airgap, and you're good. Yes, you have to put some extra effort into moving packages across that airgap when you need to add or upgrade one, but then, isn't having to do things like that kind of the whole point of an airgap?
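For the .NET case specifically, that's roughly the following two commands, assuming a hypothetical internal feed URL:

```
# stop restores from ever reaching the public feed...
dotnet nuget disable source nuget.org
# ...and resolve everything through the internal mirror instead
dotnet nuget add source https://nuget.internal.example/v3/index.json --name internal
```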
I certainly wouldn't want to ask that the other 99% of the world's developers avoid a feature that's useful to them just to assuage my feelings of envy about the convenience they enjoy.
I'd call it a "yup," if we're talking about the point of an airgap. If I don't want executables contacting the outside world without my knowledge, and one is, then the airgap (or, more likely, firewall or suchlike) preventing that exe from being able to do so is a feature.
This does mean that certain programs just won't work, or won't work without some finagling. That's also a feature. The price of control is having to control things.
Granted, most people don't want to pay that price, and prefer convenience. That's admittedly not to my own taste - cf. log4j for a good example of why - but I think I'm maybe a little weird there. I certainly don't think there's anything audacious about catering to majority tastes. Maybe just vaguely disappointing.
Computers have been able to work without internet since I was a kid. That's how my friends and I used them, without any problem. By using the word "airgap" you are making it sound like a new, special setup that needs special steps, where some programs will not work, etc. It's not a feature.
Saying people prefer one over the other hides the fact that they were not given any other option. People will choose whatever default is given, and then we can say everyone prefers that. Or just make the other option (which used to be normal) complicated enough that nobody wants it now.
In the mini-guide above they already moved the SDK ("Software Development Kit") across the air-gap, yet creating a hello world still requires downloading even more stuff, because the .NET SDK does not actually contain enough stuff to create hello worlds, apparently?
Contrast this with e.g. Zig: the Windows download is a ~60 MB zip file which contains the entire Zig/C/C++ compiler with cross-compilation support for basically everything, and also has complete Win32 headers + libraries in it. There are even DDK headers in there, though I'd expect to do some legwork to build drivers for NT with Zig.
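To illustrate, after unzipping that archive a cross-compile is a one-liner with no further downloads (hello.c being whatever trivial C source file you have lying around):

```
# produce a Windows executable from any host, straight from the unzipped toolchain
zig cc -o hello.exe hello.c -target x86_64-windows-gnu
```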
If you're happy to sacrifice the benefits of .net and spend time writing basic Win32 apps, that's totally a choice you can make. Or even just use .net framework 4.6 and not add any extra dependencies.
I'm not really sure what you're complaining about here. .net core is split into tiny packages - if that is hard to handle in your very special environment, you get to use special solutions to make it work.
That's besides the point I was making, which is that there's runtimes/languages which lean towards "internet required for development" or even "internet required for building" while there are also languages/runtimes which are self-contained and independent.
That being said, WinForms is also "just" a Win32 wrapper, I don't see a compelling reason why a similar wrapper wouldn't be possible in pretty much any language. .NET 4.6 is a fine choice too, especially because you're not forced to ship runtime and standard library, as Windows already has both.
I believe that's very much related. The more of the nice wrappers you provide, the more you have to decide if someone needs them all or are you providing them on demand. With .net core doing more splitting than framework and with Java going through a decade of jigsaw, I think we collectively decided we don't want to include everything upfront.
We don't even require the internet for building/development. In .net land you require internet to get the dependency you're using the first time. If you want to get them all and distribute/install before you start development, you can totally do that. It's just not the default behaviour.
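A rough sketch of that "fetch once, then work offline" flow, assuming pinned package versions and made-up folder names:

```
# on a connected machine: pull every dependency into a portable folder
dotnet restore --packages ./nuget-mirror
# copy ./nuget-mirror to the offline machine, then point restore at it there;
# versions already present in that folder resolve without hitting the network
dotnet restore --packages ./nuget-mirror
dotnet build --no-restore
```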
It's gonna blow your mind when you realize that `dotnet publish` produces an artifact that IS machine-independent, and that you would just distribute that. Or if it somehow really bothers you, put it in a self-extracting ZIP or MSI, wow, so much better. And I don't know what golden age of the Internet you grew up on, but there's always been "apps" distributed as zips, or as more than just a single binary.
I get that you have opinions, but you seem to have entirely missed that the runtime is downloaded at build time and included in the bundle. And god forbid if you like doing everything by hand, you don't have to use Nuget and you can manage every last dep by hand, however you like (and you'll likely end up hacking something that is less usable than just setting up a private nuget server, but "opinions").
>It's really not that big of a deal. You set up a package proxy that is itself behind the airgap, and you're good.
Yes, technically easy but if their work environment is strict enough to enforce air gapped development, I imagine the bureaucratic process to accomplish such a thing to be a bit less than easy.
My guitar tab program, which I pay for, refused to show me my library of tabs when I was supposed to play for some kids at a mountain campfire because it couldn't verify my membership because no internet connection. I'm not a good guitar player, and my memorized repertoire is... well, not of interest to 12 year olds. :)
I wouldn't say the campfire was ruined, but my goodwill toward this product certainly was.
> I wouldn't say the campfire was ruined, but my goodwill toward this product certainly was.
Your goodwill deterioration does not matter unless you switch to a new app [1], and:
a) make sure that new app can function without internet, and b) tell your current app developers why you are switching.
So, yeah, your goodwill is irrelevant if you're still giving them money or value.
[1] I assume that it's a subscription - most things are nowadays.
I don't know if it's the case anymore, but that's been the state of windows installers for a long time. Usually the easy-to-download one was tiny and just phoned home for the real bits. And that wasn't just Microsoft's own stuff, but even things like Java and whatever.
Usually you had to dig a bit and could find an "offline installer". Sometimes an "alternate downloads" link is on the initial download page, sometimes you have to Google to find a deeper link on the vendor's site.
I always did that just to keep from needlessly reaching out to the internet X times when updating X machines.
And of course, make sure you're getting it from the vendor and not some sketchy download site.
The worst example I know of is Microsoft Office. When I run their installer/downloader, it installs the 32-bit version on my 64-bit machine - it doesn't let you choose, and by the time you realize, you've already wasted all that time and bandwidth. I had to go to some janky website that hosts the links to the official ISOs and download that instead.
I think the first time I encountered it was in some makefile of Chrome or perhaps V8 that automagically downloaded dependencies. It sounds nice in theory, but then I expected the tarball to contain the entire thing which caused trouble and confusion down the line.
This is the reason I wrote "bash-drop-network-access" [0]. I use it as part of my package building system so that downloads are only done in the "download" phase of building where I validate and cache all objects. This means I can fully verify that I can build the whole thing air-gapped and far into the future with the set of files identified by the SHA256 in each package's buildinfo.
This is important because I support each release of the distribution for up to 10 years, and have some customers who may need to build it in an air-gapped environment.
Using LD_PRELOAD you only affect dynamically linked executables, whereas with kernel enforcement via syscall filtering every process is affected. Also, processes are allowed to unset LD_PRELOAD, but they cannot remove the filtering.
I thought about using a network namespace, but that would make things more complicated, since I would need to re-invoke my shell script to pick up where I left off: you cannot "unshare" the current shell process, you must spawn a new one.
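For comparison, the namespace route is essentially the following, which is exactly why it forces a new process (the script name is made up; this needs root or equivalent capabilities):

```
# run only this phase with no network access; the child process sees an empty
# network namespace containing just a down loopback interface
unshare --net -- ./run-build-phase.sh
```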
While I strongly sympathize, in this case it specifically addresses one of the OP's main objections: why did they have to download and install many GB of stuff that they'll never need. The three options I can think of are: (1) install everything (what they objected to), (2) ask the user what things to install (they probably already had this option but didn't know what they needed), or (3) install a minimal amount and download on demand. Although it doesn't work well for you, it seems it would work well for them.
What is it that you are complaining about really? You need the latest runtime if you want to develop for the latest runtime. If that's your intention, download the latest runtime any way you like, and then install it on your target machine. If it's not, don't download it and develop for the latest runtime already available on your machine.
Yes, however, it will expand the runtime into C:\temp (or similar). What could go wrong? And then you find yourself in an MS-induced yak shave because you want to run two different executables. Microsoft is a never-ending source of accidental complexity.
In this particular scenario, my first thought was "shoulda used golang".
I hear tell that since then (1+ yrs ago) matters have improved in the realm of MS standalone apps (well, maybe just cmd line apps).
Oh, and the exe is roughly 65 MB, compared to ~5-6 MB for the Go equivalent.
I developed with dotnet in an airgapped environment. Due to restrictions, you cannot use dozens of nuget packages. So, you create a nuget package repository in your airgapped environment. That's all it is. If you want something else, you use whatever the policy is to get a file from internet to airgapped side. When I wanted a newer version of a Nuget package, it took me 3-4 hours to get it on my workstation. But that's all.
Also, when you write something on those environments, you know users cannot install a runtime. So you get in touch with IT teams to ensure which version of runtime they are deploying. If and only if you have to update it for proper reasons, then they can deploy newer versions to the clients. For all or for a specific user base. This is how it works.
Without an actual business case or a security concern, you don't just go from one runtime to another, let's say 4.8 to 6.0. So yes, development in air-gapped environments is a PITA. But it's the same with Java, Python, Perl, etc. That's not the fault of the runtime but of the development environment itself.
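Once the package file has crossed the gap, getting it into the internal repository is a one-liner (package name and feed path are made up; a private NuGet server URL works the same way):

```
# publish the newly imported package into the internal feed (a plain folder feed here)
dotnet nuget push Some.Package.1.2.3.nupkg --source ./internal-feed
```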
Presumably all development frameworks require you to explicitly list your dependencies, download/restore them with internet, then snapshot and copy that to your air-gapped environment?
Is there a rule regarding developing for a .NET framework from within such an environment?
I understand the OP's issues with the difficulties of using M$ tools with limited internet, but wonder if the "air gapped" example may be a bit extreme.
Being required to work from home while still meeting an employers' secure network policies might be more common.
I would guess because the world doesn't revolve around you? You can download the full installers and bring them over on a USB, it's a trivial operation. You can also build on a networked computer and then bring over the final file(s) to your air-gapped system.
Well, in .NET 6 you have the ability to deploy a self-contained application as a single file, and even compress the binary [1].
The end result is a Golang-like, single binary experience that runs on many platforms easily and rapidly.
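Something like this, as of .NET 6 (the runtime identifier is just an example):

```
# self-contained, single-file, compressed publish
dotnet publish -c Release -r linux-x64 --self-contained true \
  -p:PublishSingleFile=true \
  -p:EnableCompressionInSingleFile=true
```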
Though I've worked with a lot of programming languages, I miss C# the most, especially async/await and LINQ. Rust is my second favourite, with a lot of similarities to C#.
You are missing the point. ...We have imaged Black Holes galaxies away, detected Gravitational waves from the other side of the Universe, landed on the Moon.
But to this day, nobody knows what data your system sends to Microsoft as telemetry. Not the data they talk about in the 5-10 page license. Instead, the data mentioned in the 55-page doc about what you agree to send them, which they refer to from the MS Software License...
What dev system allows you to build things without downloading required components first? None?
Like every other dev system, connect, either download offline installers for everything (they exist), or get your system running, then you can dev offline all you like.
You don't need to "be connected to the internet at all times for any reason whatsoever". You need it once.
Things were far more annoying in the past: in Win98, connecting a printer or any other hardware required inserting the installation CD or having a folder with all the .cab files on your system, and drive space was far less abundant.
Same with CI/CD pipelines. Most developers just choose to download the same runtime each time there is a build, which is not just inefficient but also not at all guaranteed to keep working for the next 10 years.
It's a trade-off: you can install everything at once, something like 60 GB, and then happily work offline, but for most people it is much easier to work with the on-demand model and pull what is needed when it's needed.
This is a bit dramatic. You're a software developer, building an app which has dependencies, so of course you have to download those dependencies to build. Where else would they come from? Literally every language with a package manager does the same thing.
Being able to make a portable build of the software you are creating is such a basic feature that it's baffling you have to fetch extra data to do it. Also, nowhere in "dotnet publish -o ./publish -r win-x64" did I say "connect to the internet and fetch megabytes of binaries".
What I miss is the old model for installing software. Give me the yearly ISO, optionally provide a service pack or two if some huge problem went under the radar in testing.
`dotnet publish` performs an implicit `dotnet restore`. So, yes, you did.
If you don't want it to download anything then you use the `dotnet publish --no-restore` flag, which is used a lot in CI/CD pipelines. If you don't have the package dependencies cached it will then simply fail.
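The usual split looks something like this (configuration and runtime identifier are just examples):

```
# restore step: the only part allowed to touch the network / package cache
dotnet restore
# publish step: fails fast instead of quietly downloading anything
dotnet publish -c Release -r win-x64 --no-restore -o ./publish
```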
The opposite side of that coin is a required up-front install of every package that might ever be needed for every possible scenario... in which case people would complain (even more) about massive installs.
The internet exists, the industry has evolved, software has dependencies, and yes you have to download them (just like you had to download the SDK ISOs back in the day). But it's just one command, run it and get it over with, and after that up-front one-time pain you'll have a nice offline workflow.
I'm not OP, so interpreting: I don't think OP is asking for an up-front install of every package under the sun that might ever be needed for any kind of development. He's just asking that, out of the box, the build tools can build software with no dependencies into an executable without having to hit the Internet. And, if he has particular dependencies he needs, allow him to download them (ONCE) onto that machine, and again, he can build software into an executable without having to hit the Internet again. This doesn't seem that unreasonable a request. Every other compiler I've ever used has had this feature. It wasn't even a feature. It's just the way software has always worked.
I should be able to take my computer to a remote cabin with no Internet, and use all the software on it. The only software I'd expect to not work is software whose purpose is to access data stored on the Internet, like web browsers. I don't think this is such a crazy user expectation.
> make a portable build of the software you are creating is such a basic feature
That is easily doable. However users often don't want a copy of a large runtime for each and every program they use, so it often makes sense to move common things (like DLLs, runtimes, your OS) to libraries that can be shared.
You can easily make dotnet apps in either flavor to your liking. And not every developer is going to make their apps appeal to your needs.
We seem to have normalised the current situation as an industry, but that doesn't mean the situation is good.
In days gone by we used to have truly standard libraries and runtimes, in the sense that they came with your build tools out of the box and so were available everywhere. Host platforms similarly provided basic services universally. Documentation was often excellent and also available out of the box.
In that environment, writing "Hello, world!" meant writing one line that said do that, maybe with a little boilerplate around it depending on your language. Running a single simple command from a shell then either interpreted your program immediately or compiled it to a single self-contained executable file that you could run immediately. Introducing external dependencies was something you did carefully and rarely (by today's standards) when you had a specific need and the external resource was the best way to meet that need.
Some things about software development were better in those days. Having limited functionality in standard libraries and then relying on package managers and build tools where the norm is transitively installing numerous dependencies just to implement basic and widely useful functionality is not an improvement. The need for frameworks and scaffolding tools because otherwise you can spend several hours just writing the boilerplate and setting up your infrastructure is not an improvement.
This is my experience as well building and running .NET core stuff on Arch Linux all the time. You just have to know what you're doing, and the Microsoft documentation doesn't make it easy to take the minimalist route.
Microsoft could do a much better job onboarding new developers.
Visual Studio Code only needs to be good enough for a Cloud IDE kind of scenario for Azure workloads (it started that way as Monaco anyway), anything beyond that is a gift so to speak.
This is probably the easiest way. The tragedy is that this explicitly rejects all the subsequent developments since WPF - UWP, WinUI3 - because they don't work nearly as well.
- ignore Winui3 and do it in WPF (fewer controls, deprecated, actually works)
- do it blind in XAML, possibly with the aid of a piece of paper. Or the "live view" in VS if that works. (Live View is the suggestion given on the github issue for "designer doesn't work", fwiw)
- do it in the UWP designer, then s/Windows/Microsoft/ in the control namespaces
Exactly. And then people wonder why everything is electron nowadays. Native UI development on any platform is pure garbage compared to frameworks in Web frontend.
I hope SwiftUI and flutter will be able to make it at least a little bit better.
> If you want a single-file exe that's a bit more advanced and requires a few properties in the csproj.
This. This is what's wrong. Why is single-file exe "a bit more advanced". In early 2000s Delphi could build a single file exe in seconds, and that was the default behaviour.
What changed since early 2000s that having an exe is a) advanced and b) requires manually fiddling with some unspecified properties in the abomination that is a csproj file?
> This. This is what's wrong. Why is single-file exe "a bit more advanced".
Because that's how it works for every single interpreted and bytecode-compiled language?
And the thing that changed in the early 2000s was a massive shift toward using interpreted and bytecode compiled languages.
If we're specifically talking .NET, the thing that changed since the early 2000s is that creating a self-contained executable became possible in the first place. On Windows, .NET *.exe files were still run by an outside runtime, it's just that, since Microsoft owned the whole platform, it was easy for them to hide all that behind a curtain, ensure .NET is preinstalled with Windows, etc. The design constraints changed when .NET became cross-platform. OS X and Linux require a bit more (or at least different) finagling in order to achieve good user experience.
> Because that's how it works for every single interpreted and bytecode-compiled language?
I went ahead and searched for C# executable around 2005-2006. Guess what, this wasn't even a question then. Because, apparently, building an .exe was also the default setting for C# projects in Visual Studio.
So. What changed?
> If we're specifically talking .NET, the thing that changed since the early 2000s is that creating a self-contained executable became possible in the first place.
It was always possible.
> On Windows, .NET .exe files were still run by an outside runtime
1. We're literally in the thread for a question about Windows apps. On Windows.
2. If you've ever done anything on Windows, such as played a game, you'd know that you almost always need something external to run it: be it msvcrt (the C++ runtime) or the CLR.
> The design constraints changed when .NET became cross-platform.
What you mean is: it's still perfectly fine to create a standalone executable, but for some reason it's now called a "more advanced operation". The thing that's changed is that now it's hidden behind a ton of inconsistently named parameters.
But, per the last paragraph of my comment, those .exe files were not really executable files. At least not in the sense of, say, an AOT-compiled C++ application.
They were much more comparable to an "executable" Python or Perl script where the OS knows to look at the hash-bang line to figure out what interpreter to use to run it. If you try to execute one of those .NET .exes on a computer that doesn't have a suitable .NET run-time installed, you'll get more-or-less the same error as you'd get trying to run a Python script on a computer that doesn't have Python installed.
The part that was being criticized a few comments up was about how to create self-contained .NET apps with the runtime bundled in and everything. Specifically, these guys: https://docs.microsoft.com/en-us/dotnet/core/deploying/#publ... That kind of executable simply did not exist in the old Windows-only .NET Framework; it's a feature that was first introduced in .NET Core 3.0.
No application on a modern OS is standalone. They all rely on having many components they need already installed, and then try to bring along others that may not be. As the commonly installed base changes, the included pieces also change.
I for one don't want every application to include 100's of MB of standard components that every other such app also brings (such as Electron style apps). I'd much rather have an app tell the OS to fetch missing pieces once, and once only, then future apps share.
And this also mitigates a significant source of security holes. Nothing like linking everything and the kitchen sink so your system is riddled with unknown, hidden vulnerabilities in binaries.
For example, I recently worked on tools to search for such things - they are EVERYWHERE. OpenSCAD, for example, includes an SSH engine which has known vulnerabilities, but OpenSCAD does not list them. I found thousands and thousands of embedded binary libraries in applications with known and unpatched vulnerabilities.
Too bad all those didn't use a decent package manager, allowing systemwide updates to common functionality. I suspect the future is more components, not less, for these reasons.
"Oops, we couldn't find a shared library that this program depends on" is not exactly the same error as, "Oops, we couldn't find the interpreter / VM that you need to run any programs written in this non-AOT-compiled language."
In other words, compare missing msvcrt.dll more to missing libc.so than to not having a JVM or whatever installed. I guess from end user perspective the end result is the same - the program won't run - but what's actually going on under the hood is very different.
Which is exactly why I don't use any of those. I will stick to Go, or Rust, or Zig. People expect to be able to produce a single EXE. Maybe not as the default, but it should be an option, and an easy option. Any language that can't do that is a failure in my opinion.
Also, please don't lump interpreted languages in with C#. At least with interpreted languages, once you have the runtime on your computer, you can work with a single script file. With C#, you STILL have to have the runtime on your computer, and then after you "compile", you're left with a folder of one EXE and literally over 100 DLLs.
With Python, if I publish my script with 100 dependencies, and someone `pip install`s it, they will also end up with 100 packages being copied to their computer.
The main difference is that Python installs them globally (or at least thinks it does, if you're using virtual environments), while .NET apps isolate themselves from each other.
Also, let's make a fair comparison. Is that hypothetical Rust application of yours depending on literally 100 crates? If so, what is the size of your binary?
Please don't use Python package management as a baseline. Aim higher.
I love Python, but I cringe whenever someone asks me why a Python program isn't running properly on their machine. Obligatory xkcd: https://xkcd.com/1987/
Take PHP with composer: it works quite fine, but you still need all the dependencies downloaded from somewhere on the Internet. Just a PHP script and the PHP interpreter works, but that is not how 99.9% of software is written.
The php world invented the phar format [1] to deal with the single file tool/app distribution issue.
In fact, composer uses dependencies managed by itself in their sources. Then it gets packaged and distributed as a single file that includes all dependencies (composer.phar). That single file can be run by php as if it was any regular php file (including executing it directly and letting your system detect the required interpreter through the shebang).
> With C#, you STILL have to have the runtime on your computer, then after you "compile", youre left with a folder of one EXE and literally over 100 DLL.
No you don't. If you'd rather increase the download size for the user then you can turn on self-contained (include runtime) and single-file app.
If we're talking modern .NET (6, for example), you have 4 options; let's assume a simple hello world without third-party dependencies (rough publish commands for each are sketched below):
1. Build it runtime-dependent (which requires the .NET 6 runtime to be pre-installed on the target machine in order to run): you get a single exe file.
2. Build it self-contained: you get a directory with a bunch of DLLs and one exe, but no runtime needs to be installed on the target computer.
3. Build it self-contained + single exe: you get a single exe that embeds all those DLLs and unpacks them in memory (since .NET 6; .NET 5 would copy them to a temp directory).
4. Build it using AOT mode: you get a single, statically linked exe. This is probably the closest to a standard Rust (statically linked) build. However, AOT mode is not yet official and still requires some fiddling, but should become stable for .NET 7 next year. And you lose out on some features, obviously, like runtime code generation.
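Roughly, the corresponding `dotnet publish` invocations (win-x64 is just an example runtime identifier; exact flags vary slightly by SDK version):

```
# 1. runtime-dependent single file (needs the .NET 6 runtime on the target)
dotnet publish -c Release -r win-x64 --self-contained false -p:PublishSingleFile=true

# 2. self-contained (folder of DLLs + one exe, nothing to install on the target)
dotnet publish -c Release -r win-x64 --self-contained true

# 3. self-contained single exe (DLLs embedded, loaded from memory on .NET 6)
dotnet publish -c Release -r win-x64 --self-contained true -p:PublishSingleFile=true

# 4. native AOT: -p:PublishAot=true is the .NET 7 property; on .NET 6 this
#    still goes through the experimental ILCompiler package
dotnet publish -c Release -r win-x64 -p:PublishAot=true
```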
The reason it's more complicated is to support reflection. C# allows you to create objects and call methods at runtime based on data not available at compile time, including classes and methods that don't get used by the normal (non-reflection) code.
That means that by default you can't do tree shaking, which means you would end up with enormous exes, which will probably annoy the type of people who want a single exe.
The "bit more advanced" part is telling the compiler which types you might want to reference via reflection, so that it can shake the rest out of the tree.
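In publish terms that's the trimming switch; anything reached only via reflection has to be rooted explicitly (e.g. TrimmerRootAssembly items or DynamicDependency attributes in the project/code), or the trimmer will remove it:

```
# trimmed, self-contained, single-file publish; reflection-only dependencies
# must be declared to the trimmer or they get shaken out
dotnet publish -c Release -r win-x64 --self-contained true \
  -p:PublishSingleFile=true -p:PublishTrimmed=true
```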
Do you know of any production GUI applications that are literally a single-file EXE and aren't like, small utilities? There's just no reason to try to pack everything into a single file, Go-style. The self-contained publish (which is literally a single flag) is a quite reasonable result - a directory of files that are completely sufficient to run your app on any Windows computer, without any dependencies to install.
> Do you know of any production GUI applications that are literally a single-file EXE and aren't like, small utilities?
The old Delphi-based version of Skype fell into that category. Thinking of that example, I can understand why some people think modern software is decadent.
It's amazing that most people on this thread seem to take this nonsense as being completely normal and acceptable now. It really shows how much Windows dev has devolved over the last decade.
Why does every linux app I download come as a self-extracting installer, run in a container, or download dependencies? Those aren't single-file executables.
> I'm not sure why people are making it out as if this is some very complicated feature.
OP asked how.
The answer was, quote "If you want a single-file exe (as in completely statically compiled without self-extraction) that's a bit more advanced and requires a few properties in the csproj to configure how to handle native assemblies that expect to exist on disk. If you want it to have no runtime dependencies, add `--self-contained`"
Somehow creating an exe is "more advanced", and requires changing of unspecified parameters in the project file. wat.
.NET Core is cross platform, it was created with ASP.Net as the driver and web development is not about exe files. "dotnet run" runs your code, on any platform, that's one of the default intended ways to run code. If you want a platform-specific executable you've done more work and made the code less general. If you also want to package the entire .Net framework into one binary on any platform, why is it unbelievably impossible to understand that this is more effort and desired by fewer people, so isn't as easy to do?
It is trivial if you can embed most of the OS inside of your executable, if there can be only 1 version of the OS, if you do not use any libraries you cannot statically link and so on.
There was a whole fiasco where the dotnet/C# team were forced to remove features from dotnet in order to sell more copies of Visual Studio. Later, Microsoft lied through their teeth and said it was some kind of sprint-planning scoping error, even though the development work was already done.
It is quite easy, the ASP.NET folks on the C# team don't care about the remaining use cases, and they are the ones behind the initial .NET Core reboot.
Except, everyone else cares about everything else .NET Framework has been used for the last 20 years.
This was quite telling on the .NET 6 release with its minimal APIs to compete against Python and JavaScript "hello world" apps. As soon as one scales beyond that, it is back to the old MVC model.
They're two almost-identical pieces of software that parse almost-identical file formats. They may or may not share a codebase (five minutes on github hasn't clarified this), and in most cases you can substitute one for the other. Except this one. And the developers (of whom there are not many for WinUI!) forgot this use case because they're focused on building inside VS.
This is the best response I've seen to the question: "What's the easiest way to get started developing a native Windows app". Better than anything Microsoft has put out.
2. Terminal, pick a folder, `dotnet new wpf`
3. `dotnet run` and you have an (admittedly empty) hello world app
It's telling me it took 1.5s to build the app.
Want to publish the app?
`dotnet publish -o ./publish -r win-x64` which takes a bit longer as it has to download some runtimes (60 seconds max, then cached). Zip up the resulting folder and ta-da you have an application. Just unzip and run the .exe.
If you want a single-file exe (as in completely statically compiled without self-extraction) that's a bit more advanced and requires a few properties in the csproj to configure how to handle native assemblies that expect to exist on disk. If you want it to have no runtime dependencies, add `--self-contained`