dosourcenotcode's comments

And not just options, but base command names too. I wrote a tool to partially mitigate this in some cases: https://github.com/makesourcenotcode/name-safe-in-bash
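
For anyone wondering what unsafe base command names look like in practice, here's a minimal illustration of the hazard (not necessarily how the tool itself works):

    # A function can silently shadow a real command for the rest of the session:
    ls() { echo "not the real ls"; }
    ls              # runs the shadowing function
    command ls      # bypasses functions and aliases, runs the actual binary
    type -a ls      # lists everything the name "ls" can resolve to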


You have a nice ternary counter going in the version numbers :)


Agree that long options should be used. But there is one caveat to consider: portability.

Sadly, to this day not all BSDs support GNU-style long options, and the ones that now do only gained them fairly recently. So if you want portability you have to use short options as you weep with a bottle of vodka in hand.
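
One hedge, if you can't bring yourself to give up long options entirely, is to probe for the GNU variant at runtime and fall back to the portable short form. A sketch using sed (the exact fallback flags differ per tool, so treat this as illustrative):

    # GNU sed accepts --version; BSD sed errors out on it
    if sed --version >/dev/null 2>&1; then
        sed --in-place 's/foo/bar/' file.txt     # GNU long option
    else
        sed -i '' 's/foo/bar/' file.txt          # BSD: -i takes a suffix argument
    fi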


Not trying to spam this thread with praise of Nix, because it does have its own problems, but it certainly solves the portability problem.

Four years into using it at work for dev environments across Mac (x86 & ARM) and various Linuxes, and I can't imagine going back. I also always write dev environment definitions for my open source projects, so even if people aren't using Nix, there's at least a record of what tools they'll need to install to run scripts, tests, etc.


Does nix work well on BSD-derived Unices? In particular, the most widespread of them, macOS?


Yes, works great on Mac. About half our engineers use Macs, the other half Linux. We have one Nix configuration for the dev environment, which works for everyone.


This surprises me, because the first case I can remember where short versus long options impacted portability across GNU and BSD was _fixed_ by using long options. Maybe six years ago I had an issue porting a script someone else had written for use in CI; it happened to decode some base64 data, and it failed when I tried to run it on a different platform. I forget which platform it was originally written for and which one I was moving it to, but the issue boiled down to the macOS version of base64 using the BSD short option for decode and Linux using the GNU one, each with a different capitalization: one used `-d` and the other `-D` (though I honestly can't remember which used which). My solution was to use the long option `--decode`, which was the same on both, and since then I've always used the long option out of habit. That probably explains why I can't remember which short option Linux uses, despite it being the platform I've used far more over the years.
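
The fix, concretely:

    # --decode is spelled the same on GNU coreutils and macOS base64:
    printf 'aGVsbG8gd29ybGQ=' | base64 --decode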


I think the right way to think about this (if your goal is to avoid surprises at least) is that options (short or long) are just strings. There's no guarantee that there's a long variant of an option. There's not even a requirement that options start with a dash. A sufficiently brain-damaged developer could start them with a slash or something.

If you're going for portability the best bet is to just read the manual for each of the separate versions and do whatever works.
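
dd is the classic living example: its "options" are operand=value pairs with no dashes at all.

    # No dashes anywhere; dd parses each argument as operand=value
    dd if=/dev/zero of=out.bin bs=512 count=1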


To this day, I write tar options with no dash, simply because I can. `tar cvzf foo.tar.gz ./foo`

I would never design a new program to accept options this way, but I do find it a delightful historical oddity.
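
For reference, the old-style bundled letters expand to the same thing as the dashed forms, at least on GNU tar:

    tar cvzf foo.tar.gz ./foo                              # old-style, no dash
    tar -cvzf foo.tar.gz ./foo                             # short options
    tar --create --verbose --gzip --file foo.tar.gz ./foo  # long options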


I've noticed that it seems to be a pattern that's used for other compression/decompression software as well. Sometimes mods I use for games will be uploaded as rars or 7zips (I guess because this stuff gets developed on and for Windows, and tarballs aren't really something people use much there), and the CLI invocations I use to extract them always look off to me, especially the 7zip one: `unrar x` and `7z x`.


That sounds reasonable to me. If anything, I might go further and say that reading the manuals wouldn't be enough to fully convince me without also running the script on each platform. It's not that I don't trust the manuals to be right; it's that I trust myself less to write bug-free shell than code in probably any other language I've ever used, and I wouldn't feel confident without verifying that I actually did what the manual said correctly.


Definitely agree with the article that engineers should be more aware of scenarios where those interacting with the systems they build have slow internet.

Another thing I think people should think about is scenarios with intermittent connectivity where there is literally no internet for periods ranging from minutes to days.

Sadly in both these regards I believe we're utterly screwed.

Even the Offline First and Local First movements, which you'd think would handle these issues in at least a semi-intelligent manner, don't actually practice what they preach.

Look at Automerge, or frankly the vast majority of the other projects that came out of those movements. Logically you'd expect them to have offline documentation that lets people study them in a Local First fashion. Sadly that's not the case. The hypocrisy is truly a marvel to behold. You'd think that if they can get hard stuff like CRDTs right, they'd get simple stuff right too, like actually providing offline / local first docs in a trivial-to-obtain way. Again, sadly not.

The following two links are yet another example of a similar kind of hypocrisy: https://twitter.com/mitchellh/status/1781840288300097896 https://github.com/hashicorp/vagrant/issues/1052#issuecommen...

Again, at this point the jokes are frankly writing themselves. Like, bro, make it possible for people to follow your advice.

Also, if you directly state or indirectly insinuate that your tool is any of Local First, Open Source, or Free As In Freedom, you'd better have offline docs.

If you don't have offline docs, your users and collaborators don't have Freedom 1. And if you can't exercise Freedom 1, you are severely hampered in your ability to exercise Freedoms 0, 2, or 3 for any nontrivial FOSS system.

The problem has gotten so bad that I started the Freedom Respecting Technology movement, which I'm gonna plug here: https://makesourcenotcode.github.io/freedom_respecting_techn...


A cool tool to be sure.

However, I feel this tool is a crutch for the stupid way browsers handle saved web pages, and it shouldn't be necessary in a sane world.

Instead of the bullshit browsers do where they save a page as a "blah.html" file + "blah_files" folder, they should wrap both in a single folder that can later be moved/copied as one unit while still letting its subcomponents be easily accessed / picked apart as desired.


"save as [single] html" or whatever hasn't worked reliably in over a decade. I wrote a snapshotter that i could post in a slack alternative "!screenshot <URL>" and it would respond (eventually) with an inline jpeg and a .png link of that URL. As i mentioned upthread, this worked for a couple of years (2017-2020 or so) and then it became unreliable on some sites as well. as an example, old.reddit.com hellthread pages would only render blank white after the first couple dozen comments.

I haven't had the heart to try it with SingleFile, but now that there are at least 3 tools that claim to do this correctly, I might try again: this tool, SingleFile (which I already use but haven't tested on reddit yet), and ArchiveBox. 4 tools, if you count the WARC stuff from archive.org.


Good search and offline docs are not mutually exclusive. Grab the HTML version of Python's docs, for example: you can search them totally offline thanks to a bit of JavaScript they include.
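
If anyone wants to try it, something along these lines should work (the archive name tracks the exact Python version, so adjust accordingly):

    curl -O https://docs.python.org/3/archives/python-3.12.0-docs-html.tar.bz2
    tar xjf python-3.12.0-docs-html.tar.bz2
    # then open python-3.12.0-docs-html/index.html in a browser, fully offline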


As glad as I am that things like DevDocs and Zeal exist, I feel they are ultimately a crutch, and indicative of a much larger problem in Open Source that I'm trying to address.

Now as a FOSS maintainer I don't owe anyone any particular set of features or bug fixes. BUT I ABSOLUTELY DO OWE THEM ACTUAL OPENNESS AND THE ABILITY TO STUDY THE SYSTEM PROPERLY.

Many FOSS projects frankly kneecap Freedom 1 with a sledgehammer for anyone who isn't a well off person with reliable Internet access. And I've been up to here with it for a very long time now.

For all my FOSS projects big or small my pledge is to give users complete and trivial access to the full Open Knowledge Set associated with them.

Not just the main program sources and executables, but built and source forms of any official documentation that exists.

Withholding any official documentation that exists from trivial and easy offline access in a useful form is fundamentally no better than withholding any part of the source code. Period. End of story.

My pledge for all my FOSS projects is as follows:

At the home page, within 30 seconds of having read the Elevator Pitch and decided they want to study the system properly, people will be able to trivially enumerate and initiate downloads of all educational information related to it, whether that's source code or built forms of the documentation, usable for study straight away.

Why the Open Source Definition and the Free Software Definition don't mandate something as commonsense as this, I don't know. Open Source and Open Knowledge should be for everyone, not just well-off people with reliable Internet access.

Anyway that's what caused me to start the Freedom Respecting Technology movement. Thus if anything I said here resonates with anyone they should read https://makesourcenotcode.github.io/freedom_respecting_techn... to learn more.


That's a good idea, but I don't see why it deserves a manifesto, or the grandiose claims about being the "next generation" of open source.

Just ship your software with documentation. This was a good practice even before open source or the internet took off. Old school closed source software used to ship with physical manuals, and good quality software often had good documentation as well. Some OSS CLI tools do have extensive manpages, yet users often don't read them in their entirety. So it's not just a matter of shipping good documentation, it's also about making it discoverable and easy to use. This is where projects like DevDocs step in.


I absolutely stand by my claim that FRT is the Next Generation of FOSS for several reasons.

To begin with, even now in the 2020s, billions of people alive today have never once been on the Internet. Source: https://www.un.org/en/delegate/itu-29-billion-people-still-o...

Also, billions more may have some kind of Internet access, but it's flaky. Sometimes very, very flaky. They are systematically excluded from large swathes of the FOSS ecosystem.

So make no mistake, the scale of the problem is VAST. We're talking billions of people here that can be helped by FRT. Again, billions, with a B.

If all existing FOSS were transformed into FRTs overnight the world would be unrecognizably better by several orders of magnitude.

And yes, we need a new manifesto/definition. FOSS standards have completely dropped the ball on this issue, among several others. Do a ctrl+F for the word "offline" in either the Free Software Definition or the Open Source Definition: you won't find it.

Many FOSS implementations also drop the ball here. Happily, some do the right thing, sometimes deliberately, which is beautiful to see. Often, though, it turns out to be accidental, and one redesign of the site later I can't get docs for the latest version of the tool.

Oftentimes it's the small (sometimes subtle) details that make the difference between freedom and lack thereof and the FRTD exists to make sure they are covered.

Even seemingly simple things often aren't. Consider pointers: they're just a thing that stores a memory address, no big deal, right? Easy peasy. Yet using them safely is the subject of at least several chapters in a book, and even calls for research into and implementations of safer approaches like those used by Rust.

And yes one of the details the FRTD addresses is the discoverability issue you mention with the man pages.

Say I'm a newbie who just learned there's a thing called the command line, and I open my terminal. I see a Bash prompt, but I don't know what to do with it, or even that it's a Bash prompt. I don't know about the man command. I don't know about apropos. I don't know about GNU info. I don't know to try looking for info manuals if I can't find man pages. I just vaguely know, from the movies or something, that I have to type stuff, press enter, and then stuff happens.

I don't know anything yet. The fact that the man pages are on my system and available offline does bupkis for me at this point.

On Linux there's a man page called "intro" (even though there's room for improving it), and after someone reads it they actually have a fighting chance of using the command line and knowing where to learn more. On OpenBSD there's a man page called "help" that does a similar job and starts the whole conceptual bootstrap chain. Yet nothing tells the newbie to start their studies by running "man intro".

On Linux, for example, a one-sentence message saying to run "man intro" to begin your studies of the command line would go a long way.
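
Something as small as this in /etc/motd or a default shell profile would do it (hypothetical, obviously; no distro ships this today as far as I know):

    # /etc/motd or /etc/profile.d/welcome.sh
    echo 'New to the command line? Run "man intro" for a guided start.'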

The difference between information and knowledge is often a few small bits of commonsensically placed metadata forming a conceptual bootstrap chain, plus one pointer to the chain's start. It's not labor intensive on the part of implementers, yet it transforms the system from zero to superhero.

Or perhaps, since William Shotts wrote an excellent book called The Linux Command Line, and it's licensed under Creative Commons such that it can be reproduced and included in Linux distributions, the pointer could instead tell people to read that, wherever it's stored on the system, since it's far superior to "man intro".

Again a one or two sentence message can make all the difference.


Those are noble goals, but I think your project ignores a few important things:

- The internet has become the primary distribution channel of software itself, not just documentation. How would a user be in the position to access software via the internet, but not its documentation? They can't purchase software offline in a brick and mortar store anymore, and physical media is pretty much dead. They would need to keep the software updated on a regular basis, and downloading a few kilobytes of documentation pales in comparison to downloading hundreds of megabytes of software. So the internet really is a requirement for most software, even for those that can function entirely offline, and most developers make this assumption.

- What fraction of those 2.9B people who are not yet online would a) use traditional computers instead of tablets and smartphones, b) be interested in OSS, c) actually have a need for and the patience to read documentation? I reckon that this is a very small percentage, constituting orders of magnitude fewer people than the billions you claim. Instead, most people would be better served by using intuitive devices and software that doesn't require documentation to begin with. Smartphones and smartphone apps have made computing more accessible to more users than personal computers, desktop operating systems and mountains of documentation ever did. The next generation of computing devices will be even more intuitive, and written documentation wouldn't even make sense to new users.

- The quality of the documentation is more important than how it's accessed. It doesn't matter that I can read documentation offline if it's incomplete, incorrect or confusing. There are no manifestos that will make developers write good documentation. This is either something they care about and put effort into, or they don't.

- The advent of LLMs is making traditional documentation obsolete. Why would any user prefer going through a bunch of documentation of varying quality to find the information they need, when an LLM could give them the right answer tailored to their query, much more quickly and in a consistent language? LLMs make knowledge more discoverable than traditional documentation does. Even projects like DevDocs will not remain useful for long. Proprietary LLMs like ChatGPT can already do a decent job at this, and other products can be trained on specific documentation. Accessibility is still a hurdle, but this too will improve with local, offline and open source LLMs, lower hardware requirements, etc. Soon there won't be a need to write documentation at all, as AI will be able to answer any functional question about software directly, which it can already do to an extent. Once it becomes better than humans at writing software itself, documentation as we think of it today will be even less of a necessity.

So I really don't think your initiative has, or will have, as much importance or impact as you believe. At best, offline documentation removes a minor inconvenience for a small subset of computer users _today_. And those users already have solutions like DevDocs and, increasingly, LLMs at their disposal.


I've been working on what I hope is the Next Generation of the Open Source movement.

See here to read about how Open Source fails in certain serious ways to be properly open and what I propose be done about it: https://makesourcenotcode.github.io/freedom_respecting_techn...

I'm also working on some FRT demo projects so people can viscerally feel the difference between FRTs and mere FOSS.

You can help by:

1. spreading the word if you agree with the ideas behind FRTs

2. helping me tighten the arguments in the Freedom Respecting Technology Definition

3. proposing ideas for FRT projects you'd like to see, to help me prioritize the most impactful demos


This in my opinion is what Open Source should have been.

