catenate's comments

I've been using a little literate-programming shell script for years, for various personal and professional projects. I find it makes a huge difference in my ability to hand off a code file to another developer, because I tend to document justifications and explanations for why the code is the way it is, where it came from, what else was tried, alternatives and variants, and how to understand aspects of it that are not obvious from the code.

My script doesn't try to shuffle code around, which has the advantage that if you are familiar with the general structure of the code, the documentation follows the same structure. (This may not be the best order in which to explain the theory of the program, as several introductions to literate programming note.)

Instead of code blocks, my little shell script has a per-line approach, where each code line is preceded by the name of the file to which it is extracted. This approach allows me to name a variant immediately after the filename, so that I can code alternative lines, and decide at the time of extraction which sets of lines to use. This is also useful for extracting multiple very similar files from a single markdown source. This use of variants has been very effective in supporting alternative implementations, since I can quickly switch between them by the list of variants I give the tool to extract.

https://github.com/catenate/sharedo/blob/main/lit.md
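
For illustration, a source file might look something like this (a hypothetical sketch of the format, including the variant separator; see lit.md for the real syntax):

  Print a greeting, with a fancier variant selectable at extraction time.

  hello.sh echo hello, world
  hello.sh:fancy echo 'hello, brave new world!'

Extracting hello.sh normally would take the first line; asking for the fancy variant would take the second.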


Since 1 June 2012, I've been taking notes in Unicode text files, which contain (occasional or adjacent) lines starting with 'nb ' and then a list of tags. I wrote a simple tool ("nb") in Inferno's shell (thanks to Robert J. Ennis for the port to Plan 9's rc) to (1) search for given keywords in per-directory index files pointed to by the global index, (2) index all of the nb lines in files in the current directory, and (3) if necessary, append to a global index file a reference to the index file in the current directory.

https://github.com/catenate/notabene
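
For example (tag names and query are hypothetical; the real formats are in the repo), a note might carry a line like

  nb inferno acme guide-files

and later, from anywhere, something like

  nb inferno guide-files

greps the per-directory index files listed in the global index, and prints references to the notes whose nb lines contain both tags.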

I've found that I'm comfortable with the eventual consistency this offers, in exchange for fast lookups when I want something (as opposed to indexing first, and/or indexing globally, and so waiting for indexing to get a result). This distributed-file approach also allows me to add tags to a variety of files: local files, or networked file-system files, or sshfs-mounted files, or Dropboxed files, or files under version control, or files with varying text formats; and find tags across all of them and across all the time I've been indexing.

It runs in linear time with respect to the number of tags I've entered, plus the time to read and process the global index, so obviously there are many ways I could improve the time performance (as an easy example, I could permute the index to list all the tags in alphabetical order, and next to each tag list the files that contain that tag).
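
Concretely, that permuted index would just be sorted lines like (layout hypothetical):

  acme /n/notes/editors/index
  build-tools /n/notes/credo/index /n/notes/work/index

so looking up a tag becomes one binary search or grep per tag, instead of a scan over every nb line referenced by the global index.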

I also wrote other tools, since the layout is so simple: for example, "nbdoc", to catenate the actual contents of the references returned by the primary tool (nb); and "so" (second-order), to return all the tags which appear in any nb line with the given tag(s).
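
So, for example, if the notes contained these lines (hypothetical):

  nb credo build inferno
  nb credo redo design

then "so credo" would return build, inferno, redo, and design: the tags that ever co-occur with credo, which makes a handy next set of search terms when one tag alone is too broad.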

I've also found that it's not easy for me to remember what tags I might have used in the past, or how I was thinking about something, so I try to use the conjunction of several tags to narrow down search results, rather than try to remember one specific tag (this seems to correspond to the observation that it can be difficult to remember exactly where in a hierarchy you put something).

The modular approach, of per-directory indexes referenced in a global file, also makes it easy for me to combine work-specific, public, and private notes in the same global index file at work, but have only the public and private notes at home.


I made my own Secret Hitler deck, and have been playing it for a few months with both adult and teen groups for whom I usually run Werewolf games. SH has been really well received, keeps everyone more involved with the game (both because of fewer deaths, and the basic mechanics), and is well on its way already to supplanting Werewolf as the social game of choice.


I used Emacs from 1993 to 2004, then switched to Acme, which I've now used for 11 years. I don't miss trying to memorize all the Emacs key combinations. I like that Acme presents a clean, simple, and direct Unicode interface to what I work with as a build engineer: mostly editing shell scripts and running shell commands. It takes a while to get used to mouse-button chording, but I don't even think about it now.

I constantly use guide files, in many directories, to store and modify commonly used commands to highlight and run, so I make far fewer typos now, and don't forget which commands to run or how to run them. I can also switch contexts a lot faster, both because commands are laid out in the directories where I use them, and because the Dump and Load commands store and retrieve sets of files in the tiled editor subwindows.

When I had to work on Windows, I enjoyed having a pared-down unixy userland to write scripts in, which I could also use in my Linux Inferno instance (mostly shared from one instance to the other through a github repo, for backup and version control). The biggest drawback of Inferno to me is that so few other people run it that I have to compile it myself on any new platform (there are not really rpms/debs/etc available to just install). Your experience with Plan 9 Acme might be better; I just prefer also having the Inferno OS improvements, such as bind, /env, sh, etc.
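
To give a sense of guide files: one in a build directory might hold nothing more exotic than (contents hypothetical):

  mk test
  grep -n TODO *.sh
  diff hello.sh hello.sh.orig

Each line sits there ready to highlight and execute with the middle mouse button, instead of being retyped into a terminal.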


I told Facebook to delete my account as soon as they implemented this feature, years ago, and I found that (at least at the time) I could not block it on my Android phone. So at least since then I've not been party to giving Facebook information about other people (information they most likely have anyway), since the Facebook app is not installed on my phone.


Composing a larger program by combining functions in different languages in a new framework seems to take things in the direction of large and complicated programs. Would it not be simpler to write smaller, independent, individually named, and reusable programs in several languages, according to each language's strengths, and pipe the output of one program to the next? As an added bonus, not all the programs need to understand the entire problem domain.
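
For instance, a shell pipeline like (filenames hypothetical)

  grep 'ERROR' app.log | sort | uniq -c | sort -rn

composes four single-purpose programs, and only the grep pattern knows anything about the log's contents; sort and uniq are reused unchanged from every other pipeline.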


This line of thinking ignores simple domain-specific languages like SQL, which nine times out of ten are held in strings in the host language (and thus undergo no static checking, because the contents of strings are basically ignored by the compiler). There are other examples too: HTML files contain nested JavaScript, CSS, etc. PHP files host HTML. C++ effectively hosts C and assembly.
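
For instance (a sketch, assuming sqlite3 and a users table with a name column):

  sqlite3 app.db 'SELECT nmae FROM users'

The misspelled column name passes through the shell (and through any host-language compiler holding the same string) untouched, and only fails at run time.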

Haskell hosts a few dozen languages called "extensions", specified in a {-# LANGUAGE #-} pragma at the top of the file. This one is of particular interest because the language appears to have some kind of extensible syntax which allows these extensions to exist; but when you look under the surface, they're all combined into the same grammar and they all interact with each other, such that extension developers basically need an entire understanding of all of them to know where conflicts may lie.

Using a framework like LanguageBoxes instead to host these kinds of extensions would allow individual developers to put their own, independent extensions into the language, without having to hack on the compiler and rebuild it.

Also, writing several independent programs and using IPC to communicate between them is ideal in theory, but in practice is often unsuitable, because Unix processes are heavyweight: they take time to initialize, and use far more memory than necessary for what amounts to running a small piece of code and then discarding it. Perhaps if we had a more lightweight process model, à la Erlang, this kind of reusability would be practical and not just a good philosophy to follow.
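
To put rough numbers on the startup cost (a quick check in a POSIX shell; absolute times vary by machine):

  time sh -c 'i=0; while [ $i -lt 1000 ]; do /bin/true; i=$((i+1)); done'
  time sh -c 'i=0; while [ $i -lt 1000 ]; do :; i=$((i+1)); done'

The first loop pays a fork and exec per iteration; the difference between the two runs is roughly the per-process overhead that an Erlang-style lightweight process avoids.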


Was this a permanent or temporary effect on Algernon the mouse? If we continue the amino-acid mutation along the line of the difference between the mouse and human FOXP2, can we expect great things from Charlie?


I use Inferno to develop software, as a virtual OS on top of Linux or Windows. Its community doesn't see any value in writing software just to make it easy for newbies to use. It is still actively maintained, with new changes from Plan 9 development. There are even some new software tools developed in it (eg, I wrote a build tool). Maybe this qualifies?


The redo-inspired build tool I wrote abstracts the tasks of composing a build system, by replacing the idea of writing a build description file with command-line primitives which customize production rules from a library. So cleanly compiling a C file into an executable looks something like this:

Find and delete the standard list of files which credo generates, and the derived objects which are targets of *.do scripts:

> cre/rm std

Customize a library template shell script to become the file hello.do, which defines what to do to make hello from hello.c:

> cre/libdo (c cc c '') hello

Run the current build graph to create hello:

> cre/do hello
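
For comparison, in plain redo the hello.do script would be a couple of lines like the following; I'd expect the credo library template that cre/libdo customizes to encode the same declare-dependency-then-compile shape, though in Inferno's shell and with credo's own commands:

  redo-ifchange hello.c
  cc -o $3 hello.c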

Obviously this particular translation is already baked into make, so isn't anything new, but the approach of pulling templated transitions from a library by name scales well to very custom transitions created by one person or team and consumed at build-construction-time by another.

Also see other test cases at https://github.com/catenate/credo/tree/master/test/1/credo and a library of transitions at https://github.com/catenate/credo/tree/master/lib/do/sh-infe...

I think this approach reduces the complexity of the build system by separating the definition of the file translations from the construction of a custom build system. These primitives abstract constructing the dependency graph and production rules, so I think it's also simpler to use. Driving the build system construction from the shell also enables all the variability in that build system that you want without generating build-description files, which I think is new, and also simpler to use than current build-tool approaches. Whether all-DSL (eg make), document-driven (eg ant), or embedded DSL (eg scons), build tools usually force you to write or generate complicated build description files which do not scale well.

Credo is also inspired by redo, but runs in Inferno, which is even more infrequently used than Go (and developed by some of the same people). I used Inferno because I work in it daily, and wanted to take advantage of some features of the OS that Linux and bash don't have. Just today I ran into a potential user who was turned off by the Inferno requirement, so I'll probably have to port it to Linux/bash, and lose some of those features (eg, /env), to validate its usability in a context other than my own.

EDIT: Replaced the old way, which called a script to find and delete standard derived objects, with the newer command.


Inspired somewhat by redo, I wrote "credo" as a set of small command-line build tools, so build description files are in the shell language rather than a standalone DSL.

https://github.com/catenate/credo

(I don't like how makefiles have so many features that reimplement what you can do in the shell. I also don't care for big languages with build-tool DSLs, though you could say credo is a build-tool DSL for the shell, like git is a version-control DSL for the shell: only language directives, no constructs.)

I wrote it in the Inferno shell to take advantage of some nice OS features and its cleaner shell language. One of these days I should port it to bash, so other people might use it.

