Not sure if games count as software but if they do - Factorio. I don't play much these days but I'm still utterly stunned how a relatively small, humble team of developers can build something so robust and performant. Granted, I've never really tried pushing the limits, but not once have I felt like the game is even breaking a sweat while processing thousands of machines, belts, and bots. It's a miracle to me.
Their devblogs are really nicely written and you can tell they are extremely passionate about getting things right. In my experience that's a rarity now.
Similarly, Mindustry is written in Java by a single developer fresh out of college and is incredibly performant for the number of moving pieces rendered at a time. Granted, I'm not in video games, so maybe it's not that hard, but I thought it was really cool.
It's also an incredibly cool, interesting game design. If you like Factorio + Tower Defense you should check it out.
Doubly impressive given that the game is written entirely in native C++ and is fully cross-platform (Mac/Windows/Linux) with a tiny binary size. The responsiveness and performance of that game even with thousands of agents onscreen has always impressed me.
I'll also point out that the UI for Factorio is among the best.
I'd prefer Factorio's UI over almost any other.
I'm imagining a port of Factorio's UI into most strategy games would be very nice, thanks both to its VERY zoomable map and its clear research/progression tree. A few improvements could be made IMHO, but it's light-years better than TF, CS, the Chris Sawyer set, etc.
There's a series of space strategy/4x/combat sims called X where the latest entry seems to simulate entire economies and individual ships in real time. Dozens of sectors with hundreds of ships and dozens of space stations all humming along at 30-60 FPS.
Like Factorio it's incredible and makes me wonder wtf I was doing chaining together bullshit service APIs in my day job.
Another elegant game is Cities: Skylines. It's amazing how they can create such a huge virtual world with millions of simulated individuals that roam around and have complex lives in your city.
Factorio really is a bit of a modern marvel, even HUGE factories don't seem to run poorly at all and I genuinely have no idea how. Other similar games made since (Satisfactory comes to mind) seem to struggle with things like keeping track of resources on belts at scale, which IMO is a perfectly understandable problem to have but still makes factorio that much more impressive.
You can definitely hit the limits of Factorio. My largest base runs at ~10fps which is how I know it's time to start a new game. Egregious use of copy/pasting and an automated robot production will get you there.
It's still leagues ahead of any other game I've played though.
I've definitely pushed the limits of performance with way too upgraded huge range artillery turrets hitting all the biter bases, but yes, very performant with large factories otherwise :)
Emacs - although there's a lot of accumulated cruft in the form of whacky APIs and elisp functions, the design of Emacs is stunningly effective. The way that minor modes and keymaps are composed to customize the interaction mode for each individual buffer is clever and beautiful, to name just one thing out of many. And, as janky as elisp is, it's one of the few extension languages that's actually good at its job, and the Emacs elisp API allows you complete freedom over virtually every aspect of the editor. Unironically, Emacs is not a text editor - it's a toolkit for creating text-oriented applications.
Forth, Lisp (Scheme, in particular - I love CL but it's the C++ of Lisps), and Lua - three languages that take a small set of elegant primitives and synthesize them to give you incredible power.
Remember the Milk is a task-tracking SaaS that is one of the few pieces of software that I actually like, which is especially impressive given that it's proprietary. Cheap, fast, effective, and with a UI design that continually impresses me with its mix of intuitiveness and ergonomics.
Emacs is quite popular and has some of the most skilled programmers I know using it. I'm really surprised that there hasn't been a neovim-equivalent project for emacs. Or maybe it's out there and I just don't know about it?
> it's a toolkit for creating text-oriented applications
Good examples IMO are Magit and Org-mode. They are basically large applications created in Emacs, each so good that it's worth learning Emacs just to use them (I'd say AUCTeX is also in this category).
Writing Scheme (Guile Scheme on Guix) has been the only time I've actually felt a "eureka moment" and feeling like I understood fully what my code was doing.
Turbo Vision/Turbo Pascal for MS-DOS. Borland put together one of the best instances of an object oriented library that just worked. The follow up with Delphi for Windows was the most productive environment I have ever experienced, until I was priced out of it in their pivot to "Enterprise" customers.
Nothing since is anywhere near as productive. Lazarus is OK, but the documentation is horrible (almost non-existent; only doc-strings in many cases), which makes the system far less useful.
Related: I did a lot of WinForms programming in C# when I was a tools programmer at EA and I don't know if I've ever been as productive as during that time. The whole API was really well thought out, Visual Studio was fast, C# was a beautiful language.
I cut my teeth on Borland Pascal and later launched my career with Borland C/C++ tools. I love JetBrains these days but not as much as I remember loving Borland!
There's Lazarus project (https://www.lazarus-ide.org/) - which is a Delphi compatible IDE. I've used it once to build a simple UI app, and it was a real nostalgic look back in time. Not to mention that it was extremely simple to build the app.
I just loaded 4.6G of json into an in-memory sqlite database in about 20 lines of Python. It isn't just for data storage, it is also great for quick adhoc analysis.
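Not the parent's exact script, but roughly the shape such a loader takes (a sketch, assuming newline-delimited JSON records with `id` and `name` fields; the table layout is made up):

```python
import json
import sqlite3

def load_ndjson(path):
    """Stream newline-delimited JSON records into an in-memory SQLite DB."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE records (id INTEGER, name TEXT, raw TEXT)")
    with open(path) as f:
        # executemany with a generator streams rows without holding
        # the whole file in memory at once
        db.executemany(
            "INSERT INTO records VALUES (?, ?, ?)",
            ((r.get("id"), r.get("name"), json.dumps(r))
             for r in map(json.loads, f)),
        )
    db.commit()
    return db

# Ad-hoc analysis is then just SQL:
#   db = load_ndjson("dump.ndjson")
#   db.execute("SELECT name, COUNT(*) FROM records GROUP BY name").fetchall()
```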
It honestly eats most people's "big data" for breakfast. Combined with Datasette for turning your ad-hoc queries into an API of sorts, it's like a superpower.
As someone who very much enjoys tinkering with Linux and such, but also likes trying to onboard other people who aren't as techy, Syncthing is such a killer example. It's almost depressing because I want other free and open source stuff to be this good, though I know the economics (and proprietary interference, perhaps?) make that tough.
I use syncthing and appreciate that it exists. It's way too easy to accidentally all your data, though!
I've been dragging my feet migrating a hard drive from my old desktop to my new one for 1.5 years. This past weekend I finally got motivated to power the old one up and wait for syncthing to give positive indication that it's in sync with my server. The reason that was even a concern of mine is that the last time I used that desktop, I spent a whole weekend cleaning up about 200 GB of renamed and duplicated "sync conflict" files that syncthing created and then synced to my server when I previously migrated hard drives. I wasn't sure if all the fixes had made it to the server yet. That required writing my own tooling to positively confirm every duplicate was bitwise identical before deleting one or the other.
The official documentation suggests I remove syncthing's metadata from the drive and then add it again on the new computer, and let it re-sync. It's a good way to check for bit-rot I guess. At least the documentation these days suggests marking one instance as read-only.
I've managed to delete a folder with GBs of important data because Syncthing has non-obvious ways of handling data. Thankfully, I have tons of backups of everything, so it wasn't a big deal, but since then I've been extremely paranoid when using Syncthing to make sure it doesn't happen again.
I do all my work with Syncthing in send-only folders, one per machine. Which means that if I make changes across machines then I always get duplicates, but the redundancy feels natural - it's only concerning if the data size becomes absolutely huge.
I'm really surprised by that. I feel like Syncthing would really benefit from a simplified user interface for its core use case, which I imagine is "sync one folder across multiple machines". It's really nice that they have all these additional features and detailed information on the default dashboard, but it can be really confusing if you cannot form a mental model of what the software does.
I feel it's a great example of software that doesn't get in your way.
I would've loved to write about how awesome it is, but I came across a wonderful essay on the same topic titled _"Computers as I used to love them"_ [0]. Highly recommend checking it out.
The last time I used syncthing to sync files from my android phones to my Linux server, it always somehow got stuck and never recovered. The phone app would stop running in the background and I would forget to rerun it and if it was started a long time later it would get stuck and would not recover automatically. Eventually, just stopped using it. Now I use Google photos and it works great to backup my family's phones' photos.
Redis. The interface is quickly obvious using telnet (which makes all clients pretty obvious). The documentation is both succinct and complete. All operations are listed with their big-O notation.
Redis's interface and docs are a joy to work with. Everything just feels so straightforward and uncomplicated. The config file stuff gets a bit weirder, but it's still pretty easy.
I'm going to say vim, and I expect to get yelled at by folk who don't use vim, but here's why I call it elegant software:
Vim's entire command model is based on the simple composition of motion and action.
Until you understand this you will never 'get' vim. You may be able to use it, even efficiently, but you won't see how these two simple concepts combine into a force multiplier.
A motion indicates a range of characters or lines; there are motion keys that will take you to EOL, to EOF, to the next matching char, to braces, to function/class blocks, to predefined marks.
An action is something that applies to that range; it can be anything from auto-formatting to changing case to calling out to another process, anything you can imagine.
When you understand this, suddenly gg=G, ct( and y$ become: reformat the document, change up to the next paren, and copy to end of line, respectively. Does that sound horribly esoteric? That's only because you don't yet understand vim motion composition.
I'm happy to argue that anyone who fully understands this concept cannot help but agree that vim is, at its heart, seriously elegantly designed software.
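To make the grammar concrete, a few more compositions built from standard bindings (action + motion, read left to right):

```
dw    delete to the start of the next word
ci"   change inside the surrounding quotes
y}    yank (copy) to the end of the paragraph
>ap   indent the paragraph under the cursor
dt,   delete up to (but not including) the next comma
```

Every action you learn multiplies with every motion you already know, which is why the vocabulary pays off so quickly.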
I just recently had an opportunity to flex my "wizard muscles" in a case where we had to feed a test consisting of about a hundred similar messages with minimal differences (numbers) to a system in order to reproduce an issue.
"Can't we try to reproduce this?"
"Maybe, but we'd need a lot of messages in order to reproduce this."
"And...?"
"That's tedious."
"Watch me" *grinning*
A couple of edits and a `100@q` later, we were ready to go.
It's almost like Git + Gitea, all in a single application. Code + tickets + wiki + notes all version controlled and capable of hosting the server itself. Also, the repo is just a SQLite database, so backup is easy.
Shoutout to the OpenRCT2 project! https://openrct2.org/ Lots of great QOL fixes and improved modern hardware support.
For another fun game in this genre, check out Parkitect which has some additional logistical elements around transporting and hauling stall inventory https://themeparkitect.com/
I still play this game today. I managed to carry along an older version of RCT Deluxe (before it was patched up to defeat all the cheats) for all these years, with the cheat program "beast trainer" (or "cg_beast").
I'm aware that the concept can be difficult to grasp and that the cli commands seem weird at first.
Once it clicks however, it's an absolutely fantastic tool. I'm still often amazed by what is possible with selective resets, diffs, greps, and most impressively interactive rebases. It makes a lot of otherwise difficult tasks much easier, and more elegant.
Git is IMO one of the most important pieces of software made in the last couple of decades and should be celebrated more for it.
I disagree with this. I think the basic concept of Git is beautiful, but the implementation of some of the derivative operations are distinctly inelegant.
For example, I hate the use of "ours" and "theirs" in merges and rebases. I understand the theory of why it works the way it does and why it's different when merging versus rebasing, but I just think it's downright confusing, and I still have to double check every time to make sure I accept the right changes. I don't understand why they couldn't just use the branch name (or branch + commit hash).
One thing that confuses me about Git (but I guess isn't unique to it) is what happens when you merge and "git show" the merge commit. It seems the classes of changes are:
- things that were the same on both sides so aren't in the diff
- things that were different but auto merged, and aren't in the diff
- things that were different, were auto merged, and are in the diff
- merge conflicts
But how did it distinguish the last 3? And how does it get so confident about #2 that it doesn't show them and there isn't a single command to show them?
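Half an answer to the how-to: by default `git show` on a merge commit prints a "combined diff", which only lists paths that differ from *both* parents, i.e. roughly the conflicted and hand-edited hunks. Everything cleanly auto-merged is suppressed, but `git show -m` does show it, as pairwise diffs against each parent. A throwaway repo to see it (branch and file names here are made up):

```shell
# Two branches touch different files, then merge cleanly.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com && git config user.name demo
echo one > a.txt && git add a.txt && git commit -qm base
git checkout -qb side
echo two > b.txt && git add b.txt && git commit -qm side
git checkout -q -
echo three > c.txt && git add c.txt && git commit -qm main-side
git merge -q --no-edit side

git show --format= HEAD     # combined diff: empty, everything auto-merged
git show -m --format= HEAD  # pairwise diffs against each parent: non-empty
```

`git diff HEAD^1 HEAD` is another way to see everything the merge brought into the first parent.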
When hg was more popular I knew someone working on Google Code who told me it was better because it didn't have rebases, but I'm pretty sure rebases and linear history are safer than that.
Something that comes to mind is that Github, as part of a "papercuts" initiative to fix small things, added an arrow that points from the "theirs" branch to the "ours" branch in pull requests. That was such a huge improvement for me.
I agree that the concept behind git is powerful and elegant. I very much disagree that the implementation of git is elegant. For example: how many things does `git checkout` do? And I still don't understand what the point is of having a staging area. Or why I need to edit files in order to provide Git with instructions on what to do when it rebases. Or why adding hunks is so much more difficult than adding complete files.
Git is built upon a powerful and beautiful concept, but the Git CLI is just about the worst viable interface to that concept that you can build. There's a reason that there are so many other Git UIs, such as Magit, SourceTree, GitKraken, etc.
> I still don't understand what the point is of having a staging area.
Nor do my coworkers who keep committing whitespace changes and machine-specific configuration changes unrelated to the tasks at hand.
For a more serious answer: I find it extremely useful to be able to control exactly what goes into each commit. Sometimes I refactor more than one part of the code at the same time. If one part passes tests and another one doesn't, I can commit just the one that does. If I spot an issue in one part of the codebase and I add a TODO or FIXME in a comment, or if I add some logging, or some other inconsequential change, I can leave them in the working copy for later while still adding "atomic" changes to the repo.
But by far the most use I get out of it is as a last chance of reviewing the code I just wrote, or as a way of checking the results of an automatic merge before the actual merged commit gets written.
You would still be able to do that without a staging area by having `git commit` take roughly the same arguments as `git add`. So instead of running `git add foo bar baz` followed by `git commit`, you'd just write `git commit foo bar baz`, and it'd launch $EDITOR for the commit message.
My understanding is that other distributed version control systems, such as Mercurial or Bazaar, take a similar approach to the above.
The staging area is so you can get your upcoming commit pretty enough (by adding/removing hunks etc) ahead of time before creating it. If you were previously on SVN where there was no intermediate step between creating a commit and putting it in official server history, I think you'd appreciate all the safety you could get.
It does seem like it'd work without it, since there's commit --amend.
You might get a kick out of this. Tech Talk by Linus Torvalds at Google presenting Git: https://www.youtube.com/watch?v=4XpnKHJAok8 . He goes into his thought process of the pains with existing version control systems and how Git goes to address their shortcomings.
Not sure about that - yeah, git is a powerful tool, but I can't help thinking that lots of things could have been done in a more straightforward and less confusing way. Such as the "double responsibility" of "git checkout" which has recently been split into "git switch" (for switching branches) and "git restore" (for reverting files).
I think the core concepts behind git are extremely elegant, but the complexity of using it effectively and the very large number of operations make it substantially less elegant.
The CLI is so confusing that Stackoverflow has a bunch of questions for things that should be easy or trivial by default. Last thing I would call git is "elegant".
I knew there would be people commenting on this. But I don't think something has to be intuitive for newcomers to be elegant.
There are also posts here mentioning vim, which to most people is weird and useless at first.
Some tools take effort to understand, but once you put in the effort, they become tremendously powerful.
I would say that they are elegant in that they allow you to do something otherwise difficult in an effortless and obvious way. Both vim and git do exactly that - once you've gotten to know how they work.
It's not intuitive to newcomers nor to experienced users. Its interface does not make sense; it can only be memorized. The data structure is beautiful; the tool is ugly.
Those commands look quite arcane to my eye. In Mercurial we do:
> hg pull --force <project-to-union-merge>
> hg merge
> hg commit
-----
So in a sense you are right, that's exactly the point of contention. However, simple (and hard) things in Git are done in a complicated and non-obvious way, which require you to memorize a bunch of commands and flags which require understanding the internal implementation of git. There are much easier and intuitive ways of doing version control, and Mercurial proves it.
As a side note, I can't be bothered to look it up right now but iirc even Torvalds said that originally Git's CLI was not actually meant to be used by end users (rather, another abstraction layer on top would provide a friendlier approach to version control). But people ended up using it as it was, for various reasons.
Not a specific piece of software, but a characteristic of a limited percentage of anonymous internal systems: one of the most beautiful things to witness in the realm of design is a system undergoing catastrophic stress when the designers anticipated and planned for such events. In usually a very short period of time you see the results of extensive planning spool out; design features hidden from view and unappreciated until this moment kick in and recover/compensate in ways that feel almost magical.
For obvious reasons it is much more common to see this level of design in physical, life-critical systems like aerospace or automotive technology, but you do see it sometimes in software: well-designed services that, under heavy load, infrastructure failure, attack, or other scenarios well outside the bounds of normal expected operations, intelligently compensate, signal alerts with precise and useful information, and attempt whatever kind of recovery is possible.
This is hard to anticipate and often thankless to build in advance. It's always a stressful time when this behavior is visible, but it gives me a feeling of admiration for the perhaps long gone employees who built it.
Mercedes has a system on the S-class that uses the sensors to detect a potential impact (e.g. a rapidly approaching vehicle while the car is stationary) and uses the active suspension to "jump", raising the car a few inches just at the moment of impact, apparently slightly reducing the potential for injury.
Many cars, and especially planes and spaceships, have tons of systems like this.
I worked on a system that, in response to a primary datasource going offline, would switch its queries to a different database, substitute that data, and pass it back to be used instead until the primary store came back online. Three years after it was put in production this happened on one of the company's biggest and most important days of the year; our SREs sat there calmly working the issue and ended up waiting until late at night to deploy a fix. It would have been reasonable for this service to just rely on the primary source, but we would have been offline for hours if this little trick hadn't been put in place.
>...a different database to include substitute data and pass this back to be used instead
The thing that trips me up about this is how can you ensure that the substitute data is actually useful. Is the substitute data a copy of the primary data? I guess it all depends on the use case...
Haha, maybe not? Their car would tend to go underneath, although the footage I've seen always shows the other vehicle hitting from the side with the hood sliding slightly beneath the Mercedes... it does seem from watching that this would give the other vehicle slightly more space to dissipate the impact energy.
Now if they had only bought a Mercedes as well, they could have chosen the self-braking option to avoid the collision in the first place.
One that I've always thought is really cool since I heard about it is the base of street signs. Instead of being cemented into the ground, signs are commonly bolted to a base so that they'll break away when hit. Some even have an incline that will throw the sign up and over your car. After learning this, I've started paying attention to signs I pass on walks or while driving and it's crazy how they are everywhere and I never noticed.
Another thing to look into if you find that cool is guard rail design. Modern guard rails have some cool features designed to reduce risks associated with hitting them.
Nortel DMS-10 telephony switch springs to mind. They run in landline phone company switching offices and date back to the early 1970s. Those things are indestructible in the face of all kinds of disasters.
Yes! I'm working through the Design of Computer Programs course on Udacity, which he teaches, and it's so enlightening to hear how he thinks through the conceptual framework of a program and see the code he comes up with to implement it. It's also nice to see that his code isn't always super elegant and that he's willing to write something readable and straightforward if it works well.
This is nice. One interesting choice is the decision to compute the number of observed words, N, as the default value of an optional argument, thereby ensuring that it is only computed once, while still limiting its scope to the function in which it is needed. Perhaps this is a common pattern but it's one I haven't stumbled across before.
The short circuiting `or` chain is also pleasantly virtuosic. Sometimes a little flashiness is tolerable when it works this well!
> One interesting choice is the decision to compute the number of observed words, N, as the default value of an optional argument, thereby ensuring that it is only computed once
You're saying Python computes an expression that's part of a default value when the code defining the function is run? I guess it makes sense now that I say it, but I wouldn't have automatically assumed that. That's an interesting one to remember.
A cursory glance suggests this function appends a number to an empty list every time it's run. But actually, the default argument is evaluated once when the function is defined, so calling this function repeatedly will continue to append to that same list.
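A minimal sketch of both behaviors (names here are made up): a default expression is evaluated once, when the `def` statement runs, not on each call.

```python
COUNTS = {"the": 3, "cat": 1}

# Norvig-style trick: N is computed once, at definition time, and stays
# scoped to the function that needs it.
def probability(word, N=sum(COUNTS.values())):
    return COUNTS.get(word, 0) / N

print(probability("the"))  # 0.75

# The flip side of "evaluated once": a mutable default is shared by all calls.
def append_one(xs=[]):
    xs.append(1)
    return xs

print(append_one())  # [1]
print(append_one())  # [1, 1] -- same list object both times
```

The usual defensive idiom for the second case is `def append_one(xs=None): xs = [] if xs is None else xs`.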
Signal - The elegance here is the "Privacy-first Design". Every feature and code for Signal messenger is designed on collecting as little (or no) data as possible and it is an essential tool for folks like me who are tired of having tracking and ads nonsense in their most-used apps.
Can you create an account without a phone number? Last time I checked this wasn't possible, which is anything but "Privacy-first Design" or, as you said, "every feature and code for Signal messenger is designed on collecting as little (or no) data as possible". Here in Germany, at least, it's not possible to register a new mobile phone number without your identity being connected to it.
We recently had to do a Signal bot and for an elegant codebase it sure is lacking in terms of libraries. Not to mention the license being as aggressive as possible to prevent any clients of any kind.
Quite happy and impressed with Signal too. Really cheering for them to become the default messaging app. I still use WhatsApp heaps but would love to switch over.
Its architecture, the window manager/system, the UI, how it is built by and for the users, how the API is integrated, and how fast it is compared to other software.
The Things app (https://culturedcode.com/things/) - even though it's just a TODO app, something about its elegance drove me to buy the iPhone and Mac versions.
Some time ago I spent a good amount of time looking for a development stack that allowed me to just build stuff. I ended up trying and deciding on Laravel Jetstream with InertiaJS https://jetstream.laravel.com/2.x/stacks/inertia.html. Laravel was already easy enough to just pick up and do things; now this solves backend+frontend projects for me by allowing me to just put Vue.js components on top of my Laravel app, and Jetstream already comes with the auth setup solved.
I love Things, and for a long time used it every day. I just can't get behind their pricing model though, the desktop version in Australia is $80 which is outrageously expensive for a TODO app that can largely be replaced with the slightly inferior but free Reminders app.
Not only that, but even if I pay that price, I can't share lists with my wife? I would really love to use it, but that just seems ridiculous in this day and age.
That's where I'm at with OmniFocus. I love it so much, but it's really expensive, and I'm finding the extra features above Reminders aren't as crucial to my daily life as I'd thought.
Yep same here, Omni was my next stop after Things and I came to much the same conclusion.
Personally I'm pretty happy with Reminders, being able to shout reminders at Siri, and getting location-based reminders has actually been pretty useful for me especially these days when I don't know when specifically I'll be in the office.
The Siri integration was icing on the cake, as was "Remind me when I talk to...". OF had pretty decent integration, but I couldn't step out of the shower and ask my Homepod what I need to do today.
A shared to-do list with my wife was the real killer feature.
That’s even less attractive when you consider that you have no idea when they will switch to a new version number and ask you to pay again.
Things 3 has been out for a long time now, so personally I wouldn't take the risk and would rather wait for Things 4, so that my payment lasts some time.
I've been using Things on iOS for a long time and I should say their desktop pricing is, for me, worth the money and investment. I understand it's pricey because it is just a one-time payment. As a father, husband, and dev juggling work and family, it is quite a task to keep up with everything.
I'd say Linux Mint. It combines everything that's good about Ubuntu while removing things like snap. Also the design is so much better but I feel that's a personal choice. But everything in linux just works so well now.
I run 4 monitors at 2560 resolution on two separate AMD cards and everything runs flawlessly. I have all the software for free, and most OSS is just as good if not better for my work (except games and Photoshop, but Photopea is a good alternative for that, and it could easily be my second nomination for this thread).
I know Linux has evolved a lot and it's the effort of millions of volunteers which has made it what it is today, but for me personally Linux Mint really combines all the great things about Linux into an amazingly elegant package.
Hmm. I was kicking the tires on Cinnamon the other day, but because it's not on Wayland it won't do mixed resolution displays out of the box, so it's not really viable for me.
A pity, as I dislike snaps and the generally dumbed-down direction of recent GNOME, so it would otherwise have been a good fit for me. For now I'm (still) on Ubuntu though.
This may be technically true, but since I don't want to spend a lot of time messing with command line tools then it might as well not be.
Edit - To add a little context since that on reflection seems too dismissive...
I did try a whole bunch of things a while back, including several variations on xrandr stuff. I never did get to a place where all the following were true:
Both screens were operating at their maximum resolution
All app windows were appropriately sized (readable fonts, correctly scaled menu bars, etc.)
Neither screen was blurry due to scaling
I wouldn't swear it was impossible but with recent versions of Ubuntu under Wayland I could have all three without having to do anything at the command line.
I love the command line. I just don't find messing around with config to make the basic system stuff work properly edifying.
You're right, I did face some issues with this too when one of the monitors was different, but the solution was rather simple: just add a Modeline in a file called 10-monitors.conf, and then it worked perfectly. Once I replaced the 4th monitor with one matching the same configuration, no such hack was required.
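For anyone hitting the same thing, this is roughly the shape of that file (the output name and timings here are illustrative; generate the Modeline with `cvt 2560 1440 60` and use the output name `xrandr` reports for your monitor):

```
# /etc/X11/xorg.conf.d/10-monitors.conf
Section "Monitor"
    Identifier "DP-1"                  # output name as reported by xrandr
    # Generated with: cvt 2560 1440 60
    Modeline "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync
    Option   "PreferredMode" "2560x1440_60.00"
EndSection
```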
You're right that it didn't work out of the box, but it was a very simple fix, even though I'm relatively new to Linux myself (relatively speaking).
It's possible I am misunderstanding something, but you can do mixed monitor resolutions on Linux Mint Cinnamon. I usually have two monitors hooked up to my desktop, a 1440p resolution monitor and a 1080p monitor. The 1440p monitor is running at 144hz with the 1080p monitor running at 60.
It is a bummer that it's not on Wayland, and seemingly they have no current plans on migrating.
No longer a question of elegance, but of practical importance anyway: Will they keep up with maintaing a Firefox package? (Or Chromium if you are so inclined?) And if they do, why could you not use the same on any Ubuntu variant after you remove snap?
Hm, that is strange, they have an upgrade process for the past 3 or 4 releases (presently done through the GUI), and they've had unofficial upgrade procedures for versions before that. I've recently got my hands on a notebook I had installed 2017-ish, and got it upgraded to 20.3 with a couple of "next-next-finish"-like wizards.
They did a really nice job of building thin layers up the stack from byte buffers (bytes), to async-friendly logging (tracing), basic IO (mio), async runtime (tokio), generic request/response services (tower), HTTP (hyper), and a web framework (axum).
Each of the layers is useful independent of the layers above it, and every one has a thoughtfully designed, pragmatic interface.
I think this argument refers more to Rust not having built-in support for async from the language’s v1.0 rather than the design of the Tokio stack. That has definitely led to unfortunate incompatibilities between libraries built for the different runtimes. However, the Rust language team took a super methodical approach to async support and the way that more async-related traits are slowly being standardized (first Futures and hopefully Stream, AsyncRead / AsyncWrite at some point in the not too distant future) seems like a long-term-great way of building into the language the abstractions everyone can get behind while leaving room for experimentation. I’m sure others would have different takes but I’m a fan.
Personally I love being able to experiment on the bleeding edge while waiting for a stable implementation, even if it takes 3 years for the RFCs to reach discussion. There are crates if you need it today, which follow best practices, but for an official solution I accept that it may take time to reach a standard acceptable for standard library inclusion.
It's not in the standard library, which makes some believe it is of lesser quality (not fit for std?). Tokio works splendidly, and I don't think it's a common belief that it's hacky. That being said, language-wide, async gets a bit less focus (not in std, trait fns cannot be async, etc.), but otherwise the integration is very good.
Non-trivial graphs will make it produce hard-to-read output, and you can try fiddling with it forever to get better output. But it's still the first thing I reach for when I have to make a graph.
Agree. And after years of using it, I discovered osage, and after a few years more, gvpr. It gives and it gives. And yes, I'm forever fiddling with it to make it look the way I want it to, but in my heart I now know the layout algorithms better :)
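For what it's worth, the input dot consumes is tiny; here is a sketch that emits a DOT digraph from Python (the rankdir/nodesep attributes are exactly the sort of knobs one ends up fiddling with, and the node names are just illustrative):

```python
def make_dot(edges, rankdir="LR", nodesep=0.5):
    """Emit DOT source for a directed graph from (src, dst) pairs."""
    lines = ["digraph G {", f"  rankdir={rankdir}; nodesep={nodesep};"]
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

# Pipe this into `dot -Tpng` to render it.
print(make_dot([("parse", "layout"), ("layout", "render")]))
```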
TeXmacs (www.texmacs.org), which is a finely crafted document preparation system realizing at the same time both the structured and the WYSIWYG paradigms.
It is vastly superior to all other document preparation systems. In particular it is superior to both TeX (in all its variants) and to Word, in all respects: conceptually, in the power it affords for manipulating documents, and in the ease with which it lets you write, concentrating only on the content while still having your document in front of your eyes.
Turbo Pascal 3 on an IBM 5150. Pick any font color as long as it's green. Editor does automatic indentation and understands arrow keys. Compiler is built in and does one damned job, making EXEs. Runs from a floppy disk and you install it with COPY if you're fancy enough to have a hard drive. The future is wide open.
I was also going to suggest this, although there are a few things about the UI that I don't love - e.g., it takes 5 'tab' keystrokes to get from a note title to the note body.
On that note (no pun intended), I do wish it was possible to query tag intersections, so to speak. E.g., show notes tagged with both 'todo' & 'coding'.
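Conceptually the wished-for tag-intersection query is just a set filter; a sketch in Python (the note structure here is made up for illustration, not Standard Notes' real data model):

```python
# Hypothetical note structure; the real app's data model will differ.
notes = [
    {"title": "fix the build", "tags": {"todo", "coding"}},
    {"title": "groceries",     "tags": {"todo"}},
    {"title": "refactor auth", "tags": {"coding", "todo"}},
]

def tagged_with_all(notes, *wanted):
    """Return notes carrying every one of the given tags (tag intersection)."""
    wanted = set(wanted)
    return [n for n in notes if wanted <= n["tags"]]

for n in tagged_with_all(notes, "todo", "coding"):
    print(n["title"])
```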
I find the editor and UI fairly lacking, but I really appreciate the E2EE. I have a few more years left on my 5-year subscription. Interestingly, knowing your notes are fully encrypted sort of mentally frees you to write down exactly what you're thinking and spend less time self-censoring.
Man, I've looked at Standard Notes and want to love it and switch everything over to it. The only thing holding me back is the lack of background sync on mobile: https://github.com/standardnotes/mobile/issues/45
I guess my definition of elegant would be software that has fantastic UX and just works, and works so well it boggles my mind. I'd also extend that to include a foundational core that all other parts can be built off of. In that case, I'd go with vim. I'm not even a huge Vim guy (I use it for notes and remote stuff, but not as my primary editor), but the concepts are simple and oh so powerful. It's just building blocks on top of text editing.
Pipes in Unix as a concept are also a great abstraction. A bit dated, but still very powerful today.
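The abstraction fits in a few lines of Python: each process's stdout becomes the next one's stdin, the same plumbing as `printf 'b\na\nb\n' | sort | uniq` in a shell (this sketch assumes a Unix with coreutils on PATH):

```python
import subprocess

# Three processes chained by pipes: printf | sort | uniq
p1 = subprocess.Popen(["printf", "b\\na\\nb\\n"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["sort"], stdin=p1.stdout, stdout=subprocess.PIPE)
p3 = subprocess.Popen(["uniq"], stdin=p2.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let SIGPIPE propagate if a downstream process exits
p2.stdout.close()
out = p3.communicate()[0].decode()
print(out)  # sorted, de-duplicated lines
```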
Files in Unix as well. Some people have gripes which are fair, but the idea that a device, a file, and a socket are all accessed via the same API is fantastic. Of course there are issues, but it's generally worked really well for me.
Yes!! Learning Vim was a revolution for me. It eliminated the strain and repetitive movement of constantly turning to the mouse or arrow keys to navigate the cursor around. Just learning how to jump between INSERT and VISUAL mode and move the cursor was enough to hook me. Now I use nvim + COC for Intellisense-style completion and it's a joy. I love using this software!
FFMPEG is an excellent piece of software. I used it last weekend to build an automatic video editor / titler and it only took about 10 hours to make it work. I was going to use moviepy but the rendering time was extreme and the memory consumption was horrible. My 300 line python script with imagemagick and FFMPEG produced a 30 minute long video in under 5 minutes.
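Driving FFmpeg from Python usually amounts to building an argument list and shelling out. A sketch in that spirit (the file names and the concatenation task are hypothetical, not the commenter's actual script):

```python
import subprocess

def build_concat_cmd(clips, output):
    """Build an ffmpeg command that concatenates video-only clips back to back."""
    cmd = ["ffmpeg", "-y"]
    for clip in clips:
        cmd += ["-i", clip]
    n = len(clips)
    # e.g. "[0:v][1:v]concat=n=2:v=1:a=0[out]"
    graph = "".join(f"[{i}:v]" for i in range(n)) + f"concat=n={n}:v=1:a=0[out]"
    cmd += ["-filter_complex", graph, "-map", "[out]", output]
    return cmd

cmd = build_concat_cmd(["intro.mp4", "main.mp4"], "final.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually render
```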
was just about to mention this as well. considering it only takes 30 seconds to install and log in on a device, and then it just works without any more setup... can't really get any more elegant than that
Ecto, the database library used with Phoenix, is probably the most amazing piece of software I've used. Elegant DX, performant, and just enough of an abstraction over SQL.
pandoc comes to mind, one of those pieces of software that “just works.” I’m not a Haskell guy but pandoc makes me wonder sometimes.
Visual Basic (cue the hecklers). Yes the language is awful. But the tool was great. I wish Microsoft (or somebody) would release a new version for full-stack apps with a drag-and-drop UI with js event handlers, easy backend framework, and a one-button “deploy to cloud” button for testing. Then a “publish” button that sets up your CI pipeline for production deployments. I feel like we’ve raised the white flag in terms of what software development should look like in 2022. Writing yaml feels like banging rocks together compared to possible alternatives.
Nim. It's just so quick and easy to write high performance code. That's why I'm writing a web framework for it, soon to be released: https://github.com/jfilby/nexus
Sublime Text introduced multiple cursors to the world. A single incredibly versatile and elegant feature. That feature has empowered my ability to use other pieces of software faster, and programming languages I was unfamiliar with... Better.
I've used it to:
- batch edit columns copied from excel files.
- wget/rename/run cli commands on dozens of inputs without having to worry about how for loops are written in bash/bat/powershell by just typing commands on a hundred lines and concatenating them with &&
- extract data from various text files without writing parsers or even thinking about regexes
This is one keyboard shortcut, with one of the smoothest learning curves ever. Pure elegance.
I don't see Airtable here yet. That team managed to make relational databases user-friendly to a consumer market without compromising the core features a power user might expect. Hats off to them!
The original MapReduce implementation by Jeff Dean and friends is probably up there for me. Couple of hundred lines of code doing bunch of task distribution on a very large scale is just very impressive.
Of course the current/latest version of it has taken on a life of its own in size and complexity (but of course with performance and reliability too), but the initial version still shines through!
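The programming model from the paper fits in a few lines once you strip out the distribution and fault tolerance, which were the actual hard parts; a single-process sketch of word counting:

```python
from collections import defaultdict

def map_fn(doc):
    """User-supplied map: emit (word, 1) for each word in a document."""
    for word in doc.split():
        yield word, 1

def reduce_fn(word, counts):
    """User-supplied reduce: sum the values collected for one key."""
    return word, sum(counts)

def map_reduce(inputs, map_fn, reduce_fn):
    groups = defaultdict(list)
    for item in inputs:
        for key, value in map_fn(item):   # map phase
            groups[key].append(value)     # shuffle: group values by key
    return dict(reduce_fn(k, vs) for k, vs in groups.items())  # reduce phase

counts = map_reduce(["the cat", "the dog"], map_fn, reduce_fn)
print(counts)  # {'the': 2, 'cat': 1, 'dog': 1}
```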
After 50 years it doesn't seem like it, but C has managed to survive this long because it's a solid all-round player from the bare metal to web servers. It's simple enough that you could probably implement a fairly capable C compiler in assembler.
- Licensing of Unix allowing it to proliferate to the masses and being used for education
- because it is simple enough that a compiler can be quickly brought up for any new ISA that appears, as long as it looks enough like a 70s-80s CPU architecture for pointers to work.
Elegant? No.
- Making pointers and arrays synonymous is elegant only from the CPU's perspective.
- The pointer syntax sucks.
- Casting does weird stuff sometimes.
- Bool - how hard is it to get true and false right?
- Everything being an operator leads to the confusion between assignment and equality which is inelegant. It was cute in the 70's when you had limited disk space but sucks now.
- `void *` being used for function pointers is not elegant.
- Threads and any notion of multiple CPUs doesn't work well without a lot of libraries or help.
- An elegant language would have not cared about the underlying CPU memory model, but C had to be enhanced for 16-bit x86 segmented memory models.
- If you are using intrinsics or whatever to generate assembly opcodes (e.g. vector instructions) because your language doesn't support them, you are surpassing the limitation of your language in an inelegant way.
- An elegant language makes things like the IOCCC impossible.
> - Bool - how hard is it to get true and false right?
I think this is actually much more complicated than it seems at first thought. There are a lot of different ways to represent booleans, each with their own advantages, and then the hardware has its own ideas that might need to be considered. I'm not sure that there's any way to do bool that doesn't lead to pain somewhere.
Those damn C preprocessor macros. You have to become a compiler to understand C code. The elegant metaprogramming approach is what Zig does with comptime.
townscaper is another "game" that i would consider to be very elegant, considering all you do is click somewhere and it decides what type of building to add. the only option you have is what colour it will be
Keyboard Maestro. It’s a tool for automating common tasks on a Mac. A real programming language is certainly more elegant for writing programs. But what I find elegant about Keyboard Maestro is that it lets me add programming logic to any application on my Mac, quick and dirty.
I love Keyboard Maestro's "Click on Found Image" action. Definitely quick and dirty, but it's great for ad-hoc web page automation when I can't be bothered to knock something up in puppeteer etc.
Lots of features, sure, but elegant? I've always felt like excel was about as far as you could get from elegant, especially once you want to do anything more than data entry and simple charts.
Elegant software doesn't normally need an interest group dedicated specifically to preventing people from misusing it. http://www.eusprig.org/
I recently had to do a bunch of currency conversion for my tax accounting. I had hundreds of transactions, and I needed to set the right exchange rate for each transaction, based on the date of the transaction. This took just a couple of minutes to do in Excel. The solution - a simple Excel formula that compared two rows – was indeed, extremely elegant. Excel enables this type of elegant calculating all the time.
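The same per-transaction lookup might look like this in Python (the rates here are made up); the idea matches the row-comparing Excel formula: take the latest rate row on or before each transaction's date.

```python
import bisect

rates = [  # (date, USD per EUR) - illustrative values, sorted by date
    ("2022-01-01", 1.13),
    ("2022-02-01", 1.12),
    ("2022-03-01", 1.10),
]
rate_dates = [d for d, _ in rates]

def rate_for(date):
    """Most recent rate on or before the given ISO date string."""
    i = bisect.bisect_right(rate_dates, date) - 1
    return rates[i][1]

print(rate_for("2022-02-15"))  # 1.12: the Feb 1 rate still applies
```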
I agree. The declarative/functional nature of spreadsheet formulas is certainly elegant, even if you might not say the same about Excel as an application overall. The same could be said about other spreadsheets (e.g Google Sheets).
I'm a definite 'excel-apologist' and think Excel is brilliant and incredibly powerful when used correctly, so the below may be biased but...
> Elegant software doesn't normally need an interest group dedicated specifically to preventing people from misusing it. http://www.eusprig.org/
Plenty of the other software listed has user-error misuses (For instance C is listed, and by the same standard it could be considered responsible for more software vulnerabilities than anything else!).
Sure there are errors in spreadsheets, but as the alternative is often calculating something by hand or asking similarly-trained users to write a python script, I think both those options probably create more errors.
Looking at the 'horror stories' listed on that website:
* The first listed is about the UK government using a version of Excel that is over 10 years old, with an issue that would not have happened if they updated the software.
* The third listed is because someone ENTERED incorrect information into a procurement spreadsheet (they copied in the specification for a standard bed rather than a critical care bed).
* The fourth is user input error - they input a fund as Dollars rather than Euros (how is this Excel's fault?)
* The fifth and sixth talk about logic errors - one with hard-coding a value and another with using 'cumulative mileage totals rather than running calculations on a sample average for vehicles'.
I agree that spreadsheets can have issues, but most of these can be mitigated by setting up sheets properly and I haven't really seen a compelling replacement for a spreadsheet for the sorts of stuff it gets used for.
The real problem with spreadsheets is a lack of training - I would estimate less than 20% of users know how to turn on cell validation, less than 10% know how to write a dynamic array formula, and less than 5% know how to use PowerQuery. It's like asking a bunch of people to write python code, but only 10% of users know how to write a loop, and then we are surprised that there are issues.
Besides, if you input a fund into a fancy financial package with the wrong currency it will cause the same issues.
Excel has an unrealistic barrier to performance to meet if we point to the fact that a desktop spreadsheet package is not good at being used for DNA and amino acid analysis.
We don't expect any other off-the-shelf technology to meet such a wide variety of use-cases, and somehow I think the statement 'excel is bad at processing bioinformatics data' speaks a large amount about how pervasive and flexible it is.
Besides, in reality Excel actually can handle the data perfectly fine if the data is brought in correctly (i.e. Data -> Get Data and using PowerQuery rather than just importing a file and hoping Excel works out the types correctly).
If the argument is feature:[something else] ratio, then I might be able to consider it, FSVO something else, such as "UI complexity" or "learning cost". Partly in response to sibling posts, the PP definitely makes me think twice about why I'm willing to call PostgreSQL elegant but pause a bit harder to evaluate Excel.
Yes, I think maybe the original VisiCalc was elegant, even if far less featureful. It was a game-changer, one of the first general purpose PC applications that let users do row/column based computing without programming, and along with word processing was the software that underpinned the explosion of PC use in business.
I taught Lotus 1-2-3 for a while, so probably had well above average familiarity with it.
And then in 1987, I opened Excel 1.0 on a Mac SE for the first time. I was blown away by how elegant it felt. Later versions seemed to lose that original elegance as features were added.
Norton Commander - https://en.wikipedia.org/wiki/Norton_Commander. Not sure how elegant on the inside, but it comes from the era when software development was not so fast paced. And the fact that it inspired so many spin-offs (just to mention few that I personally used: Volkov Commander, Midnight Commander, FAR Manager, and my favourite - DOS Navigator (it had spreadsheet!)).
That's not `calc.exe`. I think Windows doesn't ship with calc.exe anymore. The fair comparison would be the last version of mIRC that was current when the last version of calc.exe was shipped. I don't believe that mIRC would use less RAM than that... Certainly not when you were actually USING it! (including reconfiguring the size of your scrollback, for example..)
Some of the best C++ code written. Extremely clear and concise. Chess-AI is a bit complicated but the source-code + comments seems to inform the programmer where all the problems are.
Native tabs, panes. And the keyboard shortcuts for handling them is super intuitive.
Some innovative keyboard functionalities. E.g., highlighting all the URLs in the current buffer, so I can open one in the browser with just keyboard shortcuts; same for file paths. One keyboard shortcut lets you open the last output in a pager, so I don't have to `previous-cmd | less` again. There is a lot of small stuff like this.
Configurable to its core.
Broadcast feature which lets you print output to multiple panes.
Plugins (called kittens!) which can extend the functionality a lot. Like ctrl+f in the terminal.
And as the sibling mentioned, well maintained. Kitty is one of the programs on my "donate-to-when-financially-stable" list.
Automated trains / subways. They are reliable enough for millions of people to trust them with their life, every day. That's an incredible achievement.
If you're ever in Paris, get a front-row seat on subway line 14 and feel the magic.
I used Notational Velocity for a few years for taking notes in school. I was really impressed by the search/create function and have tried using the same paradigm in my own projects. I couldn't find an equivalent on Linux, so I ended up using Emacs which isn't elegant in the same way.
I've been pleased by NATS (https://nats.io/). I like how it builds its functionality on layers of abstractions, from the most basic (pub/sub), to request/response on top of that, to key/value and persistent streams on top of that. The CLI is simple to use and you can learn it in an afternoon, but it's robust enough to deploy.
Geometric algebra for any R{p,q,r} dimensional space.
It has its own custom JS-to-JS transpiler, so the literal number "1e10" becomes a bivector.
The code is just around a thousand lines while it lets you do amazing things like this right in the browser:
Stardew Valley! I always marveled at how it was a one man team, and everything from the graphics to the game systems seemed to work well. I haven't seen the code base or anything, so I am not sure if this is 'elegant', but my assumption is that for one person to put out that kind of work, some things have to be going right in the design.
I used this to show my kids how computer games work. I started out by showing them Unreal Engine 4 so they understand the modern tool chain, but that still leaves the mystery of how the game is represented inside the computer. Within minutes you can have some code drawing things, real graphics, on the screen. I cannot adequately describe how easy it is.
The experiment I did with them (young kids) was define some variables, draw a counter on the screen, start a loop, and then increment the counter. After they saw how that was represented to the computer it was easy to get them to imagine a character with all of his stats assigned at the beginning and then updated on every iteration of the loop as events take place. But with kids you've gotta move fast or they lose interest. PyGame makes creating games so fast you can literally pick it up and start teaching it to kids without knowing it yourself. It's that good.
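The counter experiment described above might look roughly like this (assumes pygame is installed; the window code lives in main() so the pure logic stays visible at the top):

```python
def update_stats(stats):
    """One tick of the 'game': bump the counter, like a character's stats
    being updated on every iteration of the loop."""
    stats["ticks"] += 1
    return stats

def main():
    import pygame  # imported here so update_stats stays usable without pygame
    pygame.init()
    screen = pygame.display.set_mode((320, 240))
    font = pygame.font.Font(None, 48)
    clock = pygame.time.Clock()
    stats = {"ticks": 0}
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        update_stats(stats)
        screen.fill((0, 0, 0))
        screen.blit(font.render(str(stats["ticks"]), True, (0, 255, 0)), (20, 20))
        pygame.display.flip()
        clock.tick(30)  # cap the loop at ~30 iterations per second
    pygame.quit()

# main()  # uncomment to open the window
```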
The biggest testament to the quality of the numpy API is that it effectively became a standard that's replicated across other libraries like TensorFlow, PyTorch and JAX.
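The conventions the other libraries copied are things like ndarrays with `.shape`/`.dtype`, axis-based reductions, and broadcasting; `torch.arange(6).reshape(2, 3)` and `jax.numpy.arange(6).reshape(2, 3)` read almost identically to the numpy below (assumes numpy is installed):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)        # [[0, 1, 2], [3, 4, 5]]
col_sums = a.sum(axis=0)              # reduce down the columns -> [3, 5, 7]
shifted = a + np.array([10, 20, 30])  # broadcasting a row across both rows
print(col_sums, shifted.shape)
```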
The original QSpy protocol, which then became the GameSpy protocol, and later still made way for Valve's Master Server protocol.
I'm not sure what other protocols exist today for tracking a list of servers, providing information on them, and are as up to date as the frequency of the heartbeats from those servers, but I suspect there are similar protocols out there, and I'm just not familiar with them.
Unfortunately despite how elegant the QSpy protocol is, most modern video games no longer provide server browsers as first-class features, eschewing them in favor of matchmaking services or publisher provided dedicated servers.
IMHO SolidWorks is terrible. Single core performance limitation, huge disk IO, no out of box support for STEP GD&T export, ridiculous drawing-oriented BOM export UX, half-measure built-in RCS/VCS system, collision-prone namespace, etc.
Comsol is a truly inspirational bit of software. Remarkably easy to get started but with a huge breadth and depth of applicability. I wish there were a way it could be made free. Like, I think it might not be a waste of money for a country to buy a country-wide site license for their citizens.
A browse through their application gallery can be pretty interesting:
There's examples of studies into everything from cooking beef in a convection oven, to Bose-Einstein condensates, medical implants, geothermal storage, semiconductor manufacturing, etc, etc.
The modern Comsol GUI is truly phenomenal, especially considering where it was at with the 3.x versions. It's one of the rare cases, pretty much the only one I can think of in CAD/CAM, where the effort put into a radical GUI redesign really paid off significantly and immediately.
Diagram!, by Lighthouse Design, for NEXTSTEP. A drawing tool that featured "smart links." I believe it pre-dated Visio. Later cloned as OmniGraffle by OmniGroup.
Elegance for me means a program that's lightning fast, efficient to use, and has a minimal learning curve. The first things that come to mind:
OurGroceries, a lovely little free app to share grocery lists with your family
FooBar2000, an early windows-based media player
uTorrent (or at least the very early versions, before it became bloatware)
Snapchat (again, the early versions when it was 5x quicker than any other mobile photo sharing platform)
It's very focused on the task of writing - novels, short stories, screenplays. It pares away the parts of e.g. a word processor that are distracting (layout and the like), adds the functions of a database for tracking characters, locations, research and so on.
The underlying implementation is also storing as plain files in directories, so you can be comfortable that you'll be able to retrieve your writing if Scrivener no longer exists.
We put a lot of effort and consideration into the architecture of Unikraft[0][1], its elegance towards modularity and abstraction is the reason why I joined the team to help develop it. :)
Circus Ponies Notebook (On OpenStep, then OS X), made my jaw literally hit the floor when I first saw it. I didn't just think all software had something to learn here, I thought this was the only software anyone would ever need.
To be fair, my job lent itself toward this at the time, the WWW was nowhere near what it was a few years later, and everything-tied-to-everything was miles away. But this blew my mind.
Everything, a Windows application which finds/searches files instantly.
If MS Windows included this as its standard search function, the worldwide GDP would probably go up by like .1% from the productivity gain.
https://www.voidtools.com/
Their Setup for the Docker stack [2] is not only well documented, but also suitable as a blueprint for any persistent software setup with Docker.
They also treat documentation as a first-class member, which is really important for OSS and self-hosting. You'd have to look hard to find any outdated piece of information or lazily written part in the docs.
Lastly, the interface is just beautiful, simple and elegant. I finally enjoy listening to my music library again.
Insanely powerful and useful - very much so. And immeasurable impact on the entire media software and soundware landscape. It’s probably amongst my subjective top 10 list of software.
Saying 4.3BSD Unix is like saying SVR4 Unix. Which implementation? As far as I know SunOS (4.1x) was a 4.3BSD implementation and Solaris was a SVR4 implementation.
ZFS. It made filesystems and software RAID accessible, with an elegant interface that is a pleasure to work with.
It offers all the features one would want in a file system without much shiny bloat:
Scrubs
Distributed parity
Hot spares
Encryption
Snapshots
Compression
Quotas
All in an intuitive way that led to an almost cult like following in the Unix/Linux admin world.
Microware OS-9. A Unix-like (..sort of) operating system that supported multitasking, multiuser operation on an 8-bit CPU with 64KB or less of memory back in 1979.. two years before Microsoft bought a CP/M clone and it became DOS.
If yes, then I’ll offer an oldie but a goodie: Adobe Photoshop.
Many of its tools are intuitive. As a photo artist you don’t want the application getting in your way. It’s easy to learn how PS organizes multi-color/layer/channel images. You can do a lot with a little bit of knowledge of the application, so you can do the simplest things easily and quickly. With more application skill you can produce the highest quality images necessary for _any_ static image project.
I love utilities that quietly work, and just accept whatever workflow I throw at them.
OwnTracks is an app that logs your position and sends it somewhere else. It has been running on my phone for like 2 years without issues, and talks to a server I wrote myself.
FolderSync syncs folders on my phone to remote storage. It's super flexible and generally just works. The conditions for syncing are highly configurable. I lament the lack of a similar utility on Mac - basically an rsync+cron UI.
I have set and forgotten it for a few things here and there, like making sure the photos I take on my phone are backed up and available on my laptop as soon as they're on the same network.
I'm on Windows and have hated File Explorer for years: it always resets the view (even if you tell it to use your standard view for all folders) and has terrible defaults.
I tried many alternatives, then finally found XYplorer. It is so easy to use and well structured, but when needed it is also a powerhouse full of so many tools you'd otherwise need another app for. Really loving it, and many kudos to the only(?) developer from Germany who constantly improves it.
Elegance is usually an unassertive quality. It's harder to say what software is elegant than it is to say what isn't elegant. To me it usually means that something is done correctly for you, things are presented clearly, and annoying chores are removed. Consistency and a lack of surprises.
The first thing that comes to mind as elegant compared to the alternatives is k9s for managing/monitoring kubernetes.
This is an obscure one, but Mike Innes "[automatic] differentiation for hackers" tutorial. It's a code tutorial, not software, if that counts. Both the way it's constructed and the functionality of Julia that gets shown off here.
MindNode is a beautiful way to lay out your thoughts, and the graphs can be copied as bulleted lists.
Alfred is an application launcher that also lets me enter shortcuts to quickly launch pages like my calendar and each of my team members' open PRs. I use it dozens, if not hundreds, of times a day. With the premium version, you also get a great clipboard manager.
even on an ancient laptop, it runs smoothly. I had no idea a javascript game could look/play this good. I've spent 100 hours on it since I first saw it posted here on HN
The Visio that came on the sampler floppy. It was small and did an amazing job.
PFE back in the day. Simple macros and templates that made life much easier in a small package. Some editors today don't even bother with macros or have all of PFE's options.
IntelliJ and Goland. I mean, both Eclipse and Visual Studio used to be good tools, but they lost it; IntelliJ, however, remains a good tool throughout the years. Also Goland is much better than Visual Studio Code, imho.
Spotify (brilliant!), unix, Google's Flume, DynamoDB, EC2, the Lithium ereader app on android (simplicity is good!), Wikipedia, Google Search (or perhaps Old Google Search), emacs, vscode
First coming to mind, HackerNews. No images, no ads, no complex settings UI, no complex points/karma system. Just text and links, it does what it's built for and does it well.
workflowy. i use dynalist these days as my main outliner / note taker since it has a lot more features, but workflowy still just has something more elegant and minimal going on with its UI
That definitely isn't why Node was created... Node is a standalone programming environment that embeds V8, it doesn't somehow help you embed V8 in one of your projects or make that "feasible". In fact, one of the issues with Node for a long time was that every Node plug-in was expected to directly use the embedding API from V8--in no small part as it is actually a pretty easy-to-use API--and that made the entire ecosystem lock-step on V8 API changes (which got tied to major Node versions).
Much later, Node added two little abstraction layers over V8's APIs: one that is mostly done in some header files (to deal with the occasional backwards compatibility issue) and another which actually wraps V8; and, even then, AFAIK the latter was mostly done to allow entirely replacing V8 with ChakraCore. But neither of these abstractions is designed to be used by others outside of Node's codebase (something I almost sort of got working once, but not really): they don't help you embed V8... V8 is already easy to embed.
Notably: V8's embedding API isn't particularly more complicated than the API of any other engine: it is simply more templated. If you sit around with SpiderMonkey, JavaScriptCore, or even DukTape, you will find yourself allocating machines, managing handles, converting strings, and checking types of values. This is of course going to be verbose, in the same way that using JNI to call into Java is verbose, or generally accessing any embedded VM from a language like C/C++ is verbose (and for all the same reasons).
Nothing, necessarily. The primary initial motivation for tmux seemed to be to create a BSD-licensed "screen" for the OpenBSD userland. In any event, tmux quickly appeared in NetBSD's package collection and I was an early user. Comparing the source code is where I saw a difference. Maybe I am just dumb, but I found the tmux approach easier to comprehend and I could edit tmux source easily, whereas I found screen more difficult to learn and I never felt comfortable editing its source code. I also thought that when it was released, tmux's documentation was more elegant than screen's. I do not follow screen anymore. Things may have changed after screen finally got some competition.
This is not to say there is nothing one can do with tmux that one cannot also do with screen, or vice versa. There could be. I only opine that tmux is elegant, and more elegant than screen.
My favorite example of elegant software is the way Spotify organizes playlists and allows for playlist sharing between users. The playlists are very easy to search, browse, and make. They are also easy to share with friends, and you can find an endless number of playlists curated by other users on almost any topic you can imagine!
Incredibly easy to host open source network video recorder with object tracking and hardware acceleration support. You have to install hardware and know what you're doing to hook things up, but bespoke systems that do these things cost tens of thousands for hardware/licensing alone and don't do them half as well.
Can you give an example? I’ve never seen it myself. What do you find the most impressive about it? Any links where those parts can be seen in action? Also, what would be the closest competitor, and in what regard are they worse?
I'm impressed by how polished everything looks. As a person who does UX / product design, their working software looks better than most designer's portfolio mockups.
I'm impressed by how fast and snappy everything works or feels.
I'm impressed by how rich and custom tailored their UI component library is.
I'm impressed by how focused and tailored their UI is for the job at hand.
I'm impressed by how every single page in their application looks beautiful, not just a handful.
They actually have all their React UI library published as opensource here. https://blueprintjs.com/
If there's anyone from pltr reading this, good job. Your design people are amazing.
And of course, it's a brilliantly addictive game.