
These may be objectively superior (I haven't tested), but I have come to realize (like so many others) that if you ever change your OS installation, set up VMs, or SSH anywhere, preferring these is just an uphill battle that never ends. I don't want to have to set these up in every new environment I operate in, or even use a mix of these on my personal computer and the traditional ones elsewhere.

Learn the classic tools, learn them well, and your life will be much easier.


Some people spend the vast majority of their time on their own machine. The gains of convenience can be worth it. And they know enough of the classic tools that it's sufficient in the rare cases when working on another server.

Not everybody is a sysadmin manually logging into lots of independent, heterogeneous servers throughout the day.


Yeah, this is basically what I do. One example: using neovim with a bunch of plugins as a daily driver, but whenever I land on a server that doesn't have it or my settings/plugins, it isn't a huge problem to run vim or even vi; most stuff works the same.

Same goes for a bunch of other tools that have "modern" alternatives but whose "classic" versions are already installed/available on most default distribution setups.


Also that workflow of SSH'ing into a machine is becoming rarer. Nowadays systems are so barren they don't even have SSH.


That's a cute thought not grounded in reality.

The infra may be cattle, but debugging via anal probe (err, SSH) is still the norm.


Someone might have ssh access, just not you :) VPSes will still be VPSing, even though people tend to go for managed Kubernetes or whatever the kids are doing today. But if you're renting instances/"machines", then you're most likely still using ssh.


Ansible (system management automation) runs over SSH. So do a lot of other useful tools, like git, rsync, most everything from the "CharmBracelet" folks [1], and also anything you can port tunnel, so yeah. SSH is still useful to some of us out here. Personally, I do all my commandline stuff locally and manage remote stuff via SSH through various tools and scripting, so I get mostly the best of both worlds there. :)

[1] https://github.com/charmbracelet/
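
For instance, port tunnelling is a one-liner; a sketch with a made-up host and ports:

    # make a Postgres listening on the bastion's localhost:5432 reachable at localhost:5433
    ssh -L 5433:localhost:5432 admin@bastion.example.com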


Some are so vastly better that it's worth whatever small inconvenience comes with getting them installed. I know the classic tools very well, but I'll prefer fd and ripgrep every time.


For my part, the day I was confused about why "grep" couldn't find some files that were obviously there, only to realize that "ripgrep" was ignoring files in the gitignore, was the day I removed "ripgrep" from my system.

I never asked for such behaviour, and I have no time for pretty "modern" opinions in base software.

Often, when I read "modern", I read "immature".

I am not ready to replace my stable base utilities with immature ones whose behaviour keeps changing.

The scripts I wrote 5 years ago must work as is.


You did ask for it though. Because ripgrep prominently advertises this default behavior. And it also documents that it isn't a POSIX compatible grep. Which is quite intentional. That's not immature. That's just different design decisions. Maybe it isn't the software you're using that's immature, but your vetting process for installing new tools on your machine that is immature.

Because hey guess what: you can still use grep! So I built something different.


Sounds like the problem you have here is that `grep` is aliased to `ripgrep`. ripgrep isn't intended to be a drop-in replacement for POSIX grep, and the subjectively easier usage of ripgrep can never replace grep's maturity and adoption.

Note: if you want to make ripgrep not do .gitignore filtering, set `RIPGREP_CONFIG_PATH` to point to a config file that contains `-uu`.

Sources:

- https://github.com/BurntSushi/ripgrep/blob/master/GUIDE.md#c...

- https://github.com/BurntSushi/ripgrep/blob/master/GUIDE.md#a...


So I stand corrected. I did indeed use ripgrep as a drop-in replacement.

That's on me!


I've been playing around with this over the years and this is what I put in my .rgrc:

    --smart-case
    --no-messages
    --hidden
    --ignore-vcs

and then point to it with

    # in ~/.zshenv
    export RIPGREP_CONFIG_PATH="$HOME/.rgrc"

Not perfect, and sometimes I reach for good old-fashioned escaped \grep, but most of the time it's fine.


The very first paragraph in ripgrep's README makes that behaviour very clear:

> ripgrep is a line-oriented search tool that recursively searches the current directory for a regex pattern. By default, ripgrep will respect gitignore rules and automatically skip hidden files/directories and binary files. (To disable all automatic filtering by default, use rg -uuu.)

https://github.com/BurntSushi/ripgrep
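
Concretely, per that quoted paragraph (the pattern is a placeholder):

    rg pattern        # default: respects .gitignore, skips hidden and binary files
    rg -uuu pattern   # all automatic filtering disabled, closest to grep -r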


It's odd how, with every new tool that emerges, some people fixate solely on whether it's an exact clone of what they already know. They overlook the broader differences and trade-offs, treating anything less than a complete replica, quirks included, as unworthy of anyone's attention. Even when insults like "immature" aren't thrown around right away, it's a frustratingly narrow perspective.

Regarding ripgrep: if it's not bug-for-bug compatible with grep, it’s deemed useless. Yet, if it is identical, then why bother using it at all? What kind of logic is that?


+100


One of the reasons I really like Nix: my setup works basically everywhere (as long as the host OS is either Linux or macOS, but those are the only two environments I care about). I don't even need root access, since there are multiple ways to install Nix rootless.

But yes, in the occasional case where I don't have Nix, I can very much use the classic tools. It is not a binary choice; you can have both.


Are you going to install Nix in a random Docker container?


That is why I said that I still know how to use the basic Unix tools. If I am debugging something so frequently that I feel I need to install my Nix configuration just to be productive there, something is clearly going wrong.

For example, in $CURRENT_JOB we have a bastion host that gives access to the databases (not going to discuss whether this is a good idea or not; it's how my company does things). 90% of the time I can do whatever I need with just what the bastion host offers (and it doesn't have Nix); if I need to dig deeper, I can copy some files between the bastion host and my computer for further analysis.


I've gotten great mileage out of a ttyd-based container that runs bash with Nix. Ttyd exposes the shell over a web endpoint.

I can just drop it into the environment and pull in tools that I need using nix.
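
A minimal sketch of the ttyd side (the port is arbitrary; -W makes the terminal writable, since recent ttyd versions default to read-only):

    # serve an interactive bash at http://localhost:7681
    ttyd -W -p 7681 bash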


mise is a good middle ground.


I am not sure how mise would be a "good middle ground" compared to Nix, considering it is really easy to get a static binary version of Nix. Nowadays it even works standalone without creating a `/nix` directory: you simply run the binary and it creates everything you need in `~/.local/state/nix`, if I remember correctly. And of course Nix is way more powerful than mise.


> that if you ever change your OS installation

apt-get/pacman/dnf/brew install <everything that you need>

You'll need to install those and other tools (your favorite browser, your favorite text editor, etc.) anyway if you're changing your OS.

> or SSH anywhere

When you connect through SSH you don't have a GUI, but that's not a reason to avoid using GUI tools, for example.

> even use a mix of these on my personal computer and the traditional ones elsewhere

I can't see the problem, really. I use some of those tools and they are convenient, but it's not as if I can't work without them. For example, bat: it doesn't replace cat, it only outputs data with syntax highlighting. It makes my life easier, but if I don't have it, that's OK.


> apt-get/pacman/dnf/brew install <everything that you need>

If only it were so simple. Not every tool comes from a package with the same name (delta is git-delta, "z" is zoxide, which I'm not sure I'd remember off the top of my head when installing on a new system). On top of that, you might not like the defaults of every tool, so you'll have config files that you need to copy over or recreate (and hopefully sync between the computers where you use these tools).
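
To illustrate on Debian/Ubuntu (the exact list is illustrative, and names vary by distro):

    sudo apt-get install ripgrep fd-find bat git-delta zoxide
    # installed binaries: rg, fdfind, batcat, delta, zoxide
    # ("z" only exists after running zoxide's shell init)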

That said, I do think Nix provides some good solutions for this. It gives you a nice clean way to list the packages you want in a Nix file, and also to set their defaults and/or provide some configuration files. It does still require some maintenance (and I choose to install the config files as editable, which is not very Nix-y, but I'd rather edit them and then commit the changes to my configs repo for future deploys than have to edit and redeploy for every minor or exploratory change), but I've found it's much better than trying to maintain some sort of `apt-get install [packages]` script.


After installing it, git clone <dotfiles repo> and then stow .
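
Roughly, assuming a repo laid out for stow (the URL is a placeholder):

    git clone https://github.com/you/dotfiles ~/dotfiles
    cd ~/dotfiles
    stow .   # symlinks the contents into $HOME (stow targets the parent directory by default)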


Chezmoi makes this really easy: https://www.chezmoi.io/


I only skimmed the website, but it looks like it only does dotfiles. So I'd need to maintain a separate script to keep my packages in sync. And a script with install commands wouldn't be enough: maybe I decided to stop using abcxyz, and I'd like for the script to remove it. Versioning between the package and the dotfile can also sometimes be an issue.


Yes, chezmoi is for user configuration. System configuration is a drastically different domain. Personally, I manage that with Ansible.


It takes less than a second, or less than 10s with a Google search, to adapt...


> You'll need install those and other tools (your favorite browser, you favorite text editor, etc) anyway if you're changing your OS.

The point is that sometimes you're SSHing to a lightweight headless server or something and you can't (or can't easily) install software.


Because 'sometimes' doesn't mean you should needlessly handcuff yourself the other 80% of the time.

I personally have an Ansible playbook to set up all my commonly used tooling on more or less any CLI environment I will use significantly; (almost) all local installs, to avoid needing root. It runs in about a minute, and I have all the niceties. If it's not worth spending that minute to run, then I won't be on the machine long enough for it to matter.


> I personally have an Ansible playbook to set up all my commonly used tooling on more or less any CLI environment I will use significantly

^^ Yep. Totally this. I've become entirely too accustomed to all the little niceties of a well-crafted toolchain that covers all my needs at any given moment. It was worth the time invested to automate installing and configuring all the fancy newfangled stuff I've built up muscle-memory for. :)


It does seem like a lot of these tools basically have the same “muscle memory” options anyway.


That's a niche case. And if you need to frequently SSH into a lightweight server, you'll probably be OK with the default commands even though you have the others installed in your local setup.


Strongly agreed. I don't understand why I'd want to make the >99% of my time less convenient in order to make the <1% of the time when I'm on a machine where I can't install things (even in a local directory for the user I'm ssh'd into) feel less bad by comparison. It's not even a tradeoff where I'm choosing which part of the curve to optimize for; it's literally flattening the high part so the overall convenience level is constant, and lower.


> When you connect through SSH you don't have GUI and that's not a reason for avoiding using GUI tools, for example.

One major difference can emerge from the fact that using a tool regularly inevitably builds muscle memory.

You’re accustomed to a replacement command-line tool? Then your muscle memory will punish you hard when you’re logged into an SSH session on another machine because you’re going to try running your replacement tool eventually.

You’re used to a GUI tool? That will likely bite you much less in that scenario.


> You’re accustomed to a replacement command-line tool?

Yes.

> Then your muscle memory will punish you hard

No.

I'm also used to pt-br keyboards, it's easier to type in my native language, but it's ok if I need to use US keyboards. In terms of muscle memory, keyboards are far harder to adapt.

A non-tech example: if I go to a Japanese restaurant, I'll use chopsticks and I'm ok with them. At home, I use forks and knives because they make my life easier. I won't force myself to use chopsticks everyday only for being prepared for Japanese restaurants.


That goes against the UNIX philosophy, IMO. Tools "doing one thing and doing it well" also means that tools can and should be replaced when a superior alternative emerges; that's pretty much the whole point of simple utilities. I agree that you should learn the classic tools first, as they're a huge investment for a whole career, but you absolutely should learn newer alternatives too. I don't care much for bat or eza, but some alternatives like fd (find alt) or sd (sed alt) are absolute time savers.
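
A taste of the ergonomics, side by side (invocations are illustrative):

    find . -type f -name '*.log'    # classic
    fd -e log                       # fd: shorter, and respects .gitignore by default
    sed -i 's/foo/bar/g' notes.txt  # classic (GNU sed)
    sd foo bar notes.txt            # sd: same in-place replacement, simpler syntax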


Rust's user-unfriendly build paradigm puts me off using a lot of these. Ripgrep is fine when I can install it from a package manager. But if I'm on some weird machine and need to build it, then first I have to build rustc, and then it wants to download gigabytes of whatever, just to compile a better 'grep'?


You don't need to build rustc to build ripgrep. If you are, that's a choice you are making. Cross compilation is a thing. And what weird machine doesn't have a way to install a ripgrep binary anyway? It's pretty much everywhere these days.
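
For the record, cross-compiling from a machine that does have Rust is short (target triple as an example, run from a ripgrep checkout):

    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl
    # then copy target/x86_64-unknown-linux-musl/release/rg to the weird machine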


> But if I'm on some weird machine

Then use grep, what’s your point? grep is not going away because ripgrep is better, but ripgrep might become more available?

I also notice you’re saying "if", so you’re not actually on such a machine. So again, what’s your point?


> Learn the classic tools, learn them well, and your life will be much easier.

Agreed, but that doesn't stop you from using/learning alternatives. Just use your preferred option, based on what's available. I realise this could be too much to apply to something like a programming language (despite this, many of us know more than one) or a graphics application, but for something like a pager, it should be trivial to switch back and forth.


And when those classic tools need a little help:

Awk and sed.

I like the idea of new tools though. But knowing the building blocks is useful. The “Unix power tools” book was useful to get me up to speed.. there are so many of these useful mini tools.

Miller is one I’ve made use of (it also was available for my distro)


I do prefer some of these tools, due to a much better UX, but the only one I install on every Unix box is ripgrep.


I tend to use some of these "modern" tools if they are a drop-in replacement for existing tools.

E.g. I have ls aliased to eza as part of my custom set of configuration scripts. eza pretty much works like ls in most scenarios.

If I'm in an environment which I control and is all configured as I like it, then I get a shinier ls with some nice defaults.

If I'm in another environment then ls still works without any extra thought, and the muscle memory is the same, and I haven't lost anything.

If there's a tool which works very differently to the standard suite, then it really has to be pulling its weight before I consider using it.


I wanted to say we should just stick with what Unix shipped forever. But doesn't GNU already violate that idea?


IMO this is very stupid: don't let the past dictate the future. UNIX is history. History is for historians; it should not be the basis that shapes the environment for engineers living in the present.


The point is that we always exist at a point on a continuum, not at some fixed time when the current standard is set in stone. I remember setting up Solaris machines in the early 2000s with the painful SysV tools that they came with and the first thing you would do is download a package of GNU coreutils. Now those utils are "standard", unless of course you're using a Mac. And newer tools are appearing (again, finally) and the folk saying to just stick with the GNU tools because they're everywhere ignore all of the effort that went into making that (mostly) the case. So yes, let's not let the history of the GNU tools dictate how we live in the present.


Well, even “Unix” had some differences (BSD switches vs. SysV switches). Theoretically, POSIX was supposed to smooth that out, but it never fully went away. Today, people are more likely to be operating in a GNU Linux environment than anything else (that's just a market-share fact, not a moral judgement, BSD lovers). Thus, for most people, GNU is the baseline.


I indeed would not want to feel stranded with a bespoke toolkit. But I also don't think shying away from good tools is the answer. Generally, I think using better tools is the way to go.

Often there are plenty of paths open to getting a decent environment as you go:

Mostly, I rely on ansible scripts to install and configure the tools I use.

One fallback I haven't seen mentioned that can get a lot of mileage: use sshfs to mount the target system locally. This allows you to use your local tools & setup effectively against another machine!
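
For example (host and paths are made up):

    mkdir -p ~/mnt/remote
    sshfs admin@remote.example.com:/var/log ~/mnt/remote
    rg ERROR ~/mnt/remote        # local tools, remote data
    fusermount -u ~/mnt/remote   # unmount when done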


Emacs/Tramp does that for me.


Along those lines, Dvorak layouts are more efficient, but I use QWERTY because it works pretty much everywhere (are small changes like AZERTY still a thing? Certainly our French office uses an "international" layout, and generally the main pains internationally are "@" being in the wrong place and \ not working; for the latter you can use user@domain when logging into a Windows machine, rather than domain\user).


I've been using Dvorak for 24 years. 99% of the time I'm using my own machines, so it's fine. For the other 1% I can hunt-and-peck QWERTY well enough.


I know my way around vi well enough because, although XEmacs was my editor during the 1990s when working on UNIX systems, when visiting customers there was a very high probability that they only had ed and vi installed on their server systems.

Many folks nowadays don't get how lucky they are, not having to do UNIX development on a time-sharing system, although cloud systems kind of replicate the experience.


Ed is the standard text editor.



Doing edlin for a high-school typing exam was already enough, and ed wasn't much better, an opinion shared by our customers back then.


And not installed by default in many distros. FML.


> And not installed by default in many distros. FML.

> ed (pronounced as distinct letters, /ˌiːˈdiː/) is a line editor for Unix and Unix-like operating systems. It was one of the first parts of the Unix operating system that was developed, in August 1969. It remains part of the POSIX and Open Group standards for Unix-based operating systems.

so it is a bug in those distros.


When I got my first Unix account [1] I was in a Gnu emacs culture and used emacs from 1989 to 2005 or so. I made the decision to switch to vi for three reasons: (1) less clash with a culture where I mostly use GUI editors that use ^S for something very different than what emacs does, (2) vim doesn't put in continuation characters that break cut-and-paste, (3) often I would help somebody out with a busted machine where emacs wasn't installed, the package database was corrupted, etc and being able to count on an editor that is already installed to resolve any emergency is helpful.

[1] Not like the time one of my friends "wardialed" every number in my local calling area and posted the list to a BBS and I found that some of them could be logged into with "uucp/uucp" and the like. I think Bell security knew he rang everybody's phone in the area but decided to let billing handle the problem because his parents had measured service.


I started a new job and spent maybe a day setting up the tools and dotfiles on my development machine in the cloud. I'm going to keep it throughout my employment, so it's worth the investment. And I install most of the tools via the Nix package manager, so I don't have to compile things or figure out how to install them on a particular Linux distribution.


Learn Ansible or similar, and you can be more or less OS-agnostic (macOS/Linux/even Windows) with relatively complex setups. I set mine up before agentic systems were as good as they are now, but I assume it would be relatively effortless today.

IMO, it's worth spending some time to clean up your setup for a smooth transition to new machines in the future.
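
The payoff is that bringing up a fresh machine becomes a single command (file names are placeholders):

    ansible-playbook -i hosts.ini workstation.yml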


I learned Ansible, and now I run one command, wait 10 minutes, and a new Linux machine is configured with all the stuff I want.


This is how I feel as well. I spent some time "optimizing" my CLI with oh-my-zsh etc. when I was young.

Only to feel totally handicapped when logging into a busybox environment.

I'm glad I learned how to use vi, grep, sed..

My only change to an environment is the keyboard layout. I learned Colemak when I was young. Still enjoying it every day.


How hard is it to set up your tooling?

I have a chef cookbook that sets up all the tools I like to have on my VMs. When I bootstrap a VM it includes all the stuff I want like fish shell and other things that aren’t standard. The chef cookbook also manages my SSH keys and settings.


I have some of these tools; they are not "objectively superior". A lot of them make things prettier with colors, bar graphs, etc. That is nice on a well-configured terminal, not so much in a pipeline. Some of them are full TUIs, essentially graphical tools that run in a terminal rather than traditional command-line tools.

Some of them are smart, but sometimes I want dumb. For example, ripgrep respects gitignore, and often I don't want that; though in this case there is an option to turn it off (-uuu). That's a common theme with these tools: they try to be smart by default, and you need options to make them dumb.

So no, these tools are not "objectively superior"; they are generally more advanced, but that is not always what you need. They complement the classic tools, but in no way replace them.


For scripting, no doubt about that! But if you want to use some custom tool, you can use sshfs to mount whatever is on the other side onto your system and work from there. That has its own set of limitations, but it makes some stuff much easier.


Never will I ever set up tools and a home environment directly on the distro, only in a rootfs that I can proot/toolbx/bwrap into. Not only do I not want to set everything up again on a different computer, distro upgrades have nuked "fancy" tools enough times for it not to be worth it.


https://proot-me.github.io/

Wow, that is so cool. This looks a lot more approachable than other sandboxing tools.


Not a comment on these particular tools, but I keep the non-standard utilities I use in my ~/bin/ directory, and they go with me when I move to a different system. The tools mentioned here could be handled the same way, making the uphill a little less steep.
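
A sketch of that setup (paths illustrative):

    export PATH="$HOME/bin:$PATH"        # in ~/.profile or shell rc
    rsync -av ~/bin/ user@newhost:bin/   # bring the tools along to a new system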


Agreed, but some are nice enough that I'll make sure I get them installed where I can. 'ag' is my go-to fast grep, and I get it installed on anything I use a lot.


> Learn the classic tools, learn them well, and your life will be much easier

not really contradicted by:

> exa: modern replacement for ls/tree, not maintained


"I don't want to be a product of my environment. I want my environment to be a product of me."


For some people the "uphill battle" is the fun part


I do it at least for ripgrep.


Fzf has saved me so much time over the years. It's so good.


Yes, good point. I use it all the time too. Plus fzf-lua in neovim which depends on it.


so right.


I'm really torn. A while ago I made the decision to abandon linux and use a BSD as my daily OS. In almost every way I prefer FreeBSD over OpenBSD, except when it comes to the source code. The clarity of OpenBSD source seems unmatched, and for me that's really important as I have picked up interest in contributing to the kernel of the BSD I choose. Also in some naive way it gives me a warm feeling knowing that the code of my operating system is in tip top shape.


I don't usually like participating in bikeshedding, but the @ annotation feels like PHP's $ to me in that I don't see why it needs to exist. The language design could've easily just left that out and I don't think anything would be lost.

Other than that, I'm definitely excited for Zig as a potential C++ replacement.


Built-in names are essentially reserved words, and there are dozens of them. The @ prefix ensures you don't step on users' variable names, and that you can add new built-ins without making breaking changes.


Why aren't they simply namespaced, like `core.some_function`?


Because there'd need to be a magical exception for the "core" namespace that makes it not just a file of Zig code somewhere like every other module.


Isn't there already a magical exception for "root" and "builtin"?


Not really; those are two modules that are always available to you, but you still have to import them like any other Zig module: `const builtin = @import("builtin");`


5 characters to type instead of 1.


My belief is that it distinguishes builtin functions provided by the compiler from those that are part of the standard library. They are documented in the language reference [0] versus the standard library documentation [1].

[0]: https://ziglang.org/documentation/master/

[1]: https://ziglang.org/documentation/master/std/


> I don't see why it needs to exist

The symbol namespaces builtins, as those are the only identifiers that aren't declared in the file (either directly or by using `@import`).


Although namespacing them keeps them out from under a programmer's feet, which is a significant benefit, it does seem like this would make it harder to find stuff. @cmpxchgStrong, @wasmMemorySize and @embedFile are completely unrelated, but since they're all builtins they're neighbours.


This is sort of an issue we already deal with in other languages, and IMO it's not a huge deal in those. Personally I find @ more reasonable than __builtin_.


One side benefit is that a lot of @ code is "dangerous shit", so it draws your eye during code review. You will want to review the "dangerous shit" that GPT-5 gives you.


It seemed interesting based on the number of languages the editor supports on paper, but a quick trial showed that at least C# support is very, very barebones, even theoretical, as I couldn't get any kind of autocomplete to work, for example. After a few minutes of usage I'm unsure what it provides aside from some syntax coloring. Maybe I did something wrong?


You did not. Aside from Rust and Swift, all the added languages are preliminary. A minimal extension is required to, for example, connect Chime up to an LSP server to get semantic features going like completions and diagnostics. We do not have a 1st-party extension built for C# yet.


Why not implement first-class support for LSP servers, and offer extensions that wrap official LSPs?


That's more or less exactly what we've done. Our SDK does have support for LSP. But, unfortunately those extensions still need to be made. Or are you talking about a generic LSP extension that is server-agnostic? That is definitely buildable, but my experience has been that the experience tends to be a lot better when customized for a particular server.


I really like the new UI (and all the other similar modern UIs) and I just can't pretend that I don't.

I'm a very visual, design-oriented person and I want my tools to look good. If I'm going to spend countless hours using a product/tool, I want to enjoy its looks as well. Too many tools, in my opinion, are designed by people who want to maximize productivity with very utilitarian and practical user interfaces (which is great!) but don't seem to care enough about making something that looks beautiful at the same time.


I was in the private beta for it; I went into it neutral and came out really liking it. All the keybinds remain the same, and I had already turned off quite a lot of the old UI, so for me it felt like what I already had, but more refined.

As with all things design, it's open to a large degree of subjectivity.


Beautiful on top of productive and utilitarian : yes.

Beautiful instead of productive and utilitarian : absolutely NOT.


From only a quick look into this project it looks very interesting and appealing, but I'm afraid it might be an evolutionary dead-end in the JavaScript world, similar to Facebook's Flow and CoffeeScript. Any opinions to the contrary?


I think they are at the opposite side of the spectrum: languages like Flow, CoffeeScript and TypeScript are more akin to supersets of JavaScript (with potentially "simpler" syntax). Languages like ReScript and PureScript come from the functional world and compile to JS as a feature, but their type systems are sound, without any of the baggage from JS, and they have features that are (at least as of now) not available in JS, like pattern matching. They are not an evolution of JS like Flow or CoffeeScript; rather, they are for developers who want something more than just "JavaScript with types".


Probably better comparisons would be ClojureScript, Elm, or PureScript: something that doesn't even pretend to be JavaScript anymore.


I've personally witnessed several unnecessary and costly refactorings in my job that were done due to the weird perception the blog is talking about. All those weeks and months spent removing a perfectly fine tool that was working without issues.


What is this inane trend here on HN of referring to companies by the names they are registered under on the stock exchange?


For many of us it's sheer laziness, and it's been this way for a long time. GOOG is faster to type than Google. (This is especially true for MSFT.)


I remember MSFT being a common reference on Slashdot way back when.


I like that it's more accurate than saying Google (it's Alphabet now, after all), while still using a reference we are familiar with.


It shows that we are #savvy.

/s


Shorthand.


If we could start with having a working Ctrl-C/Ctrl-V, that would be great :). (I know Windows improved on this with 10, but Linux is still lacking.)

Other than that, I wouldn't mind some graphical features, such as being able to display thumbnails of images, for example.


On Linux I usually use Ctrl+Shift+C, Ctrl+Shift+V which seems to be bound by default on most terminals. Ctrl+C of course terminates the running program, usually.


Why is ctrl-C the shortcut for "terminate", anyway? I would have expected it to be something like ctrl-T.


From Wikipedia (https://en.wikipedia.org/wiki/Control-C):

As many keyboards and computer terminals once directly generated ASCII code, the choice of control-C overlapped with the ASCII end-of-text character. This character has a numerical value of three, as "C" is the third letter of the alphabet. It was chosen to cause an interrupt as it is otherwise unlikely to be part of a program's interactive interface. Many other control codes, such as control-D for the end-of-transmission character, do not generate signals and are occasionally used to control a program.

Also see https://en.wikipedia.org/wiki/C0_and_C1_control_codes#STX


Type

   man ascii
and notice what is 0x40 less than the capital letter.

H is 0x48, backspace is 0x08. ^H is the same as backspace.

I is 0x49, tab is 0x09, ^I is the tab sequence.

Now you know why the Windows \r in text files shows up as ^M in vi and Emacs.

^D is ASCII 0x04, "EOT (end of transmission)". That's why you use that to end input.

^C is ASCII 0x03, whose name is "ETX (end of text)". That seems a reasonable choice.


You can probably map Super+C and Super+V (i.e. the "Windows Key") to copy and paste in your chosen terminal emulator.

(I have done this in KDE's Konsole, so I can paste from a web form I must often use, which somehow blocks the standard Unix/Linux selecting-to-copy, middle-click-to-paste method.)


This is probably one of the biggest advantages of macOS using ⌘C and ⌘V for copying and pasting instead of Ctrl-C and Ctrl-V.

(…well that, and being able to use your thumb for performing the keyboard shortcut instead of having to use your pinky finger).


In Linux there's no need for Ctrl-C (just make the selection), and Ctrl-V is instead Shift-Insert. How is that lacking? It saves two keystrokes.


The insert key is located in incredibly non-standard and inconvenient positions on many keyboards. For instance, right now I'd have to hold down a special mode-shift key, shift, and then find the insert key, which I don't believe I have ever typed on purpose in my life - it basically exists as a huge PITA, like the Caps-Lock key, that occasionally puts me into overwrite mode or does other things I didn't intend when I fat-finger it.


> Linux is still lacking

Linux isn't lacking it; it has just already settled on a different shortcut. Do you also consider the absence of Ctrl-C to copy in Safari (in favour of Cmd-C) a lack?

At least on Linux terminal emulators you could always remap it (at your own peril, be careful about setting up another shortcut for the actual interrupt). Pre-Windows 10, OTOH...


Interesting stuff. I've recently become interested in databases and did some toy programming related to them, trying to understand how they work.

I just wish the author had gone closer to the nitty-gritty details of _how_ the data is actually fetched from disk. How does Postgres store the data? What do the data files look like? How is the parsing done?

In any case I appreciate the effort. I guess I might have to dive into the source code myself.


There's some coverage of this in the docs, cf. http://www.postgresql.org/docs/devel/interactive/storage.htm...

