
Chromium is kind of stuck with zlib because it's the algorithm that's in the standards, but if you're making your own protocol, you can do even better than this by picking a better algorithm. Zstandard is faster and compresses better. LZ4 is much faster, but doesn't compress quite as well.
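
If it helps to see what swapping the codec looks like in practice, here's a minimal Python sketch comparing the three (assuming the third-party `zstandard` and `lz4` packages; the sizes and speeds you get depend entirely on your data):

    import zlib
    import zstandard  # pip install zstandard
    import lz4.frame  # pip install lz4

    data = open("payload.bin", "rb").read()

    blobs = {
        "zlib (DEFLATE)": zlib.compress(data, 6),
        "zstandard": zstandard.ZstdCompressor(level=3).compress(data),
        "lz4": lz4.frame.compress(data),
    }
    for name, blob in blobs.items():
        print(f"{name}: {len(blob)} bytes")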

Some reading: https://jolynch.github.io/posts/use_fast_data_algorithms/

(As an aside, at my last job container pushes / pulls were in the development critical path for a lot of workflows. It turns out that sha256 and gzip are responsible for a lot of the time spent during container startup. Fortunately, Zstandard is allowed, and blake3 digests will be allowed soon.)


One of my biggest problems was the local development story. I wanted logs, traces and metrics support locally but didn’t want to spin up a multitude of Docker images just to get that to work. I wanted to be able to check what my logs, metrics, traces, baggage and activity spans look like before I deploy.

Recently, the .NET team launched .NET Aspire and it’s awesome. Super easy to visualize everything in one place in my local development stack and it acts as an orchestrator as code.

Then when we deploy to k8s we just point the OTEL endpoint at the DataDog Agent and everything just works.

We just avoid the DataDog custom trace libraries and SDK and stick with OTEL.
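
For anyone not on .NET, the same "point the OTLP endpoint at the agent" idea looks roughly like this with the Python OpenTelemetry SDK (a sketch, not our setup; `datadog-agent:4317` stands in for wherever your Agent listens for OTLP gRPC):

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    # Send spans over plain OTLP to whatever is listening (a collector, or the
    # DataDog Agent with OTLP ingest enabled); no vendor SDK involved.
    provider = TracerProvider()
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://datadog-agent:4317"))
    )
    trace.set_tracer_provider(provider)

    with trace.get_tracer("demo").start_as_current_span("hello"):
        pass  # this span goes wherever the endpoint points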

Now it’s a really nice development experience.

https://learn.microsoft.com/en-us/dotnet/aspire/fundamentals...

https://docs.datadoghq.com/opentelemetry/#overview


(I'm the post author)

As others said, I've moved away from doing nix builds on servers and into a less wasteful (if you're running multiple servers) approach of building once, deploying the bits into a cache, and making the servers fetch them from a cache. I've been slowly working on my own tool to make this workflow easier (nixless-agent, you can find an early version on GitHub).


Apple | Senior Software Engineer | Austin & Cupertino | Full-time | Onsite

The ASE Data Infrastructure team is seeking an experienced senior software engineer to contribute to the development of our next-generation object storage infrastructure. As a key member of our team, you will play a critical role in designing and implementing solutions that enable seamless collaboration across Apple engineering teams.

Requirements:

- In-depth experience with object storage implementations: S3, GCS, Azure Blob Storage, MinIO, and Ceph - we are looking for candidates who have hands-on expertise with these technologies.

- Proficiency in Rust: You will be working extensively with the Rust programming language to build high-performance systems.

- Expertise in debugging and performance analysis: Experience driving performance analysis of end-to-end distributed systems is essential. We need someone who can quickly identify and resolve issues at scale.

- Micro-services architecture and container orchestration expertise: You have experience working with containers (e.g., Docker) and orchestrating them with tools like Kubernetes or similar. This knowledge is critical to our system's scalability and reliability.

- Relational and non-relational database expertise: PostgreSQL, Cassandra, and other databases are your area of specialization. You know how to design and implement efficient data storage and retrieval systems.

- Experience in data migration, disaster recovery, and capacity planning: We're talking about large-scale data management here. Your experience in these areas will be invaluable.

Responsibilities: Review and provide constructive feedback on pull requests and designs, fostering a culture of continuous learning and knowledge sharing. Collaborate with other senior team members across multiple sites to define high-quality and reliable standards for our solutions.

Location: Austin & Cupertino (onsite work required)

If you're passionate about building innovative software and pushing the boundaries of data storage, we want to hear from you! You can either apply directly (https://jobs.apple.com/en-us/details/200556764/senior-softwa...) or send me an email at mansur.ashraf@<company name>.com


I actually wrote a script which creates a trampoline launcher for this. It has its flaws but it solves the Spotlight issue, and supports pinning to the Dock across updates.

Available as a plug-and-play module for nix-darwin and home-manager: https://github.com/hraban/mac-app-util



I use Caddy for my applications.

I recently wrote a forward_auth server to use with Caddy's forward_auth functionality:

https://github.com/crowdwave/checkpoint401

https://caddyserver.com/docs/caddyfile/directives/forward_au...


A related project that might interest some people: https://github.com/telekons/one-more-re-nightmare

And the pretty hard to find blog post about it: https://applied-langua.ge/posts/omrn-compiler.html


Can you provide some links to some projects? You’ve piqued my interest

If you’re on NixOS (the distro), the only way to run multiple services of the same type (e.g., Nginx) is using containers. These can either be NixOS containers or standard Docker/Podman containers [1]. If you want to run a Compose project, I maintain a tool that helps simplify this [2].

[1] https://nixos.wiki/wiki/NixOS_Containers

[2] https://github.com/aksiksi/compose2nix



This, plus a handful of simple firewall rules in the raw table, can block 90%+ of that remaining 1% just by looking at the client version banner, which is spoofable but which none of the bots seem to spoof (I assume because they're lazy, like me).

In the raw table:

    -A PREROUTING -i eth0 -p tcp -m tcp --dport 22 -d [my server ip] -m string --string "SSH-2.0-libssh" --algo bm --from 10 --to 60 -j DROP
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 22 -d [my server ip] -m string --string "SSH-2.0-Go" --algo bm --from 10 --to 60 -j DROP
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 22 -d [my server ip] -m string --string "SSH-2.0-JSCH" --algo bm --from 10 --to 60 -j DROP
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 22 -d [my server ip] -m string --string "SSH-2.0-Gany" --algo bm --from 10 --to 60 -j DROP
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 22 -d [my server ip] -m string --string "ZGrab" --algo bm --from 10 --to 60 -j DROP
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 22 -d [my server ip] -m string --string "MGLNDD" --algo bm --from 10 --to 60 -j DROP
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 22 -d [my server ip] -m string --string "amiko" --algo bm --from 10 --to 60 -j DROP
Adding the server IP minimizes the risk of also blocking outbound connections, since the raw table is stateless.

I rarely do this any more given they rotate through so many LTE IPs. Instead I get the bot operators to block me by leaving SSH on port 22 and then giving them a really long VersionAddendum that seems to leave the bots broken, sticky and confused. There are far fewer SSH bot operators than it appears. They will still show up in the logs but that can be filtered out using drop patterns in rsyslog.

    VersionAddendum "  just put in a really long sentence in sshd_config that is at least 320 characters or more"
Try it out on a test box that you have console access to just in case your client is old enough to choke on it. Optionally use offensive words for the bots that log things to public websites. Only do this on your hobby nodes, not corporate owned nodes unless legal is cool with it, in writing.

https://git.kernel.org/pub/scm/libs/librseq/librseq.git/tree...

Thank you, I didn't know about this one. Here's an allocator that seems to use it: https://google.github.io/tcmalloc/rseq.html



If you're looking for a debate against ZStandard, it's hard to argue against it.

ZStandard is Pareto optimal.

For the argument why, I really recommend this investigation.

https://insanity.industries/post/pareto-optimal-compression/


Dev here — I've been meaning to update the Homebrew cask to be more complete on zap, but there's a good reason that all of these are needed:

- ~/.orbstack

- Docker context that points to OrbStack (for CLI)

- "source ~/.orbstack/shell/init.zsh" in .zprofile/bash_profile (to add CLI tools to PATH)

- ~/.ssh/config (for convenient SSH to OrbStack's Linux machines)

- Symlinks to CLI tools in ~/.local/bin, ~/bin, or /usr/local/bin depending on what's available (to add CLI tools to existing shells on first install — only one of these is used, not all)

- Standard macOS paths (~/Library/{Application Support, Preferences, Caches, HTTPStorages, Saved Application State, WebKit})

- Keychain items (for secure storage)

- ~/OrbStack (empty dir for mounting shared files)

- /Library/PrivilegedHelperTools (to create symlinks for compatibility)

Not sure what the best solution is for people who don't use Homebrew to uninstall it. I've never liked separate uninstaller apps, and it's not possible to detect removal from /Applications when the app isn't running.


Here is another tutorial on Kalman filters, a step-by-step video playlist -- https://www.youtube.com/watch?v=CaCcOwJPytQ&list=PLX2gX-ftPV...

Once you get the intuition, Kalman filters are really interesting. As are particle filters -- those are fun to work with and visualize.
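
If you want the intuition in code, here's a minimal 1-D Kalman filter sketch in Python (constant hidden value, noisy measurements; the noise parameters are made up for illustration):

    import random

    x_est, p = 0.0, 1.0    # state estimate and its variance (initial guess)
    q, r = 1e-5, 0.1 ** 2  # process noise and measurement noise variances

    true_value = 1.23
    for _ in range(50):
        z = true_value + random.gauss(0, 0.1)  # noisy measurement

        p += q                    # predict: constant model, uncertainty grows a bit
        k = p / (p + r)           # Kalman gain: how much to trust this measurement
        x_est += k * (z - x_est)  # update the estimate toward the measurement
        p *= 1 - k                # uncertainty shrinks after the update

    print(x_est)  # converges toward true_value as measurements accumulate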


For people who are interested in doing something similar in Go, some time ago I implemented a generic VFS that can be exposed both via FUSE and NFSv4.

It’s part of Buildbarn, a distributed build cluster for Bazel, but it can also easily be used outside that context.

Details: https://github.com/buildbarn/bb-adrs/blob/master/0009-nfsv4....

My recommendation to the authors would be to use NFSv4 instead of NFSv3. No need to mess around with that separate MOUNT protocol. Its semantics are also a lot closer to POSIX.


If you’re interested in the general concept of sidenotes and labelling figures in the margin, here’s my take on it for further inspiration: https://chrismorgan.info/blog/2019-website/, with https://chrismorgan.info/blog/rust-fizzbuzz/ exhibiting figures.

Do search around for other sidenote implementations, too. I like the markup and presentation I end up with, but you’ll also find other approaches to mobile support in particular, involving things like checkbox hacks to toggle visibility, or just flat-out requiring JavaScript.


If you wanted to learn, I really recommend Operating Systems: Three Easy Pieces (OSTEP). I thought it was excellent and pretty easy to follow. https://pages.cs.wisc.edu/~remzi/OSTEP/

mem* do not need to be called via ifunc. That is a toolchain decision. See e.g. https://github.com/nadavrot/memset_benchmark for recent data about the cost of PLT indirection for small copies.

ETA: This excellent paper, which is all about optimizing mem*, also notes that the PLT cost adds up. https://dl.acm.org/doi/pdf/10.1145/3459898.3463904



I've been using NixOS on a Dell slim workstation as a router and I couldn't be happier. My config can be found here: https://github.com/seandheath/nixos/blob/main/hosts/router.n...

NixOS is great for a basic home router (I use it for mine) but its networking config is still pretty rudimentary, and some things I would expect to work just don't - e.g. port forwarding only works from outside your network, not from inside.

I haven’t done much with vlans yet so I can’t comment on that.


There's also Sile: https://sile-typesetter.org/

LaTeX (Well, LuaTeX) powers my E-ink newspaper (https://imgur.com/a/NoTr8XX), but I've been curious to use it as a vehicle to try out some of these alternatives and see if they can put together a non-trivial layout.


A few antipatterns/annoyances I've come across over the years:

Importing paths based on environment variables:

There is built-in support for this: e.g. if the env var `NIX_PATH` is set to `a=/foo:b=/bar`, then the Nix expressions `<a>` and `<b>` will evaluate to the paths `/foo` and `/bar`, respectively. By default, the Nix installer sets `NIX_PATH` to contain a copy of the Nixpkgs repo, so expressions can do `import <nixpkgs>` to access definitions from Nixpkgs.

The reason this is bad is that env vars vary between machines, and over time, so we don't actually know what will be imported.

These days I completely avoid this by explicitly un-setting the `NIX_PATH` env var. I only reference relative paths within a project, or else reference other projects via explicit git revisions (e.g. I import Nixpkgs by pointing the `fetchTarball` function at a github archive URL)

Channels:

These always confused me. They're used to update the copy of Nixpkgs that the default `NIX_PATH` points to, and can also be used to manage other "updatable" things. It's all very imperative, so I don't bother (I just alter the specific git revision I'm fetching, e.g. https://hackage.haskell.org/package/update-nix-fetchgit helps to automate such updating).

Nixpkgs depends on $HOME:

The top-level API exposed by the Nixpkgs repository is a function, which can be called with various arguments to set/override things; e.g. when I'm on macOS, it will default to providing macOS packages; I can override that by calling it with `system = "x86_64-linux"`. All well and good.

The problem is that some of its default values will check for files like ~/.nixpkgs/config.nix, ~/.config/nixpkgs/overlays.nix, etc. This causes the same sort of "works on my machine" headaches that Nix was meant to solve. See https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...

I avoid this by importing Nixpkgs via a wrapper, which defaults to calling Nixpkgs with empty values to avoid its impure defaults; but still allows me to pass along my own explicit overrides if needed.

The imperative nix-env command:

Nix provides a command called 'nix-env' which manages a symlink called ~/.nix-profile. We can run commands to "install packages", "update packages", "remove packages", etc. which work by building different "profiles" (Nix store paths containing symlinks to a bunch of other Nix store paths).

This is bad, since it's imperative and hard to reproduce (e.g. depending on what channels were pointing to when those commands were run, etc.). A much better approach is to write down such a "profile" explicitly, in a git-controlled text file, e.g. using the `pkgs.buildEnv` function; then use nix-env to just manage that single 'meta-package'.

Tools which treat Nix like Apt/Yum/etc.

This isn't something I've personally done, but I've seen it happen in a few tools that try to integrate with Nix, and it just cripples their usefulness.

Package managers like Apt have a global database, which maps manually-written "names" to a bunch of metadata (versions, installed or not, names of dependencies, names of conflicting packages, etc.). In that world names are unique and global: if two packages have the name "foo", they are the same package; clashes must be resolved by inventing new names. Such names are also fetchable/realisable: we just plug the name and "version number" (another manually-written name) into a certain pattern, and do a HTTP GET on one of our mirrors.

In Nix, all the above features apply to "store paths", which are not manually written: they contain hashes, like /nix/store/wbkgl57gvwm1qbfjx0ah6kgs4fzz571x-python3-3.9.6, which can be verified against their contents and/or build script (AKA 'derivation'). Store paths are not designed to be managed manually. Instead, the Nix language gives us a rich, composable way to describe the desired file/directory; and those descriptions are evaluated to find their associated store paths.

Nixpkgs provides an attribute set (AKA JSON object) containing tens of thousands of derivations; and often the thing we want can be described as 'the "foo" attribute of Nixpkgs', e.g. '(import <nixpkgs> {}).foo'

Some tooling that builds-on/interacts-with Nix has unfortunately limited itself to only such descriptions; e.g. accepting a list of strings, and looking each one up in the system's default Nixpkgs attribute set (this misunderstanding may come from using the 'nix-env' tool, like 'nix-env -iA firefox'; but nix-env also allows arbitrary Nix expressions too!). That's incredibly limiting, since (a) it doesn't let us dig into the structure inside those attributes (e.g. 'nixpkgs.python3Packages.pylint'); (b) it doesn't let us use the override functions that Nixpkgs provides (e.g. 'nixpkgs.maven.override { jre = nixpkgs.jdk11_headless; }'); (c) it doesn't let us specify anything outside of the 'import <nixpkgs> {}' set (e.g. in my case, I want to avoid NIX_PATH and <nixpkgs> altogether!)

Referencing non-store paths:

The Nix language treats paths and strings in different ways: strings are always passed around verbatim, but certain operations will replace paths by a 'snapshot' copied into the Nix store. For example, say we had this file saved to /home/chriswarbo/default.nix:

  # Define some constants
  with {
    # Import some particular revision of Nixpkgs
    nixpkgs = import (fetchTarball {...}) {};

    # A path value, pointing to /home/chriswarbo/defs.sh
    defs = ./defs.sh;

    # A path value, pointing to /home/chriswarbo/cmd.sh
    cmd = ./cmd.sh;
  };
  # Return a derivation which builds a text file
  nixpkgs.writeScript "my-super-duper-script" ''
    #!${nixpkgs.bash}/bin/bash
    source ${nixpkgs.lib.escapeShellArg defs}
    ${cmd} foo bar baz
  ''
Notice that the resulting script has three values spliced into it via ${...}:

- The script interpreter `nixpkgs.bash`. This is a Nix derivation, so its "output path" will be spliced into the script (e.g. /nix/store/gpbk3inlgs24a7hsgap395yvfb4l37wf-bash-5.1-p16 ). This is fine.

- The path `cmd`. Nix spots that we're splicing a path, so it copies that file into the Nix store, and that store path will be spliced into the script (e.g. /nix/store/2h3airm07gp55rn9qlax4ak35s94rpim-cmd.sh ). This is fine.

- The string `nixpkgs.lib.escapeShellArg defs`, which evaluates to the string `'/home/chriswarbo/defs.sh'`, and that will be spliced into the script. That's bad, since the result contains a reference to my home folder! The reason this happens is that paths can often be used as strings, getting implicitly converted. In this case, the function `nixpkgs.lib.escapeShellArg` transforms strings (see https://nixos.org/manual/nixpkgs/stable/#function-library-li... ), so:

- The path `./defs.sh` is implicitly converted to the string `/home/chriswarbo/defs.sh`, for input to `nixpkgs.lib.escapeShellArg` (NOTE: you can use the function `builtins.toString` to do the same thing explicitly)

- The function `nixpkgs.lib.escapeShellArg` returns the same string, but wrapped in apostrophes (it also adds escaping with backslashes, but our path doesn't need any)

- That return value is spliced as-is into the resulting script

To avoid this, we should instead splice the path into a string before escaping; giving us nested splices like this:

    source ${nixpkgs.lib.escapeShellArg "${defs}"}

I also created a minimal Ninja implementation in Rust some time ago. My goal was to implement it in terms of the Build Systems a la carte paper. Of course, it's hard to compete with the original Ninja authors, who obviously understand it much better. For example, I used a separate lexer and environments, which got a little annoying and is something explicitly called out in their design.

https://github.com/nikhilm/ninja-rs


Or for the people who don't live in the UK, i.e. the majority of the world population:

    Episode 1 - magnet:?xt=urn:btih:3C378A82CF67A1107523CA6C647077403A1EF74D&dn=India+The+Modi+Question+S01E01+1080p+HDTV+H264-DARKFLiX&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.dler.org%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=udp%3A%2F%2F47.ip-51-68-199.eu%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.internetwarriors.net%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.to%3A2920%2Fannounce&tr=udp%3A%2F%2Ftracker.pirateparty.gr%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.cyberia.is%3A6969%2Fannounce

    Episode 2 - magnet:?xt=urn:btih:F55992F922B9A0E49C09E198835F0F06EE07635B&dn=India+The+Modi+Question+S01E02+1080p+HDTV+H264-DARKFLiX&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.dler.org%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=udp%3A%2F%2F47.ip-51-68-199.eu%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.internetwarriors.net%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.to%3A2920%2Fannounce&tr=udp%3A%2F%2Ftracker.pirateparty.gr%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.cyberia.is%3A6969%2Fannounce

I have a catch-all setup so any email sent to my domain, that is not a registered email, goes to a specific account. Then I just create a new email for each site. I also use Sieve rules (Fastmail) to process emails. Each email address for a specific site uses that site's DNS name (amazon.com@foo.tld). If an email is sent to an address and the sender's domain doesn't match the left side of the "@" (with or without dots) then it's considered spam and I send myself an email saying someone likely sold my data.
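
Roughly, the check looks like this (a Python sketch of the logic, not my actual Sieve rule; the addresses are just examples):

    def likely_sold(recipient: str, sender: str) -> bool:
        """Flag mail where the sender's domain doesn't match the site
        encoded in the recipient's local part (e.g. amazon.com@foo.tld)."""
        site = recipient.split("@")[0].lower()           # "amazon.com"
        sender_domain = sender.split("@")[-1].lower()    # "mail.amazon.com"
        matches = (
            sender_domain == site
            or sender_domain.endswith("." + site)
            or sender_domain.replace(".", "") == site.replace(".", "")  # dots optional
        )
        return not matches

    print(likely_sold("amazon.com@foo.tld", "orders@amazon.com"))      # False
    print(likely_sold("amazon.com@foo.tld", "promo@shadybroker.biz"))  # True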

Works great with Bitwarden's new username generation feature. I can create new accounts at the push of a button now.


For what it's worth I encountered the same issue and came up with a solution:

https://github.com/cloudflare/cloudflared/issues/574

Cloudflare have ignored the GitHub issue (which includes a solution), but at least 3 other people seem to have found my solution helpful.

