
I thought this was going to be about Android (which makes heavy use of that term), and I was expecting completely different complaints:

- The toast disappears quickly, so you might not have time to read it / take a screenshot

- It's not possible to copy the text

- Long text is truncated (e.g. exception messages)


There is a serious bug in Wolfram Alpha's "math input" mode. When you enter e²ⁿ, it is interpreted as e²n (full details at the end). This was reported to them a month ago and still hasn't been fixed, so I figured it was time for some public shaming ;)

I've been really impressed with Wolfram Alpha over the years (both the natural language parsing and the power of Mathematica); my main issue until now has been that the natural language parser tends to fail on inputs beyond some length (fortunately Mathematica syntax is also supported and works well). So I was very surprised when this glaring bug in math input mode was shared with me.

Full steps to reproduce:

1. Go to https://www.wolframalpha.com/

2. Click "MATH INPUT"

3. Click the "power" button (second from the right, icon is two boxes with one in superscript)

4. Type "e" (it should go in the first box)

5. Click the superscript box

6. Type "2n"

7. Click the "=" button

Result: The input field correctly shows e²ⁿ (with the "n" in superscript), but the formula shown in the "Input" section is e²n (the "n" is outside the exponent) and the "Plot" section shows a straight line which confirms that the input was misinterpreted as e² * n.

Explicitly adding parentheses around the "2n" fixes this. Ironically, when you do that, the "Input" section shows the formula as e²ⁿ (without the parentheses; the same version that fails when entered in the input field).
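
As a workaround (since Mathematica syntax is also supported), the unambiguous plain-text form of the intended input should be something like:

  E^(2n)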


Docker & co. also let you create a clean build environment (to a lesser extent), and I find them less intrusive than Nix / Guix.


Docker isn't reproducible. The one thing it can give you is a consistent set of mystery meat binaries, but that's an even worse starting point than the old problem of mystery meat source code.


Docker images can be reproducible.

They just aren't by default (because they include a timestamp), and you need to jump through multiple hoops to get them there consistently. (And things like "apk add" or "apt install" can't be used unless you're installing pinned versions.)
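
For illustration, a minimal sketch of what that pinning looks like (the digest and version here are placeholders, not real values):

  # Pin the base image by digest, not by tag -- a tag can be repointed at any time.
  FROM alpine:3.19@sha256:0000000000000000000000000000000000000000000000000000000000000000
  # Pin exact package versions; a bare "apk add curl" pulls whatever is current.
  RUN apk add --no-cache curl=8.5.0-r0

Recent BuildKit releases also honor a SOURCE_DATE_EPOCH build argument, which addresses the embedded-timestamp problem.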


Reproducible docker images are almost useless for the things you want reproducibility for. Sure, you can reproduce the image for all of the future, but that image is useless in a few years when the certificates expire. Those expired certificates mean you cannot use the image for whatever you wanted it for.

A variation of the above: reproducible builds are not that useful - sure, you can prove the build is the same, but in the end you want the latest security fixes applied, and by the time you create the replacement build and verify it, the build is obsolete.

Don't get me wrong, reproducible builds are important and do good things - but there are severe limits to what you can/should do with them and so while it is important to demand them, they are not important to use yourself.


Wouldn't you want to have certificates and other crypto data as an input to a reproducible build harness?

  # build initial images
  # add semi-static inputs (mostly static config data, crypto data, signed inputs)
  # add final watermarks
So each step can be verified


That final watermark is not verifiable and so you can inject something else.


Can be, but aren't.

Are you pinning your base image? Where did that come from? Are you pinning your packages? What about their dependencies? Are you locking down the hashes or just hoping that your distro won't replace a package in-place?

And that's before you get into crap like OpenShift certification that blanket requires a `dnf update` statement.


Why would you not be pinning versions in a Dockerfile? The entire point is “if it works on my machine, it works on yours,” and that goes out the window if you can’t be assured that every program in the release is at the same version you had.


That "entire point" is already accomplished by the built binary container image, which has a unique identifier in the form of its SHA-256 hash, and can be shared with others easily.

A reproducible build is grand, but somewhat tangential to that goal, and hard to obtain in practice. Besides the timestamp problem already mentioned, you can't always pin the versions of system libraries and other distribution-provided software. The large long-term cost of hosting and geographically distributing content leads to many distributions, and especially their externally provided package mirrors, discarding stale versions from repositories. Often, the only available versions are the one included in the release plus the latest N, with N sometimes as small as 1.

If you're building a no-frills image for production deployment of a single piece of software, this problem can be bypassed thanks to distroless and other stripped-down base images, but "batteries included" images can't go this route.


Because if you pin versions, you are pinning to some version with a security flaw whose fix you are not allowing yourself to get. Often a flaw is fixed by a developer who realizes something is wrong with the code before it has been exploited, so anyone who keeps up to date cannot be exploited by that flaw, while anyone who doesn't keep up doesn't even know they are vulnerable.

Of course there is a balance here, there is a reason to pin versions. I'm stating why you shouldn't do that, but I cannot figure out all the pros and cons and how they should work out for your needs.


Nix is very unobtrusive on non-NixOS installs. I put together a flake that builds a CNPG-compatible image; it has postgres, barman, pgmq, pl/python, pl/lua, pl/pgsql, pl/v8, pg_squeeze, pg_jsonschema, pg_graphql, pg_analytics, pg_safeupdate, pg_cron, pg_similarity, pgaudit, pgrouting, postgis and timescaledb. It weighs in heavy at about a 700 MB container, but it literally has everything-ish. And as long as postgres and barman are in $PATH, shadow files are configured and some folders are created, CNPG just goes with the flow.

I can't imagine building such a monstrosity with anything else. And since the plugins are dependent on postgresql but not on each other, I can add and remove them at a whim. Nix will create layers for me automatically.
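
As a rough illustration (a hypothetical sketch, not the actual flake), the shape of such a build with nixpkgs' dockerTools might look like:

  pkgs.dockerTools.buildLayeredImage {
    name = "cnpg-postgres";  # hypothetical image name
    # Each store path becomes its own layer (up to a limit), so adding or
    # removing one extension only invalidates that extension's layer.
    contents = with pkgs; [
      (postgresql_16.withPackages (ps: with ps; [ postgis timescaledb pg_cron ]))
      barman
    ];
  }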

And when I upgrade postgres, I know all packages will be rebuilt against the new postgres, because Nix.

I think Nix could use list/dict comprehensions and some more dev candy, sure, but it's really, really great.

And at the end of the day, if you just go look at the source it's all there available to you, you don't have to wonder how Debian or RedHat built their golden postgres, there's no golden anything in Nix because if their hashes don't match mine I won't be pulling from their cache.

I think Nix's biggest issue is that it doesn't attract promo skiddies the same way an imperative dirtbag like Salt or Ansible would, and most people can't even comprehend the things that open up when you can trust your shit.

Wanna write the hackiest perl script ever, one you'd never expect to keep working? That's still what activates most people's new NixOS generation (a rewrite is underway).

But back to point, Nix on Ubuntu patches /etc/{bash,fish,zsh}rc, creates the /nix top folder and that's it. It doesn't eat your system.
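
For what it's worth, that patch is tiny. If memory serves, the multi-user installer appends something like this to each shell's rc file:

  # Nix
  if [ -e '/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh' ]; then
    . '/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh'
  fi
  # End Nix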

Yes, it has warts and they're big. But it's the only way forward.


At Supabase we also recently switched to Nix for packaging our Postgres+extensions bundle:

https://github.com/supabase/postgres/blob/develop/flake.nix


> Nix is very unobtrusive on non-nixos installs,

You may be speaking from the perspective of using Linux, because this here is some "you gotta be kidding me": https://nix.dev/manual/nix/2.18/installation/installing-bina...


I suspect you're linking to this without having read it recently. This section now explains that there used to be problems on macOS and that they're now resolved + some optional extra information about what the installer is doing. And to be fair, the mac installation issue was quite bad for a few years.


pg_analytics maker here -- That's cool! We package ours in CNPG here: https://github.com/paradedb/helm-charts

Would you recommend using Nix even in that context?


Fix:

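  # Minimal ps replacement: print each PID in /proc with its NUL-delimited command line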
  ps() { for i in /proc/[0-9]*; do readarray -d '' -t cmdline < "$i/cmdline"; printf "%s: %s\n" "${i#/proc/}" "${cmdline[*]}"; done; }


A little late to the party...

> The slow cases usually tend to be trying to find wider slots and skipping through many smaller slots. There are often a lot of bookings up to the first wide enough slot too though, so it's a little bit of both.

This right here is the first critical issue. It doesn't matter how fast the data structure is if you're then going to do a linear search over the available slots to find the next one wide enough.

A good data structure for doing this efficiently is a range tree (https://en.wikipedia.org/wiki/Range_tree), where in each node you store the maximum width of the available slots covered by the node. That lets you find the first available slot after some time t and wider than some duration w in O(log(n)), where n is the number of available slots. (It might not be obvious if you're not familiar with range trees; I'm happy to provide more details, and there's a short sketch after the implementation options below.)

For the implementation, there are a few options:

A. The simplest is to use a complete range tree, where the leaf nodes are all the possible times. You can lazily create nodes to avoid massive memory usage for sparse ranges. The advantage is that you don't need to do any balancing; the disadvantage is that the time complexity is O(log(T)), where T is the total number of possible times, so it's going to be a little slower on very sparse datasets.

B. The O(log(n)) implementation that's being called for is a self-balancing binary tree (e.g. red-black), modified to also store the maximums in each node. Unfortunately most libraries don't give you low-level control over the tree's nodes, so you'd likely need to copy the code and modify it (or implement the self-balancing tree from scratch).

C. If those are still too slow (and you're certain that your implementation is really O(log(n))), you'll need to improve the cache efficiency. That basically comes down to using larger nodes. The obvious approach is to switch to a B-tree; but you could also keep your nodes binary and just change the way they are allocated to emulate the memory layout of a B-tree (this is simpler but slower because it still uses lots of pointers). Another idea is to replace the first few layers of the tree with a hash table (or a simple array if your data is dense enough). Likewise you can replace the leaf nodes with small arrays.
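
To make the query concrete, here's a minimal sketch in Python (my own illustration, with placeholder data). It's a static array-backed tree over the slots sorted by start time; a production version would also need O(log(n)) point updates as slots get booked and freed:

  import bisect

  class MaxTree:
      # Each node stores the max slot width over its range.
      def __init__(self, widths):
          self.n = len(widths)
          self.size = 1 << (self.n - 1).bit_length()
          self.t = [0] * (2 * self.size)
          self.t[self.size:self.size + self.n] = widths
          for i in range(self.size - 1, 0, -1):
              self.t[i] = max(self.t[2 * i], self.t[2 * i + 1])

      def first_at_least(self, lo, w):
          # Leftmost slot index >= lo whose width is >= w, else -1.
          def go(node, a, b):
              if b <= lo or self.t[node] < w:
                  return -1
              if a + 1 == b:
                  return a
              mid = (a + b) // 2
              res = go(2 * node, a, mid)
              return res if res != -1 else go(2 * node + 1, mid, b)
          res = go(1, 0, self.size)
          return res if res < self.n else -1

  # Available slots sorted by start time (placeholder data):
  starts = [9.0, 10.5, 13.0, 15.25]
  widths = [0.5, 1.0, 0.75, 2.0]
  tree = MaxTree(widths)
  i = bisect.bisect_left(starts, 11.0)   # first slot starting at/after t = 11.0
  j = tree.first_at_least(i, 1.5)        # ...that is at least 1.5 wide
  print(starts[j] if j != -1 else None)  # -> 15.25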


The article's actual title is "High Court orders temporary suspension of Telegram's services in Spain", not "Spanish High Court banned Telegram".

This is a temporary suspension while the complaint is investigated, "after media companies complained it was allowing users to upload their content without permission".


Great sleuthing! The missing piece of the puzzle is that the file contents are inside a <code> element while the line numbers are not, and <code> elements have a default font, so they don't inherit the font from their parent element. Changing the selector to the following fixes the issue:

  div#cgit pre, div#cgit code { ... }
(The buggy CSS is not present in the official cgit repository, so I assume the owner of kernel.dk is running a patched version of cgit.)



In fact the article mentions a tool for this (sslh), but rejects it because it hides the source IP from the HTTP backend (and other reasons).


In the case of SSH, there is a single connection (in fact SSH implements its own multiplexing), so I don't see the advantage of HTTP/2.
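
For illustration, OpenSSH's client-side multiplexing already reuses one TCP connection for many sessions; roughly these ssh_config options enable it:

  Host example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m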

