Hacker News | bluGill's comments

Developers being let go is about the economy. Every time there's a slowdown, people are let go, and we always blame the current fad, but it's the economy, not whatever technology happens to be in the news.

For tax reasons most companies avoid paying dividends. It still happens, but it's not nearly as common, and companies are trying to get away from it because for many investors it is better not to receive dividends.

That is the doom side. However, AI has found and fixed a lot of security issues. I have personally used AI to improve my code's speed: AI can analyze complex algorithms and figure out how to make them much faster, in ways I could do myself as a developer, but it's a lot of work that I typically wouldn't do. Even just writing various targeted benchmarks to see where the problems really are in my code is something I can do, but it would be so tedious that I often wouldn't bother. I can tell AI to do it and it will write them.

Only time will tell which version of the future we end up with; it could be good or bad, and we will have to see.


There is two-wire Ethernet that supports 100 Mbit/s (100BASE-T1). It isn't common, but automotive is starting to use it.


People say this without any evidence. This AI post is just regurgitating HN-thread "received wisdom". The evidence for the existence of the library is thin and hard to piece together, but it points to more than a myth. I appreciate that people want real proof of things, but dumping an AI-slop summary is hardly doing any better than accepting the existence of a large library.

The Library almost certainly existed. It is the destruction (by deliberate fire) that is probably myth.

Its destruction multiple times (in sieges and uncontrolled fires) is the current historical consensus.

The sieges and fires you are referring to were hundreds of years before the supposed destruction at the hands of Christian mobs (e.g. as depicted in the movie Agora or in Sagan’s Cosmos). The latter is unsupported.

Historical consensus? So the non-scientific view? Science is not consensus-based.

If you want to know what the science says on some topic, you have exactly two valid options:

1. Become an expert in said topic, reading the broad literature, becoming familiar with points and counterpoints, figuring out how research actually works in the field by contributing some papers of your own, and forming your own personal informed opinion on the preponderance of the evidence.

2. Look at the experts' consensus on said topic.

Of course, you have other options. A popular one is to adopt the view of one expert in the field that you happen to like, who may or may not accept the consensus view - but this is far more arbitrary than 1 or 2.


If you are not in the field, consensus is often almost impossible to figure out. Remember, what gets published is what's controversial. Thus, things that have consensus are going to be silent in the literature if you search for them. So when you go searching, you may not actually find the consensus, which makes study hard when you're not already an expert.

> things that have consensus are going to be silent in the literature if you search for them

Not really; generally consensus ideas will be mentioned in passing while discussing something else. You can get a strong sense of the consensus that way.

For example, I bought several textbooks on early Mesopotamian history, which taught me that Marxism enjoys a strong consensus in that field. And it's not even relevant to the field!


As a Canadian I love the US and think of them as family, but I also view them as a relative who has lost their senses. Until recently, we'd sadly shake our heads as this relative did weird things, yet still hope for the best for them. But while rambling blather about invading Canada and compelling 51st statehood would be fondly tolerated in grandpa, it's not so tolerable from a nation with a massive army and a joy in using it.

So I propose we strengthen another aspect of American "democracy" that Canadians find amusing: the concept of hiring people for popularity, not competency. Americans, especially at the local level, vote for judges, police chiefs, even dog-catchers, so why not a local scientist? Rather than 1 or 2, we can conjoin this concept with your third option, yet with the officiousness that only a vote can provide!

Each municipality can have a local head scientist, who will proclaim which scientific facts are correct. People can vote on candidates, and their platforms of scientifically correct "things", at election time.

It will all work out very well for them I'm sure, and hopefully, with science thus democratized, perhaps they will be less of a threat over time.

(Sorry, I don't know why your comment made this pop into my head)


Why not just have them vote on the truth? That would be very entertaining and keep them all busy.

Of course science is consensus based ... consensus is a fundamental part of the scientific process, which is conducted by a community of scientists. Consensus is the end result of attempts at reproducibility and falsification, of the ongoing process by which scientists challenge the claims and purported findings of other scientists. Without it, all you have are assertions from which people can pick and choose based on their biases (as we see, for instance, with people who deny climate or vaccine science by cherrypicking claims).

https://en.wikipedia.org/wiki/Scientific_consensus

https://skepticalscience.com/explainer-scientific-consensus....

https://tomhopper.me/2011/11/02/scientific-consensus/

And even if you reject consensus as being essential to science, calling the consensus view "the non scientific view" is obviously mistaken, a basic error in logic.

This is all well understood by working scientists so I'm not going to debate it or comment on it further.


I have seen several real historians say the same thing. I'm not a historian myself, but when I see professors of history in various institutions saying something, I tend to suspect they actually have a consensus, although as others pointed out, maybe it's not a consensus and I would have no way of knowing.

I'm not doubting that the library existed and was destroyed, possibly burned, more than once, but the common trope that Christians did it does not seem to be backed up by history.


What bothers me is the Vatican Library. It's too vulnerable to fire. There should be staff taking photos of the pages and storing those copies elsewhere.

Yes, you can quickly photograph an entire book with a phone camera. No, you do not need archivists to do it. No, you do not need a scanner. No, you do not need special lighting.

Don't believe me? Pick a book of yours, open it, and take a photo with your phone.


Well, the library surely is not there anymore.

Problem is, most things are only available as a Snap. You can get them otherwise, but not by default.

I can't believe people like Snap when, in the name of security, it breaks basic things such as accessing a folder on a different mount point that the user can normally access perfectly fine.

A packaging system should not break the basic abstractions of an OS.


Yeah, this was the frustrating bit for me. I use Firefox to look at stuff that lives in /tmp/; Snap Firefox can't do this. I'd remove Snap Firefox, pin the priorities, and it would still silently crawl its way back in after a week or two, no matter what I tried. I gave up on Ubuntu. Earlier versions used to respect the priorities, but something changed.

My guess is it was the only obvious evidence of an attack.

Gill probably already knows this, but for the uninitiated: something logged in, did a thing to potentially every container, and then deleted any sign of it doing the thing.

All that's left is a single timestamp of a log or something getting deleted.


Which is only useful for historical investigation - the old snapshot has security holes attackers know how to exploit.

> the old snapshot has security holes attackers know how to exploit.

So is running `docker build` when the `RUN apt update` line gets a cache hit, except the latter is silent.

The problem solved by pinning to the snapshot is not to magically be secure; it's knowing what a given image is made of, so you can trivially assert which ones are safe and which ones aren't.

In both cases you have to rebuild an image anyway so updating the snapshot is just a step that makes it explicit in code instead of implicit.
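
For Debian-based images, a minimal sketch of what that explicit pinning can look like, using snapshot.debian.org (the timestamp and installed package are arbitrary examples):

    # Point apt at a dated snapshot so every rebuild sees the same package set.
    # check-valid-until=no is needed because old Release files have expired.
    FROM debian:bookworm
    RUN rm -f /etc/apt/sources.list.d/* \
     && echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main' \
          > /etc/apt/sources.list \
     && apt-get update \
     && apt-get install -y --no-install-recommends curl

Updating is then just bumping the timestamp, which shows up as an explicit diff in the Dockerfile.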


Where does the apt update connect to? If it is an up-to-date package repo, you get fixes. However, there are lots of reasons it might not be. You'd better know, if this is your plan.

You get fixes that were current at docker build time, but I think GP is referring to fixes that appear in the apt repo after your docker container is deployed.

If you've pulled in a dependency from outside the base image, there will be no new base image version to alert you to an update of that external dependency. Unless your container regularly runs something like apt update && apt list --upgradable, you will be unaware of security fixes newly available from apt.
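
A minimal sketch of such a check on a Debian/Ubuntu base (output handling kept deliberately simple):

    # Refresh the package indexes, then list packages with pending updates.
    apt-get update -qq
    apt list --upgradable 2>/dev/null | grep -v '^Listing' || echo "no pending updates"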


Yeah that's yet another annoying thing to consider

Also I'm tired of doing these hacks:

    # increase to bust cache entry
    RUN true 42 && apt update
Pinning to a snapshot just makes so many things easier.

You are screwed either way. If you don't update, your container has a ton of known security issues; if you do, the container is not reproducible. Reproducible is neat, with some useful security benefits, but it is something of a non-goal if the container is more than a month old - a day might even be a better max age.

Why is there a need for a package manager inside a container at all? Aren't they supposed to be minimal?

Build your container/vm image elsewhere and deploy updates as entirely new images or snapshots or whatever you want.

Personally I prefer buildroot and consider a VM as just another target for embedded OS images.


So if I have a docker container which needs a handful of packages, how would you handle it?

I'm handling it by using a slim Debian or Ubuntu, then using apt to install those packages with the necessary dependencies.

For everything easy, like one basic binary, I use the most minimal image, but as soon as it gets just a little bit annoying to set up and maintain, I start using apt and a nightly build of the image.


IMO—package manager outside the container. You just want the packages inside the container; the manager can sit outside and install packages into the container.

Yes, how?

With Red Hat's UBI Micro:

  microcontainer=$(buildah from registry.access.redhat.com/ubi8/ubi-micro)
  micromount=$(buildah mount $microcontainer)
  yum install \
      --installroot $micromount \
      --releasever 8 \
      --setopt install_weak_deps=false \
      --nodocs -y \
      httpd
(from https://www.redhat.com/en/blog/introduction-ubi-micro published in 2021)

Great. Now I have to install and learn another tool, when having yum inside the container would just work?

Not just that but it will probably break later and ruin everything

For the package management, it depends on the package manager, but most have some mechanism for installing into a root other than the currently running system.

Even without explicit support in the package manager, you could also roll your own solution by running the package manager in a chroot environment, which would then need to be seeded with the package manager's own dependencies, of course (and use user-mode QEMU to run pre- and post-installation scripts within the chroot in the case of cross-architecture builds).

Whether this yields a minimal container when pointed at a repository intended to be used to deploy a full OS is another question, but using a package manager to build a root filesystem offline isn't hard to pull off.
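
On Debian-style systems, a rough sketch of that flow (the target path and package are illustrative): seed the root with debootstrap, then run the package manager against it.

    # Seed a minimal Debian root from the host, then install into it.
    debootstrap --variant=minbase bookworm /target http://deb.debian.org/debian
    chroot /target apt-get update
    chroot /target apt-get install -y --no-install-recommends curl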

As for how to do this in the context of building an OCI container, tools like Buildah[1] exist to support container workflows beyond the conventional Dockerfile approach, providing straightforward command line tools to create containers, work with layers, mount and unmount container filesystems, etc.

[1] https://github.com/containers/buildah/blob/main/README.md


There have got to be a million ways to do this by now. Some of the more principled approaches are tools like Nix (https://xeiaso.net/talks/2024/nix-docker-build/) and Bazel (https://github.com/bazel-contrib/rules_oci). But if you want to use an existing package manager like apt, you can pick it apart. Apt calls dpkg, and dpkg extracts files and runs post-install scripts. Only the post-install script needs to run inside the container.
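
As a rough illustration of picking apt apart (the package name is arbitrary; dpkg-deb -x unpacks the payload without running any maintainer scripts):

    # Fetch a .deb without installing it, then extract only its files.
    apt-get download curl
    dpkg-deb -x curl_*.deb /path/to/rootfs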

I may be a little out of touch here, because the last time I did this, we used a wholly custom package manager.


Docker recommends using multi-stage builds, e.g. the stage-one image has the package manager; the stage-two image omits it completely, leaving only the installed software.
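
A minimal sketch of the idea (the Go program is an arbitrary stand-in for whatever you're shipping):

    # Stage 1: build environment with the full toolchain available.
    FROM golang:1.22 AS build
    COPY . /src
    WORKDIR /src
    RUN CGO_ENABLED=0 go build -o /app .

    # Stage 2: ship only the artifact; no shell or package manager at all.
    FROM scratch
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]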

apk and xbps can do this. You specify a different root to work in.
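
For apk, a sketch of what that looks like (mirror, version, and target directory are example values):

    # Install an Alpine base into /target using the host's apk.
    # --allow-untrusted because the fresh root has no signing keys yet;
    # copying /etc/apk/keys into the root is the cleaner alternative.
    apk add --root /target --initdb \
        --repository https://dl-cdn.alpinelinux.org/alpine/v3.20/main \
        --allow-untrusted alpine-base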

Most Makefiles allow you to specify an alternate DESTDIR on install.


You don’t even need most of the files in the packages. Just pull out the files you need.

The same way you may require something like cmake as a build dependency but not have it be part of the resulting binary - separate build time and run time dependencies so you only distribute the relevant ones.

I'm not talking about multi-stage container images.

For example, I run a GCS FUSE driver; it has other dependencies apt "just" resolves.


Your question feels insane to me for production environments. Why aren't you doing a version cutoff of your packages and either pulling them from some network/local cache or baking them into your images?

I don't just run a Java Spring Boot application; I run other things on my production system.

It doesn't matter much where I pull them from, though; I only do this with packages which have plenty of dependencies, where I don't want to assemble my own minimal image.


The aforementioned security vulnerabilities don't strike you as a potential reason?

Friend, considering the supply chain attacks going on these days, automatically updating everything, immediately, probably isn't the perfect move either.

You need to automatically update from a trusted source. That source better audit and update constantly. Which is hard.

Stable distributions have security teams.

Ignoring the real benefits of security updates to prevent the unlikely event of supply chain attacks sounds like a weird tradeoff.

A weird tradeoff but an increasingly important tradeoff to keep in mind nonetheless. Like I said, updating immediately isn't a perfect answer. But neither is waiting. I hope you're having this discussion, at least.

That local cache is often implemented as a drop-in replacement for the upstream package repository, and packages are still installed with the same package manager (yum, apt, pip, npm).

No, this is not always the case. Regulated industries pin their package versions and store those versions for pulling.

Minimal might or might not be your goal. A large container is sometimes correct - at that point you have to ask whether using one container twice (so you only need to download it once) and then installing the one missing part makes more sense.

> minimal

I run systemd, sshd and xpra (remote X11) inside my arch container.


For some scenarios simply building a new image every day is reasonable.

I update my docker containers regularly, but I do it in a reproducible, auditable, predictable way.

Could you explain how you achieve this?

If you are on GitHub/GitLab, Renovate bot is a good option for automating dependency updates via PRs while still maintaining pinned versions in your source.

Have a multi-layer build system where each layer is versioned. Example (see the sketch after the list):

- linux base: kernel + libc + some low tooling

- jvm base: adding the JRE/JDK you need

- your application version
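
A sketch of how those pinned layers chain together (the registry and tags are hypothetical):

    # Each base image is built and versioned separately, then referenced by tag.
    # linux-base:1.4.2  = libc + low-level tooling
    # jvm-base:17.0.9-1 = linux-base:1.4.2 + a pinned JDK
    FROM registry.example.com/jvm-base:17.0.9-1
    COPY target/app.jar /opt/app/app.jar
    ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]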


Chainguard, Docker Inc’s DHI etc. There’s a whole industry for this.

There was Taiwan and Japan for your cheap-junk fix. Neither is known for cheap junk anymore.
