Hacker News | elephantum's comments

They write in the press release that the sources remain under the Apache 2.0 license; they just stop distributing prebuilt images for free.

Edit: As far as I can see, it's true.

Source code for OCI images: https://github.com/bitnami/containers/tree/main/bitnami

Charts: https://github.com/bitnami/charts/tree/main/bitnami


> Source code for OCI images: https://github.com/bitnami/containers/tree/main/bitnami

If you look at the folders there, you'll see that all of the older Dockerfiles have been removed, even for versions of software that are not EOL.

For example:

PostgreSQL 13 (gone): https://github.com/bitnami/containers/tree/main/bitnami/post...

PostgreSQL 14 (gone): https://github.com/bitnami/containers/tree/main/bitnami/post...

PostgreSQL 15 (gone): https://github.com/bitnami/containers/tree/main/bitnami/post...

PostgreSQL 16 (gone): https://github.com/bitnami/containers/tree/main/bitnami/post...

PostgreSQL 17 (present): https://github.com/bitnami/containers/tree/main/bitnami/post...

> The source code for containers and Helm charts remains available on GitHub under the Apache 2.0 license.

Ofc they're all still in the Git history: https://github.com/bitnami/containers/commit/7651d48119a1f3f... but then they must have a very interesting interpretation of what "available" means.
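Not Bitnami-specific, just the general idea behind "it's all still in the Git history": one way to pull a deleted file's last contents back out (the repo and path arguments are placeholders):

```python
# Sketch: recover the last contents of a file that was deleted from a
# git repo but still exists in history. Illustrative only.
import subprocess

def recover_deleted(repo: str, path: str) -> str:
    """Return the contents a file had just before it was deleted."""
    # The newest commit that touched the path is the deletion commit...
    deletion = subprocess.check_output(
        ["git", "-C", repo, "rev-list", "-n", "1", "HEAD", "--", path],
        text=True,
    ).strip()
    # ...so the file's final contents live at that commit's parent.
    return subprocess.check_output(
        ["git", "-C", repo, "show", f"{deletion}^:{path}"],
        text=True,
    )
```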


It looks like setting up a mirror and CI/CD on top of GitHub might work for some time. GHCR is free for public images.
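A minimal sketch of such a mirror as a GitHub Actions workflow; the image names, schedule, and tags below are placeholders, not a tested pipeline:

```yaml
# .github/workflows/mirror.yml (hypothetical): periodically copy images
# you depend on into GHCR, which is free for public images.
name: mirror-images
on:
  schedule:
    - cron: "0 4 * * *"   # daily; pick whatever cadence you need
jobs:
  mirror:
    runs-on: ubuntu-latest
    permissions:
      packages: write      # needed to push to ghcr.io
    steps:
      - name: Log in to GHCR
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin
      - name: Mirror images
        run: |
          # placeholder image; repeat or loop for everything you depend on
          docker pull bitnami/postgresql:17
          docker tag bitnami/postgresql:17 ghcr.io/${{ github.repository_owner }}/postgresql:17
          docker push ghcr.io/${{ github.repository_owner }}/postgresql:17
```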


I've been thinking a lot about this kind of thing recently, and put up a prototype of htvend [1] that lets you archive dependencies during an image build. The idea is that if you have a mix of private/public dependencies, the upstream dependencies can be saved off locally as blobs, allowing your build process to be re-run in the future even if the upstream assets become unavailable (as appears to be the case here).

[1] https://github.com/continusec/htvend
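Not htvend's actual code, just a sketch of the general idea described above: fetch through a local content-addressed blob store, so later builds never need the upstream (the function names and paths are made up for illustration):

```python
# Sketch of a fetch-through blob store. On first use an asset is
# downloaded and saved under a hash of its URL; later builds read the
# local blob even if the upstream has vanished. (Illustrative only,
# not htvend's actual design.)
import hashlib
from pathlib import Path
from urllib.request import urlopen

def fetch_through(url: str, store: Path, download=None) -> bytes:
    """Return the asset at `url`, caching it in `store` as a blob."""
    download = download or (lambda u: urlopen(u).read())
    blob = store / hashlib.sha256(url.encode()).hexdigest()
    if blob.exists():                      # upstream no longer needed
        return blob.read_bytes()
    data = download(url)                   # first build: hit the upstream
    store.mkdir(parents=True, exist_ok=True)
    blob.write_bytes(data)                 # archive for future builds
    return data
```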


Their Dockerfiles include things like downloading prebuilt binaries from $SECRET_BASEURL, which is hosted by them; it can still be found in the git log, though. I imagine it will go offline / require auth soon enough.


Or if you have a decent-sized deployment in one of the clouds, it's extremely likely you already use their internal registry (e.g. AWS ECR). I know that we do. So it's just a case of setting up a few Docker build projects in git that push to your own internal registry.


Is it clear whether the Debian image sources will continue to be maintained?


I don't see any direct statement that they will stop maintaining the sources as open source.

We'll see :)


It is at the top of the announcement. This only affects OCI images, not source code: "The source code for containers and Helm charts remains available on GitHub under the Apache 2.0 license."


So, are they evil because they decided to stop sponsoring free network egress?


Others have already provided good answers. I wouldn't classify it as evil if all they did was stop maintaining the images & charts; I recognise how much time, effort and money that takes. Companies and open source developers alike are free to say "We can no longer work on this".

The evil part is in outright breaking people's systems, in violation of the implicit agreement established by having something be public in the first place.

I know Broadcom inherited Bitnami as part of an acquisition and legally have no obligation to do anything, but ethically (which is why they are evil, not necessarily criminal) they absolutely have a duty to minimise the damage, which is 100% within their power & budget as others have pointed out.

And this is before you even consider all the work unpaid contributors have put into Bitnami over the years (myself included).


It's also entirely fine by me if they delete these images. But not with a one-week time frame, as originally intended.

And sure, we can go ahead and discuss how this being free incurs no SLAs or guarantees. That's correct, but it doesn't change the fact that such a short time frame is rude and makes for a low-quality service. Compare it with how long it would take us to cancel a customer contract and off-board the customer...

And apparently it costs $9 to host this for another month? Sheesh.


If you're doing anything serious you should have Artifactory set up.


I agree. We do have mirrors set up, and we do have observability into the images we use across the infrastructure. That's how we concluded we only have a minor issue with this move, thankfully.

But just hitting users with "Just do this good practice" or "Just do this NOW" is still an uphill battle and will usually not have the best effect. We're currently doing this while consolidating our 2-3 Artifactory instances into one. If we just shut this stuff off because "you should have more control over your builds", there'd be a riot.

And sure, some people will fail the migration no matter what. But such a time frame still punishes all but the most professional companies.

That, all in all, is what I consider a good operations team's job: make a stink, provide a migration path, be noticeable, and push people off the old stuff. Just breaking things because "you should have" burns trust by the mile.


So much of this industry runs off of good will.

Free software. Free docker images/registries.

Then when a company is like "Hey, um, we need to make money", everybody gets upset.

We need a more sustainable way forward. I can't tell you what that looks like, though.


This is not an accurate characterization of what's generating the outrage.

The Path to Outrage is actually:

1. Launch HN with MPL licensing, "we <3 Open Source!11"

2. (a few moments later) Onoz, engineers cost money and that sweet VC juice dries up

3. echo BuSL > LICENSE; git commit -am"License update"; blog about "safeguarding our customers" or whatever

4. shocked pikachu face when users, who started using the open source product, and maybe even contributed bug fixes or suggestions on how to make the community product better, lose their minds about having the rug pulled out from under them

Contrast this with:

1. Launch HN, asking for money to pay for the product

2. Potential customers will evaluate if the product adds enough value for the money requested

There is no step 3 containing outrage, because everyone was on the same page from the beginning.


> 2. (a few moments later) Onoz, engineers cost money and that sweet VC juice dries up

> 3. echo BuSL > LICENSE; git commit -am"License update"; blog about "safeguarding our customers" or whatever

In this case, it's a lot more nefarious. My boss has a list of companies Broadcom has literally sucked dry for money, regardless of whether the company will make it two more years. Pretty much everything maintained by VMware Tanzu and VMware has to be considered a business risk at this point.

And I maintain: I'm not even mad that the free images are going away. I'm saying it's unprofessional and rude how they are being removed. Which, per the last point, isn't surprising with Broadcom.

And sure, the answer is to do it all in-house. But the industry has saved a lot of manpower and made tremendous progress by not doing that and sharing effort for a long time.


Why do you expect for-profit organizations to provide tools for free?

Eventually the rug needs to be pulled.

A non-profit foundation is probably closer to what you want.


> The evil part is in outright breaking people's systems, in violation of the implicit agreement established by having something be public in the first place.

Something, something, sticking your hand in a lawnmower and expecting it not to be cut off.

Broadcom is second only to Oracle.


Would you mind getting in your time machine and telling me this before Broadcom acquired Bitnami?


That's an assumption, but Broadcom is most likely using open source software in all of their apps. So they do have a moral obligation to give something back. Just saying it's fair that they don't want to provide something for free anymore isn't really all that fair.


Oh, don't get me wrong: my claim is that they are not even clearing the absolute lowest bar of stewardship of the Bitnami repositories, which is to do no harm.


Expecting moral behavior from Hock Tan isn’t likely to pan out.


The images are currently on Docker Hub. If $9/month (or $15; not 100% sure whether $9 covers organizations) to keep those images available is too much for Bitnami, I'm sure there are many organizations who wouldn't mind paying that bill for them (possibly even Docker Hub itself).


Broadcom is deciding to host it on their own registry and bear the associated cost of doing so. Not sure what this has to do with sponsoring network egress.


Does said network egress cost $50k per user?


I tried to do brick sorting (because we have great detection and classification models at https://brickit.app/)

It turned out to be much more complex than I expected.

The biggest issue was grabbing. The typical approach for this type of task is to use a vacuum suction actuator, but that doesn't work for Lego parts: their studs prevent the suction from forming a seal.

There are also issues with part separation.

We abandoned the idea, but I still hope we can return to it and get something working some day.


This one actually works :)

Source: I'm responsible for ML development at Brickit


One question about Brickit: the main use case that I, and many many dads of Lego kids, have is that our kids want to reassemble sets, and we spend ages searching for the pieces. Yet Brickit works by identifying and recommending its own mini-set lists. Is this use case out of scope (because of the business model), or is it technically difficult or unsatisfactory in execution (colors, accuracy)?


The use case "let me reassemble all the parts that I have" is out of scope for the Brickit app. It works explicitly with the contents of a single scan; that's why you see small ideas to build: it's just what fits the parts you scanned.

But! We recognize the intent, and we have something in the works which will be released very soon, stay tuned!


Great! I will follow your updates!


My kids mixed up about 20-300 sets, some big ones, and it feels quite hopeless to sort through them. Any tips on how to untangle them, or is it a lost cause? I do vision research and am mulling an AI-based approach too.


would love to hear more on the architecture choices you made.

do your models run on device? what's the general CV backbone?


Everything runs on device in TFLite, and it gives us some headaches, especially in the Android ecosystem.

We don't use anything fancy: the detector is SSD-like, and the classifier is either a ResNet or an EfficientNet, depending on device capability.
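A rough sketch of that two-stage shape, with the model calls stubbed out (in the real app they would be TFLite interpreter invocations; every name here is made up for illustration):

```python
# Two-stage pipeline sketch: an SSD-style detector proposes boxes with
# scores, then a classifier labels the crop inside each surviving box.
# `detector` and `classifier` stand in for real TFLite model calls.

def run_pipeline(image, detector, classifier, score_threshold=0.5):
    """Return (box, label) pairs for confident detections."""
    results = []
    for box, score in detector(image):          # SSD-like proposals
        if score < score_threshold:             # drop low-confidence boxes
            continue
        x0, y0, x1, y1 = box
        crop = [row[x0:x1] for row in image[y0:y1]]
        results.append((box, classifier(crop))) # ResNet/EfficientNet stand-in
    return results
```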


Anoop, first of all: thanks for your efforts, we switched to Bruno from Postman and are quite happy!

And a question: can you share your plans for the VS Code extension? It has been broken for a while; do you have plans/capacity to revive it?


Interesting. Is there a possibility of something like a class action where iPhone owners sue Apple for restricting access to Fortnite? I bet there are plenty of angry gamers.


I read this and cannot believe that I can optimize our $5-6K GCP egress bill to zero. Just wow.


There's a plugin for VS Code that brings the draw.io editor to local .drawio files. IMO it's the best of both worlds: a nice editor and git version control.

This plugin also works fine in github.dev, so it's like a better version of the official draw.io site.


We tried Coroot recently in our production AWS cluster.

I must say it's handy. Cost analysis helped us shave ~20% off our EKS bill.


Foxglove is my best argument for using ROS instead of a custom solution in semi-robotics cases (several cameras/sensors + ML, but no actuators).


Yep. I worked at a startup making a Laser Direct Imaging PCB photomasking machine, basically using lasers to do photomasks, a couple of years ago. When I came in, there was a custom IPC thing, sending essentially Python dicts over ZeroMQ (IIRC). It worked well enough to get the machine running and doing its thing. For calibration of the cameras (needed to see how warped the PCB was and adjust the pattern), I had to keep track of transforms etc. A perfect use case for something like ROS's TF, in some incarnation.

The machine was not a 'robot' per se, but there were many sensors, decision making, and actuation, so it was kind of like a robot.

For debugging the images and calibration transforms, we needed to write custom stuff. The whole thing was akin to ROS; with a couple of days' work it could have been made to work with it. But alas.
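For the transform bookkeeping specifically, the TF-like part can be tiny; a sketch with 2D homogeneous transforms (the frame names are invented, and this is the idea, not ROS's actual TF API):

```python
# Minimal TF-like transform registry: store parent->child transforms as
# 3x3 homogeneous matrices and compose them along a chain of frames.
# (A sketch of the concept, not ROS's actual TF API.)

def matmul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

class TransformTree:
    def __init__(self):
        self.edges = {}  # (parent, child) -> 3x3 homogeneous matrix

    def set_transform(self, parent, child, matrix):
        self.edges[(parent, child)] = matrix

    def lookup(self, *chain):
        """Compose transforms along a chain, e.g. 'machine', 'camera', 'pcb'."""
        result = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
        for parent, child in zip(chain, chain[1:]):
            result = matmul(result, self.edges[(parent, child)])
        return result
```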

