I 100% agree with this in an individual-person sense, but in a humanity sense someone does understand Linux very deeply and is very intentional about how they change it, which to me is how I gain trust in it.
Linux is less than 40 years old. Most of the people who designed it are still alive. How will the situation be in 40 years, when the current maintainers are dead? (reiserfs comes to mind - it was just becoming great when [censored], and filesystems in Linux went backward for many years. Will that be allowed to happen next time?)
There are systems still in use from the 1960s (maybe before) - the original authors are at least retired and likely dead. I question how well the replacements understand all that. Sure they have had to dig in and understand some parts, but what about the parts that just keep working and don't need new features?
Genuine question: is there a big inherent difference between "I don't understand this thing but I think this other human does," and "I don't understand this but I think this other AI does"?
If your answer is "yes," do you think that's inherent to the (metaphysical?) fact of it being AI or to specific limitations to current AI? If the latter, what changes to AI would let you trust it?
I don't know. AI has an understanding of some really complex things, but it also does some really stupid things. Depending on which it did most recently for me, I change my answer.
The question is whether AI understands well enough to maintain that thing for whatever maintenance I need to do in the future.
A comment I cannot stop thinking about is "we need to start thinking about production as throwaway." Which is a wild thought when I reflect on my career: we had so many DBs and servers that we couldn't touch because they were special snowflakes.
>Honestly, the post itself reads very generated, very rage bate. I have so much more faith in us and our hobby/industry than this blog post.
Don't get me wrong. I think the future is very bright for software. I have friends who are scientists and biomedical professionals, and I am excited to see what they are able to do with the power of software once they don't need to care about syntax and can lead with their intentions alone.
The rage bait part is mainly my frustrations manifesting. As an SRE, my annoyance comes through a bit when it comes to how fast developers are shipping vs how fast our guardrails can keep up.
I am mainly seeing this across a lot of my engineer friends and mentors whom I respect deeply.
They are using swarms of agents to build CRMs, run small businesses, and manage their homelabs.
There's a massive difference between launching a piece of software and launching a successful business.
Over the last couple of months I've seen a load of new "product launches" in my niche, but when you look at them they're largely vibecoded and don't show deep understanding or sustainability, so it's pretty likely you'll never see them become successful businesses.
Looking at some of the related places like /r/sideproject/, there are a lot of releases, and I'd be willing to suggest that most of them are using LLMs.
Then, respectfully, what is the point? Does the trillions-of-dollars AI industry exist to support a few hobbyists building niche products to scratch their own itch? I thought the promise here is increased productivity, presumably in the economic sense.
There seems to be a lot of hype, and has been for years, but I’m not seeing it materialize as actual economic output. Surely by now there should be lots of businesses springing up to capture all of this value created by vibecoded software.
Whilst I have no special knowledge, my expectation is it'll do both. If you reduce the barriers to coding you'll get more code, both at the hobbyist/one-person level and also at the large corp level.
Whether that translates into more value for those larger corps is the trillion dollar question :) Writing code is a small part of the process of finding and shipping features that customers want, so it remains to be seen how much LLM tooling translates into that.
I think it's fairly widely accepted that from a financial standpoint we're in an AI/LLM bubble. There has been more investment than we're likely to see back in financial benefits, but it's impossible to predict to what degree (if you can predict that, and the timing, you can make a lot of money!!)
I like the thought behind the piece, but what I think the criticisms are reacting to is the profusion of short, bursty sentences (just like the ones in the parent post), which can be great when used sparingly, but start to feel repetitive and have a "LinkedIn"-ish vibe, at least to me. For example the very end:
Most of you won't be able to answer that. And you already know it.
That's the conversation this industry needs to have. Not tomorrow. Now.
I hope you don't take this the wrong way and do continue writing - I enjoyed this piece, just wanted to give some constructive feedback.
Then just post your opinions rather than the text the LLM dreamed around your opinions. Short posts and tweets tend to be well-liked on HN, there is no need to puff it up to a big blog post.
Look, I'm sympathetic to not feeling like you're a good writer, but there are plenty of writing styles that don't turn your opinions into overly dramatic AI slop. And now I don't even know which opinions are your own and which are from a GPT, hence my "unreadable" comment, even if it sounds harsh. It is literally impossible to infer what your opinions actually are when they have been butchered this badly into slop.
>The real problem isn't lack of tools - it's that the knowledge is ephemeral.
This is 100% the problem. This is why we are trying to capture business context and attach it to the infra itself vs just keeping it in docs.
>How are you handling the drift problem? Auto-discovery polling, change events from cloud providers, or something else?
We built a pretty awesome approach to handling the drift problem. We do a combination of indexing, change event capture, and then user behavior. So if a user is looking for information, we pull the live value first.
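For anyone curious what "pull the live value first" might look like in practice, here's a toy Python sketch of that pattern as I understand it - serve the fresh provider value on lookup, log a drift event when it disagrees with the indexed snapshot, and heal the index opportunistically. All names here are made up for illustration, not the actual product's API:

```python
from dataclasses import dataclass, field


@dataclass
class DriftAwareIndex:
    """Toy 'live value first' lookup: return the fresh value from the
    provider, and record a drift event whenever it disagrees with what
    the background index last captured."""
    index: dict = field(default_factory=dict)      # last indexed snapshot
    drift_log: list = field(default_factory=list)  # detected mismatches

    def reindex(self, key, value):
        # Background indexing / change-event capture would call this.
        self.index[key] = value

    def lookup(self, key, fetch_live):
        live = fetch_live(key)           # user-driven: pull live value first
        cached = self.index.get(key)
        if cached is not None and cached != live:
            self.drift_log.append((key, cached, live))
        self.index[key] = live           # opportunistically heal the index
        return live


# Hypothetical cloud state: the index says "t3.micro", reality moved on.
idx = DriftAwareIndex()
idx.reindex("web-1/instance_type", "t3.micro")
live_state = {"web-1/instance_type": "t3.large"}
value = idx.lookup("web-1/instance_type", live_state.get)
```

The nice property is that the most-queried resources stay the freshest, since every user lookup doubles as a drift check.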
>I 100% agree with this in a individual person sense, but in a humanity sense someone does understand linux very deeply and is very intentional on how they change it which to me is how I gain trust in it.
Does trust change when the entire SDLC is AI?