
I vibe my way through my ideas. I look at LLM code sometimes to cry and cringe, and then I beg the LLM to have the basic dignity and self-respect to write code it shouldn't be ashamed of. But then I instruct it to do something and it does it with a speed I'm never able to achieve, even if the code is ugly. But it works.

Works until you discover subtle bugs hiding behind ugliness.

Which is true for human-written code as well.

In both cases, it's your processes (automated testing, review, manual QA) that are the bulwark against bugs and issues.

With AI, you can set up great processes like having it check every PR against the source code of your dependencies or having it generate tests for what's an intermediate step or ephemeral solution that you would never write tests for if you had to do it yourself.

There's this idea on HN that if you delegate too much to AI, you get worse code. Presumably that comes from not appreciating all the code-improving processes you can delegate to AI, particularly processes you were never doing for hand-written code.


Yes, there are so many. As there are in hand-written code. I don't take LLM-written code for granted and I rewrite it sometimes. I know it's not perfect. But it's useful.

Compiled code is not perfect either. But who writes assembly by hand anymore? Yes, an LLM is another layer; its output may be ugly and slower, but it's much faster to use.


The thing is that with code you've written yourself, you wrote it in a way you understand and you have a mental model of how it works, so it is much easier to reason about potential edge cases that have not been covered.

If you like BeOS, take a look at Haiku https://www.haiku-os.org/ , it's a very nice and very usable system based directly on BeOS.

And a much better option: running the real deal instead of some compatibility layer.

I suspect Linux has better hardware support than Haiku, which is not exactly easy to run on laptop hardware (w/ wifi, sleep, &c)

I suspect it was a freak occurrence, but I actually had incredible luck running Haiku on an old laptop back in the day. It was incredibly fast, and just about all the amenities you'd expect worked with no or minimal intervention.

Sometime in the last year I ran the Haiku live image off USB on my only laptop (a 2011 X201t); it worked fairly well.

Me too. The laptop was so old that I couldn't play a 360p mpg video without pauses on Windows 2K or XFCE, but it ran smoothly with BeOS5 (the Intel-based abandonware version)

Even running from an HDD?

I recently tried the latest version (Beta 5?) on a 2005-ish PC with an even older HDD and it ran surprisingly fast off that. The only thing where it was somewhat slow was web browsing.

Yeah. I installed it to HDD and it worked great. You'd think the thing had an SSD, it was so snappy. No issues with compat on the drive or anything.

> I suspect Linux has better hardware support than Haiku, which is not exactly easy to run on laptop hardware (w/ wifi, sleep, &c)

So true. I had an old Dell Latitude D620, 3GB/500GB, 1.66GHz Intel Core Duo processor, and it was sound that tripped me up. Haiku was lightning fast on this machine.

I think that eventually I might've gotten sound to work but... this was many years ago and the laptop was mostly for testing light-weight distros on modest hardware.


Yeah, a good opportunity to contribute upstream.

Presumably there's a lot more modern software written for Linux which you'd end up running through a compatibility layer from Haiku? The better option seems relative. I could be misremembering how Linux programs are handled on Haiku though.

VitruvianOS has the clothes of BeOS, which is nice and refreshing.

But Haiku has the soul.


Maybe the fallacy is not exploring what a given OS is great at?

We don't need to clone UNIX all over the place.


How strictly do you mean “UNIX clone”? Because Linux isn’t strictly UNIX. But then at the other end of the scale, BeOS was also partially POSIX compliant and shipped with Bash and plenty of UNIX CLI tools.

Perhaps it’s better to play it safe and just run DOS instead ;)


It certainly is; what it is not is a derivative.

BeOS, in its final commercial version, certainly did not allow compiling UNIX applications beyond the common surface that is part of the ISO C and ISO C++ standard libraries.


Maybe in early BeOS versions, but BeOS R5, especially with the BONE updates, had fairly decent POSIX compatibility for the time. If you do "ls /" you can immediately see that BeOS has some BSD reminiscence, but it certainly isn't a UNIX OS in itself.

But Vitruvian is running its own graphics stack, so no X11 or Wayland applications will run, afaict.

Not quite, really. Vitruvian runs virtually the same software stack as Haiku, and there's a haiku-wayland that works. However, on Vitruvian the app_server could provide real GBM buffers, which would give us pretty much native rendering. We're still working on it, but you'd have the advantages of a BeOS-like GUI and the power of Linux!

So what's the point of this? It's essentially a different Haiku?

I think it's the reverse: it's Haiku with a Linux kernel, so it works with more hardware.

With Xlibe they should.

In Haiku's windowing system, each app window gets its own thread, so dialog boxes run in a different thread to the main window and a different thread to the core app. In Linux, all windows share the same message loop thread. A simple port reveals threading issues in Haiku which don't exist on Linux.

To work around this, all window messages in ported apps are marshalled to execute sequentially. A small additional overhead, and the system doesn't spread work across available threads, so it's noticeably slower.

Compare a native Haiku app with a ported app: one is smooth as ice while the other isn't. Users notice it. This is on many-core systems.
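The contrast between the two dispatch models can be sketched like this (illustrative Python only; Haiku itself is C++, and the class names here are made up for the sketch, not Haiku API):

```python
import queue
import threading

class NativeWindow:
    """Haiku-native style: each window runs its own message loop
    in its own thread, so windows handle messages in parallel."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()
        self.handled = []
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        # Per-window loop: blocks on this window's own queue.
        while True:
            msg = self.inbox.get()
            if msg is None:  # sentinel: shut the loop down
                break
            self.handled.append(msg)

    def post(self, msg):
        self.inbox.put(msg)

    def quit(self):
        self.inbox.put(None)
        self.thread.join()

class MarshalledLoop:
    """Ported-app style: one message loop thread serves every window;
    all messages are marshalled onto a single queue and run sequentially."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while True:
            item = self.inbox.get()
            if item is None:
                break
            window, msg = item
            window.handled.append(msg)  # every window served by one thread

    def post(self, window, msg):
        self.inbox.put((window, msg))

    def quit(self):
        self.inbox.put(None)
        self.thread.join()

class PortedWindow:
    """A window with no thread of its own; it delegates to the shared loop."""
    def __init__(self, name, loop):
        self.name = name
        self.loop = loop
        self.handled = []

    def post(self, msg):
        self.loop.post(self, msg)
```

In the first model, a slow handler in one window never blocks another window's loop; in the second, every message waits its turn behind all others, which is the serialization (and the perceived sluggishness on many-core machines) described above.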


> In Linux, all windows share the same message loop thread.

I'm no expert, but aren't you just talking about Xorg here? As far as my limited knowledge goes, there's nothing inherent in the Wayland protocol that would imply this.


And things such as Ruby don't work on it. Well, what shall I say? The "best" ideas get beaten by what already works very well in practice, aka Linux. People need to compare to Linux and, if there are failure points, fix them. Haiku keeps failing at core considerations. If you look at guides, they recommend to "run in qemu". Well, that is a fever dream. They need to focus on real hardware. And they need to support programming languages just as Linux does. And modern hardware too.

Would be great if Haiku could shape up, but the development is way too slow. I've been looking at it for many years; they are simply unable to leave the dream era. ReactOS is even worse in this regard. At some point those projects gave up on the real world. I think qemu, while great, kind of made this problem worse, since people no longer focus on real hardware; the mantra is "if it works in a VM, it is perfect". Until one notices that it doesn't work quite as well on real hardware.

Case in point: Ruby does not work on Haiku. Ruby works well on BSD (for the most part), Linux (no surprise) and also Windows (a bit annoying, but it does work there too, and surprisingly well, for about 99% of the use cases, though it is annoyingly slower in startup time compared to Linux).

> I've been looking at it for many years - they are simply unable to leave the dream era.

Sit down and do the work needed to get Ruby running properly on Haiku instead of sitting here complaining and basically admitting that you're just being a noisy spectator... On Hacker News, no less.


Huh, PHP works on Haiku, and there aren't even that many #ifdefs for it in the source. If a language can be ported to Windows, Haiku should be a no-brainer. Seems more a matter of having someone interested in maintaining the port, and I think it ultimately just points to the size of Haiku's userbase being a rounding error.

> And things such as ruby don't work on it.

What doesn't work about it? We have Ruby in the software repositories, and Ruby is required to build WebKit (and we build WebKit on Haiku), so clearly it works for that much at least. I don't see any open tickets at HaikuPorts about bugs in the port, either.


Getting Rebol running on Haiku was a fairly easy task, so I guess it shouldn't be that hard for Ruby either, if someone's willing to do the work.

People aren't really running servers on Haiku, which is basically the only remaining relevance of Ruby in 2026: Rails-powered web applications.

Then again, there is a golden opportunity to become a Ruby contributor; a road to fame on the Ruby contributors list.


Maybe 5% of what I use Ruby for is on the server. I'd suggest those of us who use Ruby client-side are likely to outnumber Haiku users by an order of magnitude or two.

Homebrew would like a word.

Homebrew wouldn't support Haiku anyway.

Mostly relevant for folks on macOS, and I skip it when using a Mac anyway, preferring the UNIX and SDK tools in the box, so it's kind of debatable.

Debatable because you don't use it?

What does not work? You can install Ruby 3.2.9 (2025-07-24) with the point-and-click package manager HaikuDepot, and it works perfectly fine.

Vitruvian can potentially have everything Haiku has (it's the same stack, BTW) but with the power of Linux. It would be cool if people could start to appreciate both visions.

I'm so confused right now.

I've been a fan of the BeOS philosophy since the Personal Edition, but I never had the occasion to run it on bare metal as I was too poor to have two machines back in the day, and now I miss a login/password prompt at boot on Haiku. But I'm following it closely and I hope I'll be able to install it on my X220 as a web/mail machine!

You didn’t need two machines to run BeOS. It ran very smoothly on a Windows PC via dual booting.

BeOS 5 could even be installed on a Windows FAT32 partition alongside Windows (it created a 50MB virtual disk).

At one point in time I had Windows 95, Windows 2000, Linux (possibly Slackware) and BeOS 5 all running on the same single PC.


I was probably younger than you, and on the family computer. I couldn't do what I wanted and mess with booting back then! I remember trying the PE edition through Windows but couldn't install it.

Wow.

"Software authors should have basic decency and respect for the users of their software." Why? Not at all.

"Publishing a project as OSS doesn't relinquish you from this responsibility. It doesn't give you the right to be an asshole." You are free to be an asshole and it's nobody's business.

Actually it's exactly the opposite. The feeling of superiority and privilege that just because you use some software you have the right to command its author is the very definition of being an asshole.

"I'm demanding that people work for free for my benefit! Unbelievable." Yes, that's unbelievable.


> "Software authors should have basic decency and respect for the users of their software." Why? Not at all.

Because that's the core reason why we build software in the first place. We solve problems for people. Software doesn't exist in a void. There's an inherent relationship created between software authors and its users. This exists for any good software, at least. If you think software accomplishes its purpose by just being published, regardless of its license, you've failed at the most fundamental principle of software development.

> you have any right to command its author is the very definition of being an asshole.

Hah. I'm not "commanding" anyone anything. I'm simply calling out asshole behavior. The fact is that software from authors who behave like this rarely amounts to anything. It either dies in obscurity, or is picked up by someone who does care about their users.

> "I'm demanding that people work for free for my benefit! Unbelievable." Yes, that's unbelievable.

Clearly sarcasm goes over your head, since I'm mimicking what you and others think I'm saying. But feel free to continue to think I'm coming from a place of moral superiority and privilege.


If you want software to be free for everyone except for the authors to use, modify, distribute, and sell without restriction I am sure you could work with a lawyer to draft a new “Apache for everybody on earth other than the maintainers, who permanently waive all rights” license.

If that’s what all good maintainers do, and intend to do, there’s really no reason for maintainers to tempt themselves by using awful “open” licenses that allow them the loophole of doing what they want with the software they create. Plus who wouldn’t want to codify that they’re not an asshole?

It shouldn’t be hard to get maintainers that intend for their software to “amount to something” to adopt it, and it would bring a sense of comfort to the people that rely on the software that you write when you announce that it’s the new default license for everything in your repos.


I have no idea what you're talking about.

Your argument is about some sort of covenant between the developers/maintainers and the users. That’s what a license is. That is the agreement between the parties. In that sense your problem isn’t with individual developers, it’s with permissive licensing.

If you don’t like it when OSS maintainers pivot to proprietary software, why not just create a license that precludes that from happening? The maintainers could waive their rights to pivot or later reuse the code that they wrote in any proprietary software, and that way people could just choose to only create and use NoRugPullForeverEver-licensed software and avoid the headaches altogether.


No, do not poison passwd, let systemd choke on this.


Using an LLM is perfect for writing documentation, which is something I always had problems with.


As someone who has dealt with projects with AI-generated documentation... I can't really say I agree. Good documentation is terse, efficiently communicating the essential details. AI output is soooooooo damn verbose. What should've been a paragraph becomes a giant markdown file. I like reading human-written documentation, but AI-slop documentation is so tedious I just bounce right off.

Plus, when someone wrote the documentation, I can ask the author about details and they'll probably know since they had enough domain expertise and knowledge of the code to explain anything that might be missing. I can't trust you to know anything about the code you had an AI generate and then had an AI write documentation for.

Then there's the accuracy issue. Any documentation can always be inaccurate and it can obviously get outdated with time, but at least with human-authored documentation, I can be confident that the content at some point matched a person's best understanding of the topic. With AI, no understanding is involved; it's just probabilistically generated text, we've all hopefully seen LLMs generate plausible-sounding but completely wrong text enough to somewhat doubt their output.


Classic perfect/good.

The choice is not usually “have humans write amazing top notch documentation, or use an LLM”.

The choice is usually “have sparse, incomplete, out-of-date documentation… or use an LLM”.


And my claim is that the latter is better.


Cool, so just ignore documentation then. Problem solved for everyone.


I dont see how that solves anything.


* Projects can focus on code first, and do best-effort on docs for low cost
* Most of us get reasonable quality documentation, much better than what some poor developer would turn out in spare moments
* You are spared from the outrage of imperfect documentation

We wouldn't have these silly arguments?


Gah hopefully the meaning was clear from context, but I just realized I said "latter" when I meant "former". Inconsistent human documentation is better than miles upon miles of AI-slop documentation.


Given that people have access to LLMs themselves, publishing their output in lieu of good documentation (no matter how sparse) seems like it’s mostly downside.


Probabilistically generated text is light years better than my human generated mess. I know my limits and documentation is one of them.


This immediately invalidates a software or technical project for me. The value of documentation isn't the output alone, but the act of documenting it by a person or people that understand it.

I have done a lot of technical writing in my career, and documenting things is exactly where you run into the worst design problems before they go live.


ssh?


How is it misleading? It says "nanny state vs. Linux", in second paragraph says "several states in the US", then mentions EU and Brazil and California is mentioned first in seventh paragraph. It's not about California.


The original title is "Can LLMs be computers?"

But the right question is, should they?


Maybe they should go open source from the start, then there's nothing to leak.

P.S.: And strangers will sometimes help you find vulnerabilities (and sometimes be very obnoxious but that's not open source's fault).


When I worked for the government in Norway, it slowly changed to all code being developed in the open. 3k repos here now: https://github.com/orgs/navikt/repositories

When I started it was a big security theater. Had to develop on thin clients with no external internet access, for instance. Then they got some great people in charge that modernized everything.

The only drawback is that when you quit, you have to make sure to unsubscribe from everything, hehe. When quitting a private company I was just removed from the GitHub org. Here I was as well, but I was still subscribed to lots of repos, issues, PRs, heh.


Very cool! Do they accept external contributions, e.g. from Norwegian citizens? Also, was there any thought given to "digital sovereignty" (wondering because the repos are hosted on a US service)?

I'm also surprised that you were able to (or expected to?) use your private GitHub account for your work.


Not sure how it is now, but when I worked there ~8 years ago we weren't really equipped to accept contributions. Both from a licensing perspective (CLA), but also that we had our own timelines, projects and prioritizations in the team. So most applications were open source more in the sense of source available. Some utils (like generators for Norwegian mock data, or libraries handling Norwegian addresses or whatever) that were actively used by other companies could get some proper contributions once in a while, though.


Yeah. In these cases it's not like anyone is going to spin up their own instance and start competing with you.

Code for government / society-critical things should really be public unless there are _really_ good reasons for it not to be, where those reasons are never "we're just not very good at what we're doing and we don't want anyone to find out".


It probably depends where you live. When I was young, time was infinite and money was scarce. Now they're both the limit.

