Hacker News | AIorNot's comments

I kind of like BAML (https://boundaryml.com/) - I've been using it in production.

Edit: read the article - it's really good - that cycle of AI engineering progression is spot on.


The OpenClaw approach has moved into frontier companies, I see.


Yes! It's all happening.


"Everyone has a plan until they get punched in the mouth" - Mike Tyson


By far, most of the code LLMs write is for crappy CRUD apps and webapps, not pacemakers and rockets.

We can capture enough reliability in what LLMs produce there via guided integration tests and UX tests, along with code review, using other LLMs as reviewers, and other strategies to prevent semantic and code drift.

Do you know how many crap WordPress, Drupal, and Joomla sites I have seen?

That kind of work can just be automated away.

But I've also worked in high-end and mission-critical delivery, with more formal verification etc. - that's just moving the goalposts on what AI can do - it will get there eventually.

Last year you all here were arguing AI couldn't code - now everyone has moved the goalposts to formal, high-end, and mission-critical ops. Yes, when money matters we humans are still needed, of course - no one is denying that. The question is the utility of the solo human developer against the onslaught of machine-aided coding.

This profession is changing rapidly - people are stuck in denial.


> that’s just moving the goalposts on what AI can do- it will get there eventually

This is the nutshell of your argument. I’m not convinced. Technologies often hit a ceiling of utility.

Imagine a “progress curve” for every technology, x-axis time and y-axis utility. Not every progress curve is limitlessly exponential, or even linear - in fact, very few are. I would venture to guess that most technological progress actually mimics population growth curves, where a ceiling is hit based on fundamental restrictions like resource availability, and then either stabilizes or crashes.

I don’t think LLMs are the AI endgame. They definitely have utility, but I think your argument boils down to a bold prediction of limitless progress of a specific technology (LLMs), even though that’s quite rare historically.


I agree that LLM architecture might hit a ceiling (although the trajectory is still upward at present), but I meant deep learning in general.

I do think there is a great deal of VC-baiting hype in statements by Dario and Altman about AI coding, but at the same time the progress has indeed been positive.

We've finally proven, or unlocked, the secret of learning in machines - the only question is how fast that progress curve is. Yes, it might get stuck for a few years, but I think this really is an inflection point we've reached with these technologies.


Unless you have a private theater room, it's not quite the same thing as watching a first-run movie in a darkened, crowded theater - and even that misses the social aspect of an anticipated picture.

The communal experience is special.

On top of that, most people don't have the attention span to sit through a film without opening their phones - film is supposed to be about capturing your attention, not just entertainment.

Otherwise, watch it on your laptop for all I care.


>The communal experience is special

Yes, but only if your community is composed of well-behaved people. Apparently, this just isn't the case in many places.


This comment is spot on.


AI does not have LONG context, long-term memories, or LONG intentionality - it's not aware, and it can't remember the plot without being spoon-fed the details each time from scratch.

It's like an amnesiac genius who once wrote a masterpiece and keeps cycling, losing his train of thought after some fixed amount of time.

This groundhog-day effect is mitigated in some respects by code - we create key-value memories, agents, stores, and countless ways to connect agents via MCP and platforms/frameworks like A2A - but until we solve that longer-lived instance problem we won't be able to trust these systems without serious HITL (human-in-the-loop) oversight.
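A minimal sketch of the key-value memory pattern described above (all names here are hypothetical; a real system would back this with a vector store or database): since each model call is stateless, prior facts survive only if code persists them and re-injects them into every prompt.

```python
class KeyValueMemory:
    """Toy stand-in for a persistent memory backend (file, DB, vector store)."""

    def __init__(self):
        self._facts = {}

    def remember(self, key, value):
        # Persist a fact so it outlives any single model invocation.
        self._facts[key] = value

    def as_context(self):
        # Serialize everything the "amnesiac" model must be re-told each turn.
        return "\n".join(f"- {k}: {v}" for k, v in sorted(self._facts.items()))


def build_prompt(memory, user_message):
    # Every inference run starts from scratch; state travels via the prompt.
    return (
        "Known facts (from previous sessions):\n"
        f"{memory.as_context()}\n\n"
        f"User: {user_message}"
    )


memory = KeyValueMemory()
memory.remember("project", "checkout service rewrite")
memory.remember("preferred_language", "Rust")
prompt = build_prompt(memory, "Continue the code review.")
print(prompt)
```

The spoon-feeding step is `build_prompt`: nothing the model "knows" persists unless this layer carries it forward, which is exactly the HITL-heavy fragility the comment is pointing at.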

I think we need models that update their own weights, and we need some kind of awareness cycle rather than just a forward-pass inference run with a bigger context window.


Me too - I'm 50 and have spent the past 3 years building AI startups, some successfully, and in the last two months I've built two side projects with Claude Code. It's been amazingly good in the past month with Opus.


"what is the role of humans in a scenario where work is no longer necessary? This is significant because, since the industrial revolution, work has played an important role in shaping an individual’s identity. How will we occupy our time when we don’t have to spend more than half of our waking hours on a job"

Umm, I have been working in AI across multiple verticals for the past 3 years, and I have been far busier and more stressed, with far less job security, than in the 15 years before that in tech.

For now this is far more accurate: https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...

Wake me up when the computers run the world and I can relax... but I don't think it's happening in my lifetime.


Evolution never lets you relax, it only breeds more effective predators.


Humans are the worst species, aren't we?


No, that's the domestic cat.


If an animal gets its head stuck, predators will casually eat its flesh without first killing it. This makes sense from their purely selfish point of view, especially as it provides a sort of short-term "food preservation".

Ducks have been documented engaging in necrophilic gang-rape.

Dolphins use literal waterboarding to rape.

Humans are just smarter assholes than other animals. Not worse or better.


No


I think we have the greatest depth and breadth of cruelty


Our capabilities are so high and our population so differentiated that we basically hold nearly all the records for everything (barring some extremophile metrics), so it makes sense.


It helps that we write the record categories. We only measure stuff we find relevant to our existence, which happens to be what we probably do sort of well.


Yeah, cool, what categories do you have in mind? Sure, we have bias, but not infinite bias.

I'll start!

How about sea urchin destruction? I bet otters and sheepshead fish have little bookies keeping track, and they know which species or virus holds the records! Very fun stuff! I bet they have little tablets to keep track of their records going back thousands of years? Oh man, yeah, good point about species bias!


To kinda shortcut the bullshit, I think we should try to think about what cruelty is. I'd say it's a kind of celebration. The cruel animal is celebrating in the most visceral, most direct way the fact that they aren't in the position of the victim. Cruelty necessarily involves a kind of excess. You don't tend to find excess expenditure as much in the animal kingdom as you do among our species' section of it. That's why I feel like we exercise the most cruelty.

The records thing is silly. Records are necessarily aggregational, the top performer of many things. Everything sets records all the time at the specific unique thing it does. When we start choosing aggregations of things there's a combinatorial explosion in ways to choose them. We only get by by choosing an infinitesimal subset of those aggregations to consider, and thus to find records in by extension.

