
Who in their right mind would intentionally deploy non-deterministic, unreviewable and unprovable software to critical systems?


My colleagues at the head of a company. I’m one of four bosses. One of us is pushing for AI in every single meeting. Another is ignoring her. The last one is starting to ‘see her point.’ I’m considering quitting if this goes too far, but I’m unwilling to make that threat yet, as it’s a bridge I can only cross once.

Anyway. To me it just speaks to the disdain for semi-intellectual work. People seem to think producing text has some value of its own. They think they can short-circuit the basic assumption that behind every text is an intention that can be relied upon. They think that if they substitute this intention with a prompt, they can create the same value. I expect there to be some kind of bureaucratic collapse because of this, with parties unable to figure out responsibility around these zombie-texts. After that begins the cleanup: legislating, and capturing in policy what the status of a given text is, etc. Altman & co. will have cashed out by then.


It's interesting to still hear this kind of sentiment.

> People seem to think producing text has some value of its own.

Reading this sentence makes me think the author has never actually seen agentic work in action. Producing value out of text does work, and one good example is putting the model in a loop with some form of output verification. It's easy to do with programming - type checker, tests, linter, etc. - so it can chat by itself with its own results until the problem is solved.
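
Roughly the sketch below, just to make the loop concrete (the helper names and the mypy/pytest commands are illustrative only; llm_complete() is a stand-in for whatever model call you actually use, not a real library function):

    import subprocess

    def llm_complete(prompt: str) -> str:
        # Stand-in for the actual model call.
        raise NotImplementedError("replace with your actual model call")

    def verify(path: str) -> tuple[bool, str]:
        # Run the type checker and the tests; return (passed, combined output).
        result = subprocess.run(
            ["sh", "-c", f"mypy {path} && pytest -q"],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stdout + result.stderr

    def solve(task: str, path: str, max_rounds: int = 5) -> bool:
        feedback = ""
        for _ in range(max_rounds):
            code = llm_complete(task + "\n" + feedback)  # model sees its own errors
            with open(path, "w") as f:
                f.write(code)
            ok, feedback = verify(path)
            if ok:
                return True  # checks pass, so the loop stops
        return False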

I also find it personally strange that these discussions so often need a reminder that the rate of change in capabilities is also a big part of "the thing" (as opposed to pure capabilities today). It changes on a weekly/monthly basis, and it changes in one direction only.


i think you might have misunderstood the meaning of “producing text” in the parent comment.

the kind of people the parent comment was talking about tend to believe they can send three emails and make millions of pounds suddenly appear in business value (i’m being hyperbolic and grossly unfair but the premise is there).

they think the idea is far more valuable than the implementation - the idea is their bit (or the bit they’ve decided is their bit) and everyone else is there to make their fantastic idea magically appear out of thin air.

they aren’t looking at tests and don’t have a clue what a linter is (they probably think it’s some fancy device to keep lint off their expensive suits).


the essence of man is blind spots and hubris


Anyone who isn’t a software engineer. There is so much hype that non-technical people have bought into.

Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.


> Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.

There's some truth to the difference between "short term profits" and "my salary depends on this" being whether you're the boss or the employee.


Someone who was ordered by their boss to deploy it, and made sure to get the instructions in writing - with their protests also in writing.


DOGE would and did. Results were as expected... complete failure.


Someone who is really pissed off at how much they have to rely on software developers to run their business. They should not have so much power and direction in the company. I mean, they don't even have memberships at the country club!


The answers can be recorded and reviewed. The other points are true - or is there a way to make outcomes deterministic compared to previous versions, while still allowing more knowledge to be added in newer versions?


It's possible to make any model deterministic. It used to be enough to just save the seed; I'm not sure that still holds now that everything is distributed. Maybe it takes a little more effort.
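
For a locally hosted model, a minimal sketch looks something like this (assumes PyTorch and Hugging Face transformers; with a hosted API you're limited to whatever seed/temperature knobs the provider exposes):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)  # fix the RNG seed for anything that samples
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The system shall", return_tensors="pt")
    # do_sample=False forces greedy decoding, so repeated runs give the same text
    out = model.generate(**inputs, do_sample=False, max_new_tokens=20)
    print(tok.decode(out[0]))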


determinism isn’t really enough, we want “predictable”. Most of these AI wavefunctions are “chaotic” - tiny changes in state can cause wildly divergent outcomes


A part of my question that you didn't go into: can new knowledge be added in a new version without making the answers based on knowledge learned in previous versions non-deterministic?


that’s not really how training works.

changing the input (data) means you get a different output (model).

source data has nothing to do with model determinism.

as an end-user of AI products, your perspective might be that the models are non-deterministic, but really it’s just different models returning different results … because they are different models.

“end-user non-determinism” is only really solved by repeatedly using the same version of a trained model (like a normal software dependency), potentially needing a bunch of work to upgrade the (model) dependency version later on.
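
e.g. with Hugging Face models you can pin the exact revision the same way you'd pin a package version (the hash below is just a placeholder, not a real commit):

    from transformers import AutoModelForCausalLM

    # "revision" locks the download to one commit instead of whatever
    # "main" currently points at - swap the placeholder for a real hash.
    model = AutoModelForCausalLM.from_pretrained(
        "gpt2",
        revision="0123456789abcdef",  # placeholder commit hash
    )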


This requires an exact lock-down of things like the hardware and driver version, doesn't it? Is that sustainable?


It shouldn't. It didn't use to, at least.


But that won't survive an upgrade, will it?


Anyone who doesn't fully understand the current differences between the existing non-deterministic, unreviewable and unprovable agents (humans) and the artificial ones.


If you train it on the right data, there is no security risk. It cannot know what it doesn't see. However, if you train it on internal secrets, they will leak, simple as that. Filtering will not help.

But this interview is only fear-mongering to sell expensive models. Ditching the industry leaders.



