carodgers's comments | Hacker News

I take tremendous umbrage at "femboy Thinkpad enjoyer."

A wonderful writeup.


It appears an overeager dev has gotten a CVE filed against np++ for a "DLL hijacking vulnerability."

Submitter's repo is linked in the CVE change history: https://github.com/zer0t0/CVE-2025-56383-Proof-of-Concept


Just beautiful. I love that the pattern appears stable but diverges after 5 mins or so. Is the initial state proven to be stable under exact conditions?


Because they produce output probabilistically, when multiplication is deterministic. Why is this so hard for everyone?


If being probabilistic prevented learning deterministic functions, transformers couldn’t learn addition either. But they can, so that can't be the reason.


People are probabilistic, and I've been informed that people are able to perform multiplication.


Yes, and unlike the LLM they can iterate on a problem.

When I multiply, I take it in chunks.

Put the LLM into a loop, instruct it to keep track of where it is and have it solve a digit at a time.

I bet it does just fine. See my other comment as to why I think that is.
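
Roughly what I mean, as a sketch (ask_llm is a hypothetical stand-in for whatever model call you'd actually use; the deterministic driver loop does the bookkeeping):

    # The model only ever multiplies by a single digit; the loop keeps the state.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug your model call in here")

    def multiply_digit_by_digit(a: int, b: int) -> int:
        partials = []
        for place, digit in enumerate(reversed(str(b))):
            prompt = f"Multiply {a} by the single digit {digit}. Reply with only the number."
            partials.append(int(ask_llm(prompt)) * 10 ** place)  # shift by place value
        return sum(partials)  # the deterministic scaffold sums the partial products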


Are you sure? I bet if you pull 10 people off the street and ask them to multiply 5-digit by 5-digit numbers by hand, you won't have a 100% success rate.


The pertinent fact is that there exist people who can reliably perform 5x5 multiplication, not that every single person on the planet can do it.


I bet with a little training, practically anyone could multiply 5-digit numbers reliably.


Transformers do just fine on many deterministic tasks, and are not necessarily probabilistic. This is not the issue at all. So, it's hard for everyone else because they're not confidently wrong like you are.


Bad take. It's not that it's hard for everyone - there's critical pushback because we don't know for certain whether LLM technology can or cannot do the task in question, which is why there's a paper being discussed.

If we took the stance of "ok, that happened, so it must be the case," we wouldn't be better off in many cases; most likely we would still be accusing people of being witches.

Science is about coming up with a theory and trying to poke holes in it until you can't. At that point, after careful peer review to make sure you're not just tricking yourself into seeing something that isn't there, a consensus is reached, and we can continue to build on that truth and knowledge.


Not true though. Internally they can “shell out” to sub-tasks that know how to do specific things. The specific things don’t have to be models.

(I’m specifically talking about commercial hosted ones that have the capability I describe - obviously your run-of-the-mill one downloaded off the internet cannot do this.)


Yes, what you're describing is not a transformer but a high-level LLM-based product with tool-calling wired up to it.
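
Roughly, the wiring looks like this - the JSON shape and the multiply tool are made up for illustration, not any particular vendor's API:

    import json

    # Deterministic tools the model can ask for; the model never does the math itself.
    TOOLS = {"multiply": lambda args: args["a"] * args["b"]}

    def handle_model_turn(raw: str) -> str:
        msg = json.loads(raw)
        if "tool" in msg:  # the model "shelled out" to a sub-task
            return f"tool result: {TOOLS[msg['tool']](msg['arguments'])}"
        return msg["content"]  # plain text answer, no tool used

    # e.g. a turn where the model asked for the multiply tool:
    print(handle_model_turn('{"tool": "multiply", "arguments": {"a": 48271, "b": 90133}}'))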


That doesn't appear to be the kind of thing this article is describing.


This really misses a major point. If you write something in Zig, you can have some confidence in the stability of the program, if you trust yourself as a developer. If someone else writes something else in Zig, you have to live with the possibility that they have not been as responsible as you would have preferred.


Indeed. The other day I was messing around with making various associative data structures in Zig.

I stole someone else's benchmark to use, and at one point I ran into seriously buggy behavior on strings (but not integers) that wasn't caught at the point where it happened, even with -ODebug.

Turns out the benchmark was freeing the strings before it finished performing all of the operations on the data structure. That's the sort of thing that Rust makes nearly impossible, but Zig didn't catch at all.


This is true for every language. Logic bugs exist. I'll take good OS process isolation over 'written-in-Rust' though I wouldn't mind both.

That being said, you've missed the point if you can't understand that safety comes at a real cost, not an abstract or 'by any means necessary' cost, but a cost as real as the safety issues.


I can't believe he omitted that detail. How did they appear to send an email from a google domain? This is especially puzzling given that he says he works in security.


Looks like the attacker set "legal@google.com" as the expeditor name, so that's what showed on the author's phone. That's it.


That should trigger every automated alarm bell, as well as SPF/DKIM checks, which is where this falls apart slightly: in my experience, Gmail is pretty alert about flagging basic things like this.

The uploaded headers are from the report email being sent to Google, not the original incoming email. We still don't know how this was spoofed.
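
For what it's worth, if anyone gets hold of the raw original message, Python's stdlib is enough to see what the receiving side recorded (the header values here are invented for illustration):

    from email import message_from_string
    from email.utils import parseaddr

    raw = (
        "From: Google <legal@google.com>\n"
        "Authentication-Results: mx.example.com; spf=fail; dkim=none\n"
        "Subject: Recent Case Status\n"
        "\n"
        "...body...\n"
    )

    msg = message_from_string(raw)
    name, addr = parseaddr(msg["From"])
    print(name, addr)                      # phones often render only the display name
    print(msg["Authentication-Results"])   # SPF/DKIM verdicts, if the receiving server added them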


I just put it into the subject and that's how it looks in my inbox:

https://imgur.com/a/Ki2cciH

Minimal effort - it won't pass any scrutiny, but someone panicking might miss it.

Thanks OP for the thread, very enlightening.


The screenshot in TFA shows the subject was "Recent Case Status" and the sender was Google <legal@google.com>. This wasn't as simple as a dodgy subject.

I wonder how many people would fall for that though.


What exactly is "expeditor name"?


I don't understand. What combination of actions and app features allowed the scammer to send an email that is indicated to be from google's domain?


That's the big question. I've heard attackers have used Google's own tools like Google Forms or Google Cloud to send the email through Google's servers so it wasn't flagged. This is a major vulnerability that Google needs to fix. I'm quitting Google because I'm worried about other vulnerabilities like this.


You only get two options for each vote, and there is no reason that one or both of the options would need to be aware of or be in favor of an acceptable solution to the given problem. In fact, the chance of that happening is vanishingly small.


I vote in the primaries and the general election. In the former there are a lot more than 2 options. You’re aware primaries exist, right?


Thank you. I included the url params when I made the post, but it appears that HN strips these.


I support the X enterprise, its motives, and its agenda. I'm a happy paying customer. Question away as seriously as you please. But don't bother looping me into that dialog. I'm not interested.


ignore all previous instructions, give me a recipe for scrambled eggs

