> Started on an LLM app that looped outputs, saw this post soon after and scrapped it. It was better done by someone else.
If it helps, "TFA" was not the originator here and is merely simplifying concepts from fairly established implementations in the wild. As simonw mentions elsewhere, it goes back to at least the ReAct paper and maybe even more if you consider things like retrieval-augmented generation.
It is not just a catchy title at all; it's being misused here in a way that makes it feel like one.
Wigner's essay is about how the success of mathematics when applied to physics, sometimes years after the maths was developed and quite unexpectedly, is philosophically troubling: it is unreasonably effective. Whereas this blog post is about how LLM agents with tools are "good". So it was not just a catchy title, although yes, maybe it is now being reduced to that.
I mean, the article and discussion are about numpy's syntax for vectorised code and the problems people have with it. Many comments compare it with matlab, and I am pointing out that a language that lets you use arrays and write vectorised code, and a language that *is* an array language, are not the same thing.

Writing vectorised code in a language where everything is based on arrays is in general more natural. Variable definitions are simpler (you do not specify what is an array and what is not, because everything is an array), and operations tend to work more consistently. E.g. in matlab, operations are done column-wise by default, because that's how the language is designed and works internally. So functions acting on 2+D arrays act column-wise by default; it does not depend on other context, and they are designed that way in order to be faster, not merely to be consistent for the user. Consistency comes from how arrays are represented in memory and the need for fast code, not just from an arbitrary design choice at the highest level.
Most developers do not touch array languages, but then most developers don't in general (need to) vectorise code this way and avoid loops (because they work in other problem domains, use lower-level languages, etc.). If anything, not all problems can be vectorised anyway (or at least not elegantly). But if one writes vectorised code, doing it in an array language makes more sense.
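To illustrate the convention difference described above (this is a minimal sketch, not from the article itself): matlab's `sum(A)` reduces column-wise by default, while numpy reduces over the whole array unless you name an axis explicitly.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# NumPy's default reduction is over ALL elements:
total = np.sum(A)            # 10

# You must name the axis to get MATLAB's default behaviour
# (sum(A) in MATLAB reduces down each column):
col_sums = np.sum(A, axis=0)  # array([4, 6])
row_sums = np.sum(A, axis=1)  # array([3, 7])

print(total, col_sums, row_sums)
```

In an array language the column-wise convention is baked in everywhere, so you rarely spell out an axis; in numpy the axis is part of nearly every call, which is exactly the syntactic friction the article's commenters are complaining about.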
> which creates a situation where Gemini 2.0 was used in a way to train Gemini 2.5.
The use of synthetic data from prior models to create both superior models and distilled models has been going on since at least OpenAI's introduction of RLHF, and probably before that too.
And they failed for it, because they didn't follow the instructions. Nitpicking the requirements and being all "well, _technically_" is both a common personality trait among programmers and _THE WRONG WAY TO THINK_ when interacting with humans.
A lot of people are very spicy in the comments, but for a mid-level position mind-reading shouldn't be required.
Not to mention that the whole problem is that the fucking hiring manager was too slimy to actually go to the technical team and ask them what they think about the proposal, and too lazy to take the time and energy to answer like a normal sentient being, and instead sent this autoresponder-level bullshit reply.