I haven't really tried that stuff myself except for Claude Code
but I do recall seeing some Amazon engineer who worked on Amazon Q, and his repos were... something.
like PRs where he'd tell the AI "we are going to utilize the x principle by z for this", plus hundreds of lines of "principles" and similar stuff that would obviously just pollute the context.
huge numbers of commits, but it was all just that, him basically trying to conjure magic or something.
to someone like me it was obviously a futile effort, but he didn't seem to get it.
I think the problem is that people don't understand transformers: they're basically huge datasets in model form, auto-generating text conditioned on the context (your prompts plus the model's own responses)
so you're basically just getting mimicked responses
which can be helpful, but I have this feeling there's a fundamental limit, a mathematical one almost, where you can't really get it to do something unless your prompt contains the solution itself and covers everything, because otherwise it would have to be in the training data (which it may be, for common stuff like boilerplate, hello world, etc.)
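to make the "autocomplete from context" point concrete, here's a tiny sketch (gpt2 via Hugging Face transformers is just a stand-in, any causal LM works the same way): generation is literally "pick a likely next token given everything so far", in a loop

```python
# minimal sketch of autoregressive generation, assuming gpt2 as a stand-in model
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "We are going to utilize the"   # whatever is in the context window so far
ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[:, -1, :]    # scores for the next token only
        next_id = torch.argmax(logits, dim=-1)  # greedy: take the most likely one
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=-1)

print(tokenizer.decode(ids[0]))  # the "mimicked" continuation
```

that's the whole trick: no plan, no goals, just the most statistically plausible continuation of whatever you put in front of it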
but maybe I'm just missing something. maybe I don't get it
but I guess if you really wanna help him, I'd play around with claude/gpt and watch how it just plays along even when you pretend to commit to a really stupid plan, how it'll string you along
and then you could show him.
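if you want that experiment to be repeatable, here's a rough sketch using the anthropic python sdk (the model name is just a placeholder, swap in whatever you have access to; the same trick works fine in the web UI too)

```python
# rough sketch of the "stupid plan" experiment via the anthropic python sdk
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

resp = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: use any current model
    max_tokens=500,
    messages=[{
        "role": "user",
        # a confidently stated, deliberately bad plan
        "content": "We've decided to rewrite our payments service in Bash "
                   "for performance reasons. Draft the migration plan, step by step.",
    }],
)
print(resp.content[0].text)  # watch whether it pushes back or just plays along
```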
Or... you could ask management to buy more AI tools, make him head of AI, and transition to being an AI-native company...