This is my problem too. When I ask someone a technical question without providing context on some abstraction (which is common, since abstractions can run very deep), they'll say: "Hmm, not sure.. can you check what this is supposed to do?"
LLMs don't do this. They confidently hallucinate the abstraction out of thin air, or fall back on their outdated knowledge store, and end up producing wrong usage or wrong input parameters.
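To make the failure mode concrete, here's a toy sketch. Every name in it (send_invoice, invoice_id, dry_run, retries) is made up for illustration; the point is just the contrast between asking about an unfamiliar abstraction versus guessing its signature.

```python
# Hypothetical internal helper the model has never actually seen.
def send_invoice(invoice_id: str, *, dry_run: bool = False) -> str:
    """The real abstraction: takes an ID string and an optional dry_run flag."""
    return f"queued invoice {invoice_id} (dry_run={dry_run})"

# A person unfamiliar with this helper asks: "can you check what this is
# supposed to do?"  An LLM instead tends to guess a plausible-looking call
# and present it confidently, with parameters that don't exist:
try:
    send_invoice(invoice={"id": 42}, retries=3)  # hallucinated signature
except TypeError as err:
    print("hallucinated call fails:", err)

# The correct call, which requires actually reading the abstraction first:
print(send_invoice("INV-0042", dry_run=True))
```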