Hacker News

I've observed a schism between people who can get LLMs to produce useful output and people who are baffled by them. I think it comes down to a mixture of two things:

Expectations: using the LLM to break problems into steps, suggest alternatives, and help you think through the problem. I think this describes the people using it to write emails, myself included; having a feedback loop to dial in the letter lets me write it without the activation energy needed to stare at a blank page.

Empathy: people who've spent enough time interacting with an LLM learn how to boss it around. I think some people are able to put themselves in the LLM's shoes and imagine how to steer its attention into a particular semantic subspace where the model has enough context to say something useful.

GPT-4 writes boilerplate Python and JavaScript servers for me in one shot because I ask for precisely what I want and tell it which tools to use. Because I've dialed in my expectations of what it's capable of and learned how to ask in precise language, I get to be productive with GPT-4's code output. Here's a transcript: https://poe.com/lookaroundyou/1512927999932108
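For a sense of what "asking for precisely what I want" means: a prompt like "write a minimal JSON echo server in Python using only the standard library, no frameworks" tends to come back as a single working file. The sketch below is a hypothetical example of that kind of one-shot boilerplate, not a reproduction of the linked transcript:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """POST a JSON body; the server echoes it back under an "echo" key."""

    def do_POST(self):
        # Read exactly Content-Length bytes; treat an empty body as {}.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        payload = json.dumps({"echo": body}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request stderr logging

def main(port=8000):
    # Serve until interrupted.
    HTTPServer(("127.0.0.1", port), EchoHandler).serve_forever()
```

Specifying the tool ("standard library only") is doing a lot of work in a prompt like that; leave it out and you're as likely to get Flask or Express as anything else.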




Interesting point about empathy. Sorry, I'm abusing the comment system to get back to your comment in the future.



