
I am not a prompt engineer, but I have basically been using ChatGPT in place of StackOverflow. It’s nice because the AI doesn’t sneer at me for not already knowing the answer, and it has useful information on a wide range of topics that I don’t know.

I have learned to create a text file and develop my questions as detailed documents, with a context-establishing preamble, a goal-oriented body, and a conclusion that asks for a specific result. I submit the whole document to initiate the interaction.
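A rough sketch of what one of these prompt documents looks like (the details here are made up, just to show the shape):

    CONTEXT: I'm building a watchOS companion app in SwiftUI for an
    existing iPhone app. I'm experienced with UIKit but new to SwiftUI.

    GOAL: The Watch app needs to display a short list of items synced
    from the phone and let the user mark one as a favorite.

    REQUEST: Show me a minimal SwiftUI view that does this, targeting
    watchOS 10, and explain any API choices that are watchOS-specific.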

That usually gets me 90% of the way, and a few follow-up questions get me the rest of the way.

But I still need to carefully consider the output, and do the work to understand and adapt it (just like with StackOverflow).

One example is from a couple of days ago. I’m writing a companion Watch app for one of my phone apps. Watch programming is done using SwiftUI, which has really bad documentation, and I’m still very much in the learning phase for it. I encountered one of those places where I could “kludge” something, but it didn’t “feel” right, and there are almost no useful heuristics for it, so I asked ChatGPT. It gave me specific guidance, applying the correct concept but using a deprecated API.

I responded with something like “Unfortunately, your solution is deprecated.” It then said “You’re right. As of watchOS 10, the correct approach is…”.
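To give a flavor of that kind of exchange (this is a hypothetical illustration, not the actual API from my case), the first answer looked something like the old form below, and the corrected answer switched to the current replacement:

    import SwiftUI

    // Hypothetical illustration only -- not the actual API from the
    // exchange above. The first suggestion used a modifier that SwiftUI
    // has since deprecated:
    struct TimerViewOld: View {
        var body: some View {
            Text("00:42")
                .navigationBarTitle("Timer") // deprecated modifier
        }
    }

    // After I pointed out the deprecation, the corrected suggestion
    // used the current replacement instead:
    struct TimerViewNew: View {
        var body: some View {
            Text("00:42")
                .navigationTitle("Timer") // current API
        }
    }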

Anyone with experience using SO will understand how valuable that interaction is.

You can also ask it to explain why it recommends an approach, and it will actually tell you, as opposed to brushing you off with a veiled insult.



