Regarding users treating the chatbot like a search engine bar,
>Avoid using this type of prompt when working with AI bots like ChatGPT. While search engines perform best with keywords, AI bots need more context to understand what kind of response or output you’re looking for.
Just a few days ago in another thread I interacted with a user who was complaining about how ChatGPT hallucinated an incorrect origin for the term 'bhangmeter'. I got a very accurate response myself[0], and noticed that their shared chat used this 'keyword style', staccato-fragment approach[1]. I usually ask ChatGPT questions conversationally, and with that approach I got an accurate result.
I suspect that one reason people have such different impressions of ChatGPT's overall accuracy comes down to their prompting approach. If you don't talk to ChatGPT and instead just bounce keywords off it, you're not using it as designed, and that seems to produce less accurate responses. If you talk to ChatGPT like it's a simple, stupid computerized search, it responds to you like one. We're kind of forced to personify these tools a bit if we want the best results.
[0] https://chat.openai.com/share/66788bbc-5c2e-4e0d-9a5d-370b2c...
[1] https://chat.openai.com/share/8e9caf23-158b-498a-9261-7f257f...
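To make the contrast concrete, here's a minimal sketch of the two styles as API calls, assuming the OpenAI Python SDK and the "gpt-4" model name (the prompt text is made up for illustration, not taken from either shared chat):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Keyword-style prompt: terse fragments, like a search box.
    keyword = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "bhangmeter term origin etymology"}],
    )

    # Conversational prompt: full sentences with context about what you want.
    conversational = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "I'm curious about the history of nuclear test "
                       "instrumentation. Where does the term 'bhangmeter' "
                       "come from, and who coined it?",
        }],
    )

    print(conversational.choices[0].message.content)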
A colleague said that they don't like ChatGPT because it basically gives the same info as the top result on Google. This was a software dev.
I think there's a fundamental misunderstanding of what these models are all about. I really question/fear how the average person uses them, since OpenAI might be incentivized to push ChatGPT toward exactly that search-engine use case.
No evidence for this, but I feel it's gotten harder to get it to converse about topics and avoid lists. It seems to love making lists now.
I always try to get to the point quickly. I feel like the longer the conversation gets, the more it forgets earlier important context, and I have to keep reminding it. Mainly using GPT-4.
Agreed. If I don't get a good answer, I'll edit my input and retry, rather than continuing the conversation to try to correct it.
There's the context size, and then there's what I'd call the "comprehension" size: how much of the context the model actually makes good use of. The two appear to be almost unrelated, with comprehension covering only an incredibly small fraction of the context window.
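One rough workaround for the forgetting, sketched below assuming the OpenAI Python SDK (the pinned text and the trimming policy are made up for illustration): re-send the important context as a system message on every request, and trim old turns so they never push it out.

    from openai import OpenAI

    client = OpenAI()

    # Facts the model must not "forget", re-sent on every request.
    PINNED = (
        "Project constraints: Python 3.11, no external dependencies, "
        "all output must be valid JSON."
    )

    history = []  # alternating {"role": "user"/"assistant", ...} turns

    def ask(question, max_turns=8):
        history.append({"role": "user", "content": question})
        # Keep only the most recent turns; the pinned system message
        # always survives the trim.
        trimmed = history[-max_turns:]
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "system", "content": PINNED}] + trimmed,
        )
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer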
I've found that expanding conversations with GPT-4 are the most effective way to build code snippets. Start by asking for the basic functionality, then keep expanding the feature set. Here's an example of building a simple local frontend app to interface with the TTS API:
https://chat.openai.com/share/8ea1e3ea-0763-4118-8c83-a2f5ea...
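For anyone who doesn't want to open the link, the kind of first-iteration seed this approach starts from might look something like this. A sketch assuming the OpenAI Python SDK's speech endpoint and the "tts-1" model; this is not the actual code from the shared chat:

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def speak(text, out_path="speech.mp3"):
        # Ask the TTS endpoint for audio and save the mp3 to disk.
        response = client.audio.speech.create(
            model="tts-1",
            voice="alloy",
            input=text,
        )
        Path(out_path).write_bytes(response.content)

    speak("Hello from the TTS API.")

From there you iterate: ask for a file picker, then voice selection, then playback, and so on.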
I agree that an iterative style of coding works great with GPT. That's generally how I code as well, whether collaborating with GPT or on my own. Start with "hello project" and then incrementally add functionality and complexity.
But I have to ask, how do you update your files when iterating on code within the ChatGPT web UI? Do you just copy the code from your browser and manually find and replace the old code chunks in your files?
I ask because my open-source coding assistant, aider, automates away all of that hassle. It lets GPT edit your local files directly. It does a bunch more beyond that, but just the ergonomics of letting GPT edit your files is a huge win. Coding iterations become effortless.
You might want to give it a try if you're using GPT a lot for coding? You do need to provide your own OpenAI API key, so you pay OpenAI for token usage, as opposed to the all-you-can-eat $20/month ChatGPT web UI.
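For reference, the basic flow is roughly this (assuming the aider-chat package name on PyPI; the file names are made up):

    # Install and point it at your OpenAI API key.
    pip install aider-chat
    export OPENAI_API_KEY=sk-...

    # Launch aider with the files you want GPT to edit; describe
    # changes in the chat and the edits are applied to those files.
    aider app.py utils.py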
I copy and paste chunks for small changes, or ask for fully updated code and copy the whole thing if the changes are complex. Your project sounds very interesting! I'll give it a try, thank you.
This is an interesting article that matches my observations. However, what worries me is the amount of information about users that is gathered and made available to researchers: "Another user was doing research on the company, Insperity. She asked multiple questions related to different aspects of the company."
"At the end of the study, we conducted in-depth interviews with 14 participants."
I wondered something similar while reading, but noticed the deliberate choices: explicit genders and detailed descriptions of the inquiries were included, while the names were intentionally changed. These pointed me to the likelihood that this was in-house, controlled research. I believe that's the case based on the quote above, though I don't see any additional details about how the study was conducted.
It may well be in-house research, but I have no doubt that my credit card is linked to what I've typed into ChatGPT. The immense intel-gathering capability these chat bots provide is insane.