I thought that the whole point of LLMs was that you could just talk conversationally, though.
If you have to carefully craft what you say in order to get the response you want, what's the point of using natural language to do it? Wouldn't it be better to use a more formalistic method that isn't as imprecise as natural language?
> I thought that the whole point of LLMs was that you could just talk conversationally, though.
"Yes" (sometimes "no").
"I'm sorry but as a language model I am unable to..."
The "prompt engineer" meme started with DALL-E and Stable Diffusion, where the selection of prompts, negative prompts, seeds, weights, and other knobs and dials matters a lot more for AI-generated art than it does for LLMs. The meme has carried over to LLMs, where most of the "engineering" is hacking your way around limitations imposed on the models. "Prompt engineers" are the people carefully crafting "jailbreaks" like DAN or Emojitative Conjunctivitis (I forget what it actually was, but it involved telling ChatGPT that you suffer from a medical condition where you experience polite talk as pain, so it should talk more meanly to you) and other such adversarial cat-and-mouse silliness.
> The "prompt engineer" meme started with DallE and Stable Diffusion
I think it started before that, with GPT-3. Since the original version wasn't trained as a chatbot but as a pure text predictor, you'd sometimes have to do strange things to get the output you wanted from it. On the other hand, it's way easier to get it to be mean to you (it may even do that on its own) or to talk about illegal things.
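To make the "strange things" concrete: with a base text predictor, you don't ask a question, you write text whose most likely continuation is the answer. A minimal sketch of that few-shot pattern (the example task and formatting are my own illustration, and the actual completion API call is left out since its details vary):

```python
# Build a few-shot prompt for a base (non-chat) model: show the model a
# pattern of input/output pairs, then leave the final output blank so the
# most likely continuation is the answer you want.

def build_fewshot_prompt(examples, query):
    """Format input/output pairs so the model continues the pattern."""
    lines = []
    for english, french in examples:
        lines.append(f"English: {english}\nFrench: {french}")
    # End mid-pattern; the model's job is to complete this last line.
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

prompt = build_fewshot_prompt(
    [("cheese", "fromage"), ("dog", "chien")],
    "house",
)
print(prompt)
```

A chat-tuned model would answer the bare question "What is 'house' in French?", but a pure predictor often does better when handed a pattern to continue like this.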
I would note that while I am generally bad at it, you have to craft what you say to humans to get the response you want as well. I think the thing is that natural language can be extraordinarily more expressive than a formal method for abstract concepts. The prompt engineering I've seen is basically natural-language instructions that are precise and cover many edge cases to constrain the model and provide sufficient context, but that would be really difficult to encode in a formal language because they're still very abstract concepts. They read like what you would tell a person if you wanted them to, say, behave like a Linux shell without having them ask any clarifying questions or leaving too much ambiguity about what you meant. Expressing what "behave like a Linux shell" means in a formal method would be very hard, because an awful lot goes into those concepts.

Additionally, ChatGPT is seeded with an originating prompt that sets the tone and behavior of its responses. A lot of "prompt engineering" is dampening those original instructions in the context for subsequent responses. In the Linux shell example, you don't want it explaining everything and apologizing all the time and whatnot. A Linux shell takes commands and outputs results in a terminal; it doesn't apologize that it's a large language model and not really a Linux shell. That behavior originates from the original prompt, which is opaque to the user of ChatGPT. If you engineer the prompt right, it'll stop apologizing and just print what it computes as the best output for a Linux command, in the format it judges best representative of a terminal.
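For concreteness, a widely circulated phrasing of this Linux-shell prompt goes roughly like the following (paraphrased from memory, so treat the exact wording as illustrative rather than canonical):

```
I want you to act as a Linux terminal. I will type commands and you
will reply with what the terminal should show. Reply only with the
terminal output inside one unique code block, and nothing else. Do not
write explanations. Do not type commands unless I instruct you to.
```

Note how much of the prompt is spent suppressing the default behavior (explanations, apologies, commentary) rather than describing the shell itself.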
In my opinion, LLMs are easy to learn, hard to master.
Anyone can use ChatGPT to make something happen for them. Want something specific and amazing? You need to take some time to learn about how it works and how you can make it do what you want.
Heck, you can probably ask it how to make it do what you want.
> I thought that the whole point of LLMs was that you could just talk conversationally, though.
> If you have to carefully craft what you say in order to get the response you want, what's the point of using natural language to do it?
If you study communication, carefully crafting communication to the target audience and context is one of the most basic lessons in the use of natural language.
> Wouldn't it be better to use a more formalistic method that isn't as imprecise as natural language?
Well, yeah, that's why we keep inventing formal sublanguages and vocabularies for humans.
Exactly so. So I'm confused about what the advantage of querying the GPT with natural language is, if what you want to get is something specific. It just seems to me that a more precise query language would be more desirable.
As a general creative thing, I can see it, though.