Tech’s hottest new job: Prompt engineer (washingtonpost.com)
90 points by seanherron on Feb 26, 2023 | 114 comments




This isn't anything new. We've had prompt engineers for a long time now; we've just been calling them "SEO Specialists". The kind of person you'd hire to make sure your Amazon listing has all the necessary magical incantations to land on the first page of search results, that sort of thing.

This is just the next incarnation of trying to shift the output of someone else's algorithm in your favor. Be wary of building a career on top of that. It's very easy for the algorithm owner to change things up and obviate any value you used to provide.


SEO is at the other end. It would be like trying to get your data picked up by the model training. Prompts are equivalent to search queries.


Prompt engineering is another way to say functional specification imho.


I just can't believe how idiotic such a thing is. It's like calling a person who uses a toaster a Toast Engineer with a salary of 300k.


>Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs https://arxiv.org/abs/2205.11916

>https://sites.google.com/view/automatic-prompt-engineer

Not exactly a "toaster go brrrr" job, but it could be obsolete one day

WaPo does need to chill though. There are barely any Prompt Engineer jobs

Edit: If anyone's curious, I've been following this for prompt stuff: https://github.com/dair-ai/Prompt-Engineering-Guide
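
For the curious, zero-shot CoT from that first paper is literally just appending "Let's think step by step." to the question. A minimal sketch against the OpenAI completions API (the model name and parameters are just illustrative, and the juggler question is the paper's own example):

  import openai  # pip install openai; reads OPENAI_API_KEY from the environment

  question = ("A juggler can juggle 16 balls. Half of the balls are golf balls, "
              "and half of the golf balls are blue. How many blue golf balls are there?")

  # The whole trick: append the magic phrase to elicit step-by-step reasoning.
  prompt = f"Q: {question}\nA: Let's think step by step."

  response = openai.Completion.create(
      model="text-davinci-003",  # assumption: any completion-style model works
      prompt=prompt,
      max_tokens=256,
      temperature=0,
  )
  print(response["choices"][0]["text"])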


It's just software. It takes skill and labor to achieve the outputs you want. Is someone who uses photoshop all day not an artist? Or someone who writes text for a compiler not a programmer?

You've got a huge blind spot if you think prompt engineer isn't already a thing.


>You've got a huge blind spot if you think prompt engineer isn't already a thing.

It may be a "thing", because generating BS is a viable business model and ChatGPT makes it more efficient.

...but I submit, as a working hypothesis, that it is completely impossible to gain knowledge you do not already possess from a language model, no matter how clever your prompting.

I'm very interested in counter-examples, but I have already seen a few that turned out to be fake.


> it is completely impossible to gain knowledge you do not already possess from a language model

Not true. Emergent abilities are an active research area in LLMs [0]. They even have pretty graphs on the topic.

[0] https://ai.googleblog.com/2022/11/characterizing-emergent-ph...


I don't really understand what you're saying. Clearly the results one produces with the software have value, but you say it's all "BS." Does that just mean you don't like it, or what? If someone asks me to produce some concept art for a project they've just started, and I write some prompts to produce that concept art, what is "BS" about the art I produced? What does gaining knowledge have to do with it?

Is Art Director just a "BS" job? I don't get it.


Maybe “generic and meaningless” is a better descriptor.


I don’t know how to homebrew. If I ask ChatGPT to help me get started homebrewing, it lists helpful steps to start homebrewing. I can ask it to expand on any of those steps until the breakdown is actionable.

Checking some of the facts it gives me against other sites, it's all correct, but better organized and more accessible. There's your counter-example. This works for basically any well-documented process.


That's so obviously false that I literally can't imagine how you could believe it. GPT-3 certainly isn't 100% accurate, but neither is it so perfectly unreliable that no one could ever get it to produce a relevant fact not in the prompt. And even if it were, it would probably still be potentially useful for learning languages.


> GPT-3 certainly isn't 100% accurate, but neither is it so perfectly unreliable that no one could ever get it to produce a relevant fact not in the prompt.

I think I understand the sense in which you claim it produces relevant facts not in the prompt.

It's not that we differ on easily observable behavior of the system.

It's that I question if GPT-3 is "producing" these identifiable facts, and if the user is "producing" them instead, whether they can possibly be "relevant".


>It's that I question if GPT-3 is "producing" these identifiable facts, and if the user is "producing" them instead, whether they can possibly be "relevant".

I'm not sure what you're trying to say. That GPT-3 is just vomiting stuff up out of its training set and not producing any new knowledge? But that's totally irrelevant to the issue of whether it can transmit knowledge to a user, who presumably hasn't memorized the entire training set.


>That GPT-3 is just vomiting stuff up out of its training set and not producing any new knowledge?

Hmm. Seems obvious to me that it's producing new output, but that output isn't knowledge and it can't be.

Sometimes ChatGPT tells me something that turns out to be correct and relevant. And I get excited, and then I Google it and what it told me is the first hit on Stack Overflow.

There's a subtle point here: other people might say "well, ChatGPT is ok, but no better than Google" or something like that. But I differ on that. The key is that I don't know it's Stack Overflow until I check independently. So it's giving it too much credit to say it's as good as Google. The amount of information it can output is not lower-bounded by its training set; it's actually zero, because everything it says sits adjacent to an infinite amount of BS that by its nature always requires external mechanisms to separate out.


Synthetic knowledge is still novel knowledge if you haven’t put the pieces together before.


Can you unpack "synthetic knowledge", cause I don't know what that really means.


New knowledge synthesized from existing knowledge. For example, you might know of A and B and maybe think that A -> B or B -> A based on their co-occurrence, but an AI might make you realize that C -> A as well as C -> B.


Ok, that's straightforward, I just don't care for the idea that AI can do it or even help.

You might synthesize new knowledge.

When ChatGPT produces new output, it's not synthesizing new knowledge. It can't even output the knowledge it was trained with, as long as it lacks the ability to tag it in a trustworthy way.

It's not that it's always BS, it's that it's almost always BS and if you don't know the answer in advance or independently, you can't distinguish it from anything within the model.


ChatGPT taught me about vectors and other ways to dynamically create an array.


How do you know?


For a side project, it not only gave me the migrations, it suggested all the column names/datatypes. Basically I just said: create me a Laravel migration for an organization; this is a multi-tenant SaaS app, where an organization is basically a team, or tenant. You can think of these as companies as well. Now make a migration that has columns that might generally be included in a company or organization.

It not only spit out the model but also the casts/fillable attributes on the model. It even helped me work through an idea I didn't know the name of: I was thinking it was EAV, but instead it's metaform/metafields, basically creating something like how WordPress can dynamically create content 'types' (django/wagtail can do this too). With ChatGPT I think I've nailed down how to do this using polymorphism with the least amount of headache.

I'm wanting to create a CRM/CMS/ERP solution that can be very 'moldable' to different use cases, and this looks to be a good fit. Either way, just being able to discuss my 'options' with the AI was like a major brain dump and increased the power of my flow.

YMMV, but if you can't get it to work like this, that doesn't mean it doesn't work; it just means it doesn't work for you. And when I can save 2-3 hours for every hour previously worked, that's valuable to me, especially as a freelancer who charges per project, not hourly.


Because I tried the code and it worked.


There are lots of people with misconceptions of LLMs. It will take time to adjust.

I reached the same conclusion as you, but I do see a totally different path to take regarding information propagation (how GPT works). For example, cells merge information monotonically. This is how neural networks balance too, but it could be applied in new/undiscovered ways.

https://www.youtube.com/watch?v=HB5TrK7A4pI&list=LL&index=4


I meant - how do you know or why did you say ChatGPT taught you?

It doesn't know what works.


Welcome to the last decade of title inflation. Everyone is a "manager" now. A "marketing manager", "product manager", "account manager". No more secretary, it's "executive assistant". It's a perk a company can offer, conferring higher status, at no expense to themselves. So the equilibrium is for other companies to do the same, otherwise the company that gives this cost-free perk outcompetes for talent.

People are graduating from watered-down educations, earning inflated cash, with inflated titles. It all helps people believe they're higher status: they have a university degree and are a manager earning $80k, so surely they're getting close to the top of the totem pole now. But they have a worse standard of living, and an education equivalent to high school in the '60s.


Related: Companies save billions of dollars by giving employees fake "manager" titles [1][2]

[1] https://news.ycombinator.com/item?id=34641549

[2] https://www.cbsnews.com/news/salary-manager-jobs-fake-titles...


'assistant regional manager'


Assistant to the regional manager


I worked with a prompt engineer. They're basically a Natural Language Programmer. There is skill in doing the job well.


Just about as idiotic as calling someone who types some weird words on a screen a Software Engineer with a salary of 300k.


You’re thinking of a Toast Master.


>This morning, I brushed and flossed, which makes me a Plaque Removal Engineer. I then used my skills as a Room Tidyness Engineer to make the bed. After that, I engineered the harness onto my dog and took her on a walk: Canine Fitness Engineer. I engineered the water to a higher temperature using the kettle, and poured it over some coffee grounds to create a chemical reaction in my morning brew: Yeah, that's right, Caffeine Engineer. After this incredibly productive morning, I got in the car and drove to my job as a computer programmer.

-mlsu: https://news.ycombinator.com/item?id=34884683

At this point the word "engineer" has lost its original meaning. Until there's a formal theory of how we can interact with LLMs and you make use of that in a systematic fashion, "prompt engineering" is really closer to "prompt artist."


> At this point the word "engineer" has lost its original meaning. Until there's a formal theory of how we can interact with LLMs and you make use of that in a systematic fashion, "prompt engineering" is really closer to "prompt artist."

Interesting angle. Are you saying there are rarely any "software engineers" out there, that they are all merely "software artists"? 'Cause none of them uses a formal theory for their craft. If they did, then all those highly opinionated discussions of whether to use goto in C, or what the greatest flaws of node.js are, would just not exist.


Correct, in my eyes "software engineering" in the sense of "person who glues libraries together to build systems" should not yet be called an engineering discipline because there isn't yet a rigorous enough theory on why something should be designed one way as opposed to another. We are still in the stage of figuring out best practices and making things more rigorous (maybe the functional programming folks will end up contributing something here, but I don't know enough about category theory).

There are other narrower senses of "software engineer" such as "person who optimizes code" and to me those more qualify as engineering because we not only have a decent enough theoretical background (see Agner Fog's work) but also can experimentally verify things. On the other hand it's a lot harder to quantitatively say if one design is better than another.

I think there's also some work in terms of rigorously modeling concurrent/distributed systems (Lamport's TLA+) work which I'd like to see more of.


The naming might be flawed, but it's like a barista, I guess. You can spend a lot of money on the machine, but without an expert to use it you will not get the best out of it.


Yes but you can teach most of what there is to know in a couple of weeks tops. You don't call baristas Coffee Engineers.


starbucks takes notes furiously


Best comment! lol


Just train an LLM on LLM prompts and automate the prompt engineer process.


And how do you get this LLM to output a good prompt?


Not too different from analysts writing SQL queries all day long.


Complex SQL requires knowledge about relational algebra (Cartesian products, set theory, domain relational calculus yada yada) and understanding of how RDBMS and their query planners work. At least, if performance is important to you.

I don't see this in this prompt engineering. In my limited experience (I played a few hours with Stable Diffusion and more hours with the OAI davinci-003 model), you can get good at it within a few days.


At this point, we're very much at an exploratory stage of LLM queries. You could of course be an ML/DL researcher or engineer who's intimately knowledgeable about the current architectures, but the models are so large and complex, due to sheer parameter count, that you'd still have to map out what inputs predictably give what outputs on a finished model.

I'd imagine that being a "prompt engineer" entails finding out and mapping the structures that give you the desired result. Think of it as a novice user of search engines vs. an expert user of search engines.


I spent a few hours learning SQL. I can get "good" at writing SQL queries within a few days. Do you want to hire me as someone whose primary role is to write SQL queries?


[flagged]


I don’t think that’s Dunning-Kruger but I get what you’re saying.


Not? "After a few hours of playing with it I give the opinion that it can't be that hard" is kinda the essence of basically not having a clue but feeling competent to express such a judgement nevertheless.


Fair enough. I think at the time I felt that just one data point wasn't enough to evoke the effect. Maybe I'm just biased against the effect since it was disproven: https://economicsfromthetopdown.com/2022/04/08/the-dunning-k...


I still remember when Google appeared, there were people offering search services for a fee (maybe they were called Search Engineers, but I don't recall seeing that).


> I just can't believe how idiotic such a thing is.

Well, you do you. That's old world thinking for a field that's going to dramatically morph into something that barely resembles what we have today.

I'm hiring a contract prompt engineer for my startup.

If you want to help us achieve better "TV replacement" results, send me an email (see profile).

https://fakeyou.com/news (early demo, more coming soon!)


Sounds good to me. Since the job doesn't require any skills, I think we should proceed to the negotiation phase. How does 500/hr sound?


> Since the job doesn't require any skills

Oh like hell it requires no skill.

You tell me how you'll generate better photos, improve dialogue coherence across multiple speakers, and control camera direction and movement (something we're using LLMs for too as we experiment with special-purposed models).

All of this is not known a priori, by the way. And I won't accept building a database or lookup table as an answer.

I also want to know how you'll test, benchmark, and refine.

You also need to budget for inference complexity.

I'm waiting :)

I can do this myself, but it is a full-time job. I am so busy with all the other aspects of my business that I'm looking for people to bring on board.


I normally don't work for free, but just as a sample, here's one I've been crafting for a few weeks...

  Epic 4k HD photo, high res and epic, cool extra awesome photorealistic 5k or 6k, realistic, in the style of a really good photographer.
Nah, I'm just playing; your company looks pretty cool. I just think a dedicated job for coming up with prompts (which is only going to become easier anyway with better ways to control output) is silly.


That's not a real prompt. A real prompt would look more like this:

   At the top left of a 4k picture, place a dot with the RGB value of (0.5, 0.8, 
   0.2), where color components are expressed on a scale of 0-1 inclusive.

   Then, on the top line, second position from the left, place a 
   dot with the RGB value of (0.7, 0.3, 0.4).

   (approx. 8 million more to follow...)
I'm not kidding either. If there really is a real "prompt engineer" job, I am sure it's going to be like this, with a fig leaf of some sort. We saw how this worked during the brief period when everybody was doing a blockchain project. Oracle added blockchain features to their database. Now I'm sure they all have amnesia, but there are remnants.


No more idiotic than a lawyer - someone who knows the right words to convince a jury (model) to produce a desired outcome.


I dislike that analogy greatly.

Lawyers have a bad reputation, sure, but there's a lot of education behind the interpretation of our law, and the absurdly large corpus of legal documentation that must be read in order to even become a lawyer is far and above anything you describe.


So you are presenting the fact that lawyers must digest an absurdly large corpus of legal documentation, but also maintaining that this is something where a human lawyer has some sort of advantage over an LLM?


>So you are presenting the fact that lawyers must digest an absurdly large corpus of legal documentation, but also maintaining that this is something where a human lawyer has some sort of advantage over an LLM?

No.

An LLM has more opportunity to replace a lawyer itself; the person typing the prompt is not necessarily required to be as educated. Though a case can be made that you need to validate the information.

As it happens, we have an opportunity to see how this works. Software engineering has seen many abstractions, each of which comes with its own complexity in verification.

What tends to happen is that people don't really do a lot of verification; we are just "mostly right" very fast, and leave an immense amount of inefficiency and indirection behind us.


the person typing the prompt is not necessarily required to be as educated.

If I need someone to help me interact with a legal LLM I will want to (and probably be able to, for 300k) hire someone with a law degree. In fact I anticipate many lawyers in the future will effectively become “prompt engineers” for legal LLMs.


An LLM is a generator of misinformation that is maximally difficult to distinguish from real information.

How do you use this as a lawyer?

I mean, as a stereotypical evil lawyer in a world of naïve people who don't learn from experience, you could maybe use it to win cases until you destroy the justice system.

But other than that...


Speaking as somebody who spent thousands of dollars on a lawyer for a matter, with basically no results, and who then used ChatGPT, personal research, and common sense to solve the same matter for free: if an LLM is "a generator of misinformation that is maximally difficult to distinguish from real information", then a lawyer is simply "a human who has been trained to maximally drain your wallet without regard for any other matter". Of course, neither is true, and there exists far more nuance to both.

Sure, there are matters I would only trust a lawyer to handle, but there are a great many I wouldn't.

Further, the average quality of a human lawyer will likely remain the same tomorrow as it is today, while AI will only get better. LLM today, perhaps some hybrid stack tomorrow, it's only a matter of time before an AI lawyer is the way to go for just about any legal matter. And let me be clear, that time might be 10 years, or may be 100+, but it is coming.


An LLM is a generator of misinformation

This is a strange statement. No one is training LLMs to generate "misinformation". It's the opposite: it's trained to generate the most likely next word, given the preceding 2000 words, using billions of examples from a real-world training corpus. So it will try to generate as much information as is present in the corpus. Maybe even more, but that's debatable.


>No one is training LLMs to generate “misinformation”.

That is phrased like it is stating a fact about the training process, but it is a statement about the intent of the training, isn't it? So I don't see it as rebutting my comment.

>It’s the opposite - it’s trained to generate the most likely next word

Sure, of course, what else? But if you take any correct statement about something and modify it slightly, it's not very likely it will still be correct.

It seems intuitive to me that there are going to be a million billion (understatement) wrong things next to anything correct in the inputs. As a sort of combinatorial, mathematical thing. You just (in principle) count all the ways to be wrong that are similar to being right.

Nobody trained it to get anything right! It doesn't matter what people expect if they don't have a procedure to do it.

If a statement is adjacent to things that are also "correct", that almost implies a lack of information in the original statement. It seems borne out by the impressive BS'ing: the key to BS'ing is saying things that can't really be wrong.


To be an effective prompt engineer you need to have expertise in two different domains - large generative ML models, and in whatever it is people want to generate (e.g. art).


I, too, have watched Suits. Unfortunately, being a lawyer is hard work and the work done usually has very real ramifications.


I remember how silly I thought it sounded when I first heard of the job "Web Master"..... like someone whose ONLY JOB is the World Wide Web? wtf?

Crazy.


You can automatically learn soft prompts with backprop, so the job of "prompt engineering" isn't going to stick around for too long, given that it's automatable.

https://arxiv.org/abs/2302.06541

That is not to say that integrating LLMs won't create a lot of jobs. Think of it as systems engineering. Knowing how computers work, as well as a software engineer does, will always be useful.
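
For concreteness, "learning soft prompts with backprop" means freezing the model and training only a handful of virtual token embeddings prepended to the input. A rough sketch with a frozen GPT-2 via HuggingFace transformers (sizes and hyperparameters are illustrative, not from the paper):

  import torch
  from transformers import GPT2LMHeadModel, GPT2Tokenizer

  tok = GPT2Tokenizer.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2")
  model.requires_grad_(False)  # freeze the LM; only the soft prompt is learned

  n_virtual = 20  # number of virtual tokens in the soft prompt
  emb = model.get_input_embeddings()
  soft_prompt = torch.nn.Parameter(torch.randn(n_virtual, emb.embedding_dim) * 0.02)
  opt = torch.optim.Adam([soft_prompt], lr=1e-3)

  def train_step(text: str) -> float:
      ids = tok(text, return_tensors="pt").input_ids            # (1, T)
      inputs = torch.cat([soft_prompt.unsqueeze(0), emb(ids)], dim=1)
      # Ignore the loss over the virtual-token positions (-100 = ignore index).
      labels = torch.cat(
          [torch.full((1, n_virtual), -100, dtype=torch.long), ids], dim=1)
      loss = model(inputs_embeds=inputs, labels=labels).loss
      opt.zero_grad(); loss.backward(); opt.step()
      return loss.item()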


You still have to come up with the original prompt, though.

Cool paper, BTW.


It's all prompts, all the way down.


Early in the Web, I saw job posts for "HTML Programmer".

(This was before JS, before CSS, etc. Mostly just the original HTML elements, essentially a simplified LaTeX article.cls, plus `A` and `IMG`, and maybe a `FONT`.)

HTML was easier to use than many word processors, but because it was new and unfamiliar, yet looked like it might be huge... for a brief period, practically anyone who could spell "HTML" or "WWW" could posture as a whiz kid, and make big bucks.

I'd guess that "prompt engineer" will evolve into real careers soon, but the nature of the technology and the role will be very different than it is this quarter.


No coding required...yet.

I've been playing around with generating stories with ChatGPT for a while, and... English (or any natural language) is really bad at being specific. I've made progress by learning some specific words to describe the type of scene I want and how much of it I want ChatGPT to generate (such as a scene for just that evening versus a few paragraphs describing weeks of traveling). I've also started getting some intuition for when I've given ChatGPT too much info (it'll cram all the facts in in weird ways) and too little info (it'll get really random and start inserting new characters and stuff).

Having a way to manage the meta aspects of story generation would be a big help.


I’ve wondered if it would be helpful to teach the model a micro-DSL for the given task at hand, and use that for greater control and precision.


It's helpful for doing things like mathematical reasoning, since the LLM can't really do it directly. A DSL would allow you to offload that reasoning.
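
Something like this, as a sketch: have the model emit an expression in the micro-DSL (here just Python arithmetic) and evaluate it locally, so the LLM never does the arithmetic itself. `llm()` is a hypothetical stand-in for whatever completion call you use:

  import ast, operator

  def llm(prompt: str) -> str:
      ...  # hypothetical: send prompt to your LLM, return its text output

  OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
         ast.Mult: operator.mul, ast.Div: operator.truediv}

  def safe_eval(expr: str):
      # Walk the AST and allow only numeric literals and + - * /.
      def walk(node):
          if isinstance(node, ast.Expression):
              return walk(node.body)
          if isinstance(node, ast.BinOp) and type(node.op) in OPS:
              return OPS[type(node.op)](walk(node.left), walk(node.right))
          if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
              return node.value
          raise ValueError("disallowed expression")
      return walk(ast.parse(expr, mode="eval"))

  question = "What is 17% of 2340?"
  expr = llm(f"Rewrite as one Python arithmetic expression, nothing else: {question}")
  print(safe_eval(expr))  # e.g. "2340 * 17 / 100" -> 397.8, computed here, not by the model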


A node editor would be interesting [1], although it might be a recipe for spaghetti prompt [2].

[1] https://en.wikipedia.org/wiki/Node_graph_architecture

[2] https://blueprintsfromhell.tumblr.com


So, what does being a prompt engineer pay these days? And how do you find a job doing it?

Edit: maybe I should have kept reading.

> Anthropic, founded by former OpenAI employees and the maker of a language-AI system called Claude, recently listed a job opening for a “prompt engineer and librarian” in San Francisco with a salary ranging up to $335,000. (Must “have a creative hacker spirit and love solving puzzles,” the listing states.)


Reading the comments here, there is something to be said about how much naysaying there is in regard to this technology. I should expect it. You see the same pattern everywhere in one way or another. I urge people to shift their mindsets and approach cutting edge technology from a perspective of what it could be in the future, vs what it is today. What I mean is, by the time you realize something has fundamentally changed society, you've missed the train.

In the end what we all value is what solves problems. Those who embrace AI tech and learn to use the tool and work around its flaws will solve more problems than those who don't. This includes coming up with a system to validate the work. Those who use the tool recklessly will create more problems than they solve.

What side are we on here? I've been in the industry for over two decades and I for one cannot wait to command the computer in complex ways in my natural language. I am not threatened by other people being able to do the same. The tool is just a tool. What you build with it is what will separate the "professionals" from the "hobbyists".


> I urge people to shift their mindsets and approach cutting edge technology from a perspective of what it could be in the future, vs what it is today

Maybe they realized that too many jumped on the blockchain BS train.


Since this post will inevitably bring out the usual comments that prompt engineering is dumb and a waste of time, here's my rebuttal to that: https://simonwillison.net/2023/Feb/21/in-defense-of-prompt-e...


Professional prompt engineers: journalists, police interrogators, detectives and private investigators, most people working in sales, most people working in the judicial system, politicians, many people working in "HR", ... maybe "tech jobs" will no longer be a meaningful designation in a few decades?


I'm currently tinkering on a customer support bot with langchain and gpt3. The bot can answer questions about services and their terms, and it can use tools to make bookings and perform some tasks like scheduling appointments, in a conversational manner. It's becoming clear to me that subtle changes in the prompt can lead to bullshit answers and GPT making up facts, despite being specifically told not to do so. If the prompt reaches some complexity threshold, the output quality goes down visibly. I learned that I have to split the bot into subtasks, each having different, smaller prompts. So, yeah, I believe prompt engineering can be a thing. At least for a while, until the models become smarter at understanding what we want from them :)
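
For what it's worth, the subtask split that worked for me looks roughly like this (a simplified sketch; `llm()` stands in for the actual gpt3 call, and the labels/prompts are just illustrative):

  def llm(prompt: str) -> str:
      ...  # hypothetical wrapper around the actual completion API

  ROUTER = ("Classify the customer message as exactly one of: "
            "TERMS_QUESTION, BOOKING, SCHEDULING, OTHER.\n\nMessage: {msg}\nLabel:")

  # Each subtask gets its own small, focused prompt instead of one giant one.
  SUBTASKS = {
      "TERMS_QUESTION": ("Answer using only the terms below. If the answer is not "
                         "in them, say you don't know.\n\nTerms:\n{terms}\n\nQ: {msg}\nA:"),
      "BOOKING":    "Extract service, date and time as JSON from: {msg}",
      "SCHEDULING": "Extract the requested appointment slot as JSON from: {msg}",
  }

  def handle(msg: str, terms: str) -> str:
      label = llm(ROUTER.format(msg=msg)).strip()
      template = SUBTASKS.get(label)
      if template is None:
          return "Let me connect you with a human."
      return llm(template.format(msg=msg, terms=terms))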


Hot? I've seen like 3 postings for prompt engineers, and all of them had extreme requirements. The idea that prompt engineering will magically replace software engineering one-to-one is ridiculous; the whole point of technology is to make things more efficient, which necessarily makes part of the workforce redundant once a certain level of efficiency is reached.


I'm just wondering what's gonna happen if OpenAI either sinks or limits its APIs severely like Facebook did in its time.


Weird how ChatGPT popped up as a newly hyped Next Big Thing just as crypto crashed. Probably not worth thinking about.


The connection is purely superficial. LLMs and cryptocurrency are two completely different technologies, and hype is always going to exist around every popular technology. We need to be able to look at something and evaluate it on its own merit rather than as a function of how popular it is.


Oh don’t get me wrong both crypto and LLMs are interesting. I am just a little dubious how much money and attention suddenly flowed into one and then later out (followed by bankruptcies and lawsuits), then into another.

We will see.


>LLMs and cryptocurrency are two completely different technologies

As mind viruses operating on human brains, they do not seem completely different technologies.


> just as crypto crashed

Did crypto crash?

Last I looked BTC was at 20K USD a pop ... strange definition of a crash for something that used to trade below a dollar.


Crypto boomed, had billions sunk into it and has had multiple high profile implosions. Yes it has absolutely crashed.


> Crypto boomed, had billions sunk into it and has had multiple high profile implosions

Same thing it's been doing every year since 2011.

Yet BTC is still trading at 20k USD ... I'm not sure we have the same definition of the word "crash".

But whatever floats your boat, man.


May your investment go To The Moon


Crypto market cap is still around ~1T USD


If you want to be part of it do not let me stop you.


Crashes don’t mean terminal.

If the dollar (or any other currency) lost 70% of its value in less than a year, then we would certainly say it crashed.


Disagree.

At this point, BTC is very much known for its extremely high volatility (source: look at the price history since inception).

There hasn't been a single year since it launched where it hasn't displayed outrageously wild swings: at this point, it's pretty clear that the wild volatility is an intrinsic attribute of this particular asset class.

Therefore: not a crash, just Bitcoin's business as usual.


Crashes don’t have to be rare either- frequency does not negate them from being crashes

you can start by seeing some of the the accepted definitions here [1]

While you may have expected crashes in bitcoin to be that hard (good for you) most investors, dozens of high profile funds/exchanges/crypto businesses did not expect bitcoin to fall to 20k USD and have failed.

[1] https://en.m.wikipedia.org/wiki/Stock_market_crash


I don't know how close AI is to replacing software engineers generally, but pretty sure this prompt engineer job isn't going to age well. And for extra irony, the better they do at their job, the greater will be the dependence upon the tech and the incentive to eliminate the middle-man (ie. the human).


Here's a prompt I just engineered - "Flash in the pan".


Large models are programmed with prompts. This is really just a “software engineering” job where the programming language looks enough like English to make it look easy.


I don't know how I feel about prompt engineers yet, but isn't Sydney (Bing's AI) basically all prompt engineering on top of ChatGPT?


Sounds fun until I realized much of this will be requests to use GPT for stuff it isn't meant to be used for. That would constantly trigger me.


That presumes we know what it's meant to be used for. I'm not sure that we do.


Well true, but I think we know some things it shouldn't be used for at least!


These jobs will never fly as fast as when OpenAI announces an official PromptWhiz certification course.


We don't need a new job position, what we really need is a LLM to generate good prompts...


It has been amazing to watch the feedback loop with tools like Stable Diffusion, where you see two types of strategy for solving problems of missing functionality. The first strategy is coming up with some long-winded manual process involving disparate tools and several steps. The other is to simply train a model to do what you want automatically. It has quickly taught me that there is no competing with this new paradigm, and those capable of solving their problems by building models will totally leapfrog those who have to do things the old way.


Could you give me some concrete examples that come to mind? I'm new to Stable Diffusion but have been using it a lot lately.


ControlNet was a stepwise improvement in being able to generate a character in a specific pose, compared to trying to coax SD into giving you an approximation of what you wanted through prompt engineering.

People are training and releasing custom models that can replace entire workflows of disparate steps normally needed to produce an image.

There was a video, or maybe it was an article, where a guy made it so he could just use natural language to describe the edits he wanted made and it would make them.


In all seriousness I've found that asking them to generate their own prompts can be very useful. It's a bit like a perl script that outputs a perl script.
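
e.g., a two-step version of that, as a sketch (`llm()` again being a hypothetical stand-in for whatever completion call you use):

  def llm(prompt: str) -> str:
      ...  # hypothetical wrapper around your completion API

  subject = "a cozy cabin interior at dawn"
  # Step 1: ask the model to write the prompt.
  meta = ("Write a detailed image-generation prompt for the following subject, "
          f"including style, lighting and lens keywords: {subject}")
  generated_prompt = llm(meta)
  # Step 2: run the prompt it wrote (here fed back to the model itself).
  print(llm(generated_prompt))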


You just need an LLM to generate random prompts and a sweatshop in Manila that filters out the bad ones.


Just put a couple of AIs in a sweatshop to do the filtering. By sweatshop I mean the AI computers get all crammed together with no cooling to save money.


is this a college dorm metaphor? Haha


I’d rather be a sandwich artist


If anyone reading this wants to do a bit of prompt engineering professionally we're hiring at Channel (https://usechannel.com) and you can get in touch with me (Cameron, one of the cofounders) at cameron@usechannel.com





