
It's just software. It takes skill and labor to achieve the outputs you want. Is someone who uses Photoshop all day not an artist? Or someone who writes text for a compiler not a programmer?

You've got a huge blind spot if you think prompt engineer isn't already a thing.



>You've got a huge blind spot if you think prompt engineer isn't already a thing.

It may be a "thing", because generating BS is a viable business model and ChatGPT makes it more efficient.

...but I submit, as a working hypothesis, that it is completely impossible to gain knowledge you do not already possess from a language model, no matter how clever your prompting.

I'm very interested in counter-examples, but I have already seen a few that turned out to be fake.


> it is completely impossible to gain knowledge you do not already possess from a language model

Not true. Emergent abilities are an active research area in LLMs [0]. They even have pretty graphs on the topic.

[0] https://ai.googleblog.com/2022/11/characterizing-emergent-ph...


I don't really understand what you're saying. Clearly the results one produces with the software have value, but you say it's all "BS." Does that just mean you don't like it, or what? If someone asks me to produce some concept art for a project they've just started, and I write some prompts to produce that concept art, what is "BS" about the art I produced? What does gaining knowledge have to do with it?

Is Art Director just a "BS" job? I don't get it.


Maybe “generic and meaningless” is a better descriptor.


I don’t know how to homebrew. If I ask ChatGPT to help me get started homebrewing, it lists helpful steps to start homebrewing. I can ask it to expand on any of those steps until the breakdown is actionable.

Checking some of the facts it gives me against other sites, it's all correct, but better organized and more accessible. There's your counter-example. This works for basically any well-documented process.


That's so obviously false that I literally can't imagine how you could believe it. GPT-3 certainly isn't 100% accurate, but neither is it so perfectly unreliable that no one could ever get it to produce a relevant fact not in the prompt. And even if it were, it would probably still be useful for learning languages.


> GPT-3 certainly isn't 100% accurate, but neither is it so perfectly unreliable that no one could ever get it to produce a relevant fact not in the prompt.

I think I understand the sense in which you claim it produces relevant facts not in the prompt.

It's not that we differ on easily observable behavior of the system.

It's that I question whether GPT-3 is "producing" these identifiable facts at all, and, if the user is "producing" them instead, whether they can possibly be "relevant".


>It's that I question whether GPT-3 is "producing" these identifiable facts at all, and, if the user is "producing" them instead, whether they can possibly be "relevant".

I'm not sure what you're trying to say. That GPT-3 is just vomiting stuff up out of its training set and not producing any new knowledge? But that's totally irrelevant to the issue of whether it can transmit knowledge to a user, who presumably hasn't memorized the entire training set.


>That GPT-3 is just vomiting stuff up out of its training set and not producing any new knowledge?

Hmm. Seems obvious to me that it's producing new output, but that output isn't knowledge and it can't be.

Sometimes ChatGPT tells me something that turns out to be correct and relevant. And I get excited, and then I Google it and what it told me is the first hit on Stack Overflow.

There's a subtle point here. Other people might say "well, ChatGPT is ok, but no better than Google", or something like that, but I differ on that. The key is that I don't know the answer is from Stack Overflow until I check independently. So saying it's as good as Google gives it too much credit: the amount of knowledge it can output is not lower bounded by its training set, it's actually zero, because every correct answer sits adjacent to an infinite amount of BS, and by its nature you always need an external mechanism to separate the two.


Synthetic knowledge is still novel knowledge if you haven’t put the pieces together before.


Can you unpack "synthetic knowledge", cause I don't know what that really means.


New knowledge synthesized from existing knowledge. For example, you might know of A and B and maybe think that A -> B or B -> A based on their co-occurrence, but an AI might make you realize that C -> A as well as C -> B.


Ok, that's straightforward, I just don't care for the idea that AI can do it or even help.

You might synthesize new knowledge.

When ChatGPT produces new output, it's not synthesizing new knowledge. It can't even output the knowledge it was trained with, as long as it lacks the ability to tag it in a trustworthy way.

It's not that it's always BS, it's that it's almost always BS and if you don't know the answer in advance or independently, you can't distinguish it from anything within the model.


ChatGPT taught me about vectors and other ways to dynamically create an array.


How do you know?


For a side project, it not only gave me the migrations, it suggested all the column names/datatypes. Basically I just said: create me a Laravel migration for an organization; this is a multi-tenant SaaS app, where an organization is basically a team, or tenant (you can think of these as companies as well); now make a migration with columns that might generally be included in a company or organization.
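
To give a concrete picture, a migration along those lines can look roughly like this (just a sketch; the columns are my illustrative guesses, not the exact ones it generated):

    <?php

    use Illuminate\Database\Migrations\Migration;
    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;

    // Illustrative multi-tenant "organizations" table; every tenant
    // (team/company) gets one row, and other tables reference it.
    return new class extends Migration
    {
        public function up(): void
        {
            Schema::create('organizations', function (Blueprint $table) {
                $table->id();
                $table->string('name');
                $table->string('slug')->unique();   // tenant identifier, e.g. for subdomains
                $table->string('email')->nullable();
                $table->string('phone')->nullable();
                $table->string('address')->nullable();
                $table->string('website')->nullable();
                $table->timestamps();
                $table->softDeletes();
            });
        }

        public function down(): void
        {
            Schema::dropIfExists('organizations');
        }
    };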

It not only spit out the model but also the casts/fillable attributes on the model as well. It even helped me work through an idea I didn't know the name of: I was thinking it was EAV, but instead it's metaform/metafields, basically the way WordPress can dynamically create content 'types' (Django/Wagtail can do this too). With ChatGPT I think I've nailed down how to do this using polymorphism with the least amount of headache, sketched below.
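
The metafields part, roughly (again just a sketch: "metafields"/"metafieldable" are names I'm using for illustration, and the cast-hint column is one way to do it, not the only way):

    <?php

    use Illuminate\Database\Eloquent\Model;
    use Illuminate\Database\Eloquent\Relations\MorphMany;
    use Illuminate\Database\Eloquent\Relations\MorphTo;
    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;

    // In a migration: one table holds dynamic fields for any model,
    // via a polymorphic relation instead of classic EAV.
    Schema::create('metafields', function (Blueprint $table) {
        $table->id();
        $table->morphs('metafieldable'); // metafieldable_id + metafieldable_type
        $table->string('key');
        $table->text('value')->nullable();
        $table->string('type')->default('string'); // cast hint: string, int, bool, json...
        $table->timestamps();
        $table->unique(['metafieldable_type', 'metafieldable_id', 'key']);
    });

    class Metafield extends Model
    {
        protected $guarded = []; // everything mass-assignable, for this sketch

        public function metafieldable(): MorphTo
        {
            return $this->morphTo();
        }
    }

    // Any model (Organization, Contact, Product, ...) can then take
    // dynamic fields with no schema change:
    class Organization extends Model
    {
        public function metafields(): MorphMany
        {
            return $this->morphMany(Metafield::class, 'metafieldable');
        }
    }

The unique index keeps one value per key per record, and morphs() is what lets any content 'type' attach fields without new tables, which is basically the WordPress post-meta pattern.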

I want to create a CRM/CMS/ERP solution that can be very 'moldable' to different use cases, and this looks like a good fit. Either way, just being able to discuss my 'options' with the AI was like a major brain dump and really improved my flow.

YMMV, but if you can't get it to work like this, that doesn't mean it doesn't work, just that it doesn't work for you. And if I can save 2-3 hours for every hour previously worked, that's valuable to me, especially as a freelancer who charges per project, not hourly.


Because I tried the code and it worked.


There are lots of people with misconceptions of LLMs. It will take time to adjust.

I reached the same conclusion as you, but I see a totally different path to take regarding information propagation (how GPT works). For example, cells merge information monotonically. This is how neural networks balance too, but it could be applied in new/undiscovered ways.

https://www.youtube.com/watch?v=HB5TrK7A4pI&list=LL&index=4


I meant: how do you know, or why do you say, that ChatGPT taught you?

It doesn't know what works.



