
>Let’s hold the creators of these models accountable, and everything will be better.

Shall we hold Adobe responsible for people photoshopping their ex's face into porn as well?



I don’t think the marketing around Photoshop and ChatGPT is similar.

And that matters. Just like with self-driving cars, as soon as we hold the companies accountable to their claims and marketing, they start bringing the hidden footnotes to the fore.

Tesla’s FSD then suddenly becomes a level 2 ADAS as admitted by the company lawyers. ChatGPT becomes a fiction generator with some resemblance to reality. Then I think we’ll all be better off.


I actually agree with this comment more than after my initial read. You raise some valid concerns about innovation that regulation could address.

I guess the part I’m unsure about is the assertion of dissimilarity to Photoshop, or whether the marketing is the issue at hand. (E.g. did Adobe do a more appropriate job of conveying in its marketing that the software is designed for editing, not for doctoring or falsifying facts?)


I think ChatGPT and Photoshop are both "designed for" the creation of novel things.

In Photoshop, though, the intent is clearly up to the user. If you edit that photo, you know you're editing the photo.

That's fairly different from ChatGPT, where you ask a question and the product has been trained to answer you in a highly confident way that makes it sound like it knows more than it actually does.


If we’re moving past the marketing questions/concerns, I’m not sure I agree.

For me, for now, ChatGPT remains a tool/resource, like Google, Wikipedia, Photoshop, Adaptive Cruise Control, and Tesla FSD (for the record, despite mentioning FSD, I don’t think anyone should ever take a nap while operating a vehicle with any currently available technology).

Did I miss when OpenAI marketed ChatGPT as a truthful resource for legal matters?

Or is this not just an appropriate story that deserves retelling to warn potential users against misusing this technology?

At the end of the day, for an attorney, a legal officer of the court, to have done this is absolutely not the technology’s, nor marketing’s, fault.


> Did I miss when OpenAI marketed ChatGPT as a truthful resource for legal matters?

It's in the product itself. On the one hand, OpenAI says: "While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice."

But at the same time, once you click through, the user interface is presented as a sort of "ask me anything," and they've intentionally crafted the product to take an authoritative voice regardless of whether it's creating "incorrect or misleading" information. If you look at the documents submitted by the lawyer using it in this case, it was VERY confident about its BS.

So it's understandable that a lay user thinks, "OK, occasionally it's wrong, but here it's giving me a LOT of details, so this must be a real case." Responsible for not double-checking, yes. I don't want to remove any blame from the lawyer.

Rather, I just want to also put some scrutiny on OpenAI for the impression created by the combination of their product positioning and product voice. I think it's misleading, and I don't think it's too much to expect them to be aware of the high potential for misuse that results.

Adobe presents Photoshop very differently: it's clearly a creative tool for editing, and something like "content-aware fill" or "generative fill" is positioned as "create some stuff to fill in" even while you're using it.


I don't think it "legal matters" or not is important.

OpenAI is marketing ChatGPT as an accurate tool, and yet a lot of the time it is not accurate at all. Imagine a Wikipedia clone that claims the earth is flat cheese, or a cruise control that crashes your car every 100th use. Would you call that "just another tool"? Or would it be a "dangerously broken thing that you should stay away from unless you really know what you are doing"?


Did I miss when OpenAI marketed ChatGPT as a truthful resource?


Maybe we should hold them accountable if they were trying to make something completely different and that was the output.



