
That way of looking at it feels like it focuses on the one error while ignoring that, in all likelihood, the same action that caused it also improved 1000 other listings.

I guess I'm a glass half full kinda person; this shows me that someone is working on improving things. And I bet they're quite flushed from all the attention caused by their oversight in a big spreadsheet. I bet they won't miss the next one. :)



Why do you suggest that?

What do you think the seller's goal was in employing an LLM? Was it to improve quality, or to drive down costs?


I suspect someone was tasked with using the latest tools to improve a bunch of listings. 50 years ago they were given a typewriter for the same task; today they were given an LLM. It just feels like someone doing their job to me. Different year, different tool. We no longer hand-transcribe books, we don't lament that, and we won't lament LLMs one day either.


I think the purpose of automating product descriptions is far more likely to be to pay fewer people than to improve the quality of the listings.

I think if the purpose was to improve quality rather than to crank them out, they probably wouldn't have let such severe and obvious errors get through, certainly not in such large quantities. If I were tasked with doing this, at a minimum I would kick any listing that contained the word "OpenAI" into a QA queue rather than publishing it. Since they obviously didn't have even minimal filters to catch errors, I have to infer they never spot-checked their output for sanity. Because they didn't really give a shit.
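
To be concrete, the filter I have in mind could be as simple as the sketch below (a rough illustration, not anyone's actual pipeline; the phrase list and names are hypothetical):

    # Minimal illustrative sketch: hold listings that look like raw LLM output
    # for manual QA instead of publishing them. Phrases and names are hypothetical.
    TELLTALE_PHRASES = [
        "openai",
        "as an ai language model",
        "i cannot fulfill this request",
    ]

    def route_listing(description: str) -> str:
        """Return 'qa_queue' if the description contains a telltale phrase, else 'publish'."""
        text = description.lower()
        if any(phrase in text for phrase in TELLTALE_PHRASES):
            return "qa_queue"
        return "publish"

    # A listing that leaked the model's refusal text gets held for review.
    assert route_listing("I'm sorry, but as an AI language model I cannot ...") == "qa_queue"

Even a crude keyword check like that would have caught the listings everyone is laughing at, which is why its absence tells me quality wasn't the goal.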

It feels like someone doing their job to me too, sure. That job being to spam. When I see a watch, I infer the existence of a watchmaker. When I see a pile of spam, I infer the existence of a spammer.



