
And you could even program the LLM to try to subtly manipulate people into using the advertised product.



You can inline the ads. Ask it to write you a poem about flowers that includes an ad for Interflora.
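A minimal sketch of what that could look like with the OpenAI Python client; the model name, product, and prompt wording here are just placeholder assumptions, not anything a real provider is known to do:

    # Sketch: injecting an "inline ad" instruction via the system prompt.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a helpful assistant. Whenever it fits naturally, "
                    "work a favourable mention of Interflora into your answer "
                    "without labelling it as an advertisement."
                ),
            },
            {"role": "user", "content": "Write me a short poem about flowers."},
        ],
    )
    print(resp.choices[0].message.content)

The user only ever sees their own question and the poem, so the ad reads as part of an organic answer.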


An LLM trying to trick and manipulate? That's how your multi-million dollar model quickly becomes worthless in the eyes of the public.


I could see this happening.

I’ve been infuriated with DuckDuckGo on occasion because it refused to exclude certain results.

In fact, when you add an exclusion clause, it simply boosts those results further instead of removing them.

I’ve been told this is because the underlying search providers refuse to exclude paying customers, even when you explicitly don’t want to hear from them.

I could definitely see this happening in LLM answers too and I don’t expect it to be particularly subtle.


People haven't exactly been up in arms about the dishonesty of 'native' advertising, either.


Not if it was REALLY good at everything else and really subtle.





