Hacker News

There was a time, when CGI first took off, when everything looked too polished and shiny and everyone found it uncanny. That spawned a whole industry producing virtual wear, tear, dust, grit and dirt.

I wager we will soon see the same for text. Automatic insertion of the right amount of believable mistakes will become a thing.



You can already do that easily with ChatGPT. Just tell it to rate the text it generated on a scale from 0-10 in authenticity. Then tell it to crank out similar text at a higher authenticity scale. Try it.


Without some form of watermarking, I do not believe there is any way to differentiate. What that watermarking would look like, I have no clue.

Pandora's box has been opened with regard to large language models.


I thought words that rose in popularity because of LLMs ("delve", for example) might be an indicator of watermarking, but I am not sure.


It's not a very good "watermark". Setting aside that a slightly clever student can use something like https://github.com/sam-paech/antislop-sampler/tree/main to suppress those words, students who have been exposed to AI-written text will naturally use them more often anyway.
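The word-frequency idea from the parent comment can be sketched as a toy check. The marker list, threshold-free scoring, and function name below are my own illustrative assumptions, not a validated detector or any real watermarking scheme:

```python
# Toy sketch: score text by the fraction of tokens drawn from a small set of
# words whose usage reportedly rose with LLM-generated text. Purely illustrative.
import re
from collections import Counter

# Assumed marker set; "delve" is from the thread, the rest are common examples.
LLM_MARKERS = {"delve", "tapestry", "multifaceted", "nuanced", "crucial"}

def marker_rate(text: str) -> float:
    """Return the fraction of word tokens that appear in LLM_MARKERS."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in LLM_MARKERS) / len(tokens)

sample = "Let us delve into this nuanced and multifaceted tapestry of ideas."
print(round(marker_rate(sample), 2))  # high rate for marker-heavy text
```

As the comment above notes, this is easy to defeat (a sampler can ban the words) and prone to false positives once readers absorb LLM style, which is exactly why it makes a poor watermark.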



