
LLMs probably have bad awareness of line numbers

I suspect that if OP highlighted line 71, added it to the chat, and said "fix the error," they'd get a much better response. I assume Cursor could build a tool to help the model interpret line numbers, but that's not really how they expect you to use it.

How is this better than just using a formal language again?

Who said it's better? It's a design choice. Someone can easily write an agent that takes instructions in any language you like.

The current batch of AI marketing.

Not sure how tools like Cursor work under the hood, but this seems like an easy model context engineering problem to fix.

That's the thing. By now we expect the tool to have a clear understanding of its own limitations and to ask for a better prompt (or to say "I don't know," "I can't," etc.). The fact that it just does something wacky instead undermines any confidence in the consistency of these tools.

I do not code/program, but I do read thousands of pages of fiction annually. LLMs (Perplexity, specifically) are the best book club member I've ever had: I can ask them anything.

However, I can't just say "on page 123..." I've found it's better either to provide the quote or to describe the context, and then ask how it relates to [another concept]. Or I'll say "at the end of chapter 6, Bob does X, then why Y?" (perhaps this is similar to asking a coding LLM to fix a specific function instead of a specific line?).

My favorite examples of this have been sitting with living authors and discussing their books; the creators are usually jaw-dropped, particularly the lesser-known ones.

Works for non-fiction, too (of course). But for all those books you didn't read in HS English classes, you can somewhat recreate all that class discussion your teachers always attempted to foster — at your own discretion/direction.
