
Not mentioned in the post, but it occurred to me while reading:

I think I will start changing my functional prompts to require a JSON format in responses so that various aspects of the response don't need to be manually parsed, and requests can be more reliably piped to subsequent requests.
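The payoff of requiring JSON is that downstream code can parse fields mechanically instead of scraping prose. A minimal sketch of the parsing side (the prompt wording and sample response are hypothetical; models often wrap JSON in a Markdown fence, so that case is handled too):

```python
import json

def parse_model_json(text: str) -> dict:
    """Parse a JSON object from a model response, stripping the optional
    Markdown code fence some models wrap their output in."""
    stripped = text.strip()
    if stripped.startswith("```"):
        # Drop the opening fence line (possibly "```json") and the closing fence.
        stripped = stripped.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(stripped)

# Hypothetical response to a prompt ending in:
#   'Respond only with JSON: {"summary": ..., "tags": [...]}'
response = '```json\n{"summary": "fix auth bug", "tags": ["backend"]}\n```'
data = parse_model_json(response)
# data["summary"] and data["tags"] can now be fed directly into the next request.
```

A `json.JSONDecodeError` here is also a useful signal that the model ignored the format instruction and the request should be retried.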




> Not mentioned in the post, but it occurred to me while reading:

> I think I will start changing my functional prompts to require a JSON format in responses so that various aspects of the response don't need to be manually parsed, and requests can be more reliably piped to subsequent requests.

That adds more characters though, so it may hit token limits faster. I was thinking about how to build better contexts, then found out about LangChain and creating a sort of long-term memory using vector databases. Still trying to figure this out, but once I do, I think it'll be amazing what I can do with it.


Beyond artificially increasing prompt length, vector DBs also allow you to inject domain knowledge into autoregressive LMs.


yeah, it's basically like updating the 'cutoff date' on ChatGPT but only w/ specific data pertinent to a specific topic, right?
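Roughly, yes: the vector-DB pattern described above is retrieve-then-prompt. Documents are embedded, the query is embedded, the nearest documents are fetched, and only those are prepended to the prompt. A minimal sketch using hand-written toy embeddings (in practice the vectors would come from an embedding model and live in a real vector store; the snippets here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector store": (embedding, text) pairs standing in for a real DB.
store = [
    ([1.0, 0.0, 0.2], "Our API rate limit is 100 req/min."),
    ([0.1, 1.0, 0.0], "Deploys happen every Tuesday."),
]

def retrieve(query_vec, k=1):
    """Return the k texts whose embeddings are most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine(item[0], query_vec),
                    reverse=True)
    return [text for _, text in ranked[:k]]

# Only the retrieved snippet is injected, not the whole corpus, so the
# prompt stays short while still carrying post-cutoff domain knowledge.
context = retrieve([0.9, 0.1, 0.1])[0]
prompt = f"Context: {context}\n\nQuestion: what's our rate limit?"
```

This is why it feels like "updating the cutoff date" for one topic: the model never retrains, it just sees the relevant facts at query time.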


I had other examples where I request the output in Markdown and it acts accordingly—definitely a nice feature.



