
That intro to Langchain is absolutely terrible. Like it was copied from the worst LLM they could find and pasted straight in:

> The first high-performance and open-source LLM called BLOOM was released. OpenAI released their next-generation text embedding model and the next generation of “GPT-3.5” models.

Just random sentences strung together, delivering no overall message. Yes, we know BLOOM and GPT exist; what is your point?

> LangChain appeared around the same time. Its creator, Harrison Chase, made the first commit in late October 2022. Leaving a short couple of months of development before getting caught in the LLM wave.

It's nice that the text model that wrote this knows the creator and the first commit, but ugh -- just say "Langchain was published in October 2022" instead of all that garbage.

Also, "Leaving a short couple of months of development before getting caught in the LLM wave." doesn't even form a complete sentence.

I'm already hating the future of blog posts and articles where we have to mentally filter out all the LLM-generated garbage around any real information.



I caught myself the other day throwing my feed of articles into an LLM to give me summaries and what it thinks are interesting points / facts. I'm not sure how to feel about this.


Why wouldn't you use a language model to summarize? It is one of the most useful things a statistical language model is capable of.

Though it might be good to tune it to help identify the parts you find interesting, especially if you want it to surface salient details.
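For what it's worth, here is a minimal sketch of that kind of summarize-and-surface workflow, assuming the OpenAI Python client (openai >= 1.0); the model name, prompt wording, and summarize() helper are illustrative assumptions, not anything from the article or the parent comment:

    # Minimal sketch: feed an article to a chat model and ask for a summary
    # plus the points it considers salient. Assumes OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize(article_text: str) -> str:
        """Return a short summary plus the points the model finds salient."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model would do
            messages=[
                {"role": "system",
                 "content": "Summarize the article in three sentences, then "
                            "list up to three salient facts or interesting "
                            "points as bullets."},
                {"role": "user", "content": article_text},
            ],
        )
        return response.choices[0].message.content

Tuning it to your interests then mostly comes down to iterating on that system prompt with the kinds of details you actually want surfaced.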


Is the manual itself good? Guess we will have to go through it.



