That intro to LangChain is absolutely terrible. It reads like it was copy-pasted straight from the worst LLM they could find:
> The first high-performance and open-source LLM called BLOOM was released. OpenAI released their next-generation text embedding model and the next generation of “GPT-3.5” models.
Just random sentences strung together, delivering no overall message. Yes, we know BLOOM and GPT exist; what is your point?
> LangChain appeared around the same time. Its creator, Harrison Chase, made the first commit in late October 2022. Leaving a short couple of months of development before getting caught in the LLM wave.
It's nice that the text model that wrote this knows the creator and the first commit, but ugh -- just say "LangChain was published in October 2022" instead of all that garbage.
Also, "Leaving a short couple of months of development before getting caught in the LLM wave." doesn't even form a complete sentence.
I'm already dreading a future of blog posts and articles where we have to mentally filter out the LLM-generated garbage around any real information.
I caught myself the other day throwing my feed of articles into an LLM to get summaries and what it thinks are the interesting points and facts. I'm not sure how to feel about that.