Has langchain had positive ROI for anyone beyond the initial prototype of their app? My experience has me skeptical - I end up feeling like I'm painted into a corner and need to start over. Maybe it would pay off if I stuck to just PromptTemplate and LLMChain, but at that point I can just use function composition and formatted strings.
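To make that concrete, this is the kind of thing I mean - a minimal sketch assuming the openai Python client, with the model name and prompt shape made up for illustration:

    from openai import OpenAI

    client = OpenAI()

    def build_prompt(question: str, context: str) -> str:
        # a plain f-string where langchain would use a PromptTemplate
        return f"Use the context below to answer.\n\nContext:\n{context}\n\nQuestion: {question}"

    def answer(question: str, context: str) -> str:
        # plain function composition where langchain would use an LLMChain
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": build_prompt(question, context)}],
        )
        return resp.choices[0].message.content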
Like I'd be blown away if someone had a production app where they were able to swap LLM providers (and nothing else) due to langchain. And if that expectation is too high, then why not just code against the openai API?
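And even if portability did matter one day, a thin wrapper of your own is about all it takes - a hypothetical sketch, assuming the openai and anthropic clients, with made-up model names:

    from openai import OpenAI
    from anthropic import Anthropic

    def complete(prompt: str, provider: str = "openai") -> str:
        # swap providers by changing one argument; no framework needed
        if provider == "openai":
            resp = OpenAI().chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        resp = Anthropic().messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text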
Outside a prototype, what's the benefit? The steps I envision are: 1) turn the prompt into embeddings, 2) retrieve matching examples from a vector db, 3) load them as examples, 4) prompt the LLM with those examples.
Am I missing something?
Why on earth would you want to import multiple dependencies for this?
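For what it's worth, here's a rough sketch of steps 1-4 with nothing but the openai client and numpy - the "vector db" is just an in-memory list with cosine similarity, and the model names and example corpus are made up:

    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(text: str) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-3-small", input=text)
        return np.array(resp.data[0].embedding)

    # 2) the "vector db": precomputed embeddings for a handful of examples
    examples = ["example A ...", "example B ...", "example C ..."]
    index = [(ex, embed(ex)) for ex in examples]

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def answer(prompt: str, k: int = 2) -> str:
        q = embed(prompt)                                  # 1) prompt -> embedding
        best = sorted(index, key=lambda p: -cosine(q, p[1]))[:k]
        shots = "\n\n".join(ex for ex, _ in best)          # 3) load the matches as examples
        resp = client.chat.completions.create(             # 4) prompt the LLM with them
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Examples:\n{shots}\n\nTask: {prompt}"}],
        )
        return resp.choices[0].message.content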