Yes, I’m a huge fan of how easy it is to whip up quick isolated prototypes in Claude artifacts.
There’s a risk of breaking changes in libs causing frustration in larger codebases, though. I’ve been working with LLMs in a Next.js App Router codebase for about a year, and I regularly struggle with models trained primarily on the older Pages Router. LLMs often produce incompatible or even mixed-compatibility code. It doesn’t really matter which side of the fence your code is on; both are polluted by the other. Newer, more powerful models are getting better, but even SOTA reasoning models don’t fully solve this.
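To make the confusion concrete, here’s a minimal sketch of the two data-fetching idioms the models keep mixing up (no Next.js imports; `fetchPosts` and the types are stand-ins, not real project code):

```typescript
type Post = { id: number; title: string };

// Stand-in for a real data source.
async function fetchPosts(): Promise<Post[]> {
  return [{ id: 1, title: "hello" }];
}

// Pages Router idiom (pages/posts.tsx): data fetching lives in a special
// getServerSideProps export, and the page component receives it as props.
export async function getServerSideProps() {
  const posts = await fetchPosts();
  return { props: { posts } };
}

// App Router idiom (app/posts/page.tsx): the page component itself is an
// async server component and fetches its own data. getServerSideProps is
// not supported here — yet LLMs will happily emit both in the same file,
// exactly as this sketch does.
export default async function PostsPage(): Promise<string> {
  const posts = await fetchPosts();
  return posts.map((p) => p.title).join(", ");
}
```

A file like this typechecks fine, which is part of the problem: nothing flags the mismatch until Next.js itself rejects the unsupported export at build time.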
Lately I’ve taken to including a text file in the LLM’s context that spells out the project’s dependency versions and why they matter, but there’s only so much that can do right now to overcome the weight of training on dated material. I imagine tools like Cursor will get better at doing this for us silently in the future.
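For what it’s worth, mine is just a short plain-text note; something along these lines (the version numbers here are illustrative, not a recommendation):

```
## Dependency notes — read before generating any code

- next 14.x — App Router ONLY. Never use pages/, getServerSideProps,
  getStaticProps, or _app.tsx patterns. Use the app/ directory, async
  server components, and route handlers (app/api/*/route.ts).
- react 18.x — components are server components by default; add
  "use client" only where hooks or event handlers are needed.
- Data fetching: await fetch() inside async server components,
  not useEffect.
```

Blunt “never use X” lines seem to work better than a bare version list; the model needs to be told which patterns from its training data are off-limits, not just which release is installed.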
There’s an interesting tension brewing between keeping dependencies up to date, especially in the volatile and brittle front-end world, and writing the code LLMs were actually trained on.