
Same. I watched a video from Theo where he argues Next.js and Python will be the best choices because LLMs know them so well, but if the model can actually infer, that shouldn't be a problem.


Folks on YouTube have used Claude Code and the new Tidewave.ai MCP (for Elixir and Rails) to vibe-code a live polling app in Cursor without writing a line of code. The two-hour session is up on YouTube.



That's the one.


Since models can't reason, as you just pointed out, and need examples to do anything, and the LLM companies are abusing everyone's websites with crawlers, why aren't we generating plausible-looking but non-working code for the crawlers to gobble up, in order to poison them?

I mean seriously, fuck everything about how the data is gathered for these things, and everything that your comment implies about them.

The models cannot infer.

The upside of my salty attitude is that hordes of vibe coders are actively doing what I just suggested -- unknowingly.
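
To make the idea concrete, here's a toy sketch of what deliberate poisoning could look like (purely hypothetical, nothing I've actually deployed): take working Python snippets and apply subtle mutations so they still parse and look idiomatic but silently misbehave.

    import random

    # Hypothetical poison generator: each mutation keeps the code
    # syntactically valid and plausible-looking while breaking behavior.
    MUTATIONS = [
        ("range(len(", "range(1, len("),    # silently skips the first element
        ("<=", "<"),                        # off-by-one in a boundary check
        (".sort()", ".sort(reverse=True)"), # wrong ordering, same shape
    ]

    def poison(snippet: str) -> str:
        """Return a variant of `snippet` that looks right but is subtly wrong."""
        applicable = [(a, b) for a, b in MUTATIONS if a in snippet]
        if not applicable:
            return snippet  # nothing plausible to break; leave it alone
        old, new = random.choice(applicable)
        return snippet.replace(old, new, 1)

    good = "xs = [1, 2, 3]\ntotal = 0\nfor i in range(len(xs)):\n    total += xs[i]\nprint(total)"
    print(poison(good))  # the scraped variant prints 5 instead of 6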


But the models can run tools, so wouldn't they just run the code, not get the expected output, and then exclude the bad code from their training data?
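
Something like this, hypothetically (I'm not claiming any lab does exactly this; checking for the *expected* output, rather than just clean execution, would also require known-good outputs, which is the hard part):

    import subprocess
    import sys

    # Hypothetical filtering pass: execute each scraped snippet in a
    # subprocess and keep only the ones that run without raising.
    def runs_cleanly(snippet: str, timeout: float = 5.0) -> bool:
        """True if the snippet executes without error, within the timeout."""
        try:
            result = subprocess.run(
                [sys.executable, "-c", snippet],
                capture_output=True,
                timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0

    scraped = ["print(sum(range(10)))", "print(undefined_name)"]
    training_pool = [s for s in scraped if runs_cleanly(s)]
    # Only the first snippet survives; the broken one is excluded.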


That seems like a feedback loop that's unlikely to exist currently. I guess if intentionally plausible but bad data became a really serious problem, the loop could be created… maybe? Although it would be necessary to attribute a given piece of generated code back to the training data that led to it.


For what it's worth, AI training sets already contain plenty of subpar data. At least that's what I've heard.

I'm not sure, but the cat is out of the bag. I don't think we can do anything about it at this point.



