In a world of vibe coders, those who can still debug on their own will have quite a valuable skill.


At least for a few more generations of models.

I just finished creating a multiplayer online party game using only Claude Code. I didn't edit a single line. However, there is no way someone who doesn't know how to code could get where I am with it.

You have to have an intuition for the sources of a problem. You need to be able to at least glance at the code and understand when and where the AI is flailing, so you know when to backtrack or reframe.

Without that, you are just as likely to totally mess up your app. Which also means you need to understand source control, when to save, and how to test methodically.


I was thinking of that, but asking the right questions and learning the problem domain just a little bit ("getting the gist of things") will help a complete newbie generate code for complex software.

For example, in your case there is the concept of message routing, where a message sent to the room is copied to all the participants.
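
A minimal sketch of that routing idea, assuming a TypeScript server on the ws library (Room, join, and broadcast are illustrative names, not actual code from the game):

    // Hypothetical room-based message routing with the ws package.
    import { WebSocket } from "ws";

    class Room {
      private participants = new Set<WebSocket>();

      join(socket: WebSocket): void {
        this.participants.add(socket);
        socket.on("close", () => this.participants.delete(socket));
      }

      // A message sent to the room is copied to every participant.
      broadcast(message: string, sender?: WebSocket): void {
        for (const socket of this.participants) {
          if (socket !== sender && socket.readyState === WebSocket.OPEN) {
            socket.send(message);
          }
        }
      }
    }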

You have timers, animation sheets, events, triggers, etc. A question that extracts such architectural decisions and relevant pieces of code will help the user understand what they are actually doing and also help debug the problems that arise.

It will of course take them longer, but it is possible to get there.


So I agree, but we aren't at that level of capability yet: at some point it inevitably hits a wall, and you need to dig deeper to push it out of the rut.


No-code has been the hot new thing for the past 40 years.


Surely there is a scale limit to how big the application can be using this approach.


Why do you say that? I would argue that as long as your tests and interfaces are clearly defined, there's no reason it couldn't scale indefinitely.


Practically there is, with Claude Code at least.

Hypothetically, if you codified the architecture as a form of durable meta tests, you might be able to significantly raise the ceiling.
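
For example (a sketch only, assuming a Node/Jest test setup; the directory layout and import convention here are my own illustration, not the poster's project):

    // Architectural "meta test": fail the suite if server-side game code
    // ever imports from the client layer. Paths are hypothetical.
    import * as fs from "fs";
    import * as path from "path";

    function sourceFiles(dir: string): string[] {
      const files: string[] = [];
      for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) files.push(...sourceFiles(full));
        else if (entry.name.endsWith(".ts")) files.push(full);
      }
      return files;
    }

    test("server code never depends on client code", () => {
      for (const file of sourceFiles("src/server")) {
        const source = fs.readFileSync(file, "utf8");
        expect(source).not.toMatch(/from ["'].*\/client\//);
      }
    });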

Decomposing to interfaces seems to actually increase architectural entropy instead of decrease it when Claude Code is acting on a code base over a certain size/complexity.


Curious, do you supervise the code itself or at least understand what the code is trying to do?

By "I didn't edit a single line", do you still prompt the agent to fix any issues you found? If so, is that consided an edit?


So, yes and no. I often just let it work by itself. Towards the very end, when I had more of a deadline, I would watch and interrupt it when it was putting implementations in places that broke its architecture.

I think only once did I ever give it an instruction that was related to a handful of lines (There certainly were plenty of opportunities, don't get me wrong).

When troubleshooting, I did occasionally read the code. There was an issue with player-to-player matching where it was just kind of stuck, and I gave it a simpler solution (conceptually, not actual code) that worked for the design constraints.

I did find myself hinting/telling it to do things like centralize the CSS.

It was a really useful exercise in learning. I'm going to write an article about it. My biggest insight is that "good" architecture for a current-generation AI is probably different than for humans because of how attention and context work in the models/tools (at least for the current Claude Code). Essentially, "out of sight, out of mind" creates a dynamic where decomposing code leads to an increase in entropy when a model is working on it.

I need to experiment with other agentic tools to see how their context handling impacts the possible scope of work. I extensively use GitHub Copilot, but I control scope, context, and instructions much more tightly there.

I hadn't really used hands-off automation much in the past because I didn't think the models were at a level where they could handle a significantly sized unit of work. Now they can, with large caveats. There also is a clear upper bound with Claude Code, but that can probably be significantly improved by better context handling.


So if you're an experienced, trained developer, you can now add AI as a tool to your skill set? This seems reasonable, but it is also a fundamentally different statement than what every. single. executive. is parroting to the echo chamber.


I already imagine future devs wide-eyed saying things like: "He/She can _debug_!!"


I have a strong memory from the start of my career, when I had a job setting up Solaris systems and there was a whispered rumour that one of the senior admins could read core files. To the rest of us, they were just junk that the system created when a process crashed and that we had to find and delete to save disk space. In my mind I thought she could somehow open the files in an editor and "read" them, like something out of the Matrix. We had no idea that you could load them into a debugger which could parse them into something understandable.
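
(For anyone who never got the trick revealed: on a modern Linux box it's roughly this. The binary path and core-file name are illustrative.)

    $ gdb /usr/local/bin/myserver core.1234   # the binary plus its core dump
    (gdb) bt                                  # backtrace at the moment of the crash
    (gdb) info threads                        # what every thread was doing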


I once showed a reasonably experienced infrastructure engineer how to use strace to diagnose some random hangs in an application, and it was like he had seen the face of God.
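
Something as simple as attaching to the hung process is often enough (the PID here is illustrative):

    $ strace -f -T -p 12345   # -f follows child threads, -T times each syscall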


This reminded me of Asimov's short story "The Feeling of Power" https://hex.ooo/library/power.html


Thank you for that, what a fun read :D


I mean, I've already heard comments about myself when I went and RTFM'd:

"You read manuals?!?"

"... Yeah? (pause) Wait, you don't?!?!?"


(Anecdote) Best job I ever had: I walked in and they were like, "Yeah, we don't have any training or anything like that, but we've got a fully set-up lab and a rotating library of literature." <My Boss>: "Yeah, I'm not going to be around, but here are the office keys." Don't blow up the company, pretty much.


I don't really see the connection here, but it was a nice anecdote of a trusting environment.


To be honest, I do find most manuals (man pages) horrible for quickly getting information on how to do something, and here LLMs do shine for me (as long as they don't mix up version numbers).


For man pages, you have to already know what you want to do and just want information on how exactly to do it. They're not for learning about the domain. You don't read the find manual to learn the basics of filesystems.
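
Which is why the keyword search exists for the discovery step (the terms here are just examples):

    $ man find           # exact flags, once you know find is the tool
    $ man -k filesystem  # apropos-style keyword search across page descriptions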


I love manual pages, at least on/from OpenBSD.


Imagine reading in 2025, when you can just watch a TikTok about it!

/s


Pretty much. The hesitancy to read documentation was there long before TikTok and LLMs.

"Teach me how to use Linux [but I hate reading documentation]".

It infuriated me.


Yesterday I tried to vibe install (TM) the marker-api Docker image and failed miserably. Still, I was able to try. :)



