Hacker News | sswatson's comments

Neither of those assertions means anything. For many years, people have been using them to make confident predictions about what AI systems will never be able to accomplish. Those predictions are routinely falsified within months.

Of course, some of those predictions may also turn out to be true. But either way, we have abundant empirical evidence that the reasoning is not sound.


1. Because no one knows how to do it. 2. Consider (a) a tool that can apply precise methods when they exist, and (b) a tool that can do that and can also imperfectly solve problems that lack precise solutions. Which is more powerful?


The author has exclusive claim to their own aesthetic sensibilities, of course, but the language in the piece suggests some degree of universality. Whereas in fact, effectively no one who is knowledgeable about math would share the view that noncommutative operations are ugly by virtue of being noncommutative. It’s a completely foreign idea, like a poet saying that the only beautiful poems are the palindromic ones.


Recently I've used Claude Code to build a couple TUIs that I've wanted for a long time but couldn't justify the time investment to write myself.

My experience is that I think of a new feature I want, I take a minute or so to explain it to Claude, press enter, and go off and do something else. When I come back in a few minutes, the desired feature has been implemented correctly with reasonable design choices. I'm not saying this happens most of the time, I'm saying it happens every time. Claude makes mistakes but corrects them before coming to rest. (Often my taste will differ from Claude's slightly, so I'll ask for some tweaks, but that's it.)

The takeaway I'm suggesting is that not everyone has the same experience when it comes to getting useful results from Claude. Presumably it depends on what you're asking for, how you ask, the size of the codebase, how the context is structured, etc.


It's great for demos, it's lousy for production code. The different cost of errors in these two use cases explains (almost) everything about the suitability of AI for various coding tasks. If you are the only one who will ever run it, it's a demo. If you expect others to use it, it's not.


As the name indicates, a demo is used for demonstration purposes. A personal tool is not a demo. I've seen a handful of folks assert this definition, and it seems like a very strange idea to me. But whatever.

Implicit in your claim about the cost of errors is the idea that LLMs introduce errors at a higher rate than human developers. This depends on how you're using the LLMs and on how good the developers are. But I would agree that in most cases, a human saying "this is done" carries a lot more weight than an LLM saying it.

Regardless, it is not good analysis to try to do something with an LLM, fail, and conclude that LLMs are stupid. The reality is that LLMs can be impressively and usefully effective with certain tasks in certain contexts, and they can also be very ineffective in certain contexts and are especially not great about being sure whether they've done something correctly.


> But I would agree that in most cases, a human saying "this is done" carries a lot more weight than an LLM saying it.

That's because humans have stakes. If a human tells me something is done and I later find out that it isn't, they damage their credibility with me in the future - and they know that.

You can't hold an LLM accountable.


In the vast majority of cases, developer ergonomics are much more important than freeing memory a little earlier. In other scenarios, e.g., when dealing with large data frames, the memory management argument carries more weight. Though even then there are usually better patterns, like method chaining.
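As an illustrative sketch (the `Pipeline` class and step names are hypothetical, not from the thread): in a chaining style, the intermediates are anonymous, so CPython's reference counting can reclaim each one as soon as the next step has consumed it, which gets you most of the memory benefit without sacrificing ergonomics.

```python
# Hedged sketch of method chaining. Each intermediate Pipeline object has no
# name, so CPython frees it as soon as the next call in the chain returns.

class Pipeline:
    def __init__(self, rows):
        self.rows = rows

    def select(self, predicate):
        return Pipeline([r for r in self.rows if predicate(r)])

    def transform(self, fn):
        return Pipeline([fn(r) for r in self.rows])

    def collect(self):
        return self.rows


result = (
    Pipeline(range(10))
    .select(lambda x: x % 2 == 0)   # keep the evens: 0, 2, 4, 6, 8
    .transform(lambda x: x * x)     # square them
    .collect()
)
print(result)  # [0, 4, 16, 36, 64]
```

This is the same pattern pandas users lean on with data frames: no intermediate ever outlives the step that consumes it.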

FYI John Carmack is a true legend in the field. Despite his not being a lifelong Python guy, I can assure you he is speaking from a thorough knowledge of the arguments for and against.


>developer ergonomics are much more important than freeing memory a little earlier

Preach to the Python choir, bro, but it should be telling when even a Python bro considers it too ergonomic and wasteful.

At some point being clean and efficient about the code is actually ergonomic, no one wants to write sloppy code that overallocates, doesn't free, and does useless work. To quote Steve Jobs, even if no one sees the inside part of a cabinet, the carpenter would know, and that's enough.

tl;dr: Craftsmanship is as important as ergonomics.


In this case, overuse of re-assigning is the sloppy thing to do, and immutability by default is the craftsman's move. Reducing your program's memory footprint by re-assigning variables all the time is a false economy.


So if you are preparing a 50 KB webpage, and you do 10 steps of processing, you would have a 500 KB memory footprint that might be held for the life of the connection? All the while the footprint, and thus the capacity of your server, could have been 100 KB? Nice craftsmanship, dude!

We are not even talking about in-place algorithms, just 10 functions that each process the HTML into a new string, maybe:

    html = load_template(route)
    html = formatstring(html, variables)
    html = localize_paths(html)
    ...

And you would rather have:

    template = load_template(route)
    formatted_html = formatstring(template, variables)
    html_with_localized_paths = localize_paths(formatted_html)

For what gain? I think you wouldn't.

"Only a sith deals in absolutes", you have to recognize that both are valid under different contexts. And I'm merely explaining why inmutable is the default in python, 1: python doesn't do programmer self restrictions like const and private; 2: memory is automatic, so there's no explicit allocation and freeing like in C++, so using a new variable for each thing isn't a zero overhead abstraction.

Even for smaller cases (not 50 KB strings), it's still the proper thing to do. Although you have the freedom to choose, it's easier to follow one style guide and one protocol for how to do things: if it's Pythonic to reuse the variable name, just reuse the variable name. Don't fall for the meme of coming from another language and writing C++ in Python or Java in Python. You are not a cute visionary who is going to import greatness into the new language; it's much better to actually learn the language than to be stubborn.

There are places where you can be super explicit and name things; if it's just an integer, then it's a very cheap comment that's paid for in runtime memory instead of in LOC. But this is why the default in Python is not const: variable name reuse is a core Python tactic, and you are not our saviour if you don't get that; you are just very green in the language.


This is a textbook example of damning with faint praise. If your VCS's interface is so bad that it motivates you to scale back your use of any nontrivial version-control features and instead just content yourself with rudimentary file syncing, that's a case against the interface. Either the additional features are useful and you're missing out on that benefit, or they're extraneous and are saddling the tool with unnecessary baggage.


Or, hear me out, the tool has enough features to cover a wider range of use cases than your own. You're not missing the benefit of features outside your use cases.


The author lists that as a separate benefit, though.

My interpretation is that jj makes convenient certain useful operations that would be so complex in git as to be completely impractical. jj undo is a simple example: jj users can do it, and git users can't, even though it's logically possible in both systems.


> but others have been pretty much fawning…

This is not relevant. An observer who deceives for purposes of “balancing” other perceived deceptions is as untrustworthy and objectionable as one who deceives for other reasons.


The idea is thoroughly absurd. Our days are filled with unpaid labor. Getting dressed in the morning, collecting the items you want to buy in a grocery store, making your bookings for a vacation, giving your spouse feedback on their wardrobe selection, etc. — all of these things are work.

If you want to argue that they are not work, then surely helping a delivery person who's lost in your neighborhood also counts as non-work for the same reason.


My first thought as well. I wonder what Kobo will do in response to this announcement.


Me too, I hope there will be a replacement with the same low friction.

I've been using IFTTT with RSS feeds to add serialized stories to my Kobo as they release.


They should be able to adapt it with fairly minimal changes. I managed to modify some things to proxy articles from Omnivore to it, and the functionality remained largely the same across the two services.

They'd have to implement some kind of login, but then they should just be able to build a converter between whatever format the source uses and the format expected by the Kobo device.


Could you share the flow if someone wanted to replicate it for their own Kobo?


Yes, but it currently depends on Pocket working.

I have an RSS feed from RoyalRoad that new episodes come in on. Using IFTTT, I have an action set up so that, whenever a new item comes in on that feed, its URL is added to my Pocket account. Then Kobo just syncs the Pocket articles automatically, and the new episode is added.
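For anyone sketching a replacement for the IFTTT step by hand, the glue is small. A minimal outline, assuming Pocket's v3 "add" endpoint is still reachable; the sample feed, keys, and URLs are placeholders, and the actual network call is left commented out:

```python
# Hedged sketch: parse an RSS feed with the stdlib and build the payloads
# that Pocket's v3 add endpoint expects. Feed contents and credentials here
# are placeholders for illustration only.

import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item><title>Chapter 12</title><link>https://example.com/fiction/ch12</link></item>
  <item><title>Chapter 13</title><link>https://example.com/fiction/ch13</link></item>
</channel></rss>"""

def pocket_payloads(feed_xml, consumer_key, access_token):
    """One payload per feed item, in the shape Pocket's add endpoint takes."""
    root = ET.fromstring(feed_xml)
    return [
        {
            "url": item.findtext("link"),
            "title": item.findtext("title"),
            "consumer_key": consumer_key,
            "access_token": access_token,
        }
        for item in root.iter("item")
    ]

payloads = pocket_payloads(SAMPLE_FEED, "ck-placeholder", "tok-placeholder")
print([p["url"] for p in payloads])
# ['https://example.com/fiction/ch12', 'https://example.com/fiction/ch13']

# The network step, omitted here (requires real credentials):
# for p in payloads:
#     requests.post("https://getpocket.com/v3/add", json=p)
```

Swapping the commented POST for whatever the replacement service expects is the only part that should need to change.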

