Hacker News | voidhorse's comments

As usual it's not so black and white and is all about balance.

Project where the sole user is you in your kitchen? Sure, hack it together.

Project where you actually want other people to use the product? A research phase matters and helps here.

Consider what the goal is, and the amount of effort worth investing typically becomes evident.


I feel like most of these applications boil down to "Obsidian, but with AI integration baked in up front". It'd be interesting to see approaches that actually rethink the commonplaces of the experience (graph view, etc.) rather than just reproduce the same thing "with AI".

> I notice the word "literacy" thrown around a lot lately, in part by myself, but there's an inherent dishonesty to this. Language does not have absolute meaning, and you cannot read another person's mind. Just because you interpret literal works of art differently, I don't believe that necessarily qualifies you as illiterate. These are not the same qualities.

I interpret critiques of this flavor and the "literacy" issue in general as being about a lack of interpretive range more than a tendency to produce some "incorrect" interpretation.

I think people are concerned that declining sophistication in readers actively prevents them from even being aware that some more complex interpretation is possible when engaging with a text. People read the overwhelming lack of nuance in internet comment threads as evidence of this. You can question whether or not that's a legitimate inference in isolation (I think it's dubious, personally), but, when bolstered by evidence from studies about how much people read for leisure, and falling grades on reading comprehension exams, I think the argument gains a little more weight.

I don't necessarily take the author to be saying that these commentators are wrong about the NYT author's self-awareness, but rather that a more nuanced interpretation would show itself in the comments if one were present. There's a difference between flatly saying "wow, this article really makes the writer look like a horrible person" and "I'm glad the writer had the courage to share this and seems to be growing, but I'm amazed they were ever such a horrible person at some point in their lives". Again, it's probably unfair to judge overall interpretive ability from one comment alone—you'd actually need to subject the commenter to reading comprehension exams to know—but if you do think extrapolating to population tendencies is legitimate, I understand why you might draw literacy conclusions.


I agree. It's more that I find it difficult to fault people for not recognizing motifs that do not actually speak to them, possibly even after a ton of exposure and instruction, or given a specific context. At that point, it's less the audience being medium illiterate, and more the medium being audience illiterate so to speak. Or rather the creator being medium or audience illiterate.

This is completely true. From what I can tell talking to people outside of tech, the AGI and "omg this stuff is wild" hype and fears have completely dissipated. Ironically, the average person sees these tools the way you'd typically expect a cold, rational technologist to see them: just another tool.

I think a lot of people are just getting their first taste of agent harnesses plus slightly better models right now, and yes, the first time you use them it seems scary and amazing. By the hundredth time, though, it's very apparent that there is still tremendous work to do before any kind of fully automated software pipeline (let alone any other domain) can be realized.


"False consciousness" was the old Marxist term for this inadvertent working against your own ultimate self-interest. It's rife in capitalism. If you look closely you'll see it everywhere.

(Note that even "her kids will be ok" isn't true at the limit. If wealth concentrates enough, it will lead to societal collapse.)


I agree with much of the analysis, and originally I would have subscribed to the recommended action (resistance); at this point, however, I think that advice is severely misguided.

We have already passed the critical point. The LLMs and agent harnesses are here. There is too much will, capital, and risk behind these technologies now: the automobile has landed, thousands of people have already purchased it, and protesting the car won't undo it at this point.

What you can do that will be meaningful is instead to understand the new car, and understand it deeply. Use that understanding to carry the values you care about into the new world and re-articulate them. Make the car safer; push for tactical regulations on it. If you are privileged enough to be able to forgo its use entirely, sure, but that advice is not uniformly applicable. People forget that simply opting out of certain things is often only viable when you are already in a certain position. What we really need is for the heavy skeptics to stop falling for the luddite temptation and start bringing their critical lens to bear in positive ways on this new technology, to make it safer and better. By opting out and staging a feeble resistance you won't do anything other than let the current dangerous power consolidation continue.


Zettelkasten is great for researchers. I actually don't think it's that valuable for practicing technologists. The general practice of taking notes and connecting ideas together is of course useful, but most technologists don't need such a sophisticated system.

Amid all the fanaticism that grew around the zettelkasten method these past few years, people have forgotten and de-emphasized the fact that for Luhmann it was not a "second brain" to be referenced on demand; it was explicitly a system to support writing. It is tailored to help researchers write papers. It shines if you actually need a system for keeping notions coherent and organized, so that ideas are clear and citations precise when you need them during the writing process. If that's not you, the overhead probably isn't worth it. Just keep a notebook.


> I file the sharp corners off my MacBooks. People like to freak out about this

The fact that any conscious human being has the time or energy to be "freaked out" about someone futzing around with their own devices is astounding to me.


It's a certain kind of tasteless to take a thing which most people consider a very expensive luxury, and show off how little it means to you and how casually you damage it.

For comparison: imagine people in a museum gasping at a child almost knocking an expensive vase off a pedestal. Then you walk in, damage the vase deliberately, and pretend not to understand what everyone is gasping about, just to show off that $2000 is nothing to you.


I've been looking at things through the same lens since 2023. At the same time, the depletion/hoarding bit isn't new. Companies were already doing this with consumer data; LLMs are just the factory moment—now that we have all the raw material, we finally have a means of automating production with it.

So, in some ways, I also view LLMs as a pivotal and important wake-up call. Companies were already taking the data and using it for a variety of other purposes—it was just far less evident to people when those uses weren't in direct competition with labor, since, under capital, labor is what we sell.

Either an entire new industry needs to form, or it's finally time to move beyond capitalism. Centralized capital ends up killing itself, because it effectively shuts down its own engine if it kills off consumers, who can only exist in the first place if the wage labor structure holds.


Thanks for taking the time for some sober analysis in the midst of reactionary chaos.

I can't wait until everyone stops falling for the "AGI ubermodel end of times" myth and we can actually have boring announcements that treat these things as what they actually are: tools. Tools for doing stuff, that's it.

Maybe I'm wrong; maybe stuffing a computer with enough language and binary patterns is indeed enough to achieve AGI. But then, so what? There's no point in being right about this. Buying into this ridiculous marketing will get us "AGI" in the form of machines, but only because all the human beings have become so stupid as to make critical reasoning an impossibility.

