Hacker News | Dilettante_'s comments

Can't wait for a bug to un-verify me on both my devices and lock me out of my account.

I'm pretty sure you get recovery keys with it also.

I only skimmed around the definition of 'social dandelions'. Is the thesis here "having people who have a lot of influence (social dandelions) as your ambassadors is good for driving adoption"?

  My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain.
This might be the money quote, encapsulating the difference between people who say their work benefits from LLMs and those who say it doesn't. Expecting it to one-shot your entire module will leave you disappointed; using it for code completion, generating documentation, and small-scale agentic tasks frees you up from a lot of little trivial distractions.

> frees you up from a lot of little trivial distractions.

I think one huge issue in my life has been: getting started

If AI helps with this, I think it is worth it.

Even if the starting point it gives me is wrong, it sparks outrage and an "I'll fix this" momentum.


> If AI helps with this, I think it is worth it.

Worth what? I probably agree: the greenfield, rote mechanical tasks of putting together something like a basic interface, somewhat thorough unit tests, or a basic state container that maps to a complicated typed endpoint are things I'd procrastinate on or that would otherwise drain my energy before I got started.

But that real tangible value does need to have an agreeable *price* and *cost* depending on the context. For me, that price ceiling depends on how often and to what extent it's able to contribute to generating maximum overall value, but in terms of personal economic value (the proportion of my fixed time I'm spending on which work), if it's on an upward trend of practical utility, that means I'm actually increasing the proportion of dull tasks I'm spending my time on... potentially.

Kind of like how having a car makes it so comfortable and easy and ostensibly fast to get somewhere for an individual—theoretically freeing up time to do all kinds of other activities—that some people justify endless amounts of debt to acquire them, allowing the parameters of where they're willing to live to shift further and further, to the point where nearly all of their free time, energy, and money is spent on driving, all of their kids depend on driving, and society accepts it as an unavoidable necessity; all the deaths, environmental damage, and side effects of decreased physical activity and increased stress along for the ride. Likewise, various chat platforms tried to make communication so frictionless that I now want to exchange messages with people far less than ever before: effectively a footgun.

Maybe America is once again demolishing its cities so it can plow a freeway through, and before we know it every city will be Dallas and every road will be like commuting from San Jose to anywhere else—metaphorically of course, but also literally in the case of infrastructure build-out. When will it be too late to realize that we should have just accepted the tiny bit of hardship of walking to the grocery store?

------

All of that might be a bit excessive lol, but I guess we'll find out


I'm in the sciences, but at my first college I took a programming course for science majors. We were partnered up for an end of semester project. I didn't quite know how to get started, but my partner came to me with a bunch of pieces of the project and it was easy to put them together and then tinker with them to make it work.

Perhaps a human coworker or colleague would help?


I think AI is “worth it” in that sense as long as it stays free :D

Nothing is free, especially not AI, which accounted for 92% of U.S. GDP growth in the first half of 2025.

If? Shouldn't you know by now whether AI does or doesn't help with that? ;D

An agentic git interface might be nice, though hallucinations seem like they could create a really messy problem. Still, you could just roll back in that case, I suppose. Anyways, it would be nice to tell it where I'm trying to get to and let it figure out how to get there.
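The "just roll back" safety net is cheap with plain git: tag a checkpoint before handing the repo to the agent, then hard-reset if it hallucinates. A minimal sketch (the repo path and the `agent-checkpoint` tag name are just illustrative):

```shell
# Demo in a throwaway repo; the path is arbitrary
git init -q /tmp/agent-demo && cd /tmp/agent-demo
git config user.email demo@example.com && git config user.name demo

echo "stable code" > app.txt
git add app.txt && git commit -qm "known-good state"

# Checkpoint before letting the agent loose
git tag agent-checkpoint

# ...the agent hallucinates and commits a mess...
echo "hallucinated rewrite" > app.txt
git commit -qam "agent changes"

# One command undoes everything back to the checkpoint
git reset --hard -q agent-checkpoint
cat app.txt
```

After the reset, `app.txt` is back to its known-good contents and the agent's commit is gone from the branch (though still recoverable via the reflog for a while).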

Lots of things might be nice when the expenditure accounts for 92% of GDP growth.

What I am finding is that the size of the "small" use case is becoming larger and larger as time goes by and the models improve.

And bug fixes

"This lump of code is producing this behaviour when I don't want to"

Is a quick way to find/fix bugs (IME)

BUT it requires me to understand the response (sometimes the AI hits the nail on the head, sometimes it says something that makes my brain go "that's not it, but now I know exactly what it is").


Honestly, one of the best use cases I've found for it is creating configs. It used to be that I could spend a week fiddling around with, say, nvim settings. Now I tell an LLM what I want and it basically gives it to me, without trial and error, or without locating some obscure comment from 2005 that tells me what I need to know.

Depends what you're doing.

If it's a less trodden path expect it to hallucinate some settings.

Also, a regular thing I see is that it adds some random other settings without comment, and then when you ask it about them it goes: whoops, yeah, those aren't necessary.


So same as with the internet

This is both the sickest burn and truest statement I’ve read today.

Here, have a (digital) shortbread cookie: o


You're absolutely right! The AI said it, so it must be true!

At least read what you respond to... Imagine thinking Yudkowsky was NOT a central figure in the Zizians story.

You literally quoted the LLMs output verbatim as your proof.

Edit: And upon skimming the article at the points where Yudkowsky's name is mentioned, I 100% agree with stickfigure.

I challenge you to name one way in which the story falls apart without the mention of Yudkowsky.


It sounds like both of you are unfamiliar with the link between the Zizians and Yudkowsky. So let us just return to the discussion of gemini-3: do you think the model did a bad job in its second response?

It literally does not matter how much they are connected out here in reality; the AI's task was to summarize the information in the article, and that is exactly what it did.

>do you think the model did a bad job then in it's second response

Yes, very obviously it told you what you wanted to hear. This is behavior that should not be surprising to you.


Why do you think I obviously wanted to hear that?

It's implicit in your prompt!

  "Wtf - no mention of Yudkowsky?"
Also, that is the position you've been defending this whole thread. This whole conversation is happening because you believe Yudkowsky is an important figure to the story.

Here's another attempt:

  llm --cid 01kabxtjq10exgk56yf802028f "I notice you did not mention Yudkowsky?" --no-log -m gem3-t1

Based on the text provided, Eliezer Yudkowsky is a central background figure to this story, serving as the intellectual progenitor of the movement from which the Zizians splintered.

Here is specifically how he features in the article:

* *The Founder:* Yudkowsky is identified as the founder of the Machine Intelligence Research Institute (MIRI) and the Center for Applied Rationality (CFAR), as well as the founder of the LessWrong forum. These are the institutions the Zizians originally belonged to and eventually declared war against.

* *The Source Code:* The Zizians based their radical worldview on Yudkowsky’s concepts, specifically *"timeless decision theory."* However, they believed they were applying his logic more rigorously than he was. They became convinced that Yudkowsky’s organizations had betrayed these principles (specifically regarding a rumor about MIRI paying blackmail to cover up a scandal), which they viewed as a moral failing that justified their rebellion.


I've done as much fiddling and prompting of LLMs about that article as I cared to do under these circumstances, and I have to concede the point about you getting 'the answer you wanted' out: The chatbots were quite insistent that Yudkowsky is central to the story, even when I pulled out the following: "Somebody is arguing Yudkowsky is a central figure in this article, is that accurate?"

They are *wrong*, and provided exactly the same immaterial evidence as you did in this thread (I still insist that the article suffers zero damage if you remove Yudkowsky from it and instead only mention the institutions and concepts that stem from him), but given all the behavior I've seen now, the summary that was the initial issue of this thread should have included him.

[What I would've really liked to do was to prompt for another person of equal non-prominence who was in the article but not in the summary, and see what comes up. But I sure am not reading the 80-102 minute article just for this and we're unlikely to find an agreement about the 'equal non-prominence' part if I challenged you to pick one.]


>Brian Armstrong, CEO of Coinbase, intentionally altered the outcome of a bet by randomly stating "Bitcoin, Ethereum, blockchain, staking, Web3" at the end of an earnings call.

For the kind of person playing these sorts of games, that's actually really "hype".


Truckers code better than bugs

"Don't get high on your own supply"?

Got some red on you...

