Hacker News | roxolotl's comments

> Someone once said that SF is a town of extremely high sincerity, and all of its modern and historical weirdness

Not directly related to the piece, but this explains so much. I’ve always seen it as high credulity. That is to say, lots of people are lying, but lots of other people trust them. The missing piece has been why anyone would take some of these people at face value. If there are also a lot of sincere people, it would make sense that many end up overly credulous.


I’m reasonably convinced this explains basically everything currently attributed to social media, for children at least, and it can likely also help explain some of the concern around birth rates and child-rearing costs. Starting with the satanic panic, the US has slowly closed down children’s lives out of concern that terrible things will happen to them unless they are continually supervised. And the truth is that yes, sometimes bad things happen, and they always have. But if you look at many other countries, they do not have the same extreme expectations that parents or the state keep children’s lives locked down.

Does the term "satanic panic" also apply to the EU restricting internet access for the youth?

Satanic panic was a very specific phenomenon in the US.

No, that's plain old security state motivation hidden behind plain old moral panic justifications nobody buys.

They have audio samples if that’s what you mean. The ones from where I grew up were spot on but rare even when I was growing up in the 90s.

https://aschmann.net/AmEng/#AudioFilesOfLocalDialects


I looked through and found one rejected video from Montreal. It’s crazy to me to reject someone with a French accent. It’s how people talk here! Many consider themselves perfectly bilingual and grew up speaking both languages. Even the more Anglo-Quebecois have a very specific vocabulary and accent heavily influenced by French.

I used to visit French speaking Canada when I was in college. I found it interesting to see people who could switch between an Anglo-Canadian accent and a French-Canadian accent, to my ear sounding native at both. This wasn't everyone obviously, but there were people like that.

Radio, followed by television, has done a lot of homogenization, even if the US never had the more formalized Received Pronunciation you had/have in the UK. Even something stereotypical like a "Boston accent" was mostly a Southie accent on the one hand and an essentially English (Boston Brahmin) accent on the other. Most urbanites in particular never had either, and many weren't even from Boston.

This is very well written and told. It’s worth reading all the way through.

> If you try to refute it, you’ll just get another confabulation.

> Not because the model is lying to you on purpose, and not because it’s “resistant” or “defensive” in the way a human might be. It’s because the explanation isn’t connected to anything that could be refuted. There is no underlying mental state that generated “I sensed pressure.” There is a token stream that was produced under a reward function that prefers human-sounding, emotionally framed explanations. If you push back, the token stream that gets produced next will be another human-sounding, emotionally framed explanation, shaped by whatever cues your pushback provided.

“It’s because the explanation isn’t connected to anything that could be refuted.” This is one of the key understandings that comes from working with these systems. They are remarkably powerful, but there’s no there there. I’ve found that knowing this enables more effective usage because, as the article describes, you move from a mode of arguing with “a person” to shaping an output.


Reminds me of https://news.ycombinator.com/item?id=15886728

Do not argue with the LLM, for it is subtle and quick to anger, and finds you crunchy with ketchup.

These are, broadly, all context management issues: when you see it start to go off track, it's because it has too much, too little, or the wrong context, and you have to fix that, usually by resetting it and priming it correctly the next time. This is why it's advantageous not to "chat" with the robots — treat them as an English-to-code compiler, not a coworker.

Chat to produce a spec, save the spec, clear the context, and feed only the spec in as context; if there are issues, adjust the spec, rinse, and repeat. Steering the process mid-flight is (a) not repeatable and (b) exacerbates the issue, with lots of back and forth and "you're absolutely correct" diluting the instructions you wanted to give.
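The loop described above can be sketched in a few lines. Everything here is hypothetical scaffolding (the function names and the prompt wording are made up, and `run_model` stands in for whatever single-shot, stateless LLM call you actually use); the point is the structure: each attempt is built from the spec alone, and corrections go into the spec, not into a running conversation.

```python
# Sketch of the "spec, don't chat" workflow: every run starts from a
# clean context that contains only the current spec, never prior turns.

def build_prompt(spec: str) -> str:
    """A fresh, self-contained prompt built from the spec alone."""
    return "Implement exactly this spec. Do not improvise.\n\n" + spec

def revise_spec(spec: str, correction: str) -> str:
    """Fold feedback into the spec itself instead of replying in-chat,
    so the next run is repeatable from a single artifact."""
    return spec + "\n\nClarification: " + correction

def attempt(spec: str, run_model) -> str:
    """One generation attempt; run_model is a hypothetical stateless
    completion function taking a prompt string and returning text."""
    return run_model(build_prompt(spec))
```

If the output is wrong, you call `revise_spec` and `attempt` again rather than continuing the chat, which keeps the whole process reproducible from the spec file.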


Exactly, never argue with an LLM unless the debate is the point...

It's just speedrunning context rot.


Very well written? It’s a bunch of AI-generated stuff around an interesting point. It repeats its points over and over and meanders.

It’s an interesting thesis, but it’s not well written or well told.


This was my reading too. Interesting idea, but it took ten pages of fluff to get to it, and I didn't even believe the final idea when we got there. Reading the first part, I thought he would get to the point where he realized he was managing context wrong. He never did; instead he thought it was about the shape of the prompt.

They’ve gotta be feigning it, right? I just don’t understand how you could be so out of touch with what happens when wealth becomes this concentrated. This isn’t the first go-around at this.

Wealth concentration has been happening for a century. You don't need AI for that.

A hell that’s been widely documented in fiction as well. That’s the part that’s so wild to me about this. None of this was unforeseen. Across every medium, the extreme commercialization and general collapse of the social contract due to AI have been described, and a lot of the authors have been largely prophetic.

In the US this is due to the overall failure of trust in our institutions.

No one trusts Congress or the US government to effectively regulate AI for the greater good of the population. Each party believes regulations proposed by the other party will be used to discriminate against and control their party.


I’m reasonably convinced this is the best argument against LLMs. It’s the same reason Open is in OpenAI’s name. The understanding that centralizing the ownership of these tools is going to transform the world is widespread. That’s why the investment is so high. If power and wealth aren’t concentrated into these AI labs, the investment isn’t worth it. Which means we have to ask ourselves whether we want that. There’s plenty of futures which include LLMs and don’t include the centralization, but they require a departure from our current trajectory. There was also no guarantee that programming and computing would become as free as they are today.

> There's plenty of futures which include LLMs and don’t include the centralization but they require a departure from our current trajectory.

I don't think that's true at all. It's pretty clear that local models are the future of agentic coding, and everyone's been moving towards that goal.

It's also becoming clear that current models are much bigger than they really need to be. New research indicates that most transformer models can be shrunk significantly and still perform the same.

We definitely aren't there yet, but models that run on a single consumer GPU are getting better at a pretty fast pace. Model size keeps going down, efficiency keeps going up, and compute keeps getting faster and cheaper.

I really don't see a future where enormous datacenters are the only way to run a coding agent. Huge models might continue to be more performant, but the gap between them and a local model is closing quickly.


The best argument against is they're just another scheme to prop up data center companies.

Use an LLM with the equivalent knowledge of the Linux kernel and a text editor? Or git clone them.

It's another state management scheme being sold to politicians and elder investors who don't know any better. Big tech 100% relies on elder abuse.


This will probably get flagged, but it’s a good example of how any industry as powerful and global as the tech industry is inextricably tied to the political fates of the world.

That's why it will get flagged. Bother & damnation.

I cannot state strongly enough that I think people should have some accountability for their flagging. This ability to remain an Anonymous Coward while suppressing such vital stories at the heart of this world and its tensions is exceedingly fallen.

At a minimum there ought to be a system to out the flaggers. My gut says flagging should be a public action, period.


Also, downvoting should always require a reason.


I’ve got many Catholic relatives who describe themselves as vegetarians and eat fish. Language can be surprisingly imprecise and dependent upon tons of assumptions.

> I’ve got many catholic relatives that describe themselves as vegetarians and eat fish

Those are pescatarians.

It's like how a tomato is a fruit but is used as a vegetable. Meat has traditionally been the flesh of warm-blooded animals; fish is the flesh of cold-blooded animals, which makes it meat, but for religious reasons it’s not considered meat.


Right exactly. The point is that dictionary definitions don’t always align with cultural ones.

To dunk or not to dunk.

I’d pay to see Shaq on broadway.

