Hacker News | saint_yossarian's comments

Well kickstart.nvim is a template for your own configuration, not a distribution. You're not supposed to update it.

AFAIK using setup() is actually a cargo-cult practice and discouraged by the Neovim maintainers.

echasnovski seems to like it (and he's a maintainer I think)

His use case is mini.nvim, a huge bundle of plugins where you most likely don't want each plugin initialising automatically.

He and I are on very opposite ends of the "setup spectrum", but we have found common ground, which is that you shouldn't cargo cult it: https://github.com/neovim/neovim/pull/35600/files
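To illustrate the cargo-cult pattern being discussed, here is a minimal sketch; the plugin name and option are hypothetical, since the exact API varies per plugin:

```lua
-- Cargo-cult pattern: an empty setup() call that only re-applies defaults.
-- Plugins that initialise themselves on load make this line pure noise.
require("someplugin").setup()

-- setup() only earns its keep when you actually override a default
-- (option name is hypothetical):
require("someplugin").setup({
  auto_enable = false,
})
```

The mini.nvim case is the exception: when a bundle ships dozens of modules, requiring an explicit setup() call per module is what keeps the unused ones inert.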


From the README:

> After installation, authenticate through IAM Identity Center or AWS Builder ID.



They started skipping some months. Check https://isitbandcampfriday.com/ for the next one.

This coming Friday (September 5th) is one! Woo!

What evidence convinced you?

I performed an "Affective Turing Test" with null results.

Wait, how do you get precompiled Rubies with mise? I still have to compile with default settings, and the docs only mention that it uses ruby-build behind the scenes: https://mise.jdx.dev/lang/ruby.html

I don't think you can. I also don't know why people care so much about it - I work full time with Ruby, and compiling a new version, which takes maybe 10 minutes, a couple of times a year is no big deal.

I think it comes mostly from CI environments that start entirely clean before every run. 10 minutes every time a commit is pushed is not pleasant. That's not how I'd like CI to work, but sadly it seems to be the current state of things.

I've only used CircleCI and GitHub Actions for this, but in both cases precompiled Rubies are available. On CircleCI you run your tests on a Ruby Docker image with the right version of Ruby installed. On GitHub Actions I use https://github.com/ruby/setup-ruby, which installs a precompiled Ruby in a second or two and also properly caches your gems. I think if someone's CI environment is building Ruby from source, they're doing it wrong.
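For reference, a minimal GitHub Actions job along those lines; the action and its `ruby-version`/`bundler-cache` inputs are real, but the versions and test command here are illustrative:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.3"   # installed from a prebuilt binary, not compiled
          bundler-cache: true   # runs bundle install and caches the gems
      - run: bundle exec rake test
```

With `bundler-cache: true` the gem cache is keyed off Gemfile.lock automatically, so there is no separate cache step to maintain.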

While your breakdown of LLM “benefits” is thorough, I think it glosses over—or outright ignores—some significant limitations and trade-offs that make the picture far less rosy. It’s easy to frame this technology as an unqualified upgrade to human writing, but that framing is misleading and potentially harmful. Let me go point by point through your categories and explain where the problems lie.

1. Enhanced Productivity

Yes, LLMs can produce text quickly, but speed is not synonymous with quality. Churning out a draft in seconds is only useful if that draft actually advances the writer’s ideas, rather than lulling them into outsourcing thought itself. What often happens is that people mistake “having words on a page” for “having meaningful ideas.” Productivity in writing is not about word count—it’s about clarity of thought, and clarity is something that an LLM cannot supply. It can rearrange existing patterns, but it cannot truly reason or generate original insight. A fast draft is worthless if it’s hollow.

2. Improved Writing Quality

This point assumes that grammar and surface-level polish are the essence of good writing. They are not. Good writing emerges from the writer’s voice, their personality, their quirks, even their mistakes. Grammar-correcting AI tends to standardize expression into a bland, middle-of-the-road prose style. The result is “correct,” but sterile. Moreover, “tone adjustment” and “clarity” are superficial facsimiles of understanding. Simplifying an idea is only valuable if you understand what makes it complex in the first place. AI doesn’t “understand” ideas—it flattens them into patterns of words that look simpler but may remove nuance in the process.

3. Creativity and Ideation

Here is where the hype is the most exaggerated. Brainstorming with an LLM often produces generic, cliché, or predictable results. If you ask for metaphors, you’ll get the most common ones floating around in its training data. If you ask for plots, you’ll get reheated versions of existing tropes. Calling this “creativity” misunderstands what creativity actually is: the human capacity to connect disparate, personal experiences into something novel. An LLM is bounded by statistical averages. It cannot be surprised by itself. Humans, on the other hand, can.

4. Language Versatility

Translation and localization are areas where LLMs seem promising, but again, nuance matters. Language is not merely about syntax or vocabulary; it is deeply cultural, contextual, and historically embedded. Machine translation may be “good enough” for casual use, but it consistently fails to capture subtext, irony, humor, idiom, or cultural resonance. Outsourcing too much of this to AI risks flattening linguistic richness into something utilitarian but impoverished.

5. Research Assistance

This one is especially dangerous. Yes, LLMs can summarize and generate context, but they are notorious for producing confident-sounding misinformation (“hallucinations”). Unless the user already has expertise in the topic, they will not know whether what they’re reading is accurate. This means that instead of empowering research, LLMs encourage intellectual laziness and misinformation at scale. The “citation help” is even worse: fabricated references, garbled bibliographic entries, and misleading formatting are common. Presenting this as a “benefit” is disingenuous without an equally strong warning.

6. Editing and Rewriting

Paraphrasing and consistency checks may sound helpful, but they too come at a cost. When you outsource the act of rewriting, you risk losing the friction that forces you to refine your own ideas. Struggling to find words is not a flaw—it’s part of thinking. Offloading that process to an algorithm encourages passivity. You end up with smoother sentences, but not sharper thoughts. “Consistency” is also a double-edged sword: AI can enforce bland uniformity where variation and individuality might have been more compelling.

7. Customization and Integration

This is just another way of saying “industrialization of writing.” The more writing is engineered through prompts and APIs, the more it shifts from being a human practice to being an automated pipeline. At that point, writing stops being about human connection or expression and becomes just another commodity optimized for scale. That’s fine for spam emails or ad copy, but disastrous if applied to domains where authenticity and trust actually matter (e.g., journalism, education, or literature).

8. Cost Efficiency

Framing this as a cost benefit—“reduces need for human writers”—is perhaps the most telling point in your list. This reduces writing to a purely economic function, ignoring its human and cultural value. The assumption here is that human writers are redundant unless they can outcompete machines on efficiency. That is not just shortsighted; it’s destructive. Human writers don’t merely “generate content”—they interpret, critique, and shape culture. Outsourcing all that to probabilistic models risks a future where the written word is abundant but devoid of depth.

The larger issue is that your entire framing assumes writing is merely a transactional process: input (ideas or tasks) → output (words on a page). But writing is not just about producing text. It is about thinking, communicating, and connecting. By presenting LLMs as a categorical improvement, you erase the most important part of the process: the human struggle to articulate meaning.

So yes, LLMs have uses, but they should be treated as narrow tools with serious limitations—not as the new standard for all writing. To present them otherwise is to flatten human expression into machine-mediated convenience, and to celebrate that flattening as “progress.”


Must have been ages ago, even stable now has 5.4: https://packages.debian.org/search?searchon=names&suite=all&...


"even stable" - that was released two weeks ago.

...

Also yes, podman v4 on bookworm was famously useless in many cases, and because of either libc or the kernel (IIRC) you could not even install v5 effortlessly.

I like Debian and I like podman, but presenting this as a useful, nice experience (up until trixie released) is just weird framing.


You're probably thinking of https://github.com/ghuntley/cursed. It... certainly seems to live up to its name.


1700 directories at the project root...


How many of those rides required human intervention by Waymo's remote operators? From what I can tell they're not sharing that information.


I worked at Zoox, which has similar teleoperations to Waymo: remote operators can't joystick the vehicles.

So if we're saying how many times would it have crashed without a human: 0.

They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.

