Hacker News | quinncom's comments

You can try McFly [1] and Television [2]. I still prefer fzf.

[1] https://github.com/cantino/mcfly

[2] https://github.com/alexpasmantier/television


“We found that the phenomenon described in these posts—cognitive exhaustion from intensive oversight of AI agents—is both real and significant. We call it “AI brain fry,” which we define as mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity.”

https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry


That's something I haven't seen discussed much: what is the “annoyance cost” of using the tool?

Say I complete a task 30% faster, but what was the annoyance cost of the model constantly getting things wrong and my having to keep correcting it?


I've never experienced a miracle, but I'm surrounded by people who claim to have seen one or know somebody who did. I'm still waiting for mine.

I met with a supposedly realized Buddhist master in a monastery in Nepal and asked him if he had ever had a supernatural experience (Buddhist cosmology has many stories about enlightened people flying, walking through walls, etc). At first he had difficulty understanding the question (the translator seemed to have difficulty finding the right words). Finally he replied with a blunt “No.” But then amended that with the inconclusive statement, “If a supernatural event occurred, by it existing, it would immediately cease to be supernatural because there would be some explanation for it.”


You should have replied, “Yes, there may be an explanation, but the explanation could be beyond all human understanding.” That would be “super”: the explanation sitting in the superset, where our understanding is just another set within it.

I accidentally clicked the Claude Cowork button inside the Claude desktop app. I never used it. I didn't notice anything at the time, but a week later I discovered the huge VM file on my disk.

It would be really nice to ask the user: “Are you sure you want to use Cowork? It will download and install a huge VM on your disk.”


Same. I work on an M3 Pro with a 512GB disk, and most of the time I have around 50GB free, which often drops to 1GB quite quickly (I work with video editing and photos, and caches are aggressive there). I use apps like Pretty Clean and some of my own scripts (for brew cleanup, deleting Flutter builds, etc.), so every 10GB used is a big deal for me.

I also discovered that VM image eating 10GB for no reason. I have Claude Desktop installed, but almost never use it (I mostly use Claude Code).


Jesus Christ, what kind of potatoes are you using where 10GB of disk space is even noticeable?


If I had been tethering to mobile hotspot at the time it would have instantly used 500 pesos of data. That’s 3x my monthly electric bill.


Must be an Apple thing.


Happy is buggy af and is in the middle of a rewrite (see its Discord).

A fork named Happier looks promising, but is alpha-stage and is also a mystery-meat vibe-coded security roulette.


The README claims “Full feature parity with pi,” but I presume pz does not support pi’s extension/package ecosystem (because the extensions are all written in TS and so would require bundling Node/Bun) – is that correct? One of the highlights of pi is its extensibility; if that’s not possible with pz, it should be clearly stated as a goal/non-goal.


I just learned yesterday that ChatGPT (and maybe others) can’t connect to an MCP server running on localhost; it needs an endpoint on the public internet. (I guess because the request comes from OpenAI’s servers?)

I’d rather not expose a private MCP server to the public, so ContextVM sounds like a step in the right direction. But I’m confused about how it is called: don’t OpenAI’s servers still need you to provide a public endpoint with a domain name and TLS? Or does it use a Nostr API?


Interesting, I didn't know about that. It could be for security reasons or to lock users into their platform tools, but it seems odd.

If you have a stdio MCP server, you can plug it into a remote MCP server exposed through ContextVM. You can do this using the CVMI CLI tool, or, if you need custom features, the SDK provides the primitives to build a proxy. For example, using CVMI, you could run your server over Nostr: run an existing stdio server with `npx cvmi serve -- <your-command-to-run-the-server>`, or a remote HTTP server with `npx cvmi serve -- http(s)://myserver.com/mcp`. This makes your server available through Nostr, and you will see the server's public key in your terminal.

Locally, you can then use the command `npx cvmi use <server-public-key>` to configure it as a local stdio server. The CLI binds both transports, Nostr <-> stdio, so your remote server will appear as a local stdio server. I hope this clarifies things. For more details, see the documentation at https://docs.contextvm.org. Please ask if you have any other questions :)


I like Readeck – https://codeberg.org/readeck/readeck

Open source. Self hosted or managed. Native iOS and Android apps.

Its Content Scripts feature allows custom JS scripts that transform saved content, which could be used to do URL rewriting.


The 2025 Stack Overflow Developer Survey asked participants to identify dev tools they want to use (“Desired”) and tools they have used and want to use again (“Admired”). Dividing these scores yields an “underrated” score, which reveals which tools may be hidden gems.

I've compiled a list using this method, filtering for tools admired by >60% and used by <20% of developers, then sorted by the “underrated” score.
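As a sketch, the method above can be expressed in a few lines of Python. The tool names and percentages here are invented for illustration (not actual survey data), and I'm assuming “underrated” means the admired share divided by the usage share:

```python
# Toy sketch of the "underrated score" method.
# Tool names and percentages are hypothetical, not survey data.
tools = [
    # (name, % admired, % used)
    ("ToolA", 85.0, 12.0),
    ("ToolB", 65.0, 18.0),
    ("ToolC", 90.0, 45.0),  # too widely used to be a hidden gem
    ("ToolD", 50.0, 5.0),   # not admired enough
]

# Keep tools admired by >60% and used by <20% of developers,
# scoring each as admired / used.
hidden_gems = sorted(
    ((name, admired / used) for name, admired, used in tools
     if admired > 60 and used < 20),
    key=lambda t: t[1],
    reverse=True,
)

for name, score in hidden_gems:
    print(f"{name}: {score:.1f}")
```

Under these made-up numbers, ToolA and ToolB survive the filter, with ToolA scoring higher.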

The previous 2024 list is available here: https://news.ycombinator.com/item?id=41090759

In 2025 there were a total of 12 tools with an admired score >70%. In 2024 there were 41. Are we admiring our tools less? Or did we stop caring because AI is touching the tools instead of us?


Although I rarely hit the limit on my $20/month Codex plan, I can imagine this would be very useful.

The issue I have more often is that I will start a conversation in ChatGPT and realize an hour later that I needed all that context to be in Codex, so I’ll generally ask ChatGPT to give me a summary of all the facts and a copy‑paste prompt for Codex. But maybe there is a better way to extract the useful content from a chat UI into an agent UI.


IMO an agent learns way more by watching the raw agentic flow than by reading some sanitized context dump. You get to see exactly where the last bot derailed and then patched itself. Give that a shot; handing over a spotless doc feels fake anyway.

