Hacker News | anabis's comments

> One surprising thing that codex helped with is procrastination.

The Roomba effect is real. The AI models do all the heavy implementation work, and when they ask me to set up and execute tests, I feel obliged to get to it ASAP.


I wonder whether some sparks would fly once easy decompilation of MS Office and Photoshop becomes available.


This is where I would guess the world-destroying AGI/ASI will come about: the never-ending cat-and-mouse game of ads vs. blockers, driven by the profit motive. LLMs will be used by both sides in an escalating game, with humans, their attention and their wallets, stuck in the middle.


> delivering 97% of the performance at 10% of the cost is a distraction.

Not if you are running RL on that model and need to do many rollouts.


>The ideal case would be something that can be run locally, or at least on a modest/inexpensive cluster.

It's obviously valuable, so it should be coming. I expect two trends:

- Local GPU/NPU will have a for-LLM version that has 50-100GB VRAM and runs MXFP4 etc.

- Distillation will come for reasoning coding agents, probably one for each tech stack (LAMP, Android app, AWS, etc.) × business domain (gaming, social, finance, etc.)


Not complaining too loudly, because the improvement is magical, but trying to stay on top of model cards and knowing which one to use for specific cases is a bit tedious.

I think the end game is a decent local model that does 80% of the work and also knows when to call the cloud, and which models to call.


Yeah, mapping Chinese characters onto the linear UTF-8 byte space throws a lot of information away. Each language brings its own ideas to text processing: the inventor of SentencePiece is Japanese, for example, and Japanese has no explicit word delimiters.
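A quick Python sketch of what that flattening looks like at the byte level (plain stdlib, no tokenizer required):

```python
# Each Chinese character expands to three UTF-8 bytes, so a purely
# byte-level model sees sub-character units with no standalone meaning.
text = "你好"
raw = text.encode("utf-8")
print(len(text), len(raw))  # 2 characters -> 6 bytes
# No single byte decodes to a character on its own;
# raw[0:1].decode("utf-8") raises UnicodeDecodeError.
```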


It's not throwing any information away, because the text can be faithfully reconstructed (via an admittedly arduous process); no entropy has been lost if you consider the sum of both the "input bytes" and knowledge of the UTF-8 encoding/decoding.


I've seen many comments saying they are great for OCR, and in my use case of receipt-photo processing it does do better than ChatGPT, Claude, or Grok.


Yeah, it's like a GPS navigation system: useless and annoying on home turf, invaluable in unfamiliar territory.


Maybe that's an apt analogy in more ways than one, given the recent research out of MIT on AI's impact on the brain, and previous findings about GPS use deteriorating navigation skills:

> The narrative synthesis presented negative associations between GPS use and performance in environmental knowledge and self-reported sense of direction measures and a positive association with wayfinding. When considering quantitative data, results revealed a negative effect of GPS use on environmental knowledge (r = −.18 [95% CI: −.28, −.08]) and sense of direction (r = −.25 [95% CI: −.39, −.12]) and a positive yet not significant effect on wayfinding (r = .07 [95% CI: −.28, .41]).

https://www.sciencedirect.com/science/article/pii/S027249442...

Keeping the analogy going: I'm worried we will soon have a world of developers who need GPS to drive literally anywhere.


I'm navigationally clueless, but I don't drive professionally.


> invaluable when you're operating in even a slightly unfamiliar environment

It's like car navigation or Google Maps: annoying and not very useful in your hometown, very helpful when traveling or in unfamiliar territory.

