>> people who are incapable of anything except bikeshedding
The amount of insulting language directed at people who actually have an open mind about AI and AI tooling is frustrating. Can you all just please address the merits of the topic of the post instead of making every AI-related post on HN an excuse to vent about your own particular worldview and insult people who don't necessarily agree?
Platform support for AI has as much place in a browser as it does in Notepad. This isn't about being open-minded at all. I have written multiple MCPs, I use it daily, I am not in the crowd who "don't have an open mind." This outright non-feature is a significant source of issues, not least of which is fingerprinting.
Make an AI browser extension. Done.
Shoving AI into anything where it can go is not having an open mind about things; it's nothing more than shoving AI into anything where it can go.
On the inverse, can you provide a single reason why this API should exist which isn't something that obviously erupted from an LLM? Again:
> Browsers and operating systems are increasingly expected to gain access to language models.
God help people if they have to copy their prompt from ChatGPT to Claude.
So is it reasonable and helpful to see the same comments over and over again any time Google/Microsoft/OpenAI/Meta is mentioned in a comment - "X is bad, money drives all their decisions, they are anti-user, etc. etc." or should we actually expect to see relevant comments discussing the topic at hand?
It's inane and annoying to have to wade through the same, predictable, might-as-well-be-copy-and-paste comments on every post.
What do you have to say about the Prompt API specifically?
This same point should have been made to the grandparent as well... claiming some good people are working inside the system at a bad company is also a tired trope.
No, it's because certain people moved the goalposts. Nothing an LLM does or will do will make them believe that it's "intelligent," because they have a mental model of "intelligence" that is more religious than empirical.
We don’t have agents that are able to work entirely autonomously, even in the coding realm, which is where they seem to be most valuable. In fact, they’re seemingly not even close to replacing software engineers.
>> Number theory had no practical applications until the development of public-key cryptography
This is so wrong I don't even know where to begin. Modular arithmetic, numerical integration, pseudorandom number generation, error-correcting codes, predicting planetary orbits (!), etc.
Granting that the claim was obviously a bit facetious, number theory had the reputation of having very few practical applications, and I don't mean that silly quote by G. H. Hardy.
A lot of applications just required a lot more computing power to be practical. This all starts to happen around the same time (unsurprisingly), and if you're going to make hay of the fact that Reed-Solomon coding was invented in 1960, I think it's worth pointing out that its first big use was on Voyager, because the computing power was finally able to make these codes work. It's not like people hadn't started to notice some of this decades earlier.