Hacker News: spullara's comments

Seems like a pretty cut and dried case for removal of a feature that is not only not used but even in cases where it might be useful the sites have html versions and don't expect people to click on the xml links.


I'm not sure I understand -

did you mean:

it seems like a pretty cut and dried case for removal (of a feature that is not only not used but even in cases where it might be useful the sites have html versions and don't expect people to click on the xml links) because it will break many government sites?

or

it seems like a pretty cut and dried case for removal (of a feature that is not only not used but even in cases where it might be useful the sites have html versions and don't expect people to click on the xml links) because the feature is not used etc. etc. ?


This is exactly what I did when I wrote my first agent that used scraping. Later we switched to taking control of the user's browser through a browser extension.


Just use a good agent like augmentcode that can look at relevant context across your repository; then you can name it whatever you want.


If you do then also want a cloud version, you can use the AgentDB API to upload them there and just change where the SQL runs.


Do any of the clients support this? I have some dynamic MCPs and it doesn't seem like claude.ai supports them, for example.


This change basically makes it an auction.


I was so bummed when they removed the matte screen option many years ago, and I'm glad to see it back. My next one will definitely be matte.


It isn't out yet. I poll the API for the models and update this GitHub repo hourly.

https://github.com/spullara/models
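A minimal sketch of that kind of hourly poll, assuming a hypothetical `api.example.com/v1/models` endpoint that returns a JSON `{"data": [{"id": ...}, ...]}` list (the real endpoint and response shape depend on the provider):

```python
import json
import urllib.request

# Hypothetical endpoint; substitute the provider's real models-listing URL.
MODELS_URL = "https://api.example.com/v1/models"

def fetch_model_ids(url=MODELS_URL):
    """Fetch the current list of model ids from the API."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]

def new_models(previous_ids, current_ids):
    """Return model ids present now that weren't in the last poll."""
    return sorted(set(current_ids) - set(previous_ids))
```

A cron job can run this hourly, diff against the previously committed list with `new_models`, and push any changes to the repo.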


I did this against a pretty large tweet archive and got hits on about 125k of the words in the Unix dictionary.
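A minimal sketch of that kind of check: intersect the distinct words in an archive with a dictionary word list. The tiny in-memory word list here is illustrative; in practice you would load the archive from disk and the word list from something like `/usr/share/dict/words`.

```python
import re

def dictionary_hits(text, dictionary):
    """Return the set of dictionary words that appear in the text."""
    # Pull out runs of letters; lowercasing makes the match case-insensitive.
    used = set(re.findall(r"[a-z]+", text.lower()))
    return used & dictionary

# Toy example with a three-word dictionary.
words = {"cat", "dog", "fish"}
print(len(dictionary_hits("My cat chased the dog's ball", words)))  # prints 2
```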


It currently beats it, depending on the benchmarks.

