Hacker News | sudb's comments

AAISP by any chance? I am a very happy customer


No, Freedom Internet. Spiritual successor to XS4ALL, which got bought out and gutted.


A silly little Friday proof-of-concept project to see if a GPT could bootstrap its own actions (spoiler alert: it can't)


Also helped make this - here's a GitHub repo link for an example of the auth and billing implementation: https://github.com/Engine-Labs/gpt-billing-template


Made this repo yesterday based on a previous GPT I helped build at work to manage changes to a database (https://chat.openai.com/g/g-A3ueeULl8-database-builder). Please let us know if there's anything that doesn't work or could be improved!


This sounds similar to Microsoft's Autogen, and I think it's possible to replicate a lot of what you're talking about by using the rough structure of Autogen alongside the Assistants API
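The Autogen-style pattern being described — agents passing messages back and forth until a task is done — can be sketched in plain Python. This is a toy illustration of the structure, not Autogen's actual API (its real classes are `AssistantAgent` and `UserProxyAgent`); the lambda agents below are hypothetical stand-ins for real LLM calls.

```python
# Toy sketch of an Autogen-style loop: two "agents" exchange messages
# until one signals it is finished. Real agents would wrap LLM calls.

def run_chat(agent_a, agent_b, opening, max_turns=6):
    """Alternate messages between two agents until 'TERMINATE' appears."""
    transcript = [("a", opening)]
    message = opening
    for turn in range(max_turns):
        speaker, responder = ("b", agent_b) if turn % 2 == 0 else ("a", agent_a)
        message = responder(message)
        transcript.append((speaker, message))
        if "TERMINATE" in message:
            break
    return transcript

# Hypothetical stand-ins: a "planner" and a "worker" agent.
planner = lambda msg: "plan: " + msg
worker = lambda msg: "done, TERMINATE"

log = run_chat(planner, worker, "summarise the report")
```

The Assistants API slots into the same shape: each `responder` becomes a run against a different assistant, with the transcript held in a shared thread.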


I know that the use-case I mentioned, as well as many of the agentive aspects, can be achieved using code. But I have to admit that using the UI to easily create GPTs, whether just as templates/personas or full-featured with actions/plugins, makes the use-case much easier, faster, and shareable. I can just @ a specific GPT to do something. Take the use-case that Simon mentions in his blog post, Dejargonizer: I can have a research GPT that helps with reviewing papers, and I can @Dejargonizer to quickly explain a specific term before resuming the discussion with the research GPT.

Maybe this would require additional research, but I think having a single GPT with access to all tools might be slower and less optimal, especially if the user knows exactly what they need for a given task and can reach for that quickly.


The coolest thing for me here by far is the JavaScript Code Interpreter. I had no idea you could attach arbitrary executables and was trying to work out today how I might use some npm packages from inside a GPT - am definitely going to have a play to see what's possible.


That seems like a crazy oversight? Is there some legit reason to allow this? I'd imagine they're going to lock that down? I guess it's unlikely to be used for attacks since it's paid-only and already attached to a real person somehow?

Otherwise, start running commands and maybe you can get more clues about how they're doing RAG, like the article mentions.


I don't see any reason for them to lock this down.

The code runs in a Kubernetes-sandboxed container which can't make network calls and has an execution time limit; why should they care what kind of things I'm running on that CPU (which I'm already paying for with my subscription)?

The Code Interpreter sandbox runs entirely independently of the RAG mechanism, so sadly you can't use Interpreter to figure out how their RAG system works (I wish you could, it would make up for the lack of documentation.)


I've had a fair amount of success at work recently with treating LLMs - specifically OpenAI's GPT-4 with function calling - as modules in a larger system, helped along powerfully by the ability to output structured data.

> Most systems need to be much faster than LLMs are today, and on current trends of efficiency and hardware improvements, will be for the next several years.

I think I disagree with the author here, though, and am happy to be a technological optimist - if LLMs are used modularly, what's to stop us (presumably still hardware costs, on reflection) from eventually having small, fast, specialised LLMs for the things we find them truly useful/irreplaceable for?
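The "LLMs as modules" pattern above leans on function calling to get structured data out of the model. Here's a minimal sketch of that idea: a tool schema in the OpenAI function-calling format, plus a dispatcher that routes the model's structured output to ordinary Python code. The model response is mocked, since a real call needs an API key; `get_weather` and its handler are hypothetical.

```python
import json

# Tool schema in the OpenAI function-calling format: the model is told
# what arguments the function takes and returns them as structured JSON.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current temperature for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch(tool_call, handlers):
    """Route a model-produced tool call to the matching Python function."""
    args = json.loads(tool_call["arguments"])
    return handlers[tool_call["name"]](**args)

# Mocked model output: structured arguments instead of free text.
mock_call = {"name": "get_weather", "arguments": '{"city": "London"}'}
result = dispatch(mock_call, {"get_weather": lambda city: f"18C in {city}"})
```

The dispatcher is what makes the LLM feel like a module: the rest of the system only ever sees typed arguments and return values, never raw model text.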


Nothing's to stop us, and in fact we can do that now! This is basically what the post advocates for: replacing the LLM calls for task-specific things with smaller models. They just don't need to be LLMs.
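To make the swap concrete, here's a deliberately trivial sketch: a keyword scorer standing in for a small distilled sentiment classifier that replaces a generic LLM call. The word lists and the function are placeholders for whatever task-specific model you'd actually train.

```python
# A tiny task-specific "model" replacing a generic LLM sentiment call.
# In practice this would be a distilled/fine-tuned classifier; the
# keyword sets here are just illustrative stand-ins.

POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def classify_sentiment(text):
    """Cheap, fast, deterministic replacement for an LLM subtask."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

The point is the interface: once the subtask is pinned down, the caller can't tell whether a 175B-parameter model or fifteen lines of Python is behind it, and the small version runs in microseconds.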


Another happy La Pavoni owner here! I was originally after a totally manual lever machine but found a great deal (~£200) on a second-hand La Pavoni in miraculously good condition and couldn't be happier (apart from the occasional burned knuckle). Readily available spare parts and the lack of any serious electronics mean that this thing should hopefully last me a lifetime. My one recommended upgrade is a compatible bottomless portafilter + 20g basket.


I live this life too - my La Pavoni has been going strong for over a decade! Multiple shots a day. Maintenance every once in a while. My additional recommended simple upgrade is to add a momentary switch to bypass the sensors, allowing you to quickly increase the pressure for steaming.


This is still pretty fast - impressive! Are there any tricks you're doing to speed things up?


From the overall tone of the Guardian review (https://www.theguardian.com/books/2023/apr/13/look-at-the-li...) it feels like this might be a bit of a miss by Annie Ernaux.

On the other hand, her book The Years was excellent and was where I first learnt about the French love of fart jokes.

