How does OpenAI handle function calling? Is it baked into their main model, or does the model hand off to a separate model fine-tuned for function calls? Has the Home Assistant team done any research into GorillaLLM? It appears to be fine-tuned for API calling and is based on LLaMA. Maybe a Mixtral fine-tune on their dataset could provide this, or even Gorilla's model as it is.
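For what it's worth, the OpenAI side of this is mostly schema-in, structured-call-out: you pass JSON Schema descriptions of your functions and the model emits a request to call one; it never executes anything itself. A rough sketch with the OpenAI Python SDK (the get_calendar_events name and its fields here are made up for illustration):

    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Describe the callable "tools" as JSON Schema; the model only returns
    # a structured request to call one, it never runs them.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_calendar_events",  # hypothetical, for illustration
            "description": "List calendar events for a given day",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {"type": "string", "description": "ISO date, e.g. 2024-01-15"},
                },
                "required": ["date"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "What's on my calendar tomorrow?"}],
        tools=tools,
    )

    # Assuming the model chose to call a tool, inspect what it asked for.
    call = response.choices[0].message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))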

I find the whole area fascinating. I've spent an unhealthy amount of time improving "Siri" by building on some of the work from the COPILOT iOS Shortcut and giving it "functions", which are really just more iOS Shortcuts that do things on the phone, like interacting with my calendar. I'm using GPT-4, but it would be amazing to break free of OpenAI, since they're not so open and all.
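The dispatch side is the simple part: the model returns a function name plus arguments, and you map that to whatever actually does the work. A rough sketch of that pattern in a Python harness (the handler names are hypothetical; in the Shortcuts-based setup above each entry would trigger a Shortcut rather than a Python function):

    import json

    def run_shortcut(name: str, args: dict) -> str:
        # Placeholder: stand-in for however you actually invoke the iOS Shortcut.
        return f"ran {name} with {args}"

    # Hypothetical dispatch table from the model's function names to local actions.
    HANDLERS = {
        "get_calendar_events": lambda args: run_shortcut("Get Calendar Events", args),
    }

    def handle_tool_calls(message) -> list[dict]:
        """Turn the model's tool calls into tool-result messages to send back."""
        results = []
        for call in message.tool_calls or []:
            args = json.loads(call.function.arguments)
            output = HANDLERS[call.function.name](args)
            results.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": output,
            })
        return results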
