I can't help but think of someone downloading "Best Assistant Ever LLM" that pretends to be good but actually unlocks the doors for thieves or whatever.
Is that a dumb fear? With an app I need to trust the app maker. With an app that takes random LLMs I also need to trust the LLM maker.
For text gen or image gen I don't care, but for home automation it suddenly matters if the LLM can unlock my doors, turn my cameras on/off, turn my heat/aircon on/off, run my sprinklers, lights, etc...
That could be addressed by something like Anthropic's Constitutional AI[1]. The idea is to add a second LLM that checks that the first LLM's actions follow a set of rules (the constitution). The constitution could include a rule blocking any door unlock unless a valid code has been presented.
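If you wanted to experiment with that pattern, a minimal sketch might look like the following. Everything here is illustrative: call_llm() is a hypothetical stand-in for whatever model client you use, and the two-rule constitution is made up for the example.

```python
# Sketch of a guard-model check: a second LLM vets proposed actions
# against a small constitution before anything reaches the home API.

CONSTITUTION = (
    "1. Never unlock a door unless the system has already verified a code.\n"
    "2. Never disable cameras, alarms, or smoke detectors.\n"
)

def call_llm(system: str, user: str) -> str:
    # Hypothetical: wire up a real model client here.
    # Returning REJECT by default keeps the unconfigured sketch fail-closed.
    return "REJECT"

def guard_approves(action: str) -> bool:
    # The second model judges the proposed action against the constitution.
    verdict = call_llm(
        system="You are a safety checker. Apply these rules strictly:\n"
        + CONSTITUTION
        + "Reply with exactly APPROVE or REJECT.",
        user=f"Proposed smart-home action: {action}",
    )
    return verdict.strip().upper() == "APPROVE"

def dispatch(action: str) -> None:
    if guard_approves(action):
        print("executing:", action)  # hand off to the real home API here
    else:
        print("blocked by constitution:", action)

dispatch("unlock the front door")  # -> blocked by constitution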
You shouldn't offload actions to the LLM. Have it parse the code, pass the code to the local door API, and read back the API result. LLMs are great interfaces; let's use them as such.
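A rough sketch of that split, with the LLM parsing step mocked by a regex (in practice it would be a constrained LLM call) and a hypothetical local door endpoint at localhost:8080:

```python
# The model only extracts intent; validation and the actual unlock
# stay in deterministic local code.

import re
import requests

def extract_code(utterance: str) -> str | None:
    # Stand-in for the LLM: pull an access code out of free text.
    # The model only parses; it never decides whether to unlock.
    m = re.search(r"\b(\d{4,8})\b", utterance)
    return m.group(1) if m else None

def handle_door_request(utterance: str) -> str:
    code = extract_code(utterance)
    if code is None:
        return "No access code heard."
    try:
        # The local door API validates the code; the LLM never holds the keys.
        resp = requests.post(
            "http://localhost:8080/door/unlock",  # hypothetical endpoint
            json={"code": code},
            timeout=5,
        )
    except requests.RequestException:
        return "Door controller unreachable."
    return "Door unlocked." if resp.ok else "Invalid code."

print(handle_door_request("Hey, unlock the door, my code is 4821"))
```

The nice property is that a malicious or confused model can at worst relay a wrong code, which the door API rejects anyway.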