This is awesome! I have been thinking about what I'd want in an open source product after feeling unhappy trying to mess around with Bluetooth devices and overriding the assistant on an Android.
I really think an open source experience is going to be the only way this specific area will advance (wearable voice assistants). Apple/Google/Amazon are always going to be very conventional in how they think about the purpose of their products, how personalized they can be, and how much they can be expected to understand the user.
Looking at the Apple prompts, it's notable how uninteresting they are. There is no real theory of function, no sense of relationship or roles. They are letting all of that default to some unspecified common sense (as found in the model), handling only the surface level of these interactions. And they don't appear to bake it into the model either, because that wouldn't be enough: those deeper interactions require state that pretty clearly isn't specified anywhere. Anyway, I'm really going off on a prompting tangent.
I think there is _really_ deep stuff people could be creating using these building blocks. The kinds of developments that are a synthesis of modified personal behavior and the tools provided. A tool this powerful is being wasted (theoretically and right now in actuality) if you don't modify behavior when using it. But that's a terrible way to make a commercial product, you can't expect people to change for you. And so they create these very bland experiences that are the projection of their current apps onto a voice or AI interface.
And they aren't wrong to take this conservative approach... it's very boring but very rational. I think this is a particularly opportune moment for people with their own very personal and specific ideas about how to integrate AI into a particular part of their life to try to actually build that experience, with an authentic goal of just improving their own life. An open source stack makes that possible... including the device, because Google and Apple just won't let you use a phone that way.
So this is very exciting! My dev kit is ordered, and I await it eagerly.
As an apparently excited user of this device, what do you think about the privacy concerns - both for the users themselves, but more importantly for people who interact with the users?
Just like with the prompt, I'm not thinking right now about what this is for everyone. I want something I can use myself, for myself. I will figure out what I think is appropriate in terms of privacy and social acceptability as I use it, and if I get it wrong that will be on me.
With respect to recording, I'll also be thinking about what kinds of uses are responsible. The existence of a recording doesn't mean I have to use it or store it. I honestly can't recall a time when, if I had been continuously recording, I would have used that recording against anyone present. I would expect to be as respectful of the privacy of people I interact with as I am now... I don't recount what people say to me now without considering whether what they said might have been in confidence, without considering how it might be interpreted differently by a different audience or out of context, and without passing on my most good-faith interpretation of what they said. That's a complicated rule system, but it does actually fire when I recount other people's statements.
But I'll also have to navigate how I use it, understand what things it captures that I don't want it to, and how that affects the people around me.
Also I just want to see what's possible, without pre-censoring what's appropriate before we know how any of this stuff works in practice. I'm willing to take the risk it's all a bad idea and I'll soon think of it as a dead end.