> Distilling these big, slow vision transformer models into something that can be used in realtime on the edge is going to be huge.
Something I didn't quite get is: does Roboflow do this for you, or are you pointing to more work that you'd like to happen someday (possibly done by Roboflow, possibly someone else)? Also, are you worried about the business model if people can distill to run on their own devices (so they don't need to pay you anymore)?
This is the core of what we do! Previously our job was distilling human knowledge into a model; now that knowledge is starting to come from bigger models, with humans managing the objectives rather than doing the labor.
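For the curious, here's a minimal sketch of what that teacher-to-student pipeline can look like, assuming the open-source `segment-anything` package and a downloaded SAM checkpoint. The filenames (`sam_vit_h_4b8939.pth`, `frame.jpg`) and the hard-coded class 0 are illustrative placeholders, not anyone's production setup; in practice a human (or a second model) assigns class names to the regions.

```python
# Sketch: use a big, slow teacher (SAM) to pseudo-label images offline,
# then train a small realtime student on the resulting labels.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load the large teacher model (SAM ViT-H; checkpoint path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Run the teacher once, offline, over a raw image to produce masks.
image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with 'bbox' (XYWH), 'segmentation', ...

# Convert SAM's XYWH boxes into YOLO-style normalized labels.
# Class 0 is a stand-in; labeling the classes is where the human objective-setting comes in.
h, w = image.shape[:2]
with open("frame.txt", "w") as f:
    for m in masks:
        x, y, bw, bh = m["bbox"]
        cx, cy = (x + bw / 2) / w, (y + bh / 2) / h
        f.write(f"0 {cx:.6f} {cy:.6f} {bw / w:.6f} {bh / h:.6f}\n")

# A small detector trained on these pseudo-labels is what actually ships
# to the edge device; the teacher never has to run there.
```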
> Also, are you worried about the business model if people can distill to run on their own devices (so they don't need to pay you anymore)?
This is probably a risk to the current business model over the long term, but we're constantly working on reinventing ourselves & finding new ways to provide value. If we don't adapt to the changing world we deserve to go out of business someday. I'd much rather help build the thing that makes us obsolete than sit idly by while someone else builds it.
I think of this risk similarly to the way that I'm marginally worried that, in the long run, AGI will obviate the need for my job. Probably true, but the opportunities it will present are far greater & it's better to focus on how to be valuable in the future than cling to how I provide value today.
Out of my depth, but can the SAM outputs be mapped to 6DOF models of the objects directly? Or would you still need to use the resulting dataset to train for 6DOF (or keypoints, for that matter)?