AFAICT the goal is: a navigation/direction model that can translate phrase-based directions into actual map-based directions, with the caveat that the model would be updated primarily by giving it feedback the same way you'd give a person feedback.
Sounds only a couple of steps removed from basically needing AGI?
I suspect you’d want to start by trying to translate differences between images into descriptive differences. Maybe you could generate example pairs of images by symbolic manipulation, or maybe NLP could let us find differences between pairs of captions. Large NLP models already feel pretty magical to me and encompass things we would have said required AGI until recently, so it seems possible, though really tough.
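To make the caption-pair idea slightly more concrete, here's a toy sketch (my own assumption about one possible starting point, not a real pipeline): a word-level diff between two captions as the crudest possible "descriptive difference" extractor. A real system would obviously need much more than this.

```python
import difflib

def caption_diff(before: str, after: str) -> str:
    """Toy sketch: derive a crude 'descriptive difference' between two
    captions via a word-level diff. Purely illustrative."""
    a, b = before.split(), after.split()
    removed, added = [], []
    matcher = difflib.SequenceMatcher(a=a, b=b)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("replace", "delete"):
            removed.extend(a[i1:i2])  # words only in the first caption
        if op in ("replace", "insert"):
            added.extend(b[j1:j2])    # words only in the second caption
    parts = []
    if removed:
        parts.append("no longer: " + " ".join(removed))
    if added:
        parts.append("now: " + " ".join(added))
    return "; ".join(parts) if parts else "no difference"

print(caption_diff("a red car parked on the left",
                   "a red car parked on the right"))
# → no longer: left; now: right
```

Obviously this only surfaces lexical differences; the hard part is mapping those differences back onto the images (or the map) themselves.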