I wonder if it would be possible to create an AI image/video editor, very much like Blender or Photoshop, where you get to drag objects around. It could be based on a transformer(?) model where each token (or group of tokens) encodes an object, similarly to how a 3D game engine encodes a game object - latent vectors, i.e. matrix rows, that correspond to position in world space, size, texture, etc.
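To make the token idea concrete, here is a minimal sketch of what such an object encoding might look like. Everything in it is an assumption for illustration: the 64-dim token width, the position/size/texture split, and the `make_token` helper are all made up, not an existing scheme.

```python
import torch

# Hypothetical layout: each scene object becomes one fixed-width token,
# with named slices for position, size, and a texture/appearance latent.
POS, SIZE, TEX = 3, 3, 58          # 3 + 3 + 58 = 64-dim token (made-up split)
TOKEN_DIM = POS + SIZE + TEX

def make_token(position, size, texture_latent):
    """Pack one object's state into a single token vector."""
    return torch.cat([
        torch.tensor(position, dtype=torch.float32),   # world-space x, y, z
        torch.tensor(size, dtype=torch.float32),       # bounding-box extents
        texture_latent,                                # e.g. from an autoencoder
    ])

# A "scene" is then just a (num_objects, TOKEN_DIM) matrix of such tokens.
scene = torch.stack([
    make_token([0.0, 1.0, 5.0], [1.0, 1.0, 1.0], torch.randn(TEX)),  # a cube
    make_token([2.0, 0.0, 4.0], [0.5, 2.0, 0.5], torch.randn(TEX)),  # a pillar
])
print(scene.shape)  # torch.Size([2, 64])
```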
The 'renderer' would be a neural net that takes this soup of tokens and resolves it into a 2D frame.
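And a toy sketch of what that neural 'renderer' could look like: a small transformer that attends over the object tokens, followed by a convolutional decoder that upsamples the pooled scene representation into a frame. `TokenRenderer` and every dimension in it are hypothetical choices of mine, not a known architecture.

```python
import torch
import torch.nn as nn

class TokenRenderer(nn.Module):
    """Toy neural 'renderer': attends over object tokens, then decodes
    the pooled scene representation into an RGB frame. Dimensions are
    illustrative, not tuned."""
    def __init__(self, token_dim=64, model_dim=128):
        super().__init__()
        self.embed = nn.Linear(token_dim, model_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=model_dim, nhead=4, batch_first=True)
        self.mixer = nn.TransformerEncoder(layer, num_layers=2)
        # Project the scene summary onto a coarse 8x8 feature grid,
        # then upsample to a 64x64 RGB image.
        self.to_grid = nn.Linear(model_dim, 32 * 8 * 8)
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1),    # -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, tokens):                    # (batch, objects, token_dim)
        x = self.mixer(self.embed(tokens))        # cross-object attention
        summary = x.mean(dim=1)                   # pool the token soup
        grid = self.to_grid(summary).view(-1, 32, 8, 8)
        return self.decode(grid)                  # (batch, 3, 64, 64)

renderer = TokenRenderer()
tokens = torch.randn(1, 2, 64)    # or scene.unsqueeze(0) from the sketch above
frame = renderer(tokens)
print(frame.shape)                # torch.Size([1, 3, 64, 64])
```

In a real system the decoder would presumably be something far heavier (a diffusion or GAN decoder, say), but the shape of the idea is the same: tokens in, pixels out.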
The underlying logic engine would be a human - or perhaps a traditional video game engine - emitting the tokens from which the context can be built up and decoded into an image.
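Continuing from the two sketches above (it reuses their `scene` matrix and `TokenRenderer`), the editor loop itself could then be very dumb: each drag gesture, or game-engine tick, edits the token matrix, and the renderer resolves the result into the next frame. `drag_object` is again an invented helper, just to show the shape of the loop.

```python
import torch

# Hypothetical editor loop: the "logic engine" (a human drag, or a game
# engine tick) mutates object state, re-emits the token matrix, and the
# neural renderer resolves each edit into a fresh frame.
def drag_object(scene, index, delta_xyz):
    """Move one object by editing its position slice."""
    scene = scene.clone()
    scene[index, :3] += torch.tensor(delta_xyz)   # first 3 dims = position
    return scene

with torch.no_grad():
    for step in range(10):                        # ten frames of a drag gesture
        scene = drag_object(scene, index=0, delta_xyz=[0.1, 0.0, 0.0])
        frame = renderer(scene.unsqueeze(0))      # re-render the edited scene
        # a real editor would display `frame` here
```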