how will the AI know what your product looks like? You probably already have CAD models; couldn't you import those into Blender and make something in an afternoon or two?
> how will the AI know what your product looks like?
Training an embedding/LoRA on the product and using it with the base model, same as is done for image-generation models (video-generation models usually use very similar architectures to image-generation models -- e.g., SVD is a Stable Diffusion 2.x-family model with some tweaks).
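To make that concrete, here's a rough sketch with the open-weight stack, assuming you've already trained a product LoRA (e.g. with the diffusers DreamBooth/LoRA training scripts) on renders of the product: generate a still with the base image model plus the LoRA, then hand it to SVD's image-to-video pipeline. The model IDs are real; the LoRA path and trigger token are placeholders.

```python
# Sketch only: product LoRA on a Stable Diffusion base, then SVD to animate the frame.
# Assumes a LoRA was already trained on product renders; paths/tokens are placeholders.
import torch
from diffusers import StableDiffusionPipeline, StableVideoDiffusionPipeline
from diffusers.utils import export_to_video

# 1. Generate a still of the product with the base image model + the product LoRA.
img_pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
img_pipe.load_lora_weights("./my-product-lora")  # placeholder path to the trained LoRA

image = img_pipe(
    "a photo of sks widget on a studio backdrop"  # "sks widget" = whatever trigger token was used in training
).images[0]

# 2. Feed that still to SVD (image-to-video) to get a short clip.
vid_pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

frames = vid_pipe(image.resize((1024, 576)), decode_chunk_size=8).frames[0]
export_to_video(frames, "product_clip.mp4", fps=7)
```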
Now, you may not be able to do this with Sora when OpenAI releases it as a public product, just like you can't with DALL-E. But that's a limitation of OpenAI's decisions around what to expose, not of the underlying technology.