
Wow, your story about the "FX and animation house" is funny, sad and unsurprising - all at the same time. I'm just surprised they didn't actually test the full workflow before leaping. It reminds me of this tale from actual production people working with Sora https://www.fxguide.com/fxfeatured/actually-using-sora/ which I also found completely unsurprising. It still took a team of three experienced pros around two weeks to complete a very modest 90 second video, and they had to lower their expectations to "making something out of the clips the AI gave us" instead of what they actually wanted. Even that reduced goal required their entire toolbox of traditional VFX tools to modify the AI-generated clips so they matched each other well enough. Sure, it's early days and Sora is still pre-alpha. Some of these problems are solvable with fine-tuning, retraining and adding extensive features for more granular control, but other workflow gaps are fundamental to the nature of how NNs work. I suspect the bottom line is that solving some key parts of real-world high-end film/video workflows with the current prompt-based NNs is a case of "you can't get there from here."


For sure. Tooling on top of the core model functionality will absolutely increase the utility of the existing prompt-based workflows, too, but my gut says diminishing returns on model training are going to keep the "good enough" goalposts much further out for video than for text and still images.





