
My idea of great film tech? 8K, 60fps, and true 3-D (not the hokey 2-level thing)

It'd be awesome, but it's a long, long way away.

As Jim says, this is low-hanging fruit. Easily done with most everything that's on the shelf today.



It may be low-hanging fruit, but imagine rotoscope work on a 24fps feature vs. a 60fps feature. The work has just increased 2.5x for the same shot.

For fully rendered content there is likely no additional labor, but for any frame-by-frame work involving humans, post-processing could take a lot longer.
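
Back-of-the-envelope, in Python (the shot length is just illustrative):

    # Frame-by-frame work scales linearly with frame rate,
    # which is where the "2.5x" comes from.
    shot_seconds = 8                  # hypothetical 8-second shot
    frames_24 = shot_seconds * 24     # 192 frames to rotoscope
    frames_60 = shot_seconds * 60     # 480 frames to rotoscope
    print(frames_60 / frames_24)      # 2.5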


Is this kind of thing really done frame by frame, though? I'd think that with smart interpolation techniques you shouldn't have to touch single frames anymore.


Rotoscoping[0] is by definition frame-by-frame work. Keyframed animation would largely remain unchanged, though rendering would take 2.5x as long.

[0] http://en.wikipedia.org/wiki/Rotoscoping
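
To illustrate why keyframed animation is largely frame-rate agnostic: keys are set at points in time and the in-betweens are interpolated, so a higher frame rate just means sampling the same curves more often. A toy sketch (linear interpolation for simplicity; real animation packages use splines):

    # Evaluate an animation curve at any frame rate; the animator's
    # keys don't change, only the number of sampled (rendered) frames.
    keys = [(0.0, 0.0), (1.0, 10.0), (2.0, 4.0)]  # (time_sec, value), illustrative

    def sample(t):
        for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
            if t0 <= t <= t1:
                u = (t - t0) / (t1 - t0)
                return v0 + u * (v1 - v0)
        return keys[-1][1]

    two_sec_at_24 = [sample(i / 24) for i in range(48)]   # 48 frames
    two_sec_at_60 = [sample(i / 60) for i in range(120)]  # 120 frames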


I'm pretty sure large-budget animations are already rendered at a higher frame rate for proper motion blur. You can't add proper motion blur as a post-effect, the way video games do.
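
For what it's worth, offline renderers typically get motion blur by sampling the scene at several instants while the virtual shutter is open and averaging the results. A crude sketch of the idea (render_at is a hypothetical stand-in for the actual renderer):

    import numpy as np

    def motion_blurred_frame(render_at, frame, fps=24, shutter=0.5, samples=16):
        # shutter=0.5 approximates a 180-degree film shutter;
        # render_at(t) returns the image (an array) at absolute time t.
        t_open = frame / fps
        t_close = t_open + shutter / fps
        times = np.linspace(t_open, t_close, samples)
        return np.mean([render_at(t) for t in times], axis=0)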


"120 fps ought to be enough for everyone."


Given the way your eyes work... yes! 120FPS is beyond almost everyone's internal "frame rate," so going any higher really doesn't make much sense except in special cases (delivering different frames to each eye with a higher frame rate and shutter glasses, for example).


I think people read my initial comment as some sort of plea for more and more tech, but that's not the way I meant it.

My point was about archival. If you record data at a level a degree or two beyond human perception, you can always add whatever post-processing you want to get any kind of artistic effect desired. You want that old jerky 24fps stuff? Fine. Post-process it to get it. People 200 years from now will be able to watch it in 24fps, in black-and-white, in 2-D, or whatever the initial artistic intent was. But future audiences and artists might also choose to remix it, or to see it with more data.
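
A crude sketch of what "post-process it to get 24fps" could look like, assuming a hypothetical 120fps master (frames here are just numpy arrays; blending adjacent source frames roughly mimics a real shutter, while plain decimation keeps the jerky strobing look):

    import numpy as np

    def master_120_to_24(frames_120, blend=True):
        # Each 24fps output frame covers 5 consecutive 120fps frames.
        groups = [frames_120[i:i + 5] for i in range(0, len(frames_120) - 4, 5)]
        if blend:
            return [np.mean(g, axis=0) for g in groups]  # shutter-like blur
        return [g[0] for g in groups]                    # straight decimation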

We are currently in a situation akin to knowing how to shoot color movies but refusing to because everybody liked black and white so much. Or being able to record audio but being afraid of putting all those movie pianists out of work.

This kind of thing can be framed as artistic all day long, but in reality it's just about fear of change -- more to the point, fear of mucking around with your business model too much.

So having learnt that 120FPS is the limit, I'd shoot for 240 or 480FPS.


As other people have commented, the medium is part of the artwork. To quote a comment from above: "If Leonardo could have taken a picture of Mona Lisa instead of painting her, would it be in the Louvre today?"

Also, the days of watching just the raw footage are long over. These days, almost every frame you see in a movie has been heavily post-processed. Quadrupling the frame rate for no reason other than to be safe or to allow others to remix your work in the future means quadrupling render times and, for jobs like rotoscoping, quadrupling people's workload. It just doesn't make any economic sense.

Unless you were talking about shooting just the raw footage with higher resolution and frequency (and downsampling it prior to post-processing), but then what people are remixing isn't your film but your footage. Plus, you still have increased cost for more sophisticated equipment and for storage.
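
The storage side is easy to sketch, at least to an order of magnitude (all figures illustrative: uncompressed 16-bit 4K, where real pipelines compress heavily):

    # Back-of-the-envelope raw storage cost per feature film.
    width, height = 3840, 2160
    bytes_per_pixel = 6                             # 16-bit RGB, uncompressed
    frame_bytes = width * height * bytes_per_pixel  # ~50 MB per frame

    def terabytes(fps, minutes=120):
        return fps * 60 * minutes * frame_bytes / 1e12

    print(terabytes(24))    # ~8.6 TB for a two-hour film
    print(terabytes(120))   # ~43 TB for the same film at 120fps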


I hear you, and you make good points.

But this issue is not unique to frame rate, or even cinema. We ran into this same problem when HDR took off in still photography. How much of the art is the medium, and how much is the processing?

I think you can argue it either way, but my point was that additional data can always lead to various interpretations later on, while less data always leads to a less-varied range of future possibilities. Speculating on what kinds of post-production work might be done, who might do it, or whether it's a good thing or not misses the point. What used to be part of the medium is now part of the process.

Leonardo would have taken the picture, then painted the painting. What hung in the Louvre would have been up to him, as to what he decided to release. Would the painting be worth any less if we had a fully-rendered, highly-detailed model of the studio, subject, and artist? Not at all, but there are many derivative works we could create with that kind of data that we could never do from just a painting. Think of it in a silly way: if I see a man and draw a stick man, am I locked in forever to remembering that man in such simple terms? Or might I want to come back and paint him? Why make me choose when the tech lets me have it both ways?

I won't go into the technology/cost issues, as these things have a way of changing dramatically from year to year.


I agree with you that it would be wonderful if all footage could be preserved with maximum spatial and temporal resolution, and maximum dynamic range (and, for that matter, someday become part of the public domain!).

All I was saying is that realistically, and unfortunately, the economic realities of film production make it unlikely that anything will be shot with quality much higher than what the stakeholders can make use of in the short to medium term.


Replying mostly to DanielBMarkham, the sibling.

I respect your point and I mostly share the sentiment. Even so: no, da Vinci wouldn't have chosen what to put in the Louvre. That's the point. We chose what to put in the Louvre, and if da Vinci had chosen another medium, he would have created something else, which might not hang there, or anywhere, today.

I'm working on a program where Redis is a great fit for some things, while MongoDB could work for others. Even so, I've chosen to use Redis exclusively, because I have a feeling the constraint will help me create a better program.
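
As an example of what that constraint looks like in practice: a record that would be a natural MongoDB document can be flattened into a Redis hash instead (a hypothetical sketch using redis-py; the key and fields are made up):

    import redis  # pip install redis

    r = redis.Redis()

    # The kind of record one might store as a Mongo document,
    # kept as a flat Redis hash instead.
    r.hset("user:42", mapping={"name": "Ada", "visits": 1})
    r.hincrby("user:42", "visits")   # atomic counters come for free
    print(r.hgetall("user:42"))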

I really believe in constraints in creation. But of course, if Cameron needs 60fps he should shoot in that!


I can't recall the source, but this "max eye frame rate" has been shown to be a misinterpretation.

The eye's individual components do have something like a frame rate, but they are not synchronized with each other, so the eye as a whole can detect changes faster than that.



