Trillion-frame-per-second video (web.mit.edu)
458 points by xtacy on Dec 13, 2011 | 61 comments


Interesting story, but flagged as blogspam.

Could an admin please change the link to a primary source like this one: http://web.mit.edu/newsoffice/2011/trillion-fps-camera-1213....

Or even this one: http://www.nytimes.com/2011/12/13/science/speed-of-light-lin...

[Edit]

gmaslov's comment provides an even better link, although not as 'newsy': http://web.media.mit.edu/~raskar/trillionfps/


My friend Steve Silverman worked on a system, commercialized back in 2006 [1], that unlike the MIT one captures full frames, so you can get 3D at video frame rates.

It uses 5 ns pulsed-laser "photon torpedoes" at 30 Hz to illuminate the scene and then captures the return at a much higher sampling rate, whereas the MIT system just scans a line at a time. So, unlike the MIT one, it's a small, hand-holdable system that captures full motion. [2]

You get full scene 3D without the drawbacks of scanning.

They flew one of their cameras on the last Discovery Mission.

[1] http://asc3D.com

[2] http://www.youtube.com/watch?v=3L91F9o600E

http://video.google.com/videoplay?docid=-3656494784112768834

I had the chance to play with it a couple years ago - quite amazing.


Parallel vs. Serial.

The Google Tech Talk goes into a little bit of detail about how they capture a stack of frames (slices in Z) into a buffer right behind the sensor and then dump that out for each snapshot.


This works by sampling a static scene a very large number of times with a laser flash and a "streak tube" camera that records a picosecond-long movie of the light arriving at a single scanline.

A normal video camera records a frame at a time; this one records a scanline-sized movie at a time. The raw data is noisy, but the scene is static, so they can sample the same line over many flashes.

After a few minutes of scanning they have a trillion fps video where you can see a wavefront propagate at the speed of light. Amazing.
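
A minimal sketch of that repeat-and-average idea, in Python (names and sizes here are illustrative, not taken from the MIT setup):

    import numpy as np

    # Each laser flash yields one noisy streak image of a single scanline:
    # axis 0 is time in ~1 ps bins, axis 1 is position along the line. The
    # scene is static and the pulses are identical, so averaging many
    # flashes beats down the shot noise.
    N_FLASHES = 1000
    N_TIME_BINS = 512
    N_PIXELS = 672

    rng = np.random.default_rng(0)
    true_signal = rng.uniform(0.0, 5.0, size=(N_TIME_BINS, N_PIXELS))  # stand-in scene

    acc = np.zeros((N_TIME_BINS, N_PIXELS))
    for _ in range(N_FLASHES):
        acc += rng.poisson(true_signal)      # one flash: signal plus shot noise
    scanline_movie = acc / N_FLASHES         # SNR grows like sqrt(N_FLASHES)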


The relevant video from MIT Media Lab is here:

http://www.youtube.com/watch?v=EtsXgODHMWk

> MIT researchers have created a new imaging system that can acquire visual data at a rate of one trillion exposures per second. That's fast enough to produce a slow-motion video of light traveling through objects.

> http://www.media.mit.edu/~raskar/trillionfps/


Ever since I started playing with PWM control of LEDs for lighting, I've wanted a visualization of the spherical pulses of light traveling through a room (inspired by the moving-mirror cameras used to analyze high-speed explosions). I calculated the PWM frequency I would need to reach before the light arriving from the far walls of a room could be distinguished from the original pulse. Thank you, MIT, for actually making this happen, on an even cooler scale.
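
A back-of-the-envelope version of that calculation (the 10 m extra path is my assumption, not the commenter's number):

    # To tell the far-wall reflection apart from the direct pulse, the pulse
    # must be shorter than the extra round-trip delay.
    C = 3.0e8            # speed of light, m/s
    extra_path = 10.0    # assumed extra distance travelled via the far wall, m

    delay = extra_path / C       # ~33 ns
    min_freq = 1.0 / delay       # ~30 MHz, far beyond typical LED PWM rates

    print(f"delay {delay * 1e9:.1f} ns -> pulses shorter than that, "
          f"i.e. switching above {min_freq / 1e6:.0f} MHz")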

I wonder if the in-room impulse response of an LED light source could be exploited for ultra-high-bandwidth data transmission through open air.



The actual website for the project is here: http://web.media.mit.edu/~raskar/trillionfps/ ; it has a great FAQ section and more videos.


I heard the word virtual used to describe the camera, and couldn't figure out if they were, for sure, talking about taking pictures of a single photon in real life -- ergo not simulated. I suspect I misunderstood something.

I'm skeptical because... how would you see a photon? Unless photons themselves give off light as they travel, but that would mean photons emit photons...


Photons do not emit photons. In fact, photons don't interact with each other at all. But if you send a very short pulse of light (visualized as the thin spherical shell in the video) at an object, the photons in that pulse will hit the object at different times depending on how far away the scattering surface is from the source of the light. After they reflect from the surface, these photons will also take different amounts of time to travel to the camera (your eye). If you can measure the time the photons enter the camera very accurately, you'll see different parts of the scene light up at different times.

This would be very close to watching the light pulse travel across the scene...except for that second effect: that the parts of the scene are different distances from the camera.
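
A minimal sketch of that geometry (coordinates and names are mine, purely illustrative):

    import math

    # Each scene point lights up at a time set by the source->point leg, and
    # is *seen* to light up later still, after the point->camera leg.
    C_MM_PER_PS = 0.3  # light travels ~0.3 mm per picosecond

    def arrival_time_ps(source, point, camera):
        """Picoseconds from pulse emission until light scattered at `point` reaches the camera."""
        return (math.dist(source, point) + math.dist(point, camera)) / C_MM_PER_PS

    source = (0.0, 0.0, 0.0)      # laser position, mm
    camera = (100.0, 0.0, 0.0)
    for point in [(50.0, 20.0, 0.0), (50.0, 200.0, 0.0)]:
        print(point, f"{arrival_time_ps(source, point, camera):.0f} ps")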


>In fact, photons don't interact with each other at all.

They can. It's a non-classical consequence of Quantum Electrodynamics. If you are interested in the cross-sections, check out Berestetskii et al., 1982.


> I heard the word virtual used to describe the camera, and couldn't figure out if they were, for sure, talking about taking pictures of a single photon in real life -- ergo not simulated.

I think they were using the word "virtual" to describe the array of camera sensors as one camera. It sounds like they basically had an array of sensors all snapping as fast as they could, but offset in time from each other by some amount, to achieve the effect of having one frame for every N fractions of a second. They also had to break the scene down into long strips, repeatedly use the sensors to take the same picture of each strip, and re-combine the strips later.

> I'm skeptical because.. how would you see a photon? Unless photons themselves give off light as they travel, but that would mean photons emit photons...

What you were seeing in the video were the photons entering the camera after having bounced off of the scene. It's kind of misleading that they make it sound like you're seeing the photons hitting the object in real time, when in fact the photons had already hit the object and are bouncing back into the camera. But the resolution with which they're detecting the incoming photons conveys the "shape" of the waves bouncing and propagating off/around objects. That's my take on it, at least.


>It's kind of misleading that they're making it sound like you're seeing the photons

It was misleading in the way the commentator said we can now watch the photons as they travel through space.

Space is full of photons from stars and galaxies. We only ever see the source of their current direction, which is why space is black, but we see the bodies within it.


The camera records a single scan line from a bunch of different pulses to capture the length of time they need for that scanline. Then they rotate the mirror and capture the next scan line. After that, they reconstruct the video in software. It relies on the regularity of the laser pulse to be accurate. They aren't capturing a real movie, but an aggregate description of how light moves across a scene.
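
A sketch of that scan-and-stitch loop (structure assumed from the description above, not taken from the actual MIT pipeline):

    import numpy as np

    # One time-resolved scanline per mirror position, stacked into a
    # (time, y, x) video.
    N_ROWS, N_TIME_BINS, N_PIXELS = 480, 512, 672

    def capture_scanline(row):
        """Placeholder for one averaged streak capture at a given mirror angle."""
        return np.zeros((N_TIME_BINS, N_PIXELS))   # (time, x) for this row

    video = np.empty((N_TIME_BINS, N_ROWS, N_PIXELS))
    for row in range(N_ROWS):               # rotate the mirror one step per line
        video[:, row, :] = capture_scanline(row)
    # video[t] is now one full frame of the reconstructed slow-motion movie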

As for "seeing a photon," it's just an analogy.


An analogy they repeat three times. And it's just plain false every time. In fact, I'm not sure at all what the analogy is supposed to help us understand. We're not seeing a photon as it moves through space; so why say we are? We're seeing light reflected off of an object; what makes this different from any other camera that does the same thing? It's faster? Then say something to help me understand how that changes things.


We're seeing the way a pulse of photons moves through space. So you can see how long it takes for a photon to diffuse through a plastic bottle. You can't see that with other cameras.


Still wondering about it, too.

It may be the laser being refracted by the air, just like commercial green lasers are refracted by fog. I actually think that is the only explanation. (And neutrinos, because seriously, neutrinos rock.)

Also, I can see some interesting uses for this camera, beyond studying light...


> It may be the laser being refracted by the air, just like commercial green lasers are refracted by fog. I actually think that is the only explanation.

It is. Seeing is interacting with photons, so you can't 'see' a photon that didn't just hit your camera. The process is similar to how, I've heard, we can see the speed of light in space, as light from an exploding star propagates through a nebula and reflects back to us. I can't find a reference for that, though; I heard it as a story from a physics professor.

> And neutrinos, because seriously, neutrinos rock

And magnets.


More informative link: http://web.mit.edu/newsoffice/2011/trillion-fps-camera-1213....

Imaging Systems Applications Paper "Picosecond Camera for Time-of-Flight Imaging": http://www.opticsinfobase.org/abstract.cfm?URI=IS-2011-IMB4

ACM paper "Slow art with a trillion frames per second camera": http://dl.acm.org/citation.cfm?doid=2037715.2037730


Also, some of the math behind reconstructing the 4d light-propagation is in this tech report from some of their collaborators: http://users.soe.ucsc.edu/~amsmith/papers/ucsc-soe-08-26.pdf


For anyone wondering how the "capturing light in motion" works - the researchers use a very short pulse of light, so when playing back the footage in slow motion, you can see the light pulse moving through the scene.

They're not directly observing the photons in motion; they're observing which parts of the scene the light is scattering off at a given moment.


A (virtual) photon race with a photo finish. It would be totally amazing to see two different visible light waves (red vs. blue) crossing a glass prism and how their velocity difference looks. Also wondering whether it would be possible to capture light traveling through a fiber-optic cable (total internal reflection).


They can't be capturing photons in motion, or capturing anything "moving at the speed of light." That doesn't make any sense. According to relativity, photons don't actually "move" at all. (As I understand it.)


As plain English descriptions go, I think it's satisfactory. If you know enough to know what's really going on, the video still has plenty of information to describe what they are really doing. I rather suspect they already know that.

Also, photons don't "move" in their own reference frame, in which the entire universe is a zero-dimensional point with no time; for everybody who has mass, they are moving. At the risk of being mass-o-centric, I think it's not that wrong to speak of their "motion". Some of my best friends have mass.


Photons do in fact "move" by any reasonable definition of the word. In fact, unlike massive objects, photons are moving in all inertial reference frames. They travel along "light-like" paths through spacetime:

http://en.wikipedia.org/wiki/Spacetime#Light-like_interval

You might be thinking of the fact that no proper time elapses along such a path, which is true but not relevant from an observer's perspective.
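
For concreteness, with the (+, -, -, -) sign convention the interval between two events is

    \[ \Delta s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2 \]

A light-like path is one with \Delta s^2 = 0 on every segment (the light covers exactly c\,\Delta t of distance), and since proper time satisfies \Delta\tau^2 = \Delta s^2 / c^2, none elapses along it, which is the fact referred to above.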


Wait, shouldn't no proper time pass from the photon's perspective, not from the observer's perspective?


We developed middleware that helps capture and process all these image frames in real time, running on blade clusters and GPUs. Think of it as Hadoop for real-time image processing. Our original application was semiconductor inspection machines with large arrays of camera sensors.

For more info contact:

http://CLASTR.com

Email: info AT CLASTR DOT com


Interesting. The process is basically like a convolution of a flat wavefront step function over a 3D scene.

Now imagine this: instead of recording images, the camera emits them in reverse sequence, effectively making the surrounding environment send concentrated coherent impulses back to the point where the laser initially was.

Pew-pew.


Sounds like real-world ray casting or ray tracing.


The problem with that idea of course is that the camera is only catching a very small amount of the light scattered by the scene.


>"Because all of our pulses look the same"

Don't misunderstand me, this is really impressive and potentially has some important applications...

...but, because the final product showing the plastic bottle is a series of similar scenes, the video seems more akin to a cel or stop-motion animation than to what is typically considered high-speed photography, which captures a single event and expands time rather than compressing it. Ten seconds of traditional high-speed film contains images captured in a fraction of a second. In this video, ten seconds was captured over the course of many minutes.

In other words, there is a significant degree of editorial decision making regarding the manner in which events are depicted - even if that decision making is now handled by software.

But cool nonetheless.


I would love to see two parallel mirrors in a scene. I would expect the objects between the mirrors to stay illuminated longer than the other objects and that they would fade out gradually.


"you can see photons moving through space..." I think of this statement and cant help but think it is fundamentally flawed. One cant see the photon moving through space. You might be able to see an electron moving through space because e can emit a photon. But photons cannot emit photons while moving. In essence one only sees the photon when it hits the detector. Physics majors feel free to correct if i am mistaken.


I think they're showing less than 1 trillion fps. Light travels 0.3 mm in a trillionth of a second, so when they play it at 30 fps it should be 9 mm / second. But it passes an apple in about 2 seconds, suggesting more like 30 mm / second.
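
Checking that arithmetic (the capture and playback rates are taken from the comment; the apple speed is the commenter's estimate):

    C = 3.0e8              # speed of light, m/s
    capture_fps = 1.0e12
    playback_fps = 30.0

    apparent = C * playback_fps / capture_fps    # 9e-3 m/s = 9 mm/s, as stated

    # Working backwards from the observed ~30 mm/s past the apple:
    observed = 30e-3                             # m/s
    implied_fps = C * playback_fps / observed    # ~3e11 fps, a third of a trillion
    print(f"expected {apparent * 1e3:.0f} mm/s; observed implies {implied_fps:.1e} fps")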


This reminds me of Searle's relativistic ray tracer: http://www.anu.edu.au/Physics/Searle

In particular, check out the "Flash" example in the downloads section.


A more elaborate talk about this work by one of the authors appears at: http://www.youtube.com/watch?v=aKu20y1f_RU


They should use this to demystify the double-slit experiment.


That would be a lot more interesting to see than an apple or bottle of water.


Does this even make sense?

I am not doubting that they've made a superb machine, but which photons does the camera catch in order to see photons that travel parallel to the camera?


Yes, it makes sense. What do you mean, which photons? The photons that the camera picks up are the photons from the laser pulse, after they have reflected off the scene. Pause it at 2:20 and look at the shape of the light as it moves.

Imagine that you are in super slow-mo mode for a second. You hold up your hand and shine a flashlight on it. Since everything is in super slow-mo, you can watch as the photons strike your hand first, and it lights up. The photons that don't strike your hand continue on towards the wall next to you, and then some time after your hand lit up, the wall lights up, with a shadow of your hand.


I think the parent's point was that we can't see or image anything until those photons enter our sensor.

> you can watch as the photons strike your hand first, and it lights up. The photons that don't strike your hand continue on towards the wall next to

This analogy doesn't work. In order to watch, some photons have to enter our eyes, but they haven't got there yet, as they're just now interacting with our hand, etc. That is, until sufficient photons enter our eyes (sensors), there's nothing to watch.

This points to the fact that the technique is not imaging the whole scene at once, but doing some kind of reconstruction on an assumed static scene.


Are the frames viewed consecutively or are they combined?


The MIT News Office article has more details than the PR video and a link to the paper: http://web.mit.edu/newsoffice/2011/trillion-fps-camera-1213....

The actual imaging device is just a long line of photosensors. The camera aperture uses a varying electric field to deflect photons that arrive later to sensors further down the line, producing an image that is effectively 2D: 1D of space and 1D of time. By repeating the scene and slowly scanning the camera's mirror, a composite video is built that shows the diffusion of a picosecond laser pulse.
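
A toy model of that time-to-position mapping (per the paper quoted downthread, the real tube deflects electrons from a photocathode rather than the photons themselves; the names and sizes here are mine):

    import numpy as np

    N_TIME_BINS, N_PIXELS = 512, 672
    WINDOW_PS = 1000.0   # ~1 ns total sweep, per the paper

    def streak_image(arrivals_ps, positions_px):
        """Bin photon events: arrival time -> row, position along the slit -> column."""
        t = np.asarray(arrivals_ps, dtype=float)
        rows = np.clip((t / WINDOW_PS * N_TIME_BINS).astype(int), 0, N_TIME_BINS - 1)
        img = np.zeros((N_TIME_BINS, N_PIXELS))
        np.add.at(img, (rows, np.asarray(positions_px)), 1.0)  # accumulate counts
        return img   # one such 1D-space x 1D-time image per laser pulse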


> The camera aperture uses a varying electric field to deflect photons

blink

They're using photon-photon scattering? Wow. I thought in order to pull that off you needed very high powered lasers.

Edit: "But while both systems use ultrashort bursts of laser light"

Depending on how they define "ultrashort" this might be the key.


The press release (edit: and I) skipped a step: "A portion of the laser beam is split off with a glass plate and directed to a photodetector that provides the synchronization signal for the camera. The camera images the incoming light on a slit and this slit on a photocathode where photons excite electrons. These electrons are accelerated in a vacuum tube and deflected by an electric field perpendicular to their direction of motion and perpendicular to the direction of the slit. The signal generating this field is obtained by filtering and amplifying the synchronization signal from the laser. The image is thus spread out over time in a “streak” that hits a micro channel plate at the far end of the vacuum tube that further amplifies the electrons and directs them to a phosphor screen where they are converted back to photons. The screen is then imaged on a conventional low noise CCD camera. The complete time interval captured in this way spans about 1 nanosecond." [Picosecond Camera for Time-of-Flight Imaging]

So a photocathode generates electrons, which are deflected.


Thank you. That makes a lot more sense.


This will be a very interesting technology once it's perfected. It seems to me to be like the LED when it was first invented: a solution in search of a problem. And that can create wonderful innovation.



In particular, I wonder what applications the TSA might find for this technology...



wonderful camera


It sounds like they are taking different phases of the light spreading, and it just looks like the light is traveling.

While cool, it's not a true 1 trillion fps camera.

Here's true 1 million fps footage:

http://www.youtube.com/watch?v=QfDoQwIAaXg


I think you described it correctly, but in all honesty, what's the difference? They're able to capture, very precisely, a trillionth of a second worth of a line of reflected light. They then repeat it until they get a video. And, since a line isn't very interesting, they repeat it in a bunch of lines parallel to each other to get a rectangular video.

Sounds a lot like a CRT. Or some of the oldest video capture techniques (scanning line by line into a photodiode, and then using that signal to vary light on the output following the same scanning pattern).

The only thing that this camera is missing from other high-speed cameras is 1000 more lines in parallel, and a faster cool-down between frames. And many high-speed cameras in the past got around the cool-down by using multiple cameras, each taking a different slice of the action, and stitching them back together afterwards, but they're still considered high-speed cameras.
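
A sketch of that frame-interleaving trick (toy framing, mine): N cameras at rate r, each triggered 1/(N*r) later than the last, give an effective rate of N*r once the streams are merged in timestamp order.

    def interleave(streams):
        """Merge per-camera [(timestamp, frame), ...] lists into one timeline."""
        return sorted((item for s in streams for item in s), key=lambda it: it[0])

    n_cams, rate = 4, 1000.0          # 4 cameras at 1 kfps -> 4 kfps effective
    period = 1.0 / rate
    streams = [
        [(cam * period / n_cams + k * period, f"cam{cam}-frame{k}") for k in range(3)]
        for cam in range(n_cams)
    ]
    merged = interleave(streams)      # frames now 1/(n_cams * rate) apart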

This could be 100% identical if they built a million of them and took a video in one shot, but it isn't currently feasible or cost-effective. And, since their 'bullet' is non-destructive, there's no reason to not simply repeat it with a cheaper technique.


That only works for static scenes with a predictable light source. So it wouldn't work with a light source that you don't control. Also it might or might not work for e.g. medical imaging because a person can't sit perfectly still. So for those cases you do need the million camera version.


One of the big benefits of better imaging is serendipity -- being able to see things you didn't expect to see or have to plan to see.

So, you could assume that the scene is static, but that's never strictly true in a real scene.

Look up Debevec's Light-stage work on Digital Emily. One of the unanticipated benefits of fast, high-detail face capture is that we get to see how skin really deforms. This after decades of papers on how skin is supposed to deform.


Well, before reading the article I was struggling to imagine how to film light - after all, either a photon hits the sensor or it doesn't. How would you photograph a photon in mid-flight? You can't, because if it is in mid-flight, it is obviously not on your sensor. So this (presumably correct) explanation is a relief for me.


It did sound like that from the description and some of the earlier footage in the video, but no, they're still capturing reflected light :)


How are they able to repeat the scene exactly each time?


In a similar vein, this striking anti-gun commercial by a local radio station in London: http://www.youtube.com/watch?v=DKAnmqdfGQY


It's even possible (although of course difficult) to make a film camera that operates at 1 million frames per second. They were mainly used to analyze explosions, I think.

This new camera has limitations of course, but it has a time resolution literally 1,000,000 times the one in your video.


I'm still not sure this is how it works. They are talking about 500 sensors. If those sensors can capture at 10,000 fps, you might get a 'movie' at 5 million fps. But then again, they also use the bigger mirror to scan the scene line by line. Maybe it's a combination of both?



