Pulling Out All the (F)Stops

I remember when I was about five my parents took me to the Denver Natural History Museum, which was doing an exhibit, aimed at kids, based on superheroes. One of the set-ups was a screen which showed how quickly various animals– and superheroes, of course– could run. When you pushed "start," you could run alongside them and see how you stacked up. The machine also had a setting for "light," and after trying it out, I remember being a little skeptical that anything was moving at all. Sure, it was one thing to realize I couldn't keep up with Flash Gordon, or even a plain old cheetah. But was the light really traveling? Or was it just showing up instantaneously at the finish as soon as I pressed the button? I was so astonished by the thought that light actually took time to travel that when I got home, I tried to repeat the 'experiment' to get my head around it: I stood by the front door with a flashlight, aimed the flashlight at the opposite wall, then turned it on, dropped it, and ran as quickly as I could to see if I could ever "beat" it to the end of the hallway. Sorry, Mom and Dad: I'm pretty sure I broke at least one of our flashlights that way.

As you can imagine, I never did get my "proof" that photons have to travel at a finite speed just like everything else in the universe. As I grew up, and light became less mysterious, the idea became less difficult to grasp. Still, some small part of me (the part that still likes superheroes, I assume) has never quite come to terms with the idea as anything other than a mathematical abstraction. Or, I should say, HADN'T come to terms with it. Until today.

What the video above shows is the result of a dramatic experiment at MIT, which represents the ultimate in high-speed film. High-speed camera technology has been all the rage for a while now– there's even a show on the Discovery Channel called "Time Warp" dedicated exclusively to it. The idea is that frames are captured so rapidly that when they're played back with a "normal" gap between them, the result is ultra slow-motion.

These super-slow videos are fascinating, and can teach you a lot of cool things, like how a drum set produces sound, or why it takes a dog so long to get a drink of water. But this MIT experiment takes things to a whole new level. In the video above, each frame represents roughly one trillionth of a second (a "picosecond"). That's far less time than it takes your computer to figure out that 2+2 is 4, and, more importantly, it's about how long it takes a beam of light to travel a third of a millimeter. Meaning that when the frames they capture are played back at "normal speed," you can watch light itself moving across the room.*

Keep this in mind as you watch the video: you're not watching light gradually being turned on, and you're not watching a flashlight being swept across the scene– you're actually watching the arrival of photons.** That's why the front of the apple lights up first, and then the sides, and then the back wall. Only a single flash of light was released at the apple, just as if someone had set off the flash on one of those cheap disposable cameras. But with this high-speed imaging, we can see the difference between how long it takes the photons at the center of the flash to reach the apple and how long it takes the photons at the edge to move past it. You can see the effect perhaps even more dramatically here, as a light pulse is captured moving through a soda bottle whose label has been blanked out for what are presumably hilarious legal reasons (no need to watch the full three minutes of the video; it's all the same after the first couple of runs). The key is to remember that you're not seeing a moving light source. You're watching the photons moving away from a stationary source.
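If you'd like to sanity-check those numbers, here's a quick back-of-the-envelope calculation in Python. The figures are my own round-number arithmetic, not anything from the MIT group: it just works out how far light moves during a single frame, and how drastically playback at an ordinary frame rate stretches time.

    # Back-of-the-envelope numbers for the "watching light move" video.
    # These are illustrative round numbers, not the exact camera parameters.

    c = 3.0e8                 # speed of light, meters per second
    frame_time = 1.0e-12      # assume roughly one picosecond per frame
    playback_fps = 30         # ordinary video playback rate

    # Distance light covers during a single frame, in millimeters
    distance_per_frame_mm = c * frame_time * 1000
    print(f"Light moves about {distance_per_frame_mm:.2f} mm per frame")   # ~0.30 mm

    # Slow-motion factor: one second of playback shows this much real time
    real_time_per_playback_second = playback_fps * frame_time
    slowdown = 1.0 / real_time_per_playback_second
    print(f"Playback stretches time by a factor of roughly {slowdown:.0e}")  # ~3e10

In other words, a second of video covers a few hundredths of a nanosecond of real time, which is exactly why the pulse appears to crawl.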

 

Their technique for doing this is in a certain sense astonishingly straightforward in concept, if still maddeningly difficult to pull off in practice. As you can imagine, it would be impossible to have a single camera, even a fast digital one, open and close its shutter fast enough to take one picture every picosecond. So the scientists took a cue from the folks who shot "The Matrix," in which time occasionally slows down and allows the camera to pan around an actor as they move in super slow motion. Of course, no camera can be "panned" that fast. Or at least, no single camera can. So instead, hundreds of cameras are laid out in a circle around the actor, each fires one after the other, and the pictures are then put together to give the desired effect.
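Just to make the analogy concrete, here's a toy sketch in Python of the bookkeeping involved. The numbers are made up for illustration and have nothing to do with the actual "Matrix" rig or MIT's electronics: each camera gets a spot on the circle and a trigger time a tiny step after its neighbor, and playing the frames back in trigger order is what stretches a fraction of a second into several seconds of video.

    # Toy model of a "bullet time" rig: many cameras, each triggered slightly
    # after its neighbor, so the assembled frames form a slow-motion pan.
    # All numbers here are invented for illustration.

    num_cameras = 120            # cameras arranged in a circle around the subject
    trigger_step = 0.001         # seconds between successive camera triggers
    playback_fps = 24            # playback rate of the assembled sequence

    # Each camera's position around the circle and the moment it fires
    shots = [
        {"camera": i, "angle_deg": i * 360 / num_cameras, "fire_at": i * trigger_step}
        for i in range(num_cameras)
    ]

    captured = shots[-1]["fire_at"]        # real time spanned by the capture
    watched = num_cameras / playback_fps   # time it takes to watch the result
    print(f"{captured:.3f} s of action stretched over {watched:.1f} s of video "
          f"(a slowdown of about {watched / captured:.0f}x)")

The pico-camera plays the same trick, just with trigger steps measured in picoseconds instead of milliseconds.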


Keanu Reeves, surrounded by cameras which can capture him stoically dodging a bullet. Now, scientists could capture him dodging a photon.

The pico-camera in the video above works the same way: inside are millions of sensors, each basically a camera capable of taking a picture in its own right. Each one takes a picture in sequence until the entire motion of the light pulse has been captured. In addition to this, another deceptively simple trick is used: because the tiny cameras have to be so small, they cannot photograph the whole scene, only an incredibly thin "slice" of it. So instead of capturing the whole scene in one go, the light pulse is repeated identically over and over, with the cameras first recording a video of the very bottom slice, then the next slice up, and so on. Ultimately, a computer puts the slices together into one composite video. I suppose in a sense this is a "cheat"– it means you're not really watching a video of something that actually happened. But because each pulse is damn near identical (something which by itself is no mean feat!) the combined effect is still no DIFFERENT from what you'd get if you were somehow able to capture it all at once. Think of having a picture taken of your entire graduating class: in all likelihood, several cameras actually took different pictures of different segments of the bleachers, and then they were stitched together later. That doesn't mean the photo your parents overpaid for, which you lost somewhere in your room, isn't "real."
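If that's hard to picture, here's a tiny toy version of the slice-and-stitch idea in Python. This is entirely my own illustration of the concept, not the group's actual acquisition code, and the "scene" is just a fake expanding ring of light: the same pulse is replayed once per row, each repetition records only a single row of every frame, and the rows are then stacked into complete frames.

    import numpy as np

    # Toy slice-and-stitch: re-light the scene with an identical pulse once per
    # row, record just that row of every frame, then stack rows into full frames.
    # scene_at(t) is a stand-in for "what the scene looks like at time t".

    HEIGHT, WIDTH, FRAMES = 64, 64, 100

    def scene_at(t):
        """Fake scene: a circular wavefront expanding from the center."""
        y, x = np.mgrid[0:HEIGHT, 0:WIDTH]
        r = np.hypot(y - HEIGHT / 2, x - WIDTH / 2)
        return np.exp(-((r - t) ** 2) / 4.0)   # bright ring of radius t

    video = np.zeros((FRAMES, HEIGHT, WIDTH))

    # One repetition of the (identical) light pulse per row of the image.
    for row in range(HEIGHT):
        for frame, t in enumerate(np.linspace(0, WIDTH / 2, FRAMES)):
            video[frame, row, :] = scene_at(t)[row, :]   # record only this row

    # 'video' now holds the composite: no single pulse was ever seen whole,
    # but because every pulse is the same, the stitched result looks like one.
    print(video.shape)   # (100, 64, 64)

The point of the toy is just the bookkeeping: no single repetition ever sees a whole frame, but because every repetition is identical, stacking the rows gives the same result as if one camera had somehow caught everything at once.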

So that's all there is to it: lots of little "slice cameras" take pictures one after another to make a video of a slice, and then the process is repeated for slice after slice until the entire event can be reconstructed. Just follow those steps and you, too, can build your own version of the pico-camera, assuming you can scare up a few million in startup funds. Now that might seem like a lot to pay, but bear in mind, as the researchers point out, this kind of technology is not just for creating cool videos: it could be used for a number of practical purposes, particularly things like "light ultrasound," in which the insides of objects (or human bodies!) could be imaged based not just on how sound waves scatter inside them, but on how light waves scatter. The result would be a dramatic increase in precision, which would help with everything from identifying cancer cells for removal with laser scalpels to making sure that not every baby's first prenatal sonogram looks like the same indistinguishable black-and-white fuzz.

And as if this weren't enough, remember that it may also help some curious-but-mischievous children come to terms with the idea of light as an actual, physically propagating wave. The cost savings in the flashlight department alone should be enough to justify the investment.

 

UPDATE: Here’s a video from the inventors explaining the same information I gave you in the post, but probably better and with visual aids. Bummer for me, good news for you!

 

__

* For the sticklers out there: of course we're not actually "seeing" the photons move across the room– or maybe we are, depending on what you mean by "seeing." We only ever get to "watch" something in the sense that we perceive photons which have bounced off the object and come back to our eyes. So you can never "see" a photon traveling through space towards an object; you can only see the photons which have traveled to the object, bounced off, and eventually made it to you. Likewise, in the video you aren't really watching the photons reach the apple; you're watching the photons which had reached the apple and have now bounced back to the camera. But since we are always restricted not to "seeing" an object but only to seeing photons scattered off it, maybe this is what "seeing" means anyway, and maybe I didn't need to make the distinction ;-)

**Same clarification as above.


About Colin West
Colin West is a graduate student in quantum information theory, working at the Yang Institute for Theoretical Physics at Stony Brook University. Originally from Colorado (where he attended college), his interests outside of physics include politics, paper-folding, puzzles, playing-cards, and apparently, plosives.

2 Responses to Pulling Out All the (F)Stops

  1. Paul West says:

    This is really neat! I feel like I should figure out whether even a trillionth of a second per frame should be enough time resolution to see a bunch of photons traveling, but I'm still recovering from my physical biochemistry final yesterday and don't even feel inclined to verify that 2+2=4. All this reminds me of how I felt when I first read about IBM researchers getting images of individual xenon atoms just by dragging a fine stylus over them. It seemed too straightforward to be real, especially after growing up with the idea firmly implanted that nobody could ever "see" an atom.
