MIT media lab’s $500 nano-camera operates at the speed of light

 

 

 

last year, designboom featured a half-a-million-dollar femtophotographic trillion-frame-per-second camera capable of capturing a single pulse of light as it traveled through a scene. recently, students at the MIT media lab have developed a $500 three-dimensional alternative that can also operate at the speed of light.

 

the inexpensive camera is based on ‘time of flight’ technology, in which the location of an object is calculated from how long it takes a light signal to reflect off its surface and return to the sensor. unlike devices such as microsoft’s second-generation kinect, the system can capture translucent objects, such as a glass vase, in 3D. ‘it’s going to go from expensive to cheap thanks to video games, and that should shorten the time before people start wondering what it can be used for,’ explains james davis, an associate professor at the university of california at santa cruz. in the future, the ‘nano-photographic’ camera technology could be applied to medical imaging, collision-avoidance detectors for cars, and interactive gaming.

 

 

nanophotography
video courtesy cameraculturegroup

 

 

‘using the current state of the art, such as the new kinect, you cannot capture translucent objects in 3D,’ says researcher achuta kadambi. ‘that is because the light that bounces off the transparent object and the background smear into one pixel on the camera. using our technique you can generate 3D models of translucent or near-transparent objects.’ in a conventional time-of-flight camera, a light signal is fired at a scene, where it bounces off an object and returns to strike a pixel on the sensor. since the speed of light is known, it is simple for the camera to calculate the distance the signal has traveled and, from that, the depth of the object that reflected it.
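the single-bounce arithmetic described above can be sketched in a few lines — a minimal illustration of the principle, not the team’s code, with the function name and example timing invented for demonstration:

```python
# a minimal sketch of the single-bounce time-of-flight calculation:
# depth is half the round-trip distance the light pulse travels.
C = 299_792_458.0  # speed of light in metres per second

def tof_depth(round_trip_time_s):
    """depth of the reflecting surface: half the round-trip distance."""
    return C * round_trip_time_s / 2.0

# a pulse that returns after ~6.67 nanoseconds implies a surface ~1 m away
depth_m = tof_depth(6.67e-9)
```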

 

because the system is designed to measure the range of a single reflected bounce of light, it suffers from errors caused by ‘multipath interference’. before the unit is ready for commercial application, it will need to handle changing environmental conditions, semitransparent surfaces, edges, and motion. each of these creates multiple reflections that mix with the original signal and return to the camera, making it difficult to determine which is the correct measurement.
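a hypothetical illustration of the problem (depths and amplitudes invented): when reflections from a glass surface and the wall behind it mix in one pixel, a camera that assumes a single return reports a depth that matches neither surface.

```python
# two reflections — glass at 1.0 m, wall behind it at 2.0 m — land on the
# same pixel; a single-return reading blends them into a depth of 1.7 m.
C = 299_792_458.0  # speed of light in m/s

def round_trip(depth_m):
    return 2.0 * depth_m / C  # seconds for light to reach the surface and back

# (amplitude, arrival time) of the two reflections hitting the same pixel
returns = [(0.3, round_trip(1.0)), (0.7, round_trip(2.0))]
mixed_time = sum(a * t for a, t in returns) / sum(a for a, _ in returns)
naive_depth = C * mixed_time / 2.0  # 1.7 m: neither the glass nor the wall
```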

 

 

MIT media lab nano-camera can operate at the speed of light

left to right: MIT students ayush bhandari, refael whyte and achuta kadambi
image © bryce vickmark

 

 

‘the new device uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has traveled,’ says ramesh raskar, the associate professor who also worked on the trillion-frame-per-second camera. ‘we use a new method that allows us to encode information in time, so when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal.’
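a hedged sketch of the telecom idea raskar describes, not the team’s published algorithm: encode the emitted light as a pseudorandom code in time, then use correlation — a matched filter, standard in communications — to pick apart the delays of two light paths mixed in one received signal. all delays and amplitudes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
code = (2 * rng.integers(0, 2, 64) - 1).astype(float)  # +/-1 binary code

# the received waveform is two delayed, attenuated copies of the code,
# e.g. a direct bounce (delay 20 samples) and a multipath bounce (delay 75)
received = np.zeros(256)
for amplitude, delay in [(1.0, 20), (0.8, 75)]:
    received[delay:delay + code.size] += amplitude * code

# correlating against the known code concentrates each path's energy into a
# peak at its own delay, separating the returns mixed in one signal
corr = np.correlate(received, code, mode="valid")
strongest_delay = int(np.argmax(corr))  # recovers the direct path's delay, 20
```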

 

‘conventional cameras see an average of the light arriving at the sensor, much like the human eye,’ adds davis. ‘in contrast, the researchers in raskar’s laboratory are investigating what happens when they take a camera fast enough to see that some light makes it from the ‘flash’ back to the camera sooner, and apply sophisticated computation to the resulting data. the basic technology needed for the team’s approach is very similar to that already being shipped in devices such as the new version of kinect.’

 

 

MIT media lab nano-camera can operate at the speed of light

figure 1
image courtesy MIT media lab

 

 

using our custom time-of-flight camera, we are able to visualize light sweeping over the scene. here, multipath effects can be seen in the glass vase. in the early time slots, bright spots are formed by the specularities on the glass. light then sweeps over the other objects in the scene and finally hits the back wall, where it can also be seen through the glass vase (8 ns). light leaves, first from the specularities (8–10 ns), then from the stuffed animals. the time slots correspond to the true geometry of the scene (light travels roughly one foot in a nanosecond).

 

 

MIT media lab nano-camera can operate at the speed of light

figure 2
image courtesy MIT media lab

 

 

recovering the depth of transparent objects is a hard problem in general and has yet to be solved for time-of-flight cameras. a glass unicorn is placed in a scene with a wall behind it (left). a regular time-of-flight camera fails to resolve the correct depth of the unicorn (center-left). using our multipath algorithm, we are able to obtain the depth of the foreground (center-right) or of the background (right).

 

 

nanophotography color — glass vase
video courtesy achuta kadambi