MIT’s computer science and artificial intelligence laboratory has developed technology called ‘interactive dynamic video’ (IDV) that allows viewers to reach in and virtually ‘touch’ objects in videos. using traditional cameras and algorithms, ‘IDV’ looks at the tiny, almost invisible vibrations of an object to create video simulations that users can interact with.

 

interactive dynamic video demonstration from the MIT computer science and artificial intelligence laboratory
video courtesy of abe davis


a lot can be learned about objects by manipulating them: poking, pushing, prodding, and then seeing how they react. this obviously cannot be done with videos — just try touching that cat video on your phone and see what happens. but is it crazy to think that we could take that video and simulate how the cat moves, without ever interacting with the real one?

 

‘this technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space’, says MIT PhD student abe davis, who will be publishing the work this month. ‘by making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos.’


davis says that ‘IDV’ has many possible uses, from filmmakers producing new kinds of visual effects to architects determining whether buildings are structurally sound. for example, while the popular pokémon go app can drop virtual characters into real-world environments, ‘IDV’ goes one step further by enabling virtual objects to actually interact with their environments in specific, realistic ways, like bouncing off the leaves of a nearby bush.


the most common way to simulate objects’ motions is to build a 3D model. unfortunately, 3D modeling is expensive, and can be almost impossible for many objects. while algorithms exist to track motions in video and magnify them, none can reliably simulate objects in unknown environments. davis’ work shows that even five seconds of video can contain enough information to create realistic simulations.

 

to simulate the objects, the team analyzed video clips to find ‘vibration modes’ at different frequencies, each of which represents a distinct way the object can move. by identifying the shapes of these modes, the researchers can begin to predict how the objects will move in new situations.
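
the basic idea can be sketched in a few lines of code: if each pixel’s brightness over time is treated as a signal, frequencies at which many pixels oscillate together are candidate vibration-mode frequencies. the snippet below is only a simplified illustration of that idea (davis’ published method works on local motion signals rather than raw pixel values), and it assumes the clip is already loaded as a numpy array of grayscale frames.

```python
# sketch: pick out candidate vibration-mode frequencies from a short video clip.
# simplified illustration only -- not the CSAIL pipeline, which analyzes local
# motion rather than raw pixel intensities as done here.
import numpy as np

def candidate_mode_frequencies(frames, fps, n_modes=5):
    """frames: array of shape (T, H, W), grayscale; returns the top frequencies in Hz."""
    frames = frames.astype(np.float64)
    # remove the static background so only temporal variation remains
    motion = frames - frames.mean(axis=0, keepdims=True)
    # temporal FFT at every pixel, then average the power over the whole image
    spectrum = np.fft.rfft(motion, axis=0)
    power = (np.abs(spectrum) ** 2).mean(axis=(1, 2))
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    power[0] = 0.0                      # ignore the DC (non-vibrating) component
    top = np.argsort(power)[::-1][:n_modes]
    return freqs[top]

# a rough proxy for the mode *shapes* would be the per-pixel coefficients
# spectrum[k] at those frequencies: they describe how strongly, and with what
# phase, each pixel participates in the motion at freqs[k].
```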


davis used ‘IDV’ on videos of a variety of objects, including a bridge, a playground, and a ukulele. with a few mouse-clicks, he shows that he can push and pull the image, bending and moving it in different directions. he even demonstrates how he can make his own hand appear to ‘use the force’ and telekinetically control the leaves of a bush.

‘if you want to model how an object behaves and responds to different forces, we show that you can observe the object respond to existing forces and assume that it will respond in a consistent way to new ones’, says davis, who also found that the technique even works on some existing videos on youtube.
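
in other words, each recovered mode can be treated roughly like a damped spring: project a virtual ‘poke’ onto the mode shapes, then let each mode ring down at its own frequency. the toy sketch below illustrates that standard modal-superposition picture with assumed, hypothetical frequencies, damping ratios and mode shapes; it is not the researchers’ actual formulation.

```python
# toy sketch of 'respond consistently to new forces': drive each recovered mode
# with a user force and superpose the ringing of all modes.
# the frequencies, damping ratios and mode shapes are assumed inputs here, not
# values produced by the method described in the article.
import numpy as np

def simulate_modal_response(freqs_hz, dampings, mode_shapes, force, fps, steps):
    """
    freqs_hz:    (M,) modal frequencies in Hz
    dampings:    (M,) damping ratios
    mode_shapes: (M, H, W) real-valued per-pixel mode shapes
    force:       (H, W) spatial 'poke' applied at t = 0
    returns a displacement field of shape (steps, H, W)
    """
    dt = 1.0 / fps
    # project the poke onto each mode to get its initial excitation
    amp = (mode_shapes * force).sum(axis=(1, 2))
    t = np.arange(steps) * dt
    out = np.zeros((steps,) + mode_shapes.shape[1:])
    for m in range(len(freqs_hz)):
        w = 2.0 * np.pi * freqs_hz[m]
        # each mode rings down as a damped sinusoid
        q = amp[m] * np.exp(-dampings[m] * w * t) * np.sin(w * t)
        out += q[:, None, None] * mode_shapes[m][None, :, :]
    return out

# example usage with made-up values:
# shapes = np.random.randn(3, 64, 64)
# poke = np.zeros((64, 64)); poke[32, 32] = 1.0     # 'click' on one pixel
# frames = simulate_modal_response([2.0, 5.5, 9.0], [0.05, 0.05, 0.05],
#                                  shapes, poke, fps=30, steps=90)
```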


researchers say that the tool has many potential uses in engineering, entertainment and architecture, to name but a few. for example, in movies it can be difficult and expensive to get CGI characters to realistically interact with their real-world environments. doing so requires filmmakers to use green-screens and to create detailed models of virtual objects that can be synchronized with live performances.

 

with ‘IDV’, a videographer could take video of an existing real-world environment and make some minor edits like masking, matting, and shading to achieve a similar effect in much less time — and at a fraction of the cost. engineers could also use the system to simulate how an old building or bridge would respond to strong winds or an earthquake.

 

‘the ability to put real-world objects into virtual models is valuable for not just the obvious entertainment applications, but also for being able to test the stress in a safe virtual environment, in a way that doesn’t harm the real-world counterpart’, commented davis.