‘recompose’ input/output actuated surface, by the MIT media lab

‘recompose’, the work of matthew blackshaw, anthony devincenzi, dávid lakatos, daniel leithinger, and hiroshi ishii at america’s MIT media lab, integrates user input and visual output into a single device. responding to both physical touch and gestural interaction, the system offers new ways of interacting in real time with three-dimensional representations.

video demonstration of ‘recompose’ (video courtesy of fastcodesign)

‘recompose’ consists of 120 physical tiles mounted on small rods that rise or sink in response to user input, much like the keys of a typewriter. in addition to responding to direct presses, however, the keys of ‘recompose’ also react to gestural input: hovering one’s fingers over the keys or making a pulling-up or pushing-down gesture produces the same effect.

the device currently recognizes five kinds of user behaviour: ‘selection’, in which the chosen area is marked by light projected onto the device’s surface; ‘actuation’, in which a gesture raises the selected keys; and ‘translation’, ‘rotation’, and ‘scale’ gestures, which each transform the selected region accordingly.
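as a rough illustration of how such a gesture vocabulary could drive the surface, the sketch below maps each gesture onto a grid of pin heights. the 12 x 10 layout, the millimetre ranges, and every function name are assumptions made for this example, not details taken from the recompose implementation.

```python
import numpy as np

ROWS, COLS = 12, 10                              # assumed layout of the 120 tiles
heights = np.zeros((ROWS, COLS))                 # current pin heights, in mm
selected = np.zeros((ROWS, COLS), dtype=bool)    # mask set by the 'selection' gesture

def actuate(delta_mm):
    """'actuation': raise (or lower) every selected pin by delta_mm."""
    heights[selected] = np.clip(heights[selected] + delta_mm, 0.0, 100.0)

def translate(rows, cols):
    """'translation': shift the whole height field across the grid
    (a simplification of moving only the selected region)."""
    global heights
    heights = np.roll(np.roll(heights, rows, axis=0), cols, axis=1)

def scale(factor):
    """'scale': exaggerate or flatten the relief within the selection."""
    heights[selected] = np.clip(heights[selected] * factor, 0.0, 100.0)

def rotate():
    """'rotation': half-turn of the height field and its selection mask
    (a half-turn keeps the 12 x 10 shape intact)."""
    global heights, selected
    heights = np.rot90(heights, 2)
    selected = np.rot90(selected, 2)
```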

the green light cast from a mounted projector indicates the selected area, while the user’s hand gesture instructs the device to raise these keys

not merely a user interface, ‘recompose’ can also be used for visualization, such as the three-dimensional display of graphs, or the representation of the relative ‘pressure’ of user gestural input.
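a toy sketch of this output side: the same pin grid can display data by normalizing a two-dimensional array into pin heights. the resolution, the height range, and the sample surface below are invented for illustration.

```python
import numpy as np

ROWS, COLS = 12, 10          # assumed pin resolution, matching the sketch above

def render(data, max_height_mm=100.0):
    """normalize a 2-d data array and map it onto pin heights."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    normalized = (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)
    return normalized * max_height_mm        # one target height per pin

# e.g. a coarse three-dimensional 'graph' of z = sin(y) * cos(x)
x = np.linspace(0.0, np.pi, COLS)
y = np.linspace(0.0, np.pi, ROWS)
pin_heights = render(np.sin(y)[:, None] * np.cos(x)[None, :])
```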

because it is at once an input and an output device, ‘recompose’ offers entirely new modes of interactive visualization, such as the ability to reshape the surface in three dimensions through gestures alone. although the current model can only scale, rotate, and otherwise manipulate simple shapes, one can easily imagine how a more finely detailed surface might provide three-dimensional interactive visualizations of modeling projects, CAD designs, or other visual and infographic data.

‘recompose’ responds to both tactile and gestural input

the device is based on team member daniel leithinger’s ‘relief’ table, a similar input/output device for tactile interaction. to this basic model, ‘recompose’ adds a depth camera and projector mounted above the table. with input from the camera, computer vision detects user interaction, determines the position, orientation, and depth of the hands and fingers, and relays the desired changes to the individual keys.
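the camera-to-keys loop described above might be sketched as follows; every threshold, function name, and heuristic here is an assumption for illustration, since the article does not detail recompose’s actual hand-tracking or gesture-recognition code.

```python
import numpy as np

def segment_hands(depth_frame, table_depth_mm=900.0, band_mm=250.0):
    """keep only pixels lying between the tabletop and ~25 cm above it."""
    return (depth_frame < table_depth_mm) & (depth_frame > table_depth_mm - band_mm)

def classify_gesture(depth_frame, hand_mask, prev_depth, prev_mask):
    """toy classifier: compare mean hand depth and centroid across frames."""
    if not hand_mask.any() or not prev_mask.any():
        return "idle"
    if abs(depth_frame[hand_mask].mean() - prev_depth[prev_mask].mean()) > 20.0:
        return "actuation"                     # hand moved toward or away from the surface
    cy, cx = np.argwhere(hand_mask).mean(axis=0)
    py, px = np.argwhere(prev_mask).mean(axis=0)
    if np.hypot(cy - py, cx - px) > 10.0:
        return "translation"                   # hand drifted laterally
    return "idle"

def update_pins(gesture, heights, selected):
    """relay the recognized gesture to the actuated keys."""
    if gesture == "actuation":
        heights[selected] += 5.0               # raise the selected keys by 5 mm
    elif gesture == "translation":
        heights = np.roll(heights, 1, axis=1)  # shift the relief one column over
    return heights
```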

concept diagram of gestural interactions, clockwise from top left: selection (2 images), actuation, translation, scaling (both images), and rotation (both images)

the view of input as modeled through computer vision

via fastcodesign