every day, almost 300 million people struggle to perform daily tasks, from crossing the road to reading a book, due to blindness and visual impairment. one day, while coming home from university, saverio and luca met a blind man who asked for help getting to the bus stop. he told them about the problems he faces every day and how he tries to deal with them: on that day, he was moving along the corners of the buildings because he knew that in that way he could reach the pedestrian crossing, which was otherwise undetectable (no sounds, signals, or road signs). saverio and luca were both working in computer vision for robotics, and on that day they realized how that same technology could be applied to improve the wellbeing of people with visual impairment, by designing a smart, wearable personal assistant. it was then that ‘horus’ came to life.

horus wearable blind designboom

‘horus’ observes, understands and describes the environment to the person, providing useful information (text reading, face recognition, and object recognition) in a discreet way and with the right timing. because it uses bone conduction, the person’s hearing is in no way obstructed (as opposed to using earphones), and the device remains audible even in noisy situations. ‘horus’ is composed of two parts: the first, containing the visual and balance sensors, can be worn like a headset microphone; the second, which contains the battery and the processor, can easily be worn or carried in a bag.

the device comprises a band that wraps around the back of the head, with earpieces and two side-by-side cameras on one end. the cameras keep a close eye on what’s in front of the wearer, and the device can dictate what it sees through the earpieces, which use bone conduction technology to bypass the ear canal and stimulate the tiny ear bones directly. that way, the user can still hear what’s happening around them, and the device’s narration won’t disturb anyone else.

the headband is connected via a 1-meter (3.3 ft) cable to a separate smartphone-sized unit housing the processor and battery. users can adjust the volume and change settings with a matching set of buttons on both the processor unit and the headband. inside, an nvidia ‘tegra’ graphics processing unit powers the deep learning algorithms, allowing the device to recognize who and what is facing it. at the press of a button, the device can describe the scene before it in detail, down to the furniture and people present.
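how a detector’s output becomes a spoken sentence isn’t specified in the article, but the on-demand scene description can be sketched as turning a list of detected object labels into a short narration. everything below (function names, labels, phrasing) is illustrative, not horus’ actual software:

```python
from collections import Counter

def describe_scene(detections):
    """build a short narration from a list of detected object labels.

    detections: list of strings, one per detected object (illustrative).
    """
    counts = Counter(detections)  # insertion order is preserved
    parts = [f"{n} {label}{'s' if n > 1 else ''}" for label, n in counts.items()]
    if not parts:
        return "nothing recognized in front of you"
    return "I can see " + ", ".join(parts)

print(describe_scene(["chair", "chair", "person", "table"]))
# -> I can see 2 chairs, 1 person, 1 table
```

in practice the labels would come from the deep learning model running on the gpu, and the sentence would be rendered through the bone-conduction earpieces.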

horus can also build a database of contacts by scanning a new face and prompting the user to assign a name. it can then alert the wearer whenever it spots that person again. the same can be done with inanimate objects, to help a blind person distinguish between a bottle of juice and a bottle of milk. the device’s object recognition apparently even works in two dimensions, allowing it to describe photographs, read text on signs, or even turn any book into an audio book.
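the enroll-then-recognize flow described above can be sketched as storing one embedding vector per scanned face and matching later detections to the nearest enrolled embedding. the class, threshold, and vectors below are assumptions for illustration; horus’ actual recognition pipeline is not public:

```python
import math

MATCH_THRESHOLD = 0.6  # maximum distance to accept a match (assumed value)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class FaceDatabase:
    def __init__(self):
        self.entries = []  # list of (name, embedding) pairs

    def enroll(self, name, embedding):
        """store a newly scanned face under the name the user assigns."""
        self.entries.append((name, embedding))

    def identify(self, embedding):
        """return the closest enrolled name, or None if nothing is close enough."""
        if not self.entries:
            return None
        name, dist = min(
            ((n, euclidean(e, embedding)) for n, e in self.entries),
            key=lambda t: t[1],
        )
        return name if dist <= MATCH_THRESHOLD else None

db = FaceDatabase()
db.enroll("luca", [0.1, 0.9, 0.3])       # user scans a face and assigns a name
print(db.identify([0.12, 0.88, 0.31]))   # -> luca (embedding is very close)
print(db.identify([0.9, 0.1, 0.9]))      # -> None (no enrolled face nearby)
```

the same structure would work for inanimate objects (juice bottle vs. milk bottle): only the source of the embeddings changes.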

in navigation mode, ‘horus’ uses its stereo-camera setup to perceive the distance to objects in front of the user, and can respond with a system of audio cues like the parking sensors in a car: the closer something is, the faster the device will beep, communicating direction by focusing the sound more in either the left or right ear. similar technology has shown up in other devices designed to improve the quality of life of visually impaired people, but ‘horus’ seems to be a more advanced, elegant solution. ‘horus’ is currently being tested in the italian community, and a wider beta program is expected to launch in january 2017, before the product launches later that year.
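the parking-sensor analogy maps naturally onto two small functions: one turning distance into a beep interval (closer means faster), the other turning the obstacle’s bearing into left/right loudness. the ranges and mappings below are illustrative assumptions, not horus’ actual tuning:

```python
MIN_DIST, MAX_DIST = 0.3, 4.0          # metres covered by the cue (assumed)
MIN_INTERVAL, MAX_INTERVAL = 0.1, 1.0  # seconds between beeps (assumed)

def beep_interval(distance_m):
    """map obstacle distance to seconds between beeps (closer -> faster)."""
    d = max(MIN_DIST, min(MAX_DIST, distance_m))
    frac = (d - MIN_DIST) / (MAX_DIST - MIN_DIST)
    return MIN_INTERVAL + frac * (MAX_INTERVAL - MIN_INTERVAL)

def stereo_pan(bearing_deg):
    """map obstacle bearing (-90 = far left, +90 = far right) to ear gains."""
    b = max(-90.0, min(90.0, bearing_deg)) / 90.0  # normalize to -1 .. 1
    right = (b + 1.0) / 2.0
    return (1.0 - right, right)  # (left_gain, right_gain)

print(round(beep_interval(0.3), 2))  # nearest obstacle -> 0.1 (fastest beeps)
print(round(beep_interval(4.0), 2))  # farthest obstacle -> 1.0 (slowest beeps)
print(stereo_pan(90.0))              # obstacle fully to the right -> (0.0, 1.0)
```

in the real device, the distance would come from stereo disparity between the two cameras, and the gains would drive the bone-conduction transducers.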