Imagine you are feeling your way through a maze blindfolded, informed only by what you can feel.
Now imagine the maze isn’t real, but actually a digital construction.
This Matrix-like scenario was recently used by the Interactive Architecture Lab at the Bartlett School of Architecture in London to test its latest innovation, Sarotis: a wearable technology, made from soft fabric that wraps around the body like a second skin, designed to heighten the user’s awareness of his or her surroundings. Combining soft robotics with depth sensors, the prosthetic works in tandem with Google’s Project Tango, a computer vision platform that uses depth sensing, motion tracking, and area learning to let a smartphone or other device “see” its environment in 3-D. Sarotis translates that spatial data into tactile feedback, inflating to exert gentle pressure that guides the user.
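Sarotis’s actual control software has not been published, but the basic idea of turning a depth reading into a tactile cue can be sketched in a few lines. In this hypothetical example, the function name, units, and thresholds are all assumptions for illustration: the closer the nearest obstacle, the firmer the inflation.

```python
def pressure_for_distance(distance_m, max_range_m=2.0, max_pressure_kpa=20.0):
    """Map an obstacle distance (meters) to a cuff inflation pressure (kPa).

    Illustrative sketch only: all names, units, and values here are
    assumptions, not Sarotis's real parameters. Obstacles beyond
    max_range_m produce no feedback; an obstacle at zero distance
    produces the maximum pressure.
    """
    if distance_m >= max_range_m:
        return 0.0
    # Linear ramp: a nearer obstacle yields a stronger tactile cue.
    return max_pressure_kpa * (1.0 - distance_m / max_range_m)
```

A real system would of course smooth the signal over time and handle multiple cuffs (one per direction), but the core mapping from depth data to pressure would look something like this.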
As an example, the lab blindfolded participants and, using Project Tango, had them navigate an empty room containing an invisible virtual path. Wearing the Sarotis prosthetic, participants navigated the maze fairly easily. Afterward, when asked to draw the maze, they were able to reproduce its form as well.
Currently, the most obvious application of Sarotis is aiding the visually impaired: the device could use gentle pressure to steer a wearer away from curbs, walls, or other obstacles. But the Interactive Architecture Lab expects 3-D vision technologies like Project Tango to reach the majority of mobile devices, predicting that by 2020, 70 percent of the world’s population will have access to 3-D scanning, depth tracking, and motion awareness on their smartphones. Combined, Sarotis and 3-D computer vision could radically expand the possibilities for navigation, gaming, safety, and other popular applications.