Latest Telepresence and Visual Collaboration News:
OmniTouch turns body parts and nearby surfaces into touch interfaces
October 20, 2011 | Hogan Keyser
Microsoft researchers want to turn your hand into a touchscreen
By Jon Brodkin | Published October 19, 2011 via ISPR - Multitouch screens are versatile and easy to use, so why limit them to smartphones and tablets? Researchers have been working for several years to extend multitouch to arbitrary surfaces, and a project called OmniTouch, from Microsoft Research and a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University, may bring that goal closer to reality.
OmniTouch turns body parts and nearby surfaces into touch interfaces. Users can read and reply to an e-mail by touching their hands or a nearby wall, or even use multiple applications at once on multiple surfaces. The results from a user study "suggest our prototype system approaches the accuracy of conventional, physical touch screens, but on arbitrary, ad hoc surfaces," the researchers say in a 3:24 video.
The project is led by Carnegie Mellon student and former Microsoft Research intern Chris Harrison and Microsoft researchers Hrvoje Benko and Andrew Wilson. "We wanted to capitalize on the tremendous surface area the real world provides," Benko says in a Microsoft research article. "The surface area of one hand alone exceeds that of typical smart phones. Tables are an order of magnitude larger than a tablet computer."
OmniTouch is reminiscent of the SixthSense system developed at the MIT Media Lab, which had students projecting a gestural interface onto the world around them with the help of a device containing a projector, mirror and camera worn around their necks, as well as sensors placed upon their fingers. OmniTouch, however, requires only a device to be worn on one's shoulder, with nothing special on the hands or arms. A research paper on OmniTouch notes the influence of SixthSense and other similar projects, but says these systems did not create true touch interactions because they "could not differentiate between clicked and hovering fingers." The limitation was due partly to an "inability to track surfaces in the environment, which also made it impossible to have the projected interface change and follow the surface as it moved."
The proof-of-concept OmniTouch system consists of a depth-sensing camera and a laser-based pico-projector. In its prototype stage it is tethered to a desktop computer, so it is not yet truly portable.
Using technology principles similar to Microsoft's Kinect, OmniTouch starts by generating a depth map of a scene, while isolating fingers from appropriate touch surfaces, including a hand, forearm, notepad, table, or wall. While researchers say the system generates few false positives, it is sensitive to the angle at which fingers appear in front of the camera. Some sophisticated computation is performed to differentiate fingers touching a surface from fingers merely hovering above a surface.
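The touch-versus-hover distinction described above can be illustrated with a toy sketch: if the system knows the depth of the surface beneath a fingertip, a fingertip whose depth is close to the surface is touching, while one noticeably nearer the camera is hovering. The threshold value, function names, and depth-map layout below are illustrative assumptions, not the actual OmniTouch pipeline, which performs considerably more sophisticated finger and surface tracking.

```python
import numpy as np

# Hypothetical threshold (millimeters): a fingertip within this distance
# of the surface is treated as touching it. The real OmniTouch system
# uses far more sophisticated processing; this only sketches the idea.
TOUCH_THRESHOLD_MM = 10.0

def classify_finger(depth_map, fingertip, surface_patch):
    """Classify a detected fingertip as 'touch' or 'hover'.

    depth_map      -- 2D array of per-pixel depths in millimeters
    fingertip      -- (row, col) pixel of the detected fingertip
    surface_patch  -- (row, col) pixels near the fingertip that belong
                      to the underlying surface (hand, wall, table, ...)
    """
    finger_depth = depth_map[fingertip]
    # Estimate the surface depth from the surrounding surface pixels.
    surface_depth = np.median([depth_map[p] for p in surface_patch])
    # Gap between the surface and the fingertip along the camera axis.
    gap = surface_depth - finger_depth
    return "touch" if gap < TOUCH_THRESHOLD_MM else "hover"

# Toy example: a flat surface 800 mm from the camera.
depth = np.full((8, 8), 800.0)
patch = [(3, 3), (3, 5), (5, 3), (5, 5)]

depth[4, 4] = 795.0  # fingertip 5 mm above the surface
print(classify_finger(depth, (4, 4), patch))  # touch

depth[4, 4] = 760.0  # fingertip 40 mm above the surface
print(classify_finger(depth, (4, 4), patch))  # hover
```

This also hints at why the system is sensitive to finger angle: a finger tilted toward the camera shifts the measured fingertip depth, which can push the gap across the threshold.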