
Microsoft Research's Kinect Augmented Reality Room

November 7, 2011 | Hogan Keyser
Microsoft Research has released a proof-of-concept video of an augmented reality room, built with Microsoft Kinect cameras and handheld projectors, that enables users to dynamically augment environments with digital graphics.

From the researchers' paper abstract:

Handheld projector systems have the potential to enable users to dynamically augment environments with digital graphics. We explore new parts of the design space for interacting using handheld projection in indoor spaces, in particular systems that are more "aware" of the environment in which they are used. This is defined broadly as either spatial awareness, where the projector system has a sense of its location and/or orientation; or geometry awareness, where the system has a sense of the geometry of the world around it, which can encompass the user's hands and body as well as objects, furniture and walls. Awareness like this can be enabled through the use of sensors built into the handheld device, through infrastructure-based sensing, or a combination of the two. This paper seeks to better understand the interactive possibilities this type of awareness affords for mobile projector interaction. Previous work in this area has predominantly focused on infrastructure-based spatially aware projection and interaction.


The following writeup is by Rob Knies on the Inside Microsoft Research blog:
--

Imagine that you carry a small device that can make any nearby surface interactive -- and that those surfaces can be manipulated via multitouch gestures and can store data.

"Wouldn't that be cool?"

The enthusiasm belongs to David Molyneaux, and he is one of several Microsoft Research Cambridge researchers striving to bring this fanciful vision to reality, using interactive, environmentally aware projector systems embedded in handheld devices.

[Image: Microsoft Kinect augmented reality handheld projectors]
"In the future," Molyneaux predicts, "we will all have devices we carry around -- maybe projectors integrated into mobile phones -- that enable us to augment arbitrary surfaces and objects with digital content and relevant information. We will live in a 3-D 'information space' where objects, surfaces, and devices around us in the home or office can generate digital information or have it attached. These mobile devices will reveal this information and enable interaction with the information directly."

That vision, in many senses, is shared among the augmented-reality community and could prove invaluable in scenarios such as gaming, workflows, and collaborative, ad hoc information work.

In a way, the Cambridge researchers' project mirrors OmniTouch, featured at the Association for Computing Machinery's 24th Symposium on User Interface Software and Technology, held Oct. 16-19 in Santa Barbara, Calif. Both projects are mobile depth-sensing and projection systems, but the main difference is that while OmniTouch knows only about objects and planar surfaces placed directly in front of it at close range, the Cambridge projector systems aim for high-fidelity awareness of the entire environment and interaction on any shaped surface.



The environmental awareness portion of the effort has three goals:

  • Spatial awareness: The projector knows where it is in 3-D space as it moves around in real time.
  • Geometry awareness: The projector knows which objects and surfaces are in an environment.
  • Interaction: Users can use movement, gestures, and touch to interact with projected information.

This combination of awareness is key: it enables virtual content to be placed anywhere in the 3-D space and to appear projected in the correct real-world location, undistorted, for the user.
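To make that concrete, here is a minimal sketch (not the researchers' code) of the rendering step that spatial and geometry awareness enable: given the projector's tracked pose and a 3-D anchor point on a surface, compute where in the projector's framebuffer to draw content so it lands on the correct real-world spot. The pinhole model, function name, and intrinsic values are illustrative assumptions.

```python
import numpy as np

# Hypothetical projector intrinsics for a 1280x720 image (illustrative values).
K = np.array([[1400.0,    0.0, 640.0],
              [   0.0, 1400.0, 360.0],
              [   0.0,    0.0,   1.0]])

def world_to_projector_pixel(X_world, R, t):
    """Map a 3-D world point to projector pixel coordinates.

    R, t describe the tracked projector pose as a world-to-projector
    transform (from spatial awareness). Returns None if the point is
    behind the projector.
    """
    X_proj = R @ X_world + t        # world frame -> projector frame
    if X_proj[2] <= 0:
        return None
    p = K @ X_proj                  # pinhole projection
    return p[:2] / p[2]             # perspective divide -> (u, v) in pixels

# Example: a label anchored on a surface 1.5 m ahead and 0.2 m to the left.
R, t = np.eye(3), np.zeros(3)
print(world_to_projector_pixel(np.array([-0.2, 0.0, 1.5]), R, t))
```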

"We then investigate," Molyneaux says, "what types of interactions are possible on top of these systems and develop new ways of interacting with these types of mobile displays of the future."

The biggest challenge for this project has been developing a mobile projector/camera system that can deliver high-fidelity environmental awareness and generate high-quality representations of that environment while simultaneously tracking the location of the projector.

The team developed both infrastructure-based and infrastructure-less systems that use Kinect depth sensors. The former uses ceiling-mounted Kinect cameras to sense a room and detect the locations of projectors and users, enabling whole-body sensing and interaction. The latter required the integration of multiple images from a handheld Kinect camera to build a model of the environment in real time -- and led, in part, to the development of the KinectFusion 3-D reconstruction system.
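The following is a heavily simplified sketch of the volumetric-fusion idea behind KinectFusion-style reconstruction, not Microsoft's implementation: each depth frame is integrated into a grid of truncated signed distance values, so the model of the room improves as the handheld camera moves. Grid size, truncation distance, and camera intrinsics are illustrative assumptions.

```python
import numpy as np

GRID  = 64        # voxels per side of a cubic volume
SIZE  = 3.0       # edge length of the volume, metres
TRUNC = 0.05      # truncation distance, metres
fx = fy = 525.0   # assumed Kinect-like intrinsics
cx, cy = 319.5, 239.5

tsdf   = np.zeros((GRID,) * 3)    # running signed-distance estimate per voxel
weight = np.zeros((GRID,) * 3)    # number of observations per voxel

def integrate(depth, R, t):
    """Fuse one depth frame (metres) taken at camera-to-world pose (R, t)."""
    idx = np.indices((GRID,) * 3).reshape(3, -1).T      # (N, 3) voxel indices
    centers = (idx + 0.5) * (SIZE / GRID)               # voxel centres, world frame
    cam = (R.T @ (centers - t).T).T                     # world -> camera frame
    front = cam[:, 2] > 0.1                             # keep voxels in front of the camera
    idx, cam = idx[front], cam[front]
    u = np.round(fx * cam[:, 0] / cam[:, 2] + cx).astype(int)
    v = np.round(fy * cam[:, 1] / cam[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])
    idx, cam, u, v = idx[ok], cam[ok], u[ok], v[ok]
    sdf = depth[v, u] - cam[:, 2]                       # distance from voxel to observed surface
    keep = (depth[v, u] > 0) & (sdf > -TRUNC)           # skip sensor holes and occluded voxels
    d = np.clip(sdf[keep], -TRUNC, TRUNC) / TRUNC
    vox = tuple(idx[keep].T)
    tsdf[vox] = (tsdf[vox] * weight[vox] + d) / (weight[vox] + 1)   # running average
    weight[vox] += 1
```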

What Molyneaux and his colleagues are doing amounts to nothing less than inventing the future. It's an exhilarating obligation.

"I find this vision of more natural interaction with information really exciting," Molyneaux says. "Rather than being stuck with a keyboard and mouse in front of a monitor, it's bringing the content and interaction into the real world around us, so people can interact as they go about their daily lives."

Researchers: David Molyneaux (Microsoft), Shahram Izadi (Microsoft), David Kim (Newcastle University), Otmar Hilliges (Microsoft), Steve Hodges (Microsoft), Alex Butler (Microsoft), Xiang Cao (Microsoft), Hans Gellersen (Lancaster University)





