Driving My First Avatar: Anybots and the Future of Telepresence
June 18, 2010 | Chris Payatagool
I just got to drive my first avatar, and while it wasn't the James Cameron experience I'd hoped for, it may represent one of the futures for telepresence. This was at a local Silicon Valley company called Anybots. Once the product launches, for around $15,000 -- a fraction of the cost of a high-end telepresence system -- you will be able to almost be in two places at once. Let's talk about the problems this technology is attempting to solve, the experience with Anybots, and what the future is likely to bring.
The Critical Problem with Telepresence
You can buy a high-end telepresence room for more than $200,000 with monthly charges in excess of $10,000 and it really isn't the same as getting on a plane and actually being there. This is because the systems, while visually and audibly similar to actually sitting across from the person, create a formal and often adversarial working relationship by placing people at opposite sides of a virtual table rather than next to each other.
In addition, there is no real way to leave the room to have a side conversation, or to walk around and look at things, without resorting to a secondary device that generally isn't integrated into the telepresence solution. This is one of the big reasons that, as good as the current generation of products is, they aren't used that much. They work best for things like staff meetings where one person is presenting; they don't work very well when groups have to break off into teams to work individually or have conversations outside the conference room itself.
Anybots makes remotely controlled robots that move much like a Segway scooter, with two side-by-side wheels and strong self-balancing technology. The robots are about adult height -- they can be adjusted down to child size -- and can be remotely activated and controlled. A video feed of the operator appears in the head and, adjusted properly, the "eyes" sit at about eye level; one of them is a camera, which adds to the realistic feel. The robots have about the same mobility as a wheelchair and don't have hands, so they can't work things like elevators. They do have a built-in, targetable laser pointer.
The company was having network problems when I was there, which showcased the weakness of solutions like this: if the network isn't working well, you could end up with a stranded robot. But when the network was working, I could easily navigate the robot down halls and into rooms. Conversations seemed natural with people and even between robots. (You probably wouldn't do this often unless two remote people wanted to talk to someone or a group at the same time.)
This solution felt best for situations in which you only had one or two remote people and you wanted to make them feel at home. It would be particularly comfortable for someone who had experience with multi-player online games because the experience is similar.
This seemed to be a better solution for conversations or remote viewing. Remote training would be a problem, however, because you can't really see much of the operator, and it's hard to drive the robot while simultaneously trying to demonstrate a task (as you would if the student, rather than the teacher, were remote).
Telepresence for Training
For training, I saw an interesting implementation at a Marvell AVANTA event a few months ago. A high-definition monitor was turned on its side with a camera placed on top, so you could see the remote person's entire body. You could imagine this being used for martial arts training, aerobics or other classroom settings where the instructor needs to demonstrate things. It also was very natural for one-to-one chatting, almost like you were talking through a door jamb (in fact, I'd be tempted to frame the monitor in a door jamb to complete the illusion).
The Future: A Blended Solution
I can imagine a solution that has all of these elements to it and includes desktop videoconferencing, both to feed the facial image to the robot and for those times when a chat between two deskbound workers is most appropriate. This way people could move around a company virtually. They could appear in robots, virtual doorways and in conference rooms, depending on their needs. This would allow them the flexibility they would have if they were physically there.
I can picture a videoconference that starts out in the conference room and then transfers into the hallway and between a remote robot and person on site. The robot, working like a moving video phone, walks over to a virtual door so the remote individual can demonstrate what they are talking about, and then the two parties (one controlling the robot) walk back to the conference room for the rest of the meeting. I think that if I could do all that, I'd be far less likely to want to fly. That solution is undoubtedly coming.
I still think that loading an Anybot with a paintball gun, painting it in battle colors, and letting it run rampant around an office would be a lot of fun, even though it would do horrid things to productivity. But that's just because I'm clearly twisted and need to get out more.
How about you: do you see a future technology in this mix that would make you willing to forgo travel?