How to control robots with your mind
The robot is informed that its initial motion was incorrect based upon real-time decoding of the observer's EEG signals, and it corrects its selection accordingly to properly sort an object (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)
Making robots useful collaborators at home and at work
Two research teams are developing new ways to communicate with robots and to one day shape them into the kind of productive workers featured in the AMC TV show HUMANS (now in its second season).
Programming robots to function in a real-world environment is normally a complex process. But now a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is creating a system that lets people correct robot mistakes instantly by simply thinking.
In the initial experiment, the system uses data from an electroencephalography (EEG) helmet to correct robot performance on an object-sorting task. Novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds.
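The paper describes the classifier only at a high level, but the idea of labeling a short, event-locked EEG window can be sketched in a few lines. Everything below is a simplified illustration, not the team's actual pipeline: the single-channel epochs are simulated, and the mean-amplitude feature and threshold are stand-ins for the real machine-learning model.

```python
import numpy as np

def classify_epoch(epoch, threshold=0.5):
    """Label one EEG epoch: 1 = error signal present, 0 = absent.

    `epoch` is a 1-D array of voltage samples time-locked to the
    robot's action. An error-related deflection raises the mean
    amplitude of the window, so a simple (toy) feature is that mean
    compared against a fixed threshold.
    """
    return 1 if epoch.mean() > threshold else 0

# Simulated 200 ms epochs at 1 kHz: noise only vs. noise + deflection
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.1, 200)        # no error potential
errp = rng.normal(0.0, 0.1, 200) + 1.0      # positive deflection added

print(classify_epoch(baseline))
print(classify_epoch(errp))
```

A real online decoder would use multi-channel features and a trained classifier, but the input/output shape — a short time-locked window in, a binary label out within tens of milliseconds — is the same.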
The system includes a main experiment controller, a Baxter robot, and an EEG acquisition and classification system. The goal is to make the robot pick up the cup that the experimenter is thinking about. An Arduino microcontroller (bottom) relays messages between the EEG system and the robot controller. A mechanical contact switch (yellow) detects when the robot arm begins to move. (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)
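The caption above describes a closed loop: the robot commits to a target, the contact switch timestamps motion onset so the EEG window can be time-locked, and a detected error signal makes the robot switch targets. A minimal sketch of one such trial, with a placeholder `decode_errp` callable standing in for the online EEG classifier (the function and target names are illustrative, not from the paper):

```python
def sorting_trial(decode_errp, targets=("left", "right")):
    """One trial of the EEG-corrected sorting loop (illustrative only).

    The robot commits to an initial target; a contact switch marks
    motion onset, time-locking the observer's EEG epoch. decode_errp()
    returns True when an error-related potential is detected, in which
    case the robot corrects itself by switching to the other target.
    """
    choice = targets[0]          # robot's initial (possibly wrong) pick
    # ... motion begins; contact switch timestamps the EEG epoch ...
    if decode_errp():            # observer's brain flagged a mistake
        choice = targets[1]      # correct the selection mid-task
    return choice

print(sorting_trial(lambda: True))    # error detected -> switch
print(sorting_trial(lambda: False))   # no error -> keep first pick
```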
While the system currently handles relatively simple binary-choice activities, we may be able one day to control robots in much more intuitive ways. "Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word," says CSAIL Director Daniela Rus. "A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven't even invented yet."
The team used a humanoid robot named "Baxter" from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.
Intuitive human-robot interaction
The system detects brain signals called "error-related potentials" (generated whenever our brains notice a mistake) to determine if the human agrees with a robot's decision.
"As you watch the robot, all you have to do is mentally agree or disagree with what it is doing," says Rus. "You don't have to train yourself to think in a certain way -- the machine adapts to you, and not the other way around." And if the robot isn't sure about its decision, it can trigger a human response to get a more accurate answer.
Robot asks questions, and based on a person's language and gesture, infers what item to deliver. (credit: David Whitney/Brown University)
The team believes that future systems could extend to more complex multiple-choice tasks. The system could even be useful for people who can't communicate verbally: the robot could be controlled via a series of discrete binary choices, similar to how patients with locked-in syndrome spell out words with their minds.
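The spelling idea mentioned above reduces to repeated yes/no decisions that halve the set of candidates each time. A small sketch, where the `answer_fn` callable is a hypothetical stand-in for a decoded binary brain response (True meaning the target is in the first half of the remaining letters):

```python
import string

def spell_letter(answer_fn, alphabet=string.ascii_uppercase):
    """Narrow a set of letters to one via binary (yes/no) choices.

    answer_fn(candidates) returns True if the target letter is in
    `candidates` -- here a placeholder for a decoded brain signal.
    Each question halves the remaining set, so 26 letters need at
    most 5 questions.
    """
    letters = list(alphabet)
    while len(letters) > 1:
        half = letters[: len(letters) // 2]
        letters = half if answer_fn(half) else letters[len(half):]
    return letters[0]

target = "M"
print(spell_letter(lambda half: target in half))  # converges to "M"
```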
The project was funded in part by Boeing and the National Science Foundation. An open-access paper will be presented at the IEEE International Conference on Robotics and Automation (ICRA) in Singapore this May.