The idea here is that the wearer may want to share an experience (travel, a party, or whatever) with someone who can't be there in person. With the MH-2, that person can stand in front of a 3-D immersive display (or, more likely, a television screen) outfitted with a motion capture device (like a Kinect) and remotely embody the robot on the user's shoulder.
What the robot sees, the remote user sees on his or her screen. Likewise, his or her speech and gestures are translated back to the robot, which uses its remarkably plentiful degrees of freedom, seven for the arms, three for the head, two for the body, plus one for realistic breathing (yes, breathing), to recreate the remote user's persona, albeit on a slightly smaller scale. The idea, according to the researchers, is something like the vision below.
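For the curious, here's a rough sketch of what the gesture-retargeting half of that loop might look like: motion-capture joint angles from the remote user get mapped onto the robot's smaller set of joints and clamped to safe limits. Every name here (the joint labels, the JointAngles grouping, the retarget function) is hypothetical, since the researchers haven't published the MH-2's actual interface; the DOF counts are the only part taken from the article.

```python
# Hypothetical sketch of driving the MH-2's joints from motion-capture data.
# The interface and joint names are invented for illustration; only the
# DOF counts (7 arm, 3 head, 2 body, 1 breathing) come from the article.
import math
from dataclasses import dataclass

@dataclass
class JointAngles:
    """Target angles (radians), grouped as the article describes them."""
    arm: list[float]     # 7 DOF
    head: list[float]    # 3 DOF
    body: list[float]    # 2 DOF
    breathing: float     # 1 DOF, driven by a slow oscillator, not by mocap

def retarget(skeleton: dict[str, float]) -> JointAngles:
    """Map mocap joint angles (e.g., from a Kinect skeleton) onto the
    robot's smaller kinematic chain, clamping to safe limits."""
    clamp = lambda a, lo=-math.pi / 2, hi=math.pi / 2: max(lo, min(hi, a))
    return JointAngles(
        arm=[clamp(skeleton.get(j, 0.0)) for j in (
            "shoulder_pitch", "shoulder_roll", "shoulder_yaw",
            "elbow", "wrist_pitch", "wrist_roll", "wrist_yaw")],
        head=[clamp(skeleton.get(j, 0.0)) for j in
              ("neck_pitch", "neck_roll", "neck_yaw")],
        body=[clamp(skeleton.get(j, 0.0)) for j in
              ("torso_pitch", "torso_yaw")],
        breathing=0.0,
    )

# One mocap frame (radians); unseen joints default to neutral.
frame = {"elbow": 0.8, "neck_yaw": -0.3}
print(retarget(frame).arm)
```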
The limiting factor here, at least for now, is the huge backpack full of servos and such that enables all those degrees of freedom. Presumably that will shrink as the researchers refine the design.
Above us, below us, inside us, and now on our shoulders as well, like little angels or devils whispering in our ears. Think about it...