Mutually Assistive Robotics
Introduction
The project I intend to work on is Mutually Assistive Robotics, which uses augmented reality designed specifically for disabled users. I will be doing this work under Elaine Short, Jivko Sinapov, Matthias Scheutz, Chris Rogers, and Jennifer Buxton in their lab at Tufts University, and the project is funded by the NSF. The question we are trying to answer is: how can augmented reality (AR) facilitate better interaction between disabled users and their assistive robots? This question matters because over 25% of Americans live with some form of disability. Assistive robotic technologies can improve disabled people's quality of life by enabling them to participate more actively in work and leisure activities and to feel more independent (Short et al.).
Our Approach
In this project, we plan to take a strengths-based approach to assistive robotics, in which the user and the robot assist each other. The user will be able to understand and control not just the robot's high-level tasks, but also low-level aspects of its movement and behavior. This differs from many assistive robots, which typically take a deficit-based approach to disability: the disabled user provides only high-level goals, and the robot then completes the task entirely autonomously. That approach removes all control from the user, which is ethically problematic, especially when control over the robot's actions is itself the goal. Another difference is that we want the robot to support not just basic chores, but activities like art, makeup, and other hobbies. While these are not medical or survival necessities, our goal is to help disabled people lead more fulfilling lives. By letting the disabled user's intelligence guide the robot, we create a user-robot system that is constantly evolving and adapting.
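To make the contrast concrete, here is a minimal sketch in Python of a command interface that accepts both high-level goals and low-level motion commands. All names here are hypothetical illustrations of the idea, not the project's actual control stack:

```python
# Illustrative sketch only: these class and method names are hypothetical,
# not the project's actual control software.
from dataclasses import dataclass
from typing import Optional, Sequence, Union


@dataclass
class HighLevelGoal:
    """A task the robot plans and executes autonomously, e.g. 'pick up the brush'."""
    task_name: str
    target_object: str


@dataclass
class LowLevelCommand:
    """Direct user control over an aspect of the robot's motion."""
    joint_velocities: Optional[Sequence[float]] = None  # rad/s per joint
    speed_scale: float = 1.0                            # cap on motion speed, 0.0-1.0


class SharedController:
    """A deficit-based system would accept only HighLevelGoal; a strengths-based
    system lets the user interleave both, keeping fine-grained control available."""

    def submit(self, command: Union[HighLevelGoal, LowLevelCommand]) -> None:
        if isinstance(command, LowLevelCommand):
            print(f"User drives the arm directly: {command}")
        else:
            print(f"Robot plans and executes autonomously: {command}")


controller = SharedController()
controller.submit(HighLevelGoal("pick_up", "paintbrush"))        # high-level goal
controller.submit(LowLevelCommand(joint_velocities=[0.1] * 6))   # low-level control
```

The point of the design is that the user never has to surrender the lower tier: autonomy is available when wanted, not imposed.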
Augmented Reality
One important technology we will use is AR. Our goal is to help users understand the information the robot uses to make decisions. AR can give users insight into the robot's internal models, which in turn explain its behavior. With a better understanding of how the robot works, the user can influence it in an informed manner and change its behavior more effectively. The robot we will be using is a Kinova Jaco arm, which is designed to be mounted on a wheelchair. It will sense its environment with a depth camera (Intel RealSense D400), a directional microphone array (MiniDSP), and its own proprioceptive sensors.
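The post does not detail the perception pipeline, but as a hedged illustration of pulling data from the Intel RealSense D400, a minimal sketch using the official pyrealsense2 bindings might look like this (the stream settings are a commonly supported mode, not a project requirement):

```python
# Minimal sketch: read one depth frame from an Intel RealSense D400.
# Assumes the official pyrealsense2 package (pip install pyrealsense2);
# the project's actual sensing pipeline may differ.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# 640x480 depth at 30 fps is a commonly supported D400 mode.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance (in meters) to whatever is at the center pixel.
    center_dist = depth.get_distance(320, 240)
    print(f"Distance at image center: {center_dist:.2f} m")
finally:
    pipeline.stop()
```

Depth readings like this are exactly the kind of raw sensory data an AR display can overlay on the world so the user sees what the robot sees.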
Prior Works
We will build on our team's prior work on socially assistive robots (SARs). We have extensive prior work on SARs for people with Parkinson's disease, including robotic architectures for fully autonomous robots that administer health questionnaires and help with medication sorting. This research included a specific focus on robot touch, from general ethical concerns to how people perceive robot touch and how it affects their assessment of, and trust in, the robot. Our team has also investigated the use of AR in human-robot interaction. We will be using SENSAR (Seeing Everything iN Situ with Augmented Reality), an AR robotics system that lets robots communicate their sensory data about the real world. The user can then see, correct, and confirm the robot's perception of the world.
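SENSAR's actual interfaces are not described in this post; purely as an illustration of the see/correct/confirm pattern it enables, a hypothetical exchange might be structured like this:

```python
# Hypothetical sketch of the see/correct/confirm loop SENSAR enables.
# These types and names are illustrative, not SENSAR's actual API.
from dataclasses import dataclass


@dataclass
class PerceivedObject:
    label: str          # robot's best guess, e.g. "cup"
    confidence: float   # 0.0-1.0
    position: tuple     # (x, y, z) in the robot's frame, meters


def review_perception(obj: PerceivedObject) -> PerceivedObject:
    """User sees the robot's belief rendered in AR, then confirms or corrects it."""
    print(f"Robot sees a '{obj.label}' ({obj.confidence:.0%}) at {obj.position}")
    answer = input("Correct label (or press Enter to confirm): ").strip()
    if answer:
        obj.label = answer  # correction flows back into the robot's world model
    return obj


confirmed = review_perception(PerceivedObject("cup", 0.72, (0.4, -0.1, 0.8)))
```

The key idea is the feedback direction: perception is not read-only, so the user's corrections improve the robot's model of the world.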
Proposed Research
To provide useful feedback to the robot, and to understand why it is doing what it is doing, users need a good mental model of the robot. One challenge will be deciding what information to show the user without overwhelming or confusing them. I will be developing software to visualize the robotic arm's internal state for the user, helping them build an accurate mental model of the robot in an accessible way. Our interface will combine language with an AR display for visualizing the robot's sensory and cognitive data (Krause et al.). I will be doing this in collaboration with graduate students who are experts in learning from demonstration (LfD) and reinforcement learning (RL). Through AR, we will display the robot's planned motion before an action is executed, letting the user decide whether the displayed motion is the desired one.
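The post does not name a motion-planning stack, but assuming a typical ROS + MoveIt setup for an arm like the Jaco, a hedged sketch of this plan-preview-approve loop might look like the following (the planning-group name "arm" and the text prompt are illustrative stand-ins for the AR interface):

```python
#!/usr/bin/env python
# Hedged sketch assuming a ROS + MoveIt stack; the planning-group name "arm"
# and the text prompt are illustrative, not the project's actual interface.
import sys
import rospy
import moveit_commander
from moveit_msgs.msg import DisplayTrajectory

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("plan_preview_demo")

robot = moveit_commander.RobotCommander()
group = moveit_commander.MoveGroupCommander("arm")
display_pub = rospy.Publisher(
    "/move_group/display_planned_path", DisplayTrajectory, queue_size=1
)

group.set_named_target("home")      # plan toward a predefined pose
# MoveIt's Noetic-era Python API returns (success, trajectory, time, error_code);
# older releases return only the trajectory.
success, plan, _, _ = group.plan()  # plan only -- the arm does not move yet

if success:
    preview = DisplayTrajectory()
    preview.trajectory_start = robot.get_current_state()
    preview.trajectory.append(plan)
    display_pub.publish(preview)    # a visualization layer (RViz, or AR) renders this

    # Execute only with the user's explicit approval of the previewed motion.
    if input("Execute this motion? [y/N] ").strip().lower() == "y":
        group.execute(plan, wait=True)
```

Separating planning from execution is what makes the preview possible: the trajectory exists as data the AR display can render before any motor moves.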