VTTB: a visuo-tactile learning approach for robot-assisted bed bathing
File(s)
Author(s)
Gu, Y
Demiris, Y
Type
Journal Article
Abstract
Robot-assisted bed bathing holds the potential to enhance the quality of life for older adults and individuals with mobility impairments. Yet, accurately sensing the human body in a contact-rich manipulation task remains challenging. To address this challenge, we propose a multimodal sensing approach that perceives the 3D contour of body parts using the visual modality while capturing local contact details using the tactile modality. We employ a Transformer-based imitation learning model to utilize the multimodal information and learn to focus on crucial visuo-tactile task features for action prediction. We demonstrate our approach using a Baxter robot and a medical manikin to simulate the robot-assisted bed bathing scenario with bedridden individuals. The robot adeptly follows the contours of the manikin's body parts and cleans the surface based on its curve. Experimental results show that our method can adapt to nonlinear surface curves and generalize across multiple surface geometries and to human subjects. Overall, our research presents a promising approach for robots to accurately sense the human body through multimodal sensing and perform safe interaction during assistive bed bathing.
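The abstract describes attending over visual and tactile features to predict actions. The paper's actual architecture is not reproduced here; the sketch below is only a minimal, illustrative NumPy example of the general idea of cross-modal attention fusion, where tactile tokens attend over visual tokens and the fused features feed a linear action head. All names, dimensions, and the `W_action` head are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(tactile_tokens, visual_tokens, d_k):
    # tactile tokens (queries) attend over visual tokens (keys/values)
    # via scaled dot-product attention
    scores = tactile_tokens @ visual_tokens.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ visual_tokens

# toy embeddings: 4 tactile tokens, 16 visual tokens, feature dim 8
tactile = rng.standard_normal((4, 8))
visual = rng.standard_normal((16, 8))

fused = cross_modal_attention(tactile, visual, d_k=8)

# hypothetical linear head mapping pooled fused features to a 6-DoF action
W_action = rng.standard_normal((8, 6)) * 0.1
action = fused.mean(axis=0) @ W_action

print(fused.shape, action.shape)  # (4, 8) (6,)
```

In a real system the tokens would come from learned visual (e.g., depth/point-cloud) and tactile encoders, and the attention layers would be trained end-to-end with imitation learning; this snippet only shows the data flow of the fusion step.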
Date Issued
2024-06
Online Publication Date
2024-06-17T10:10:40Z
Date Acceptance
2024-04-03
ISSN
2377-3766
Publisher
Institute of Electrical and Electronics Engineers
Start Page
5751
End Page
5758
Journal / Book Title
IEEE Robotics and Automation Letters
Volume
9
Issue
6
Copyright Statement
Copyright © 2024 IEEE. This is the author’s accepted manuscript made available under a CC-BY licence in accordance with Imperial’s Research Publications Open Access policy (www.imperial.ac.uk/oa-policy)
License URI
Identifier
http://dx.doi.org/10.1109/lra.2024.3396108
Publication Status
Published
Date Publish Online
2024-05-01