Explainable shared control in assistive robotics
File | Description | Size | Format
---|---|---|---
Zolotas-M-2021-PhD-Thesis.pdf | Thesis | 15.95 MB | Adobe PDF
Title: Explainable shared control in assistive robotics
Authors: Zolotas, Mark
Item Type: Thesis or dissertation
Abstract: Shared control plays a pivotal role in designing assistive robots that complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may experience confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. Explainable Shared Control involves two perspectives on transparency: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot's perspective, in turn, requires an awareness of human "intent", and so a clustering framework composed of a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
Content Version: Open Access
Issue Date: Sep-2020
Date Awarded: Feb-2021
URI: http://hdl.handle.net/10044/1/91439
DOI: https://doi.org/10.25560/91439
Copyright Statement: Creative Commons Attribution NonCommercial ShareAlike Licence
Supervisor: Demiris, Yiannis
Sponsor/Funder: Engineering and Physical Sciences Research Council
Funder's Grant Number: 1859675
Department: Electrical and Electronic Engineering
Publisher: Imperial College London
Qualification Level: Doctoral
Qualification Name: Doctor of Philosophy (PhD)
Appears in Collections: Electrical and Electronic Engineering PhD theses