Agent and object aware tracking and mapping methods for mobile manipulators

File: Houseago-C-2022-PhD-Thesis.pdf
Description: Thesis
Size: 35.8 MB
Format: Adobe PDF
Title: Agent and object aware tracking and mapping methods for mobile manipulators
Authors: Houseago, Charles Fletcher
Item Type: Thesis or dissertation
Abstract: The age of the intelligent machine is upon us. They exist in our factories, our warehouses, our military, our hospitals, on our roads, and on the moon. Most of these things we call robots. When placed in a controlled or known environment such as an automotive factory or a distribution warehouse, they perform their given roles with exceptional efficiency, achieving far more than is within reach of a humble human being. Despite the remarkable success of intelligent machines in such domains, they have yet to make a wholehearted deployment into our homes. The missing link between the robots we have now and the robots that are soon to come to our houses is perception. Perception as we mean it here refers to a level of understanding beyond the collection and aggregation of sensory data. Much of the available sensory information is noisy and unreliable: our homes contain many reflective surfaces, repeating textures on large flat surfaces, and many disruptive moving elements, including humans. These environments also change over time, with objects frequently moving within and between rooms. This idea of change in an environment is fundamental to robotic applications, as in most cases we expect robots to be the effectors of such change. We can identify two particular challenges that must be solved for robots to make the jump to less structured environments: how to manage noise and disruptive elements in observational data, and how to understand the world as a set of changeable elements (objects) which move over time within a wider environment. In this thesis we look at one possible approach to solving each of these problems. For the first challenge, we use proprioception aboard a robot with an articulated arm to handle difficult and unreliable visual data caused both by the robot and by the environment. We use sensor data aboard the robot to improve the pose tracking of a visual system when the robot moves rapidly, with high jerk, or when observing a scene with little visual variation. For the second challenge, we build a model of the world at the level of rigid objects, and relocalise those objects both as they change location between different sequences and as they move. We use semantics, image keypoints, and 3D geometry to register and align objects between sequences, showing how their positions have moved between disparate observations.
Content Version: Open Access
Issue Date: Jan-2022
Date Awarded: May-2022
URI: http://hdl.handle.net/10044/1/105155
DOI: https://doi.org/10.25560/105155
Copyright Statement: Creative Commons Attribution NonCommercial NoDerivatives Licence
Supervisor: Leutenegger, Stefan
Davison, Andrew
Sponsor/Funder: High Performance Embedded and Distributed Systems Centre for Doctoral Training (HiPEDS CDT)
James Dyson Foundation
Department: Computing
Publisher: Imperial College London
Qualification Level: Doctoral
Qualification Name: Doctor of Philosophy (PhD)
Appears in Collections: Computing PhD theses
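
The first contribution described in the abstract, using proprioception to stabilise visual pose tracking under rapid, high-jerk motion or in views with little visual variation, can be pictured with a minimal sketch. Everything below is an assumption for illustration (the function name, the scalar weight, and the simple SO(3)/translation blend); the thesis's actual estimator is a full visual-kinematic tracking system, not this heuristic.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_poses(T_vis, T_kin, w_vis):
    """Blend a visual tracker's camera pose with a proprioceptive one.

    T_vis, T_kin: 4x4 homogeneous poses from visual tracking and from
    forward kinematics of the arm (plus base odometry). w_vis in [0, 1]
    weights the visual estimate; it would be set low when tracking
    residuals are high, e.g. under fast, high-jerk motion or in a
    near-textureless view.
    """
    # Interpolate the rotations on SO(3) rather than averaging matrices.
    rots = Rotation.from_matrix(np.stack([T_kin[:3, :3], T_vis[:3, :3]]))
    R = Slerp([0.0, 1.0], rots)(w_vis).as_matrix()
    # Translations can be blended linearly.
    t = (1.0 - w_vis) * T_kin[:3, 3] + w_vis * T_vis[:3, 3]
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

A real system would fuse the two sources probabilistically (for example in a filter or a joint optimisation), but the sketch captures the core idea: let proprioception dominate when vision degrades.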
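The second contribution, registering and aligning rigid objects between sequences using semantics, keypoints, and 3D geometry, ultimately reduces to estimating a rigid transform from corresponding 3D points. Assuming correspondences have already been found (objects matched by semantic class, then image keypoints matched between observations), the standard Kabsch least-squares alignment recovers the object's motion; this is a generic textbook step, not necessarily the exact formulation used in the thesis.

import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst[i] ≈ R @ src[i] + t.

    src, dst: (N, 3) arrays of corresponding 3D keypoints on one rigid
    object observed in two different sequences (Kabsch algorithm).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

In practice keypoint matches contain outliers, so an estimate like this would be wrapped in RANSAC, with the semantic labels pruning candidate object pairs beforehand.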