
Artificial social constructivism for long term human computer interaction

File: Milanovic-K-2021-PhD-Thesis.pdf | Description: Thesis | Size: 13.69 MB | Format: Adobe PDF
Title: Artificial social constructivism for long term human computer interaction
Authors: Milanovic, Kristina
Item Type: Thesis or dissertation
Abstract: Connected devices, like smart speakers or autonomous vehicles, are becoming more common within society. As interactions with these devices increase, the likelihood of encountering errors will also increase. Errors are inevitable in any system, but they can be unexpected by human users and can therefore lead to trust breakdowns. This thesis proposes a new socio-technical system theory, based on social constructivism, which would encourage human users to continue using smart devices after unexpected errors occur and to recover trust. The theory, called Artificial Social Constructivism (ASC), hypothesises that mutual education of the human and computer agents creates a relationship between them that allows norms and values to be created and maintained within the system, even in the face of errors. Nine online experiments were conducted with a total of 4771 unique participants to investigate the computational viability of ASC. The experiments were framed as coordination games between a human and an artificial intelligence (AI) player. Participants undertook training and then played a game with the AI player, during which they encountered an unexpected error. While the type of training did not reduce negative feedback, undertaking any form of training changed participants' attitudes and responses. Participants who undertook training blamed themselves more than the AI player for the unexpected error, and blamed themselves increasingly as the task difficulty increased. Participants who undertook training were additionally 1.5 times more likely to change their responses to align with the AI player's when considering norms, and 2.5 times more likely when considering values. The experimental results supported the concept that an element of education caused participants to blame the AI player less for errors. ASC could therefore be implemented as a computational model. However, it may be necessary to address users' preconceived expectations of AI beforehand to prevent unethical applications of the theory.
Content Version: Open Access
Issue Date: Jun-2021
Date Awarded: Oct-2021
URI: http://hdl.handle.net/10044/1/92832
DOI: https://doi.org/10.25560/92832
Copyright Statement: Creative Commons Attribution NonCommercial Licence
Supervisor: Pitt, Jeremy
Department: Electrical and Electronic Engineering
Publisher: Imperial College London
Qualification Level: Doctoral
Qualification Name: Doctor of Philosophy (PhD)
Appears in Collections: Electrical and Electronic Engineering PhD theses



This item is licensed under a Creative Commons Licence.