Robustness in natural language processing
File(s)
Author(s)
Li, Zhenhao
Type
Thesis or dissertation
Abstract
The recent success of Natural Language Processing (NLP) has been driven by the evolution of pre-trained language models (PLMs), which achieve state-of-the-art performance across a wide range of tasks such as machine translation, sentiment analysis, and question answering. Despite these successes, ensuring the robustness of NLP systems remains a significant challenge, especially when faced with non-standard inputs such as noisy and adversarial texts. In this thesis, we focus on three common robustness issues in NLP systems: noisy texts, textual adversarial attacks, and unintended data bias. We address the robustness issue arising from textual noise by integrating extra visual and conversational contexts.
We first incorporate visual contexts and error correction training in multimodal translation. Our findings reveal that integrating multimodality and error correction effectively mitigates information loss from noise and improves translation robustness. We then extend the exploration of noise robustness to multimodal conversational modeling by proposing a new task, multimodal conversation derailment detection, and curating a corresponding dataset. We propose a multimodal hierarchical model that combines textual, visual, and conversational contexts to predict conversation derailment, and experimentally demonstrate the efficacy of conversational contexts in noisy settings. To counter adversarial attacks, we propose DiffuseDef, a flexible defense method that trains a diffusion layer on top of a PLM and iteratively denoises the encoder hidden representations. By removing latent noise in adversarial texts, our method achieves state-of-the-art performance against common black-box and white-box adversarial attacks. Finally, we tackle a robustness concern that stems from model training itself: unintended bias. We propose an adversarial multitask model that extracts task-common representations and disentangles the task-specific representations that can lead to bias. Our experiments on toxicity detection show that the proposed method successfully mitigates unintended biases arising from three related tasks.
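The iterative denoising idea described in the abstract can be illustrated with a minimal sketch: a small layer predicts residual noise in the encoder hidden states and is applied repeatedly before classification. All names, shapes, and the update rule below are illustrative assumptions, not the thesis's actual DiffuseDef architecture or training procedure; with untrained random weights the loop only demonstrates the structure, not real denoising quality.

```python
import numpy as np

def denoise_step(hidden, weight, bias, step_size=0.1):
    # Hypothetical single denoising step: estimate the noise component of
    # the hidden representation and subtract a scaled version of it.
    noise_estimate = np.tanh(hidden @ weight + bias)
    return hidden - step_size * noise_estimate

def iterative_denoise(hidden, weight, bias, steps=5):
    # Apply the denoising layer repeatedly to the (possibly adversarially
    # perturbed) encoder hidden states, mimicking iterative diffusion-style
    # refinement before the downstream classifier consumes them.
    for _ in range(steps):
        hidden = denoise_step(hidden, weight, bias)
    return hidden

# Toy usage with simulated encoder states and an additive perturbation.
rng = np.random.default_rng(0)
dim = 8
clean = rng.normal(size=(1, dim))
noisy = clean + 0.5 * rng.normal(size=(1, dim))  # simulated adversarial noise
W = rng.normal(scale=0.1, size=(dim, dim))       # illustrative layer weights
b = np.zeros(dim)
denoised = iterative_denoise(noisy, W, b)
```

In practice such a layer would be trained (e.g., with a diffusion-style noising/denoising objective) so that the iterated updates move perturbed representations back toward the clean manifold; this sketch only shows the inference-time iteration pattern.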
Version
Open Access
Date Issued
2025-07-24
Date Awarded
2025-10-01
Copyright Statement
Attribution-NonCommercial-NoDerivatives 4.0 International Licence (CC BY-NC-ND)
Advisor
Specia, Lucia
Rei, Marek
Publisher Department
Department of Computing
Publisher Institution
Imperial College London
Qualification Level
Doctoral
Qualification Name
Doctor of Philosophy (PhD)