Control designs and reinforcement learning-based management for software defined networks

File: Zhang-Z-2020-PhD-Thesis.pdf (Thesis, 4.67 MB, Adobe PDF)
Title: Control designs and reinforcement learning-based management for software defined networks
Authors: Zhang, Ziyao
Item Type: Thesis or dissertation
Abstract: In this thesis, we focus our investigations on the novel software defined networking (SDN) paradigm. The central goal of SDN is to smoothly introduce centralised control capabilities to otherwise distributed computer networks. This is achieved by abstracting and concentrating network control functionalities in a logically centralised control unit, referred to as the SDN controller. To further balance centralised control against scalability and reliability considerations, distributed SDN is introduced to enable the coexistence of multiple physical SDN controllers. In distributed SDN, networking elements are grouped into domains, with each domain managed by an SDN controller. In such a distributed SDN setting, the SDN controllers of all domains synchronise with each other to maintain logically centralised network views, a process referred to as controller synchronisation. Centred on the problem of SDN controller synchronisation, this thesis addresses two aspects of the subject. First, we model and analyse the performance enhancements brought by controller synchronisation in distributed SDN from a theoretical perspective. Second, we design intelligent controller synchronisation policies by leveraging existing, and creating new, Reinforcement Learning (RL) and Deep Learning (DL)-based approaches.

To understand the performance gains of SDN controller synchronisation from a fundamental and analytical perspective, we propose a two-layer network model based on graphs to capture various characteristics of distributed SDN networks. We then develop two families of analytical methods to investigate the performance of distributed SDN in relation to network structure and the level of SDN controller synchronisation. The significance of our analytical results is that they quantify the contribution of the controller synchronisation level to improving network performance under different network parameters. They therefore serve as fundamental guidelines for future SDN performance analyses and protocol designs.

For the design of SDN controller synchronisation policies, most existing works focus on the engineering-centred system design aspect of the problem, ensuring anomaly-free synchronisation. Instead, we emphasise performance improvements with respect to (w.r.t.) various networking tasks when designing controller synchronisation policies. Specifically, we investigate scenarios with diverse control objectives, ranging from routing-related performance metrics to more sophisticated optimisation goals involving communication and computation resources in networks. We also take into consideration factors such as the scalability and robustness of the policies developed. To this end, we employ machine learning techniques to assist our policy designs. In particular, we model SDN controller synchronisation as a sequential decision-making process and resort to RL-based techniques for developing the synchronisation policy, leveraging a combination of RL and DL methods tailored to the specific characteristics and requirements of different scenarios. Evaluation results show that our designed policies consistently outperform some already in-use controller synchronisation policies, in certain cases by considerable margins.
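To make the sequential decision-making framing concrete, the sketch below casts controller synchronisation as a toy Markov decision process and trains a tabular Q-learning policy that chooses which domain to synchronise at each step. The state encoding (per-domain view staleness), the reward (negative total staleness), and all names such as `step` and `train` are illustrative assumptions for this page only, not the formulation, rewards, or algorithms used in the thesis.

```python
# Toy sketch: controller synchronisation as a sequential decision process.
# Assumed model (not from the thesis): state = per-domain view staleness,
# action = which domain to synchronise, reward = negative total staleness.
import random
from collections import defaultdict

N_DOMAINS = 4        # number of SDN domains (hypothetical)
MAX_STALENESS = 3    # staleness is capped to keep the state space finite
EPISODE_LEN = 20

def step(state, action):
    """Synchronise domain `action` (staleness -> 0) and age the other domains by 1."""
    nxt = tuple(0 if i == action else min(s + 1, MAX_STALENESS)
                for i, s in enumerate(state))
    reward = -sum(nxt)  # fresher network views earn higher reward
    return nxt, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over the toy synchronisation MDP."""
    q = defaultdict(float)  # Q[(state, action)]
    for _ in range(episodes):
        state = tuple([MAX_STALENESS] * N_DOMAINS)
        for _ in range(EPISODE_LEN):
            if random.random() < eps:  # epsilon-greedy exploration
                action = random.randrange(N_DOMAINS)
            else:
                action = max(range(N_DOMAINS), key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in range(N_DOMAINS))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    s = tuple([MAX_STALENESS] * N_DOMAINS)
    print("Greedy first action from a fully stale state:",
          max(range(N_DOMAINS), key=lambda a: q[(s, a)]))
```

In this toy setting the learned policy simply keeps every domain's view fresh; the thesis replaces such a hand-crafted staleness reward with task-level objectives (e.g. routing performance and communication/computation resource costs) and with DL-based function approximation where tabular methods do not scale.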
While exploring existing RL algorithms for solving our problems, we identify critical issues embedded in these algorithms, such as the enormity of the state-action space, which can cause inefficiency in learning. To address these issues, we propose a novel RL algorithm named state-action separable reinforcement learning (sasRL). The sasRL approach thus constitutes another major contribution of this thesis in the field of RL research.
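The state-action space enormity mentioned above can be made concrete with a quick count. The sketch below uses purely hypothetical parameters (per-domain staleness levels and a per-step synchronisation budget) to show how the number of (state, action) pairs grows combinatorially with the number of domains; it only illustrates the motivation and does not implement the sasRL algorithm.

```python
# Back-of-the-envelope count of (state, action) pairs for a toy synchronisation model.
# Hypothetical parameters: each domain's view staleness takes one of K levels,
# and an action selects which B of the N domains to synchronise in one step.
from math import comb

K = 4  # staleness levels per domain (assumed)
B = 2  # synchronisation budget per step (assumed)

for n_domains in (4, 8, 16, 32):
    n_states = K ** n_domains        # joint staleness vector over all domains
    n_actions = comb(n_domains, B)   # choice of which domains to synchronise
    print(f"{n_domains:2d} domains: {n_states * n_actions:.3e} state-action pairs")
```

Even for these small hypothetical numbers the joint space quickly exceeds what a tabular or naive learner can visit, which is the kind of inefficiency the thesis's sasRL algorithm is designed to mitigate.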
Content Version: Open Access
Issue Date: Aug-2020
Date Awarded: Nov-2020
URI: http://hdl.handle.net/10044/1/84605
DOI: https://doi.org/10.25560/84605
Copyright Statement: Creative Commons Attribution-NonCommercial-NoDerivatives licence
Supervisor: Leung, Kin Kwong
Sponsor/Funder: DAIS-ITA Project
Funder's Grant Number: W911NF-16-3-0001
Department: Electrical and Electronic Engineering
Publisher: Imperial College London
Qualification Level: Doctoral
Qualification Name: Doctor of Philosophy (PhD)
Appears in Collections: Electrical and Electronic Engineering PhD theses


