Network representation matrices and their eigenproperties: a comparative study
File(s)
Author(s)
Lutzeyer, Johannes
Type
Thesis or dissertation
Abstract
Typically, network structures are represented by one of three representation matrices: the adjacency matrix, the unnormalised graph Laplacian or the normalised graph Laplacian.
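As a minimal illustration of these three representation matrices, the following sketch builds them for a small hypothetical example graph (the graph and all variable names are illustrative, not taken from the thesis):

```python
import numpy as np

# Hypothetical 4-node path graph as a running example.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

degrees = A.sum(axis=1)                # node degrees
D = np.diag(degrees)                   # degree matrix

L = D - A                              # unnormalised graph Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(degrees))
L_sym = D_inv_sqrt @ L @ D_inv_sqrt    # symmetric normalised Laplacian

# Both Laplacians are positive semi-definite with smallest eigenvalue 0;
# the normalised Laplacian's spectrum lies in [0, 2].
print(np.linalg.eigvalsh(L))
print(np.linalg.eigvalsh(L_sym))
```

All three matrices encode the same graph, but their spectra differ, which is the starting point for the comparisons below.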
We review their spectral (eigenvalue) and eigenvector properties and recall several relations to graph theoretic quantities. We compare the representation spectra pairwise by applying an affine transformation to one of them, which enables comparison whilst preserving certain key properties such as normalised eigengaps. Bounds are given on the eigenvalue and normalised eigengap differences thus found, which depend on the minimum and maximum degree of the graph. It is found that if the degree extreme difference is large, different choices of representation matrix may give rise to disparate inference drawn from network analysis; smaller degree extreme differences result in consistent inference, whatever the choice of representation matrix.
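The invariance that makes the affine comparison work can be checked numerically: an affine map t(x) = ax + b with a > 0 rescales and shifts a spectrum, but leaves the normalised eigengaps (consecutive gaps divided by the spectral spread) unchanged. A minimal sketch, with an arbitrary symmetric matrix standing in for a representation matrix:

```python
import numpy as np

def normalised_eigengaps(eigvals):
    """Consecutive eigenvalue gaps divided by the spectral spread."""
    w = np.sort(eigvals)
    spread = w[-1] - w[0]
    return np.diff(w) / spread

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
M = (M + M.T) / 2                 # a symmetric stand-in representation matrix
w = np.linalg.eigvalsh(M)

a, b = 2.5, -1.0                  # an affine map t(x) = a*x + b, with a > 0
w_affine = a * w + b

# Gaps scale by a and so does the spread, so normalised eigengaps agree.
assert np.allclose(normalised_eigengaps(w), normalised_eigengaps(w_affine))
```

This is why comparing one spectrum against an affine transformation of another is a fair comparison with respect to normalised eigengaps.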
In order to compare the representation matrix eigenvectors, we extend the applicability and tighten the Davis--Kahan theorem by making use of polynomial matrix transformations. Our extended version of the Davis--Kahan theorem enables the comparison of the spaces spanned by sets of eigenvectors of any two symmetric matrices, such as the representation matrices. We then discuss how globally optimal bound values of this extended theorem can be computed in practice for affine transformations using fractional programming theory. We make use of our implementation of the extended Davis--Kahan theorem to compare the spaces spanned by the eigenvectors of the different graph representation matrices for a range of different stochastic blockmodel graphs and of covariance matrices in a spiked covariance model.
Version
Open Access
Date Issued
2019-09
Online Publication Date
2020-08-31T06:00:19Z
2020-09-15T13:38:25Z
Date Awarded
2020-03
Copyright Statement
Creative Commons Attribution NonCommercial NoDerivatives Licence
Advisor
Walden, Andrew
Sponsor
Engineering and Physical Sciences Research Council
Grant Number
EP/M506345/1 (Oct 2015 - Sept 2018); EP/M507878/1 (Oct 2018 - Mar 2019)
Publisher Department
Department of Mathematics
Publisher Institution
Imperial College London
Qualification Level
Doctoral
Qualification Name
Doctor of Philosophy (PhD)