Decreasing the Discolouration Risk of Drinking Water Distribution Systems through Optimised Topological Changes and Optimal Flow Velocity Control

In this paper, a new mathematical framework is proposed for maximising the self-cleaning capacity (SCC) of drinking water distribution systems by controlling diurnal peak flow velocities in pipes under normal operation. This is achieved through an optimal change of the network connectivity (topology). We propose an efficient algorithm for network analysis of valve closures, which allows enforcing favourable changes in flow velocities for maximising the SCC by determining an optimal set of links to isolate in forming a more branched network, while satisfying hydraulic and regulatory pressure constraints at demand nodes. Multiple stochastic demands from an end-use demand model are generated to test the robustness of the improved SCC for the modified network connectivity under changing demand. We use an operational network model to demonstrate the efficacy of the proposed approach.


INTRODUCTION
An essential operational objective for water utilities is the monitoring and control of drinking-water quality (Vreeburg and Boxall 2007). Although customer concerns can sometimes arise from microbial water quality problems (Ainsworth 2013) or dissolved chemical hazards (Bellinger 2016), discolouration events are by far the main cause of customer complaints globally (Vreeburg and Boxall 2007; Husband and Boxall 2011; Armand et al. 2015). Therefore, the control of discolouration events has become a top priority for water utilities. In the UK, the aim to reduce these events is further motivated by financial incentives and penalties that encourage a reduction in customer contacts (The Chief Inspector of Drinking Water 2015; OFWAT 2013).
In the last two decades, significant progress has been made in the understanding of the physical processes and mechanisms that result in discolouration; a comprehensive review can be found in Vreeburg and Boxall (2007) and in Armand et al. (2015). Advances in causal analysis of discolouration have shown that it is mainly caused by the mobilization of particles that have accumulated within the distribution system. Whatever the origin of these particles may be (i.e. the water source, biofilm formation from dissolved organic matter (Gauthier et al. 1999), particulates from treatment processes (Vreeburg et al. 2008) or corrosion in iron pipes (Sarin et al. 2001)), they can accumulate on the pipe walls through gravitational sedimentation, non-gravitational accumulation, entrapment in biofilms or via a complex interaction of these processes (Vreeburg and Boxall 2007). The particles can then detach from the pipe walls and flow to customer taps when significant changes to normal hydraulic regimes and flow velocities cause higher scouring forces and shear stresses, potentially causing a discolouration event (Husband and Boxall 2011; Aisopou et al. 2014).
Although the pathways to discolouration and its occurrence are unavoidable, the magnitude or severity of discolouration and its frequency can be controlled through the optimisation of network design, maintenance and operations (Vreeburg 2010). One of many complementary approaches is to reduce or prevent particulates entering the WDS, mainly by improving treatment processes, and by reducing external contamination in repairs and backflow from leaks (Blokker and Schaap 2015). For corroding metallic pipes, maintenance and rehabilitation (i.e. cleaning, replacement and relining of sections of the network) is an option (Husband and Boxall 2016). A third approach is to design self-cleaning networks with daily flow velocities that prevent residence and material accumulation (van den Boomen et al. 2004). In some cases the redesigning of oversized looped WDSs (i.e. larger diameter looped networks with high fire-fighting demands) to have self-cleaning velocities has been successful (Vreeburg and Boxall 2007). Since the above approaches are logistically difficult and capital intensive to scale, a conventional approach taken by utilities for controlling discolouration events is the use of periodic flushing through fire hydrants, air/water scouring and pigging to remove accumulated particulates (Vreeburg and Boxall 2007; Husband and Boxall 2016).
Traditionally, flushing is done reactively after customer complaints about discoloured water or complaints of experiencing reduced pressure (Vreeburg and Boxall 2007; Husband and Boxall 2011). Since the particulate accumulation levels and their potential for discolouration vary across the network and depend on bulk water quality, and on pipe types when corroding, an effective proactive flushing regime requires a scheduling tool to determine the flushing frequency and risk levels for different pipes. Although such an optimal scheduling tool does not yet exist, proactive flushing techniques have been proposed by making use of predictive tools for analysing risk of discolouration in prioritising flushing or determining flushing frequencies. For example, the resuspension potential method (RPM) (Vreeburg et al. 2004), a practical measuring tool, has been used in the Netherlands to determine flushing frequencies (Blokker and Schaap 2015). First isolating a pipe, RPM determines the pipe's particle mobility, and so its discolouration risk, by increasing flow velocity by 0.35 m/s over a 5-15 minute period and monitoring turbidity levels. An alternative predictive tool called PODDS (prediction of discolouration events in distribution systems) is simulation based (Boxall and Dewis 2005; Husband and Boxall 2010). It exploits cohesive transport models that are calibrated using empirical data describing particulate layer strength and mobilization mechanisms. In Furnass et al. (2014), a new model extends the capability of PODDS to also reliably model material accumulation/regeneration processes. This is done in software by tracking over time both the erosion and accumulation of cohesive layers of discolouration material. Such models for a range of materials can be used to simulate and predict turbidity as a result of hydraulic changes introduced by burst events, sudden increases in demand, rezoning, etc., or potentially to prioritise flushing (Husband and Boxall 2015). Although these predictive tools are man-hour intensive at the moment and so have limited scalability in improving the effect of dedicated flushing (Vreeburg and Boxall 2007), with decreasing equipment costs (e.g. turbidity monitors), employing them as an operational standard by utilities could help protect water quality.

This is an Accepted Manuscript of an article published by ASCE in the Journal of Water Resources Planning and Management, 201?, Vol. 14?, Issue ?, available online: https://doi.org/10.???
By analysing flushing experiments in a real WDS with RPM, coupled with simulations using realistic demand data, it was shown in van den Boomen et al. (2004) and Blokker et al. (2010a) that a minimum self-cleaning velocity can be defined for pipes to reduce possible fouling by "cleaning" with the day-to-day maximum flows in the system during the morning demand peak. These experimental results in Dutch and Flemish (Belgian) networks, where the distribution pipes in the networks studied are predominantly PVC and AC, have shown that a velocity of 0.4 m/s is sufficient for self-cleaning (Vreeburg and Boxall 2007). For the same and similar networks, it has been shown in Blokker et al. (2011) that velocities above the threshold of 0.2-0.25 m/s can keep pipes clean. Ideally, the WDS diameter sizing and connectivity would be designed to have high velocities and unidirectional flow to mobilise particles and keep pipes clean; this is rarely the case. An alternative is to introduce more unidirectional flows with higher velocity by changing the topology of the network through the closure of selected pipe sections between isolation valves, i.e. making the network more branched with shorter pipe lengths. It was shown that this could work in real networks in experiments where selected valves were closed over a few months (Blokker et al. 2012; Blokker et al. 2011). Of course, the reconfiguration may result in some pipes having a velocity below the self-cleaning velocity threshold; the aim is to get an overall lower fouling rate for the whole network (Blokker et al. 2012). Although not a panacea for discolouration risk mitigation in all networks, self-cleaning has been shown to work well for networks in the Netherlands and in Belgium (Blokker et al. 2011), where only a small percentage of the network is made of unlined cast iron pipes. Nonetheless, it has been shown in Vreeburg et al.
(2008) that the most important contribution to discolouration is particle deposition (from treatment works, and to a smaller extent from upstream cast iron pipes) under the influence of hydraulics. Therefore, with the replacement and relining of more cast iron pipes, self-cleaning networks may be even more widely applicable globally.
In the present article, we investigate the self-cleaning capacity (SCC) of water distribution systems and how it can be optimised through a change of network topology (i.e. by closing an appropriate selection of isolation valves). We propose a new mathematical framework and a scalable computational tool to optimise the self-cleaning capacity of an already existing network. To do so, we first derive a linear flow network model approximation for the WDS so that we can compute the impact of valve closures on flow velocity using a scalable linear graph theory tool. We then show, using an operational network model, that explicitly optimising topology for SCC significantly increases the percentage of distribution pipe lengths that experience diurnal maximum velocities above a given self-cleaning velocity threshold under changing demands. We also study the sensitivity of the optimal valve closures and the resulting SCC to the self-cleaning threshold velocity used in the optimisation. Multiple stochastic demands, generated with the end-use model SIMDEUM (Blokker et al. 2010a), are used to test the robustness of the new topologies.

PROBLEM FORMULATION: INCREASING FLOW VELOCITY THROUGH TOPOLOGICAL AND OPERATIONAL OPTIMIZATION
The optimization problem we pose here is: "find links to close from a set of candidates (i.e. links in a set N_v, which can be valves between sections of pipes, for example)" such that the number of pipes, or the total length of pipe sections, that satisfy self-cleaning velocity thresholds in all otherwise open pipes is maximised. The pipes for which we aim to increase self-cleaning capacity may also be an indexed subset of the set of all pipes (N_c ⊂ N_p); for example, we may want to concentrate the cleaning capacity on a chosen selection of pipes that are known to be vulnerable to particle accumulation and discolouration processes.
These can be determined based on customer complaints, RPM (i.e. previous flushing results) or on the presence of vulnerable customers such as those undergoing kidney dialysis. Another alternative is to consider the risk associated with the type and number of customers potentially affected by particle detachment from specific pipes. The problem can be mathematically formulated as:

maximize_w   Σ_{t ∈ T} Σ_{j ∈ N_c} L_j k(v_j(t))
subject to   regulatory and performance constraints for the WDS,
             hydraulic constraints of the system,
             constraints on the number of valves closed,   (1)

where the binary variable w_j ∈ {0, 1} defines whether a valve j is closed or not and the time indices t lie in a given period T; for example, a set of high demand periods when the daily maximum flow velocities occur. The indicator function k(v_j) signifies whether the magnitude of the velocity in the j-th pipe, with length L_j, is above a given threshold v_{j,min} for that pipe, i.e.

k(v_j) = 1 if |v_j| ≥ v_{j,min}, and k(v_j) = 0 otherwise.   (2)

The optimization of the network topology (i.e. valve closures) to maximize self-cleaning capacity in (1) is a mixed-integer nonlinear programming (MINLP) problem. This class of optimization problems for a WDS is especially challenging because it combines a large number of discrete decision variables with the difficulties of handling non-convex nonlinear hydraulic constraints (Pecci et al. 2016). In addition, the problem in (1) has an objective function that is non-smooth at ±v_min, resulting in unbounded gradients around ±v_min. To circumvent the challenges associated with solving the MINLP in (1), we make use of a computationally scalable approach to approximately solve the problem. We propose an iterative algorithm that uses efficient hydraulic simulations and linear graph analysis tools to identify isolation sections that will result in the most positive change to maximum flow velocities. Since the proposed graph analysis tool (which is similar to line-outage distribution analysis for linear electrical networks (Schaub et al.
2014)) is applicable only for linear flow networks, we first derive appropriate linearisations for our water distribution network models.We then present the algorithm that employs the linearisations in the following subsection.
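To make the objective in (1) and (2) concrete, the following short Python sketch evaluates the length-weighted SCC objective for a vector of simulated peak velocities at one time instant; the function name and data layout are illustrative only and not taken from the paper.

```python
import numpy as np

def scc_objective(v_peak, lengths, v_min):
    """Length-weighted self-cleaning objective of Eqs. (1)-(2): the total
    length of candidate pipes whose peak velocity magnitude reaches the
    self-cleaning threshold v_min."""
    v_peak = np.asarray(v_peak, dtype=float)    # peak velocity per candidate pipe [m/s]
    lengths = np.asarray(lengths, dtype=float)  # pipe lengths L_j [m]
    k = np.abs(v_peak) >= v_min                 # indicator k(v_j) of Eq. (2)
    return float(lengths[k].sum())

# Toy example: three pipes and a 0.2 m/s threshold; only the first two qualify
print(scc_objective([0.25, -0.3, 0.1], [100.0, 50.0, 200.0], 0.2))  # 150.0
```

Since the comparison is elementwise, a per-pipe threshold vector v_{j,min} can be passed in place of the scalar v_min without changing the code.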

Linearized Model of Network Flows for Topological Optimisation
In this article, we consider networks using demand-driven hydraulic analysis, where the demand is assumed known at a given time instant. For a network with n_p links connecting n_n (< n_p) unknown head nodes, and n_0 known head nodes, we define the vector of unknown flows and pressure heads as q = [q_1, ..., q_{n_p}]^T and h = [h_1, ..., h_{n_n}]^T, respectively. Let pipe p_j have flow q_j going from node i to node k, and with pressure heads h_i and h_k at nodes i and k, respectively. The frictional headloss (or flow resistance) across the pipe can then be represented as:

h_i − h_k = r_j q_j |q_j|^{n_j − 1},   (3)

where r_j, the resistance coefficient of the pipe, can be modelled as either independent of the flow in the Hazen-Williams (HW) models or implicitly dependent on flow q_j in Darcy-Weisbach (DW) models (Larock et al. 1999, (2.10)), and n_j is the headloss exponent. With head loss equations defined for each pipe and the fixed heads and demands for each node taken into account, the steady-state fluid flows in a water network must satisfy the two hydraulic principles:

A_{12}^T q = d,   (4)
A_{11} q + A_{12} h + A_{10} h_0 = 0,   (5)

where the variables h_0 ∈ R^{n_0} and d ∈ R^{n_n} represent the known heads (e.g.
at a reservoir or tank) and demands at nodes, respectively. While (4) guarantees the conservation of flow at each junction node, (5) accounts for the frictional head loss across all links. Here, the matrices A_{12}^T ∈ R^{n_n × n_p} and A_{10}^T ∈ R^{n_n × n_0} are the node-to-edge incidence matrices for the n_n unknown head nodes and n_0 fixed head nodes, respectively. For example, each link is associated with an n_n × 1 row vector in A_{12}: A_{12}(j, i) = 1 (or −1) if link j enters (or leaves) node i, and A_{12}(j, i) = 0 otherwise. The square matrix A_{11} ∈ R^{n_p × n_p} is a diagonal matrix with the elements representing part of the loss formula in (3). In (5), the frictional headloss function is expressed as a function of the flows. Using a nonlinear transformation of the headloss in pipe p_j, (3) can be reformulated to:

q_j = r_j^{−1/n_j} (h_i − h_k) |h_i − h_k|^{1/n_j − 1},   (7)

where pipe p_j is topologically represented as a directed arc going from node i to node k. A matrix form of (7) is

q = −Â_{11} (A_{12} h + A_{10} h_0),   (8)

where Â_{11}(j, j) = r_j^{−1/n_j} |h_i − h_k|^{1/n_j − 1}. It is evident from (7) that the flow in each pipe depends only on the head differential, and not on the pressure heads at the start and end nodes. For link j, we rewrite the energy conservation equation (7) as:

φ(w) := r_j q_j |q_j|^{n_j − 1} − θ_j = 0,   (9)

where w = [θ_j, q_j]^T and θ_j = h_i − h_k is the head differential. A first-order Taylor approximation of φ(·) around an operating point w̄ = [θ̄_j, q̄_j]^T gives:

φ(w) ≈ n_j r_j |q̄_j|^{n_j − 1} (q_j − q̄_j) − (θ_j − θ̄_j),   (10)

and the first-order flow approximation equation becomes

q_j ≈ q̄_j + g_j (θ_j − θ̄_j),  with  g_j = 1/(n_j r_j |q̄_j|^{n_j − 1}) = q̄_j/(n_j θ̄_j),   (12)

where the g_j represent the 'flow conductance' at the given operating point for pipe j, G := diag(g_j), and N := diag(n_j). We can then re-write the matrix form of the linearized flow equation approximation (12) as:

q = −G (A_{12} h + h̄_ext),   (13)

where h̄_ext = A_{10}(h_0 − h̄_0) + N(A_{12} h̄ + A_{10} h̄_0) − A_{12} h̄ can be considered as an external fixed head source (or external voltage source for the analogous electrical network (Schaub et al. 2014, Appx. A.1)). Note that we use the bar notation here because h̄_ext is a function of the operating point around which we linearize. Assuming the fixed head nodes are known (this is true as reservoir and tank levels are known/measured for operational simulation), or that h_0 does not change in the operational period considered for the linearization, h̄_ext is a constant and can be computed explicitly from the operating point data.
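As a rough numerical illustration of this linearisation (a sketch under the headloss model h_i − h_k = r_j q_j |q_j|^{n_j − 1}; the helper names are ours, not the paper's):

```python
import numpy as np

def flow_conductance(q_bar, r, n):
    """Flow conductance g_j = 1 / (n_j * r_j * |q_bar_j|**(n_j - 1)): the slope
    of the flow-versus-head-differential curve at the operating flow q_bar."""
    q_bar = np.asarray(q_bar, dtype=float)
    return 1.0 / (n * r * np.abs(q_bar) ** (n - 1))

def linearized_flow(q_bar, g, theta, theta_bar):
    """First-order flow approximation q ~= q_bar + g*(theta - theta_bar),
    where theta is the head differential across the pipe."""
    return q_bar + g * (theta - theta_bar)

# A Darcy-Weisbach-like pipe (n = 2, r = 1) operating at q_bar = 2, so the
# operating head differential is theta_bar = r*q_bar*|q_bar| = 4 and g = 0.25.
g = flow_conductance(2.0, 1.0, 2.0)
q_lin = linearized_flow(2.0, g, 4.41, 4.0)   # exact flow would be sqrt(4.41) = 2.1
```

In this toy case the linear estimate is within about 0.2% of the exact flow, the kind of local accuracy the algorithm relies on for small perturbations around the operating point.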
Flow Velocity Changes under Valve Closures: A Line Outage Distribution Factor Approach
Determining the 'impact' of changes in topology when some sections are closed off is a ubiquitous problem in WDS analysis. For example, utilities use critical link analysis (CLA) to measure network vulnerability, which consists of isolating sections of pipes in the network model, running hydraulic analysis on the model with the corresponding links isolated, and recording how many customers would be affected by such a failure. A general limitation of such an approach is that the number of possible failure scenarios grows exponentially with network size, and so it is not a tractable tool (Herrera et al. 2016; Schaub et al. 2014). Similarly, in security analysis of the power grid, the impact of line outages on the electrical network is studied to put in place appropriate contingency plans (Wood and Wollenberg 1996). Because a security analysis often needs to be run in a real-time operational scenario, fast scalable algorithms that use linear sensitivity methods are employed to approximately quantify the impact of failures. Specifically, the line outage distribution factor (LODF) is used to quantify the extra redistributed flow on all other lines when a given distribution line fails. The LODF is computed for all edges that can fail without explicitly simulating their failure using the nonlinear system model; this is done by simply solving a set of linear equations. Since the problem of quantifying the impact of multiple simultaneous outages is combinatorially complex, LODFs can be iteratively evaluated to help identify the combination of failures that has the highest impact (Wood and Wollenberg 1996).
Assuming linear current flow on the electrical network edges, edge-to-edge graph analysis tools inspired by the LODF concept are proposed in Schaub et al. (2014). For a linear flow network, Schaub et al. (2014) show that the current flow redistribution matrix does not depend on the flow injections (i.e. demands at nodes and external voltage or current sources) but merely on the topology of the network. Moreover, an explicit form for the matrix is derived, which is computed by solving a set of linear equations. In problem (1), typically an isolation valve or a section of pipe between two valves is closed. Therefore, valve closures can be modelled as line outages. Using the language of Schaub et al. (2014), the equilibrium equations for the WDS we get after linearisation are (see also (4) and (13)):

A_{12}^T q = d,  q = −G (A_{12} h + h̄_ext),   (14)

where h̄_ext = A_{10}(h_0 − h̄_0) + N(A_{12} h̄ + A_{10} h̄_0) − A_{12} h̄.
Let the weighted graph Laplacian of a given nominal network be defined as L = A_{12}^T G A_{12}, with G computed using the link flows q̄ and heads h̄ of the original network around a given operating point. Now assume that we close link j of the WDS and analyse the first-order model (14). Applying the same derivation as in Schaub et al. (2014), the vector of changes in the link flows can be shown to be:

∆q = (q̄_j / ε_j) G A_{12} L^† b_j,   (15)

where b_j = A_{12}^T(:, j) is the incidence vector for the closed link, L^† is either the inverse or the Moore-Penrose pseudoinverse of the Laplacian, and q̄_j is the flow on link j before its closure. Let M = G A_{12} L^† A_{12}^T and ε_j = 1 − g_j b_j^T L^† b_j, which are defined as the edge-to-edge transfer matrix and the embeddedness measure in Schaub et al. (2014). We can then write the matrix of flow redistribution for all edges as:

∆q = K diag(q̄),   (16)

where K = [K_{ij}] = [M_{ij}/ε_j] is what we call the water flow redistribution matrix (WFRM) and diag(q̄) is the diagonal matrix with j-th diagonal element equal to q̄_j. In the current application, we are of course interested in assessing the impact of only a subset of link closures, i.e. a set of isolation valves of the network. Let B_f = A_{12}^T(:, N_v), where N_v is the index set of candidate valves to be closed, and retain only the columns of ∆q corresponding to N_v. Then, column j ∈ N_v of ∆q in (16) estimates the change in volumetric flow in all the pipes when the j-th valve is closed. The matrix of the changes in velocity ∆v is constructed by simply dividing the change in flow (16) in each pipe by the corresponding cross-sectional area. The new velocity vector when valve j is closed can then be approximated by

v_new = v + ∆v(:, j),   (17)

where v is the vector of velocities before closing valve j.
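The computation in (15) and (16) can be sketched in Python as follows. This is our own hedged illustration with illustrative names; for a purely linear flow network the redistribution formula is exact, while for the linearised WDS model it is a first-order estimate.

```python
import numpy as np

def flow_redistribution(A12, G, q_bar, valve_ids):
    """Estimate, in the spirit of Eqs. (15)-(16), the change in flow on every
    pipe when each single candidate link is closed.
    A12: n_p x n_n link-node incidence matrix; G: diagonal conductance matrix;
    q_bar: operating-point flows; valve_ids: indices of candidate links."""
    L = A12.T @ G @ A12                    # weighted graph Laplacian
    L_pinv = np.linalg.pinv(L)             # Moore-Penrose pseudoinverse (reusable)
    dq = np.zeros((A12.shape[0], len(valve_ids)))
    for col, j in enumerate(valve_ids):
        b_j = A12[j, :]                    # incidence vector of the closed link
        eps_j = 1.0 - G[j, j] * b_j @ L_pinv @ b_j   # 'embeddedness' of link j
        dq[:, col] = (q_bar[j] / eps_j) * (G @ A12 @ L_pinv @ b_j)
        dq[j, col] = -q_bar[j]             # the closed link itself carries no flow
    return dq
```

Dividing each row of dq by the corresponding pipe cross-sectional area gives the velocity changes ∆v used in (17).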
Note also that K can be computed by simply computing M and ε, where only the right-hand side b_j varies. Therefore, in performing the computations L^† b_j, the same factorization of L can be reused multiple times, reducing the computational cost further.
A pseudocode of the method for deriving a new topology by optimally selecting pipes to close off is presented in Algorithm 1. For clarity of notation, in (16) and (17), we show the approach by computing flow and velocity perturbations at a single time instant. In Algorithm 1, by simply iterating over all sampling points in time and associated demand values, the velocities are computed over any temporal range of interest.
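While Algorithm 1 itself is given in the paper, its greedy outer loop can be sketched generically as below. The function is a hedged illustration, not the paper's exact pseudocode: `evaluate_scc` stands for the hydraulic evaluation of the SCC objective over the time range of interest, and `is_feasible` stands for the pressure and regulatory constraint checks; both are placeholders supplied by the caller.

```python
def greedy_valve_selection(candidates, evaluate_scc, is_feasible, n_close):
    """Greedy outer loop in the spirit of Algorithm 1: repeatedly close the
    single candidate valve that most improves the SCC objective while
    remaining hydraulically feasible, stopping early if no closure helps."""
    closed = []
    best = evaluate_scc(closed)
    remaining = list(candidates)
    for _ in range(n_close):
        scores = {}
        for j in remaining:
            trial = closed + [j]
            if is_feasible(trial):          # e.g. minimum pressure constraints hold
                scores[j] = evaluate_scc(trial)
        if not scores:                      # every remaining closure is infeasible
            break
        j_star = max(scores, key=scores.get)
        if scores[j_star] <= best:          # no further improvement possible
            break
        best = scores[j_star]
        closed.append(j_star)
        remaining.remove(j_star)
    return closed, best
```

In the paper's approach, the linear redistribution estimates of the previous subsection are what make each candidate evaluation cheap enough to run inside such a loop.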

RESULTS AND DISCUSSION
Here we use an operational network model to analyse the efficacy of the proposed approach for increasing self-cleaning capacity in a WDS; the LODF-based algorithm for optimally choosing isolation valves (Algorithm 1) is applied to the Purmerend network from the Netherlands shown in Figure 1. This network is chosen because we have experimental results for validating the new algorithm by comparing the results with what is reported in Blokker et al. (2011) and references therein.
For the all-pipes model of the Dutch network in Purmerend, details on the pipe characteristics and demands can be found in Blokker et al. (2010a) and Blokker et al. (2011). In order to generate the demand patterns using SIMDEUM, area-specific data on households (e.g. household sizes and age distribution of residents) was used (Blokker et al. 2010a). The set of pipes (N_c) considered for increasing self-cleaning capacity are distribution pipes with diameters in the range 50 mm-300 mm; small pipes for customer connections are not considered. Large (transport) pipes are also not considered since their flow velocity is not impacted by the valve closures here. Algorithm 1 is used to identify the sections to close off by iteratively closing a single valve at a time.
In Blokker et al. (2011), the network area shown in Figure 1 was made more branched by closing off sections of pipes in loops. In the original "looped" network, only 5% (respectively 2%) of distribution pipes have velocities greater than 0.2 m/s (respectively 0.25 m/s). This is shown by the black solid lines in Figure 2a. It was also shown in Blokker et al. (2011) that with some 5% of pipe sections closed, the network was made to have some 37% of pipe lengths experiencing a maximum velocity above 0.2 m/s and some 30% experiencing a maximum velocity above 0.25 m/s. We replicate these results in Figure 2a, where a bottom-up demand model with a 36-second resolution is used to compute the cumulative frequency distribution (CFD) of maximum daily flow velocity (v_max) per distribution pipe segment. We also show in the same figure the impact of closing valves to explicitly maximise the self-cleaning capacity via Algorithm 1, versus the already good choices in Blokker et al. (2011) that were based on engineering know-how. We note that by closing only 5 valves via Algorithm 1, 50% of distribution pipes experience v_max above 0.2 m/s and around 40% of distribution pipes experience v_max above 0.25 m/s in the reconfigured topology. With 10 valves closed via Algorithm 1, the percentage of distribution pipes with v_max above 0.2 m/s grows to 58%, and the percentage of distribution pipes experiencing v_max above 0.25 m/s increases to 50%.
In Figure 2b, we show the CFD of maximum daily flow velocities across all pipes, computed using a top-down demand model with five-minute time-steps and some spatial aggregation of patterns. Typically, a top-down model has time-steps of 15 minutes, and multiplier patterns are allocated to nodes from pumping station data or from inlet flow measurements for sectorised networks, resulting in highly correlated demand patterns (Blokker 2010). The bottom-up approach, which aggregates demand from the household level, has been shown to give a much better representation of variability in residence times (Blokker 2010, Ch. 6) and more insight into flow reversals in looped networks (Blokker 2010, Ch. 8). In all cases, the top-down demand model underestimates the maximum velocities during a 24-hour simulation; the percentage of distribution pipes with v_max above 0.2 m/s (respectively 0.25 m/s) is underestimated by 3% to 11% for the different cases (shown in Figure 2a and Figure 2b). This is, of course, not surprising since the coarser spatial and temporal demand details result in a less accurate computation of flows in each pipe segment compared to the bottom-up SIMDEUM demand model (Blokker et al. 2010b; Blokker et al. 2011), where each household connection is assigned a unique demand at an order of magnitude higher temporal resolution; see (Blokker et al. 2016) for a review of SIMDEUM applications. This is in line with analysis in Blokker et al.
(2010a) showing that employing stochastic end-user models in hydraulic simulations is more accurate in modelling flow velocities at detailed temporal and spatial scales. This is in contrast to top-down models that average demands both spatially and temporally, often simply based on yearly metered user data or by dividing flow into a discrete metered area 'proportionally' by the number of users (Blokker 2010). In the future, with the spatial proliferation of smart flow meters that capture different forms of water usage at high temporal precision, more accurate (bottom-up) modelling of demand and the associated cleaning capacity is envisaged. However, we note that in all three cases the top-down model also reflects the progressively higher percentage of pipes that experience maximum flow velocities above the given threshold as the network is made more branched.
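The length-weighted cumulative frequency curves of Figure 2 can be reproduced from simulated velocities with a few lines. This sketch (our own, with illustrative names) reports, for each threshold, the fraction of total distribution-pipe length whose diurnal maximum velocity meets it.

```python
import numpy as np

def velocity_cfd(v_max, lengths, thresholds):
    """For each threshold, the fraction of total pipe LENGTH whose diurnal
    maximum velocity v_max meets or exceeds that threshold (cf. Figure 2)."""
    v_max = np.asarray(v_max, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    total = lengths.sum()
    return np.array([lengths[v_max >= t].sum() / total for t in thresholds])

# Four equal-length pipes: half the length reaches 0.2 m/s, a quarter 0.25 m/s
frac = velocity_cfd([0.30, 0.22, 0.10, 0.05], [100.0] * 4, [0.20, 0.25])
```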
In Figure 3, we plot the pipe flow velocity at 8 AM and the valves closed (dark red valves) to show how the different branched topologies reroute flows. The line colors and thickness represent the magnitude of the flow velocities. The results from Algorithm 1 are also positive when considering the robustness of self-cleaning networks to extreme changes in operations (i.e. continuity of supply in case of failures) (Agudelo-Vera et al. 2016). As shown in Agudelo-Vera et al. (2016), continuity of supply (measured as average customer minutes lost) is affected less for topologies with fewer valves closed.
In the branched topology of Blokker et al. (2011) (shown here in Figure 3a), the 5% of pipe sections that are closed off using 17 valves experience stagnant water; the effect of creating these dead-end water zones was deemed inconclusive in the experiments of Blokker et al. (2011). Nonetheless, periodically refreshing the water in these 'dead end' zones was proposed. In the new branched topologies proposed by Algorithm 1 (shown here in Figure 3b and Figure 3c), a smaller proportion of pipes is closed off for a similar percentage of pipe lengths that experience maximum velocities above a threshold. Therefore, fewer pipe sections experience stagnant water or need to be refreshed periodically. For the new topologies with 5 (respectively 10) valves closed with Algorithm 1, only 2.1% (respectively 2.7%) of pipe lengths experience stagnant water.
By modelling SCC and explicitly maximising it in our approach, we have shown that better self-cleaning capacity can be achieved by closing off far fewer valves (e.g. 5 valves vs 17 valves in Figure 2a). For smart water networks, it is now possible to automate valve closures, as reported in the field experimental studies of Wright et al. (2015). For example, a process can be automated to shut off the chosen pipe sections at peak hours only, reducing the need for, or frequency of, flushing otherwise permanently closed sections as in Blokker et al. (2011). By reducing the number of automated isolation valves needed for a given level of SCC, this approach will enable such implementations at a much lower cost of automation and maintenance.
In the above, we have chosen the isolation valves to close using an extended time simulation with a single realisation of user demands. To test the robustness of the different chosen configurations against stochastic variations in demand, we simulate the flow velocities using ten different realisations from SIMDEUM. Figure 4 shows the profiles for the original network, the configuration in Blokker et al. (2011), and the configurations derived by Algorithm 1 with 5 and 10 closed valves. We note that the trend in Figure 2a is repeated in all realizations and conclude that the performance of the new configurations is robust to day-to-day stochastic changes in demand. That is, the designs by Algorithm 1 have a similarly much higher percentage of pipe lengths experiencing maximum diurnal velocities above the self-cleaning threshold, compared to both the original looped network and the configuration in Blokker et al. (2011).
The experimental studies of Dutch and Flemish (Belgian) networks in Blokker et al. (2011), where the distribution pipes in the networks studied are predominantly PVC and AC, have shown that velocities above the threshold of 0.2-0.25 m/s can keep those pipes clean. For this reason, we have used 0.2 m/s as an example self-cleaning velocity threshold in the topology optimisation and analysis above. However, the proposed approach for increasing SCC is not wedded to this value of 0.2 m/s. Since the self-cleaning velocities for different pipes and network conditions can vary (Husband and Boxall 2010), in future work studying other networks internationally, it is possible to consider different threshold velocities for each pipe in maximising SCC in equation (1). Note that in our formulation, the indicator function k(v_j) in equation (2) is indexed for each pipe and so a different threshold velocity can be tailored for each pipe. This could be based on the existing conditioning velocity for each pipe, pipe material type and other factors that arise from ongoing research.
Here we examine the sensitivity of the proposed algorithm and the resulting topologies to this threshold velocity, to show how the maximum velocities experienced across the network change when the self-cleaning capacity is optimised with different self-cleaning velocity thresholds. Figure 5 shows the CDF of maximum velocities per pipe length for the network when a range of self-cleaning velocity thresholds is used. The three subfigures (from top to bottom) correspond to design threshold velocities v min of 0.2 m/s, 0.4 m/s and 0.6 m/s, respectively. In all cases, a much higher percentage of pipe lengths experiences higher velocities compared to the original looped network. However, depending on the self-cleaning velocity threshold used, the topologies (i.e. the valves closed) and the resulting maximum velocity CDFs differ. For example, if a threshold velocity of 0.4 m/s is used in the design (middle subfigure), around 30% of pipe lengths experience velocities above 0.4 m/s, compared to only approximately 21% when 0.2 m/s is used as the threshold self-cleaning velocity (top subfigure). This is expected, as the middle design explicitly optimises the length of pipes with maximum velocities above 0.4 m/s. However, the middle design attains this at the expense of fewer pipe lengths experiencing velocities above 0.2 m/s: about 43%, compared to 53% for the top design. Note also that only about 10% of pipe lengths experience maximum velocities above 0.6 m/s in the top design, whereas about 23% of pipe lengths experience such velocities when the threshold used is 0.6 m/s (bottom subfigure). Similarly, the bottom design attains this at the expense of fewer pipe lengths experiencing velocities above 0.2 m/s: about 38%, compared to 53% for the top design. These results indicate that, for each pipe, we should ideally use as accurate a self-cleaning velocity threshold as possible.
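The CDF metric discussed above can be illustrated with a short sketch. The function below computes the length-weighted fraction of pipes whose maximum diurnal velocity exceeds a given self-cleaning threshold; the pipe lengths and velocities are illustrative values only, not data from the Purmerend network.

```python
def fraction_above(lengths, v_max, threshold):
    """Length-weighted fraction of pipes whose maximum diurnal
    velocity exceeds the given self-cleaning threshold."""
    total = sum(lengths)
    above = sum(L for L, v in zip(lengths, v_max) if v > threshold)
    return above / total

# Illustrative pipe lengths (m) and simulated maximum diurnal velocities (m/s).
lengths = [120.0, 80.0, 200.0, 150.0, 50.0]
v_max = [0.45, 0.15, 0.62, 0.35, 0.22]

# Evaluate the same thresholds used in the Figure 5 sensitivity study.
for v_min in (0.2, 0.4, 0.6):
    frac = fraction_above(lengths, v_max, v_min)
    print(f"fraction of pipe length with v_max > {v_min} m/s: {frac:.2f}")
```

Evaluating one curve per candidate topology in this way reproduces the comparison made across the three subfigures.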
It is also worth noting that increased SCC comes at the price of higher head losses associated with the increased velocities in more distribution pipes. Nonetheless, since we explicitly enforce feasibility for each valve closure (see Algorithm 1, line 8), we can guarantee that the trade-off between increases in maximum diurnal velocities and energy losses is acceptable for the network operator. In the simulations presented here, explicit minimum pressure constraints of 150 kPa were imposed at all nodes.
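The feasibility test enforced for each candidate valve closure can be sketched as a simple pressure check. In this hypothetical sketch, `simulated_pressures_kpa` stands in for nodal pressures returned by a hydraulic solver (the paper uses a null space solver); the function simply verifies that every demand node keeps at least the 150 kPa regulatory minimum.

```python
MIN_PRESSURE_KPA = 150.0  # regulatory minimum imposed at all nodes

def closure_is_feasible(pressures_kpa, p_min=MIN_PRESSURE_KPA):
    """Accept a valve closure only if every demand node retains
    at least the minimum regulatory pressure."""
    return all(p >= p_min for p in pressures_kpa)

# Example: one node falls below 150 kPa, so this closure is rejected.
simulated_pressures_kpa = [310.0, 220.0, 148.0, 500.0]
print(closure_is_feasible(simulated_pressures_kpa))
```

Any candidate closure failing this check is discarded before it can enter the optimised topology, which is how the velocity/head-loss trade-off is kept acceptable.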
For water companies, it is a challenge to keep good track of which valves are supposed to be open or closed and what their current status is. For this reason, there may be a preference to make more permanent alterations to the network by removing sections of pipe, instead of only closing them off with isolation valves. This approach would widen the search space for optimising the network for self-cleaning capacity and is an ongoing research topic. We also note that conventional network layouts, although not designed to have a self-cleaning capacity, may have been designed to optimise other objectives such as continuity of supply and minimum pressure levels under changing circumstances (Agudelo-Vera et al. 2016). The comparative stress test of different network configurations in Agudelo-Vera et al. (2016) shows that self-cleaning layouts may incur more head losses but perform very well in terms of continuity of supply and water quality parameters. Future work will study how configurations optimised for SCC affect other objectives, as well as scalability to more varied network types. The explicit inclusion of continuity of supply in a multi-objective co-design framework will also be considered.

CONCLUSION
In this paper, we consider the proactive control of flow velocities to maximise the self-cleaning capacity (SCC) of water distribution systems under normal operation. This is done through an optimal change of the network topology by isolating some looped pipes using valves. By formulating a new mathematical optimisation framework for SCC, we derive a scalable algorithm for computing the best set of valves to close so as to maximise SCC. Applying our approach to a well-studied operational network from the Netherlands, we show that significant increases in the percentage of pipe lengths that achieve a self-cleaning velocity threshold can be achieved by closing far fewer valves than previously reported in the literature. Using stochastic demand models, we show that the new topologies also perform robustly under changing diurnal demand.
All computations were performed within MATLAB R2013b (64-bit) for Windows 7.0, installed on a 2.4 GHz Intel Xeon CPU E5-2665 with 16 cores. A null space hydraulic solver (Abraham and Stoianov 2016) was used to check the feasibility of valve closures in Algorithm 1.
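The iterative structure of Algorithm 1 (close valves one at a time while candidates remain and the budget k ≤ |N v | is not exhausted) can be sketched as a simple greedy loop. This is a hypothetical illustration only: `scc_gain` and `is_feasible` are stand-ins for the paper's SCC objective evaluation and the null space hydraulic feasibility check, and the greedy ranking is an assumption about the selection step.

```python
def greedy_valve_closures(candidate_valves, scc_gain, is_feasible, max_closures):
    """Iteratively close the feasible valve that most improves SCC,
    stopping when no candidate remains or the closure budget is reached."""
    closed, k = [], 0
    remaining = list(candidate_valves)
    while remaining and k < max_closures:  # "while valves to close & k <= |N_v|"
        # Rank remaining valves by SCC improvement, skipping closures that
        # would violate the minimum pressure constraints.
        best = max(
            (v for v in remaining if is_feasible(closed + [v])),
            key=lambda v: scc_gain(closed + [v]),
            default=None,
        )
        if best is None:
            break  # no feasible closure improves SCC further
        closed.append(best)
        remaining.remove(best)
        k += 1  # "set k to k + 1"
    return closed
```

With a toy gain function and a feasibility rule that forbids one valve, the loop returns the two highest-gain feasible closures in order of selection.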
FIG. 2: (a) Cumulative frequency distributions (CDFs) of maximum diurnal velocity per pipe length (i.e. the fraction of distribution pipe lengths experiencing a maximum daily velocity v max) when the network is made more branched by closing valves: via Algorithm 1 versus the configuration in Blokker et al. (2011), with bottom-up demand profiles used. Solid black lines show the CDF for the original looped network. (b) CDF of maximum diurnal velocity per pipe length for the configurations in (a), when a top-down demand model is used.
FIG. 3: A comparison of maximum flow velocities at 8 AM, depicted via a colour bar, for the original looped Purmerend network and the branched topologies for SCC. This is a snapshot of the results presented in Figure 2a.

FIG. 5: Sensitivity of the SCC to the threshold velocity used for self-cleaning topology design: cumulative frequency distribution of the fraction of pipe lengths experiencing a maximum daily velocity (v max) when different self-cleaning velocity thresholds (v min) are used in the design by Algorithm 1; results shown for closing 10 valves. Values of v min: (a) 0.2 m/s, (b) 0.4 m/s, (c) 0.6 m/s.
FIG. 4: CDFs of maximum daily flow velocities (v max) in distribution pipe segments, simulated with multiple realisations of stochastic demands. The bunches of curves on the left, middle and right are for the original looped network, the configuration in Blokker et al. (2011) with 17 valves closed, and the configuration with 10 isolation valves closed via Algorithm 1, respectively. Multiple SIMDEUM-generated demand profiles were used to test robustness against stochastic variations in demand.