  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

On Network Reliability

Cox, Danielle 03 June 2013 (has links)
The all-terminal reliability of a graph G is the probability that at least one spanning tree is operational, given that vertices are always operational and edges operate independently with probability p in [0,1]. In this thesis, an investigation of all-terminal reliability is undertaken. An open problem regarding the non-existence of optimal graphs is settled, and analytic properties, such as roots, thresholds, inflection points, fixed points and the average value of the all-terminal reliability polynomial on [0,1], are studied. A new reliability problem, the k-clique reliability of a graph G, is introduced. The k-clique reliability is the probability that at least one clique of size k is operational, given that vertices operate independently with probability p in [0,1]. For k-clique reliability the existence of optimal networks, analytic properties, associated complexes and the roots are studied. Applications to problems regarding independence polynomials are developed as well.
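For small graphs, the all-terminal reliability defined above can be evaluated exactly by enumerating the 2^m edge states and summing the probability of the connected ones. A minimal sketch of that definition (not the thesis's method; the example graph is arbitrary):

```python
from itertools import combinations

def connected(n, edges):
    """Check whether n vertices are connected by the given edges (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def all_terminal_reliability(n, edges, p):
    """Probability that the operational edges leave the graph connected
    (i.e. contain a spanning tree), each edge up independently with prob. p."""
    m = len(edges)
    total = 0.0
    for k in range(m + 1):
        for subset in combinations(edges, k):
            if connected(n, subset):
                total += p ** k * (1 - p) ** (m - k)
    return total

# Triangle K3: the known polynomial is 3p^2 - 2p^3, so p = 0.5 gives 0.5
print(all_terminal_reliability(3, [(0, 1), (1, 2), (0, 2)], 0.5))  # 0.5
```

The enumeration is exponential in the number of edges, which is exactly why the complexity questions studied in theses like this one matter.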
2

Network Reliability: Theory, Estimation, and Applications

Khorramzadeh, Yasamin 17 December 2015 (has links)
Network reliability is the probabilistic measure that determines whether a network remains functional when its elements fail at random. The definition of functionality varies depending on the problem of interest, so network reliability has much potential as a unifying framework for studying a broad range of problems arising in complex network contexts. However, since its introduction in the 1950s, network reliability has remained more an interesting theoretical construct than a practical tool. In large part, this is due to well-established complexity costs for both its evaluation and its approximation, which have led to the classification of network reliability as an NP-hard problem. In this dissertation we present an algorithm to estimate network reliability and then utilize it to evaluate the reliability of large networks under various descriptions of functionality. The primary goal of this dissertation is to pose network reliability as a general scheme that provides a practical and efficiently computable observable to distinguish different networks. Employing this concept, we are able to demonstrate how local structural changes can impose global consequences. We further use network reliability to assess the most critical network entities which ensure a network's reliability. We investigate each of these aspects of reliability by demonstrating some example applications. / Ph. D.
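The dissertation's estimation algorithm is not reproduced here; as a baseline, all-terminal reliability can be estimated by crude Monte Carlo sampling of edge states, a sketch under the standard i.i.d. edge-failure assumption:

```python
import random

def connected(n, edges):
    """Union-find connectivity check over n vertices."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def mc_reliability(n, edges, p, samples=20000, seed=1):
    """Crude Monte Carlo estimate of all-terminal reliability:
    sample edge states and count the fraction of connected outcomes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        up = [e for e in edges if rng.random() < p]
        hits += connected(n, up)
    return hits / samples

# Triangle at p = 0.9: the exact value is 3(0.9)^2 - 2(0.9)^3 = 0.972
est = mc_reliability(3, [(0, 1), (1, 2), (0, 2)], 0.9)
print(round(est, 2))
```

Naive sampling like this degrades for highly reliable networks (rare failures), which motivates the more sophisticated estimators such dissertations develop.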
3

Management and Control of Scalable and Resilient Next-Generation Optical Networks

Liu, Guanglei 10 January 2007 (has links)
Two research topics in next-generation optical networks with wavelength-division multiplexing (WDM) technologies were investigated: (1) scalability of network management and control, and (2) resilience/reliability of networks under faults and attacks. In scalable network management, the scalability of management information for inter-domain light-path assessment was studied. Light-path assessment was formulated as a decision problem based on decision theory and probabilistic graphical models. It was found that partial information can provide the desired performance, i.e., a small percentage of erroneous decisions can be traded off for a large saving in the amount of management information. In network resilience under malicious attacks, the resilience of all-optical networks under in-band crosstalk attacks was investigated with probabilistic graphical models. Graphical models provide an explicit view of the spatial dependencies in attack propagation, as well as computationally efficient approaches, e.g., the sum-product algorithm, for studying network resilience. With the proposed cross-layer model of attack propagation, key factors that affect the resilience of the network at the physical and network layers were identified. In addition, analytical results on network resilience were obtained for typical topologies including ring, star, and mesh-torus networks. In network performance upon failures, traffic-based network reliability was systematically studied. First, a uniform deterministic traffic model at the network layer was adopted to analyze the impacts of network topology, failure dependency, and failure protection on network reliability. Then a random network-layer traffic model with Poisson arrivals was applied to further investigate the effect of network-layer traffic distributions on network reliability.
Finally, asymptotic results for network reliability metrics with respect to arrival rate were obtained for typical network topologies under the heavy-load regime. The main contributions of the thesis include: (1) a fundamental understanding of scalable management and resilience of next-generation optical networks with WDM technologies; and (2) the innovative application of probabilistic graphical models, an emerging approach in machine learning, to the study of communication networks.
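The abstract above mentions analytical reliability results for ring topologies. For an n-node ring with independent link failures, the all-terminal reliability has the standard closed form p^n + n·p^(n-1)·(1-p): the ring stays connected iff at most one of its n links fails. A sketch of this textbook formula (an illustration, not the thesis's derivation):

```python
def ring_reliability(n, p):
    """All-terminal reliability of an n-node ring: connected iff
    zero or exactly one of the n links fail."""
    return p ** n + n * p ** (n - 1) * (1 - p)

# A 3-node ring is the triangle, so this matches 3p^2 - 2p^3 at p = 0.5
print(ring_reliability(3, 0.5))  # 0.5
```

This closed form is what makes rings a convenient benchmark topology in reliability studies.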
4

Modeling of vulnerability, reliability and risk for route choice in urban freight transport / Modelagem da vulnerabilidade, confiabilidade e risco para escolha de rota no transporte urbano de carga

George Vasconcelos Goes 06 April 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The problems of urban mobility are related, among other factors, to the spatial distribution of activities, the significant growth of automobile use combined with a deficient public transport system, and the negative impacts caused by freight-handling activities in densely populated areas. The concentration of population in cities gives urban centers the role of consumption nuclei, which must be supplied continuously by flows of very diverse nature and origin. Estimating the reliability, risk and vulnerability associated with a route chosen between various origin and destination points can change decision making: when the cost incurred on a route with greater risk, greater travel-time variability, or greater vulnerability to incidents exceeds the projected cost of a time-optimized route (based on the shortest path), the decision can be revised. The general objective of this study was to represent the generalized cost, incorporating the concepts of vulnerability, reliability and risk of an urban road network, for decision making regarding route choice in cargo transportation. To this end, a method was developed for modeling a generalized cost that incorporates the three attributes in different supply scenarios. The experiment showed the existence of a trade-off between average travel time, reliability/risk and the generalized cost. Travel time and cost information alone is not sufficient to meet the delivery conditions of the goods. For the carrier, knowledge of the reliability, or the risk, of delivering the goods can be a strategic pillar for cost reduction or for medium- and long-term market gains.
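The generalized-cost idea in the abstract above can be sketched as a weighted sum of mean travel time, its variability, and risk/vulnerability scores. The function, weights and numbers below are purely illustrative assumptions, not the thesis's calibrated model:

```python
def generalized_cost(mean_time, time_std, risk, vuln,
                     w_time=1.0, w_var=0.5, w_risk=2.0, w_vuln=1.5):
    """Hypothetical generalized cost of a route: weighted sum of mean travel
    time (minutes), travel-time standard deviation, and risk/vulnerability
    scores in [0, 1]. Weights are illustrative, not calibrated."""
    return (w_time * mean_time + w_var * time_std
            + w_risk * risk + w_vuln * vuln)

# Shortest-path route vs. a longer but more reliable alternative
fast = generalized_cost(mean_time=30, time_std=12, risk=0.8, vuln=0.6)
slow = generalized_cost(mean_time=35, time_std=3, risk=0.2, vuln=0.1)
print(fast, slow)  # the nominally slower route wins on generalized cost
```

This reproduces the trade-off the experiment found: once variability and risk carry weight, the time-optimal route is no longer the cost-optimal one.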
5

Reliability and Risk Assessment of Networked Urban Infrastructure Systems under Natural Hazards

Rokneddin, Keivan 16 September 2013 (has links)
Modern societies increasingly depend on the reliable functioning of urban infrastructure systems in the aftermath of natural disasters such as hurricanes and earthquakes. Apart from sizable capital for maintenance and expansion, the reliable performance of infrastructure systems under extreme hazards also requires strategic planning and effective resource assignment. Hence, efficient system reliability and risk assessment methods are needed to give system stakeholders insight into infrastructure performance under different hazard scenarios so that they can make informed decisions in response to them. Moreover, efficient assignment of limited financial and human resources for maintenance and retrofit actions requires new methods to identify critical system components under extreme events. Infrastructure systems such as highway bridge networks are spatially distributed systems with many linked components, so network models describing them as mathematical graphs with nodes and links naturally apply to studying their performance. Owing to their complex topology, general system reliability methods are ineffective for evaluating the reliability of large infrastructure systems. This research develops computationally efficient methods, such as a modified Markov chain Monte Carlo simulation algorithm for network reliability, and proposes a network reliability framework (BRAN: Bridge Reliability Assessment in Networks) that is applicable to large and complex highway bridge systems. Since the response of system components to hazard scenario events is often correlated, the BRAN framework enables accounting for correlated component failure probabilities stemming from different correlation sources.
Failure correlations from non-hazard sources are particularly emphasized, as they potentially have a significant impact on network reliability estimates, yet they have often been ignored or only partially considered in the infrastructure system reliability literature. The developed network reliability framework is also used for probabilistic risk assessment, with network reliability as the network performance metric. Risk analysis studies may require a prohibitively large number of simulations for large and complex infrastructure systems, as they involve evaluating network reliability for multiple hazard scenarios. This thesis addresses that challenge by developing network surrogate models with statistical learning tools such as random forests. The surrogate models can replace network reliability simulations in a risk analysis framework and significantly reduce computation times. The proposed approach therefore provides an alternative to established methods for enhancing the computational efficiency of risk assessments: it builds a surrogate model of the complex system at hand rather than reducing the number of analyzed hazard scenarios by hazard-consistent scenario generation or importance sampling. Nevertheless, the application of surrogate models can be combined with scenario reduction methods to improve analysis efficiency even further. To address the problem of prioritizing system components for maintenance and retrofit actions, two advanced metrics are developed in this research to rank the criticality of system components. Both metrics combine component fragilities with the topological characteristics of the network, and provide rankings that are either conditioned on specific hazard scenarios or probabilistic, based on the preference of infrastructure system stakeholders. Both offer enhanced efficiency and practical applicability compared to existing methods.
The developed frameworks for network reliability evaluation, risk assessment, and component prioritization are intended to address important gaps in the state-of-the-art management and planning for infrastructure systems under natural hazards. Their application can enhance public safety by informing the decision making process for expansion, maintenance, and retrofit actions for infrastructure systems.
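One standard way to rank component criticality, in the spirit of the prioritization metrics described above (though not the thesis's two metrics), is Birnbaum importance: the difference in system reliability when a component is forced up versus forced down. A small sketch on a hypothetical six-node network, using exact enumeration:

```python
from itertools import combinations

def connected(n, edges):
    """Union-find connectivity check over n vertices."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def reliability_fixed(n, edges, p, forced):
    """All-terminal reliability with some edges forced up (True) or down (False)."""
    free = [e for e in edges if e not in forced]
    base = [e for e, up in forced.items() if up]
    total = 0.0
    for k in range(len(free) + 1):
        for subset in combinations(free, k):
            if connected(n, base + list(subset)):
                total += p ** k * (1 - p) ** (len(free) - k)
    return total

def birnbaum(n, edges, p, e):
    """Birnbaum importance of edge e: R(e up) - R(e down)."""
    return (reliability_fixed(n, edges, p, {e: True})
            - reliability_fixed(n, edges, p, {e: False}))

# Two triangles joined by a single bridge edge (2, 3): the bridge is most critical.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
ranks = sorted(edges, key=lambda e: -birnbaum(6, edges, 0.9, e))
print(ranks[0])  # (2, 3)
```

The bridge tops the ranking because the system fails with certainty when it does, which is the topological intuition the thesis's metrics combine with component fragilities.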
6

Development Of A GIS Software For Evaluating Network Reliability Of Lifelines Under Seismic Hazard

Oduncucuoglu, Lutfi 01 December 2010 (has links) (PDF)
Lifelines are vital networks, and it is important that they remain functional after major natural disasters such as earthquakes. The goal of this study is to develop GIS software for evaluating the network reliability of lifelines under seismic hazard. In this study, GIS, statistics and facility management are used together, and a GIS software module that constructs GIS-based reliability maps of lifeline networks is developed using GeoTools. The module imports seismic hazard and lifeline network layers in GIS formats via the GeoTools libraries and, after creating a gridded network structure, uses a network reliability algorithm, originally developed by Yoo and Deo (1988), to calculate upper and lower bounds on lifeline network reliability under seismic hazard. It can also show the results in graphical form and save them in shapefile format. To validate the developed application, results were compared with a former case study of Selcuk (2000) and are satisfactorily close to the previous study. As a result of this study, an easy-to-use, GIS-based software module that creates reliability maps of lifelines under seismic hazard was developed.
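The Yoo and Deo (1988) bounding algorithm itself is not reproduced here; as a much simpler illustration of bounding all-terminal reliability from both sides (elementary bounds, not the module's algorithm):

```python
def reliability_bounds(n, edges, p):
    """Elementary bounds on all-terminal reliability with i.i.d. edge prob. p:
    - lower: if all n-1 edges of one fixed spanning tree are up, the graph
      is connected, so R >= p^(n-1);
    - upper: every vertex needs at least one operational incident edge, so the
      minimum-degree vertex gives R <= 1 - (1-p)^min_degree."""
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    lower = p ** (n - 1)
    upper = 1 - (1 - p) ** min(degree)
    return lower, upper

lo, hi = reliability_bounds(3, [(0, 1), (1, 2), (0, 2)], 0.5)
print(lo, hi)  # 0.25 0.75 -- the exact triangle value 0.5 lies between them
```

Bounds like these are useful precisely when exact evaluation is intractable, which is why the GIS module reports an upper/lower pair rather than a single number.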
7

Alternativa nätlösningar vid reinvestering / Network planning proposals for a reinvestment project

Strandberg, Petter January 2015 (has links)
Swedish legislation regarding network reliability changed after several historic storms, notably Gudrun, caused very long outages: it is now a violation for interruptions in the distribution of electricity to last longer than 24 hours. To reach the required level of reliability, companies that distribute electricity need to invest in their existing grids. The typical investment is exchanging overhead lines for underground cables. Fortum owns a 12 kV rural power line, located northeast of Charlottenberg, Sweden, which suffers interruptions and has a high average age; a more reliable network has to be planned. In this report, an alternative network is proposed that would lead to improved delivery security and reliability.
8

Network reliability as a result of redundant connectivity

Binneman, Francois J. A. 03 1900 (has links)
Thesis (MSc (Logistics))--University of Stellenbosch, 2007. / There exists, for any connected graph G, a minimum set of vertices whose removal disconnects G. Such a set is known as a minimum cut-set, and its cardinality is the connectivity number k(G) of G. A connectivity-preserving [connectivity-reducing, respectively] spanning subgraph G′ ⊆ G may be constructed by removing certain edges of G in such a way that k(G′) = k(G) [k(G′) < k(G), respectively]. The problem of constructing such a connectivity-preserving or connectivity-reducing spanning subgraph of minimum weight is known to be NP-complete. This thesis contains a summary of the most recent results (as of 2006) from a comprehensive survey of the literature on topics related to the connectivity of graphs. Secondly, the computational problems of constructing a minimum-weight connectivity-preserving or connectivity-reducing spanning subgraph for a given graph G are considered in this thesis. In particular, three algorithms are developed for constructing such spanning subgraphs. The theoretical basis for each algorithm is established and discussed in detail. The practicality of the algorithms is compared in terms of their worst-case running times as well as their solution qualities. The fastest of these three algorithms has a worst-case running time that compares favourably with the fastest algorithm in the literature. Finally, a computerised decision support system, called Connectivity Algorithms, is developed which is capable of implementing the three algorithms described above for a user-specified input graph.
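For small graphs, the connectivity number k(G) described above can be computed by brute force: test vertex subsets in increasing size until removing one disconnects the graph. A sketch (complete graphs, which no vertex removal can disconnect, get the usual convention n-1):

```python
from itertools import combinations

def is_connected(vertices, edges):
    """DFS connectivity check restricted to the given vertex set."""
    vertices = list(vertices)
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def connectivity_number(n, edges):
    """k(G): smallest number of vertices whose removal disconnects G
    (brute force over vertex subsets; n-1 for complete graphs)."""
    for k in range(n - 1):
        for cut in combinations(range(n), k):
            remaining = [v for v in range(n) if v not in cut]
            kept = [(u, v) for u, v in edges if u not in cut and v not in cut]
            if not is_connected(remaining, kept):
                return k
    return n - 1

# 5-cycle: removing one vertex leaves a path, but two non-adjacent vertices split it
print(connectivity_number(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # 2
```

The exponential subset search is exactly what makes the minimum-weight spanning-subgraph problems in the thesis hard; practical algorithms avoid it.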
9

Simulation ranking and selection procedures and applications in network reliability design

Kiekhaefer, Andrew Paul 01 May 2011 (has links)
This thesis presents three novel contributions to the application and development of ranking and selection procedures. Ranking and selection is an important topic in the discrete-event simulation literature, concerned with statistical approaches for selecting the best system, or set of best systems, from a set of simulated alternatives. It comprises three approaches: subset selection, indifference-zone selection, and multiple comparisons. The methodology addressed in this thesis focuses primarily on the first two: subset selection and indifference-zone selection. Our first contribution regards the application of existing ranking and selection procedures to an important body of literature known as system reliability design. If a system can be modeled as a network of arcs and nodes, then the difficult problem of determining the most reliable network configuration, given a set of design constraints, is an optimization problem that we refer to as the network reliability design problem. In this thesis, we first present a novel solution approach for one type of network reliability design problem where total enumeration of the solution space is feasible and desirable. This approach focuses on improving the efficiency of evaluating system reliabilities and on quantifying the probability of correctly selecting the true best design based on estimates of the expected system reliabilities, through the use of ranking and selection procedures; both are novel ideas in the system reliability design literature. Altogether, this method eliminates the guesswork previously associated with this design problem while maintaining significant runtime improvements over the existing methodology.
Our second contribution is a new optimization framework for the network reliability design problem that is applicable to any topological and terminal configuration as well as to solution sets of any size. This framework focuses on improving the efficiency of evaluating and comparing system reliabilities, while providing more robust performance and a more user-friendly procedure in terms of input parameter selection. This is accomplished through two novel statistical sampling procedures based on the concepts of ranking and selection: Sequential Selection of the Best Subset and Duplicate Generation. Altogether, this framework achieves the same convergence and solution quality as the baseline cross-entropy approach, with runtime and sample-size improvements on the order of 450% to 1500% over the example networks tested. Our final contribution extends the general ranking and selection literature with novel procedures for the problem of selecting the k best systems, where system means and variances are unknown and potentially unequal. We present three new ranking and selection procedures: a subset selection procedure, an indifference-zone selection procedure, and a combined two-stage subset selection and indifference-zone selection procedure. All procedures are backed by proofs of their theoretical guarantees as well as empirical results on the probability of correct selection. We also investigate the effect of various parameters on each procedure's overall performance.
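As a toy illustration of the subset-selection idea discussed above (not one of the thesis's procedures, whose cutoffs come from variance-based statistical guarantees), one can simulate each system a fixed number of times and retain every system whose sample mean falls within an indifference amount of the best observed mean:

```python
import random

def subset_selection(simulators, n0=50, delta=0.5, seed=7):
    """Toy subset selection: simulate each system n0 times and keep every
    system whose sample mean is within delta of the best observed mean.
    A sketch of the idea only -- real procedures derive the cutoff from
    sample variances to guarantee a probability of correct selection."""
    rng = random.Random(seed)
    means = [sum(sim(rng) for _ in range(n0)) / n0 for sim in simulators]
    best = max(means)
    return [i for i, m in enumerate(means) if m >= best - delta]

# Three hypothetical designs with true mean performance 1.0, 3.0 and 3.1:
# the clearly inferior design 0 should be screened out.
systems = [lambda r: r.gauss(1.0, 0.5),
           lambda r: r.gauss(3.0, 0.5),
           lambda r: r.gauss(3.1, 0.5)]
sel = subset_selection(systems)
print(sel)
```

Designs 1 and 2 are statistically too close to separate at this sample size, which is precisely the situation subset selection is designed for: keep both, discard the rest.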
10

An investigation into 88 KV surge arrester failures in the Eskom east grid traction network

Mzulwini, Mduduzi Comfort 31 March 2023 (has links) (PDF)
The Eskom East Grid Traction Network (EGTN), supplying traction loads and distribution networks, has experienced at least one surge arrester failure over the past ten years. These failures result in poor network reliability and customer dissatisfaction, which are often overlooked because the reliability indices used to evaluate transmission and distribution networks differ. It is suspected that fast transient faults in this network initiate system faults, leading to surge arrester design parameters being exceeded and to poor network insulation coordination. Preliminary investigations suggest that transient studies were not done during the network planning and design stages. This may have resulted in a lack of surge arrester parameter evaluations under transient conditions, leading to improper surge arresters being selected and installed in this network and to the failures that are now evident. These failures may also have been exacerbated by the dynamic nature of traction loads, which are highly unbalanced, have poor power factors and emit high voltage distortions. Poor in-service conditions such as defects, insulation partial discharges and overheating, bolted faults in the network, and quality-of-supply emissions can also contribute to surge arrester failures. To address the problems arising from differing reliability indices in these networks, the reliability of the EGTN is evaluated by computing common distribution reliability indices using analytic and simulation methods. The analytic method is applied by assessing network failure modes and effects analysis (FMEA) when a surge arrester fails in the network; the simulation method is applied by adapting the MATLAB code proposed by Shavuka et al. [1]. These reliability indices are then compared with transmission reliability indices over the same period.
This attempts to standardize reliability evaluations across these networks. To assess the impact of transient faults on surge arrester parameter evaluation, the EGTN is modelled and simulated by initiating transient faults sequentially at different nodes and under different loading conditions, using the Power System Blockset (PSB), Power System Analysis Toolbox (PSAT) and Alternative Transients Program (ATP) simulation tools, and computing the important surge arrester parameters, i.e. continuous operating voltage, rated voltage, discharge current and energy absorption capability (EAC). The computed parameters are evaluated against the parameters provided by manufacturers, the Eskom 88 kV surge arrester specification, and the parameters recommended in IEC 60099-4. To assess the impact and contribution of in-service conditions, faults and quality-of-supply emissions to surge arrester failures, these contributing factors are investigated by assessing infra-red scans, fault analysis reports, results from a sampled failed surge arrester in this network, and quality-of-supply parameters around the time of the failures. This study found that Eskom transmission and distribution network reliability indices can be standardized, as the distribution indices SAIDI, SAIFI, CAIDI, ASAI and ASUI are similar to the Eskom transmission indices SM, NOI, circuit availability index and circuit unavailability index respectively. Transient simulations in this study showed that certain surge arresters in the EGTN had their rated parameters exceeded under certain transient and loading conditions. These surge arresters failed as their discharge currents and EACs were exceeded under heavy and light network loading conditions.
This study concluded that surge arresters whose discharge currents and EACs were exceeded had been improperly evaluated and selected prior to their installation in the EGTN. The EAC was found to be the most important parameter in surge arrester performance evaluations. The Eskom 88 kV surge arrester specification was found to be inadequate, inaccurate and ambiguous, as a number of inconsistencies in the usage of IEEE- and IEC-classified system terminology were found; these inconsistencies may have led to confusion for manufacturers during surge arrester design and selection for the EGTN. The evaluation of fault reports showed that two surge arrester failures in this network were caused by hardware failures, such as a conductor failure, and by poor network operation, as a line was repeatedly closed onto a fault. There was no evidence that poor in-service conditions or quality-of-supply emissions contributed to surge arrester failures in this network. The PSB, PSAT and ATP simulation tools were found adequate for modelling and simulating the EGTN. However, the PSB tool became slow as the network expanded, and PSAT required user-defined surge arrester models needing detailed manufacturer data sheets, which are not readily available. ATP was found to be superior in speed and accuracy compared to the PSB and PSAT tools. The MATLAB code proposed by Shavuka et al. [1] was found to be suitable and accurate for assessing transmission networks, as the EGTN reliability indices computed with it were comparable to benchmarked Eskom distribution reliability indices. The work carried out in this research will assist in improving surge arrester performance evaluations, the current surge arrester specification, and surge arrester selection. The simulation tools utilized in this work show great potential in achieving this.
Reliability studies conducted in this work will assist in standardizing reliability indices between Eskom's transmission and distribution divisions. In-service condition assessment carried out in this work will improve surge arrester condition monitoring and preventive maintenance practices.
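The distribution reliability indices the study compares (SAIDI, SAIFI and related) have simple standard definitions: customer interruptions and customer interruption duration, each normalized by the total number of customers served. A sketch with hypothetical outage data (the event numbers below are invented for illustration):

```python
def saifi(interruptions, customers_served):
    """SAIFI: total customer interruptions / total customers served."""
    return sum(n for n, _ in interruptions) / customers_served

def saidi(interruptions, customers_served):
    """SAIDI: total customer interruption duration (hours) / total customers served."""
    return sum(n * hours for n, hours in interruptions) / customers_served

# Hypothetical year of outages: (customers affected, duration in hours) per event
events = [(1000, 2.0), (250, 8.0), (4000, 0.5)]
total_customers = 10000

print(saifi(events, total_customers))  # 0.525 interruptions per customer
print(saidi(events, total_customers))  # 0.6 hours per customer
```

CAIDI, also mentioned in the study, is simply SAIDI divided by SAIFI: the average duration of an interruption experienced by an affected customer.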
