1

Large Deviations on Longest Runs

Zhu, Yurong January 2016 (has links)
The study of the longest stretch of consecutive successes in "random" trials dates back to 1916, when the German philosopher Karl Marbe wrote a paper concerning the longest stretch of consecutive births of children of the same sex appearing in the birth register of a Bavarian town. The result was actually used by parents to "predict" the sex of their children. The longest stretch of same-sex births during that time, in 200 thousand birth registrations, was actually 17 ≈ log₂(200 · 10³). During the past century, research on the longest stretch of consecutive successes (longest runs) has found applications in various areas, especially in the theory of reliability. The aim of this thesis is to study large deviations on longest runs in the setting of Markov chains. More precisely, we establish a general large deviation principle for the longest success run in a two-state (success or failure) Markov chain. Our tool is based on a recent result regarding a general large deviation for the longest success run in Bernoulli trials. It turns out that the main ingredient in the proof is to implement several global and local estimates of the cumulative distribution function of the longest success run.
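The log₂(n) behaviour mentioned in the abstract is easy to observe empirically. The following is a minimal illustrative sketch (not code from the thesis), simulating the classical p = 1/2 Bernoulli case with a sample size matching the Bavarian birth-register example; the function and variable names are my own:

```python
import random

def longest_run(trials):
    """Length of the longest run of successes (truthy values) in a sequence."""
    best = cur = 0
    for t in trials:
        cur = cur + 1 if t else 0
        best = max(best, cur)
    return best

random.seed(1)  # fixed seed for reproducibility
n = 200_000
flips = [random.random() < 0.5 for _ in range(n)]
# For fair Bernoulli trials the longest run concentrates near
# log2(n) = log2(200000) ≈ 17.6, matching Marbe's observed 17.
print(longest_run(flips))
```

Repeating the experiment with different seeds shows the tight concentration around log₂(n) that makes large-deviation estimates for the tails interesting.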
2

Birnbaum Importance Patterns and Their Applications in the Component Assignment Problem

Yao, Qingzhu 01 May 2011 (has links)
The Birnbaum importance (BI) is a well-known measure that evaluates the relative contribution of components to system reliability. It has been successfully applied to tackling some reliability problems. This dissertation investigates two topics related to the BI in the reliability field: the patterns of component BIs, and BI-based heuristics and meta-heuristics for solving the component assignment problem (CAP). There exist certain patterns of component BIs (i.e., the relative order of the BI values of the individual components) for linear consecutive-k-out-of-n (Lin/Con/k/n) systems when all components have the same reliability p. This study summarizes and annotates the existing BI patterns for Lin/Con/k/n systems, proves new BI patterns conditioned on the value of p, disproves some patterns that were conjectured or claimed in the literature, and makes new conjectures based on comprehensive computational tests and analysis. More importantly, this study defines a concept of segment in Lin/Con/k/n systems for analyzing the BI patterns, and investigates the relationship between the BI and the common component reliability p, and between the BI and the system size n. One can then use these relationships to further understand the proved, disproved, and conjectured BI patterns. The CAP is to find the optimal assignment of n available components to n positions in a system such that the system reliability is maximized. The ordering of component BIs has been successfully used to design heuristics for the CAP. This study proposes five new BI-based heuristics and discusses their corresponding properties. Based on comprehensive numerical experiments, a BI-based two-stage approach (BITA) is proposed for solving the CAP, with each stage using different BI-based heuristics.
The two-stage approach is much more efficient and capable of generating solutions of higher quality than the GAMS/CoinBonmin solver and a randomization method. This dissertation then presents a meta-heuristic, a BI-based genetic local search (BIGLS) algorithm, for the CAP, in which a BI-based local search is embedded into the genetic algorithm. Comprehensive numerical experiments show the robustness and effectiveness of the BIGLS algorithm, and especially its advantages over the BITA in terms of solution quality.
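The BI of a component and the Lin/Con/k/n structure described above can be checked on small instances by direct enumeration. The sketch below is my own illustrative code (not the dissertation's heuristics): it computes BIᵢ = P(system works | component i works) − P(system works | component i fails) for a Lin/Con/k/n system with common component reliability p; all names are assumptions for illustration:

```python
from itertools import product

def lin_con_works(state, k):
    """A Lin/Con/k/n system fails iff some k consecutive components all fail."""
    run = 0
    for up in state:
        run = 0 if up else run + 1
        if run >= k:
            return False
    return True

def birnbaum(i, n, k, p):
    """BI_i = P(works | comp i up) - P(works | comp i down), i.i.d. reliability p."""
    bi = 0.0
    for state in product([0, 1], repeat=n):
        # probability of the other components' states (component i is conditioned on)
        w = 1.0
        for j, up in enumerate(state):
            if j != i:
                w *= p if up else 1 - p
        if lin_con_works(state, k):
            bi += w if state[i] == 1 else -w
    return bi

# Lin/Con/2/4 with p = 0.9: the two middle components carry higher BI,
# and the system's mirror symmetry gives BI_0 = BI_3 and BI_1 = BI_2.
print([round(birnbaum(i, 4, 2, 0.9), 4) for i in range(4)])
```

Enumeration is only feasible for small n, which is precisely why the dissertation studies provable BI patterns and BI-based heuristics instead.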
3

Quantifying System Reliability in Weighted-k-out-of-n Systems : A Comparative Analysis of Reliability Models and Methods of Scaling

Berggren, Pelle, Abraham, Elias January 2024 (has links)
Reliability is the probability that a system does not fail within a given time interval. A weighted-k-out-of-n system is a system of weighted nodes in which the total weight of all operational nodes must be at least k for the system to be operational. Although previous studies have put forward some models for quantifying reliability in such systems, there is little research comparing these methods, and little on how best to scale such systems. This thesis therefore investigates methods of quantifying the reliability of weighted-k-out-of-n systems, and subsequently discusses methods of scaling them. Several methods of quantifying the reliability of such systems are designed and/or implemented from prior theory and compared in terms of time complexity and accuracy. Of these, the Higashiyama algorithm, a Monte Carlo simulation, and a brute-force enumeration method prove successful. Experiments are conducted in which the scaling factors of adding nodes, adding weight, decreasing k, and increasing individual node reliability are tested. Results show that adding nodes generally has the greatest positive impact on reliability, although this also depends on the real-life implementation of the system. Correlations between minimal paths and reliability are also studied, and a pattern emerges in which optimal minimal paths lead to optimal reliability.
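Two of the quantification methods named in the abstract, brute-force enumeration and Monte Carlo simulation, can be sketched in a few lines. This is my own illustrative version under assumed weights and node reliabilities, not the thesis's implementation (the Higashiyama algorithm is omitted here):

```python
import random
from itertools import product

def exact_reliability(weights, rels, k):
    """Brute force: sum the probabilities of all states whose working weight >= k."""
    total = 0.0
    for state in product([0, 1], repeat=len(weights)):
        if sum(w for w, s in zip(weights, state) if s) >= k:
            prob = 1.0
            for p, s in zip(rels, state):
                prob *= p if s else 1 - p
            total += prob
    return total

def mc_reliability(weights, rels, k, trials=100_000, seed=0):
    """Monte Carlo: sample node states and count trials where the system is up."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        working = sum(w for w, p in zip(weights, rels) if rng.random() < p)
        hits += working >= k
    return hits / trials

# Hypothetical 4-node system: weights, per-node reliabilities, threshold k = 5.
weights = [3, 2, 2, 1]
rels = [0.9, 0.8, 0.85, 0.95]
print(exact_reliability(weights, rels, 5), mc_reliability(weights, rels, 5))
```

The trade-off the thesis compares is visible even here: enumeration is exact but costs O(2ⁿ), while the Monte Carlo estimate scales to large n at the price of sampling error.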
