11

Optimal policies in reliability modelling of systems subject to sporadic shocks and continuous healing

Chatterjee, Debolina (14206820) 03 February 2023 (has links)
Recent years have seen a growth in research on system reliability and maintenance. Various studies in the scientific fields of reliability engineering, quality and productivity analysis, risk assessment, software reliability, and probabilistic machine learning are being undertaken in the present era. The dependency of human life on technology has made it more important to maintain such systems and maximize their potential. In this dissertation, we present methodologies that maximize certain measures of system reliability, explain the underlying stochastic behavior of certain systems, and prevent the risk of system failure.

An overview of the dissertation is provided in Chapter 1, where we briefly discuss some useful definitions and concepts in probability theory and stochastic processes and present some mathematical results required in later chapters. Thereafter, we present the motivation and outline of each subsequent chapter.

In Chapter 2, we compute the limiting average availability of a one-unit repairable system supported by repair facilities and spare units. Formulas for the limiting average availability of a repairable system exist only in some special cases: (1) either the lifetime or the repair time is exponential; or (2) there is one spare unit and one repair facility. In contrast, we consider a more general setting involving several spare units and several repair facilities, and we allow arbitrary life- and repair-time distributions. Under periodic monitoring, which essentially discretizes the time variable, we compute the limiting average availability. The discretization approach closely approximates the existing results in the special cases, and it demonstrates, as anticipated, that the limiting average availability increases with additional spare units and/or repair facilities.

In Chapter 3, the system experiences two types of sporadic impact: valid shocks that cause damage instantaneously, and positive interventions that induce partial healing. Whereas each shock inflicts a fixed magnitude of damage, the accumulated effect of $k$ positive interventions nullifies the damaging effect of one shock. The system is said to be in Stage 1, where it can possibly heal, until the net count of impacts (valid shocks registered minus valid shocks nullified) reaches a threshold $m_1$. The system then enters Stage 2, where no further healing is possible. The system fails when the net count of valid shocks reaches another threshold $m_2 (> m_1)$. The inter-arrival times between successive valid shocks and those between successive positive interventions are independent and follow arbitrary distributions; thus, we remove the restrictive assumption of an exponential distribution often found in the literature. We find the distributions of the sojourn time in Stage 1 and of the failure time of the system. Finally, we find the optimal values of the choice variables that minimize the expected maintenance cost per unit time for three different maintenance policies.

In Chapter 4, the Stage 1 defined above is further subdivided into two parts: in the early part, called Stage 1A, healing happens faster than in the later part, called Stage 1B. The system stays in Stage 1A until the net count of impacts reaches a predetermined threshold $m_A$; the system then enters Stage 1B and stays there until the net count reaches another predetermined threshold $m_1 (> m_A)$. Subsequently, the system enters Stage 2, where it can no longer heal, and fails when the net count of valid shocks reaches a predetermined higher threshold $m_2 (> m_1)$. All other assumptions are the same as those in Chapter 3. We calculate the percentage improvement in the lifetime of the system due to the subdivision of Stage 1. Finally, we make optimal choices to minimize the expected maintenance cost per unit time for two maintenance policies.

Next, we eliminate the restrictive assumptions that all valid shocks and all positive interventions have equal magnitude and that the boundary threshold is a preset constant. In Chapter 5, we study a system that experiences damaging external shocks of random magnitude at stochastic intervals, continuous degradation, and self-healing. The system fails if the cumulative damage exceeds a time-dependent threshold. We develop a preventive maintenance policy to replace the system so that its lifetime is utilized prudently. Further, we consider three variations on the healing pattern: (1) shocks heal for a fixed finite duration $\tau$; (2) a fixed proportion of shocks are non-healable (that is, $\tau = 0$); (3) there are two types of shocks: self-healable shocks, which heal for a finite duration, and non-healable shocks. We implement the proposed preventive maintenance policy and compare the optimal replacement times in these new cases with those in the original case, where all shocks heal indefinitely.

Finally, in Chapter 6, we present a summary of the dissertation with conclusions and directions for future research.
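The Stage 1 / Stage 2 shock-and-healing dynamics of Chapter 3 can be sketched as a small Monte Carlo simulation. This is an illustrative sketch only: exponential inter-arrival times and the rate parameters `shock_rate` and `heal_rate` are assumptions for convenience, whereas the dissertation allows arbitrary inter-arrival distributions.

```python
import random

def simulate_failure_time(m1, m2, k, shock_rate=1.0, heal_rate=1.0):
    """Monte Carlo sketch of the two-stage shock/healing model.

    Each valid shock adds one unit to the net count; every k positive
    interventions nullify one registered shock, but only while the
    system is in Stage 1 (net count below m1).  The system fails when
    the net count reaches m2.  Exponential inter-arrival times are
    assumed purely for illustration.
    """
    t = 0.0
    net = 0            # net count of valid shocks
    interventions = 0  # interventions accumulated toward the next healing
    next_shock = random.expovariate(shock_rate)
    next_heal = random.expovariate(heal_rate)
    while net < m2:
        if next_shock <= next_heal:
            t = next_shock
            net += 1
            next_shock = t + random.expovariate(shock_rate)
        else:
            t = next_heal
            if net < m1:  # healing is only possible in Stage 1
                interventions += 1
                if interventions == k and net > 0:
                    net -= 1
                    interventions = 0
            next_heal = t + random.expovariate(heal_rate)
    return t
```

Averaging `simulate_failure_time` over many runs approximates the failure-time distribution that the dissertation derives analytically.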
12

Logarithmic-concave probability distributions and their applications

Zavadilová, Barbora January 2014 (has links)
No description available.
13

Limit Theorems for Alternating Renewal Processes

Daškevičius, Jaroslavas 23 July 2012 (has links)
In this master's thesis, conditions for the convergence of sums of alternating renewal processes to a Poisson process are obtained. The thesis is based on Grigelionis' theorem, which gives conditions for the convergence of sums of independent point processes. The specific cases in which the operating and recovery periods of the summed alternating renewal processes are independent and follow uniform, exponential, geometric, and Erlang distributions are examined, as is the case in which the operating and recovery periods have different distributions. Necessary and sufficient conditions are formulated and proven for each case. Based on the theoretical results, the processes are simulated and compared, and conclusions are drawn at the end of the thesis.
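The superposition behavior underlying Grigelionis-type limit theorems can be illustrated with a small simulation: pooling the switch-on epochs of many independent alternating renewal processes produces a point process that approaches a Poisson process. Exponential operating and recovery periods are assumed here purely for convenience (the thesis also treats uniform, geometric, and Erlang periods), and the function names are the sketch's own.

```python
import random

def alternating_renewal_on_times(horizon, on_mean, off_mean):
    """Epochs at which one alternating renewal process switches ON.

    ON (operating) and OFF (recovery) periods are drawn from
    exponential distributions purely for illustration.
    """
    t, epochs = 0.0, []
    while t < horizon:
        t += random.expovariate(1.0 / off_mean)   # recovery period
        if t >= horizon:
            break
        epochs.append(t)                          # switch-on epoch
        t += random.expovariate(1.0 / on_mean)    # operating period
    return epochs

def superpose(n, horizon, on_mean, off_mean):
    """Pool the switch-on epochs of n independent processes."""
    pooled = []
    for _ in range(n):
        pooled.extend(alternating_renewal_on_times(horizon, on_mean, off_mean))
    return sorted(pooled)
```

For large `n`, the inter-event times of the pooled stream are approximately exponential, which is the qualitative content of the convergence result.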
14

Failure mechanisms of complex systems

Siddique, Shahnewaz 22 May 2014 (has links)
Understanding the behavior of complex, large-scale, interconnected systems in a rigorous and structured manner is one of the most pressing scientific and technological challenges of our time. These systems include, among many others, transportation and communications systems, smart grids and power grids, and financial markets. Failures of these systems have potentially enormous social, environmental, and financial costs. In this work, we investigate the failure mechanisms of load-sharing complex systems. The systems are composed of multiple nodes or components whose failures are determined by the interaction of their respective strengths and loads (or capacities and demands, respectively) as well as the ability of a component to share its load with its neighbors when needed. Each component possesses a specific strength (capacity) and can be in one of three states: failed, damaged, or functioning normally. The states are determined by the load (demand) on the component. We focus on two distinct mechanisms to model the interaction between component strengths and loads: the first is a Loss of Strength (LOS) model; the second is a Customer Service (CS) model. We implement both models on lattice and scale-free graph network topologies. The failure mechanisms of these two models exhibit temporal scaling phenomena, phase transitions, and multiple distinct failure modes excited by extremal dynamics. We find that the resiliency of these models is sensitive to the underlying network topology. For critical ranges of parameters, the models demonstrate power-law and exponential failure patterns. We find that the failure mechanisms of these models have parallels in critical infrastructure systems such as congestion in transportation networks, cascading failure in electrical power grids, creep-rupture in composite structures, and draw-downs in financial markets.
Based on the different variants of failure, strategies for mitigating and postponing failure in these critical infrastructure systems can be formulated.
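The load-sharing cascade idea can be illustrated with a generic equal-load-sharing model in the spirit of fiber-bundle models. This is not the LOS or CS model of the thesis; the uniform strength range and the function name are the sketch's own assumptions.

```python
import random

def cascade_failures(n, total_load, seed=None):
    """Generic equal-load-sharing cascade (fiber-bundle style) sketch.

    Each component gets a random strength; the total load is shared
    equally among surviving components, any component whose share
    exceeds its strength fails, and its load is transferred to the
    survivors.  Iteration continues until the configuration is stable
    or the system collapses completely.
    """
    rng = random.Random(seed)
    strengths = [rng.uniform(0.5, 1.5) for _ in range(n)]
    alive = list(range(n))
    while alive:
        share = total_load / len(alive)
        survivors = [i for i in alive if strengths[i] >= share]
        if len(survivors) == len(alive):  # no new failures: stable state
            return alive
        alive = survivors
    return []                             # complete collapse
```

Sweeping `total_load` exposes the abrupt transition from full survival to total collapse, a simple analogue of the phase transitions the thesis reports.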
15

Renewal models for group-homogeneous elements

Havlasová, Zuzana January 2008 (has links)
The aim of this diploma thesis is to explain renewal models and to illustrate their application through examples. The first part of the thesis covers the characteristics of renewal models under the assumption that the elements are homogeneous. Reliability theory is also described, because it is closely associated with renewal models. The second part of the thesis addresses renewal models for group-homogeneous elements, their assumptions, and their solution. The final part of the thesis points out the practical application of the individual models in concrete examples.
16

Sensitivity analyses applied to the modeling of flow in porous media and slope stability problems for uncertainty quantification

Assis, Higor Biondo de January 2019 (has links)
Orientador: Caio Gorla Nogueira / This work presents a set of basic statistical techniques applied to the modeling of flow problems in fractured porous media and of slope stability, with the objective of identifying the explanatory variables most influential on the variability of the response variables. Different designs of experiments were used to enable the construction of polynomial metamodels representative of the studied phenomena. A modification of the Box-Behnken design, proposed by the author, is presented for analyzing problems involving a large number of explanatory variables (e.g. 30). The metamodels, obtained by the least-squares method, are also called response surfaces or regression models and are indispensable for assessing the sensitivity of the explanatory variables. The set of techniques proved very effective in identifying the explanatory variables with the most significant effects on the response variable. The slope stability examples considered also demonstrated the possibility of quantifying uncertainty with sufficiently adequate metamodels, an option that can be very useful for uncertainty quantification in problems that lack simple analytical solutions.
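The least-squares metamodel step can be sketched as follows. This minimal version fits only intercept, linear, and pure quadratic terms (no interaction terms, and no Box-Behnken design generation), so its scope and names are the sketch's own assumptions.

```python
import numpy as np

def fit_quadratic_metamodel(X, y):
    """Fit a least-squares quadratic response surface (metamodel).

    Builds a design matrix with intercept, linear, and pure quadratic
    columns for each explanatory variable, then solves the least-squares
    problem with numpy.  Interaction terms are omitted for brevity.
    """
    X = np.asarray(X, dtype=float)
    A = np.hstack([np.ones((X.shape[0], 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef

def predict(coef, X):
    """Evaluate the fitted response surface at new points."""
    X = np.asarray(X, dtype=float)
    A = np.hstack([np.ones((X.shape[0], 1)), X, X**2])
    return A @ coef
```

The magnitudes of the fitted coefficients (on standardized inputs) give a first indication of which explanatory variables drive the response, which is the sensitivity-screening role the metamodels play in the work.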
17

The relevance of High Reliability Theory for high-performance systems: a discussion paper

Mistele, Peter 04 November 2005 (has links)
Organizations such as fire brigades, emergency medical services, and special police units perform efficiently and effectively even in situations characterized by uncertainty, incomplete information, or very high dynamics. They can therefore be described as high-performance systems (HLS). This article shows how findings from High Reliability Theory bear on the study of high-performance systems, and which commonalities and parallels exist between the two, with a particular emphasis on the topic of learning.
18

Dispelling the curse of dimensionality in viability kernel computation using GPU parallelization and reliability theory: application to environmental dynamics

Brias, Antoine 15 December 2016 (has links)
Viability theory provides tools for controlling a dynamical system so as to maintain it within a constraint domain. The central concept of this theory is the viability kernel: the set of initial states from which there exists at least one controlled trajectory remaining in the constraint domain. However, the time and space needed to compute the viability kernel grow exponentially with the number of dimensions of the problem, an issue known as the curse of dimensionality. The curse is even more acute for systems incorporating uncertainty. In that case, the viability kernel becomes the set of states for which there exists a control strategy that keeps the system in the constraint domain with at least a given probability up to the time horizon. The objective of this thesis is to study and develop approaches to beat back the curse of dimensionality, along two lines of research: parallel computing and the use of tools from reliability theory. The results are illustrated by several applications. The first line explores parallel computing on a graphics card. The GPU version of the program is up to 20 times faster than the sequential version, handling problems up to dimension 7. Beyond these gains in computation time, our work shows that the majority of the resources are spent computing the transition probabilities of the system. This observation links to the second line of research, which proposes an algorithm that approximates stochastic viability kernels using reliability methods to compute the transition probabilities. The memory space required by this algorithm is a linear function of the number of grid states, unlike the memory space required by the classical dynamic programming algorithm, which depends quadratically on the number of states. These approaches make it possible to apply viability theory to higher-dimensional systems. We applied the algorithm to a model of phosphorus dynamics for managing lake eutrophication, calibrated beforehand on data from Lake Bourget. In addition, the links between reliability and viability are highlighted by an application of stochastic viability kernel computation, otherwise known as reliability kernel computation, to the reliable design of a corroded beam.
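The grid-based kernel computation can be sketched in its deterministic form: starting from all grid states satisfying the constraints, repeatedly discard any state from which no control maps back into the current set, until a fixed point is reached. The discrete dynamics `step(x, u)` and the grids below are hypothetical, and the stochastic, reliability-based variant of the thesis is not reproduced here.

```python
def viability_kernel(states, controls, step, in_constraints, max_iter=1000):
    """Deterministic grid-based viability kernel sketch.

    states          iterable of grid states
    controls        iterable of admissible controls
    step(x, u)      hypothetical discrete dynamics returning the
                    successor grid state (successors off the grid are
                    simply not members of the kernel set)
    in_constraints  predicate defining the constraint domain
    """
    kernel = {x for x in states if in_constraints(x)}
    for _ in range(max_iter):
        viable = {x for x in kernel
                  if any(step(x, u) in kernel for u in controls)}
        if viable == kernel:   # fixed point reached
            return kernel
        kernel = viable
    return kernel
```

A toy example: for the integer dynamics x' = x + u with controls {-1, 0, 1} and constraint 2 <= x <= 8, every constrained state is viable (u = 0 keeps it in place), whereas with the single control u = 1 every state eventually drifts out and the kernel is empty.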
19

Stochastic models for biological systems

Ali, Mansour Fathey Yassen 09 December 2003 (has links)
The aim of this thesis is to define and study stochastic models of repairable systems and the application of these models to biological systems, especially for cell survival after irradiation with ionizing radiation.
20

Unexpected Events in Nigerian Construction Projects: A Case of Four Construction Companies

Pidomson, Gabriel Baritulem 01 January 2016 (has links)
In Nigeria, 50% to 70% of construction projects are delayed due to unexpected events that are linked to lapses in performance, near misses, and surprises. While researchers have theorized on the impact of mindfulness and information systems management (ISM) on unexpected events, information is lacking on how project teams can combine ISM and mindfulness in response to unexpected events in construction projects. The purpose of this case study was to examine how project teams can combine mindfulness with ISM in response to unexpected events during the execution phase of Nigerian construction projects. The framework of High Reliability Theory revealed that unexpected events could be minimized by mindfulness defined by 5 cognitive processes: preoccupation with failure, reluctance to simplify, sensitivity to operations, commitment to resilience, and deference to expertise. In-depth semi-structured interviews elicited the views of 24 project experts on team behaviors, tactics, and processes for combining mindfulness with ISM. Data analysis was conducted by open coding to identify and reduce data into themes, and axial coding was used to identify and isolate categories. Findings were that project teams could combine mindfulness with ISM in response to unexpected events by integrating effective risk, team, and communication management with appropriate training and technology infrastructure. If policymakers, project clients, and practitioners adopt practices suggested in this study, the implications for social change are that project management practices, organizational learning, and the performance of construction projects may improve, construction wastes may be reduced, and taxpayers may derive optimum benefits from public funds committed to construction projects.
