141

System reliability from component reliabilities

Duffett, James Roy January 1959 (has links)
In this dissertation, the synthesis of system reliability from the reliabilities of the components that constitute the system is considered. To provide context, major emphasis is placed on complex missile systems. / Ph. D.
142

Reliability assessment under incomplete information: an evaluative study

Hernandez Ruiz, Ruth 12 March 2009 (has links)
Traditionally, in reliability design, the random variables acting on a system are assumed to be independent. This assumption is usually poor because in most real-life problems the variables are correlated. Most of the time, the available information is limited to the first and second moments. Very few methods can handle correlation between the variables when the joint probability density function is unknown, and there are no reports that provide information on the accuracy of these methods. This work presents an evaluative study of reliability under incomplete information, comparing three existing methods for calculating the probability of failure: the method presented by Ang and Tang, which assumes the correlation between the variables to be invariant; Kiureghian and Liu's method, which accounts for the change in correlation; and Rackwitz's method, under the assumption of independence. We have also developed a new algorithm to generate random samples of correlated random variables when the marginal distributions and correlation coefficients of these variables are specified. These samples can be used in Monte Carlo simulation, which serves as a tool for comparing the three methods described above. This Monte Carlo simulation approach is based on the assumption of a normal joint probability density function, as considered by Kiureghian and Liu. To examine whether this approach is biased toward Kiureghian and Liu's method, a second Monte Carlo simulation approach with no assumption about the joint probability density function is developed and compared with the first one. Both methods that account for correlation show a clear advantage over the traditional approach of assuming that the variables are independent. Moreover, Kiureghian and Liu's approach proved to be more accurate in most cases than Ang and Tang's method. This study also shows that there is an error in calculating the safety index for correlated variables when either of the methods under study is implemented, because the joint probability density function of the random variables is neglected. / Master of Science
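The sampling algorithm itself is not reproduced in this listing; a common way to generate correlated samples with specified marginals and correlation coefficients, under the same normal-joint-density assumption mentioned above, is a Gaussian-copula (Nataf-type) construction. The sketch below is only a minimal illustration of that idea: the marginal distributions, correlation values, and function names are hypothetical, and without the Nataf correction of the normal-space correlation the achieved correlation only approximates the target.

    import numpy as np
    from scipy import stats

    def correlated_samples(marginals, corr, n, seed=None):
        """Correlate standard normals via a Cholesky factor of the target
        correlation matrix, then map each column through the inverse CDF of
        its marginal (Gaussian copula).  The achieved correlation is only
        approximately `corr` unless the normal-space correlation is corrected."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(np.asarray(corr))
        z = rng.standard_normal((n, len(marginals))) @ L.T   # correlated N(0, 1)
        u = stats.norm.cdf(z)                                # map to uniforms
        return np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

    # Hypothetical example: a lognormal load and a normal resistance, correlation 0.5
    marginals = [stats.lognorm(s=0.25, scale=150.0), stats.norm(200.0, 20.0)]
    corr = np.array([[1.0, 0.5],
                     [0.5, 1.0]])
    x = correlated_samples(marginals, corr, n=100_000, seed=1)
    print(np.corrcoef(x, rowvar=False))   # close to the requested correlation

Such samples can then drive a plain Monte Carlo estimate of the failure probability, for instance as the fraction of samples for which a limit-state function g(x) is negative.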
143

Applications of fuzzy logic to mechanical reliability analysis

Touzé, Patrick A. 14 March 2009 (has links)
In this work, fuzzy sets are used to express data or model uncertainty in structural systems where random variables have traditionally been used. / Master of Science
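The abstract does not say which fuzzy representation is used; a common building block for expressing this kind of vague information is the triangular fuzzy number and its alpha-cuts. The sketch below is a generic illustration with hypothetical values and function names, not material from the thesis.

    import numpy as np

    def tri_membership(x, a, m, b):
        """Membership of x in a triangular fuzzy number (a, m, b):
        0 outside [a, b], 1 at the modal value m, linear in between."""
        x = np.asarray(x, dtype=float)
        left = np.clip((x - a) / (m - a), 0.0, 1.0)
        right = np.clip((b - x) / (b - m), 0.0, 1.0)
        return np.minimum(left, right)

    def alpha_cut(a, m, b, alpha):
        """Interval of values whose membership is at least alpha."""
        return a + alpha * (m - a), b - alpha * (b - m)

    # Hypothetical example: a yield stress known only vaguely as "about 250 MPa"
    lo, hi = alpha_cut(230.0, 250.0, 270.0, alpha=0.8)
    print(lo, hi)                                        # -> 246.0, 254.0
    print(tri_membership(255.0, 230.0, 250.0, 270.0))    # membership of 255 MPa

Propagating the alpha-cut intervals through a structural model, level by level, is one standard way such fuzzy uncertainty replaces a probabilistic analysis.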
144

Uncertainty in marine structural strength with application to compressive failure of longitudinally stiffened panels

Hess, Paul E. 24 January 2009 (has links)
It is important in structural analysis and design, whether deterministic or reliability-based, to know the level of uncertainty in the methods of strength prediction. The uncertainty associated with strength prediction is the result of ambiguity and vagueness in the system. This study addresses the ambiguity component of uncertainty, which includes uncertainty due to randomness in the basic strength parameters (random uncertainty) and systematic errors and scatter in the prediction of strength (modeling uncertainty). The vagueness component is briefly discussed. A methodology for quantifying modeling and random uncertainty is presented for structural failure modes with a well-defined limit state. A methodology is also presented for determining the relative importance of the basic strength parameters in terms of their contribution to the total random uncertainty. These methodologies are applied to the compressive failure of longitudinally stiffened panels. The strength prediction model used in this analysis was developed in the UK and is widely used in analysis and design. Several experimental sample sets are used in the analysis. Mean values and coefficients of variation are reported for the random and modeling uncertainties. For the modeling uncertainty, a comparison is made with results from other studies that used several strength prediction algorithms; all of these studies involve longitudinally stiffened panels that fail by axial compressive collapse. Ranges for the mean and coefficient of variation of the modeling uncertainty are presented. / Master of Science
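The mean and coefficient of variation of the modeling uncertainty reported above are commonly computed from the ratio of measured to predicted strength (the "bias") over a test series. A minimal sketch of that calculation follows; the strength values are hypothetical, not the thesis's data sets.

    import numpy as np

    # Hypothetical measured vs. predicted panel collapse strengths (same units);
    # the real experimental sample sets are in the thesis, not reproduced here.
    measured  = np.array([310.0, 288.0, 342.0, 279.0, 305.0, 296.0])
    predicted = np.array([298.0, 301.0, 325.0, 290.0, 312.0, 288.0])

    # Modeling uncertainty as the ratio of measured to predicted strength:
    # the mean captures systematic error (bias), the COV captures scatter.
    bias = measured / predicted
    mean_bias = bias.mean()
    cov_bias = bias.std(ddof=1) / mean_bias
    print(f"mean bias = {mean_bias:.3f}, COV = {cov_bias:.3f}")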
145

The influence of critical asset management facets on improving reliability in power systems

Perkel, Joshua 04 November 2008 (has links)
The objective of the proposed research is to develop statistical algorithms for controlling failure trends through targeted maintenance of at-risk components. The at-risk components are identified via chronological history and diagnostic data, if available. Utility systems include many thousands (possibly millions) of components, many of which have already exceeded their design lives. Unfortunately, neither the budget nor the manufacturing resources exist to allow for the immediate replacement of all these components. On the other hand, the utility cannot tolerate a decrease in reliability or the associated increased costs. To combat this problem, an overall maintenance model has been developed that utilizes all the available historical information (failure rates and population sizes) and diagnostic tools (real-time conditions of each component) to generate a maintenance plan. This plan must be capable of delivering the needed reliability improvements while remaining economical. It consists of three facets, each of which addresses one of the critical asset management issues:
* Failure Prediction Facet - Statistical algorithm for predicting future failure trends and estimating the required numbers of corrective actions to alter these failure trends to desirable levels. Provides planning guidance and the expected future performance of the system.
* Diagnostic Facet - Development of diagnostic data and techniques for assessing the accuracy and validity of that data. Provides the true effectiveness of the different diagnostic tools that are available.
* Economics Facet - Stochastic model of the economic benefits that may be obtained from diagnostic-directed maintenance programs. Provides the cost model that may be used for budgeting purposes.
These facets function together to generate a diagnostic-directed maintenance plan whose goal is to provide the best available guidance for maximizing the gains in reliability within the budgetary limits utility engineers must operate under.
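The statistical algorithm behind the Failure Prediction Facet is not given in the abstract; the sketch below only illustrates the general idea of projecting failures from an age-based hazard and estimating how many targeted replacements would bring the trend to a desired level. The Weibull parameters, population, and function names are hypothetical.

    import numpy as np

    def expected_failures(ages, beta, eta):
        """Expected number of failures over the next year for components of the
        given ages, using a Weibull cumulative hazard H(t) = (t / eta)**beta and
        the conditional probability of failing within one year given survival."""
        ages = np.asarray(ages, dtype=float)
        H = lambda t: (t / eta) ** beta
        p_fail = 1.0 - np.exp(-(H(ages + 1.0) - H(ages)))
        return p_fail.sum(), p_fail

    # Hypothetical population: 1000 cable sections with ages spread over 0-40 years
    rng = np.random.default_rng(0)
    ages = rng.uniform(0.0, 40.0, size=1000)
    total, p_fail = expected_failures(ages, beta=3.0, eta=45.0)
    print(f"projected failures next year: {total:.1f}")

    # Corrective actions needed to halve the projection: replace the most
    # at-risk units first (ignoring the small failure risk of the new units).
    target = 0.5 * total
    order = np.argsort(p_fail)[::-1]
    removed = np.cumsum(p_fail[order])
    n_replace = int(np.searchsorted(removed, total - target)) + 1
    print(f"replace about {n_replace} highest-risk units to reach the target")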
146

Detector multiusuario sub-otimo por confiabilidade de amostras / Sub-optimal multiuser detector based on reliable samples

Frison, Celso Iwata 21 October 2009 (has links)
Advisor: Celso de Almeida / Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Abstract: Among the existing multiuser detection techniques for CDMA systems, the one which gives the minimum symbol error probability is called optimum. However, the performance of this technique comes at a high complexity in the number of calculations, which makes it impracticable in real systems. Therefore, a sub-optimum multiuser detector which applies reliability thresholds to the received samples, classifying them as reliable or unreliable, is proposed for a synchronous CDMA system. Each sample, once classified, receives different treatment in the bit detection process. The insertion of these reliability thresholds into multiuser detection showed that a performance similar to that of the optimum multiuser detector can be achieved, and at the same time with a significant reduction in the number of calculations (detector complexity). Theoretical equations for the complexity and the bit error rate are presented, and these theoretical expressions agree closely with the corresponding simulations. / Master's / Telecommunications and Telematics / Master in Electrical Engineering
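The detector's exact decision rule is not given in this listing; the sketch below illustrates the general idea of thresholding matched-filter samples and restricting the maximum-likelihood search to the unreliable ones, which is what reduces the search from 2^K hypotheses to 2^|unreliable|. The spreading codes, threshold value, and function names are hypothetical, and this is not necessarily the author's exact scheme.

    import numpy as np
    from itertools import product

    def threshold_mud(r, S, threshold):
        """Reliability-threshold multiuser detection sketch for synchronous
        CDMA with BPSK: matched-filter outputs whose magnitude exceeds
        `threshold` are hard-decided ("reliable"); the remaining bits are
        found by an exhaustive ML search over the unreliable subset only."""
        y = S.T @ r                                   # matched-filter output per user
        b = np.sign(y)                                # tentative hard decisions
        unreliable = np.flatnonzero(np.abs(y) < threshold)
        if unreliable.size == 0:
            return b
        best, best_metric = b.copy(), -np.inf
        for bits in product([-1.0, 1.0], repeat=unreliable.size):
            cand = b.copy()
            cand[unreliable] = bits
            metric = -np.linalg.norm(r - S @ cand) ** 2   # ML metric for AWGN
            if metric > best_metric:
                best, best_metric = cand, metric
        return best

    # Hypothetical example: 4 users, length-8 random spreading codes, unit amplitudes
    rng = np.random.default_rng(2)
    S = rng.choice([-1.0, 1.0], size=(8, 4)) / np.sqrt(8)
    b_true = rng.choice([-1.0, 1.0], size=4)
    r = S @ b_true + 0.3 * rng.standard_normal(8)
    print(threshold_mud(r, S, threshold=0.6), b_true)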
147

Solar cell degradation under ionizing radiation ambient: preemptive testing and evaluation via electrical overstressing

Unknown Date (has links)
This thesis addresses the assessment of degradation in modern solar cells used in space-borne and/or nuclear-environment applications. The study is motivated to address the following: (1) modeling degradations in Si pn-junction solar cells (devices-under-test, or DUTs) under different ionizing radiation dosages; (2) preemptive and predictive testing to determine the aforesaid degradations, which decide the eventual reliability of the DUTs; and (3) using electrical overstressing (EOS) to emulate the fluence of ionizing radiation dosage on the DUT. Relevant analytical methods, computational efforts and experimental studies are described. Forward/reverse characteristics as well as the ac impedance performance of a set of DUTs are evaluated before and after electrical overstressing. Changes in the observed DUT characteristics are correlated to equivalent ionizing-radiation dosages. The results are compiled and cause-effect considerations are discussed. Conclusions are enumerated and inferences are made, with directions for future studies. / by George A. Thengum Pallil. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
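As general background on how such changes in characteristics are typically quantified (the thesis's own models are not reproduced here), the dark forward characteristic of a pn-junction is commonly fitted to the ideal-diode law

    \[
        I(V) = I_0\left[\exp\!\left(\frac{qV}{n\,k_B T}\right) - 1\right],
    \]

and radiation- or overstress-induced damage usually appears as an increase in the fitted saturation current I_0 and ideality factor n (additional recombination), together with a reduced photocurrent under illumination. Tracking these fitted parameters before and after stress gives a quantitative degradation measure that can be correlated with an equivalent dose.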
148

Resilient system design and efficient link management for the wireless communication of an ocean current turbine test bed

Unknown Date (has links)
To ensure that a system is robust and will continue operation even when facing disruptive or traumatic events, we have created a methodology for system architects and designers which may be used to locate risks and hazards in a design and enable the development of more robust and resilient system architectures. It uncovers design vulnerabilities by conducting a complete exploration of a system's component operational state space, observing the system from multi-dimensional perspectives, and it conducts a quantitative design space analysis by means of probabilistic risk assessment using Bayesian networks. Furthermore, we developed a tool which automates this methodology and demonstrated its use in an assessment of the OCTT PHM communication system architecture. To boost the robustness of a wireless communication system and efficiently allocate bandwidth, manage throughput, and ensure quality of service on a wireless link, we created a wireless link management architecture which applies sensor fusion to gather and store platform networked sensor metrics, uses time series forecasting to predict the platform position, and manages data transmission for the links (class-based packet scheduling and capacity allocation). To validate our architecture, we developed a link management tool capable of forecasting the link quality; it uses cross-layer scheduling and allocation to modify capacity allocation at the IP layer for various packet flows (HTTP, SSH, RTP) and to prevent congestion and priority inversion. Wireless sensor networks (WSNs) are vulnerable to a plethora of different fault types and external attacks after their deployment. To maintain trust in these systems and increase WSN reliability in various scenarios, we developed a framework for node fault detection and prediction in WSNs. Individual wireless sensor nodes sense characteristics of an object or environment. After a smart device successfully connects to a WSN's base station, these sensed metrics are gathered from each node in the network and sent to and stored on the device in real time. The framework issues alerts identifying nodes which are classified as faulty, and when specific sensors exceed a percentage of a threshold (normal range) it is capable of discerning between faulty sensor hardware and anomalous sensed conditions. Furthermore, we developed two proof-of-concept prototype applications based on this framework. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2013.
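The framework's alerting rule is only described qualitatively above; a minimal sketch of the idea (flag a reading that exceeds a fraction of its sensor's normal range, and treat a physically implausible reading as a suspected hardware fault rather than an anomalous condition) is shown below. The sensor names, ranges, and thresholds are hypothetical.

    import numpy as np

    def check_node(readings, normal_range, alert_fraction=0.9, fault_margin=0.25):
        """Classify each sensor reading of a node: far outside the plausible
        range -> suspected sensor fault; high but plausible (above a fraction
        of the normal range) -> anomalous sensed condition."""
        alerts = {}
        for sensor, value in readings.items():
            lo, hi = normal_range[sensor]
            span = hi - lo
            if value < lo - fault_margin * span or value > hi + fault_margin * span:
                alerts[sensor] = "suspected sensor fault"   # physically implausible
            elif value > lo + alert_fraction * span:
                alerts[sensor] = "anomalous condition"      # unusual but plausible
        return alerts

    # Hypothetical node data gathered at the base station / smart device
    readings = {"temperature_C": 78.0, "humidity_pct": 131.0}
    normal_range = {"temperature_C": (10.0, 80.0), "humidity_pct": (20.0, 100.0)}
    print(check_node(readings, normal_range))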
149

New solution schemes in constrained redundancy optimization

January 1999 (has links)
by Lam Ngok. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 111-114). / Abstracts in English and Chinese.
Contents:
Chapter 1 Introduction --- p.1
  1.1 Overview --- p.1
  1.2 Organization outline --- p.2
Chapter 2 Fundamentals of reliability theory --- p.4
  2.1 State vector --- p.4
  2.2 Minimal path sets --- p.4
  2.3 Minimal cut sets --- p.7
  2.4 Structure functions --- p.7
  2.5 The structure functions and the systems reliability --- p.10
Chapter 3 Literature review --- p.12
  3.1 Introduction --- p.12
  3.2 Approximation schemes --- p.13
  3.3 Heuristic search schemes --- p.14
  3.4 Exact solution schemes --- p.16
  3.5 Software reliability --- p.17
Chapter 4 Characteristics of series-parallel networks --- p.18
  4.1 Series-parallel network problem formulation --- p.18
  4.2 Characteristics of series-parallel networks --- p.19
  4.3 Some further properties of Maximal Monotonicity --- p.20
    4.3.1 Definitions and the background --- p.20
    4.3.2 The proper and the improper MMPs --- p.22
  4.4 Examples --- p.37
  4.5 Computational results --- p.39
  4.6 New progress --- p.40
Chapter 5 Extensions for the series-parallel reducible networks --- p.43
  5.1 Some new notations for computation --- p.44
    5.1.1 Notation --- p.44
  5.2 Problem formulation --- p.46
  5.3 The series-parallel reducible networks --- p.46
  5.4 The algorithm --- p.54
Chapter 6 On "Successive Solution Scheme For Constrained Redundancy Optimization In Reliability Networks" [1] --- p.56
  6.1 Introduction --- p.56
  6.2 The contents --- p.56
    6.2.1 The motivation --- p.56
    6.2.2 The Successive Solution Scheme --- p.57
  6.3 Illustrative examples --- p.62
    6.3.1 Example 1 --- p.62
    6.3.2 Example 2 --- p.67
Chapter 7 Conclusions --- p.76
Chapter 8 Appendix --- p.79
  8.1 Computational results --- p.79
  8.2 Programme codes --- p.97
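Only the table of contents of this thesis is reproduced here. As general background on the problem it treats, the reliability of a series-parallel redundancy allocation, and a brute-force version of the constrained optimization, can be written as in the sketch below; the stage reliabilities, costs, budget, and function names are hypothetical and the exhaustive enumeration is only practical for tiny instances.

    from math import prod
    from itertools import product as cartesian

    def system_reliability(stage_reliability, redundancy):
        """Series arrangement of parallel stages: stage i holds redundancy[i]
        identical components of reliability stage_reliability[i]; a stage works
        if at least one component works, and the system works only if every
        stage works:  R = prod_i (1 - (1 - r_i)**n_i)."""
        return prod(1.0 - (1.0 - r) ** n for r, n in zip(stage_reliability, redundancy))

    def total_cost(cost, redundancy):
        return sum(c * n for c, n in zip(cost, redundancy))

    # Hypothetical 3-stage constrained redundancy problem: maximize R under a budget
    r, c, budget = [0.80, 0.90, 0.70], [3.0, 2.0, 4.0], 20.0
    best = max(
        (alloc for alloc in cartesian(range(1, 5), repeat=3) if total_cost(c, alloc) <= budget),
        key=lambda alloc: system_reliability(r, alloc),
    )
    print(best, system_reliability(r, best))

The solution schemes studied in the thesis (approximation, heuristic, and exact schemes for series-parallel and series-parallel reducible networks) are precisely ways of avoiding this exhaustive enumeration.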
150

Maintenance model and warranty problem.

January 2000 (has links)
Tse Yee Kit. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 55-58). / Abstracts in English and Chinese.
Contents:
Chapter 1 Introduction
  1.1 Geometric Process and Maintenance Problem --- p.1
  1.2 Warranty Problem --- p.5
  1.3 An Outline of the Thesis --- p.8
Chapter 2 Multistate Deteriorative System
  2.1 The Multistate Model --- p.10
  2.2 Long-run Average Cost Per Unit Time --- p.15
  2.3 The Optimal Policy N* --- p.18
  2.4 The Monotonicity of the Optimal Policy --- p.21
Chapter 3 Extended Warranty Model
  3.1 The Extended Warranty Model --- p.30
  3.2 The Expected Discounted Cost Over the Lifetime Cycle [0,T] --- p.34
    3.2.1 Consumer's Discounted Cost --- p.34
    3.2.2 Manufacturer's Discounted Cost --- p.37
  3.3 The Exponential Distribution Case --- p.40
  3.4 Numerical Examples --- p.51
Bibliography --- p.55
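The multistate model and its cost function are developed in the thesis itself; as background, the classical single-component geometric-process replacement policy (which such models generalize) has a long-run average cost that follows directly from the renewal-reward theorem. The sketch below uses hypothetical parameters and a hypothetical function name, not the thesis's model.

    import numpy as np

    def long_run_cost(N, lam=50.0, a=1.15, mu=3.0, b=1.1, c=20.0, R=3000.0):
        """Long-run average cost per unit time of the policy 'replace the system
        at the N-th failure' in a geometric-process model: successive operating
        times have mean lam / a**(n-1) (decreasing, a >= 1) and successive repair
        times have mean mu * b**(n-1) (increasing, b >= 1); c is the repair cost
        rate and R the replacement cost.  By the renewal-reward theorem,
        average cost = expected cycle cost / expected cycle length."""
        n = np.arange(1, N + 1)
        up = np.sum(lam / a ** (n - 1))                   # N operating periods
        down = np.sum(mu * b ** (np.arange(1, N) - 1))    # N - 1 repairs
        return (c * down + R) / (up + down)

    costs = {N: long_run_cost(N) for N in range(1, 31)}
    N_star = min(costs, key=costs.get)
    print(f"optimal policy N* = {N_star}, long-run average cost = {costs[N_star]:.2f}")

A similar renewal-reward argument underlies the optimal policy N* and the monotonicity results studied in Chapter 2.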
