141 |
System reliability from component reliabilities / Duffett, James Roy, January 1959 (has links)
This dissertation considers the synthesis of system reliability from the reliabilities of the components constituting the system. For context, major emphasis is placed on complex missile systems. / Ph. D.
|
142 |
Reliability assessment under incomplete information: an evaluative study / Hernandez Ruiz, Ruth, 12 March 2009 (has links)
Traditionally, in reliability design, the random variables acting on a system are assumed independent. This assumption is usually poor because in most real-life problems the variables are correlated. The available information is most often limited to the first and second moments. Very few methods can handle correlation between the variables when the joint probability density function is unknown, and there are no reports on the accuracy of these methods.
This work presents an evaluative study of reliability under incomplete information, comparing three existing methods for calculating the probability of failure: the method presented by Ang and Tang, which assumes the correlation between the variables to be invariant; Kiureghian and Liu's method, which accounts for the change in correlation; and Rackwitz's method, under the assumption of independence. We have also developed a new algorithm to generate random samples of correlated random variables when the marginal distributions and correlation coefficients of these variables are specified. These samples can be used in Monte Carlo simulation, which serves as a tool for comparing the three methods described above. This Monte Carlo approach is based on the assumption of a normal joint probability density function, as considered by Kiureghian and Liu. To examine whether this approach is biased towards Kiureghian and Liu, a second Monte Carlo approach with no assumption about the joint probability density function is developed and compared with the first.
Both methods that account for correlation show a clear advantage over the traditional approach of assuming that the variables are independent. Moreover, Kiureghian and Liu's approach proved to be more accurate in most cases than Ang and Tang's method.
In this study, it is also shown that there is an error in calculating the safety index for correlated variables when any of the methods under study is implemented, because the joint probability density function of the random variables is neglected. / Master of Science
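A standard way to realize the sampling algorithm described in this abstract (specified marginals plus a correlation matrix) is a Gaussian-copula, NORTA-style construction; the helper name and the two-variable example below are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np
from scipy import stats

def correlated_samples(marginals, corr, n, seed=0):
    """Draw n samples whose marginals and (approximate) correlation
    match the given specification, via a Gaussian copula."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)             # factor the target correlation
    z = rng.standard_normal((n, len(marginals))) @ L.T  # correlated normals
    u = stats.norm.cdf(z)                    # map to correlated uniforms
    # invert each marginal CDF to impose the requested distributions
    return np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

# illustrative example: lognormal load vs. normal resistance, rho = 0.5
marginals = [stats.lognorm(s=0.25, scale=10.0), stats.norm(loc=30.0, scale=3.0)]
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
x = correlated_samples(marginals, corr, n=50_000)
```

Strictly, the correlation imposed in normal space differs slightly from the resulting correlation of the transformed variables; Kiureghian and Liu's correction factors address exactly this discrepancy.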
|
143 |
Applications of fuzzy logic to mechanical reliability analysis / Touzé, Patrick A., 14 March 2009 (has links)
In this work, fuzzy sets are used to express data or model uncertainty in structural systems where random variables have traditionally been employed. / Master of Science
|
144 |
Uncertainty in marine structural strength with application to compressive failure of longitudinally stiffened panels / Hess, Paul E., 24 January 2009 (has links)
It is important in structural analysis and design, whether deterministic or reliability-based, to know the level of uncertainty for the methods of strength prediction. The uncertainty associated with strength prediction is the result of ambiguity and vagueness in the system. This study addresses the ambiguity component of uncertainty; this includes uncertainty due to randomness in the basic strength parameters (random uncertainty) and systematic errors and scatter in the prediction of strength (modeling uncertainty). The vagueness component is briefly discussed.
A methodology for quantifying modeling and random uncertainty is presented for structural failure modes with a well-defined limit state. A methodology is also presented for ranking the basic strength parameters by their contribution to the total random uncertainty. These methodologies are applied to the compressive failure of longitudinally stiffened panels. The strength prediction model used in this analysis was developed in the UK and is widely used in analysis and design. Several experimental sample sets are used in the analysis. Mean values and coefficients of variation are reported for the random and modeling uncertainties.
A comparison with results from other studies, covering several strength prediction algorithms, is undertaken for the modeling uncertainty. All of these studies involve longitudinally stiffened panels that fail by axial compressive collapse. Ranges for the mean and coefficient of variation of the modeling uncertainty are presented. / Master of Science
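The modeling-uncertainty quantification described in this abstract is conventionally based on the ratio of measured to predicted strength for each test specimen; the sketch below uses made-up specimen data (the thesis's panel data and UK prediction model are not reproduced here):

```python
import numpy as np

def modeling_uncertainty(measured, predicted):
    """Bias (mean) and COV of the modeling-uncertainty ratio
    X_m = measured strength / predicted strength."""
    x = np.asarray(measured) / np.asarray(predicted)
    mean = x.mean()
    cov = x.std(ddof=1) / mean   # coefficient of variation
    return mean, cov

# hypothetical collapse strengths (MPa) for five stiffened-panel specimens
measured  = [231.0, 258.0, 212.0, 275.0, 244.0]
predicted = [240.0, 250.0, 225.0, 260.0, 255.0]
bias, cov = modeling_uncertainty(measured, predicted)
```

A bias near 1.0 indicates the prediction model is unbiased on average; the COV measures its scatter across specimens.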
|
145 |
A software application with novel approaches for modeling network systems and computing performance measures / Bruce, Steven, 01 January 1998 (has links)
No description available.
|
146 |
Reliability of space systems: a multi-element integrated approach / Baker, Gena Humphrey, 01 October 2000 (has links)
No description available.
|
147 |
The influence of critical asset management facets on improving reliability in power systems / Perkel, Joshua, 04 November 2008 (has links)
The objective of the proposed research is to develop statistical algorithms for controlling failure trends through targeted maintenance of at-risk components. The at-risk components are identified via chronological history and diagnostic data, where available. Utility systems include many thousands (possibly millions) of components, many of which have already exceeded their design lives. Unfortunately, neither the budget nor the manufacturing resources exist to allow for the immediate replacement of all these components. On the other hand, the utility cannot tolerate a decrease in reliability or the associated increase in costs. To combat this problem, an overall maintenance model has been developed that utilizes all the available historical information (failure rates and population sizes) and diagnostic tools (real-time condition of each component) to generate a maintenance plan. This plan must deliver the needed reliability improvements while remaining economical. It consists of three facets, each of which addresses one of the critical asset management issues:
* Failure Prediction Facet - Statistical algorithm for predicting future failure trends and estimating required numbers of corrective actions to alter these failure trends to desirable levels. Provides planning guidance and expected future performance of the system.
* Diagnostic Facet - Development of diagnostic data and techniques for assessing the accuracy and validity of that data. Provides the true effectiveness of the different diagnostic tools that are available.
* Economics Facet - Stochastic model of the economic benefits that may be obtained from diagnostic-directed maintenance programs. Provides the cost model that may be used for budgeting purposes.
These facets function together to generate a diagnostic-directed maintenance plan whose goal is to provide the best available guidance for maximizing reliability gains within the budgetary limits utility engineers must operate under.
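As a toy illustration of the failure-prediction facet, one common approach is to fit a trend to historical annual failure counts and project it forward; the log-linear model, function names, and counts below are assumptions for illustration, not the statistical model developed in the thesis:

```python
import numpy as np

def forecast_failures(history, horizon):
    """Fit a log-linear (exponentially growing) trend to annual failure
    counts and project it `horizon` years forward."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, np.log(history), 1)
    future_t = np.arange(len(history), len(history) + horizon)
    return np.exp(intercept + slope * future_t)

def corrective_actions(history, horizon, target):
    """Failures per year above `target` that maintenance must avert."""
    return np.maximum(forecast_failures(history, horizon) - target, 0.0)

# hypothetical annual failure counts for an aging component population
history = [12, 15, 19, 24, 31]
needed = corrective_actions(history, horizon=3, target=20.0)
```

The projected shortfall grows year over year, which is the planning signal such a facet would hand to the economics facet for budgeting.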
|
148 |
Detector multiusuário sub-ótimo por confiabilidade de amostras / Sub-optimal multiuser detector based on reliable samples / Frison, Celso Iwata, 21 October 2009 (has links)
Advisor: Celso de Almeida / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Among all the existing multiuser detection techniques in CDMA systems, the one that yields the minimum symbol error probability is called optimum. However, this performance is obtained at a high complexity in the number of calculations, which makes the technique impracticable in real systems. A sub-optimum multiuser detector is therefore proposed for a synchronous CDMA system: it applies reliability thresholds to the received samples to classify them as reliable or non-reliable, and each classified sample receives a different treatment in the bit detection process. The insertion of these reliability thresholds showed that performance similar to that of the optimum multiuser detector can be achieved, and at the same time with a significant reduction in the number of calculations (detector complexity). Theoretical equations for the complexity and the bit error rate are derived; these analytical expressions are tight when compared to the corresponding simulations. / Master's / Telecommunications and Telematics / Master in Electrical Engineering
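The reliability-threshold idea can be sketched as follows: reliable matched-filter samples are decided by sign, and an exhaustive maximum-likelihood search runs only over the unreliable bits. This is a minimal illustration with assumed names and a toy 3-user channel, not the thesis's exact algorithm:

```python
import numpy as np
from itertools import product

def threshold_detector(y, R, thr):
    """Sub-optimum detection sketch for synchronous CDMA: y holds the
    matched-filter outputs, R the signature crosscorrelation matrix.
    Samples with |y_k| >= thr are taken as reliable and decided by sign;
    only the unreliable bits are searched exhaustively."""
    b = np.sign(y)                               # tentative hard decisions
    unreliable = np.where(np.abs(y) < thr)[0]
    if unreliable.size == 0:
        return b
    best, best_metric = b.copy(), -np.inf
    for bits in product((-1.0, 1.0), repeat=unreliable.size):
        cand = b.copy()
        cand[unreliable] = bits
        # ML metric for synchronous CDMA: 2 b^T y - b^T R b
        metric = 2.0 * cand @ y - cand @ R @ cand
        if metric > best_metric:
            best, best_metric = cand.copy(), metric
    return best

# toy example: three users with pairwise crosscorrelation 0.3
R = np.array([[1.0, 0.3, 0.3], [0.3, 1.0, 0.3], [0.3, 0.3, 1.0]])
b_true = np.array([1.0, -1.0, 1.0])
y = R @ b_true                 # noiseless matched-filter outputs
b_hat = threshold_detector(y, R, thr=0.8)
```

The search cost is 2^u for u unreliable samples instead of 2^K for all K users, which is where the reduction in the number of calculations comes from.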
|
149 |
Solar cell degradation under ionizing radiation ambient: preemptive testing and evaluation via electrical overstressing / Unknown Date (has links)
This thesis addresses the assay of degradations in modern solar cells used in space-borne and/or nuclear-environment applications. The study is motivated to address the following: 1. modeling degradations in Si pn-junction solar cells (devices-under-test, or DUTs) under different ionizing radiation dosages; 2. preemptive and predictive testing to determine the aforesaid degradations, which decide the eventual reliability of the DUTs; and 3. using electrical overstressing (EOS) to emulate the fluence of ionizing radiation dosage on the DUT. Relevant analytical methods, computational efforts, and experimental studies are described. Forward/reverse characteristics as well as AC impedance performance of a set of DUTs are evaluated before and after electrical overstressing. Changes in the observed DUT characteristics are correlated to equivalent ionizing-radiation dosages. The results are compiled, cause-effect considerations are discussed, and conclusions are enumerated with directions for future studies. / by George A. Thengum Pallil. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
|
150 |
Resilient system design and efficient link management for the wireless communication of an ocean current turbine test bed / Unknown Date (has links)
To ensure that a system is robust and will continue operating even when facing disruptive or traumatic events, we have created a methodology for system architects and designers that may be used to locate risks and hazards in a design and enable the development of more robust and resilient system architectures. It uncovers design vulnerabilities by conducting a complete exploration of a system's component operational state space, observing the system from multi-dimensional perspectives, and performing a quantitative design-space analysis by means of probabilistic risk assessment using Bayesian networks. Furthermore, we developed a tool which automates this methodology and demonstrated its use in an assessment of the OCTT PHM communication system architecture.
To boost the robustness of a wireless communication system and efficiently allocate bandwidth, manage throughput, and ensure quality of service on a wireless link, we created a wireless link management architecture which applies sensor fusion to gather and store platform networked sensor metrics, uses time series forecasting to predict the platform position, and manages data transmission for the links (class-based packet scheduling and capacity allocation). To validate our architecture, we developed a link management tool that forecasts link quality and uses cross-layer scheduling and allocation to modify capacity allocation at the IP layer for various packet flows (HTTP, SSH, RTP) and prevent congestion and priority inversion.
Wireless sensor networks (WSNs) are vulnerable to a plethora of different fault types and external attacks after their deployment. To maintain trust in these systems and increase WSN reliability in various scenarios, we developed a framework for node fault detection and prediction in WSNs. Individual wireless sensor nodes sense characteristics of an object or environment. After a smart device successfully connects to a WSN's base station, the sensed metrics from each node in the network are gathered, sent to, and stored on the device in real time. The framework issues alerts identifying nodes which are classified as faulty; when specific sensors exceed a percentage of a threshold (normal range), it is capable of discerning between faulty sensor hardware and anomalous sensed conditions. Furthermore, we developed two proof-of-concept prototype applications based on this framework. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2013.
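The alert rule sketched in this abstract (threshold-percentage alerts, plus telling faulty hardware apart from genuinely anomalous conditions) can be illustrated as follows; the function, parameter names, and numeric ranges are illustrative assumptions, not the dissertation's actual framework:

```python
def classify_node(reading, normal_range, neighbor_readings, pct=0.9, tol=2.0):
    """Hypothetical alert rule: flag a reading that exceeds `pct` of its
    normal range, then use neighbor agreement to separate a faulty sensor
    from a genuinely anomalous (but real) sensed condition."""
    lo, hi = normal_range
    span = hi - lo
    # no alert while the reading stays within pct of the way to either bound
    if lo + (1 - pct) * span <= reading <= lo + pct * span:
        return "ok"
    # neighbors sense the same environment, so they should roughly agree;
    # a lone outlier points at the node's own sensor hardware
    mean_neighbor = sum(neighbor_readings) / len(neighbor_readings)
    if abs(reading - mean_neighbor) > tol:
        return "faulty-sensor"
    return "anomalous-condition"

# a temperature node with normal range 0-50 degrees:
print(classify_node(47.0, (0.0, 50.0), [46.5, 47.2, 46.8]))  # neighbors agree
print(classify_node(47.0, (0.0, 50.0), [21.0, 22.0, 20.0]))  # node disagrees
```

In a deployed framework the tolerance and threshold percentage would be tuned per sensor type rather than fixed defaults.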
|