  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

ENHANCING AUTONOMOUS UNDERWATER VEHICLE MISSION DEPENDABILITY THROUGH ADAPTIVE DYNAMIC REDUNDANCY

Barhaido, Matteus January 2024 (has links)
This master's thesis presents and discusses a novel approach to enhancing the mission dependability of Autonomous Underwater Vehicles (AUVs) through Adaptive Dynamic Redundancy (ADR). AUVs demand improved dependability not only because they operate in harsh and unpredictable waters, but also because mission failures lead to mission aborts and the potential loss of valuable data and equipment. ADR was implemented for AUV thrusters using curated methods. ADR stands out because it maintains high dependability at lower power consumption, adapting to environmental data and conditions, thruster health, and mission criticality. By switching between Dual Modular Redundancy (DMR) and 1-out-of-3 redundancy, ADR minimizes the risk of failure while optimizing power consumption and reducing wear and tear on the thrusters, both now and over their remaining service life. A comparative analysis demonstrates that ADR enhances dependability by improving the reliability, safety, and operational efficiency of AUVs relative to standard redundancy concepts. The findings suggest that ADR not only prevents failures more effectively than Triple Modular Redundancy (TMR) and DMR, but also significantly extends mission lifespan and increases overall mission success rates.
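The mode-switching idea in this abstract can be illustrated with a small decision routine. This is a hypothetical sketch, not the thesis's implementation; the thresholds and input names (`thruster_health`, `mission_criticality`, `battery_level`) are illustrative assumptions.

```python
def select_redundancy_mode(thruster_health: float,
                           mission_criticality: float,
                           battery_level: float) -> str:
    """Pick a thruster redundancy mode; all inputs normalized to [0, 1].

    Thresholds are illustrative, not taken from the thesis.
    """
    if battery_level < 0.2:
        # Energy is scarce: fall back to the cheaper dual configuration.
        return "DMR"
    if thruster_health < 0.5 or mission_criticality > 0.8:
        # Degraded hardware or a critical mission phase justifies the
        # power cost of 1-out-of-3, which survives two thruster failures.
        return "1oo3"
    return "DMR"
```

A real controller would also weigh environmental data, as the abstract notes, and would add hysteresis so the mode does not chatter when an input hovers near a threshold.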
172

Analysis and design of reliable mixed-signal CMOS circuits

Xuan, Xiangdong 04 August 2004 (has links)
Facing constantly increasing reliability challenges under technology scaling, IC reliability techniques have received serious attention in recent years. In this work, based on an understanding of existing physical failure models, which have concentrated on pre-fab circuits, a set of revised models for major failure mechanisms such as electromigration, hot-carrier injection, and gate oxide wear-out is created. Beyond modeling degradation behavior for circuits in the design phase, these models address post-fab device characteristics in the presence of physical defects. In addition, the simulation work is carried hierarchically from device level to circuit level, evaluating circuit-level reliability such as the degradation of circuit-level specifications and circuit lifetime prediction. For post-fab ICs under electromigration, the expected circuit lifetime is calculated using statistical processes and probability theory. By incorporating all physics-of-failure models and applying circuit-level simulation approaches, an IC reliability simulator called ARET (ASIC Reliability Evaluation Tool) has been developed. Beyond reliability evaluation, ARET provides a reliability hotspot identification function, a key step in conducting local design-for-reliability approaches. ARET has been calibrated against a series of stress tests conducted at The Boeing Company. Design-for-reliability (DFR) is a still-immature technical area that has become critical as reliability safety margins continue to shrink. A novel concept, local design-for-reliability, is proposed in this work. This DFR technique is closely based on reliability simulation and hotspot identification. By redesigning the circuit locally around reliability hotspots, the approach improves overall reliability while maintaining circuit performance. 
Various DFR algorithms are developed for different circuit situations. Experiments on designed and benchmark circuits show that significant reliability improvements can be obtained without compromising performance by applying these DFR algorithms.
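Electromigration lifetime models of the kind this abstract mentions are commonly built on Black's equation. The minimal sketch below is illustrative; the fitting constants `a`, `n`, and `ea` are assumptions, not values from ARET.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def em_mttf(current_density, temp_kelvin, a=1e10, n=2.0, ea=0.7):
    """Black's equation: MTTF = A * J^(-n) * exp(Ea / kT).

    current_density in A/cm^2, temp_kelvin in K. The prefactor a,
    current-density exponent n, and activation energy ea (eV) are
    illustrative fitting constants, not ARET's calibrated values.
    """
    return a * current_density ** (-n) * math.exp(
        ea / (BOLTZMANN_EV * temp_kelvin))
```

The model captures the two levers reliability hotspots expose: higher current density and higher local temperature both shorten the expected interconnect lifetime, which is why local redesign around hotspots can recover reliability without a global performance cost.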
173

An introduction to a reliability shorthand

Repicky, John J., Jr. 03 1900 (has links)
Approved for public release; distribution is unlimited / The determination of a system's life distribution usually requires the synthesis of a mixture of system survival modes. To alleviate the normally non-trivial calculations, this paper presents the concept of a reliability shorthand. After describing the possible ways a system can survive a mission, the practitioner of this shorthand can use stock formulas to obtain the system's survival function. Simple insertion of the failure rates of the system's components into the known equations then yields the system's reliability. Simple examples show the convenience of this shorthand. The TI-59 is demonstrated to be a useful tool, adequate to implement the methodology. / http://archive.org/details/introductiontore00repi / Lieutenant Commander, United States Navy
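For exponentially distributed component lifetimes, the stock formulas the abstract refers to reduce to the familiar series and parallel survival functions. This is a sketch of that standard machinery, an assumption about the shorthand's content rather than a reproduction of it:

```python
import math

def series_reliability(failure_rates, t):
    """All components must survive: R(t) = exp(-sum(lambda_i) * t)."""
    return math.exp(-sum(failure_rates) * t)

def parallel_reliability(failure_rates, t):
    """At least one component survives:
    R(t) = 1 - prod(1 - exp(-lambda_i * t))."""
    prob_all_fail = 1.0
    for lam in failure_rates:
        prob_all_fail *= 1.0 - math.exp(-lam * t)
    return 1.0 - prob_all_fail
```

More complex survival modes decompose into nested applications of these two forms, which is what makes a shorthand practical even on a programmable calculator like the TI-59.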
174

Reliability analysis of the 4.5 roller bearing

Muller, Cole 06 1900
Approved for public release; distribution is unlimited / The J-52 engine used in the EA-6B Prowler has been found to have a faulty design that has led to in-flight engine failures due to degradation of the 4.5 roller bearing. Because of cost constraints, the Navy adopted a policy of maintaining, rather than replacing, the faulty engine with a redesigned one. With an increase in Prowler crashes related to the failure of this bearing, the Navy has begun to re-evaluate this policy. This thesis analyzed the problem using methods from reliability statistics to develop policy recommendations for the Navy. One method analyzed the individual times to failure of the bearings and fit the data to a known distribution. Using this distribution, we estimated lower confidence bounds for the time by which 0.0001% of the bearings are expected to fail, finding it was below fifty hours. Such calculations can be used to form maintenance and replacement policies. Another approach analyzed oil samples taken from the J-52 engine. The oil samples contain particles of the different metals that compose the 4.5 roller bearing. Linear regression, classification and regression trees, and discriminant analysis were used to determine that molybdenum and vanadium levels are good indicators of when a bearing is near failure. / http://hdl.handle.net/10945/945 / Ensign, United States Navy
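The quantile calculation described above, fitting a lifetime distribution and estimating the time by which a tiny fraction of bearings fail, can be sketched for a Weibull fit. The shape and scale values below are illustrative, not the thesis's fitted parameters, and this point estimate omits the lower confidence bound the thesis computes.

```python
import math

def weibull_quantile(p, shape, scale):
    """Time by which a fraction p of units has failed, Weibull(shape, scale).

    Inverts F(t) = 1 - exp(-(t/scale)^shape); log1p keeps tiny p accurate.
    """
    return scale * (-math.log1p(-p)) ** (1.0 / shape)

# Illustrative parameters only; p = 1e-6 is the 0.0001% point.
t_point = weibull_quantile(1e-6, shape=2.0, scale=5000.0)
```

In practice one would fit `shape` and `scale` by maximum likelihood from the observed failure times and then bound the quantile from below to get a conservative replacement interval.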
175

Determine network survivability using heuristic models

Chua, Eng Hong 03 1900 (has links)
Approved for public release; distribution is unlimited. / Contemporary large-scale networked systems have improved the efficiency and effectiveness of our way of life. However, these benefits are accompanied by elevated risks of intrusion and compromise. Incorporating survivability capabilities into systems is one way to mitigate these risks. The Server Agent-based Active network Management (SAAM) project was initiated as part of the Next Generation Internet project to address increasing multimedia Internet service demands. Its objective is to provide a consistent and dedicated quality of service to users. SAAM monitors network traffic conditions in a region and responds to routing requests from the routers in that region with optimal routes. Mobility has been incorporated into the SAAM server to prevent a single point of failure from bringing down the entire server and its service. With mobility, it is very important to select a good SAAM server locality from the client's point of view: the server must be placed at a node whose connection to the client is most survivable. To that end, a general metric is defined to measure the connection survivability of each potential server host. However, due to the complexity of the network, computing the metric is also complex. This thesis develops heuristic solutions of polynomial complexity to find the hosting server node, minimizing the time and computing power required. / Defence Science & Technology Agency (Singapore)
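One polynomial-time heuristic of the kind the abstract describes is to score each candidate server host by its most reliable path to each client (Dijkstra over negative log link reliabilities) and pick the host whose worst-case client survivability is highest. This is an illustrative sketch under that assumption, not the thesis's actual heuristic.

```python
import heapq
import math

def best_path_survivability(adj, src):
    """Most-reliable-path survivability from src to every reachable node.

    adj: {node: [(neighbor, link_reliability), ...]}. Running Dijkstra
    on -log(reliability) turns path products into sums, so this is
    polynomial-time in the network size.
    """
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, r in adj.get(u, []):
            nd = d - math.log(r)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return {n: math.exp(-d) for n, d in dist.items()}

def choose_server(adj, candidates, clients):
    """Pick the candidate whose worst-case client survivability is highest."""
    def score(c):
        surv = best_path_survivability(adj, c)
        return min(surv.get(cl, 0.0) for cl in clients)
    return max(candidates, key=score)
```

The max-min objective reflects the abstract's goal: the chosen host should be the one from which the connection to the most poorly connected client is still most survivable.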
176

Fault tolerance and reliability patterns

Unknown Date (has links)
The need to achieve dependability in critical infrastructures has become indispensable for government and commercial enterprises. This need has grown with the proliferation of malicious attacks on critical systems, such as healthcare, aerospace, and airline applications. Additionally, because web services are widely used in critical systems, ensuring their reliability is paramount. We believe that patterns can be used to achieve dependability. We conducted a survey of fault tolerance, reliability, and web service products and patterns to better understand them. One objective of our survey was to evaluate the state of these patterns and to investigate which standards are used in products and their tool support. Our survey found that these patterns are insufficient and that many web services products do not use them. In light of this, we wrote several fault tolerance and web services reliability patterns and present an analysis of them. / by Ingrid A. Buckley. / Thesis (M.S.C.S.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2008. Mode of access: World Wide Web.
177

Towards a methodology for building reliable systems

Unknown Date (has links)
Reliability is a key system characteristic and an increasing concern for current systems. Greater reliability is necessary because of the new ways in which services are delivered to the public, across industries including health care, government, telecommunications, and consumer products. We have defined an approach to incorporating reliability along the stages of system development. We first surveyed existing dependability patterns to evaluate their possible use in this methodology. We then defined a systematic methodology that helps the designer apply reliability, in the form of patterns, at every step of the development life cycle. A systematic failure enumeration process, which defines corresponding countermeasures, was proposed as a guideline for determining where reliability is needed. We introduced the idea of failure patterns, which show how failures manifest and propagate in a system. We also examined how to combine reliability and security. Finally, we defined an approach to certify the reliability level of an implemented web service. Together, these steps lead toward a complete methodology. / by Ingrid A. Buckley. / Thesis (Ph.D.)--Florida Atlantic University, 2012. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2012. Mode of access: World Wide Web.
178

On fault tolerance, performance, and reliability for wireless and sensor networks. / CUHK electronic theses & dissertations collection

January 2005 (has links)
Finally, to obtain a long network lifetime without sacrificing crucial aspects of quality of service (area coverage, sensing reliability, and network connectivity) in wireless sensor networks, we present sensibility-based sleeping configuration protocols (SSCPs) with two sensing models: the Boolean sensing model (BSM) and the collaborative sensing model (CSM). (Abstract shortened by UMI.) / Furthermore, we extend traditional reliability analysis. Wireless networks have a unique handoff characteristic that leads to communication structures of various types, with differing numbers of components and links, so the traditional definition of two-terminal reliability is no longer applicable. We propose a new term, end-to-end mobile reliability, to integrate these different communication structures into one metric, which includes not only failure parameters but also service parameters. It remains a monotonically decreasing function of time. With the proposed end-to-end mobile reliability, we can identify the reliability importance of imperfect components in wireless networks. / The emerging mobile wireless environment poses exciting challenges for distributed fault-tolerant (FT) computing. This thesis develops a message logging and recovery protocol on top of Wireless CORBA to complement FT-CORBA, which was specified for wired networks. It employs the storage available at the access bridge (AB) as stable storage for logging messages and saving checkpoints on behalf of mobile hosts (MHs). Our approach combines quasi-sender-based and receiver-based message logging techniques and conducts seamless handoff in the presence of failures. / We then extend the analysis of program execution time, without and with checkpointing, in the presence of MH failures from wired to wireless networks. 
Due to the underlying message-passing communication mechanism, we use the number of received computational messages, instead of time, to indicate the completion of program execution at an MH. Handoff is another distinct factor that must be taken into consideration in mobile wireless environments. Three checkpointing strategies (deterministic, random, and time-based checkpointing) are investigated. In our approach, failures may occur during checkpointing and recovery periods. / Chen Xinyu. / "June 2005." / Adviser: Michael R. Lyu. / Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3889. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 180-198). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
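Checkpointing analyses of the kind mentioned above are commonly approached with first-order models such as Young's approximation for the optimal checkpoint interval under exponential failures. The sketch below measures progress in time rather than in received messages as the thesis does, so it is an analogy to, not a reproduction of, the thesis's model.

```python
import math

def young_interval(checkpoint_cost, failure_rate):
    """Young's first-order optimal checkpoint interval: sqrt(2C / lambda)."""
    return math.sqrt(2.0 * checkpoint_cost / failure_rate)

def expected_runtime(work, interval, checkpoint_cost, failure_rate):
    """Expected completion time with exponential failures, where each
    failed segment restarts from its last checkpoint.

    Expected time to finish one segment of useful length `interval`
    plus checkpoint cost C is (exp(lambda*(interval+C)) - 1) / lambda.
    """
    segments = work / interval
    per_segment = (math.exp(failure_rate * (interval + checkpoint_cost)) - 1.0
                   ) / failure_rate
    return segments * per_segment
```

Checkpointing too often wastes time on checkpoint overhead; too rarely, on re-execution after failures. The interval formula balances the two, and a mobile-host version would additionally fold in handoff events, as the abstract notes.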
179

A Current Sweep Method for Assessing the Mixed-Mode Damage Spectrum of SiGe HBTs

Cheng, Peng 15 November 2007 (has links)
In this work, a new current-sweep stress methodology for quantitatively assessing the mixed-mode reliability (simultaneous application of high current and high voltage) of advanced SiGe HBTs is presented. This methodology allows one to quickly obtain the complete damage spectrum of a given device from a particular technology platform, enabling a better understanding of the complex voltage, current, and temperature interdependence associated with electrical stress and burn-in of advanced transistors. We consistently observed three distinct regions of mixed-mode damage in SiGe HBTs, and found that hot-carrier-induced damage can be introduced under surprisingly modest mixed-mode stress conditions. For more aggressively scaled silicon-germanium technology generations, a larger percentage of the hot carriers generated in the collector-base junction are able to travel to, and hence damage, the EB spacer, leading to enhanced forward-mode base current leakage under stress. A new self-heating-induced mixed-mode annealing effect was observed for the first time under fairly high voltage and current stress conditions, and a new damage mechanism was observed under very high voltage and current conditions. Finally, as an example of the utility of our stress methodology, we quantified the composite mixed-mode damage spectrum of a commercial third-generation (200 GHz) SiGe HBT. We found that devices stressed with either voltage or current alone during burn-in can easily withstand extreme over-stress conditions; however, devices were easily damaged when stressed with a combination of voltage and current, which has significant implications for device and circuit lifetime prediction under realistic mixed-signal operating conditions.
180

Reliability characterization and prediction of high k dielectric thin film

Luo, Wen 12 April 2006 (has links)
As technologies continue advancing, semiconductor devices with dimensions in nanometers have entered all spheres of human life. This research deals with both the statistical aspects of reliability and some electrical aspects of reliability characterization. As an example of nano devices, TaO<sub>x</sub>-based high-k dielectric thin films are studied for failure mode identification, accelerated life testing, lifetime projection, and failure rate estimation. Experiments and analysis of dielectric relaxation and transient current show that the relaxation current of high-k dielectrics is distinct from the trapping/detrapping current of SiO<sub>2</sub>; high-k films have a lower leakage current but a higher relaxation current than SiO<sub>2</sub>. Based on the connection between polarization-relaxation and film integrity demonstrated in ramped voltage stress tests, a new method of breakdown detection is proposed: it monitors relaxation during the test and uses the disappearance of relaxation current as the signal of a breakdown event. This research develops a Bayesian approach suitable for reliability estimation and prediction of current and future generations of nano devices. It combines the Weibull lifetime distribution with an empirical acceleration relationship and puts the model parameters into a hierarchical Bayesian structure. The value of the Bayesian approach lies in its ability to fully utilize available information in modeling uncertainty and to provide cogent predictions with limited resources in a reasonable period of time. Markov chain Monte Carlo simulation is used for posterior inference of the reliability projection and for sensitivity analysis over a variety of vague priors. Time-to-breakdown data collected in the accelerated life tests are also modeled with a bathtub failure rate curve. 
The decreasing failure rate is estimated with a non-parametric Bayesian approach, and the constant failure rate with a regular parametric Bayesian approach. This method provides a fast and reliable estimation of failure rate for burn-in optimization when only a small sample of data is available.
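The core of combining a Weibull lifetime distribution with an empirical acceleration relationship can be sketched deterministically, leaving out the hierarchical Bayesian machinery the abstract describes. The exponential voltage-acceleration form and the `gamma` value below are assumptions for illustration, not the dissertation's fitted model.

```python
import math

def acceleration_factor(v_stress, v_use, gamma=8.0):
    """Empirical exponential voltage acceleration: AF = exp(gamma * dV).

    gamma (per volt) is an illustrative value, not a fitted one.
    """
    return math.exp(gamma * (v_stress - v_use))

def project_use_quantile(p, shape, scale_at_stress, v_stress, v_use,
                         gamma=8.0):
    """Project the p-th failure quantile from stress to use conditions.

    Assumes the Weibull shape is shared across stress levels, so only
    the scale (characteristic life) is stretched by the AF.
    """
    scale_use = scale_at_stress * acceleration_factor(v_stress, v_use, gamma)
    return scale_use * (-math.log1p(-p)) ** (1.0 / shape)
```

A Bayesian treatment replaces the fixed `shape`, `scale_at_stress`, and `gamma` with posterior distributions inferred by MCMC, so the projected quantile comes with credible intervals rather than a single point.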
