271

Variation in load path in a wood structural system and a new reliability-based adjustment factor

Wang, Wenqi 01 May 2010 (has links)
This thesis introduces a new adjustment factor for probability-based load and resistance factor design (LRFD) of wood structures. An investigation of empirical reaction-force data for a wooden house built by the Forest Products Laboratory in 2001 shows that the reaction values exhibit great variability. To explore the causes of this variability, a 3-D finite element model is built and analyzed using the commercial software MSC/Nastran; differences in member geometry emerge as a major cause of reaction variability. To examine the potential effect this variability might have on structural safety, reliability is assessed for two different types of wood products under several different situations. Finally, a new adjustment factor, Ks, which accounts for the variability in load path, is derived and validated based on structural reliability theory.
272

Experimental Studies of Combined Reliability

Dumala, Richard 26 June 2015 (has links)
In the field of reliability, a new concept is introduced: the Combined Dependability theory put forth in this thesis. Attempts are made to verify this theory experimentally through accelerated failure tests of GE-47 miniature lamps. The lamps are tested individually and ten in series; the individual lamp test results are then used to predict the reliability of the ten lamps in series. The ten lamps in series simulate a machine with ten components, where a failure of any one component produces a failure of the machine. The reliability of the machine can be found if the reliability of each part is known. Each part, when tested individually, must be subjected to the same stresses and conditions it would encounter when operating in the machine; if this is not the case, the calculated machine reliability is invalid. / Thesis / Master of Engineering (MEngr)
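The series calculation the abstract relies on, machine reliability as the product of component reliabilities, can be sketched as follows. The component values here are illustrative, not the thesis's measured GE-47 lamp data:

```python
def series_reliability(component_reliabilities):
    """Reliability of a series system: the product of component
    reliabilities, since the system works only if every component works."""
    r = 1.0
    for r_i in component_reliabilities:
        r *= r_i
    return r

# Ten lamps in series, each assumed to have 0.99 reliability at a given
# mission time (illustrative values only).
print(round(series_reliability([0.99] * 10), 4))  # 0.99**10 ≈ 0.9044
```

Because the product shrinks quickly with each added component, even highly reliable parts yield a noticeably less reliable series machine, which is why matching individual test conditions to in-machine conditions matters so much here.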
273

Characterizing the reliability of a BioMEMS-based cantilever sensor

Bhalerao, Kaustubh D. 09 June 2004 (has links)
No description available.
274

Development and Testing of a Food and Nutrition Practice Checklist (FNPC) for Use with Basic Nutrition and Disease Prevention Education Programs

Bradford, Traliece Nicole 06 September 2006 (has links)
Each year, the Expanded Food and Nutrition Education Program (EFNEP) and Food Stamp Nutrition Education (FSNE) receive around 60 million dollars in federal funding. To document impacts, it is critical that these programs use valid and reliable instruments; validated instruments that measure behavior change make it possible to document that these federally funded programs are achieving their objectives. To date, research on measuring such change is either lacking or under-reported. The goal of this study was to develop a valid and reliable assessment instrument for use with a specific curriculum, Healthy Futures, which is used within Virginia FSNE. To accomplish this, an expert panel was assembled to conceptualize and construct the instrument, which was then pilot-tested, evaluated, finalized, and tested. Results from 73 individuals, including 34 white non-Hispanics and 36 non-Hispanic blacks, showed that the physical activity and dietary quality domains of the instrument achieved an acceptable test-retest reliability coefficient of 0.70, whereas the food safety domain achieved 0.51. For validity, the instrument scored an overall Spearman correlation coefficient of 0.28 for physical activity, 0.34 for food safety, and 0.20 for dietary quality. All three domains were sensitive to change (p < 0.0001). The results indicate that this instrument can detect dietary and physical activity change among limited-resource FSNE participants with confidence. / Master of Science
275

Size Optimization of Utility-Scale Solar PV System Considering Reliability Evaluation

Chen, Xiao 19 July 2016 (has links)
In this work, a size optimization approach for utility-scale solar photovoltaic (PV) systems is proposed. The method determines the optimal solar generation capacity and location by minimizing total system cost subject to system reliability constraints. Due to the stochastic nature of solar irradiation, the reliability performance of a power system with PV generation differs considerably from that of a system with only conventional generation. Generation adequacy of power systems containing solar energy is evaluated by reliability assessment, and the most widely used reliability index is the loss of load probability (LOLP). The value of LOLP depends on factors such as the power output of the PV system, the outage rates of generating facilities, and the system load profile. To obtain the LOLP, the Monte Carlo method is applied to simulate the reliability performance of the solar-penetrated power system. The total system cost model consists of the system installation cost, the mitigation cost, and the savings in fuel and operation cost; the mitigation cost is determined through N-1 contingency analysis. The cost function minimization is implemented with a Genetic Algorithm toolbox, which can search for the global optimum with relative computational simplicity. / Master of Science
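A minimal Monte Carlo sketch of the LOLP estimate described above. The generator capacities, forced outage rates, and load profile are illustrative assumptions, not the thesis's system data:

```python
import random

def estimate_lolp(capacities, outage_rates, load_profile, n_days=2000, seed=1):
    """Monte Carlo estimate of loss-of-load probability (LOLP): the fraction
    of simulated hours in which available generation falls below load."""
    rng = random.Random(seed)
    loss_hours = 0
    total_hours = 0
    for _ in range(n_days):
        for load in load_profile:
            # Sample each unit's availability from its forced outage rate q:
            # the unit is in service with probability 1 - q.
            available = sum(cap for cap, q in zip(capacities, outage_rates)
                            if rng.random() >= q)
            if available < load:
                loss_hours += 1
            total_hours += 1
    return loss_hours / total_hours

# Illustrative system: three units and a simplified three-point load profile.
caps = [50.0, 50.0, 30.0]      # unit capacities (MW)
fors = [0.05, 0.05, 0.08]      # forced outage rates
load = [90.0, 110.0, 100.0]    # load points (MW)
print(estimate_lolp(caps, fors, load))
```

In a sizing loop of the kind the thesis describes, an optimizer would vary the PV capacity term and accept only candidates whose estimated LOLP stays below the reliability requirement.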
276

Shear Strength of Full-Scale Prestressed Lightweight Concrete Girders with Composite Decks

Kassner, Bernard Leonard 21 January 2013 (has links)
Although design codes have accepted lightweight concrete as a suitable structural material for nearly 50 years, there is still a good deal of uncertainty about how to calculate the strength of this material when designing beams for shear.  Design codes tend to penalize lightweight concrete for its lower tensile strength and smoother interface along shear cracks.  In this study, twelve tests on six full-scale, prestressed girders with composite decks were designed to resolve some of those uncertainties.  The variables considered were concrete density, concrete compressive strength, effective shear depth, shear span-to-effective depth ratio, the amount of shear reinforcement, and the composite cross-sectional area.  Results show that the sand-lightweight concrete girders exceeded the shear strength expected according to the 2010 AASHTO LRFD Bridge Design Specifications.  Compared to normal weight concrete, sand-lightweight concrete performed reasonably well and therefore does not need a lightweight modifier when designing for shear.  However, a reliability analysis of the sand-lightweight girders in this study, together with twelve previous experiments, indicates that there should be two different strength reduction factors for the shear design of sand-lightweight concrete, depending on which shear design procedure in the 2010 AASHTO LRFD Bridge Design Specifications is used.  For the General Procedure, as well as the guidelines outlined in Appendix B5, the strength reduction factor should be increased from 0.70 to 1.00.  For the Simplified Procedure, that factor should be 0.75. / Ph. D.
277

Compiler-Directed Error Resilience for Reliable Computing

Liu, Qingrui 08 August 2018 (has links)
Error resilience has become as important as power and performance in modern computing architectures. There are various sources of errors that can paralyze real-world computing systems. Of particular interest to this dissertation are single-event errors: they can result from an energetic particle strike or an abrupt power outage that corrupts the program state, leading to system failures. Specifically, energetic particle strikes are the major cause of soft errors, while an abrupt power outage can leave nonvolatile memory systems in an inconsistent state. Unfortunately, existing techniques to handle these single-event errors are either resource-consuming (e.g., hardware approaches) or heavyweight (e.g., software approaches). To address this problem, this dissertation identifies idempotent processing as an alternative recovery technique that handles system failures in an efficient, low-cost manner. This dissertation first proposes a compiler-directed lightweight methodology that leverages idempotent processing and state-of-the-art sensor-based detection to achieve soft error resilience at low cost. It also introduces a lightweight soft-error-tolerant hardware design that redefines idempotent processing so that idempotent regions can be created, verified, and recovered from the processor's point of view. Furthermore, this dissertation proposes a series of compiler optimizations that significantly reduce the hardware and runtime overhead of idempotent processing. Lastly, it proposes a failure-atomic system integrated with idempotent processing to resolve another type of single-event error: failure-induced memory inconsistency in nonvolatile memory systems. / Ph. D. / Our computing systems are vulnerable to different kinds of errors, all of which can potentially crash real-world computing systems. This dissertation specifically addresses the challenges of single-event errors.
Single-event errors can be caused by energetic particle strikes or abrupt power outages that corrupt the program state, leading to system failures. Unfortunately, existing techniques to handle these single-event errors are expensive in terms of hardware and software. To address this problem, this dissertation leverages an interesting program property called idempotence. A region of code is idempotent if and only if it always generates the same output whenever the program jumps back to the region entry from any execution point within the region. Thus, the idempotence property can serve as a low-cost recovery technique: a system failure is recovered by jumping back to the beginning of the region in which the error occurred. This dissertation proposes solutions that exploit the idempotence property for resilience against these single-event errors, and it introduces a series of optimization techniques with compiler and hardware support to improve the efficiency and overhead of error resilience. We believe the techniques proposed in this dissertation can inspire future error resilience research.
278

Reliability of Fatigue Measures in an Overhead Work Task: A Study of Shoulder Muscle Electromyography and Perceived Discomfort

Hager, Kristopher Ming-Ren 21 January 2004 (has links)
This study measured the reliability of fatigue measures in an intermittent overhead work task. Fatigue measures included several EMG-based parameters and subjective discomfort ratings obtained with the Borg CR-10 scale. The study was part of a larger existing study that simulates overhead work in an automobile manufacturing plant. Ten participants used a drill tool to perform an overhead tapping task for one hour at a height relative to individual anthropometry. Reliability indexes, including intraclass correlation coefficients, standard errors of measurement, and coefficients of variation, were determined for each fatigue measure for each of three shoulder muscles (anterior deltoid, middle deltoid, and trapezius). High reliability implies repeatable results and precise, credible methods; conversely, measurement error and subject variability can lead to low reliability. The results indicated that the ratings of perceived discomfort (RPD) parameters (slope and final rating) showed relatively high reliability. Intercepts for mean power frequency (MnPF), median power frequency (MdPF), and root mean square (RMS) also showed very high reliability. Actual slopes for MnPF, MdPF, and RMS showed low reliability overall, and normalizing slopes did not necessarily improve reliability, although taking the absolute value of slopes led to a noticeable increase. RPD slope did not correlate with any of the EMG slopes. The high reliability of the RPD parameters allows their inexpensive application in industrial settings for similar overhead tasks. The reliability of the EMG intercepts implies consistent methods; however, the reliability of overall EMG trends is suspect when the slope is not reliable. Some EMG slope parameters show promise, but more research is needed to determine whether these parameters are reliable for complex tasks. / Master of Science
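One of the reliability indexes named above, the intraclass correlation coefficient, can be computed from a one-way ANOVA decomposition. A minimal sketch of the ICC(1,1) form is below; the abstract does not say which ICC variant the study used, and the data here are illustrative:

```python
def icc_one_way(data):
    """ICC(1,1) from a one-way random-effects ANOVA on a
    subjects-by-trials table of repeated measurements."""
    n = len(data)               # number of subjects
    k = len(data[0])            # repeated measurements per subject
    grand_mean = sum(sum(row) for row in data) / (n * k)
    subject_means = [sum(row) / k for row in data]
    ss_between = k * sum((m - grand_mean) ** 2 for m in subject_means)
    ss_within = sum((x - m) ** 2
                    for row, m in zip(data, subject_means) for x in row)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfectly repeatable trials give an ICC of 1.0 (illustrative data).
print(icc_one_way([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # 1.0
```

The index approaches 1.0 when between-subject variance dominates within-subject (trial-to-trial) variance, which is exactly the "repeatable results" interpretation the abstract gives.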
279

Reliability Growth Models for Attributes (Bayes, Smith)

Sanatgar Fard, Nasser January 1982 (has links)
In this dissertation the estimation of reliability for a developmental process generating attribute-type data is examined. It is assumed that the process consists of m stages and that the probability of failure is constant or decreasing from stage to stage. Several models for estimating the reliability at each stage of the developmental process are examined. In the classical area, the Barlow and Scheuer model, the Lloyd and Lipow model, and a cumulative maximum likelihood estimation (MLE) model are investigated. In the Bayesian area, A.F.M. Smith's model, an empirical Bayes model, and a cumulative beta Bayes model are investigated. These models are analyzed both theoretically and by computer simulation; the strengths and weaknesses of each are pointed out, and modifications are made in an attempt to improve their accuracy. The constrained maximum likelihood estimation model of Barlow and Scheuer is shown to be inaccurate when no failures occur at the final stage. Smith's model is shown to be incorrect, and a corrected algorithm is presented. Simulation results for these models on the same data indicate that, with the exception of the Barlow and Scheuer model, they are all conservative estimators. When reliability estimation with growth is considered, it is reasonable to emphasize data obtained at recent stages and de-emphasize data from earlier stages. A methodology is developed using geometric weights to improve the estimates. This modification is applied to the cumulative MLE model, the Lloyd and Lipow model, the Barlow and Scheuer model, and the cumulative beta Bayes model; simulation results show that considerable improvement is obtained in the cumulative MLE model and the cumulative beta Bayes model. For Bayesian models, in the absence of prior knowledge, the uniform prior is usually used. A prior with maximum variance is examined theoretically and through simulation experiments for use with the cumulative beta Bayes model.
The results show that the maximum variance prior yields faster convergence of the posterior distribution than the uniform prior. The revised Smith model is shown to provide good estimates of the unknown parameter during the developmental process, particularly at later stages. The beta Bayes model with the maximum variance prior and geometric weights also provides good estimates.
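A cumulative beta Bayes estimate with geometric weighting can be sketched as follows. This is one plausible reading of the dissertation's idea, not its exact algorithm, and the stage data are illustrative:

```python
def beta_bayes_reliability(stage_data, a0=1.0, b0=1.0, w=1.0):
    """Cumulative beta-Bayes reliability estimate after the final stage.

    stage_data: (successes, failures) per stage, earliest stage first.
    a0, b0:     beta prior parameters (a0 = b0 = 1 is the uniform prior).
    w:          geometric weight in (0, 1]; w < 1 de-emphasizes early stages.
    """
    m = len(stage_data)
    a, b = a0, b0
    for j, (s, f) in enumerate(stage_data):
        weight = w ** (m - 1 - j)   # most recent stage gets weight 1
        a += weight * s
        b += weight * f
    return a / (a + b)              # posterior mean reliability

# Illustrative growth data: failures decline across four stages.
stages = [(8, 4), (9, 3), (10, 1), (11, 0)]
print(round(beta_bayes_reliability(stages, w=0.8), 3))  # ≈ 0.838
```

With w = 1 this reduces to the ordinary cumulative beta Bayes update; shrinking w pulls the estimate toward the most recent, presumably improved, stages, which is the intuition behind the geometric-weight modification.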
280

Development of a knowledge model for the computer-aided design for reliability of electronic packaging systems

Kim, Injoong 19 December 2007 (has links)
Microelectronic systems such as cell phones, computers, consumer electronics, and implantable medical devices consist of subsystems that in turn consist of other subsystems and components. When such systems are designed, fabricated, assembled, and tested, they need to meet reliability, cost, performance, and other targets to be competitive. Designing reliable electronic packaging systems in a systematic and timely manner requires a consistent, unified method for allocating, predicting, and assessing reliability, and for recommending design changes at the component and system level with consideration of both random and wearout failures. Accordingly, this dissertation presents a new unified knowledge modeling method for System Design for Reliability (SDfR) called the Reliability Object Model (ROM) method. The ROM method consistently addresses both reliability allocation and assessment for systems composed of series and parallel subsystems. The effectiveness of the ROM method has been demonstrated for allocating, predicting, and assessing reliability; the results show that ROM is more effective than existing methods, providing richer semantics, unified techniques, and improved SDfR quality. Furthermore, this dissertation develops representative reliability metrics for random and wearout failures and incorporates them into ROM together with representative algorithms for allocation, assessment, and design change recommendations. Finally, the ROM method is implemented in a computing framework and its applicability demonstrated using several relevant microelectronic system test cases and prototype SDfR tools.
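A minimal sketch of one classic allocation approach an SDfR framework of this kind would generalize: equal apportionment across series subsystems. The target and structure are illustrative assumptions, not ROM's actual algorithm:

```python
def allocate_series_equal(system_target, n):
    """Equal apportionment: each of n series subsystems receives the same
    reliability target r, chosen so that r**n meets the system target."""
    return system_target ** (1.0 / n)

# A 0.95 system reliability target shared across four series subsystems.
r_sub = allocate_series_equal(0.95, 4)
print(round(r_sub, 4))       # each subsystem needs ≈ 0.9873
print(round(r_sub ** 4, 2))  # composes back to the 0.95 system target
```

A richer method like ROM would replace the equal split with weights reflecting each subsystem's criticality, failure modes (random vs. wearout), and feasibility, but the series composition constraint it must satisfy is the same.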
