1

Nonparametric Confidence Intervals for the Reliability of Real Systems Calculated from Component Data

Spooner, Jean 01 May 1987 (has links)
A methodology that calculates a point estimate and confidence intervals for system reliability directly from component failure data is proposed and evaluated. This is a nonparametric approach that does not require the component times to failure to follow a known reliability distribution. The proposed methods have accuracy similar to traditional parametric approaches, can be used when the distribution of component reliability is unknown or only a limited amount of sample component data is available, are simpler to compute, and use fewer computer resources. Depuy et al. (1982) studied several parametric approaches to calculating confidence intervals on system reliability; the test systems they employed are used here for comparison with published results. Four systems with sample sizes per component of 10, 50, and 100 were studied. The test systems were complex systems made up of I components, each with n observed (or estimated) times to failure. An efficient method for calculating a point estimate of system reliability is developed based on counting minimum cut sets that cause system failure. Five nonparametric approaches to calculating confidence intervals on system reliability from one test sample of components were proposed and evaluated; four of these were based on binomial theory and the Kolmogorov empirical cumulative distribution theory. 600 Monte Carlo simulations generated 600 new sets of component failure data from the population, with corresponding point estimates of system reliability and confidence intervals. The accuracy of these confidence intervals was assessed by the fraction that included the true system reliability. The bootstrap method was also studied to calculate confidence intervals from one sample. The bootstrap method is computer intensive and involves generating many sets of component samples using only the failure data from the initial sample. The empirical cumulative distribution function of 600 bootstrapped point estimates was examined to calculate confidence intervals at the 68, 80, 90, 95, and 99 percent confidence levels. The accuracy of the bootstrap confidence intervals was determined by comparison with the distribution of 600 point estimates of system reliability generated from the Monte Carlo simulations. The confidence intervals calculated from the Kolmogorov empirical distribution function and the bootstrap method were very accurate. Sample sizes of 10 were not always sufficient for systems with reliabilities close to one.
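To make the percentile-bootstrap idea in this abstract concrete, here is a minimal sketch. It assumes a simple series system evaluated at a fixed mission time, not the thesis's minimum-cut-set point estimator for complex systems; the function names, the 90 percent level, and the exponential example data are illustrative assumptions only.

```python
import numpy as np

def system_reliability(samples, t):
    # Nonparametric point estimate for a series system (illustrative only):
    # each component's reliability at mission time t is the fraction of its
    # observed failure times exceeding t; the system estimate is the product.
    return float(np.prod([np.mean(np.asarray(s) > t) for s in samples]))

def bootstrap_ci(samples, t, level=0.90, n_boot=600, seed=0):
    # Percentile bootstrap: resample each component's failure times with
    # replacement, recompute the system estimate, and take quantiles of the
    # resulting bootstrap distribution as the confidence interval.
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        resampled = [rng.choice(s, size=len(s), replace=True) for s in samples]
        boot.append(system_reliability(resampled, t))
    alpha = (1.0 - level) / 2.0
    return tuple(np.quantile(boot, [alpha, 1.0 - alpha]))

# Hypothetical data: three components with 10 observed failure times each.
rng = np.random.default_rng(1)
data = [rng.exponential(scale=s, size=10) for s in (500.0, 800.0, 1200.0)]
print(system_reliability(data, t=100.0))
print(bootstrap_ci(data, t=100.0, level=0.90))
```

The choice of 600 bootstrap replications mirrors the 600 samples mentioned in the abstract; in practice the number of replications and the confidence level are parameters of the study design.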
2

Round-trip engineering concept for hierarchical UML models in AUTOSAR-based safety projects

Pathni, Charu 30 September 2015 (has links)
The product development process begins at a very abstract level with an understanding of the requirements. The resulting data must be passed on to the next phase of development; this happens after every stage until, finally, a product is made. This thesis deals specifically with the data exchange process within the software development process. The problem lies in handling the data, in terms of its redundancy and the versions of the data to be managed. Moreover, once data has been passed on to the next stage, the ability to exchange it in the reverse direction does not exist in any evident form. The results of this thesis address the problem by bringing all the data to the same level in terms of its format. Once this concept is in place, the data can be used according to the project's requirements. This research deals with the problems of data consistency and data verification, both for data used during development and for data merged from various sources. The concept that is formulated can be extended to a wide variety of applications within the development process: wherever the process involves the exchange of data, scalability and generalization are the foundation concepts it builds on.
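As a purely schematic illustration of the round-trip idea described here (a shared "common level" format that can be exported to a later stage and re-imported without loss), the following sketch uses hypothetical names; the thesis itself works with hierarchical UML models and AUTOSAR artifacts, which this toy example does not model.

```python
from dataclasses import dataclass

# Hypothetical canonical representation of a model element, acting as the
# shared format exchanged between development stages.
@dataclass(frozen=True)
class Element:
    name: str
    kind: str      # e.g. "component", "port", "runnable"
    version: int

def export_to_stage(elements):
    # Forward step: hand the canonical data to the next stage as plain records.
    return [{"name": e.name, "kind": e.kind, "version": e.version} for e in elements]

def import_from_stage(records):
    # Reverse step: read the stage's data back into the canonical form.
    return [Element(r["name"], r["kind"], r["version"]) for r in records]

def round_trip_consistent(elements):
    # Consistency check: exporting and re-importing must reproduce the model,
    # so downstream changes can later be merged back without loss.
    return import_from_stage(export_to_stage(elements)) == elements

model = [Element("Sensor", "component", 2), Element("In", "port", 1)]
assert round_trip_consistent(model)
```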
