1

Bridging the Gap: Selected Problems in Model Specification, Estimation, and Optimal Design from Reliability and Lifetime Data Analysis

King, Caleb B. 13 April 2015 (has links)
Understanding the lifetime behavior of their products is crucial to the success of any company in the manufacturing and engineering industries. Statistical methods for lifetime data are a key component of achieving this level of understanding. Sometimes a statistical procedure must be updated to be adequate for modeling specific data, as discussed in Chapter 2. However, there are cases in which the methods used in industrial standards are themselves inadequate. This is distressing, as more appropriate statistical methods are available but remain unused. The research in Chapter 4 deals with such a situation. The research in Chapter 3 combines both scenarios and illustrates how statisticians and industry engineers can join together to yield beautiful results.

After introducing basic concepts and notation in Chapter 1, Chapter 2 focuses on lifetime prediction for a product consisting of multiple components. During the production period, some components may be upgraded or replaced, resulting in a new "generation" of component. Incorporating this information into a competing risks model can greatly improve the accuracy of lifetime prediction. A generalized competing risks model is proposed and simulation is used to assess its performance.

In Chapter 3, optimal and compromise test plans are proposed for constant-amplitude fatigue testing. These test plans are based on a nonlinear physical model from the fatigue literature that better captures the nonlinear behavior of fatigue life and accounts for effects from the testing environment. Sensitivity to the design parameters and modeling assumptions is investigated, and planning strategies are suggested.

Chapter 4 considers the analysis of accelerated destructive degradation test (ADDT) data for the purpose of estimating a thermal index. The current industry standards use a two-step procedure involving least squares regression in each step, whereas the statistical literature prefers the maximum likelihood procedure. The two procedures are compared, using two published datasets as motivating examples. The maximum likelihood procedure is presented as a more viable alternative to the two-step procedure due to its ability to quantify uncertainty in inference and its modeling flexibility. / Ph. D.
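As a rough illustration of the generational competing risks structure Chapter 2 describes, the following sketch simulates a series system whose component lifetimes depend on each component's generation. It is not taken from the dissertation: the component names, Weibull parameters, time units, and the series-system assumption are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# (Weibull shape, Weibull scale) per component and generation; generation 2
# of component A represents an upgraded, longer-lived design.
params = {
    "A": {1: (1.5, 1000.0), 2: (1.5, 1400.0)},
    "B": {1: (2.0, 1200.0)},
}

def product_lifetimes(generations, n):
    """Simulate n lifetimes of a series system: the product fails as soon
    as any one of its components fails (classical competing risks)."""
    draws = []
    for component, gen in generations.items():
        shape, scale = params[component][gen]
        draws.append(scale * rng.weibull(shape, size=n))
    return np.min(draws, axis=0)

early = product_lifetimes({"A": 1, "B": 1}, 100_000)  # built before the upgrade
late = product_lifetimes({"A": 2, "B": 1}, 100_000)   # built with upgraded A
print(f"mean lifetime, early builds: {early.mean():.0f} hours")
print(f"mean lifetime, late builds:  {late.mean():.0f} hours")
```

Fitting such a model to field data, rather than simulating from it, is where the chapter's competing risks machinery comes in; the point here is only the structure: per-generation parameters feeding a series-system minimum.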
2

A DPG method for convection-diffusion problems

Chan, Jesse L. 03 October 2013 (has links)
Over the last three decades, CFD simulations have become commonplace as a tool in the engineering and design of high-speed aircraft. Experiments are often complemented by computational simulations, and CFD technologies have proved very useful both in reducing aircraft development cycles and in simulating conditions that are difficult to reproduce experimentally. Great advances have been made in the field since its introduction, especially in meshing, computer architecture, and solution strategies. Despite this, existing CFD methods still face many computational limitations; in particular, there is a lack of reliable higher-order and hp-adaptive methods for the Navier-Stokes equations that govern viscous compressible flow.

Solutions to the equations of viscous flow can display shocks and boundary layers, which are characterized by localized regions of rapid change and high gradients. The use of adaptive meshes is crucial in such settings -- good resolution for such problems under uniform meshes is computationally prohibitive and impractical for most physical regimes of interest. However, the construction of "good" meshes is a difficult task, usually requiring a priori knowledge of the form of the solution. An alternative is the construction of automatically adaptive schemes; such methods begin with a coarse mesh and refine based on the minimization of error. This task is difficult, however, as the convergence of numerical methods for problems in CFD is notoriously sensitive to mesh quality, and adaptivity becomes harder still in the context of higher-order and hp methods.

Many of the above issues are tied to the notion of robustness, which we define loosely for CFD applications as the degradation of the quality of numerical solutions on a coarse mesh with respect to the Reynolds number, or nondimensional viscosity. For typical physical conditions of interest for the compressible Navier-Stokes equations, the Reynolds number dictates the scale of shock and boundary layer phenomena, and can be extremely high -- on the order of 10⁷ in a unit domain. For an under-resolved mesh, the Galerkin finite element method develops large oscillations which prevent convergence and pollute the solution. The issue of robustness for finite element methods was addressed early on by Brooks and Hughes in the SUPG method, which introduced the idea of residual-based stabilization to combat such oscillations. Residual-based stabilizations can alternatively be viewed as modifying the standard finite element test space, and consequently the norm in which the finite element method converges. Demkowicz and Gopalakrishnan generalized this idea in 2009 by introducing the Discontinuous Petrov-Galerkin (DPG) method with optimal test functions, where test functions are determined such that they minimize the discrete linear residual in a dual space. Under the ultra-weak variational formulation, these test functions can be computed locally to yield a symmetric, positive-definite system. The main theoretical thrust of this research is to develop a DPG method that is provably robust for singular perturbation problems in CFD, yet does not suffer from discretization error in the approximation of test functions.
Such a method is developed for the prototypical singular perturbation problem of convection-diffusion, where it is demonstrated that the method does not suffer from error in the approximation of test functions, and that the L² error is robustly bounded by the energy error in which DPG is optimal -- in other words, as the energy error decreases, the L² error of the solution is guaranteed to decrease as well. The method is then extended to the linearized Navier-Stokes equations and applied to the solution of the nonlinear compressible Navier-Stokes equations. The numerical work in this dissertation has focused on the development of a 2D compressible flow code under the Camellia library, developed and maintained by Nathan Roberts at ICES. In particular, we have developed a framework allowing for rapid implementation of problems and the easy application of higher-order and hp-adaptive schemes based on a natural error representation function that stems from the DPG residual. Finally, the DPG method is applied to several convection-diffusion problems which mimic difficult problems in compressible flow simulations, including problems exhibiting both boundary layers and singularities in stresses. A viscous Burgers' equation is solved as an extension of DPG to nonlinear problems, and the effectiveness of DPG as a numerical method for compressible flow is assessed by applying it to two benchmark problems in supersonic flow: the Carter flat plate problem and the Holden compression corner problem, solved over a range of Mach numbers and laminar Reynolds numbers using automatically adaptive schemes that begin with very under-resolved/coarse initial meshes.
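For readers unfamiliar with DPG, the optimal test function idea can be stated compactly. The following is a generic sketch of the standard construction from the DPG literature, not notation taken from this record: b(·,·) is the bilinear form of the variational problem, V the test space with inner product (·,·)_V, ℓ the load functional, and U_h the trial space.

```latex
% Sketch of the DPG optimal test function construction (generic notation).
% For each trial function \delta u, solve a (local) Riesz problem in V:
\[
  (T\,\delta u,\; v)_V \;=\; b(\delta u,\; v) \qquad \forall\, v \in V,
\]
% so that the resulting discrete solution minimizes the residual measured
% in the dual norm of the test space:
\[
  u_h \;=\; \operatorname*{arg\,min}_{w_h \in U_h} \;
  \bigl\|\, \ell - b(w_h, \cdot) \,\bigr\|_{V'} .
\]
```

Because the residual is minimized in the V' norm, the choice of test-space inner product determines the energy norm in which the method is optimal; the robustness results above amount to choosing that inner product so the induced energy norm controls the L² error uniformly in the diffusion parameter.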
3

Inference for One-Shot Device Testing Data

Ling, Man Ho 10 1900 (has links)
In this thesis, inferential methods for one-shot device testing data from accelerated life-tests are developed. Due to constraints on time and budget, accelerated life-tests are commonly used to induce more failures within a reasonable amount of test time, yielding more lifetime information that is especially useful in reliability analysis. One-shot devices, which can be used only once since they are destroyed immediately after testing, yield observations only on their condition and not on their actual lifetimes. Thus, only binary response data are observed from a one-shot device testing experiment. Since no failure times are observed, the EM algorithm is used to determine the maximum likelihood estimates of the model parameters. Inference for the reliability at a mission time and the mean lifetime under normal operating conditions is also developed.

The thesis proceeds as follows. Chapter 2 considers the exponential distribution with a single-stress relationship and develops inferential methods for the model parameters, the reliability, and the mean lifetime. The results obtained by the EM algorithm are compared with those obtained from a Bayesian approach, and a one-shot device testing dataset is analyzed by the proposed method as an illustrative example. Next, in Chapter 3, the exponential distribution with a multiple-stress relationship is considered and the corresponding inferential results are developed. The jackknife technique is described for bias reduction in the developed estimates. Interval estimation for the reliability and the mean lifetime is also discussed, based on the observed information matrix, the jackknife technique, the parametric bootstrap method, and a transformation technique. Again, an example illustrates all the inferential methods developed in the chapter. Chapter 4 considers point and interval estimation for one-shot device testing data under the Weibull distribution with a multiple-stress relationship, and illustrates the application of the proposed methods in a study involving the development of tumors in mice with respect to risk factors such as sex, strain of offspring, and dose effects of benzidine dihydrochloride. A Monte Carlo simulation study is also carried out to evaluate the performance of the EM estimates for different levels of reliability and different sample sizes. Chapter 5 describes a general algorithm for determining the optimal design of an accelerated life-test plan for a one-shot device testing experiment, based on the asymptotic variance of the estimated reliability at a specific mission time; a numerical example illustrates the algorithm. Finally, Chapter 6 presents some concluding remarks and some additional research problems that would be of interest for further study. / Doctor of Philosophy (PhD)
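To make the EM idea concrete: with exponential lifetimes and one inspection time per unit, the E-step replaces each unit's unobserved lifetime with its conditional expectation given the binary outcome, and the M-step is the complete-data exponential MLE. The sketch below illustrates this in the simplest no-stress setting; the simulated data, inspection times, and variable names are hypothetical, and the thesis's actual models additionally link the rate to stress levels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-shot test: unit i is inspected once at time tau[i];
# y[i] = 1 if it had already failed by then, 0 if it was still working.
true_rate = 0.5
tau = rng.uniform(0.5, 4.0, size=200)
y = (rng.exponential(1.0 / true_rate, size=200) <= tau).astype(int)

rate = 1.0  # starting value for the exponential failure rate lambda
for _ in range(500):
    surv = np.exp(-rate * tau)
    # E-step: expected lifetime of each unit given its binary outcome.
    e_fail = 1.0 / rate - tau * surv / (1.0 - surv)  # E[T | T <= tau]
    e_surv = tau + 1.0 / rate                        # E[T | T >  tau]
    t_hat = np.where(y == 1, e_fail, e_surv)
    # M-step: complete-data MLE for an exponential sample.
    new_rate = len(tau) / t_hat.sum()
    if abs(new_rate - rate) < 1e-12:
        break
    rate = new_rate

print(f"EM estimate of lambda: {rate:.3f}  (true value: {true_rate})")
```

With a single common inspection time the binary-data MLE has a closed form, so the EM machinery mainly pays off once inspection times or stress conditions differ across units, as in the accelerated life-tests considered in the thesis.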
4

Asymptotic efficiency in an instrumental variable model

Chaves, Leonardo Salim Saker 28 April 2015 (has links)
This work studies hypothesis testing based on generalized method of moments (GMM) estimation under instrumental variable moment conditions. The relevance for applied economics lies in the fact that when identification is weak, standard tests can be misleading. A review is therefore given of tests proposed to overcome this problem, together with two useful frameworks of study, from Moreira (2002), Moreira and Moreira (2013), and Kleibergen (2005). This work then reconciles these frameworks by rewriting the score test originally proposed in Kleibergen (2005) using the statistics of Moreira and Moreira (2013), and presents the optimal score test based on the asymptotic theory of Newey and McFadden (1984). Moreover, the study shows the equivalence between the GMM and maximum likelihood approaches to dealing with the weak instruments problem.
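As background for the weak-instruments issue the abstract describes, here is a minimal sketch of linear IV estimation by GMM; the simulated design, coefficient values, and variable names are illustrative, not from the dissertation. With weight matrix (Z'Z/n)⁻¹, the GMM estimator for this linear model reduces to 2SLS.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
pi = np.array([0.8, 0.5])  # first-stage coefficients; shrink these toward 0
                           # to make the instruments weak and standard
                           # Wald-type inference unreliable
z = rng.normal(size=(n, 2))                # instruments
u = rng.normal(size=n)                     # structural error
x = z @ pi + 0.6 * u + rng.normal(size=n)  # endogenous regressor
y = 1.5 * x + u                            # true beta = 1.5

# GMM with moment condition E[z_i (y_i - x_i beta)] = 0 and weight (Z'Z/n)^-1,
# which for the linear model is exactly the 2SLS estimator:
Z, X = z, x[:, None]
W = np.linalg.inv(Z.T @ Z / n)
beta_hat = np.linalg.solve(X.T @ Z @ W @ Z.T @ X,
                           X.T @ Z @ W @ Z.T @ y)
print(f"GMM/2SLS estimate: {beta_hat[0]:.3f}")  # near 1.5 when pi is strong
```

Identification-robust procedures such as Kleibergen's score test replace the usual Wald t-statistic, whose null distribution breaks down as pi approaches zero, with statistics whose null distribution does not depend on the strength of the instruments.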
