11

Statistical Methods for Reliability Data from Designed Experiments

Freeman, Laura J. 07 May 2010 (has links)
Product reliability is an important characteristic for all manufacturers, engineers and consumers. Industrial statisticians have been planning experiments for years to improve product quality and reliability. However, experts in the field of reliability rarely have expertise in design of experiments (DOE) and the implications that experimental protocols have for data analysis. Additionally, statisticians who focus on DOE rarely work with reliability data. As a result, analysis methods for lifetime data from experimental designs more complex than a completely randomized design are extremely limited. This dissertation provides two new analysis methods for reliability data from life tests. We focus on data from a sub-sampling experimental design. The new analysis methods are illustrated on a popular reliability data set, which contains sub-sampling. Monte Carlo simulation studies evaluate the capabilities of the new modeling methods and highlight the principles of experimental design in a reliability context. The dissertation provides multiple methods for statistical inference for the new analysis methods. Finally, implications for the reliability field are discussed, especially for future applications of the new analysis methods. / Ph. D.
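To make the sub-sampling structure concrete, here is a minimal simulation sketch (not from the dissertation; the Weibull shape, effect sizes, and sample sizes are invented for illustration) in which several specimens share one experimental unit, so a random unit effect induces the within-unit correlation that a completely randomized analysis would ignore:

```python
import numpy as np

rng = np.random.default_rng(42)
n_units, n_sub = 8, 5            # experimental units, specimens per unit
shape = 2.0                      # common Weibull shape parameter
treatment = np.repeat([0, 1], n_units // 2)

# Random unit effect: specimens from the same unit share a scale parameter,
# which creates the within-unit correlation that sub-sampling introduces.
unit_effect = rng.normal(scale=0.3, size=n_units)
log_scale = 4.0 + 0.5 * treatment + unit_effect

# Lifetimes: one row per experimental unit, one column per specimen.
lifetimes = rng.weibull(shape, size=(n_units, n_sub)) * np.exp(log_scale)[:, None]
```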
12

An unstructured numerical method for computational aeroacoustics

Portas, Lance O. January 2009 (has links)
The successful application of Computational Aeroacoustics (CAA) requires high accuracy numerical schemes with good dissipation and dispersion characteristics. Unstructured meshes have greater geometrical flexibility than existing high order structured mesh methods. This work investigates the suitability of unstructured mesh techniques by computing a two-dimensional linearised Euler problem with various discretisation schemes and different mesh types. The goal of the present work is the development of an unstructured numerical method with the high accuracy, low dissipation and low dispersion required to be an effective tool in the study of aeroacoustics. The suitability of the unstructured method is investigated using aeroacoustic test cases taken from CAA Benchmark Workshop proceedings. Comparisons are made with exact solutions and with a high order structured method based upon a standard central differencing spatial discretisation. The unstructured method employs a vertex-based data structure and a median-dual control volume for the finite volume approximation, with the option of using a Green-Gauss gradient approximation technique or a least-squares approximation. The temporal discretisation used for both the structured and unstructured numerical methods is an explicit Runge-Kutta method with local timestepping. For the unstructured method, the gradient approximation technique is used to compute gradients at each vertex; these are then used to reconstruct the fluxes at the control volume faces. The unstructured mesh types used to evaluate the numerical method include semi-structured and purely unstructured triangular meshes. The semi-structured meshes were created directly from the associated structured mesh; the purely unstructured meshes were created using a commercial paving algorithm. The least-squares method has the potential to allow high order reconstruction. Results show that a weighted least-squares gradient approximation gives better solutions than unweighted least-squares and Green-Gauss gradient computation. The solutions are of acceptable accuracy on these problems, with the absolute error of the unstructured method approaching that of a high order structured solution on an equivalent mesh for specific aeroacoustic scenarios.
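As an illustration of the gradient reconstruction step described above, the following sketch (a generic vertex-based least-squares gradient, not the author's code) computes the gradient at a vertex from neighbouring vertex values; inverse-distance weighting gives the weighted variant reported to perform best:

```python
import numpy as np

def ls_gradient(x0, u0, neighbours_x, neighbours_u, weighted=True):
    """Least-squares gradient at a vertex of an unstructured 2-D mesh.

    Solves min_g sum_i (w_i * (u_i - u0 - g.(x_i - x0)))^2; with w_i = 1
    this is the unweighted variant, with inverse-distance w_i the weighted one.
    """
    dx = np.asarray(neighbours_x, float) - np.asarray(x0, float)  # (n, 2)
    du = np.asarray(neighbours_u, float) - u0                     # (n,)
    w = 1.0 / np.linalg.norm(dx, axis=1) if weighted else np.ones(len(du))
    g, *_ = np.linalg.lstsq(dx * w[:, None], w * du, rcond=None)
    return g  # approximation to (du/dx, du/dy) at x0

# Example: the linear field u = 3x + 2y is recovered exactly.
g = ls_gradient([0, 0], 0.0, [[1, 0], [0, 1], [-1, 0.5], [0.3, -1]],
                [3.0, 2.0, -2.0, -1.1], weighted=True)
```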
13

A New Approach to Statistical Efficiency of Weighted Least Squares Fitting Algorithms for Reparameterization of Nonlinear Regression Models

Zheng, Shimin, Gupta, A. K. 01 April 2012 (has links)
We study nonlinear least-squares problems that can be transformed to linear problems by a change of variables. We derive a general formula for the statistically optimal weights and prove that the resulting linear regression gives an optimal estimate (one satisfying an analogue of the Cramér–Rao lower bound) in the limit of small noise.
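As a concrete instance of the idea (the exponential model and noise level below are invented for illustration, not taken from the paper): fitting y = a*exp(b*x) with small additive noise by regressing log y, where Var(log y) is approximately sigma^2 / f^2, makes weights proportional to y^2 the statistically optimal choice:

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, b_true, sigma = 2.0, 1.5, 0.1
x = np.linspace(0.0, 2.0, 50)
f = a_true * np.exp(b_true * x)
y = f + rng.normal(scale=sigma, size=x.size)   # small additive noise

# log y = log a + b x + eps/f + O(eps^2), so optimal WLS weights ~ f^2 ~ y^2.
X = np.column_stack([np.ones_like(x), x])
W = y**2
theta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * np.log(y)))
a_hat, b_hat = np.exp(theta[0]), theta[1]      # close to (2.0, 1.5)
```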
14

On Some Properties of Interior Methods for Optimization

Sporre, Göran January 2003 (has links)
This thesis consists of four independent papers concerning different aspects of interior methods for optimization. Three of the papers focus on theoretical aspects while the fourth one concerns some computational experiments.

The systems of equations solved within an interior method applied to a convex quadratic program can be viewed as weighted linear least-squares problems. In the first paper, it is shown that the sequence of solutions to such problems is uniformly bounded. Further, boundedness of the solution to weighted linear least-squares problems for more general classes of weight matrices than the one in the convex quadratic programming application is obtained as a byproduct.

In many linesearch interior methods for nonconvex nonlinear programming, the iterates can "falsely" converge to the boundary of the region defined by the inequality constraints in such a way that the search directions do not converge to zero, but the step lengths do. In the second paper, it is shown that the multiplier search directions then diverge. Furthermore, the direction of divergence is characterized in terms of the gradients of the equality constraints along with the asymptotically active inequality constraints.

The third paper gives a modification of the analytic center problem for the set of optimal solutions in linear semidefinite programming. Unlike the normal analytic center problem, the solution of the modified problem is the limit point of the central path, without any strict complementarity assumption. For the strict complementarity case, the modified problem is shown to coincide with the normal analytic center problem, which is known to give a correct characterization of the limit point of the central path in that case.

The final paper describes some computational experiments concerning possibilities of reusing previous information when solving the systems of equations arising in interior methods for linear programming.

Keywords: Interior method, primal-dual interior method, linear programming, quadratic programming, nonlinear programming, semidefinite programming, weighted least-squares problems, central path.

Mathematics Subject Classification (2000): Primary 90C51, 90C22, 65F20, 90C26, 90C05; Secondary 65K05, 90C20, 90C25, 90C30.
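To make the weighted least-squares connection concrete, here is a textbook-style sketch (not from the thesis) of one Newton step on the perturbed KKT conditions of a standard-form linear program; eliminating dx and ds leaves a system with matrix A D A^T, D = X S^{-1}, i.e., the normal equations of a weighted linear least-squares problem:

```python
import numpy as np

def pd_newton_step(A, b, c, x, y, s, mu):
    """One Newton step on the perturbed KKT system of min c'x s.t. Ax=b, x>=0."""
    rp = b - A @ x               # primal residual
    rd = c - A.T @ y - s         # dual residual
    rc = mu - x * s              # centrality residual (target: XSe = mu e)
    d = x / s                    # diagonal weights of the WLS system
    M = A @ (d[:, None] * A.T)   # A D A^T: weighted normal-equations matrix
    dy = np.linalg.solve(M, rp + A @ (d * rd - rc / s))
    ds = rd - A.T @ dy
    dx = (rc - x * ds) / s
    return dx, dy, ds
```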
16

Model-based calibration of a non-invasive blood glucose monitor

Shulga, Yelena A 11 January 2006 (has links)
This project was dedicated to the problem of improving a non-invasive blood glucose monitor being developed by the VivaScan Corporation. The company had made some progress in developing the non-invasive blood glucose device and approached WPI for statistical assistance in improving its model to predict the glucose level more accurately. The main goal of this project was to improve the ability of the non-invasive blood glucose monitor to predict glucose values more precisely. The goal was achieved by finding and implementing the best regression model. The methods included ordinary least squares regression, partial least squares regression, robust regression, weighted least squares regression, local regression, and ridge regression. VivaScan calibration data for seven patients were analyzed in this project. For each of these patients, individual regression models were built and compared based on two factors that evaluate model prediction ability. It was determined that partial least squares and ridge regression are the two best methods among those considered in this work; using these two methods gave better glucose prediction. The additional problem of data reduction to minimize data collection time was also considered in this work.
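The comparison in the abstract can be reproduced in spirit with a short sketch (synthetic data standing in for the proprietary VivaScan spectra; the component count and penalty value are arbitrary): cross-validated R^2 for partial least squares versus ridge regression on a wide predictor matrix:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 70, 200                       # few samples, many spectral channels
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:10] = 1.0                      # signal concentrated in a few channels
y = X @ beta + rng.normal(scale=0.5, size=n)

for name, model in [("PLS", PLSRegression(n_components=5)),
                    ("ridge", Ridge(alpha=10.0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.3f}")
```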
17

Inference for Discrete Time Stochastic Processes using Aggregated Survey Data

Davis, Brett Andrew, Brett.Davis@abs.gov.au January 2003 (has links)
We consider a longitudinal system in which transitions between the states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model based inference for the underlying process X. We develop inferential techniques for continuing sample surveys of two distinct types: first, longitudinal surveys, in which the same individuals are sampled in each cycle of the survey; second, cross-sectional surveys, which sample the same population in successive cycles but with no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, due to their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points. That is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These estimates for a unit time increment can be used to calculate estimates of expected future occupation time in the different states of X, conditional on an individual's state at the initial point of observation.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an on-going survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state. However, in the applications of the technique that we discuss (see Sections 4.4 and 4.5), the estimated occupation times are conditional on both gender and initial age of individuals.

The longitudinal data envisaged in Chapter Two are those obtained from the surveillance of the same sample in each cycle of an on-going survey. In practice, to preserve data quality it is necessary to control respondent burden using sample rotation. This is usually achieved using a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in their Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to deal with data collected under this sample rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
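For the longitudinal case, a minimal sketch (generic Markov-chain estimation, not the thesis's method) row-normalises observed transition frequencies to estimate one-step probabilities; when only k-step frequencies are observed, a matrix k-th root gives a crude stand-in for the stochastic interpolation of Chapter Three:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def one_step_mle(counts):
    """Row-normalised transition frequencies: the MLE under the Markov model."""
    counts = np.asarray(counts, float)
    return counts / counts.sum(axis=1, keepdims=True)

def one_step_from_k_step(counts_k, k):
    """Crude one-step estimate from k-step transition frequencies.

    Takes a matrix k-th root of the k-step estimate; the raw root need not
    be a stochastic matrix, so it is clipped and renormalised.
    """
    Pk = one_step_mle(counts_k)
    P1 = np.clip(fractional_matrix_power(Pk, 1.0 / k).real, 0.0, None)
    return P1 / P1.sum(axis=1, keepdims=True)
```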
18

Data-driven estimation for Aalen's additive risk model

Boruvka, Audrey 02 August 2007 (has links)
The proportional hazards model developed by Cox (1972) is by far the most widely used method for regression analysis of censored survival data. Application of the Cox model to more general event history data has become possible through extensions using counting process theory (e.g., Andersen and Borgan (1985), Therneau and Grambsch (2000)). With its development based entirely on counting processes, Aalen’s additive risk model offers a flexible, nonparametric alternative. Ordinary least squares, weighted least squares and ridge regression have been proposed in the literature as estimation schemes for Aalen’s model (Aalen (1989), Huffer and McKeague (1991), Aalen et al. (2004)). This thesis develops data-driven parameter selection criteria for the weighted least squares and ridge estimators. Using simulated survival data, these new methods are evaluated against existing approaches. A survey of the literature on the additive risk model and a demonstration of its application to real data sets are also provided. / Thesis (Master, Mathematics & Statistics) -- Queen's University, 2007-07-18 22:13:13.243
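A hedged sketch of the setting, using the lifelines library's AalenAdditiveFitter on synthetic data (the penalty value 0.5 is an arbitrary placeholder; choosing it in a data-driven way is exactly what the thesis studies):

```python
import numpy as np
import pandas as pd
from lifelines import AalenAdditiveFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"x1": rng.normal(size=n),
                   "x2": rng.binomial(1, 0.5, size=n)})
hazard = 0.5 + 0.3 * df["x2"]                  # additive hazard; x1 inert
t = rng.exponential(scale=1.0 / hazard.to_numpy())
c = rng.exponential(scale=2.0, size=n)         # independent censoring
df["T"], df["E"] = np.minimum(t, c), (t <= c).astype(int)

# coef_penalizer is the ridge penalty whose data-driven selection the
# thesis develops; 0.5 here is an arbitrary illustrative value.
aaf = AalenAdditiveFitter(coef_penalizer=0.5)
aaf.fit(df, duration_col="T", event_col="E")
print(aaf.cumulative_hazards_.tail())          # estimated cumulative coefficients
```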
19

Some Aspects on Confirmatory Factor Analysis of Ordinal Variables and Generating Non-normal Data

Luo, Hao January 2011 (has links)
This thesis, which consists of five papers, is concerned with various aspects of confirmatory factor analysis (CFA) of ordinal variables and the generation of non-normal data. The first paper studies the performance of different estimation methods used in CFA when ordinal data are encountered. To take ordinality into account, four estimation methods, i.e., maximum likelihood (ML), unweighted least squares, diagonally weighted least squares, and weighted least squares (WLS), are used in combination with polychoric correlations. The effect of model size and number of categories on the parameter estimates, their standard errors, and the common chi-square measure of fit when the models are both correct and misspecified is examined. The second paper focuses on the appropriate estimator of the polychoric correlation when fitting a CFA model. A non-parametric polychoric correlation coefficient based on the discrete version of Spearman's rank correlation is proposed to contend with non-normal underlying distributions. The simulation study shows the benefits of using the non-parametric polychoric correlation under conditions of non-normality. The third paper raises the issue of simultaneous factor analysis. We study the effect of pooling multi-group data on the estimation of factor loadings. Given the same factor loadings but different factor means and correlations, we investigate how much information is lost by pooling the groups together and estimating only the combined data set using the WLS method. The parameter estimates and their standard errors are compared with results obtained by multi-group analysis using ML. The fourth paper uses a Monte Carlo simulation to assess the reliability of Fleishman's power method under various conditions of skewness, kurtosis, and sample size. Based on the generated non-normal samples, the power of D'Agostino's (1986) normality test is studied. The fifth paper extends the evaluation of algorithms to the generation of multivariate non-normal data. Apart from the requirement of generating reliable skewness and kurtosis, the generated data also need to possess the desired correlation matrices. Four algorithms are investigated in terms of simplicity, generality, and reliability of the technique.
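Fleishman's power method, assessed in the fourth paper, transforms a standard normal Z through a cubic polynomial Y = a + b*Z + c*Z^2 + d*Z^3. The sketch below (using the standard Fleishman (1978) moment equations, with the kurtosis target expressed as excess kurtosis) solves for the coefficients numerically:

```python
import numpy as np
from scipy.optimize import fsolve

def fleishman_coefficients(skew, excess_kurtosis):
    """Coefficients of Y = a + b*Z + c*Z^2 + d*Z^3 giving zero mean, unit
    variance, and the requested skewness and excess kurtosis."""
    def equations(p):
        b, c, d = p
        return [b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,
                2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew,
                24*(b*d + c**2*(1 + b**2 + 28*b*d)
                    + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2))
                - excess_kurtosis]
    b, c, d = fsolve(equations, [1.0, 0.0, 0.0])
    return -c, b, c, d            # a = -c keeps the mean at zero

a, b, c, d = fleishman_coefficients(skew=1.0, excess_kurtosis=1.5)
z = np.random.default_rng(0).standard_normal(100_000)
y = a + b*z + c*z**2 + d*z**3     # non-normal sample with the target moments
```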
20

Alternative Methods for State Estimation in Electric Power Systems

Frazão, Rodrigo José Albuquerque 23 January 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The state estimation process applied to electric power systems aims to provide a trustworthy, coherent and complete image of the system operation, allowing efficient monitoring. State estimation is one of the most important functions of energy management systems. In this work, alternative state estimation methods are proposed for electric power systems at the transmission, subtransmission and distribution levels. For transmission systems, two hybrid methods are proposed that combine conventional measurements with phasor measurements from phasor measurement units (PMUs). For state estimation in subtransmission systems, an alternative method is proposed that, on failure of active and/or reactive power meters in the substations, uses a load forecasting model based on a similar-days criterion and artificial neural networks. This load forecasting process serves as a generator of pseudo-measurements in the state estimation problem, which proceeds by propagating the phasor measurements provided by a PMU placed at the boundary busbar. For distribution systems, the proposed state estimation method applies weighted least squares with equality constraints, modifying the measurement set and the state variables. A methodology is also proposed for evaluating the availability of PMU measurement channels and its impact on system observability. Application of the proposed methods to test systems shows that the results are satisfactory.
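As a generic illustration of the distribution-level formulation (a linearized sketch of equality-constrained weighted least squares, not the thesis's specific measurement set or state variables):

```python
import numpy as np

def wls_state_estimate(H, z, W, C, c):
    """Linear(ized) WLS state estimation with equality constraints.

    Solves min (z - Hx)' W (z - Hx) subject to Cx = c via the KKT system;
    in power systems the constraints typically encode zero-injection buses.
    """
    n, m = H.shape[1], C.shape[0]
    G = H.T @ W @ H                                   # gain matrix
    K = np.block([[G, C.T], [C, np.zeros((m, m))]])   # KKT matrix
    rhs = np.concatenate([H.T @ W @ z, c])
    return np.linalg.solve(K, rhs)[:n]                # state estimate x
```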
