  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Exact D-optimal designs for mixture experiments in Scheffe's quadratic models

Wu, Shian-Chung 05 July 2006 (has links)
The exact D-optimal design problem for regression models has been investigated in the literature. Huang (1987) and Gaffke (1987) provided a sufficient condition on the minimum sample size for a certain set of candidate designs to be exact D-optimal for polynomial regression models on a compact interval. In this work we consider a mixture experiment with q nonnegative components, where the proportions of the components are subject to the simplex restriction $\sum_{i=1}^q x_i = 1$, $x_i \ge 0$. The exact D-optimal designs for mixture experiments under Scheffé's quadratic models are investigated. Based on results in Kiefer (1961), exact D-optimal designs for mixture models with two or three ingredients are provided, and numerical verifications for models with between four and nine ingredients are presented.
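For Scheffé's quadratic mixture model, the classical candidate design is the {q,2} simplex-lattice (the q vertices plus the edge midpoints). A minimal numpy sketch (illustrative values and the competing design are assumptions, not taken from the thesis) compares its exact D-value with an alternative six-point design for q = 3:

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic(x):
    """Terms of Scheffe's quadratic mixture model: x_i and x_i*x_j (i<j)."""
    x = np.asarray(x, dtype=float)
    cross = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([x, cross])

def d_value(points):
    """det of the per-point information matrix X'X/n for an exact design."""
    X = np.array([scheffe_quadratic(p) for p in points])
    return np.linalg.det(X.T @ X / len(points))

# {3,2} simplex-lattice: the 3 vertices plus the 3 edge midpoints
q = 3
vertices = [np.eye(q)[i] for i in range(q)]
midpoints = [(np.eye(q)[i] + np.eye(q)[j]) / 2 for i, j in combinations(range(q), 2)]
lattice = vertices + midpoints

# A competing 6-point design that replaces midpoints with interior points
interior = vertices + [np.array([0.6, 0.2, 0.2]),
                       np.array([0.2, 0.6, 0.2]),
                       np.array([0.2, 0.2, 0.6])]
print(d_value(lattice) > d_value(interior))  # the lattice has the larger D-value
```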
12

Evaluating SLAM algorithms for Autonomous Helicopters

Skoglund, Martin January 2008 (has links)
Navigation with unmanned aerial vehicles (UAVs) requires good knowledge of the current position and other states. A UAV navigation system often uses GPS and inertial sensors in a state estimation solution. If the GPS signal is lost or corrupted, state estimation must still be possible, and this is where simultaneous localization and mapping (SLAM) provides a solution. SLAM considers the problem of incrementally building a consistent map of a previously unknown environment while simultaneously localizing the vehicle within this map; a solution thus does not require position from the GPS receiver. This thesis presents a visual-feature-based SLAM solution using a low-resolution video camera, a low-cost inertial measurement unit (IMU) and a barometric pressure sensor. State estimation is made with an extended information filter (EIF) where sparseness in the information matrix is enforced with an approximation. An implementation is evaluated on real flight data and compared to an EKF-SLAM solution. Results show that both solutions provide similar estimates but the EIF is over-confident. The sparse structure is exploited, though possibly not fully, making the solution nearly linear in time; storage requirements are linear in the number of features, which enables evaluation over a longer period of time.
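The information-filter form behind the EIF can be illustrated on a linear Gaussian measurement update, where it is exactly equivalent to the Kalman (covariance) form but the update to the information matrix is additive — which is why approximate sparsity in that matrix is attractive for SLAM. A generic sketch with illustrative values (not from the thesis):

```python
import numpy as np

# Linear measurement update in covariance (Kalman) and information form.
# The information matrix Y = P^{-1}; a measurement adds H' R^{-1} H to Y.
P = np.diag([1.0, 2.0, 0.5])      # prior covariance (toy 3-dim state)
x = np.array([1.0, -1.0, 0.1])    # prior mean
H = np.array([[1.0, 0.0, 0.0]])   # observe the first state only
R = np.array([[0.25]])            # measurement noise covariance
z = np.array([1.2])               # measurement

# Covariance (Kalman) form
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x_kf = x + K @ (z - H @ x)
P_kf = (np.eye(3) - K @ H) @ P

# Information form: the update is additive in (Y, y)
Y = np.linalg.inv(P)
y = Y @ x
Y_if = Y + H.T @ np.linalg.inv(R) @ H
y_if = y + H.T @ np.linalg.inv(R) @ z
x_if = np.linalg.solve(Y_if, y_if)

print(np.allclose(x_kf, x_if))                    # same posterior mean
print(np.allclose(np.linalg.inv(P_kf), Y_if))     # same posterior information
```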
14

Designs for nonlinear regression with a prior on the parameters

Karami, Jamil Unknown Date
No description available.
15

Applied Adaptive Optimal Design and Novel Optimization Algorithms for Practical Use

Strömberg, Eric January 2016 (has links)
The costs of developing new pharmaceuticals have increased dramatically during the past decades. Contributing to these increased expenses are the increasingly extensive and more complex clinical trials required to generate sufficient evidence regarding the safety and efficacy of the drugs. It is therefore of great importance to improve the effectiveness of the clinical phases by increasing the information gained throughout the process, so that the correct decision may be made as early as possible. Optimal Design (OD) methodology using the Fisher Information Matrix (FIM) based on Nonlinear Mixed Effect Models (NLMEM) has proven to be a useful tool for making more informed decisions throughout the clinical investigation. The calculation of the FIM for NLMEM, however, lacks an analytic solution and is commonly approximated by linearization of the NLMEM. Furthermore, two structural assumptions of the FIM are available: a full FIM, and a block-diagonal FIM which assumes that the fixed effects are independent of the random effects in the NLMEM. Once the FIM has been derived, it can be transformed into a scalar optimality criterion for comparing designs. The optimality criterion may be considered local, if the criterion is based on single point values of the parameters, or global (robust), where the criterion is formed over a prior distribution of the parameters. Regardless of design criterion, FIM approximation or structural assumption, the design will be based on the prior information regarding the model and parameters, and is thus sensitive to misspecification in the design stage. Model-based adaptive optimal design (MBAOD) has, however, been shown to be less sensitive to misspecification in the design stage. The aim of this thesis is to further the understanding and practicality of standard OD and MBAOD.
This is to be achieved by: (i) investigating how two common FIM approximations and the structural assumptions may affect the optimized design, (ii) reducing the runtime of complex design optimization by implementing a low-level parallelization of the FIM calculation, (iii) further developing and demonstrating a framework for performing MBAOD, and (iv) investigating the potential advantages of using a global optimality criterion in the already robust MBAOD.
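The local D-criterion described above can be sketched for a simple nonlinear mean model using the common linearization of the FIM, M = Σ J(t)J(t)'/σ², where J is the sensitivity of the mean to the parameters. The model y = a·exp(−b·t) and all values below are illustrative, not taken from the thesis:

```python
import numpy as np

# Linearized FIM for the toy nonlinear mean mu(t) = a*exp(-b*t)
def fim(times, a=1.0, b=0.5, sigma2=1.0):
    M = np.zeros((2, 2))
    for t in times:
        J = np.array([np.exp(-b * t), -a * t * np.exp(-b * t)])  # d mu / d(a, b)
        M += np.outer(J, J) / sigma2
    return M

def log_det(times):
    """Local D-criterion: log det of the (approximate) FIM."""
    return np.log(np.linalg.det(fim(times)))

# The D-criterion prefers spread sampling times: clustered times give
# nearly collinear sensitivities and a near-singular FIM.
print(log_det([0.0, 2.0, 4.0]) > log_det([0.1, 0.15, 0.2]))  # True
```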
16

Some Properties of Exchange Design Algorithms Under Correlation

Stehlik, Milan January 2006 (has links) (PDF)
In this paper we discuss an algorithm for the construction of D-optimal experimental designs for the parameters in a regression model when the errors have a correlation structure. We show that design points can collapse under the presence of some covariance structures, and that a so-called nugget can be employed in a natural way. We also show that the information of equidistant designs on the covariance parameter increases with the number of design points under an exponential variogram; these designs, however, are not D-optimal. In higher dimensions, too, the exponential structure without a nugget leads to collapse of the D-optimal design when the parameters of the covariance structure are also of interest. However, if only trend parameters are of interest, designs covering the whole design space uniformly are very efficient. Some numerical examples are included for illustration. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
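The role of the nugget can be shown in a few lines: under an exponential covariance, two coincident design points make the covariance matrix singular, while a nugget restores invertibility, which is one way a collapsing design remains usable. A sketch with assumed parameter values (not from the paper):

```python
import numpy as np

# Exponential covariance C_ij = exp(-theta*|x_i - x_j|) + nugget*1{i=j}
def cov(x, theta=1.0, nugget=0.0):
    x = np.asarray(x, dtype=float)
    C = np.exp(-theta * np.abs(x[:, None] - x[None, :]))
    return C + nugget * np.eye(len(x))

collapsed = [0.0, 0.5, 0.5]   # two design points have merged
print(np.linalg.matrix_rank(cov(collapsed)))              # 2: singular
print(np.linalg.matrix_rank(cov(collapsed, nugget=0.1)))  # 3: invertible again
```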
17

Bias Reduction and Goodness-of-Fit Tests in Conditional Logistic Regression Models

Sun, Xiuzhen 2010 August 1900 (has links)
This dissertation consists of three projects in matched case-control studies. In the first project, we employ a general bias-preventive approach developed by Firth (1993) to handle the bias of an estimator of the log-odds ratio parameter in conditional logistic regression by solving a modified score equation. The resulting estimator not only reduces bias but also prevents infinite estimates. Furthermore, we propose a method to calculate the standard error of the resulting estimator. A closed-form expression for the estimator of the log-odds ratio parameter is derived in the case of a dichotomous exposure variable. Finite-sample properties of the estimator are investigated via a simulation study. Finally, we apply the method to analyze matched case-control data from a low-birth-weight study. In the second project, we propose a score-type test for checking the adequacy of the functional form of a covariate of interest in matched case-control studies, using penalized regression splines to approximate an unknown function. The asymptotic distribution of the test statistic under the null model is a linear combination of several chi-square random variables. We also derive the asymptotic distribution of the test statistic when the alternative model holds. Through a simulation study we assess and compare the finite-sample properties of the proposed test with those of Arbogast and Lin (2004). To illustrate the usefulness of the method, we apply the proposed test to matched case-control data constructed from the breast cancer data of the SEER study. Usually a logistic model is needed to associate the risk of the disease with the covariates of interest; however, this logistic model may not be appropriate in some instances.
In the last project, we adapt the information-matrix test idea to matched case-control studies, deriving an information-matrix-based test of overall model adequacy, and investigate its properties against the cumulative-residual-based test of Arbogast and Lin (2004) via a simulation study. The proposed method is less time-consuming and has comparable power for small parameters. It is suitable for exploring the overall model fit.
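For the dichotomous-exposure closed form mentioned above, a tiny sketch may help: with 1:1 matched pairs, the conditional likelihood depends only on the discordant pairs, the conditional MLE of the log-odds ratio is log(n10/n01), and a Firth-type (Jeffreys) penalty for this binomial reduces to adding 1/2 to each discordant count. The add-1/2 form is an assumption for illustration, not the dissertation's derivation:

```python
import math

def cond_mle(n10, n01):
    """Conditional MLE of the log-odds ratio from discordant pair counts.
    Infinite when either discordant count is zero."""
    return math.log(n10 / n01) if n01 > 0 and n10 > 0 else math.inf

def firth_estimate(n10, n01):
    """Jeffreys/Firth-style adjusted estimate: always finite (assumed form)."""
    return math.log((n10 + 0.5) / (n01 + 0.5))

print(cond_mle(12, 4))        # finite: log(3)
print(cond_mle(12, 0))        # infinite -- separation
print(firth_estimate(12, 0))  # finite: log(12.5/0.5)
```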
18

Algoritmo genético aplicado à determinação da melhor configuração e do menor tamanho amostral na análise da variabilidade espacial de atributos químicos do solo / Genetic algorithm applied to determine the best configuration and the lowest sample size in the analysis of space variability of chemical attributes of soil

Maltauro, Tamara Cantú 21 February 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / It is essential to determine a sampling design with a size that minimizes operating costs and maximizes the quality of the results when setting up a trial that involves studying the spatial variability of chemical attributes of soil. This work therefore aimed at resizing a sample configuration to the smallest possible number of points for a commercial area composed of 102 points, using the information on the spatial variability of soil chemical attributes to optimize the process. Initially, Monte Carlo simulations were carried out, assuming a Gaussian, isotropic process with an exponential semivariance model and three initial sampling configurations: systematic, simple random, and lattice plus close pairs. A genetic algorithm (GA) was used with both the simulated data and the soil chemical attributes to resize the optimized sample, considering two objective functions based on the efficiency of spatial prediction and of geostatistical model estimation, respectively: maximization of the global accuracy measure, and minimization of functions based on the Fisher information matrix. For the simulated data and both objective functions, when the nugget effect and the range varied, the samplings usually showed the lowest objective-function values when the nugget effect was 0 and the practical range was 0.9, and increasing the practical range slightly reduced the number of optimized sampling points in most cases. For the soil chemical attributes, the GA was efficient in reducing the sample size with both objective functions. When maximizing global accuracy, the sample size varied from 30 to 35 points, corresponding to 29.41% to 34.31% of the initial mesh, with a minimum spatial-prediction similarity to the original configuration of 85% or more; these results carried over to the optimization process, with similarity between the maps constructed from the original and optimized sample configurations. When minimizing the function based on the Fisher information matrix, the optimized sample size varied from 30 to 40 points, corresponding to 29.41% and 39.22% of the original mesh, respectively; in this case, however, there was no similarity between the maps constructed from the initial and optimized sample configurations. For both objective functions, the soil chemical attributes showed moderate spatial dependence for the original sample configuration, and most attributes showed moderate or strong spatial dependence for the optimized configuration. Thus, the optimization process was efficient both for the simulated data and for the soil chemical attributes.
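The GA-based subset selection described above can be sketched with a toy objective. The fitness below (mean distance from every candidate to its nearest kept point, a space-filling stand-in) and all sizes are assumptions for illustration; the thesis optimizes accuracy- and Fisher-information-based criteria instead:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((40, 2))   # 40 candidate sampling sites in the unit square
k = 8                       # target (reduced) sample size

def fitness(idx):
    # Negated mean distance from each candidate to its nearest kept point
    d = np.linalg.norm(pts[:, None, :] - pts[idx][None, :, :], axis=2)
    return -d.min(axis=1).mean()            # higher is better

def crossover(a, b):
    # Child draws k distinct sites from the union of both parents
    pool = np.unique(np.concatenate([a, b]))
    return rng.choice(pool, size=k, replace=False)

def mutate(idx):
    out = idx.copy()
    out[rng.integers(k)] = rng.integers(len(pts))
    return out if len(np.unique(out)) == k else idx   # reject duplicate sites

pop = [rng.choice(len(pts), size=k, replace=False) for _ in range(30)]
best_init = max(fitness(s) for s in pop)
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                        # elitist selection
    pop = elite + [mutate(crossover(elite[rng.integers(10)],
                                    elite[rng.integers(10)]))
                   for _ in range(20)]
best_final = max(fitness(s) for s in pop)
print(best_final >= best_init)  # elitism guarantees no regression
```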
19

General Weighted Optimality of Designed Experiments

Stallings, Jonathan W. 22 April 2014 (has links)
Design problems involve finding optimal plans that minimize cost and maximize information about the effects of changing experimental variables on some response. Information is typically measured through statistically meaningful functions, or criteria, of a design's corresponding information matrix. The most common criteria implicitly assume equal interest in all effects and certain forms of information matrices tend to optimize them. However, these criteria can be poor assessments of a design when there is unequal interest in the experimental effects. Morgan and Wang (2010) addressed this potential pitfall by developing a concise weighting system based on quadratic forms of a diagonal matrix W that allows a researcher to specify relative importance of information for any effects. They were then able to generate a broad class of weighted optimality criteria that evaluate a design's ability to maximize the weighted information, ultimately targeting those designs that efficiently estimate effects assigned larger weight. This dissertation considers a much broader class of potential weighting systems, and hence weighted criteria, by allowing W to be any symmetric, positive definite matrix. Assuming the response and experimental effects may be expressed as a general linear model, we provide a survey of the standard approach to optimal designs based on real-valued, convex functions of information matrices. Motivated by this approach, we introduce fundamental definitions and preliminary results underlying the theory of general weighted optimality. A class of weight matrices is established that allows an experimenter to directly assign weights to a set of estimable functions and we show how optimality of transformed models may be placed under a weighted optimality context. Straightforward modifications to SAS PROC OPTEX are shown to provide an algorithmic search procedure for weighted optimal designs, including A-optimal incomplete block designs. 
Finally, a general theory is given for design optimization when only a subset of all estimable functions is assumed to be in the model. We use this to develop a weighted criterion to search for A-optimal completely randomized designs for baseline factorial effects assuming all high-order interactions are negligible. / Ph. D.
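An illustrative weighted criterion in the spirit of this weighting system: score a design with model matrix X by trace(W·(X'X)⁻¹), so that larger weights penalize variance on the effects of greater interest. The trace form, the two-factor designs, and the diagonal W below are assumptions for the sketch, not the dissertation's general development:

```python
import numpy as np

def weighted_a(X, W):
    """Weighted A-type value trace(W M^{-1}) with M = X'X; smaller is better."""
    return np.trace(W @ np.linalg.inv(X.T @ X))

def design(r1, r2):
    """8-run two-factor design with factor ranges +-r1 and +-r2."""
    f1 = np.repeat([-r1, r1], 4)
    f2 = np.tile([-r2, r2], 4)
    return np.column_stack([np.ones(8), f1, f2])

A = design(1.0, 0.5)   # estimates factor 1 precisely
B = design(0.5, 1.0)   # estimates factor 2 precisely

W1 = np.diag([1.0, 10.0, 1.0])  # researcher cares most about factor 1
W2 = np.diag([1.0, 1.0, 10.0])  # researcher cares most about factor 2
print(weighted_a(A, W1) < weighted_a(B, W1))  # weight on factor 1 favors A
print(weighted_a(A, W2) > weighted_a(B, W2))  # weight on factor 2 favors B
```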
20

The new class of Kummer beta generalized distributions: theory and applications / A nova classe de distribuições Kummer beta generalizada: teoria e aplicações

Pescim, Rodrigo Rossetto 06 December 2013 (has links)
In this study, a new class of generalized distributions is developed, based on the Kummer beta distribution (NG; KOTZ, 1995), which contains as particular cases the exponentiated and beta generators of distributions. The main feature of the new family is to provide greater flexibility at the extremes of the density function, making it suitable for analyzing data sets with a high degree of asymmetry and kurtosis. Two new distributions belonging to the new class, based on the Birnbaum-Saunders and generalized gamma distributions, are also studied; their main characteristic is a hazard function that assumes different shapes (unimodal, bathtub-shaped, increasing, decreasing). In all studies, general mathematical properties such as ordinary and incomplete moments, the generating function, mean deviations, reliability, entropies, and order statistics and their moments are discussed. Parameter estimation is approached by the method of maximum likelihood and by Bayesian analysis, and the observed information matrix is derived. Likelihood-ratio statistics and formal goodness-of-fit tests are also considered to compare all the proposed distributions with some of their sub-models and with non-nested models. The results developed in all studies were applied to six real data sets.
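A sketch of a Kummer-beta-type generator applied to a baseline cdf G with density g. The assumed density form f(x) = K·g(x)·G(x)^(a−1)·(1−G(x))^(b−1)·exp(−c·G(x)) is hedged (see the thesis for the exact construction and constant); substituting u = G(x) shows the normalizing constant K depends only on (a, b, c), so it can be found by one-dimensional quadrature on (0, 1):

```python
import math

def kb_kernel(u, a, b, c):
    """Unnormalized generator kernel in terms of u = G(x) (assumed form)."""
    return u ** (a - 1) * (1 - u) ** (b - 1) * math.exp(-c * u)

def norm_const(a, b, c, n=100000):
    """K via midpoint-rule quadrature of the kernel on (0, 1)."""
    h = 1.0 / n
    s = sum(kb_kernel((i + 0.5) * h, a, b, c) for i in range(n)) * h
    return 1.0 / s

# Baseline: standard exponential, G(x) = 1 - exp(-x), g(x) = exp(-x)
G = lambda x: 1.0 - math.exp(-x)
g = lambda x: math.exp(-x)
a, b, c = 2.0, 3.0, 1.5
K = norm_const(a, b, c)
pdf = lambda x: K * g(x) * kb_kernel(G(x), a, b, c)

# Sanity check: the generated density should integrate to ~1 on (0, inf)
total = sum(pdf((i + 0.5) * 0.001) for i in range(30000)) * 0.001
print(total)  # close to 1.0
```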
