About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Latent variable models for longitudinal twin data

Dominicus, Annica January 2006 (has links)
Longitudinal twin data provide important information for exploring sources of variation in human traits. In statistical models for twin data, unobserved genetic and environmental factors influencing the trait are represented by latent variables. In this way, trait variation can be decomposed into genetic and environmental components. With repeated measurements on twins, latent variables can be used to describe individual trajectories, and the genetic and environmental variance components are assessed as functions of age. This thesis contributes to statistical methodology for analysing longitudinal twin data by (i) exploring the use of random change point models for modelling variance as a function of age, (ii) assessing how nonresponse in twin studies may affect estimates of genetic and environmental influences, and (iii) providing a method for hypothesis testing of genetic and environmental variance components. The random change point model, in contrast to linear and quadratic random effects models, is shown to be very flexible in capturing variability as a function of age. Approximate maximum likelihood inference through first-order linearization of the random change point model is contrasted with Bayesian inference based on Markov chain Monte Carlo simulation. In a set of simulations based on a twin model for informative nonresponse, it is demonstrated how the effect of nonresponse on estimates of genetic and environmental variance components depends on the underlying nonresponse mechanism. This thesis also reveals that the standard procedure for testing variance components is inadequate, since the null hypothesis places the variance components on the boundary of the parameter space. The asymptotic distribution of the likelihood ratio statistic for testing variance components in classical twin models is derived, resulting in a mixture of chi-square distributions. 
Statistical methodology is illustrated with applications to empirical data on cognitive function from a longitudinal twin study of aging.
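The boundary problem described in this abstract can be illustrated numerically: when a single variance component is tested against zero, the likelihood ratio statistic is asymptotically a 50:50 mixture of a point mass at zero and a χ²₁ distribution, so the naive χ²₁ p-value is twice too large. A minimal sketch of that correction (an illustration of the general result, not code from the thesis):

```python
import math

def chi2_1_sf(q):
    """Survival function of chi-square(1): P(X > q) = erfc(sqrt(q/2))."""
    return math.erfc(math.sqrt(q / 2.0))

def boundary_pvalue(q):
    """P-value for an LRT of one variance component on the boundary:
    asymptotically a 50:50 mixture of chi2_0 (point mass at 0) and chi2_1."""
    if q <= 0.0:
        return 1.0
    return 0.5 * chi2_1_sf(q)

q = 2.706  # the naive chi2_1 critical value for alpha = 0.10
print(boundary_pvalue(q))  # about 0.05: the mixture halves the naive p-value
```

Using the naive χ²₁ reference distribution here would make the test conservative, which is exactly the inadequacy of the standard procedure that the thesis points out.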
52

C-optimal Designs for Parameter Testing with Survival Data under Bivariate Copula Models

Yeh, Chia-Min 31 July 2007 (has links)
Current status data arise when a failure time variable T is difficult to observe directly but can be determined to lie below or above a random monitoring (inspection) time t. In this work we consider bivariate current status data $(t, \delta_1, \delta_2)$ and assume some prior information about the bivariate failure time variables T1 and T2. Our main goal is to find an optimal inspection time for testing the relationship between T1 and T2.
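The data structure in this abstract can be made concrete by simulation: only the indicators δᵢ = 1(Tᵢ ≤ t) at a common inspection time t are observed, never the failure times themselves. A minimal sketch, assuming exponential margins and a Clayton copula for the dependence (both are illustrative choices, not the thesis's models):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 3.0   # Clayton copula parameter (positive dependence); assumed for illustration
n = 20000

# Sample (U1, U2) from a Clayton copula via the conditional-distribution method.
u1 = rng.uniform(size=n)
w = rng.uniform(size=n)
u2 = (u1 ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

# Exponential(1) margins for the failure times T1, T2 (also assumed).
t1 = -np.log1p(-u1)
t2 = -np.log1p(-u2)

# Bivariate current status data at a common inspection time t:
# we only observe delta_i = 1(T_i <= t), never T_i itself.
t = np.log(2.0)   # median of Exp(1), so each margin has P(delta = 1) = 0.5
d1, d2 = (t1 <= t), (t2 <= t)

p11 = np.mean(d1 & d2)
p10 = np.mean(d1 & ~d2)
p01 = np.mean(~d1 & d2)
p00 = np.mean(~d1 & ~d2)
print(p11, p10, p01, p00)  # under independence p11 would be near 0.25; here it is larger
```

How informative the four cell probabilities are about the dependence between T1 and T2 depends on where t falls in the marginal distributions, which is why the choice of inspection time is a design problem.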
53

Selection and ranking procedures based on likelihood ratios

Chotai, Jayanti January 1979 (has links)
This thesis deals with random-size subset selection and ranking procedures derived through likelihood ratios, mainly in terms of the P*-approach. Let π_1, ..., π_k be k (≥ 2) populations such that π_i (i = 1, ..., k) has the normal distribution with unknown mean θ_i and variance a_i σ², where a_i is known and σ² may be unknown; and suppose a random sample of size n_i is taken from π_i. To begin with, we give a procedure R_1 (with tables) which selects π_i if sup_{Ω_i} L(θ; x) ≥ c sup_Ω L(θ; x), where Ω is the parameter space for θ = (θ_1, ..., θ_k); where Ω_i (with Ω_i ⊆ Ω) is the set of all θ with θ_i = max_j θ_j; where L(·; x) is the likelihood function based on the total sample; and where c is the largest constant that makes the rule satisfy the P*-condition. Then, we consider other likelihood ratios, with intuitively reasonable subspaces of Ω, and derive several new rules. Comparisons among some of these rules and the rule R of Gupta (1956, 1965) are made using different criteria; numerical for k = 3, and a Monte Carlo study for k = 10. For the case when the populations have the uniform (0, θ_i) distributions, and we have unequal sample sizes, we consider selection for the population with min_{1≤j≤k} θ_j. Comparisons with Barr and Rizvi (1966) are made, and generalizations are given. Rule R_1 is generalized to densities satisfying some reasonable assumptions (mainly unimodality of the likelihood, and monotonicity of the likelihood ratio). An exponential class is considered, and the results are exemplified by the gamma density and the Laplace density. Extensions and generalizations to cover the selection of the t best populations (using various requirements) are given. Finally, a discussion on the complete ranking problem, and on the relation between subset selection based on likelihood ratios and statistical inference under order restrictions, is given. / digitalisering@umu
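Gupta's rule R, used as a benchmark in this thesis, selects every population whose sample mean comes within a constant of the largest sample mean, producing a subset of random size. A minimal sketch of that kind of rule (the constant d would in practice be calibrated to the P*-condition; here it is just an assumed value):

```python
import numpy as np

def gupta_subset(xbar, sigma, n, d):
    """Gupta-type subset selection: select population i if
    xbar_i >= max_j xbar_j - d * sigma / sqrt(n).
    The selected subset has random size; d is calibrated to the P*-condition."""
    xbar = np.asarray(xbar, dtype=float)
    cutoff = xbar.max() - d * sigma / np.sqrt(n)
    return [i for i, x in enumerate(xbar) if x >= cutoff]

# k = 3 populations with common known sigma and sample size (illustrative numbers).
print(gupta_subset([0.1, 1.9, 2.0], sigma=1.0, n=25, d=1.5))  # -> [1, 2]
```

The rule always selects the population with the largest sample mean, and the P*-condition requires that the best population be included with probability at least P* over the preference zone; the likelihood ratio rules in the thesis replace the fixed cutoff with a likelihood comparison.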
54

Analytical Methods for the Performance Evaluation of Binary Linear Block Codes

Chaudhari, Pragat January 2000 (has links)
The modeling of the soft-output decoding of a binary linear block code using a Binary Phase Shift Keying (BPSK) modulation system (with reduced noise power) is the main focus of this work. With this model, it is possible to provide bit error performance approximations to help in the evaluation of the performance of binary linear block codes. As well, the model can be used in the design of communications systems which require knowledge of the characteristics of the channel, such as combined source-channel coding. Assuming an Additive White Gaussian Noise channel model, soft-output Log Likelihood Ratio (LLR) values are modeled to be Gaussian distributed. The bit error performance for a binary linear code over an AWGN channel can then be approximated using the Q-function that is used for BPSK systems. Simulation results are presented which show that the actual bit error performance of the code is very well approximated by the LLR approximation, especially for low signal-to-noise ratios (SNR). A new measure of the coding gain achievable through the use of a code is introduced by comparing the LLR variance to that of an equivalently scaled BPSK system. Furthermore, arguments are presented which show that the approximation requires fewer samples than conventional simulation methods to obtain the same confidence in the bit error probability value. This translates into fewer computations and therefore, less time is needed to obtain performance results. Other work was completed that uses a discrete Fourier Transform technique to calculate the weight distribution of a linear code. The weight distribution of a code is defined by the number of codewords which have a certain number of ones in the codewords. For codeword lengths of small to moderate size, this method is faster and provides an easily implementable and methodical approach over other methods. 
This technique has the added advantage over other techniques of being able to methodically calculate the number of codewords of a particular Hamming weight instead of calculating the entire weight distribution of the code.
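The BPSK analogy at the heart of this abstract can be sketched numerically: under the Gaussian model for the soft-output LLRs, the coded bit error probability takes the same Q-function form as uncoded BPSK, just with a rescaled argument. A minimal sketch of the uncoded baseline (the coded case in the thesis rescales the Q-function argument using the LLR variance; nothing below is from the thesis itself):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(ebno_db):
    """Uncoded BPSK bit error rate over AWGN: Pb = Q(sqrt(2 * Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return Q(math.sqrt(2.0 * ebno))

# The Gaussian-LLR model approximates a coded system as BPSK with reduced
# noise power, i.e. the same formula evaluated at a scaled argument.
print(bpsk_ber(0.0))   # about 7.9e-2 at 0 dB
print(bpsk_ber(9.6))   # near 1e-5, the classic uncoded benchmark point
```

Because the approximation is a closed-form expression rather than a Monte Carlo estimate, it needs no error-event counting at low BER, which is the source of the sample-size advantage the abstract describes.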
56

Empirical Likelihood Method for Ratio Estimation

Dong, Bin 22 February 2011 (has links)
Empirical likelihood, which was pioneered by Thomas and Grunkemeier (1975) and Owen (1988), is a powerful nonparametric method of statistical inference that has been widely used in the statistical literature. In this thesis, we investigate the merits of empirical likelihood for various problems arising in ratio estimation. First, motivated by the smooth empirical likelihood (SEL) approach proposed by Zhou & Jing (2003), we develop empirical likelihood estimators for diagnostic test likelihood ratios (DLRs), and derive the asymptotic distributions for suitable likelihood ratio statistics under certain regularity conditions. To skirt the bandwidth selection problem that arises in smooth estimation, we propose an empirical likelihood estimator for the same DLRs that is based on non-smooth estimating equations (NEL). Via simulation studies, we compare the statistical properties of these empirical likelihood estimators (SEL, NEL) to certain natural competitors, and identify situations in which SEL and NEL provide superior estimation capabilities. Next, we focus on deriving an empirical likelihood estimator of a baseline cumulative hazard ratio with respect to covariate adjustments under two nonproportional hazard model assumptions. Under typical regularity conditions, we show that suitable empirical likelihood ratio statistics each converge in distribution to a χ² random variable. Through simulation studies, we investigate the advantages of this empirical likelihood approach compared to use of the usual normal approximation. Two examples from previously published clinical studies illustrate the use of the empirical likelihood methods we have described. Empirical likelihood has obvious appeal in deriving point and interval estimators for time-to-event data. 
However, when we use this method and its asymptotic critical value to construct simultaneous confidence bands for survival or cumulative hazard functions, it typically necessitates very large sample sizes to achieve reliable coverage accuracy. We propose using a bootstrap method to recalibrate the critical value of the sampling distribution of the sample log-likelihood ratios. Via simulation studies, we compare our EL-based bootstrap estimator for the survival function with EL-HW and EL-EP bands proposed by Hollander et al. (1997) and apply this method to obtain a simultaneous confidence band for the cumulative hazard ratios in the two clinical studies that we mentioned above. While copulas have been a popular statistical tool for modeling dependent data in recent years, selecting a parametric copula is a nontrivial task that may lead to model misspecification because different copula families involve different correlation structures. This observation motivates us to use empirical likelihood to estimate a copula nonparametrically. With this EL-based estimator of a copula, we derive a goodness-of-fit test for assessing a specific parametric copula model. By means of simulations, we demonstrate the merits of our EL-based testing procedure. We demonstrate this method using the data from Wieand et al. (1989). In the final chapter of the thesis, we provide a brief introduction to several areas for future research involving the empirical likelihood approach.
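The empirical likelihood machinery used throughout this thesis is easiest to see in its simplest form, Owen's (1988) EL for a population mean: maximize ∏ n·wᵢ subject to ∑wᵢ = 1 and ∑wᵢ(xᵢ − μ₀) = 0, which gives wᵢ = 1/(n(1 + λ(xᵢ − μ₀))) with λ solving a one-dimensional profile equation. A minimal sketch of that computation (the mean case only, not the thesis's ratio estimators):

```python
import numpy as np

def el_log_ratio_stat(x, mu0, tol=1e-12):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0.
    Solves the profile equation sum d_i / (1 + lam * d_i) = 0 by bisection,
    where d_i = x_i - mu0; finite only if mu0 lies inside the data range."""
    d = np.asarray(x, dtype=float) - mu0
    if d.max() <= 0 or d.min() >= 0:
        return np.inf  # mu0 outside the convex hull: the EL ratio is zero
    n = len(d)

    def g(lam):
        return np.sum(d / (1.0 + lam * d))

    # g is strictly decreasing on the interval keeping all weights positive.
    lo = -1.0 / d.max() + 1e-10
    hi = -1.0 / d.min() - 1e-10
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    # -2 log R(mu0) = 2 * sum log(1 + lam * d_i); asymptotically chi-square(1).
    return 2.0 * np.sum(np.log(1.0 + lam * d))

rng = np.random.default_rng(1)
x = rng.exponential(size=50)
print(el_log_ratio_stat(x, x.mean()))  # essentially 0 at the sample mean
```

An EL confidence interval is then the set of μ₀ values where this statistic stays below a χ²₁ (or bootstrap-recalibrated) critical value; the thesis's bootstrap recalibration addresses exactly the poor finite-sample accuracy of the asymptotic critical value.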
57

A Study of Designs in Clinical Trials and Schedules in Operating Rooms

Hung, Wan-Ping 20 January 2011 (has links)
The design of clinical trials is one of the important problems in medical statistics. Its main purpose is to determine the methodology and the sample size required for a testing study to examine the safety and efficacy of drugs. It is also part of the Food and Drug Administration approval process. In this thesis, we first study the comparison of the efficacy of drugs in clinical trials. We focus on the two-sample comparison of proportions to investigate testing strategies based on a two-stage design. The properties and advantages of the procedures from the proposed testing designs are demonstrated by numerical results, where comparison with the classical method is made under the same sample size. A real example discussed in Cardenal et al. (1999) is provided to explain how the methods may be used in practice. Some figures are also presented to illustrate the pattern changes of the power functions of these methods. In addition, the proposed procedure is compared with the Pocock (1977) and O'Brien and Fleming (1979) tests based on the standardized statistics. In the second part of this work, the operating room scheduling problem is considered, which is also important in medical studies. The national health insurance system has been in operation for more than ten years in Taiwan. The Bureau of National Health Insurance continues to improve the system and tries to establish a reasonable fee ratio for people in different income ranges. In line with these adjustments, hospitals must pay more attention to controlling running costs. A major share of a hospital's revenue is generated by its surgery center operations, so effective operating room management is necessary to maintain financial balance. This study therefore focuses on model fitting of operating times and on operating room scheduling. 
Log-normal and mixture log-normal distributions are found to be statistically acceptable descriptions of these operating times. The procedure is illustrated through analysis of thirteen operations performed in the gynecology department of a major teaching hospital in southern Taiwan. The fitted distributions are selected through information criteria and by bootstrapping the log-likelihood ratio test, and the best-fitting distributions are used to evaluate the performance of operating combinations on the daily schedules observed in the real data. Moreover, we classify the operations into three categories, with three stages for each operation. Based on this classification, a strategy for efficient scheduling is proposed, and the benefits of rescheduling under the proposed strategy are compared with the original schedule.
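The log-normal model for operating times mentioned above can be fit by maximum likelihood simply by taking logs, and compared against a competing model with an information criterion such as AIC. A minimal sketch with simulated data (the real study used thirteen gynecology operations; the numbers below are invented):

```python
import numpy as np

def aic_normal(x):
    """AIC of a normal MLE fit (2 parameters; MLE uses the biased variance)."""
    n = len(x)
    var = x.var()
    loglik = -0.5 * n * (np.log(2 * np.pi * var) + 1)
    return 2 * 2 - 2 * loglik

def aic_lognormal(x):
    """AIC of a log-normal MLE fit: a normal fit on log(x) plus the
    Jacobian term sum(log x) from the change of variables."""
    logx = np.log(x)
    return aic_normal(logx) + 2 * logx.sum()

rng = np.random.default_rng(2)
times = rng.lognormal(mean=4.0, sigma=0.5, size=200)  # simulated operating times (minutes)
print(aic_lognormal(times) < aic_normal(times))  # lognormal wins on lognormal data
```

The same comparison extends to mixture log-normal fits via the EM algorithm, with the bootstrap likelihood ratio test handling the non-regular mixture-versus-one-component comparison, as the abstract describes.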
58

Biologically-inspired Motion Control for Kinematic Redundancy Resolution and Self-sensing Exploitation for Energy Conservation in Electromagnetic Devices

Babakeshizadeh, Vahid January 2014 (has links)
This thesis investigates particular topics in advanced motion control of two distinct mechanical systems: human-like motion control of redundant robot manipulators, and advanced sensing and control for energy-efficient operation of electromagnetic devices. Control of robot manipulators for human-like motion has been a challenging topic in robot control for over half a century. The first part of this thesis considers methods that exploit a robot manipulator's redundant degrees of freedom for such purposes. The Jacobian transpose control law is investigated as one of the well-known controllers, and sufficient conditions for its universal convergence are derived using the concepts of "stability on a manifold" and "transferability to a sub-manifold". Firstly, a modification of this method is proposed to improve the rectilinearity of the robot end-effector's trajectory. Secondly, an abridged Jacobian controller is proposed that exploits passive control of joints to reduce the attended degrees of freedom of the system. Finally, the application of a minimally-attended controller for human-like motion is introduced. Electromagnetic (EM) access control systems are a growing class of electronic systems, used in applications where conventional mechanical locks may not guarantee the expected safety of the peripheral doors of buildings. In the second part of this thesis, an intelligent EM unit is introduced which recruits the self-sensing capability of the original EM block for detection purposes. The proposed EM device optimizes its energy consumption through a control strategy which regulates the supply to the system upon detection of any imminent disturbance; it therefore draws a very small current when full power is not needed. The performance of the proposed control strategy was evaluated against a standard safety requirement for EM locking mechanisms. For a particular EM model, the proposed method is verified to achieve a 75% reduction in power consumption.
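The Jacobian transpose law investigated in the first part drives the joints in proportion to Jᵀ(q)·e, where e is the Cartesian end-effector error, so no matrix inversion is ever needed. A minimal kinematic sketch for a planar 2-link arm (link lengths, gain, and target are assumed for illustration, not taken from the thesis):

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths (assumed)

def fk(q):
    """Forward kinematics of a planar 2-link arm: joint angles -> (x, y)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

# Jacobian transpose control: q_dot = alpha * J(q)^T * e, no inversion of J.
q = np.array([0.3, 0.5])
target = np.array([1.2, 0.8])  # a reachable target (assumed)
alpha = 0.1
for _ in range(2000):
    e = target - fk(q)
    q = q + alpha * jacobian(q).T @ e

print(np.linalg.norm(target - fk(q)))  # small residual error
```

This update is gradient descent on the squared Cartesian error, which is why convergence analyses of the law hinge on conditions like those ("stability on a manifold") studied in the thesis rather than on invertibility of J.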
60

Análise de agrupamento de semeadoras manuais quanto à distribuição do número de sementes / Cluster analysis of manual planters according to the distribution of the number of seeds

Patricia Peres Araripe 10 December 2015 (has links)
The manual planter is a tool that still plays an important role today in many countries that practice family and conservation agriculture. Its use is important because it minimizes soil disturbance, reduces field labor requirements, and supports more sustainable productivity, among other factors. Several studies have been conducted to evaluate and/or compare the manual planters available on the market, but they consider only measures of position and dispersion. This work presents an alternative methodology for comparing the performance of manual planters. The probabilities associated with each response category are estimated, and the hypothesis that these probabilities do not differ between planters compared in pairs is tested using the likelihood ratio test and the Bayes factor, under the classical and Bayesian paradigms respectively. Finally, the planters are grouped by cluster analysis, using the J-divergence as the distance measure. As an illustration of the methodology, we consider the data from fifteen manual planters of different manufacturers analyzed by Molin, Menegatti and Gimenez (2001), in which the planters were adjusted to deposit exactly two seeds per stroke. 
Initially, in the classical approach, the planters without zero counts in the response categories were compared, and planters 3, 8 and 14 showed the best behavior. Subsequently, all planters were compared in pairs, either grouping categories or adding the constant 0.5 or 1 to each response category. When categories were grouped, conclusions from the likelihood ratio test were difficult to draw, showing only that planter 15 differed from the others. Adding 0.5 or 1 to each category did not appear to produce distinct groups; planter 1, which differed from the others by the test and most frequently deposited two seeds per stroke, as required by the agronomic experiment, was the planter recommended in this work. In the Bayesian approach, the Bayes factor was used to compare the planters in pairs, and the conclusions were similar to those obtained in the classical approach. Finally, cluster analysis gave a clearer view of the groups of mutually similar planters under both approaches, confirming the earlier results.
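The pairwise likelihood ratio test described above compares two planters' multinomial counts over the seed-count categories. A minimal sketch of that statistic, the G-test for homogeneity of two multinomial samples (the category labels and counts below are invented, not Molin et al.'s data):

```python
import numpy as np

def lr_test_two_multinomials(counts_a, counts_b):
    """Likelihood ratio (G) statistic for H0: two multinomial samples share
    the same category probabilities; asymptotically chi-square with k-1 df,
    where k is the number of categories."""
    table = np.array([counts_a, counts_b], dtype=float)
    grand = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / grand
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(table > 0, table * np.log(table / expected), 0.0)
    return 2.0 * terms.sum(), table.shape[1] - 1

# Invented counts over categories (0, 1, 2, 3+ seeds per stroke) for two planters.
g, df = lr_test_two_multinomials([5, 30, 150, 15], [20, 40, 120, 20])
print(g, df)  # compare g to the chi-square(df) critical value, e.g. 7.81 at the 5% level
```

The zero-count problem the abstract mentions is visible here: empty cells break the log term, which is why the authors either grouped categories or added a constant such as 0.5 or 1 to every cell before testing.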
