1 |
Maximum Likelihood and Least Squares Estimation. Goddard, Jennifer McDonald DaCosta, January 1978
1 volume
|
2 |
Maximum likelihood estimation of parameters with constraints in normal and multinomial distributions. Xue, Huitian (薛惠天), January 2012
Motivated by problems in medicine, biology, engineering and economics, constrained parameter problems arise in a wide variety of applications. Among them, the application to the dose-response of a certain drug in development has attracted much interest. To investigate such a relationship, we often need to conduct a dose-response experiment with multiple groups associated with multiple dose levels of the drug. The dose-response relationship can be modeled by a shape-restricted normal regression. We develop an iterative two-step ascent algorithm to estimate normal means and variances subject to simultaneous constraints. Each iteration consists of two parts: an expectation-maximization (EM) algorithm that is utilized in Step 1 to compute the maximum likelihood estimates (MLEs) of the restricted means when variances are given, and a newly developed restricted De Pierro algorithm that is used in Step 2 to find the MLEs of the restricted variances when means are given. These constraints include the simple order, tree order, umbrella order, and so on. A bootstrap approach is provided to calculate standard errors of the restricted MLEs. Applications to the analysis of two real datasets on radioimmunological assay of cortisol and bioassay of peptides are presented to illustrate the proposed methods.
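To make the constrained-estimation step concrete, the following is a minimal Python sketch of the simple-order case only: with the variances held fixed, the restricted MLEs of the means are a weighted isotonic regression of the group means (weights n_i / sigma_i^2), computed here by pool-adjacent-violators, and alternating this with a variance update gives a basic two-step iteration. This is an illustration of the idea, not the EM / restricted De Pierro algorithm developed in the thesis, and the restriction on the variances is omitted.

```python
import numpy as np

def weighted_pava(y, w):
    """Weighted isotonic (non-decreasing) regression by pool-adjacent-violators."""
    blocks = [[float(yi), float(wi), 1] for yi, wi in zip(y, w)]  # [mean, weight, size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:            # ordering violated: pool the blocks
            m0, w0, n0 = blocks[i]
            m1, w1, n1 = blocks[i + 1]
            blocks[i] = [(w0 * m0 + w1 * m1) / (w0 + w1), w0 + w1, n0 + n1]
            del blocks[i + 1]
            if i > 0:
                i -= 1                                 # re-check the previous pair
        else:
            i += 1
    return np.concatenate([[m] * n for m, _, n in blocks])

def simple_order_mle(samples, n_iter=50):
    """Alternate order-restricted means (given variances) and variances (given means)."""
    ybar = np.array([np.mean(s) for s in samples])
    n = np.array([len(s) for s in samples])
    sig2 = np.array([np.var(s) for s in samples])      # start from unrestricted variances
    for _ in range(n_iter):
        mu = weighted_pava(ybar, n / sig2)             # restricted means, weights n_i / sigma_i^2
        sig2 = np.array([np.mean((s - m) ** 2) for s, m in zip(samples, mu)])
    return mu, sig2

rng = np.random.default_rng(0)
groups = [rng.normal(m, s, 30) for m, s in [(1.0, 0.5), (0.8, 0.7), (1.6, 1.0)]]
print(simple_order_mle(groups))                        # fitted means are non-decreasing
```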
Liu (2000) discussed maximum likelihood estimation and Bayesian estimation in a multinomial model with simplex constraints by reformulating this constrained parameter problem as an unconstrained one in the framework of missing data. To utilize the EM and data augmentation (DA) algorithms, he introduced latent variables {Z_il, Y_il} (to be defined later). However, the DA algorithm proposed in his paper did not provide the necessary individual conditional distributions of Y_il given the observed data and the updated parameter estimates. Indeed, the EM algorithm developed in his paper is based on the assumption that the {Y_il} are fixed at given values. Fortunately, the EM algorithm is invariant under any choice of the value of Y_il, so the final result is always correct. We derive the aforesaid conditional distributions and hence provide a valid DA algorithm. A real data set is used for illustration. / Statistics and Actuarial Science / Master of Philosophy
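For readers unfamiliar with the missing-data formulation mentioned above, the sketch below shows the EM-with-latent-counts idea on the classical multinomial linkage example. It is only a generic illustration; it is not Liu's (2000) model and does not involve the {Z_il, Y_il} augmentation or the simplex constraints treated in the thesis.

```python
import numpy as np

def em_linkage(counts, theta=0.5, n_iter=200):
    """EM for a multinomial with cell probabilities (1/2 + t/4, (1-t)/4, (1-t)/4, t/4)."""
    y1, y2, y3, y4 = counts
    for _ in range(n_iter):
        # E-step: expected part of y1 that falls in the latent theta/4 sub-cell
        z = y1 * (theta / 4) / (0.5 + theta / 4)
        # M-step: closed-form complete-data MLE of theta
        theta = (z + y4) / (z + y2 + y3 + y4)
    return theta

print(em_linkage((125, 18, 20, 34)))   # converges to about 0.6268
```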
|
3 |
Estimation of mixture of normal distributions. January 1979
by Tse Siu-Keung. / Thesis (M.Phil.)--Chinese University of Hong Kong. / Bibliography: leaves 47-48.
|
4 |
The midrange estimator in symmetric distributions. Sundheim, Richard Allen, January 2010
Digitized by Kansas Correctional Industries
|
5 |
Nonlinear estimation: Gauss-Newton procedure. McFarland, Steven Deane, January 2011
Digitized by Kansas Correctional Industries
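For reference, a minimal generic Gauss-Newton iteration for nonlinear least squares is sketched below; the exponential model, the data and the starting values are assumptions made for the example, and the thesis's own formulation is not reproduced here.

```python
import numpy as np

def gauss_newton(r, J, theta, n_iter=20):
    """Minimize ||r(theta)||^2 via theta <- theta - (J'J)^{-1} J' r (no step-size control)."""
    for _ in range(n_iter):
        Jt = J(theta)
        theta = theta - np.linalg.solve(Jt.T @ Jt, Jt.T @ r(theta))
    return theta

# example: fit y = a * exp(b * t) to noisy data (hypothetical data, not from the thesis)
rng = np.random.default_rng(2)
t = np.linspace(0, 2, 40)
y = 3.0 * np.exp(0.7 * t) + rng.normal(0, 0.05, t.size)
r = lambda th: th[0] * np.exp(th[1] * t) - y                     # residual vector
J = lambda th: np.column_stack([np.exp(th[1] * t),               # d r / d a
                                th[0] * t * np.exp(th[1] * t)])  # d r / d b
print(gauss_newton(r, J, np.array([2.5, 0.6])))                  # roughly [3.0, 0.7]
```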
|
6 |
New regularization methods for the inverse problem of parameter estimation in ordinary differential equations. Clairon, Quentin, 9 April 2015
We present in this thesis two regularization methods for the parameter estimation problem in ordinary differential equations (ODEs). The first one is an extension of the two-step method; its initial motivations are to obtain an expression of the asymptotic variance and to avoid the use of the derivative form of the nonparametric estimator. It relies on the notion of weak ODE solution and considers a variational characterisation of the solution. By doing so, it identifies the true parameter set as the one satisfying a set of moments, which are smoother functions of the parameters than the least-squares criterion. This general formulation defines an estimator which applies to a broad range of differential equations and can incorporate prior knowledge on the ODE solution. These arguments, supported by the numerical results obtained, make this method competitive with least squares. Nonetheless, this estimator requires all state variables to be observed.
The second method also applies to the partially observed case. It regularizes the inverse problem by relaxing the constraint imposed by the ODE, replacing the initial model by a perturbed one. The estimator is then defined as the minimizer, over the set of possible perturbations, of a profiled cost that penalizes the distance between the initial model and the perturbed one. This approach requires the introduction of an infinite-dimensional optimization problem, solved by using a fundamental result from optimal control theory, the Pontryagin maximum principle. Its application turns the resolution of the optimization problem into the integration of an ODE with boundary constraints. Thus, we obtain an implementable estimator for which we prove consistency. We dedicate a thorough analysis to the case of linear ODEs, for which we derive the parametric convergence rate and the asymptotic normality of our estimator. In addition, we have access to a simpler expression for the profiled cost, which makes the numerical implementation of our estimator easier. These results are due to linear-quadratic theory, derived from the Pontryagin maximum principle in the linear case, which gives existence, uniqueness and a simple expression of the solution of the optimization problem defining our estimator. We show, through numerical examples, that our estimator is competitive with least squares and generalized smoothing, in particular in the presence of model misspecification, thanks to the model relaxation introduced in our approach. Furthermore, our estimation method based on optimal control theory offers a relevant framework for dealing with functional data analysis problems, as illustrated by an example in the linear case.
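As a rough illustration of the weak-formulation idea in the first method, the sketch below fits a fully observed scalar ODE x'(t) = f(x(t), theta) by smoothing the data and then matching weak-form moments obtained by integration by parts against test functions that vanish at the endpoints, so that no derivative of the nonparametric estimator is needed. The decay model x' = -theta * x, the sine test functions and the smoothing level are assumptions chosen for the example; this is not the estimator of the thesis.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import least_squares

def weak_form_fit(t, y, f, theta0, n_test=5):
    xhat = UnivariateSpline(t, y, s=0.05)                    # step 1: nonparametric smoother
    a, b = t[0], t[-1]
    grid = np.linspace(a, b, 200)

    def moments(theta):
        m = []
        for j in range(1, n_test + 1):
            phi = np.sin(j * np.pi * (grid - a) / (b - a))   # test function, phi(a) = phi(b) = 0
            dphi = (j * np.pi / (b - a)) * np.cos(j * np.pi * (grid - a) / (b - a))
            # weak form: int phi' * xhat + int phi * f(xhat, theta) = 0
            m.append(np.trapz(dphi * xhat(grid) + phi * f(xhat(grid), theta), grid))
        return np.array(m)

    return least_squares(moments, theta0).x                  # step 2: match the moments

# toy example: exponential decay x' = -theta * x with theta = 1.3
rng = np.random.default_rng(1)
t = np.linspace(0, 3, 60)
y = 2.0 * np.exp(-1.3 * t) + rng.normal(0, 0.02, t.size)
print(weak_form_fit(t, y, lambda x, th: -th[0] * x, theta0=[0.5]))   # roughly [1.3]
```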
|
7 |
Channel Estimation Using Superimposed Scheme in MIMO-OFDM Systems. Li, Wei-ting, 27 August 2007
In this thesis, a channel estimation technique for Multiple Input Multiple Output - Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems is investigated. Channel estimation plays an important role in the receiver performance of MIMO-OFDM systems. The Space-Frequency Block Codes - Orthogonal Frequency Division Multiplexing (SFBC-OFDM) structure is adopted together with a superimposed training scheme to increase bandwidth efficiency. In addition to using the first-order statistics of the received signal to reduce the interference induced by the unknown data signals, the modified least-squares (MLS) estimator and a decision-feedback scheme are combined to improve the channel estimation. In contrast to conventional interference-cancellation schemes for channel estimation that adopt a decision-feedback mechanism, the structure of the SFBC is exploited to decrease the interference. Furthermore, the effects of erroneous decisions in the decision-feedback scheme on the channel estimation are also examined. Analytical results show that the proposed method achieves better performance without increasing computational complexity.
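To convey the first-order-statistics idea behind superimposed training, the following single-antenna sketch adds a known pilot on top of zero-mean data on every subcarrier and recovers the channel by time-averaging the received frequency-domain symbols. The simulation parameters are assumptions, and the sketch does not include the SFBC structure, the MLS estimator or the decision-feedback mechanism studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 64, 200                    # subcarriers, OFDM symbols averaged over
pilot_power, snr_db = 0.2, 20     # power fraction of the superimposed pilot

h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)    # 4-tap channel
H = np.fft.fft(h, K)                                               # true frequency response

qpsk = (rng.choice([-1, 1], (N, K)) + 1j * rng.choice([-1, 1], (N, K))) / np.sqrt(2)
data = np.sqrt(1 - pilot_power) * qpsk                              # zero-mean data symbols
pilot = np.sqrt(pilot_power) * np.exp(2j * np.pi * rng.random(K))   # known superimposed pilot

noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
noise = noise_std * (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K)))
Y = (data + pilot) * H + noise                                      # received symbols

H_hat = Y.mean(axis=0) / pilot            # first-order statistics: the data averages out
print("channel estimation MSE:", np.mean(np.abs(H_hat - H) ** 2))
```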
|
8 |
Likelihood ratio inference in regular and non-regular problems. Banerjee, Moulinath, January 2000
Thesis (Ph. D.)--University of Washington, 2000. / Vita. Includes bibliographical references (p. 231-234).
|
9 |
Selecting tuning parameters in minimum distance estimators. Warwick, Jane, January 2001
Thesis (Ph. D.)--Open University. BLDSC no. DXN049325.
|
10 |
Estimation of parameters in incomplete compositional data. Ngai, Hung-man, January 1900
Thesis (M. Phil.)--University of Hong Kong, 1989.
|