About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

On the relative properties of ordinary least squares estimation for the prediction problem with errors in variables /

Yum, Bong Jin, January 1981
No description available.
212

Estimating SDy for economic utility calculations : measurement validity of Schmidt-Hunter's method of direct estimation /

Wroten, Steven Phillip January 1984
No description available.
213

The Design of a Processing Element for the Systolic Array Implementation of a Kalman Filter

Condorodis, John P. 01 January 1987
The Kalman filter is an important component of optimal estimation theory. It has applications in a wide range of high-performance control systems, including navigational, fire-control, and targeting systems. The Kalman filter, however, has not been utilized to its full potential because its inherent computational intensiveness requires "off-line" processing or allows only low-bandwidth real-time applications. Recent advances in VLSI circuit technology have created the opportunity to design algorithms and data structures for direct implementation in integrated circuits. A systolic architecture is a concept that allows the construction of massively parallel systems in integrated circuits and has been utilized as a means of achieving high data rates. A systolic system consists of a set of interconnected processing elements, each capable of performing some simple operation. The design of a processing element in an orthogonal systolic architecture will be investigated using the state of the art in VLSI technology. The goal is to create a high-speed, high-precision processing element adaptable to a highly configurable systolic architecture. In order to achieve the necessary high computational throughput, the arithmetic unit of the processing element will be implemented using the Logarithmic Number System. The systolic-architecture approach will be used in an attempt to implement a Kalman filtering system with both a high sampling rate and a small package size. The design of such a Kalman filter would enable this filtering technology to be applied to the areas of process control, computer vision, and robotics.
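The predict/update recursion such a processing element must compute can be sketched in scalar form. This is a generic textbook Kalman cycle for illustration only, not the thesis's systolic or Logarithmic Number System design; the function name and noise values are assumptions:

```python
def kalman_step(x, p, z, a=1.0, q=1e-4, r=1e-2):
    """One predict/update cycle of a scalar Kalman filter.

    x, p : prior state estimate and its variance
    z    : new measurement
    a    : state-transition coefficient
    q, r : process- and measurement-noise variances (assumed values)
    """
    # Predict: propagate the state and its uncertainty forward.
    x_pred = a * x
    p_pred = a * p * a + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Filtering a roughly constant signal observed in noise:
x, p = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.98, 1.02]:
    x, p = kalman_step(x, p, z)
```

In a systolic implementation, multiply-add operations like these would be distributed across the array of processing elements; the Logarithmic Number System turns each multiplication into an addition of logarithms, which is the source of the throughput advantage the abstract refers to.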
214

Estimability and testability in linear models

Alalouf, Serge January 1975
No description available.
215

Estimation with samples drawn from different but parametrically related distributions

Fields, Raymond Ira January 1960
This dissertation discusses a method of estimation with random samples drawn from two different normal populations. The two populations may be either univariate or multivariate (provided they have the same number of variates) and are parametrically related in that their means (or mean vectors) are equal. Since the two normal populations are assumed to have a common mean (or a common mean vector), these common parameters are jointly estimated along with all the other unknown parameters. The joint estimates, called iteration estimates, are obtained by an iteration method developed for solving the likelihood equations. A detailed study is made of the joint estimation of parameters when sampling from two univariate normal populations. The iteration procedure is based on jointly estimating the common mean and the individual variances by finding a weighted mean and the individual variances about the weighted mean. The initial weighted mean is found by taking as weights the reciprocals of the estimated variances of the individual estimates of the mean. It is proved that the iteration method produces a unique set of estimates which satisfies the likelihood equations. Since this set of estimates is not always identical with the set of maximum likelihood estimates, the conditions under which the two sets may differ are established. Numerical examples illustrate the iteration technique and compare the iteration estimates with maximum likelihood estimates in the cases where they differ. Empirical sampling with small sample sizes is carried out with the aid of the IBM 650 computer to obtain information about the distribution of the iteration estimates, and also of the maximum likelihood estimates where the two differ. The experimental results indicate that the iteration estimate of the common mean tends to be normally distributed and that the iteration estimates of the individual variances are virtually unbiased.
The iteration procedure is compared with Fisher’s Method, which uses the Information Matrix, and is shown to give identical results while requiring less computation. An extension of the iteration procedure is made to the case where the samples are drawn from two bivariate normal populations with the components of the common vector of means and the elements of the individual covariance matrices being estimated jointly. For the particular case in which the individual variances within each population may be assumed equal, it is shown that a linear transformation to obtain new uncorrelated variables will materially lessen the time required for the iteration method. A numerical example is given to illustrate the iteration technique both with and without a transformation of variables and a proof is given to show that the two methods produce identical results. The iteration procedure is further extended to the case where the samples are drawn from two multivariate normal populations which have the same number of variates and joint estimates are obtained for the common vector of means and the individual covariance matrices. It is also shown that if a linear transformation can be found which gives new uncorrelated variables in each population, then transformation before iteration greatly reduces the computational labor involved in obtaining the joint estimates. / Ph. D.
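The univariate iteration described in the abstract can be sketched as follows. This is an illustrative reconstruction under stated assumptions (weights taken as n_i divided by the current variance estimates, i.e. the reciprocals of the estimated variances of the sample means), not the dissertation's exact procedure:

```python
def joint_estimate(xs, ys, tol=1e-10, max_iter=200):
    """Iteratively estimate a common mean and individual variances
    from two samples assumed to share the same population mean."""
    n1, n2 = len(xs), len(ys)
    m1, m2 = sum(xs) / n1, sum(ys) / n2
    # Initial variances about each sample's own mean.
    v1 = sum((x - m1) ** 2 for x in xs) / n1
    v2 = sum((y - m2) ** 2 for y in ys) / n2
    mu = 0.0
    for _ in range(max_iter):
        # Weight each sample mean by the reciprocal of the estimated
        # variance of that mean (v_i / n_i).
        w1, w2 = n1 / v1, n2 / v2
        mu_new = (w1 * m1 + w2 * m2) / (w1 + w2)
        # Re-estimate the individual variances about the weighted mean.
        v1 = sum((x - mu_new) ** 2 for x in xs) / n1
        v2 = sum((y - mu_new) ** 2 for y in ys) / n2
        if abs(mu_new - mu) < tol:
            mu = mu_new
            break
        mu = mu_new
    return mu, v1, v2

# Both sample means here are exactly 2.0, so the weighted mean is 2.0:
mu, v1, v2 = joint_estimate([1.0, 2.0, 3.0], [1.5, 2.5, 2.0, 2.0])
```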
216

State estimation using a multiple model likelihood weighted filter array

Wood, Eric F. 01 April 2001
No description available.
217

A computer simulation study for comparing three methods of estimating variance components

Walsh, Thomas Richard January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries
218

Estimation of growth yields in aerobic and anaerobic cultures

Oner, Mehmet Durdu January 2011
Typescript (photocopy). / Digitized by Kansas Correctional Industries
219

Statistical models for catch-at-length data with birth cohort information

Chung, Sai-ho, 鍾世豪. January 2005
Doctoral thesis (Doctor of Philosophy), Social Sciences
220

Robust covariance matrix estimation in signal processing

Mahot, Mélanie 06 December 2012
Many signal processing applications require knowledge of the covariance matrix of the received data. When it is not directly available, it is first estimated from training data. Classically, the background is assumed to be Gaussian, in which case the maximum likelihood estimator is the Sample Covariance Matrix (SCM). However, in many applications, notably with the advent of high-resolution techniques, this assumption is no longer valid. Moreover, even with Gaussian data, the SCM can be strongly influenced by disturbances (outliers, missing data, jammers) in the data. This thesis therefore considers a more general model, the elliptical distributions, which encompass a large panel of distributions; measurement campaigns have shown that they fit real data well in many applications, such as radar and hyperspectral imaging. In this context, more robust and better-adapted estimators are proposed: the M-estimators and the Fixed Point Estimator (FPE). Their performance and robustness are studied and compared with those of the SCM. It is shown that in many applications these estimators can straightforwardly replace the SCM, with better performance when the data are non-Gaussian and comparable performance when the data are Gaussian. The theoretical results developed in the thesis are then illustrated with simulations and with real data in the context of space-time adaptive processing.
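A fixed-point scatter estimator of the kind the abstract names can be sketched with a Tyler-type iteration. This is a generic illustration, not necessarily the exact FPE analyzed in the thesis; the trace normalization is one conventional choice for fixing the scale ambiguity:

```python
import numpy as np

def fixed_point_estimator(X, tol=1e-6, max_iter=100):
    """Tyler-type fixed-point scatter estimate for N samples of dimension p.

    Iterates  Sigma <- (p/N) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i),
    renormalized so that trace(Sigma) = p.
    """
    N, p = X.shape
    sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        # Quadratic form x_i^T Sigma^{-1} x_i for every sample at once.
        q = np.einsum('ni,ij,nj->n', X, inv, X)
        new = (p / N) * (X.T / q) @ X
        new *= p / np.trace(new)  # fix the scale ambiguity
        if np.linalg.norm(new - sigma, 'fro') < tol:
            return new
        sigma = new
    return sigma

# The shape of a correlated Gaussian sample is recovered up to scale:
rng = np.random.default_rng(0)
true = np.array([[2.0, 0.5], [0.5, 1.0]])
X = rng.standard_normal((5000, 2)) @ np.linalg.cholesky(true).T
shape = fixed_point_estimator(X)  # close to 2 * true / trace(true)
```

Because the per-sample weights 1 / q shrink the influence of large observations, the iteration is far less sensitive to outliers and heavy tails than the SCM, which is the robustness property the abstract describes.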
