41

Maximum a posteriori models for cortical modeling: feature detectors, topography and modularity

Weber, Cornelius. Unknown Date (PDF)
Technische Universität Berlin, Dissertation, 2001.
42

Quantitative Analyse dynamischer nichtlinearer Panelmodelle [Quantitative analysis of dynamic nonlinear panel models]

Bode, Oliver. Unknown Date (PDF)
Universität Göttingen, Dissertation, 2001.
43

Beitrag zur skalenabhängigen Erfassung teilschlagspezifischer Pflanzenschäden mit Methoden der Fernerkundung und Geoinformation [A contribution to the scale-dependent detection of sub-field-specific crop damage using remote sensing and geoinformation methods]

Voß, Kerstin. Unknown Date (PDF)
Universität Bonn, Dissertation, 2005.
44

Methods and Experiments With Bounded Tree-width Markov Networks

Liang, Percy; Srebro, Nathan. 30 December 2004
Markov trees generalize naturally to bounded tree-width Markov networks, on which exact computations can still be done efficiently. However, learning the maximum likelihood Markov network with tree-width greater than 1 is NP-hard, so we discuss a few algorithms for approximating the optimal Markov network. We present a set of methods for training a density estimator. Each method is specified by three arguments: tree-width, model scoring metric (maximum likelihood or minimum description length), and model representation (using one joint distribution or several class-conditional distributions). For these methods, we give empirical results on density estimation and classification tasks and explore the implications of these arguments.
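As a point of reference for the tree-width-1 case the abstract starts from: the maximum likelihood Markov tree is the classic Chow-Liu tree, a maximum spanning tree on pairwise empirical mutual information. The sketch below is our illustration of that base case (discrete data assumed; the function names are ours, not the authors'):

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    """Empirical mutual information of two discrete data columns."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    return sum((c / n) * np.log((c / n) / (px[a] * py[b]))
               for (a, b), c in joint.items())

def chow_liu_tree(data):
    """Edges of the maximum likelihood Markov tree (tree-width 1).

    The ML tree maximizes the sum of pairwise mutual informations,
    so it is a maximum spanning tree on the mutual-information graph.
    """
    n_vars = data.shape[1]
    w = np.zeros((n_vars, n_vars))
    for i, j in combinations(range(n_vars), 2):
        w[i, j] = mutual_information(data[:, i], data[:, j])
    # negate so the *minimum* spanning tree routine maximizes total MI
    mst = minimum_spanning_tree(-w)
    return list(zip(*mst.nonzero()))

# Toy check: X2 copies X0, so the edge (0, 2) should be in the tree.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))
X[:, 2] = X[:, 0]
print(chow_liu_tree(X))
```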
45

Estimation of long-range dependence

Vivero, Oskar. January 2010
A set of observations from a random process which exhibit correlations that decay slower than an exponential rate is regarded as long-range dependent. This phenomenon has stimulated great interest in the scientific community as it appears in a wide range of areas of knowledge. For example, this property has been observed in data pertaining to electronics, econometrics, hydrology and biomedical signals. There exist several estimation methods for finding model parameters that help explain the set of observations exhibiting long-range dependence. Among these methods, maximum likelihood is attractive, given its desirable statistical properties such as asymptotic consistency and efficiency. However, its computational complexity makes the implementation of maximum likelihood prohibitive. This thesis presents a group of computationally efficient estimators based on the maximum likelihood framework. The thesis consists of two main parts. The first part is devoted to developing a computationally efficient alternative to the maximum likelihood estimate. This alternative is based on the circulant embedding concept and it is shown to maintain the desirable statistical properties of maximum likelihood. Interesting results are obtained by analysing the circulant embedding estimate. In particular, this thesis shows that the maximum likelihood based methods are ill-conditioned; the estimators' performance will deteriorate significantly when the set of observations is corrupted by errors. The second part of this thesis focuses on developing computationally efficient estimators with improved performance under the presence of errors in the observations.
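The computational trick behind circulant embedding is that circulant matrices are diagonalized by the FFT. As a rough, self-contained illustration (our simplification, not the thesis's estimator), a Whittle-type approximate likelihood for the Hurst exponent of unit-variance fractional Gaussian noise can be evaluated in O(n log n):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fgn_acov(lags, H):
    """Autocovariance of unit-variance fractional Gaussian noise."""
    k = np.abs(lags).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H)
                  - 2 * k ** (2 * H))

def whittle_H(x):
    """Whittle-type estimate of the Hurst exponent H.

    The spectral density at the Fourier frequencies is approximated by
    an FFT of the lag-truncated autocovariance -- the same FFT
    diagonalization that circulant embedding exploits -- so each
    likelihood evaluation costs O(n log n) instead of O(n^3).
    Assumes unit-variance data; in practice the scale is profiled out.
    """
    n = len(x)
    I = np.abs(np.fft.fft(x - x.mean())) ** 2 / n    # periodogram

    def neg_whittle(H):
        r = fgn_acov(np.arange(n), H)
        c = r.copy()
        c[1:] += r[:0:-1]            # fold negative lags onto the FFT grid
        f = np.real(np.fft.fft(c))   # truncated spectral density
        f = np.maximum(f, 1e-12)
        j = np.arange(1, n)          # skip the zero frequency
        return np.sum(np.log(f[j]) + I[j] / f[j])

    return minimize_scalar(neg_whittle, bounds=(0.01, 0.99),
                           method="bounded").x

# White noise has H = 0.5; the estimate should land near it.
x = np.random.default_rng(2).standard_normal(2000)
print(whittle_H(x))
```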
46

The covariance structure of conditional maximum likelihood estimates

Strasser, Helmut. 11 1900 (PDF)
In this paper we consider conditional maximum likelihood (cml) estimates for item parameters in the Rasch model under random subject parameters. We give a simple approximation for the asymptotic covariance matrix of the cml-estimates. The approximation is stated as a limit theorem when the number of item parameters goes to infinity. The results contain precise mathematical information on the order of approximation. The results enable the analysis of the covariance structure of cml-estimates when the number of items is large. Let us give a rough picture. The covariance matrix has a dominating main diagonal containing the asymptotic variances of the estimators. These variances are almost equal to the efficient variances under ml-estimation when the distribution of the subject parameter is known. Except for very small numbers n of item parameters, the variances are hardly affected by n. The covariances are more or less negligible when the number of item parameters is large. Although this picture is intuitively not surprising, it has to be established in precise mathematical terms, which is done in the present paper. The paper is based on previous results [5] of the author concerning conditional distributions of non-identical replications of Bernoulli trials. The mathematical background is Edgeworth expansions for the central limit theorem. These previous results are the basis of approximations for the Fisher information matrices of cml-estimates. The main results of the present paper are concerned with the approximation of the covariance matrices. Numerical illustrations of the results and numerical experiments based on the results are presented in Strasser [6].
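For concreteness: the conditional likelihood conditions on each subject's raw score, which removes the subject parameters entirely, and is built from elementary symmetric functions of eps_i = exp(-beta_i). A minimal sketch of this standard construction (our illustration, not code from the paper):

```python
import numpy as np
from scipy.optimize import minimize

def esf(eps):
    """Elementary symmetric functions gamma_0..gamma_n of eps_1..eps_n."""
    gamma = np.zeros(len(eps) + 1)
    gamma[0] = 1.0
    for e in eps:
        gamma[1:] += e * gamma[:-1]      # build order r from order r-1
    return gamma

def neg_cml(beta_free, X):
    """Negative conditional log-likelihood of the Rasch model.

    Item 0 is fixed at beta = 0 for identification; X is a binary
    (subjects, items) response matrix.
    """
    beta = np.concatenate([[0.0], beta_free])
    gamma = esf(np.exp(-beta))
    r = X.sum(axis=1)                    # subject raw scores
    s = X.sum(axis=0)                    # item totals
    return s @ beta + np.log(gamma[r]).sum()

# Simulate 400 subjects x 5 items, then recover relative difficulties.
rng = np.random.default_rng(1)
true_beta = np.array([0.0, -0.5, 0.3, 0.8, -0.6])
theta = rng.standard_normal(400)         # abilities; never estimated
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - true_beta[None, :])))
X = (rng.random(p.shape) < p).astype(int)
print(minimize(neg_cml, np.zeros(4), args=(X,), method="BFGS").x)
```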
47

Introduction to fast Super-Paramagnetic Clustering

Yelibi, Lionel. 25 February 2020
We map stock market interactions to spin models to recover their hierarchical structure using a simulated annealing based Super-Paramagnetic Clustering (SPC) algorithm. This is directly compared to a modified implementation of a maximum likelihood approach to fast Super-Paramagnetic Clustering (f-SPC). The methods are first applied to standard toy test-case problems, and then to a dataset of 447 stocks traded on the New York Stock Exchange (NYSE) over 1249 days. The signal-to-noise ratio of stock market correlation matrices is briefly considered. Our results approximately recover clusters representative of standard economic sectors, as well as mixed clusters whose dynamics shed light on the adaptive nature of financial markets and raise concerns about the effectiveness of industry-based static financial market classification in the world of real-time data analytics. A key result is that the standard maximum likelihood methods are confirmed to converge to solutions within a Super-Paramagnetic (SP) phase. We use insights arising from this to discuss the implications of using a Maximum Entropy Principle (MEP) as opposed to the Maximum Likelihood Principle (MLP) as an optimization device for this class of problems.
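As a caricature of the underlying model (our toy sketch, not the SPC or f-SPC implementations): correlations become Potts couplings, and cluster labels are spin states whose energy is reduced by annealing. Genuine SPC instead samples spin-spin correlations at a superparamagnetic temperature rather than minimizing energy:

```python
import numpy as np

def potts_anneal(C, q=10, steps=40000, T0=1.0, Tf=0.01, seed=0):
    """Anneal Potts spin labels on couplings built from a correlation
    matrix C: energy H(s) = -sum_{i<j} J_ij * delta(s_i, s_j).
    A toy energy-minimizer, NOT the SPC procedure itself.
    """
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    J = C - np.eye(n)                    # drop self-coupling
    J[np.abs(J) < 0.1] = 0.0             # keep only meaningful couplings
    s = rng.integers(0, q, n)            # random initial labels
    for T in np.geomspace(T0, Tf, steps):
        i, new = rng.integers(n), rng.integers(q)
        # energy change from relabelling spin i
        dE = (J[i] * (s == s[i])).sum() - (J[i] * (s == new)).sum()
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = new
    return s

# Toy usage: two planted blocks of factor-driven "stocks".
rng = np.random.default_rng(3)
f = rng.standard_normal((2, 300))
X = np.vstack([f[0] + 0.6 * rng.standard_normal((10, 300)),
               f[1] + 0.6 * rng.standard_normal((10, 300))])
print(potts_anneal(np.corrcoef(X)))      # labels should split 10 / 10
```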
48

N-mixture models with auxiliary populations and for large population abundances

Parker, Matthew R. P. 29 April 2020
The key results of this thesis are (1) an extension of N-mixture models to incorporate the additional layer of obfuscation brought by observing counts from a related auxiliary population (rather than the target population); (2) an extension of N-mixture models to allow for grouped counts, the purpose being two-fold: to extend the applicability of N-mixtures to larger population sizes, and to allow coarse counts to be used when fitting N-mixture models; (3) a new R package allowing the easy application of the new N-mixture models; (4) a new R package for optimizing multi-parameter functions using arbitrary precision arithmetic, a necessary tool for optimizing the likelihood in large-population-abundance N-mixture models; and (5) simulation studies validating the new grouped count models and comparing them to the classic N-mixture models.
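For readers new to the model class: the basic binomial N-mixture likelihood (Royle 2004) sums a binomial detection term against a Poisson abundance prior, truncating the latent sum at a bound K. That truncation is exactly why large abundances are hard: K, and with it the cost and numerical range, must grow with the population. A minimal sketch (ours, not the thesis's R packages):

```python
import numpy as np
from scipy.stats import binom, poisson
from scipy.optimize import minimize

def nmix_negloglik(params, Y, K=200):
    """Negative log-likelihood of the basic binomial N-mixture model:
    N_i ~ Poisson(lambda), y_it | N_i ~ Binomial(N_i, p),
    with the latent sum over N_i truncated at K.
    params = (log lambda, logit p); Y is a (sites, visits) count matrix.
    """
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)                    # P(N_i = N)
    ll = 0.0
    for y in Y:                                     # one site at a time
        det = np.prod(binom.pmf(y[:, None], Ns[None, :], p), axis=0)
        ll += np.log(np.sum(prior * det))
    return -ll

# Simulate 50 sites x 4 visits, then recover lambda ~ 20 and p ~ 0.4.
rng = np.random.default_rng(7)
N = rng.poisson(20, size=50)
Y = rng.binomial(N[:, None], 0.4, size=(50, 4))
res = minimize(nmix_negloglik, x0=[np.log(10.0), 0.0],
               args=(Y,), method="Nelder-Mead")
print(np.exp(res.x[0]), 1.0 / (1.0 + np.exp(-res.x[1])))
```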
49

Logspline Density Estimation with an Application to the Study of Survival Data of Lung Cancer Patients.

Chen, Yong. 18 August 2004 (PDF)
A logspline method of estimating an unknown density function f from sample data is studied. Our approach is to use maximum likelihood estimation to estimate the unknown density from a space of linear splines with a finite number of fixed uniform knots. At the end of this thesis, the method is applied to a real survival data set of lung cancer patients.
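A minimal rendering of that idea (our sketch, not the thesis code): put the linear spline on the log-density, so the fitted density is positive by construction, and maximize the likelihood with the normalizing constant computed by quadrature:

```python
import numpy as np
from scipy.optimize import minimize

def hat_basis(x, knots):
    """Linear 'hat' basis: B_k is 1 at knot k and 0 at adjacent knots."""
    B = np.empty((len(x), len(knots)))
    for k, t in enumerate(knots):
        if k == 0:
            B[:, k] = np.interp(x, [t, knots[1]], [1.0, 0.0])
        elif k == len(knots) - 1:
            B[:, k] = np.interp(x, [knots[-2], t], [0.0, 1.0])
        else:
            B[:, k] = np.interp(x, [knots[k - 1], t, knots[k + 1]],
                                [0.0, 1.0, 0.0])
    return B

def logspline_fit(x, n_knots=8):
    """Maximum likelihood fit of f(u) = exp(sum_k c_k B_k(u)) / Z on
    [min(x), max(x)], with fixed uniform knots and Z by quadrature."""
    knots = np.linspace(x.min(), x.max(), n_knots)
    grid = np.linspace(x.min(), x.max(), 512)
    Bx, Bg = hat_basis(x, knots), hat_basis(grid, knots)

    def nll(c):
        logZ = np.log(np.trapz(np.exp(Bg @ c), grid))
        # tiny ridge pins the additive constant (the hats sum to one)
        return -(Bx @ c).sum() + len(x) * logZ + 1e-6 * c @ c

    c = minimize(nll, np.zeros(n_knots), method="BFGS").x
    dens = np.exp(Bg @ c)
    return grid, dens / np.trapz(dens, grid)

grid, dens = logspline_fit(np.random.default_rng(5).gamma(3.0, 1.0, 300))
print(np.trapz(dens, grid))              # ~1.0: density integrates to one
```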
50

Food Shelf Life: Estimation and Experimental Design

Larsen, Ross Allen Andrew. 15 May 2006 (PDF)
Shelf life is a parameter of the lifetime distribution of a food product, usually the time until a specified proportion (1-50%) of the product has spoiled according to taste. The data used to estimate shelf life typically come from a planned experiment in which sampled food items are observed at specified times. The observation times are usually selected adaptively using 'staggered sampling.' Ad hoc methods based on linear regression have been recommended for estimating shelf life, but methods based on maximizing a likelihood (MLE) have also been proposed, studied, and used. Both approaches assume the Weibull distribution. The observed lifetimes in shelf life studies are censored, a fact that the ad hoc methods largely ignore. One purpose of this project is to compare the statistical properties of the ad hoc estimators and the maximum likelihood estimator. The simulation study showed that the MLE methods have higher coverage than the regression methods, better asymptotic properties with regard to bias, and lower median squared error (mese) values, especially when shelf life is defined by smaller percentiles; thus, they should be used in practice. A genetic algorithm (Hamada et al. 2001) was used to find near-optimal sampling designs and was successfully programmed for general shelf life estimation. The genetic algorithm generally produced designs with much smaller median squared errors than the staggered design commonly used in practice, and these designs were radically different from the standard designs. Thus, the genetic algorithm may be used to plan future studies with good estimation properties.
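The censoring point is easy to make concrete: under maximum likelihood, an item still unspoiled at its last inspection contributes the Weibull survival function rather than the density, which the regression methods have no direct way to use. A sketch under simplified assumptions (right censoring only, single batch; real shelf life designs are interval-censored):

```python
import numpy as np
from scipy.optimize import minimize

def weibull_nll(params, t, observed):
    """Negative log-likelihood for right-censored Weibull lifetimes.

    t: failure time if observed == 1, else the censoring time.
    Parameterized by (log shape, log scale) to keep both positive.
    """
    k, lam = np.exp(params)
    z = t / lam
    log_pdf = np.log(k / lam) + (k - 1.0) * np.log(z) - z ** k
    log_sf = -(z ** k)                   # log of the survival function
    return -np.sum(observed * log_pdf + (1.0 - observed) * log_sf)

# Simulate spoilage times with staggered censoring, then fit.
rng = np.random.default_rng(11)
true_k, true_lam = 2.0, 30.0
t_fail = true_lam * rng.weibull(true_k, size=200)
t_cens = rng.uniform(5.0, 40.0, size=200)
t = np.minimum(t_fail, t_cens)
obs = (t_fail <= t_cens).astype(float)

res = minimize(weibull_nll, x0=np.log([1.0, np.median(t)]),
               args=(t, obs), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
# Shelf life as the 10th percentile: time until 10% of items spoil.
print(k_hat, lam_hat, lam_hat * (-np.log(0.9)) ** (1.0 / k_hat))
```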
