231

Méthodes spectrales pour l'inférence grammaticale probabiliste de langages stochastiques rationnels

Bailly, Raphael 12 December 2011
Our framework is probabilistic grammatical inference: given an unknown distribution p on a set of strings S∗, the task is to infer a probabilistic model for p from a finite sample S of observations assumed to be i.i.d. according to p. Grammatical inference focuses primarily on the structure of the model and on the convergence of the parameter estimates. The probabilistic models considered here are weighted automata (WA); the functions they model are called rational series. We first study the possibility of finding an absolute convergence criterion for such series. We then introduce an algorithm, based on spectral methods, for the inference of rational distributions (i.e., distributions modeled by a WA), and show how to adapt it to the closely related setting of distributions over trees. Finally, we investigate how to use this inference algorithm in a more statistical context, namely density estimation.
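For readers unfamiliar with the spectral approach mentioned in this abstract, the sketch below illustrates the general Hankel-matrix/SVD recipe for learning a weighted automaton from empirical string probabilities. It is a minimal illustration of the technique, not the thesis's algorithm; the function names, the choice of prefix/suffix basis, and the use of numpy are assumptions made here.

```python
# Minimal sketch of Hankel-based spectral learning of a weighted automaton (WA)
# from empirical string probabilities. Illustrative only; names, the prefix/
# suffix basis, and the use of numpy are assumptions of this sketch.
import numpy as np

def spectral_wa(prefixes, suffixes, alphabet, p_hat, rank):
    """p_hat(w) returns an empirical probability for the string w (a tuple of symbols)."""
    # Hankel block H[u, v] ~ p(uv), plus one shifted block per symbol a: H_a[u, v] ~ p(u a v).
    H = np.array([[p_hat(u + v) for v in suffixes] for u in prefixes])
    H_sig = {a: np.array([[p_hat(u + (a,) + v) for v in suffixes] for u in prefixes])
             for a in alphabet}
    h_p = np.array([p_hat(u) for u in prefixes])   # prefix probabilities p(u)
    h_s = np.array([p_hat(v) for v in suffixes])   # suffix probabilities p(v)

    # Truncated SVD of the Hankel matrix: H ~ U diag(s) Vt.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]

    # One common parametrization of the recovered WA.
    pinv = np.diag(1.0 / s) @ U.T                      # pseudo-inverse of (U diag(s))
    A = {a: pinv @ H_sig[a] @ Vt.T for a in alphabet}  # transition operators
    alpha = h_s @ Vt.T                                 # initial weights
    beta = pinv @ h_p                                  # final weights
    return alpha, A, beta

def wa_value(alpha, A, beta, w):
    """Series value on string w: alpha^T A_{w1} ... A_{wn} beta."""
    v = alpha
    for a in w:
        v = v @ A[a]
    return float(v @ beta)
```

The appeal of this construction is that a rank-r Hankel matrix of a rational series factorizes through an r-state WA, so a truncated SVD of the empirical Hankel matrix recovers parameters directly, without an explicit search over model structure.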
232

Prédiction, inférence sélective et quelques problèmes connexes

Yadegari, Iraj January 2017
We study the problem of point estimation and predictive density estimation of the mean of a selected population, obtaining new developments that include bias analysis, a decomposition of the risk, and problems with restricted parameters (Chapter 2). We propose predictive density estimators that are efficient under Kullback-Leibler and Hellinger losses (Chapter 3), improving on plug-in procedures via a dual loss and via a variance-expansion scheme. Finally (Chapter 4), we present findings on improving on the maximum likelihood estimator (MLE) of a bounded normal mean under a class of loss functions, including reflected normal loss, with implications for predictive density estimation. Namely, we give conditions on the loss and on the width of the parameter space under which the Bayes estimator with respect to the boundary uniform prior dominates the MLE.
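As background for the plug-in versus Bayes comparison described above, the following is a standard textbook formulation under Kullback-Leibler loss; the Gaussian model is an assumption of this sketch, not necessarily the exact setup of the thesis.

```latex
% Illustrative Gaussian setup (an assumption of this sketch, not necessarily
% the exact model of the thesis): observe X ~ N(theta, sigma_X^2) and estimate
% the density of a future Y ~ N(theta, sigma_Y^2) under Kullback-Leibler loss.
\[
  L_{\mathrm{KL}}(\theta,\hat q)
    = \int p_\theta(y)\,\log\frac{p_\theta(y)}{\hat q(y)}\,dy ,
  \qquad
  \hat q_{\mathrm{plug}}(y) = \mathcal{N}\bigl(y;\,\hat\theta(x),\,\sigma_Y^2\bigr),
\]
\[
  \hat q_{\pi}(y \mid x) = \int p_\theta(y)\,\pi(\theta \mid x)\,d\theta .
\]
```

The Bayes predictive density averages the sampling density over the posterior rather than plugging in a point estimate; inflating the variance of a plug-in density is one simple way to mimic that averaging, which is the spirit of the variance-expansion improvements mentioned in the abstract.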
233

A hybrid power estimation technique to improve high-level power models / Technique hybride d'estimation de puissance pour l’amélioration des modèles de puissance haut-niveau

Nocua Cifuentes, Jorge Alejandro 02 November 2016
High power consumption is a key factor hindering System-on-Chip (SoC) performance. Accurate and efficient power models have to be introduced early in the design flow, when most of the optimization potential is available; however, accuracy cannot be ensured at that stage because of the lack of detailed knowledge of the final circuit structure. The current SoC design paradigm relies on the reuse of IP (Intellectual Property) cores, for which low-level information about circuit components and structure is available. Power-estimation accuracy at the system level can therefore be improved by using this information and developing an estimation methodology that fits the power-modeling needs of IP cores. The main contribution of this thesis is the development of a Hybrid Power Estimation Technique (HPET), in which information coming from different abstraction levels is used to assess power consumption in a fast and accurate manner. HPET is based on an effective characterization methodology for the technology library and an efficient hybrid power-modeling approach. Experimental results obtained with HPET have been validated on different benchmark circuits synthesized using the 28nm "Fully Depleted Silicon On Insulator" (FDSOI) technology. They show that, on average, we can achieve up to a 70X speedup while retaining transistor-level accuracy. For both power types analyzed (instantaneous and average), HPET results correlate well with those computed by SPECTRE and PrimeTime-PX. This demonstrates that HPET is an effective technique for building power macro-models at high abstraction levels.
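For orientation, power macro-models of this kind are usually built from per-node dynamic-power contributions of the familiar alpha*C*V^2*f form. The snippet below is only a back-of-the-envelope illustration of that quantity; the function name and the example numbers are hypothetical and are not taken from HPET or the 28nm FDSOI library characterization.

```python
# Back-of-the-envelope dynamic power of a switching node, P ~ alpha * C * V^2 * f,
# the kind of per-cell quantity a library characterization flow tabulates.
# Function name and example numbers are hypothetical, not from HPET or the
# 28nm FDSOI library.
def dynamic_power(switching_activity, load_capacitance_f, vdd_v, freq_hz):
    """Average dynamic power in watts for one node."""
    return switching_activity * load_capacitance_f * vdd_v ** 2 * freq_hz

# Example: a node toggling on 20% of cycles, 2 fF load, 1.0 V supply, 1 GHz clock.
p_watts = dynamic_power(0.2, 2e-15, 1.0, 1e9)   # about 0.4 microwatts
```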
234

Parameter estimation of the bounded binomial distribution.

January 1983
by Ho Yat Fan. / Bibliography: leaf 59 / Thesis (M.Phil.) -- Chinese University of Hong Kong, 1983
235

Sex and age at death estimation from the os pubis: validation of two methods on a modern autopsy sample

Curtis, Ashley Elizabeth 12 July 2017
Estimating sex and age at death are two crucial steps in building a biological profile for a set of skeletal remains. Whether the remains are archaeological or forensic, estimating the sex and age of the individual is necessary for further analysis and interpretation. Specifically, in a medicolegal context, knowing the biological sex and approximate age of the remains assists law enforcement or government agencies in identifying unknown individuals. Since the inception of the field of forensic anthropology, practitioners have been developing methods to perform these tasks. It is crucial that these methods be consistent, repeatedly tested, validated, and improved, both to conform to Daubert (1993) standards and to ensure that they are accurate and applicable to modern forensic cases. The present study was performed to validate the efficacy of the method for estimating sex from the os pubis originally proposed by Klales et al. (2012), as well as the efficacy of the "transition analysis" method for estimating age, originally outlined by Boldsen et al. (2002). Given the recent popularity of these methods for building biological profiles in forensic cases, it is necessary to establish error rates on a large, modern American autopsy sample. These two methods are not only widely used but are also being taught to students in training. Their application relies on a logistic regression model created by Klales et al. (2012) to process ordinal scores, and on the Bayesian statistics software program ADBOU, created to process data collected using the method of Boldsen et al. (2002). These statistical approaches to age estimation are relatively young compared with earlier methods developed for the same purpose. The new generation of forensic anthropologists is responsible for objectively critiquing and validating the methods being disseminated by their professors and senior practitioners; the goal of the present study is to do just that. A skeletal reference sample of 630 pubic bones, all removed from modern autopsy cases and housed at the Maricopa County Forensic Science Center in Phoenix, Arizona, was used for data collection. Each pubic bone was assessed and scored according to the instructions in the source materials for each method: the Klales et al. (2012) paper for sex estimation, and the UTK Data Collection Procedures for Forensic Skeletal Material 2.0 for age estimation (Langley et al. 2016). Additionally, the observers recorded their "gestalt" estimates of sex using the Phenice (1969) system, as well as Brooks and Suchey (1990) and Hartnett (2010a) phases for each pubis. Demographic information labels were hidden, and the collection's demographic information was not viewed until data collection was complete. The null hypothesis of the present study is that both methods (the Klales et al. (2012) method and the "transition analysis" method of Boldsen et al. (2002)) perform as well as they did in the original studies; the alternative hypothesis is that they do not achieve the accuracy rates reported in the original studies. Statistical analysis of the data indicates that there is sufficient evidence to reject the null hypothesis as it applies to the Klales et al. (2012) method.
The classification accuracies achieved by applying the logistic regression equation to the sample of pubic bones were significantly lower than reported in the original study (86.2%), averaging around 70% between observers. Both intraobserver and interobserver agreement were only moderate for this method. Asymmetry was also observed in some individuals, producing differing estimates of sex when the left and right pubes were scored separately. When the Boldsen et al. (2002) method and the ADBOU software package were used on the pubic symphyseal components alone to estimate age, the method performed reasonably well: the majority (about 82%) of individuals had actual ages at death that fell within the predicted range produced by the statistical analysis. Most of the symphyseal component scores showed moderate to good interobserver agreement, and the maximum-likelihood (point) estimate of age at death predicted by the software correlated moderately well with the individual's actual age at death. These methods did not perform as well as reported in the original studies, and they should be further validated and recalibrated to improve their accuracy and reliability.
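To make the "logistic regression model ... to process ordinal scores" concrete, the sketch below shows how three ordinal pubic-trait scores can be combined into a sex classification. The intercept and coefficients are placeholders chosen for illustration only; they are not the published Klales et al. (2012) values, and the trait names follow the Phenice (1969) characters scored by that method.

```python
# Illustrative-only sketch of combining ordinal pubic-trait scores in a logistic
# regression for sex estimation, in the spirit of Klales et al. (2012). The
# intercept and coefficients are placeholders, NOT the published values.
import math

def sex_from_pubic_traits(ventral_arc, subpubic_contour, ischiopubic_ramus,
                          intercept=-16.0, b1=2.7, b2=1.2, b3=1.5):
    """Each trait is an ordinal score from 1 (most female-like) to 5 (most male-like)."""
    logit = intercept + b1 * ventral_arc + b2 * subpubic_contour + b3 * ischiopubic_ramus
    p_male = 1.0 / (1.0 + math.exp(-logit))
    return ("male" if p_male >= 0.5 else "female"), p_male
```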
236

Spacecraft Attitude Estimation Integrating the Q-Method into an Extended Kalman Filter

Ainscough, Thomas 16 September 2013
A new algorithm is proposed that smoothly integrates the nonlinear estimation of the attitude quaternion using Davenport's q-method with the estimation of non-attitude states within the framework of an extended Kalman filter. A modification of the q-method and of the associated covariance analysis is derived to include an a priori attitude estimate. The non-attitude states are updated from the nonlinear attitude estimate using linear optimal Kalman filter techniques. The proposed filter is compared to existing methods and is shown to agree with the Sequential Optimal Attitude Recursion filter to second order in the attitude update and exactly in the non-attitude state update. Monte Carlo analysis is used in numerical simulations to demonstrate the validity of the proposed approach. The filter successfully estimates the nonlinear attitude and the non-attitude states in a single Kalman filter without the need for iterations.
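As a reference point for the attitude step described above, the following is a minimal sketch of Davenport's q-method for Wahba's problem (finding the quaternion that best aligns weighted vector observations). It shows only the classical batch solution; the thesis's contribution of folding an a priori attitude and covariance into this step inside an EKF is not reproduced here, and the quaternion convention (vector part first) is an assumption of this sketch.

```python
# Minimal sketch of Davenport's q-method for Wahba's problem (batch attitude
# from weighted vector observations). Illustrative only; the thesis folds an
# a priori attitude and covariance into this step inside an EKF, which is not
# reproduced here. Quaternion convention: vector part first, scalar part last.
import numpy as np

def davenport_q_method(b_vecs, r_vecs, weights):
    """b_vecs: unit vectors observed in the body frame (N x 3);
    r_vecs: the same directions in the reference frame (N x 3);
    weights: non-negative observation weights (N,)."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])

    # Davenport's K matrix; the optimal quaternion maximizes q^T K q.
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma

    # Eigenvector associated with the largest eigenvalue of K.
    eigvals, eigvecs = np.linalg.eigh(K)
    q = eigvecs[:, np.argmax(eigvals)]
    return q / np.linalg.norm(q)
```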
237

Nonlinear Estimation for Model Based Fault Diagnosis of Nonlinear Chemical Systems

Qu, Chunyan December 2009
Nonlinear estimation techniques play an important role in process monitoring since some states and most parameters cannot be measured directly. Many techniques are available for nonlinear state and parameter estimation, e.g., the extended Kalman filter (EKF), the unscented Kalman filter (UKF), particle filtering (PF), and moving horizon estimation (MHE); however, many issues related to these techniques remain to be solved. This dissertation discusses three important topics in nonlinear estimation: the application of unscented Kalman filters, the improvement of moving horizon estimation via computation of the arrival cost, and different implementations of extended Kalman filters. First, the use of several estimation algorithms, such as the linearized Kalman filter (LKF), EKF, UKF, and MHE, is investigated for nonlinear systems, with special emphasis on the UKF as it is a relatively new technique. Detailed case studies show that the UKF has advantages over the EKF for highly nonlinear unconstrained estimation problems, while MHE performs better for systems with constraints. Moving horizon estimation alleviates the computational burden of solving a full-information estimation problem by considering a finite horizon of the measurement data; however, determining the arrival cost is non-trivial. A commonly used approach for computing the arrival cost is to use a first-order Taylor series approximation of the nonlinear model and then apply an extended Kalman filter. The second contribution of this dissertation is an approach to compute the arrival cost for moving horizon estimation based on an unscented Kalman filter. Such a moving horizon estimator is found to perform better in some cases than one based on an extended Kalman filter, and it is a promising alternative for approximating the arrival cost for MHE. Many comparative studies, often based on simulation results, between extended Kalman filters and other estimation methodologies such as moving horizon estimation, the unscented Kalman filter, or particle filtering have been published over the last few years. However, the results returned by the extended Kalman filter depend on the algorithm used for its implementation, and some implementations may lead to inaccurate results. To address this point, this dissertation investigates several different algorithms for implementing extended Kalman filters. Advantages and drawbacks of the different EKF implementations are discussed in detail and illustrated in comparative simulation studies. Predicting the covariance matrix continuously results in an accurate EKF implementation; evaluating the covariance matrix at discrete times can also be applied. Good performance can be expected if the covariance matrix is obtained by integrating the continuous-time equation or if the sensitivity equation is used for computing the Jacobian matrix.
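Since the dissertation leans heavily on the unscented Kalman filter, a minimal sketch of the unscented transform at its core may help: a Gaussian is propagated through a nonlinear map using 2n+1 sigma points instead of a Jacobian-based linearization. The scaled sigma-point form and the default parameters below are common textbook choices, not the dissertation's code.

```python
# Minimal sketch of the unscented transform at the core of the UKF (scaled
# sigma-point form). Textbook defaults; not the dissertation's implementation.
import numpy as np

def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f using 2n+1
    sigma points; returns the transformed mean and covariance."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)

    # Sigma points: the mean, plus/minus each column of the scaled matrix square root.
    sigmas = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])

    # Standard weights for the mean and covariance reconstructions.
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    diffs = ys - y_mean
    y_cov = (wc[:, None] * diffs).T @ diffs
    return y_mean, y_cov
```

In a UKF, this transform replaces both the state prediction and the measurement prediction steps of the EKF, which is why no Jacobians are required.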
238

Robustness analysis of linear estimators

Tayade, Rajeshwary 30 September 2004
Robustness of a system has been defined in various ways, and considerable work has been done on modeling system robustness, but quantifying or measuring robustness has always been difficult. In this research we consider a simple system, a linear estimator, and model its performance and robustness geometrically, in a way that admits analysis using the differential-geometric concepts of slope and curvature. We compare two different measures of curvature, namely the curvature along the direction of maximum slope of a surface and the square root of the absolute value of the sectional curvature of a surface, and examine their values to see whether the two can be used interchangeably when understanding or measuring system robustness. In this process we have worked through two different examples and taken readings at many points to determine whether the two curvatures are consistent with each other.
239

Application des méthodes d'approximations stochastiques à l'estimation de la densité et de la régression

Slaoui, Yousri 18 December 2006
L'objectif de cette thèse est d'appliquer les méthodes d'approximations stochastiques à l'estimation de la densité et de la régression. Dans le premier chapitre, nous construisons un algorithme stochastique à pas simple qui définit toute une famille d'estimateurs récursifs à noyau d'une densité de probabilité. Nous étudions les différentes propriétés de cet algorithme. En particulier, nous identifions deux classes d'estimateurs; la première correspond à un choix de pas qui permet d'obtenir un risque minimal, la seconde une variance minimale. Dans le deuxième chapitre, nous nous intéressons à l'estimateur proposé par Révész (1973, 1977) pour estimer une fonction de régression r:x-> E[Y|X=x]. Son estimateur r_n, construit à l'aide d'un algorithme stochastique à pas simple, a un gros inconvénient: les hypothèses sur la densité marginale de X nécessaires pour établir la vitesse de convergence de r_n sont beaucoup plus fortes que celles habituellement requises pour étudier le comportement asymptotique d'un estimateur d'une fonction de régression. Nous montrons comment l'application du principe de moyennisation des algorithmes stochastiques permet, tout d'abord en généralisant la définition de l'estimateur de Révész, puis en moyennisant cet estimateur généralisé, de construire un estimateur récursif br_n qui possède de bonnes propriétés asymptotiques. Dans le troisième chapitre, nous appliquons à nouveau les méthodes d'approximation stochastique à l'estimation d'une fonction de régression. Mais cette fois, plutôt que d'utiliser des algorithmes stochastiques à pas simple, nous montrons comment les algorithmes stochastiques à pas doubles permettent de construire toute une classe d'estimateurs récursifs d'une fonction de régression, et nous étudions les propriétés asymptotiques de ces estimateurs. Cette approche est beaucoup plus simple que celle du deuxième chapitre: les estimateurs construits à l'aide des algorithmes à pas doubles n'ont pas besoin d'être moyennisés pour avoir les bonnes propriétés asymptotiques.
240

Estimating Signal Features from Noisy Images with Stochastic Backgrounds

Whitaker, Meredith Kathryn January 2008 (has links)
Imaging is often used in scientific applications as a measurement tool. The location of a target, the brightness of a star, and the size of a tumor are all examples of object features that are sought in various imaging applications. A perfect measurement of these quantities from image data is impossible because of, most notably, detector noise fluctuations, finite resolution and sensitivity of the imaging instrument, and obscuration by undesirable object structures. For these reasons, sophisticated image-processing techniques are designed to treat images as random variables; quantities calculated from an image are subject to error and fluctuation, which is why they are called estimates of object features. This research focuses on estimator error for tasks common to imaging applications. Computer simulations of imaging systems are employed to compare the estimates to the true values. These computations allow for algorithm performance tests and subsequent development. Estimating the location, size, and strength of a signal embedded in a background structure from noisy image data is the basic task of interest. The estimation task's degree of difficulty is adjusted to discover the simplest data processing necessary to yield successful estimates. Even when using an idealized imaging model, linear Wiener estimation was found to be insufficient for estimating signal location and shape. These results motivated the investigation of more complex data processing. A new method (named the scanning-linear estimator because it maximizes a linear functional) is successful in cases where linear estimation fails. This method has also demonstrated positive results when tested in realistic simulations of tomographic SPECT imaging systems. A comparison with a model of current clinical estimation practices found that the scanning-linear method offers substantial gains in performance.
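To convey what "maximizes a linear functional" means in practice, the sketch below scans candidate signal locations and, at each one, evaluates a prewhitened matched-filter score (a linear functional of the image data), keeping the maximizer. This is a simplified stand-in for the idea; the exact functional and the treatment of the stochastic background in the dissertation are not reproduced, and all names and the -1/2 template-energy term are assumptions of this sketch.

```python
# Simplified stand-in for a "scan then maximize a linear functional" location
# estimate: at each candidate position, evaluate a prewhitened matched-filter
# score of the image data and keep the maximizer. The exact functional and the
# background model of the dissertation are not reproduced; all names and the
# -1/2 template-energy term are assumptions of this sketch.
import numpy as np

def scanning_linear_location(image, template_at, candidates, cov_inv):
    """image: flattened noisy image g; template_at(c): mean signal for candidate
    location c; cov_inv: inverse covariance of background plus noise."""
    best_c, best_score = None, -np.inf
    for c in candidates:
        s = template_at(c)
        # Linear functional of the data (matched filter), minus a template-energy term.
        score = s @ cov_inv @ image - 0.5 * s @ cov_inv @ s
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```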
