41

Stochastic routing models in sensor networks

Keeler, Holger Paul January 2010 (has links)
Sensor networks are an evolving technology that promises numerous applications. The random and dynamic structure of sensor networks has motivated the suggestion of greedy data-routing algorithms.

In this thesis, stochastic models are developed to study the advancement of messages under greedy routing in sensor networks. A model framework based on homogeneous spatial Poisson processes is formulated and examined to give a better understanding of the stochastic dependencies arising in the system. The effects of the model assumptions and the inherent dependencies are discussed and analyzed. A simple power-saving sleep scheme is included, and its effects on the local node density are addressed to reveal that it reduces one of the dependencies in the model.

Single-hop expressions describing the advancement of messages are derived, and asymptotic expressions for the hop-length moments are obtained. Expressions for the distribution of the multihop advancement of messages are also derived. These expressions involve high-dimensional integrals, which are evaluated with quasi-Monte Carlo integration methods; an importance-sampling function is derived to speed these methods up. The subsequent results agree extremely well with those obtained via routing simulations. A renewal-process model is proposed for multihop advancements and is justified under certain assumptions.

The model framework is extended by incorporating a spatially dependent node density, inversely proportional to the distance to the sink. The aim of this extension is to demonstrate that an inhomogeneous Poisson process can be used to model a sensor network with spatially dependent node density. Elliptic integrals and asymptotic approximations are used to describe the random behaviour of hops. The final model extension incorporates random transmission radii, whose effects are discussed and analyzed. The thesis concludes with future research tasks and directions.
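The quasi-Monte Carlo evaluation mentioned above can be illustrated with a small sketch. The integrand below is a hypothetical smooth stand-in, not the thesis's routing model; the point is only that a scrambled Sobol' sequence typically reduces the integration error relative to plain Monte Carlo on a high-dimensional cube.

```python
# A minimal sketch of quasi-Monte Carlo integration with a Sobol' sequence,
# standing in for the thesis's high-dimensional multihop-advancement integrals.
import math
import numpy as np
from scipy.stats import qmc

d, n = 6, 2**12                          # dimension; sample size (power of 2 for Sobol')

def f(x):
    # Smooth test integrand over the unit cube [0, 1]^d.
    return np.exp(-np.sum(x**2, axis=1))

rng = np.random.default_rng(0)
mc_estimate = f(rng.random((n, d))).mean()           # plain Monte Carlo

sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_estimate = f(sobol.random(n)).mean()             # quasi-Monte Carlo

exact = (math.sqrt(math.pi) / 2 * math.erf(1.0))**d  # closed form for this test case
print(f"MC:  {mc_estimate:.6f}  error {abs(mc_estimate - exact):.2e}")
print(f"QMC: {qmc_estimate:.6f}  error {abs(qmc_estimate - exact):.2e}")
```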
42

Kombinování diskrétních pravděpodobnostních rozdělení pomocí křížové entropie pro distribuované rozhodování / Cross-entropy based combination of discrete probability distributions for distributed decision making

Sečkárová, Vladimíra January 2015 (has links)
Title: Cross-entropy based combination of discrete probability distributions for distributed decision making
Author: Vladimíra Sečkárová (seckarov@karlin.mff.cuni.cz)
Department: Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University in Prague
Supervisor: Ing. Miroslav Kárný, DrSc., The Institute of Information Theory and Automation of the Czech Academy of Sciences (school@utia.cas.cz)

Abstract: In this work we propose a systematic way to combine discrete probability distributions based on decision-making theory and information theory, namely the cross-entropy (also known as the Kullback-Leibler (KL) divergence). The optimal combination is a probability mass function minimizing the conditional expected KL divergence. The expectation is taken with respect to a probability density function that also minimizes the KL divergence under problem-reflecting constraints. Although the combination is derived for the case when the sources provide probabilistic information on a common support, it can be applied to other types of information via a proposed transformation and/or extension. The discussion covers the proposed combining and the sequential processing of available data, duplicate data, influence...
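For intuition, the two classical KL-minimizing pooling rules for discrete distributions on a common support look like the sketch below. The thesis's combination, which minimizes a conditional *expected* KL divergence under constraints, is more elaborate than either; the weights here are hypothetical.

```python
# A minimal sketch of KL-based pooling of discrete pmfs on a common support.
# These are the two textbook KL-minimizing pools, not the thesis's construction.
import numpy as np

def linear_pool(pmfs, weights):
    # argmin_q sum_i w_i * KL(p_i || q) is the weighted arithmetic mean.
    return np.average(pmfs, axis=0, weights=weights)

def geometric_pool(pmfs, weights):
    # argmin_q sum_i w_i * KL(q || p_i) is the normalized weighted geometric mean.
    q = np.exp(np.average(np.log(pmfs), axis=0, weights=weights))
    return q / q.sum()

sources = np.array([[0.6, 0.3, 0.1],   # pmfs reported by three sources
                    [0.5, 0.4, 0.1],
                    [0.2, 0.5, 0.3]])
w = np.array([0.5, 0.3, 0.2])          # hypothetical source weights
print(linear_pool(sources, w))         # -> [0.49 0.37 0.14]
print(geometric_pool(sources, w))
```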
43

Estimation d'une densité prédictive avec information additionnelle / Estimation of a predictive density with additional information

Sadeghkhani, Abdolnasser January 2017 (has links)
In the context of Bayesian theory and decision theory, the estimation of a predictive density of a random variable is an important and challenging problem. Typically, in a parametric framework, there exists additional information that can be interpreted as constraints. This thesis deals with strategies and improvements that take the additional information into account in order to obtain effective, and sometimes better performing, predictive densities than others in the literature. The results apply to normal models with a known or unknown variance. We describe Bayesian predictive densities for Kullback-Leibler, Hellinger, and reverse Kullback-Leibler losses, as well as for α-divergence losses, and establish links with skew-normal densities. We obtain dominance results using several techniques, including variance expansion, dual loss functions in point estimation, restricted parameter space estimation, and Stein estimation. Finally, we obtain a general result for the Bayesian estimation of a ratio of two exponential family densities.
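As a concrete illustration of the known-variance Gaussian case, the following sketch (hypothetical parameters, flat prior assumed) contrasts the plug-in predictive density with the classical variance-inflated Bayes predictive density under KL loss.

```python
# A minimal sketch (normal model, known variance, flat prior) contrasting the
# plug-in predictive density N(xbar, sigma^2) with the Bayes predictive density
# under Kullback-Leibler loss, which inflates the variance by (1 + 1/n).
import numpy as np

def kl_normal(m0, s0, m1, s1):
    # KL( N(m0, s0^2) || N(m1, s1^2) ) in closed form.
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

rng = np.random.default_rng(1)
theta, sigma, n = 2.0, 1.0, 5                 # hypothetical true parameters
xbar = rng.normal(theta, sigma, size=n).mean()

print("plug-in loss:", kl_normal(theta, sigma, xbar, sigma))
print("Bayes   loss:", kl_normal(theta, sigma, xbar, sigma * np.sqrt(1 + 1/n)))
# Dominance holds for the *expected* KL loss over repeated samples,
# not necessarily for any single realization.
```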
44

Discrepancy-based algorithms for best-subset model selection

Zhang, Tao 01 May 2013 (has links)
The selection of a best-subset regression model from a candidate family is a common problem that arises in many analyses. In best-subset model selection, we consider all possible subsets of regressor variables; thus, numerous candidate models may need to be fit and compared. One of the main challenges of best-subset selection arises from the size of the candidate model family: specifically, the probability of selecting an inappropriate model generally increases as the size of the family increases. For this reason, it is usually difficult to select an optimal model when best-subset selection is attempted based on a moderate to large number of regressor variables.

Model selection criteria are often constructed to estimate discrepancy measures used to assess the disparity between each fitted candidate model and the generating model. The Akaike information criterion (AIC) and the corrected AIC (AICc) are designed to estimate the expected Kullback-Leibler (K-L) discrepancy. For best-subset selection, both AIC and AICc are negatively biased, and the use of either criterion will lead to overfitted models. To correct for this bias, we introduce a criterion AICi, which has a penalty term evaluated from Monte Carlo simulation. A multistage model selection procedure AICaps, which utilizes AICi, is proposed for best-subset selection.

In the framework of linear regression models, the Gauss discrepancy is another frequently applied measure of proximity between a fitted candidate model and the generating model. Mallows' conceptual predictive statistic (Cp) and the modified Cp (MCp) are designed to estimate the expected Gauss discrepancy. For best-subset selection, Cp and MCp exhibit negative estimation bias. To correct for this bias, we propose a criterion CPSi that again employs a penalty term evaluated from Monte Carlo simulation. We further devise a multistage procedure, CPSaps, which selectively utilizes CPSi.

In this thesis, we consider best-subset selection in two different modeling frameworks: linear models and generalized linear models. Extensive simulation studies are compiled to compare the selection behavior of our methods and other traditional model selection criteria. We also apply our methods to a model selection problem in a study of bipolar disorder.
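The baseline being improved on here, best-subset search scored by the analytic AICc rather than the thesis's simulation-based AICi penalty, can be sketched as follows (data sizes and the true subset are illustrative).

```python
# A minimal sketch of best-subset linear-model selection scored by AICc,
# the analytic baseline whose penalty the thesis's AICi replaces with one
# evaluated from Monte Carlo simulation.
from itertools import combinations
import numpy as np

def aicc(y, X):
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    p = k + 1                                   # coefficients + error variance
    aic = n * np.log(rss / n) + 2 * p           # Gaussian log-likelihood form
    return aic + 2 * p * (p + 1) / (n - p - 1)  # small-sample correction

rng = np.random.default_rng(0)
n, m = 50, 6
X = rng.normal(size=(n, m))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)  # true subset {0, 2}

best = min(
    (s for r in range(1, m + 1) for s in combinations(range(m), r)),
    key=lambda s: aicc(y, np.column_stack([np.ones(n), X[:, list(s)]])),
)
print("selected subset:", best)
```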
45

Cellular diagnostic systems using hidden Markov models

Mohammad, Maruf H. 29 November 2006 (has links)
Radio frequency system optimization and troubleshooting remains one of the most challenging aspects of working in a cellular network. To stay competitive, cellular providers continually monitor the performance of their networks and use this information to determine where to improve or expand services. As a result, operators are saddled with the task of wading through overwhelmingly large amounts of data in order to troubleshoot system problems. Part of the difficulty of this task is that for many complicated problems, such as hand-off failure, clues about the cause of the failure are hidden deep within the statistics of underlying dynamic physical phenomena like fading, shadowing, and interference. In this research we propose that hidden Markov models (HMMs) be used to infer signature statistics about the nature and sources of faults in a cellular system by fitting models to various time-series data measured throughout the network. By including HMMs in the network management tool, a provider can explore the statistical relationships between the channel dynamics endemic to a cell and its resulting performance. This research effort also includes a new distance measure between a pair of HMMs that approximates the Kullback-Leibler divergence (KLD). Since there is no closed-form solution for the KLD between HMMs, the proposed analytical expression is very useful in classification and identification problems. A novel HMM-based position location technique is also introduced that may be very useful for applications involving cognitive radios. / Ph. D.
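Because no closed form exists for the KLD between two HMMs, the usual baseline (which an analytical approximation like the one proposed here is designed to avoid) is a Monte Carlo estimate: sample a long observation sequence from one model and compare per-symbol log-likelihoods under both. A sketch with toy two-state models, not fitted network data:

```python
# A minimal sketch of the Monte Carlo approximation of the KLD rate between
# two discrete-observation HMMs: D = (1/T) [log P(O|HMM1) - log P(O|HMM2)],
# with O sampled from HMM1.
import numpy as np

def sample_hmm(pi, A, B, T, rng):
    obs, s = [], rng.choice(len(pi), p=pi)
    for _ in range(T):
        obs.append(rng.choice(B.shape[1], p=B[s]))  # emit from current state
        s = rng.choice(len(pi), p=A[s])             # then transition
    return np.array(obs)

def log_likelihood(obs, pi, A, B):
    # Forward algorithm with per-step normalization for numerical stability.
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

rng = np.random.default_rng(0)
pi = np.array([0.6, 0.4])
A1 = np.array([[0.9, 0.1], [0.2, 0.8]]); B1 = np.array([[0.8, 0.2], [0.3, 0.7]])
A2 = np.array([[0.7, 0.3], [0.3, 0.7]]); B2 = np.array([[0.6, 0.4], [0.4, 0.6]])

T = 20000
obs = sample_hmm(pi, A1, B1, T, rng)
kld_rate = (log_likelihood(obs, pi, A1, B1) - log_likelihood(obs, pi, A2, B2)) / T
print(f"estimated KLD rate: {kld_rate:.4f} nats/symbol")
```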
46

Bayesian optimal design of experiments for expensive black-box functions under uncertainty

Piyush Pandita (6561242) 10 June 2019 (has links)
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions or inputs for computer simulations in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material that has great applicability. One might also be interested in accurately modeling and analyzing a simulation of a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints, which gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations under aleatory and epistemic uncertainty needs to be treated in a Bayesian way. The aim of this thesis is to extend the state of the art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
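As a toy illustration of Bayesian experimental design under uncertainty, the sketch below estimates the expected information gain (EIG) of a design d by nested Monte Carlo for a one-parameter linear-Gaussian model standing in for an expensive black-box simulator; this is a standard estimator in the field, not the machinery developed in the thesis.

```python
# A minimal sketch of a nested Monte Carlo EIG estimator for the toy model
# y ~ N(theta * d, 1) with prior theta ~ N(0, 1); the true EIG here is
# 0.5 * log(1 + d^2), so larger |d| is a more informative design.
import numpy as np

def eig(d, n_outer=2000, n_inner=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_outer)           # draws from the prior
    y = theta * d + rng.normal(size=n_outer)   # simulated observations
    log_lik = -0.5 * (y - theta * d) ** 2      # log N(y | theta*d, 1), up to a constant
    theta_in = rng.normal(size=n_inner)
    # log marginal log p(y), approximated by an inner prior average
    # (the additive normal constant cancels against log_lik in the difference)
    log_marg = np.array([
        np.log(np.mean(np.exp(-0.5 * (yi - theta_in * d) ** 2))) for yi in y
    ])
    return np.mean(log_lik - log_marg)

for d in (0.5, 1.0, 2.0):
    print(d, round(eig(d), 3))   # EIG grows with |d| for this toy model
```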
47

Prise en compte métrologique de la couleur dans un contexte de classification et d'indexation / Taking colour metrologically into account for classification and image retrieval

Chatoux, Hermine 21 May 2019 (has links)
This thesis addresses the correct and complete processing of colour under metrological constraints. The lack of suitable approaches justified reformulating the main image-processing tools: the gradient, and the detection and description of key points. The proposed approaches are generic: independent of the number of acquisition channels (from colour to hyperspectral) and of the spectral range considered, and taking into account the spectral sensitivity curves of the sensor or of the eye.

The full-vector gradient arises from this metrological objective. A proof of concept is carried out on colour, multispectral, and hyperspectral images. The extension developed for the analysis of visual deficiency also opens many interesting perspectives for the study of the human visual system. This gradient is at the heart of a proposed key-point detector, which is likewise generic. We show the necessity of a mathematically valid choice of distance between features and the importance of the coherence of the feature/distance pair, and complete the work with such a pair: RC2O with a Kullback-Leibler divergence based on colour differences.

For each development, we propose objective validation protocols linked to synthetic image generators exploring as much spatio-chromatic complexity as possible. Our hypothesis is that the difficulty of extracting the gradient or the key points is linked to the complexity of discriminating the colour distributions in the processing area. We also compare our propositions with the usual approaches of the domain.
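For orientation, a channel-count-independent gradient in the spirit described, here the classical Di Zenzo structure-tensor construction rather than the thesis's full-vector gradient with sensitivity curves, can be sketched as follows.

```python
# A minimal sketch of a channel-count-independent gradient for colour or
# multispectral images via the Di Zenzo structure tensor; the thesis's
# full-vector gradient additionally weights by spectral sensitivity curves.
import numpy as np

def multichannel_gradient(img):
    """img: (H, W, C) array; works for RGB, multi- or hyperspectral alike."""
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    # Structure tensor entries, summed over all C channels.
    exx = np.sum(gx * gx, axis=2)
    eyy = np.sum(gy * gy, axis=2)
    exy = np.sum(gx * gy, axis=2)
    # Largest eigenvalue of [[exx, exy], [exy, eyy]] = squared gradient strength.
    tr, det = exx + eyy, exx * eyy - exy ** 2
    lam = 0.5 * (tr + np.sqrt(np.maximum(tr ** 2 - 4 * det, 0.0)))
    return np.sqrt(lam)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 31))           # e.g. a 31-band hyperspectral patch
print(multichannel_gradient(img).shape)  # (64, 64)
```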
48

有影響力自變數的偵測 / Detection of influential explanatory variables

盧惟真 Unknown Date (has links)
In a linear model with multiple explanatory variables, if adding or deleting certain explanatory variables greatly affects the estimates, the estimated distributions, or the posterior distributions of the other parameters, it is necessary to raise a warning and carry out further analysis. Besides the Cook distance and the AP statistic proposed by Schall and Dunne (1990) for detecting such influential variables, this thesis proposes a method based on the symmetric Kullback-Leibler divergence: the difference between the estimated distributions of the parameters before and after an explanatory variable is added serves as an index of that variable's influence. From a Bayesian point of view, the degree of difference between the posterior distributions of the parameters before and after the variable is added is likewise used to detect influential explanatory variables. In addition, the thesis explores the relationship between the symmetric Kullback-Leibler divergence and collinearity among the explanatory variables.
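The before/after comparison can be illustrated with the closed-form KL divergence between Gaussian approximations of the parameter distributions; large symmetric divergence flags the added variable as influential. The posterior summaries below are hypothetical.

```python
# A minimal sketch of the symmetric Kullback-Leibler divergence between two
# Gaussian (approximate) distributions of the common regression coefficients,
# before and after an extra explanatory variable is added.
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    k = len(m0)
    S1inv = np.linalg.inv(S1)
    dm = m1 - m0
    return 0.5 * (np.trace(S1inv @ S0) + dm @ S1inv @ dm - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def symmetric_kl(m0, S0, m1, S1):
    return kl_gauss(m0, S0, m1, S1) + kl_gauss(m1, S1, m0, S0)

# Hypothetical distributions of the common coefficients before / after.
m_before = np.array([1.0, -0.5]); S_before = np.array([[0.04, 0.00], [0.00, 0.05]])
m_after  = np.array([0.6, -0.1]); S_after  = np.array([[0.09, 0.03], [0.03, 0.08]])
print(symmetric_kl(m_before, S_before, m_after, S_after))
```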
49

Sélection de modèles semi-paramétriques

Liquet, Benoît 11 December 2002 (has links) (PDF)
This thesis develops model selection methods for applications in biostatistics, particularly in the medical field. In the first part, we propose a method and a program for correcting the significance level of a test when several codings of an explanatory variable are tried. This work is carried out in the framework of logistic regression and applied to data on the relationship between cholesterol and dementia. The second part of the thesis is devoted to the development of a general information criterion for selecting an estimator from a family of semi-parametric estimators. The proposed criterion is based on a bootstrap estimate of the Kullback-Leibler information. We then apply this criterion to modelling the effect of asbestos on the risk of mesothelioma, and we compare this approach with the Birgé-Massart selection method. Finally, the third part presents a selection criterion in the presence of incomplete data. The proposed criterion is an extension of the criterion developed in the second part. Built on the expectation of the observed log-likelihood, it makes it possible, in particular, to select the smoothing parameter in smooth estimation of the hazard function and to choose between stratified models and proportional hazards models. We notably applied this method to modelling the effect of sex and education level on the risk of dementia.
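A bootstrap estimate of a Kullback-Leibler-based criterion can be sketched in the spirit of EIC-type corrections: replace the analytic AIC penalty with a bootstrap estimate of the optimism of the refitted model. The sketch assumes a simple normal model, not the thesis's semi-parametric estimators.

```python
# A minimal sketch of a bootstrap (EIC-style) estimate of the KL-based bias
# correction, compared with the analytic AIC penalty, for a normal model.
import numpy as np

def loglik(data, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (data - mu) ** 2 / (2 * sigma**2))

rng = np.random.default_rng(0)
y = rng.normal(1.0, 2.0, size=40)
mu_hat, sig_hat = y.mean(), y.std()      # maximum-likelihood fit

bias = []
for _ in range(500):                     # nonparametric bootstrap
    yb = rng.choice(y, size=len(y), replace=True)
    mb, sb = yb.mean(), yb.std()
    # optimism of the refitted model: evaluated on y* versus on y
    bias.append(loglik(yb, mb, sb) - loglik(y, mb, sb))

eic = -2 * loglik(y, mu_hat, sig_hat) + 2 * np.mean(bias)
aic = -2 * loglik(y, mu_hat, sig_hat) + 2 * 2   # analytic penalty, 2 parameters
print(f"EIC-style criterion: {eic:.2f}   AIC: {aic:.2f}")
```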
50

Representation of Covariance Matrices in Track Fusion Problems

Gunay, Melih 01 November 2007 (has links) (PDF)
The covariance matrix plays a critical role in target-tracking algorithms for multi-sensor track fusion systems. This matrix reveals the uncertainty of the state estimates obtained from different sensors, so many subproblems of track fusion utilize it to get more accurate results. That is why this matrix must be interchanged between the nodes of the multi-sensor tracking system. This thesis mainly deals with the analysis of approximations of the covariance matrix that can best represent it, in order to transmit it effectively to the demanding site. The Kullback-Leibler (KL) distance is exploited to derive some of the representations for the Gaussian case. Comparison of these representations is another objective of this work; it is based on the fusion performance of the representations, measured for a 2-radar track fusion system.
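Scoring a reduced covariance representation by the KL distance between the original Gaussian state estimate and the one reconstructed at the receiving node can be sketched as follows; the matrices are illustrative, not from a real tracker.

```python
# A minimal sketch of ranking covariance-matrix representations by the
# closed-form KL divergence between zero-mean Gaussians N(0, P) and N(0, Q).
import numpy as np

def kl_zero_mean(P, Q):
    k = P.shape[0]
    Qinv = np.linalg.inv(Q)
    return 0.5 * (np.trace(Qinv @ P) - k
                  + np.log(np.linalg.det(Q) / np.linalg.det(P)))

P = np.array([[4.0, 1.5, 0.5],
              [1.5, 2.0, 0.3],
              [0.5, 0.3, 1.0]])            # full track covariance (illustrative)

diag_repr = np.diag(np.diag(P))            # transmit only the diagonal
scaled_id = np.trace(P) / 3 * np.eye(3)    # transmit a single scalar

for name, Q in [("diagonal", diag_repr), ("scaled identity", scaled_id)]:
    print(f"{name:16s} KL = {kl_zero_mean(P, Q):.4f}")
```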
