41

Tracking objects of interest in an image sequence: from salient points to statistical measures

Garcia, Vincent 11 December 2008 (has links) (PDF)
The problem of tracking objects in a video arises in fields such as computer vision (video surveillance, for example) and television and film post-production (special effects). It comes in two main variants: region-of-interest tracking, which denotes a coarse tracking of an object, and spatio-temporal segmentation, which corresponds to a precise tracking of the contours of the object of interest. In both cases, the region or object of interest must have been outlined beforehand on the first, and possibly the last, frame of the video sequence. In this thesis we propose a method for each of these types of tracking, as well as a fast implementation, exploiting the Graphics Processing Unit (GPU), of a region-of-interest tracking method developed elsewhere.

The first method relies on the analysis of the temporal trajectories of salient points and performs region-of-interest tracking. Salient points (typically points of high curvature on iso-intensity lines) are detected in all frames of the sequence. Trajectories are built by linking points in successive frames whose neighborhoods are coherent. Our contribution lies first in the analysis of trajectories over a group of frames, which improves the quality of motion estimation. In addition, we use a spatio-temporal weighting for each trajectory that adds a temporal constraint on the motion while accounting for local geometric deformations of the object that a global motion model ignores.

The second method performs spatio-temporal segmentation. It relies on estimating the motion of the object's contour using the information contained in a ring that extends on both sides of this contour. This ring captures the contrast between the background and the object in a local context; this is our first contribution. Furthermore, matching a portion of the ring against a region of the next frame using a statistical similarity measure, namely the entropy of the residual, improves the tracking while making it easier to choose the optimal ring size.

Finally, we propose a fast implementation of an existing region-of-interest tracking method. This method relies on a statistical similarity measure: the Kullback-Leibler divergence. This divergence can be estimated in a high-dimensional space through multiple distance computations to the k-th nearest neighbor in that space. Since these computations are very costly, we propose a parallel GPU implementation (using NVIDIA's CUDA software interface) of the exhaustive k-nearest-neighbor search. We show that this implementation speeds up object tracking by a factor of up to 15 compared with an implementation of this search that requires prior structuring of the data.
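The k-th nearest-neighbor estimation of the Kullback-Leibler divergence mentioned in this abstract can be sketched in a few lines. The following Python snippet is a minimal CPU illustration of the exhaustive search the thesis offloads to the GPU, combined with a Wang-Kulkarni-Verdu style divergence estimator; the function names and the use of NumPy are our own assumptions, not the author's code.

```python
import numpy as np

def knn_distances(queries, refs, k):
    """Exhaustive k-NN: distance from each query point to its
    k-th nearest neighbor among the reference points."""
    # Pairwise squared Euclidean distances (n_queries x n_refs).
    d2 = ((queries[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
    # k-th smallest distance per row (k=1 is the nearest neighbor).
    return np.sqrt(np.partition(d2, k - 1, axis=1)[:, k - 1])

def kl_knn_estimate(x, y, k=5):
    """kNN-based estimate of KL(p || q) from samples x ~ p, y ~ q."""
    n, m = len(x), len(y)
    d = x.shape[1]
    rho = knn_distances(x, x, k + 1)  # k+1 skips the point itself
    nu = knn_distances(x, y, k)
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))
```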
42

Information-Based Sensor Management for Static Target Detection Using Real and Simulated Data

Kolba, Mark Philip January 2009 (has links)
In the modern sensing environment, large numbers of sensor tasking decisions must be made using an increasingly diverse and powerful suite of sensors in order to best fulfill mission objectives in the presence of situationally-varying resource constraints. Sensor management algorithms allow the automation of some or all of the sensor tasking process, meaning that sensor management approaches can either assist or replace a human operator, and can ensure the safety of the operator by removing that operator from a dangerous operational environment. Sensor managers also provide improved system performance over unmanaged sensing approaches through the intelligent control of the available sensors. In particular, information-theoretic sensor management approaches have shown promise for providing robust and effective sensor manager performance.

This work develops information-theoretic sensor managers for a general static target detection problem. Two types of sensor managers are developed. The first considers a set of discrete objects, such as anomalies identified by an anomaly detector or grid cells in a gridded region of interest. The second considers a continuous spatial region in which targets may be located at any point in continuous space. In both types, the sensor manager uses a Bayesian, probabilistic framework to model the environment and tasks the sensor suite to make new observations that maximize the expected information gain for the system. The sensor managers are compared to unmanaged sensing approaches using simulated data and real data from landmine detection and unexploded ordnance (UXO) discrimination applications, and it is demonstrated that the sensor managers consistently outperform the unmanaged approaches, enabling targets to be detected more quickly. The performance improvement represented by the rapid detection of targets is of crucial importance in many static target detection applications, resulting in higher rates of advance and reduced costs and resource consumption in both military and civilian applications. / Dissertation
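As a hedged illustration of the discrete-cell case, here is a minimal Python sketch of one myopic information-theoretic tasking step. It assumes our own toy model (Bernoulli target presence per cell, a sensor characterized by hypothetical detection probability pd and false-alarm rate pfa), not the dissertation's actual sensor models.

```python
import numpy as np

def entropy(p):
    """Binary entropy of target-presence probabilities (nats)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_info_gain(prior, pd, pfa):
    """Expected entropy reduction from observing each cell once."""
    # Probability of a 'detection' report in each cell.
    p_det = pd * prior + pfa * (1 - prior)
    # Posterior under each possible report (Bayes' rule).
    post_det = pd * prior / p_det
    post_nod = (1 - pd) * prior / (1 - p_det)
    exp_post_h = p_det * entropy(post_det) + (1 - p_det) * entropy(post_nod)
    return entropy(prior) - exp_post_h

# Myopic tasking: observe the cell with the largest expected gain.
prior = np.array([0.1, 0.5, 0.3, 0.05])
next_cell = int(np.argmax(expected_info_gain(prior, pd=0.9, pfa=0.05)))
```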
43

Improved facies modelling with multivariate spatial statistics

Li, Yupeng Unknown Date
No description available.
44

Stochastic routing models in sensor networks

Keeler, Holger Paul January 2010 (has links)
Sensor networks are an evolving technology that promises numerous applications. The random and dynamic structure of sensor networks has motivated the suggestion of greedy data-routing algorithms.

In this thesis, stochastic models are developed to study the advancement of messages under greedy routing in sensor networks. A model framework based on homogeneous spatial Poisson processes is formulated and examined to give a better understanding of the stochastic dependencies arising in the system. The effects of the model assumptions and the inherent dependencies are discussed and analyzed. A simple power-saving sleep scheme is included, and its effects on the local node density are addressed to reveal that it reduces one of the dependencies in the model.

Single-hop expressions describing the advancement of messages are derived, and asymptotic expressions for the hop length moments are obtained. Expressions for the distribution of the multihop advancement of messages are derived. These expressions involve high-dimensional integrals, which are evaluated with quasi-Monte Carlo integration methods. An importance sampling function is derived to speed up the quasi-Monte Carlo methods. The subsequent results agree extremely well with those obtained via routing simulations. A renewal process model is proposed to model multihop advancements, and is justified under certain assumptions.

The model framework is extended by incorporating a spatially dependent density, which is inversely proportional to the sink distance. The aim of this extension is to demonstrate that an inhomogeneous Poisson process can be used to model a sensor network with spatially dependent node density. Elliptic integrals and asymptotic approximations are used to describe the random behaviour of hops. The final model extension entails including random transmission radii, the effects of which are discussed and analyzed. The thesis is concluded by giving future research tasks and directions.
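To make the single-hop setting concrete, here is a toy Python simulation of one greedy hop under a homogeneous Poisson node process. The greedy rule assumed here (relay to the neighbor within the transmission radius that is closest to the sink) and all function names are our own simplifications, not the thesis's exact model.

```python
import numpy as np

def greedy_hop(sink_dist, density, radius, rng):
    """One greedy hop: among Poisson-distributed neighbors within the
    transmission radius, relay to the node closest to the sink (at the
    origin). Returns the new sink distance, or None if the message is
    stuck (no neighbor advances it)."""
    # Number of neighbors in the disc, by a homogeneous Poisson process.
    n = rng.poisson(density * np.pi * radius ** 2)
    # Uniform positions in the disc around the relay at (sink_dist, 0).
    r = radius * np.sqrt(rng.random(n))
    theta = rng.uniform(0, 2 * np.pi, n)
    dists = np.hypot(sink_dist + r * np.cos(theta), r * np.sin(theta))
    best = dists.min() if n else sink_dist
    return best if best < sink_dist else None

# One hop of a message currently at distance 10 from the sink.
new_dist = greedy_hop(10.0, density=1.0, radius=1.5,
                      rng=np.random.default_rng(1))
```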
46

Cross-entropy based combination of discrete probability distributions for distributed decision making

Sečkárová, Vladimíra January 2015 (has links)
Dissertation abstract
Title: Cross-entropy based combination of discrete probability distributions for distributed decision making
Author: Vladimíra Sečkárová
Author's email: seckarov@karlin.mff.cuni.cz
Department: Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University in Prague
Supervisor: Ing. Miroslav Kárný, DrSc., The Institute of Information Theory and Automation of the Czech Academy of Sciences
Supervisor's email: school@utia.cas.cz
Abstract: In this work we propose a systematic way to combine discrete probability distributions based on decision making theory and the theory of information, namely the cross-entropy (also known as the Kullback-Leibler (KL) divergence). The optimal combination is a probability mass function minimizing the conditional expected KL-divergence. The expectation is taken with respect to a probability density function also minimizing the KL divergence under problem-reflecting constraints. Although the combination is derived for the case when sources provide probabilistic information on a common support, it can be applied to other types of given information through the proposed transformation and/or extension. The discussion regarding the proposed combining and sequential processing of available data, duplicate data, influence...
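As background for the kind of KL-minimizing combination this abstract describes, the two classical pooling rules can be written in a few lines of Python. This is a generic illustration under our own toy setup, not the thesis's construction: minimizing the weighted sum of KL(p_i || q) over q yields the arithmetic pool, while minimizing the weighted sum of KL(q || p_i) yields the normalized geometric pool.

```python
import numpy as np

def arithmetic_pool(ps, w):
    """Minimizer of sum_i w_i * KL(p_i || q) over distributions q."""
    return np.average(ps, axis=0, weights=w)

def geometric_pool(ps, w):
    """Minimizer of sum_i w_i * KL(q || p_i): the normalized weighted
    geometric mean (requires strictly positive p_i)."""
    logq = np.average(np.log(ps), axis=0, weights=w)
    q = np.exp(logq - logq.max())  # subtract max for stability
    return q / q.sum()

ps = np.array([[0.7, 0.2, 0.1],
               [0.5, 0.3, 0.2]])
w = [0.6, 0.4]
```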
47

Estimation of a predictive density with additional information

Sadeghkhani, Abdolnasser January 2017 (has links)
In the context of Bayesian theory and decision theory, the estimation of a predictive density of a random variable represents an important and challenging problem. Typically, in a parametric framework, there exists additional information that can be interpreted as constraints. This thesis deals with strategies and improvements that take the additional information into account in order to obtain effective, and sometimes better performing, predictive densities than others in the literature. The results apply to normal models with a known or unknown variance. We describe Bayesian predictive densities for Kullback-Leibler, Hellinger, reverse Kullback-Leibler, and α-divergence losses, and establish links with skew-normal densities. We obtain dominance results using several techniques, including expansion of variance, dual loss functions in point estimation, restricted parameter space estimation, and Stein estimation. Finally, we obtain a general result for the Bayesian estimator of a ratio of two exponential family densities.
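As background for the Kullback-Leibler case (a standard fact, not specific to this thesis): under KL loss, the Bayes predictive density is the posterior predictive density,

```latex
% Bayes predictive density under KL loss L(\theta, \hat{p}) = KL(p_\theta \,\|\, \hat{p}):
\hat{p}(y \mid x) = \int_\Theta p(y \mid \theta)\, \pi(\theta \mid x)\, d\theta .
```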
48

Discrepancy-based algorithms for best-subset model selection

Zhang, Tao 01 May 2013 (has links)
The selection of a best-subset regression model from a candidate family is a common problem that arises in many analyses. In best-subset model selection, we consider all possible subsets of regressor variables; thus, numerous candidate models may need to be fit and compared. One of the main challenges of best-subset selection arises from the size of the candidate model family: specifically, the probability of selecting an inappropriate model generally increases as the size of the family increases. For this reason, it is usually difficult to select an optimal model when best-subset selection is attempted based on a moderate to large number of regressor variables.

Model selection criteria are often constructed to estimate discrepancy measures used to assess the disparity between each fitted candidate model and the generating model. The Akaike information criterion (AIC) and the corrected AIC (AICc) are designed to estimate the expected Kullback-Leibler (K-L) discrepancy. For best-subset selection, both AIC and AICc are negatively biased, and the use of either criterion will lead to overfitted models. To correct for this bias, we introduce a criterion AICi, which has a penalty term evaluated from Monte Carlo simulation. A multistage model selection procedure AICaps, which utilizes AICi, is proposed for best-subset selection.

In the framework of linear regression models, the Gauss discrepancy is another frequently applied measure of proximity between a fitted candidate model and the generating model. Mallows' conceptual predictive statistic (Cp) and the modified Cp (MCp) are designed to estimate the expected Gauss discrepancy. For best-subset selection, Cp and MCp exhibit negative estimation bias. To correct for this bias, we propose a criterion CPSi that again employs a penalty term evaluated from Monte Carlo simulation. We further devise a multistage procedure, CPSaps, which selectively utilizes CPSi.

In this thesis, we consider best-subset selection in two different modeling frameworks: linear models and generalized linear models. Extensive simulation studies are compiled to compare the selection behavior of our methods and other traditional model selection criteria. We also apply our methods to a model selection problem in a study of bipolar disorder.
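For reference, the baseline AICc-based best-subset search that the thesis builds on can be sketched as follows. The formulas are the usual Gaussian linear-model versions and the function names are our own; this sketch does not include the Monte Carlo-evaluated penalties of the proposed AICi and CPSi criteria.

```python
import numpy as np
from itertools import combinations

def aicc(y, X, subset):
    """Corrected AIC for a Gaussian linear model on a subset of
    regressors (standard formula, up to an additive constant)."""
    Xs = X[:, subset]
    beta, _, _, _ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = ((y - Xs @ beta) ** 2).sum()
    n, k = len(y), len(subset) + 1  # +1 counts the error variance
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def best_subset(y, X):
    """Exhaustive best-subset search scored by AICc."""
    p = X.shape[1]
    candidates = (s for r in range(1, p + 1)
                  for s in combinations(range(p), r))
    return min(candidates, key=lambda s: aicc(y, X, list(s)))
```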
49

Cellular diagnostic systems using hidden Markov models

Mohammad, Maruf H. 29 November 2006 (has links)
Radio frequency system optimization and troubleshooting remains one of the most challenging aspects of working in a cellular network. To stay competitive, cellular providers continually monitor the performance of their networks and use this information to determine where to improve or expand services. As a result, operators are saddled with the task of wading through overwhelmingly large amounts of data in order to troubleshoot system problems. Part of the difficulty of this task is that for many complicated problems, such as hand-off failure, clues about the cause of the failure are hidden deep within the statistics of underlying dynamic physical phenomena like fading, shadowing, and interference.

In this research we propose that Hidden Markov Models (HMMs) be used as a method to infer signature statistics about the nature and sources of faults in a cellular system by fitting models to various time-series data measured throughout the network. By including HMMs in the network management tool, a provider can explore the statistical relationships between channel dynamics endemic to a cell and its resulting performance.

This research effort also includes a new distance measure between a pair of HMMs that approximates the Kullback-Leibler divergence (KLD). Since there is no closed-form solution to calculate the KLD between HMMs, the proposed analytical expression is very useful in classification and identification problems. A novel HMM-based position location technique has also been introduced that may be very useful for applications involving cognitive radios. / Ph. D.
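Since the KLD between two HMMs has no closed form, a common baseline estimates the divergence rate by sampling a long sequence from one model and comparing log-likelihoods. The Python sketch below is that generic Monte Carlo baseline, not the analytical approximation proposed in the thesis; each HMM is a hypothetical (pi, A, B) tuple of initial, transition, and discrete emission probabilities.

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

def hmm_sample(T, pi, A, B, rng):
    """Draw an observation sequence of length T from the HMM."""
    s = rng.choice(len(pi), p=pi)
    obs = []
    for _ in range(T):
        obs.append(rng.choice(B.shape[1], p=B[s]))
        s = rng.choice(len(pi), p=A[s])
    return obs

def kld_monte_carlo(hmm1, hmm2, T=1000, rng=None):
    """Divergence-rate estimate: (1/T)[log p1(O) - log p2(O)], O ~ hmm1."""
    rng = rng or np.random.default_rng(0)
    obs = hmm_sample(T, *hmm1, rng)
    return (hmm_loglik(obs, *hmm1) - hmm_loglik(obs, *hmm2)) / T
```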
50

BAYESIAN OPTIMAL DESIGN OF EXPERIMENTS FOR EXPENSIVE BLACK-BOX FUNCTIONS UNDER UNCERTAINTY

Piyush Pandita (6561242) 10 June 2019 (has links)
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions or inputs for computer simulations in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material that has great applicability. One might also be interested in accurately modeling and analyzing a simulation of a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints. This gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations under aleatory and epistemic uncertainty needs to be treated in a Bayesian way. The aim of this thesis is to extend the state-of-the-art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
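A standard information-theoretic design criterion in this setting (given here as general background, not necessarily the exact objective used in the thesis) is the expected information gain of a design $d$:

```latex
% Expected information gain of a design d over the prior p(\theta):
\mathrm{EIG}(d)
  = \mathbb{E}_{y \sim p(y \mid d)}
    \left[ \mathrm{KL}\!\left( p(\theta \mid y, d) \,\|\, p(\theta) \right) \right]
  = \mathbb{E}_{\theta,\, y}\!\left[ \log \frac{p(y \mid \theta, d)}{p(y \mid d)} \right].
```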
