About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Persistent Places in the Late Archaic Landscape / A GIS-based Case Study of CRM Sites in the Lower Grand River Area, Ontario

Tincombe, Eric January 2020 (has links)
My aim in this study is to identify Late Archaic persistent places—places of continued importance throughout the long-term occupation of a region—within the lower Grand River Area of what is now southern Ontario. I accomplish this through the use of kernel density estimation applied to datasets containing the locations of Late Archaic (4000-2800 RCYBP) sites within this study area which were discovered through cultural resource management (CRM) survey and excavation. Areas identified as persistent places were investigated with regard to landscape features and environmental affordances that could have structured their consistent re-use throughout the Late Archaic, with particular attention paid to the hypothesis that persistent places may have developed around the riverine spawning grounds of spring-spawning fish. Two places with particularly intense concentrations of diagnostic materials dating to successive periods of the Late Archaic were identified: one surrounding Seneca Creek near Caledonia, and one near D’Aubigny Creek south of Brantford. The results show that the persistent use of these places would likely have been structured by the presence of landscape features which would have made these areas particularly rich in many different seasonal resources during the Late Archaic. Perhaps most significantly, both areas are located in close proximity to areas identified as walleye spawning grounds. The contributions of this thesis include the synthesis of the results of many years of CRM survey of the Grand River Area, evidence for the existence of Late Archaic riverine fishing sites related to the spawning runs of walleye, and an improved understanding of Late Archaic subsistence-settlement systems. / Thesis / Master of Arts (MA) / Lay Abstract: My aim in this study is to identify persistent places—places of continued importance throughout the long-term occupation of a region—within the lower Grand River Area of what is now southern Ontario during a period known as the Late Archaic (ca. 2500 B.C. to ca. 1000 B.C.). This was accomplished using GIS spatial analysis of data produced through commercial archaeological assessments. As a result of this analysis, I identified two persistent places within the study area: one near D’Aubigny Creek south of Brantford, and one surrounding Seneca Creek near Caledonia. I also investigated the environments surrounding these places to determine what may have made them continuously appealing for over a millennium. Both areas were found to contain environmental features that would have likely made them particularly resource-rich and appealing to hunter-gatherers. One of the most important findings was that both areas are in close proximity to walleye spawning grounds.
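To make the spatial analysis concrete, here is a minimal kernel density sketch over site coordinates, in the spirit of the approach described above; the coordinates, search window, and default bandwidth rule are illustrative assumptions, not the thesis's CRM data or parameters.

```python
# Minimal sketch: KDE over hypothetical site find-spots to locate dense clusters
# of repeated occupation. Coordinates and grid settings are made up for illustration.
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical UTM coordinates (metres) of Late Archaic find-spots.
sites = np.array([
    [580120, 4770310], [580450, 4770260], [580300, 4770500],
    [595900, 4762100], [596050, 4762240], [596200, 4761980],
]).T  # gaussian_kde expects shape (n_dimensions, n_points)

kde = gaussian_kde(sites)  # bandwidth set by Scott's rule unless overridden

# Evaluate the density surface on a coarse grid and report the densest cell.
xs = np.linspace(sites[0].min() - 500, sites[0].max() + 500, 100)
ys = np.linspace(sites[1].min() - 500, sites[1].max() + 500, 100)
gx, gy = np.meshgrid(xs, ys)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
iy, ix = np.unravel_index(density.argmax(), density.shape)
print(f"densest cell near easting={xs[ix]:.0f}, northing={ys[iy]:.0f}")
```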
42

Trajectory Similarity Based Prediction for Remaining Useful Life Estimation

Wang, Tianyi 06 December 2010 (has links)
No description available.
43

Stochastic distribution tracking control for stochastic non-linear systems via probability density function vectorisation

Liu, Y., Zhang, Qichun, Yue, H. 08 February 2022 (has links)
Yes / This paper presents a new control strategy for stochastic distribution shape tracking regarding non-Gaussian stochastic non-linear systems. The objective can be summarised as adjusting the probability density function (PDF) of the system output to any given desired distribution. In order to achieve this objective, the system output PDF has first been formulated analytically, which is time-variant. Then, the PDF vectorisation has been implemented to simplify the model description. Using the vector-based representation, the system identification and control design have been performed to achieve the PDF tracking. In practice, the PDF evolution is difficult to implement in real time, thus a data-driven extension has also been discussed in this paper, where the vector-based model can be obtained using kernel density estimation (KDE) with the real-time data. Furthermore, the stability of the presented control design has been analysed, which is validated by a numerical example. As an extension, multi-output stochastic systems have also been discussed for joint PDF tracking using the proposed algorithm, and perspectives for advanced controller design have been discussed. The main contribution of this paper is to propose: (1) a new sampling-based PDF transformation to reduce the modelling complexity, (2) a data-driven approach for online implementation without model pre-training, and (3) a feasible framework to integrate the existing control methods. / This paper is partly supported by National Science Foundation of China under Grants (61603262 and 62073226), Liaoning Province Natural Science Joint Foundation in Key Areas (2019-KF-03-08), Natural Science Foundation of Liaoning Province (20180550418), Liaoning BaiQianWan Talents Program, i5 Intelligent Manufacturing Institute Fund of Shenyang Institute of Technology (i5201701), Central Government Guides Local Science and Technology Development Funds of Liaoning Province (2021JH6/10500137).
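As an illustration of the vectorisation idea described above, the sketch below approximates an output PDF by a fixed-length vector of KDE values on a grid; the grid, the window of output samples, and the target distribution are assumptions for illustration, not the paper's formulation.

```python
# Minimal sketch: represent the (unknown) output PDF as a vector of KDE values
# on a fixed grid, so PDF tracking reduces to tracking a finite-dimensional vector.
import numpy as np
from scipy.stats import gaussian_kde

grid = np.linspace(-3.0, 3.0, 41)   # assumed support and resolution
dx = grid[1] - grid[0]

def pdf_vector(output_samples):
    """Vectorise the output PDF from a window of recent output samples via KDE."""
    v = gaussian_kde(output_samples)(grid)
    return v / (v.sum() * dx)       # renormalise on the truncated support

rng = np.random.default_rng(0)
y_window = rng.normal(0.5, 0.8, size=200)                  # sliding window of outputs
target = np.exp(-0.5 * grid**2) / np.sqrt(2.0 * np.pi)     # desired PDF on the grid
error = ((pdf_vector(y_window) - target) ** 2).sum() * dx  # integrated squared error
print(f"integrated squared PDF tracking error: {error:.4f}")
```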
44

Maximum-likelihood kernel density estimation in high-dimensional feature spaces / C.M. van der Walt

Van der Walt, Christiaan Maarten January 2014 (has links)
With the advent of the internet and advances in computing power, the collection of very large high-dimensional datasets has become feasible; understanding and modelling high-dimensional data has thus become a crucial activity, especially in the field of pattern recognition. Since non-parametric density estimators are data-driven and do not require or impose a pre-defined probability density function on data, they are very powerful tools for probabilistic data modelling and analysis. Conventional non-parametric density estimation methods, however, originated from the field of statistics and were not originally intended to perform density estimation in high-dimensional feature spaces, as is often encountered in real-world pattern recognition tasks. Therefore we address the fundamental problem of non-parametric density estimation in high-dimensional feature spaces in this study. Recent advances in maximum-likelihood (ML) kernel density estimation have shown that kernel density estimators hold much promise for estimating nonparametric probability density functions in high-dimensional feature spaces. We therefore derive two new iterative kernel bandwidth estimators from the maximum-likelihood (ML) leave-one-out objective function and also introduce a new non-iterative kernel bandwidth estimator (based on the theoretical bounds of the ML bandwidths) for the purpose of bandwidth initialisation. We name the iterative kernel bandwidth estimators the minimum leave-one-out entropy (MLE) and global MLE estimators, and name the non-iterative kernel bandwidth estimator the MLE rule-of-thumb estimator. We compare the performance of the MLE rule-of-thumb estimator and conventional kernel density estimators on artificial data with data properties that are varied in a controlled fashion and on a number of representative real-world pattern recognition tasks, to gain a better understanding of the behaviour of these estimators in high-dimensional spaces and to determine whether these estimators are suitable for initialising the bandwidths of iterative ML bandwidth estimators in high dimensions. We find that there are several regularities in the relative performance of conventional kernel density estimators across different tasks and dimensionalities and that the Silverman rule-of-thumb bandwidth estimator performs reliably across most tasks and dimensionalities of the pattern recognition datasets considered, even in high-dimensional feature spaces. Based on this empirical evidence and the intuitive theoretical motivation that the Silverman estimator optimises the asymptotic mean integrated squared error (assuming a Gaussian reference distribution), we select this estimator to initialise the bandwidths of the iterative ML kernel bandwidth estimators compared in our simulation studies. We then perform a comparative simulation study of the newly introduced iterative MLE estimators and other state-of-the-art iterative ML estimators on a number of artificial and real-world high-dimensional pattern recognition tasks. We illustrate with artificial data (guided by theoretical motivations) under what conditions certain estimators should be preferred and we empirically confirm on real-world data that no estimator performs optimally on all tasks and that the optimal estimator depends on the properties of the underlying density function being estimated.
We also observe an interesting case of the bias-variance trade-off where ML estimators with fewer parameters than the MLE estimator perform exceptionally well on a wide variety of tasks; however, for the cases where these estimators do not perform well, the MLE estimator generally performs well. The newly introduced MLE kernel bandwidth estimators prove to be a useful contribution to the field of pattern recognition, since they perform optimally on a number of real-world pattern recognition tasks investigated and provide researchers and practitioners with two alternative estimators to employ for the task of kernel density estimation. / PhD (Information Technology), North-West University, Vaal Triangle Campus, 2014
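As a concrete illustration of the two bandwidth-selection ideas contrasted above, the sketch below computes a one-dimensional Silverman rule-of-thumb bandwidth and then refines it by maximising the leave-one-out likelihood; the data, the Gaussian kernel, and the crude grid search are assumptions for illustration and are not the thesis's estimators.

```python
# Minimal sketch (1-D, Gaussian kernel): Silverman rule-of-thumb bandwidth as an
# initialiser, refined by leave-one-out maximum-likelihood bandwidth selection.
import numpy as np

def silverman_bandwidth(x: np.ndarray) -> float:
    """Silverman's rule of thumb for a 1-D Gaussian kernel."""
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def loo_log_likelihood(x: np.ndarray, h: float) -> float:
    """Leave-one-out log-likelihood of a Gaussian-kernel KDE with bandwidth h."""
    n = len(x)
    d = x[:, None] - x[None, :]                      # pairwise differences
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                         # leave each point out
    return np.log(k.sum(axis=1) / (n - 1)).sum()

rng = np.random.default_rng(1)
x = rng.standard_t(df=5, size=300)                   # assumed sample

h0 = silverman_bandwidth(x)                          # initialisation
hs = h0 * np.linspace(0.3, 2.0, 60)                  # crude grid refinement
h_ml = hs[np.argmax([loo_log_likelihood(x, h) for h in hs])]
print(f"Silverman h = {h0:.3f}, leave-one-out ML h = {h_ml:.3f}")
```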
45

The effects of alcohol access on the spatial and temporal distribution of crime

Fitterer, Jessica Laura 15 March 2017 (has links)
Increases in alcohol availability have caused crime rates to escalate across multiple regions around the world. As individuals consume alcohol they experience impaired judgment and a dose-response escalation in aggression that, for some, leads to criminal behaviour. By limiting alcohol availability it is possible to reduce crime; however, the literature remains mixed on the best practices for alcohol access restrictions. Variation in data quality and statistical methods has created inconsistency in the reported effects of price, hours of sales, and alcohol outlet restrictions on crime. Most notably, the research findings are influenced by the different effects of alcohol establishments on crime. The objective of this PhD research was to develop novel quantitative approaches to establish the extent to which alcohol access (outlets) influences the frequency of crime (liquor, disorder, violent) at a fine level of spatial detail (x,y locations and block groups). Analyses were focused on British Columbia’s largest cities, where policies are changing to allow greater alcohol access but little is known about the crime-alcohol access relationship. Two reviews were conducted to summarize and contrast the effects of alcohol access restrictions (price, hours of sales, alcohol outlet density) on crime, and to evaluate the state of the art in statistical methods used to associate crime with alcohol availability. Results highlight key methodological limitations and fragmentation in alcohol policy effects on crime across multiple disciplines. Using a spatial data science approach, recommendations were made to increase spatial detail in modelling to limit scale effects on the crime-alcohol association. Providing guidelines for alcohol-associated crime reduction, kernel density space-time change detection methods were also applied to provide the first evaluation of active policing on alcohol-associated crime in the Granville St. entertainment district of Vancouver, British Columbia. Foot patrols were able to reduce the spatial density of crime, but hot spots of liquor and violent assaults remained within 60 m of bars (nightclubs). To estimate the association of alcohol establishment size and type with disorder and violent crime reports in block groups across Victoria, British Columbia, a Poisson Generalized Linear Model with spatial lag effects was applied. Estimates provided the factor increase (1.0009) expected in crime for every additional patron seat added to an establishment's capacity, and indicated that establishments should be spaced more than 300 m apart to significantly reduce alcohol-associated crime. These results offer the first evaluation of the effects of seating capacity and establishment spacing on alcohol-associated crime for alcohol license decision making, and are pertinent at a time when alcohol policy reform is being prioritized by the British Columbia government. In summary, this dissertation 1) contributes cross-disciplinary policy and methodological reviews, 2) expands the application of spatial statistics to alcohol-attributable crime research, 3) advances knowledge of the local-scale effects of different alcohol establishment types on crime, and 4) develops transferable models to estimate the effects of alcohol establishment seating capacity and proximity between establishments on the frequency of crime. / Graduate / 2018-02-27
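For a concrete picture of the block-group count model described above, here is a minimal Poisson GLM sketch on synthetic data; the variables, coefficients, and the absence of a spatial-lag term are assumptions for illustration, not the dissertation's fitted model.

```python
# Minimal sketch: block-group crime counts regressed on seating capacity and
# outlet count with a Poisson GLM. All data below are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
seats = rng.integers(0, 800, size=150)               # total patron seats per block group
outlets = rng.integers(0, 6, size=150)               # number of establishments
mu = np.exp(0.2 + 0.0009 * seats + 0.10 * outlets)   # assumed true rates
crimes = rng.poisson(mu)

X = sm.add_constant(np.column_stack([seats, outlets]))
fit = sm.GLM(crimes, X, family=sm.families.Poisson()).fit()
# exp(coefficient) is the multiplicative change in expected crime per unit increase,
# e.g. roughly 1.0009 per additional seat under these synthetic settings.
print(np.exp(fit.params))
```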
46

Finančná situácia domácností / Financial Situation of Households

Dvorožňáková, Zuzana January 2009 (has links)
The financial situation of households is among the key factors influencing a country's market. This thesis focuses on analyzing and evaluating the potential and financial position of Czech and Slovak households. It deals mainly with the analysis of data from the years 2004 to 2008, the period surrounding the accession of Slovakia and the Czech Republic to the European Union. The theoretical section describes, in particular, the economic events and the financial situation of households in those two countries during the observed years. The practical section uses various statistical methods and analyses to characterize the financial situation in the Czech Republic and Slovakia and to identify the differences between them. The conclusion summarizes the findings and observations of the research carried out in the practical section.
47

Efficient inference and learning in graphical models for multi-organ shape segmentation / Inférence efficace et apprentissage des modèles graphiques pour la segmentation des formes multi-organes

Boussaid, Haithem 08 January 2015 (has links)
Cette thèse explore l’utilisation des modèles de contours déformables pour la segmentation basée sur la forme des images médicales. Nous apportons des contributions sur deux fronts: dans le problème de l’apprentissage statistique, où le modèle est formé à partir d’un ensemble d’images annotées, et le problème de l’inférence, dont le but est de segmenter une image étant donnée un modèle. Nous démontrons le mérite de nos techniques sur une grande base d’images à rayons X, où nous obtenons des améliorations systématiques et des accélérations par rapport à la méthode de l’état de l’art. Concernant l’apprentissage, nous formulons la formation de la fonction de score des modèles de contours déformables en un problème de prédiction structurée à grande marge et construisons une fonction d’apprentissage qui vise à donner le plus haut score à la configuration vérité-terrain. Nous intégrons une fonction de perte adaptée à la prédiction structurée pour les modèles de contours déformables. En particulier, nous considérons l’apprentissage avec la mesure de performance consistant en la distance moyenne entre contours, comme une fonction de perte. L’utilisation de cette fonction de perte au cours de l’apprentissage revient à classer chaque contour candidat selon sa distance moyenne du contour vérité-terrain. Notre apprentissage des modèles de contours déformables en utilisant la prédiction structurée avec la fonction zéro-un de perte surpasse la méthode [Seghers et al. 2007] de référence sur la base d’images médicales considérée [Shiraishi et al. 2000, van Ginneken et al. 2006]. Nous démontrons que l’apprentissage avec la fonction de perte de distance moyenne entre contours améliore encore plus les résultats produits avec l’apprentissage utilisant la fonction zéro-un de perte et ce d’une quantité statistiquement significative.Concernant l’inférence, nous proposons des solveurs efficaces et adaptés aux problèmes combinatoires à variables spatiales discrétisées. Nos contributions sont triples: d’abord, nous considérons le problème d’inférence pour des modèles graphiques qui contiennent des boucles, ne faisant aucune hypothèse sur la topologie du graphe sous-jacent. Nous utilisons un algorithme de décomposition-coordination efficace pour résoudre le problème d’optimisation résultant: nous décomposons le graphe du modèle en un ensemble de sous-graphes en forme de chaines ouvertes. Nous employons la Méthode de direction alternée des multiplicateurs (ADMM) pour réparer les incohérences des solutions individuelles. Même si ADMM est une méthode d’inférence approximative, nous montrons empiriquement que notre implémentation fournit une solution exacte pour les exemples considérés. Deuxièmement, nous accélérons l’optimisation des modèles graphiques en forme de chaîne en utilisant l’algorithme de recherche hiérarchique A* [Felzenszwalb & Mcallester 2007] couplé avec les techniques d’élagage développés dans [Kokkinos 2011a]. Nous réalisons une accélération de 10 fois en moyenne par rapport à l’état de l’art qui est basé sur la programmation dynamique (DP) couplé avec les transformées de distances généralisées [Felzenszwalb & Huttenlocher 2004]. Troisièmement, nous intégrons A* dans le schéma d’ADMM pour garantir une optimisation efficace des sous-problèmes en forme de chaine. 
En outre, l’algorithme résultant est adapté pour résoudre les problèmes d’inférence augmentée par une fonction de perte qui se pose lors de l’apprentissage de prédiction de structures, et est donc utilisé lors de l’apprentissage et de l’inférence. [...] / This thesis explores the use of discriminatively trained deformable contour models (DCMs) for shape-based segmentation in medical images. We make contributions on two fronts: in the learning problem, where the model is trained from a set of annotated images, and in the inference problem, whose aim is to segment an image given a model. We demonstrate the merit of our techniques in a large X-Ray image segmentation benchmark, where we obtain systematic improvements in accuracy and speedups over the current state-of-the-art. For learning, we formulate training the DCM scoring function as large-margin structured prediction and construct a training objective that aims at giving the highest score to the ground-truth contour configuration. We incorporate a loss function adapted to DCM-based structured prediction. In particular, we consider training with the Mean Contour Distance (MCD) performance measure. Using this loss function during training amounts to scoring each candidate contour according to its Mean Contour Distance to the ground truth configuration. Training DCMs using structured prediction with the standard zero-one loss already outperforms the current state-of-the-art method [Seghers et al. 2007] on the considered medical benchmark [Shiraishi et al. 2000, van Ginneken et al. 2006]. We demonstrate that training with the MCD structured loss further improves over the generic zero-one loss results by a statistically significant amount. For inference, we propose efficient solvers adapted to combinatorial problems with discretized spatial variables. Our contributions are three-fold: first, we consider inference for loopy graphical models, making no assumption about the underlying graph topology. We use an efficient decomposition-coordination algorithm to solve the resulting optimization problem: we decompose the model’s graph into a set of open, chain-structured graphs. We employ the Alternating Direction Method of Multipliers (ADMM) to fix the potential inconsistencies of the individual solutions. Even though ADMM is an approximate inference scheme, we show empirically that our implementation delivers the exact solution for the considered examples. Second, we accelerate optimization of chain-structured graphical models by using the Hierarchical A∗ search algorithm of [Felzenszwalb & Mcallester 2007] coupled with the pruning techniques developed in [Kokkinos 2011a]. We achieve a one-order-of-magnitude speedup on average over the state-of-the-art technique based on Dynamic Programming (DP) coupled with Generalized Distance Transforms (GDTs) [Felzenszwalb & Huttenlocher 2004]. Third, we incorporate the Hierarchical A∗ algorithm in the ADMM scheme to guarantee an efficient optimization of the underlying chain-structured subproblems. The resulting algorithm is naturally adapted to solve the loss-augmented inference problem in structured prediction learning, and hence is used during training and inference. In Appendix A, we consider the case of 3D data and we develop an efficient method to find the mode of a 3D kernel density distribution. Our algorithm has guaranteed convergence to the global optimum, and scales logarithmically in the volume size by virtue of recursively subdividing the search space.
We use this method to rapidly initialize 3D brain tumor segmentation, where we demonstrate substantial acceleration with respect to a standard mean-shift implementation. In Appendix B, we describe in more detail our extension of the Hierarchical A∗ search algorithm of [Felzenszwalb & Mcallester 2007] to inference on chain-structured graphs.
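As a concrete reference point for the loss discussed above, the sketch below computes a symmetric Mean Contour Distance between two point-sampled contours; this symmetric definition and the toy contours are assumptions for illustration, not necessarily the thesis's exact formulation.

```python
# Minimal sketch: Mean Contour Distance between two contours given as point sets,
# here taken as the symmetric average of each point's distance to the other contour.
import numpy as np

def mean_contour_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean distance between two contours given as (n, 2) point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ground_truth = np.column_stack([np.cos(t), np.sin(t)])            # unit circle
candidate = 1.1 * np.column_stack([np.cos(t), np.sin(t)]) + 0.05  # scaled, shifted
print(f"MCD loss of candidate contour: {mean_contour_distance(ground_truth, candidate):.3f}")
```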
48

Etude de la confusion des descripteurs locaux de points d'intérêt : application à la mise en correspondance d'images de documents / Study of keypoints and local features confusion : document images matching scenario

Royer, Emilien 24 October 2017 (has links)
Ce travail s’inscrit dans une tentative de liaison entre la communauté classique de la Vision par ordinateur et la communauté du traitement d’images de documents, analyse et reconnaissance (DAR). Plus particulièrement, nous abordons la question des détecteurs de points d’intérêts et des descripteurs locaux dans une image. Ceux-ci ayant été conçus pour des images issues du monde réel, ils ne sont pas adaptés aux problématiques issues du document dont les images présentent des caractéristiques visuelles différentes. Notre approche se base sur la résolution du problème de la confusion entre les descripteurs, ceux-ci perdant leur pouvoir discriminant. Notre principale contribution est un algorithme de réduction de la confusion potentiellement présente dans un ensemble de vecteurs caractéristiques d’une même image, ceci par une approche probabiliste en filtrant les vecteurs fortement confusifs. Une telle conception nous permet d’appliquer des algorithmes d’extraction de descripteurs sans avoir à les modifier, ce qui constitue une passerelle entre ces deux mondes. / This work tries to establish a bridge between the field of classical computer vision and document analysis and recognition. Specifically, we tackle the issue of keypoint detection and associated local feature computation in the image. These are not suitable for document images since they were designed for real-world images, which have different visual characteristics. Our approach is based on resolving the issue of reducing the confusion between feature vectors, since they usually lose their discriminant power with document images. Our main contribution is an algorithm reducing the confusion between local features by filtering the ones which present a high risk of confusion. We are tackling this by using tools from probability theory. Such a method allows us to apply feature extraction algorithms without having to modify them, thus establishing a bridge between these two worlds.
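To illustrate the filtering idea described above in its simplest form, the sketch below discards descriptors that lie too close to another descriptor of the same image; the fixed distance threshold and the synthetic descriptors are assumptions for illustration, not the thesis's probabilistic criterion.

```python
# Minimal sketch: drop descriptors whose nearest intra-image neighbour is closer
# than a threshold, since such pairs are easily confused during matching.
import numpy as np

def filter_confusing(descriptors: np.ndarray, threshold: float) -> np.ndarray:
    """Keep descriptors whose nearest neighbour within the same image is far enough."""
    d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # ignore each descriptor's distance to itself
    return descriptors[d.min(axis=1) > threshold]

rng = np.random.default_rng(3)
unique = rng.random((200, 128))                            # distinct descriptors
dup = rng.random((50, 128))
near_dupes = dup + 0.01 * rng.standard_normal((50, 128))   # confusable near-copies
desc = np.vstack([unique, dup, near_dupes]).astype(np.float32)
kept = filter_confusing(desc, threshold=1.0)
print(f"kept {len(kept)} of {len(desc)} descriptors")      # the 100 near-duplicates are dropped
```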
49

Automatic Random Variate Generation for Simulation Input

Hörmann, Wolfgang, Leydold, Josef January 2000 (has links) (PDF)
We develop and evaluate algorithms for generating random variates for simulation input. One group, called automatic or black-box algorithms, can be used to sample from distributions with known density. They are based on the rejection principle. The hat function is generated automatically in a setup step using the idea of transformed density rejection. There the density is transformed into a concave function and the minimum of several tangents is used to construct the hat function. The resulting algorithms are not too complicated and are quite fast. The principle is also applicable to random vectors. A second group of algorithms is presented that generate random variates directly from a given sample by implicitly estimating the unknown distribution. The best of these algorithms are based on the idea of naive resampling plus added noise. These algorithms can be interpreted as sampling from the kernel density estimates. This method can also be applied to random vectors. There it can be interpreted as a mixture of naive resampling and sampling from the multi-normal distribution that has the same covariance matrix as the data. The algorithms described in this paper have been implemented in ANSI C in a library called UNURAN, which is available via anonymous ftp. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
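The "naive resampling plus added noise" idea described above amounts to sampling from a Gaussian kernel density estimate of the data; the sketch below shows that equivalence on a synthetic one-dimensional sample. The input data and bandwidth rule are assumptions for illustration, and this is not the UNURAN implementation.

```python
# Minimal sketch: smoothed bootstrap, i.e. resample the data and add Gaussian
# kernel noise, which is equivalent to drawing from a Gaussian-kernel KDE.
import numpy as np

def sample_from_kde(data: np.ndarray, size: int, rng: np.random.Generator) -> np.ndarray:
    """Generate variates whose distribution is the Gaussian KDE of `data`."""
    h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)    # rule-of-thumb bandwidth
    resampled = rng.choice(data, size=size, replace=True)  # naive resampling
    return resampled + rng.normal(0.0, h, size=size)       # added kernel noise

rng = np.random.default_rng(4)
observed = rng.gamma(shape=2.0, scale=1.5, size=100)       # "simulation input" sample
variates = sample_from_kde(observed, size=10_000, rng=rng)
print(f"observed mean {observed.mean():.2f}, generated mean {variates.mean():.2f}")
```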
50

時間數列之核密度估計探討 / Kernel Density Estimation for Time Series

姜一銘, Jiang, I Ming Unknown Date (has links)
對樣本資料之機率密度函數f(x)的無母數估計方法，一直是統計推論領域的研究重點之一，而且在通訊理論與圖形辨別上有非常重要的地位。傳統的文獻對密度函數的估計方法大部分著重於獨立樣本的情形。對於時間數列的相關樣本(例如：經濟指標或加權股票指數資料)比較少提到。本文針對具有弱相關性的穩定時間數列樣本，嘗試提出一個核密度估計的方法並探討其性質。 / The nonparametric estimation of a probability density function f(x) from sample data has long been a central research problem in statistical inference and plays an important role in communication theory and pattern recognition. Traditionally, the literature on density estimation for independent observations is extensive; dependent samples from time series (for example, an economic indicator or a stock market index) have received much less attention. Our main purpose is to estimate the probability density function f(x) from a stationary time series sample with weak dependence; we propose a kernel density estimator and discuss some of its properties.
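As a small illustration of the setting described above, the sketch below estimates the marginal (stationary) density of a weakly dependent AR(1) series with an ordinary Gaussian-kernel KDE and compares it to the known stationary density; the AR(1) model and its parameters are assumptions for illustration, not the thesis's data or estimator.

```python
# Minimal sketch: KDE of the stationary density of a weakly dependent AR(1) series.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
phi, n = 0.6, 2_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):                       # AR(1): stationary, weakly dependent
    x[t] = phi * x[t - 1] + rng.normal()

kde = gaussian_kde(x)                       # same estimator as in the i.i.d. case
grid = np.linspace(-4, 4, 9)
true_sd = 1.0 / np.sqrt(1 - phi**2)         # stationary s.d. of this AR(1) process
true_pdf = np.exp(-0.5 * (grid / true_sd) ** 2) / (true_sd * np.sqrt(2 * np.pi))
for g, est, tru in zip(grid, kde(grid), true_pdf):
    print(f"x={g:+.1f}  kde={est:.3f}  true={tru:.3f}")
```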
