On Renyi Divergence Measures for Continuous Alphabet Sources

GIL, MANUEL, 30 August 2011
The idea of 'probabilistic distances' (also called divergences), which in some sense assess how 'close' two probability distributions are to one another, has been widely employed in probability, statistics, information theory, and related fields. Of particular importance, owing to their generality and applicability, are the Renyi divergence measures. While the closely related concept of the Renyi entropy of a probability distribution has been studied extensively, and closed-form expressions for the most common univariate and multivariate continuous distributions have been obtained and compiled, the literature currently lacks the corresponding compilation for continuous Renyi divergences. The present thesis addresses this gap for analytically tractable cases. Closed-form expressions for Kullback-Leibler divergences are also derived and compiled, as they can be seen as an extension by continuity of the Renyi divergences. Additionally, we establish a connection between the Renyi divergence and the variance of the log-likelihood ratio of two distributions, which extends the work of Song (2001) on the relation between the Renyi entropy and the log-likelihood function, and which becomes practically useful in light of the Renyi divergence expressions we derive. Lastly, we consider the Renyi divergence rate between two zero-mean stationary Gaussian processes. / Thesis (Master, Mathematics & Statistics) -- Queen's University, 2011-08-30
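To illustrate the kind of closed-form expression the thesis compiles, the univariate Gaussian case has a well-known closed form for the Renyi divergence of order α. The sketch below (the function names are ours, not the thesis's) evaluates it and cross-checks it against direct numerical integration of the defining integral D_α(p‖q) = (1/(α−1)) ln ∫ pᵅ q^(1−α) dx:

```python
import math


def renyi_gauss(alpha, mu1, s1, mu2, s2):
    """Closed-form Renyi divergence D_alpha(N(mu1, s1^2) || N(mu2, s2^2)).

    Valid for alpha > 0, alpha != 1, provided the mixed variance
    var_a = alpha*s2^2 + (1 - alpha)*s1^2 is positive; otherwise the
    divergence is infinite.
    """
    var_a = alpha * s2**2 + (1 - alpha) * s1**2
    if var_a <= 0:
        raise ValueError("divergence is infinite for these parameters")
    return (math.log(s2 / s1)
            + math.log(s2**2 / var_a) / (2 * (alpha - 1))
            + alpha * (mu1 - mu2)**2 / (2 * var_a))


def renyi_numeric(alpha, mu1, s1, mu2, s2, lo=-30.0, hi=30.0, n=200_000):
    """Trapezoid-rule evaluation of (1/(alpha-1)) * ln integral p^alpha q^(1-alpha)."""
    def pdf(x, mu, s):
        return math.exp(-(x - mu)**2 / (2 * s**2)) / (s * math.sqrt(2 * math.pi))
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * pdf(x, mu1, s1)**alpha * pdf(x, mu2, s2)**(1 - alpha)
    return math.log(total * h) / (alpha - 1)
```

Letting α → 1 in the closed form recovers the familiar Kullback-Leibler divergence between the two Gaussians, consistent with the extension-by-continuity view taken in the thesis.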

Representation and Learning for Sign Language Recognition

Nayak, Sunita, 17 January 2008
While recognizing some kinds of human motion patterns requires detailed feature representation and tracking, many can be recognized using global features. The global configuration or structure of an object in a frame can be expressed as a probability density function constructed from relational attributes between low-level features, e.g. edge pixels extracted from the regions of interest. The probability density changes with motion, tracing a trajectory in the latent space of distributions, which we call the configuration space. These trajectories can then be used for recognition with standard techniques such as dynamic time warping. Can these frame-wise probability functions, which usually have high dimensionality, be embedded into a low-dimensional space in which various meaningful probabilistic distances can still be estimated? Given these trajectory-based representations, can one learn models of signs in an unsupervised manner? We address these two fundamental questions in this dissertation. Existing embedding approaches do not extend easily to preserve meaningful probabilistic distances between the samples. We present an embedding framework that preserves probabilistic distances such as Chernoff, Bhattacharyya, Matusita, KL, and symmetric KL by expressing them in terms of dot products between points in the embedded space, which results in computational savings. We experiment with the five probabilistic distance measures and show the usefulness of the representation in three different contexts: sign recognition of 147 different signs (a large number of possible classes), gesture recognition with 7 different gestures performed by 7 different persons (person-to-person variation), and classification of 8 different kinds of human-human interaction sequences (with segmentation problems).
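For discrete (histogram) densities, the idea of reducing probabilistic distances to dot products can be illustrated with the classical square-root embedding: mapping p to √p makes the Bhattacharyya coefficient a plain dot product and the Matusita distance a plain Euclidean distance. A minimal sketch under that assumption (the helper names are ours; the dissertation's framework covers more distances and the low-dimensional embedding itself):

```python
import math


def sqrt_embed(p):
    """Embed a discrete density p (entries summing to 1) as its elementwise square root."""
    return [math.sqrt(pi) for pi in p]


def bhattacharyya_coeff(u, v):
    """In the embedded space, the Bhattacharyya coefficient sum_i sqrt(p_i * q_i)
    is just the dot product <sqrt(p), sqrt(q)>."""
    return sum(ui * vi for ui, vi in zip(u, v))


def matusita(u, v):
    """The Matusita distance ||sqrt(p) - sqrt(q)||_2 is plain Euclidean distance
    between embedded points; note matusita^2 = 2 - 2 * bhattacharyya_coeff."""
    return math.sqrt(sum((ui - vi)**2 for ui, vi in zip(u, v)))
```

For example, with p = [0.5, 0.3, 0.2] and q = [0.2, 0.3, 0.5], the identity matusita(u, v)² = 2 − 2·bhattacharyya_coeff(u, v) holds exactly, which is what lets distance computations along a trajectory run as cheap inner products.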
Currently, researchers in continuous sign language recognition assume that training signs are already available, and these are often manually selected from continuous sentences, which is tedious and consumes a great deal of human time. We present an approach for automatically learning signs from multiple sentences, using a probabilistic framework to extract the parts of each sign that are present in most of its occurrences and are robust to variations produced by adjacent signs. We show results by learning 10 signs and 10 spoken words from 136 sign language sentences and 136 spoken sequences, respectively.
