About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Privacy Preserving Kin Genomic Data Publishing

Shang, Hui, 16 July 2020
No description available.
2

Input of Factor Graphs into the Detection, Classification, and Localization Chain and Continuous Active SONAR in Undersea Vehicles

Gross, Brandi Nicole, 10 September 2015
The focus of this thesis is to apply factor graphs to the detection, classification, and localization (DCL) of underwater objects using active SOund Navigation And Ranging (SONAR). A factor graph is a bipartite graphical representation of the decomposition of a particular function. Messages are passed along the edges connecting factor and variable nodes, and a message-passing algorithm uses these messages to compute the posterior probabilities at a particular node. This thesis addresses two problems. The first part presents the formulation of the factor graphs required for each stage of the DCL chain, followed by their closed-form solutions. For the detector, the factor graph determines whether a signal is a detection or simply noise; for the classifier, it outputs the probability of each class; and for the tracker, it gives the estimated state of the object being tracked. The second part concentrates on the application to Continuous Active SONAR (CAS). CAS uses a bistatic configuration, with two unmanned underwater vehicles (UUVs) serving as transmitter and receiver, allowing a more rapid update rate. The goal is to evaluate the effectiveness of CAS by determining whether tracking accuracy improves as the transmit interval decreases. If CAS proves more effective for target tracking, the next objective is to determine which messages sent between the two UUVs are most beneficial. To test this, a particle filter simulation is used, as sketched below. / Master of Science
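The abstract does not specify the particle filter's details; as a rough illustration, here is a minimal bootstrap particle filter for a bistatic range-only tracking scenario in NumPy. The geometry, noise levels, and constant-velocity motion model are all illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                       # number of particles
dt = 1.0                       # measurement update interval (shorter under CAS)
tx = np.array([0.0, 0.0])      # transmitter UUV position (illustrative)
rx = np.array([500.0, 0.0])    # receiver UUV position (illustrative)

# Particle state is [x, y, vx, vy], initialized around a rough prior.
particles = rng.normal([200.0, 300.0, 1.0, 0.0], [50.0, 50.0, 1.0, 1.0], (N, 4))
weights = np.full(N, 1.0 / N)

def bistatic_range(p):
    """Transmitter-to-target plus target-to-receiver distance."""
    pos = p[:, :2]
    return np.linalg.norm(pos - tx, axis=1) + np.linalg.norm(pos - rx, axis=1)

def pf_step(particles, weights, z, sigma=5.0):
    """One bootstrap-filter update given a bistatic range measurement z."""
    # Propagate with a constant-velocity model plus process noise.
    particles[:, :2] += particles[:, 2:] * dt
    particles += rng.normal(0.0, [2.0, 2.0, 0.5, 0.5], particles.shape)
    # Reweight by the Gaussian likelihood of the measurement.
    w = weights * np.exp(-0.5 * ((z - bistatic_range(particles)) / sigma) ** 2)
    w /= w.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(w**2) < N / 2:
        c = np.cumsum(w)
        c[-1] = 1.0            # guard against floating-point shortfall
        u = (rng.random() + np.arange(N)) / N
        particles = particles[np.searchsorted(c, u)]
        w = np.full(N, 1.0 / N)
    return particles, w

# Estimated target state: the weighted particle mean, i.e. weights @ particles.
```

Tracking accuracy versus transmit interval can then be compared by running this update at different values of dt.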
3

Affinity Propagation: Clustering Data by Passing Messages

Dueck, Delbert, 24 September 2009
Clustering data by identifying a subset of representative examples is important for detecting patterns in data and in processing sensory signals. Such "exemplars" can be found by randomly choosing an initial subset of data points as exemplars and then iteratively refining it, but this works well only if that initial choice is close to a good solution. This thesis describes a method called "affinity propagation" that simultaneously considers all data points as potential exemplars, exchanging real-valued messages between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. Affinity propagation takes as input a set of pairwise similarities between data points and finds clusters on the basis of maximizing the total similarity between data points and their exemplars. Similarity can be simply defined as negative squared Euclidean distance for compatibility with other algorithms, or it can incorporate richer domain-specific models (e.g., translation-invariant distances for comparing images). Affinity propagation’s computational and memory requirements scale linearly with the number of similarities input; for non-sparse problems where all possible similarities are computed, these requirements scale quadratically with the number of data points. Affinity propagation is demonstrated on several applications from areas such as computer vision and bioinformatics, and it typically finds better clustering solutions than other methods in less time.
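The responsibility and availability updates behind affinity propagation are compact enough to sketch directly. The following NumPy implementation follows the standard published update rules; the damping factor and iteration count are illustrative defaults rather than values from the thesis.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """S is an n x n similarity matrix; the diagonal S[k, k] holds the
    'preference' of point k to serve as an exemplar."""
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibilities r(i, k)
    A = np.zeros((n, n))   # availabilities  a(i, k)
    rows = np.arange(n)
    for _ in range(iters):
        # Responsibility: how well-suited k is to be i's exemplar,
        # relative to i's best alternative.
        AS = A + S
        best = np.argmax(AS, axis=1)
        first = AS[rows, best]
        AS[rows, best] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[rows, best] = S[rows, best] - second
        R = damping * R + (1 - damping) * R_new
        # Availability: accumulated evidence from other points that k
        # would make a good exemplar.
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        col = Rp.sum(axis=0)
        A_new = np.minimum(0, col[None, :] - Rp)
        np.fill_diagonal(A_new, col - np.diag(Rp))
        A = damping * A + (1 - damping) * A_new
    return np.argmax(A + R, axis=1)   # exemplar assignment for each point
```

Points that select themselves are the exemplars. Production implementations (e.g. sklearn.cluster.AffinityPropagation) add tie-breaking noise and convergence checks omitted from this sketch.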
4

Map Partition and Loop Closure in a Factor Graph Based SAM System

Relfsson, Emil, January 2020
The graph-based formulation of the navigation problem is establishing itself as one of the standard formulations within the sensor fusion community. It provides convenient access to information from previous positions, which can be used to enhance the estimate of the current position. To restrict working-memory usage, map partitioning can be used to store older parts of the map on a hard drive in the form of submaps, limiting the number of previous positions within the active map. This thesis examines the effect that the information loss caused by map partitioning has on the state-of-the-art positioning algorithm iSAM2, both on open routes and when loop closure is achieved. It finds that larger submaps appear to cause a smaller positional error than smaller submaps on open routes, while smaller submaps seem to give a smaller positional error than larger submaps when loop closure is achieved. The thesis also examines how the density of landmarks at the partition point affects the positional error, but the results are mixed and no clear conclusions can be drawn. Finally, it reviews some loop-closure detection algorithms that pair conveniently with the iSAM2 algorithm.
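For readers who want to experiment with this setup, here is a rough sketch using the GTSAM library, which provides an iSAM2 implementation. The submap size, noise models, and re-anchoring scheme are illustrative simplifications assumed for the sketch, not the thesis's actual partitioning method.

```python
import numpy as np
import gtsam

params = gtsam.ISAM2Params()
isam = gtsam.ISAM2(params)

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph, initial = gtsam.NonlinearFactorGraph(), gtsam.Values()
x0 = gtsam.symbol('x', 0)
graph.add(gtsam.PriorFactorPose2(x0, gtsam.Pose2(0, 0, 0), prior_noise))
initial.insert(x0, gtsam.Pose2(0, 0, 0))

SUBMAP_SIZE = 50    # poses per submap (illustrative; the thesis varies this)
submaps = []

for i in range(1, 201):
    xi, xp = gtsam.symbol('x', i), gtsam.symbol('x', i - 1)
    # Odometry: move 1 m forward per step (a stand-in for real measurements).
    graph.add(gtsam.BetweenFactorPose2(xp, xi, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
    initial.insert(xi, gtsam.Pose2(float(i), 0.0, 0.0))
    isam.update(graph, initial)                  # incremental iSAM2 step
    graph, initial = gtsam.NonlinearFactorGraph(), gtsam.Values()
    if i % SUBMAP_SIZE == 0:
        # Partition point: archive the active map as a submap and restart
        # iSAM2 anchored at the last pose. Correlations across the cut are
        # discarded -- the information loss whose effect the thesis studies.
        estimate = isam.calculateEstimate()
        submaps.append(estimate)
        last_pose = estimate.atPose2(xi)
        isam = gtsam.ISAM2(params)
        graph.add(gtsam.PriorFactorPose2(xi, last_pose, prior_noise))
        initial.insert(xi, last_pose)
```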
5

A Hardware Generator for Factor Graph Applications

Demma, James Daniel, 08 June 2014
A Factor Graph (FG -- http://en.wikipedia.org/wiki/Factor_graph) is a structure used to find solutions to problems that can be represented as a Probabilistic Graphical Model (PGM). FGs consist of interconnected variable nodes and factor nodes, which iteratively compute and pass messages to each other. They can be applied to the decoding of forward error-correcting codes, inference in Markov chains and Markov random fields, Kalman filtering, Fourier transforms, and even some games such as Sudoku. In this paper, a framework is presented for rapid prototyping of hardware implementations of FG-based applications. The FG developer specifies aspects of the application, such as the graphical structure, the factor computation, and the message-passing algorithm, and the framework returns a design. A system of Python scripts and Verilog Hardware Description Language templates together generates the HDL source code for the application. The generated designs are vendor/platform agnostic, but currently target the Xilinx Virtex-6-based ML605. The framework has so far been applied primarily to constructing Low-Density Parity-Check (LDPC) decoders. The characteristics of a large basket of generated LDPC decoders, including contemporary 802.11n decoders, have been examined both to verify the system and to demonstrate its capabilities. As a further demonstration, the framework has been applied to construct a Sudoku solver. / Master of Science
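The generated hardware itself is Verilog, but the message-passing schedule such LDPC decoders implement can be illustrated with a small software reference model. The following min-sum decoder is a generic sketch, not code from the framework; the parity-check matrix and channel values in the example are assumptions.

```python
import numpy as np

def minsum_decode(H, llr, max_iters=20):
    """Software reference for min-sum decoding on the Tanner graph of a
    parity-check matrix H (m x n, entries 0/1). llr holds channel
    log-likelihood ratios, positive meaning bit 0 is more likely."""
    m, n = H.shape
    C = np.zeros((m, n))                      # check-to-variable messages
    hard = llr < 0
    for _ in range(max_iters):
        # Variable-to-check: channel LLR plus all other incoming messages.
        total = llr + C.sum(axis=0)
        V = np.where(H == 1, total - C, 0.0)
        # Check-to-variable: product of signs times minimum magnitude,
        # both taken over the other edges of the check.
        for c in range(m):
            idx = np.flatnonzero(H[c])
            msgs = V[c, idx]
            signs = np.where(msgs < 0, -1.0, 1.0)
            mags = np.abs(msgs)
            for j, v in enumerate(idx):
                mask = np.arange(len(idx)) != j
                C[c, v] = np.prod(signs[mask]) * mags[mask].min()
        hard = (llr + C.sum(axis=0)) < 0      # posterior hard decisions
        if not ((H @ hard) % 2).any():        # all parity checks satisfied
            break
    return hard.astype(int)

# Toy example: a (7,4) Hamming parity-check matrix and a noisy all-zeros word.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.2, 1.5, 1.3, 1.9, 1.2, 0.7])
print(minsum_decode(H, llr))   # corrects the weak flipped bit: all zeros
```

A hardware generator of the kind described above unrolls exactly these variable- and check-node computations into parallel node pipelines.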
6

Code-aided synchronization for digital burst communications

Herzet, Cédric, 21 April 2006
This thesis deals with the synchronization of digital communication systems. Synchronization (from the Greek syn (together) and chronos (time)) denotes the task of making two systems run at the same time. In communication systems, synchronizing the transmitter and the receiver requires accurately estimating a number of parameters, such as the carrier frequency and phase offsets and the timing epoch. In the early days of digital communications, synchronizers operated in either data-aided (DA) or non-data-aided (NDA) modes. However, with the recent advent of powerful coding techniques, these conventional synchronization modes have been shown to be unable to properly synchronize state-of-the-art receivers. In this context, we investigate in this thesis a new family of synchronizers referred to as code-aided (CA) synchronizers. The idea behind CA synchronization is to exploit the structure of the code used to protect the data to improve the estimation quality achieved by the synchronizers. In the first part of the thesis, we address the issue of turbo synchronization, i.e., the iterative synchronization of continuous parameters. In particular, we derive several mathematical frameworks enabling a systematic derivation of turbo synchronizers and a deeper understanding of their behavior. In the second part, we focus on the so-called CA hypothesis-testing problem; more particularly, we derive optimal solutions to this problem and propose efficient implementations of the resulting algorithms. Finally, in the last part of this thesis, we derive theoretical lower bounds on the performance of turbo synchronizers.
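To ground the terminology, the conventional DA mode reduces to a one-line maximum-likelihood estimate when only a carrier phase offset is unknown; the sketch below shows this, with all signal parameters being illustrative. A code-aided synchronizer would instead replace the known pilots with soft symbol information fed back from the decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known QPSK pilot symbols and their observations after an unknown phase
# rotation plus complex Gaussian noise (all parameters illustrative).
pilots = np.exp(1j * (np.pi / 4) * (2 * rng.integers(0, 4, size=64) + 1))
true_phase = 0.3
noise = 0.1 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
r = pilots * np.exp(1j * true_phase) + noise

# DA maximum-likelihood phase estimate: correlate against the known symbols.
phase_hat = np.angle(np.sum(r * np.conj(pilots)))
print(phase_hat)   # close to 0.3
```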
7

Generalized Survey Propagation

Tu, Ronghui, 09 May 2011
Survey propagation (SP) has recently emerged as an efficient algorithm for solving classes of hard constraint-satisfaction problems (CSPs). Powerful as it is, SP remains a heuristic algorithm, and further understanding its algorithmic nature, improving its effectiveness, and extending its applicability are highly desirable. Prior to the work in this thesis, Maneva et al. introduced a Markov Random Field (MRF) formalism for k-SAT problems, under which SP may be viewed as a special case of the well-known belief propagation (BP) algorithm. This result has sometimes been interpreted as meaning that "SP is BP", and it allows a rigorous extension of SP to a "weighted" version, or a family of algorithms, for k-SAT problems. SP has also been generalized, in a non-weighted fashion, to non-binary CSPs, but that generalization is presented in the language of statistical physics and is somewhat inaccessible to a general audience. This thesis generalizes SP both in its applicability to non-binary problems and in introducing "weights", extending SP to a family of algorithms. Under a generic formulation of CSPs, we first present an understanding of non-weighted SP for arbitrary CSPs in terms of "probabilistic token passing" (PTP). We then show that this probabilistic interpretation of non-weighted SP makes it naturally generalizable to a weighted version, which we call weighted PTP. Another main contribution of this thesis is a disproof of the folk belief that "SP is BP": the fact that SP is a special case of BP for k-SAT problems is rather incidental, and for more general CSPs, SP and generalized SP do not reduce to BP. We also establish the conditions under which generalized SP does reduce to a special case of BP. To explore the benefit of generalizing SP to a wide family of algorithms for arbitrary, particularly non-binary, problems, we devised a simple weighted-PTP-based algorithm for solving 3-COL problems. Experimental results, compared against an existing non-weighted SP-based algorithm, reveal the potential performance gain that generalized SP may bring.
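The comparison with BP is easier to follow with the plain sum-product algorithm in view. The sketch below runs exact sum-product BP on a minimal two-variable factor graph; SP's survey messages, which add an extra unconstrained ("joker") state per variable, are deliberately not shown. All potential tables are illustrative.

```python
import numpy as np

# Minimal factor graph: binary variables x1, x2 with unary potentials
# phi1, phi2 and one pairwise factor psi.
phi1 = np.array([0.7, 0.3])
phi2 = np.array([0.4, 0.6])
psi = np.array([[1.0, 0.2],
                [0.2, 1.0]])          # psi[x1, x2] favors agreement

# On a tree, one sum-product pass in each direction is exact.
m_1f = phi1                           # variable-to-factor: product of other messages
m_f2 = psi.T @ m_1f                   # factor-to-variable: sum out x1
m_2f = phi2
m_f1 = psi @ m_2f                     # sum out x2

b1 = phi1 * m_f1                      # belief = local potential times incoming msgs
b2 = phi2 * m_f2
print(b1 / b1.sum(), b2 / b2.sum())   # exact marginals of x1 and x2
```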
8

3-D Scene Reconstruction from Multiple Photometric Images

Forne, Christopher Jes, January 2007
This thesis deals with the problem of three-dimensional scene reconstruction from multiple camera images. This is a well-established problem in computer vision and has been researched extensively. In recent years some excellent results have been achieved; however, existing algorithms often fall short of many biological systems in terms of robustness and generality. The aim of this research was to develop improved algorithms for reconstructing 3D scenes, with a focus on accurate system modelling and correctly dealing with occlusions. In scene reconstruction the objective is to infer, from the data given by camera images, scene parameters describing the 3D structure of the scene. This is an ill-posed inverse problem, where an exact solution cannot be guaranteed. A statistical approach to the scene reconstruction problem is introduced, and the differences between the maximum a posteriori (MAP) and minimum mean-square-error (MMSE) estimates are considered. It is discussed how traditional stereo matching can be performed using a volumetric scene model. An improved model describing the relationship between the camera data and a discrete model of the scene is presented, highlighting some of the common causes of modelling errors and enabling them to be dealt with objectively. The problems posed by occlusions are considered. Using a greedy algorithm, the scene is progressively reconstructed to account for visibility interactions between regions, and the idea of a complete scene estimate is established. Simple and improved techniques for reliably assigning opaque voxels are developed, making use of prior information. Problems with variations in the imaging convolution kernel between images motivate the development of a pixel dissimilarity measure. Belief propagation is then applied to better utilise prior information and obtain an improved global optimum. A new volumetric factor graph model is presented which represents the joint probability distribution of the scene and the imaging system. By utilising the structure of the local compatibility functions, an efficient procedure for updating the messages is detailed. To help convergence, a novel approach of accentuating beliefs is shown. Results demonstrate the validity of this approach; however, the reconstruction error is similar to or slightly higher than that of the greedy algorithm. To simplify the volumetric model, a new approach to belief propagation is demonstrated by applying it to a dynamic model, developed as a less memory- and compute-intensive alternative to the full volumetric model. Using a factor graph, a volumetric known-visibility model is presented which ensures the scene is complete with respect to all the camera images. Dynamic updating is also applied to a simpler single depth-map model. Results show this approach is unsuitable for the volumetric known-visibility model; however, improved results are obtained with the simple depth-map model.
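The MAP/MMSE distinction raised above can be made concrete in a few lines: given a discrete posterior over depth labels (such as a belief produced by a volumetric model), the two estimators can disagree noticeably when the posterior is multimodal. The values below are illustrative.

```python
import numpy as np

# Posterior belief over discrete depth labels for one pixel, e.g. as
# produced by a volumetric belief-propagation model (values illustrative).
depths = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
posterior = np.array([0.05, 0.42, 0.10, 0.05, 0.38])

map_depth = depths[np.argmax(posterior)]   # MAP: most probable single label
mmse_depth = posterior @ depths            # MMSE: posterior mean

print(map_depth, mmse_depth)   # 1.5 vs ~2.15 -- the estimators disagree
```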
