61

Low complexity turbo equalization using superstructures

Myburgh, Hermanus Carel January 2013 (has links)
In a wireless communication system the transmitted information is subjected to a number of impairments, among which inter-symbol interference (ISI), thermal noise and fading are the most prevalent. Owing to the dispersive nature of the communication channel, ISI results from the arrival of multiple delayed copies of the transmitted signal at the receiver. Thermal noise is caused by the random fluctuation of electrons in the receiver hardware, while fading is the result of constructive and destructive interference, as well as absorption during transmission. To protect the source information, error-correction coding (ECC) is performed in the transmitter, after which the coded information is interleaved in order to separate the transmitted information temporally. Turbo equalization (TE) is a technique whereby equalization (to correct ISI) and decoding (to correct errors) are performed iteratively, with the equalizer and decoder exchanging extrinsic information derived from the optimal posterior probabilistic information each algorithm produces. The extrinsic information determined from the decoder output is used as prior information by the equalizer, and vice versa, allowing the bit-error rate (BER) performance to improve with each iteration. Turbo equalization achieves excellent BER performance, but its computational complexity grows exponentially with both channel memory and encoder memory, and it can therefore not be used in dispersive channels where the channel memory is large. A number of low complexity equalizers have consequently been developed to replace the maximum a posteriori probability (MAP) equalizer in order to reduce the complexity. Some of the resulting low complexity turbo equalizers achieve performance comparable to that of a conventional turbo equalizer that uses a MAP equalizer. In other cases the low complexity turbo equalizers perform much worse than the corresponding conventional turbo equalizer (CTE) because of suboptimal equalization and their inability to utilize the extrinsic information effectively as prior information. In this thesis the author develops two novel iterative low complexity turbo equalizers. The turbo equalization problem is modeled on superstructures, where, in the context of this thesis, a superstructure performs the task of both the equalizer and the decoder. The resulting low complexity turbo equalizers process all the available information as a whole, so there is no exchange of extrinsic information between different subunits. The first is modeled on a dynamic Bayesian network (DBN), which treats the turbo equalization problem as a quasi-directed acyclic graph by allowing a dominant connection between each observed variable and its corresponding hidden variable, as well as weak connections between the observed variables and past and future hidden variables. The resulting turbo equalizer is named the dynamic Bayesian network turbo equalizer (DBN-TE). The second low complexity turbo equalizer developed in this thesis is modeled on a Hopfield neural network, and is named the Hopfield neural network turbo equalizer (HNN-TE). The HNN-TE is an amalgamation of the HNN maximum likelihood sequence estimation (MLSE) equalizer, developed previously by this author, and an HNN MLSE decoder derived from a single codeword HNN decoder.
Both low complexity turbo equalizers developed in this thesis are able to jointly and iteratively equalize and decode coded, randomly interleaved information transmitted through highly dispersive multipath channels. The performance of both is comparable to that of the conventional turbo equalizer, while their computational complexities are superior for channels with long memory. Their performance is also comparable to that of other low complexity turbo equalizers, although their computational complexities are worse. The computational complexity of both the DBN-TE and the HNN-TE is approximately quadratic at best (and cubic at worst) in the transmitted data block length, exponential in the encoder constraint length, and approximately independent of the channel memory length. The approximately quadratic complexity of both the DBN-TE and the HNN-TE is mostly due to interleaver mitigation, which requires multiplication of matrices with dimensions equal to the data block length; without it, turbo equalization using superstructures is impossible for systems employing random interleavers. / Thesis (PhD)--University of Pretoria, 2013. / gm2013 / Electrical, Electronic and Computer Engineering / unrestricted
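For orientation, the extrinsic-information exchange of a conventional turbo equalizer, the baseline against which the superstructure approach is set, can be sketched as follows. This is a minimal illustration with toy stand-ins (a memoryless AWGN channel and a rate-1/2 repetition code) so that the extrinsic LLR bookkeeping is visible without a full BCJR trellis; all names and parameters are hypothetical, and this is not the thesis's superstructure method.

```python
# Conceptual sketch of the extrinsic-information loop in a conventional
# turbo equalizer, with toy stand-ins for the equalizer and decoder.
import numpy as np

rng = np.random.default_rng(0)

def soft_equalize(rx, prior_llr, sigma2):
    # Stand-in for a MAP equalizer: channel LLR plus prior. A real equalizer
    # would run BCJR over the ISI trellis; here the channel has no memory,
    # so the loop converges after a single pass.
    return 2.0 * rx / sigma2 + prior_llr

def soft_decode(llr):
    # Stand-in for a MAP decoder of a repetition code: each source bit is
    # sent twice, so its posterior LLR is the sum of the pair's LLRs.
    return np.repeat(llr.reshape(-1, 2).sum(axis=1), 2)

def turbo_loop(rx, pi, sigma2, n_iters=4):
    inv = np.argsort(pi)                       # deinterleaver
    prior = np.zeros(len(rx))
    for _ in range(n_iters):
        post_eq = soft_equalize(rx, prior, sigma2)
        ext_eq = post_eq - prior               # extrinsic: subtract own prior
        post_dec = soft_decode(ext_eq[inv])    # decode in deinterleaved order
        ext_dec = post_dec - ext_eq[inv]       # decoder's extrinsic output
        prior = ext_dec[pi]                    # fed back as the equalizer prior
    return post_dec > 0                        # hard decisions, coded order

bits = rng.integers(0, 2, 32)
coded = np.repeat(bits, 2)                     # toy rate-1/2 "encoder"
pi = rng.permutation(len(coded))               # random interleaver
rx = (2.0 * coded[pi] - 1.0) + rng.normal(0.0, 0.8, len(coded))
bits_hat = turbo_loop(rx, pi, 0.64).reshape(-1, 2)[:, 0]
print("bit errors:", int(np.sum(bits_hat != bits.astype(bool))))
```

The superstructure approach of the thesis removes exactly this subunit-to-subunit exchange, processing all the information in one structure.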
62

Computation with continuous mode CMOS circuits in image processing and probabilistic reasoning

Mroszczyk, Przemyslaw January 2014 (has links)
The objective of the research presented in this thesis is to investigate alternative ways of information processing employing asynchronous, data-driven, and analogue computation in massively parallel cellular processor arrays, with applications in machine vision and artificial intelligence. The use of cellular processor architectures, with only local neighbourhood connectivity, is considered in VLSI realisations of trigger-wave propagation in binary image processing, and in Bayesian inference. Design issues critical to computational precision and system performance are extensively analysed, accounting for the non-ideal operation of MOS devices caused by second-order effects, noise and parameter mismatch. In particular, CMOS hardware solutions for two specific tasks, binary image skeletonization and the sum-product algorithm for belief propagation in factor graphs, are considered, targeting efficient design in terms of processing speed, power, area, and computational precision. The major contributions of this research are in the area of continuous-time and discrete-time CMOS circuit design, with applications in moderate-precision analogue and asynchronous computation, accounting for parameter variability. Various analogue and digital circuit realisations, operating in the continuous-time and discrete-time domains, are analysed in theory and verified using combined Matlab-Hspice simulations, providing a versatile framework suitable for custom specific analyses, verification and optimisation of the designed systems. Novel solutions, exhibiting reduced impact of parameter variability on circuit operation, are presented and applied in the design of arithmetic circuits for matrix-vector operations and in data-driven asynchronous processor arrays for binary image processing. Several mismatch optimisation techniques are demonstrated, based on a switched-current approach in the design of a current-mode Gilbert multiplier circuit, a novel biasing scheme in the design of tunable delay gates, and an averaging technique applied to analogue continuous-time circuit realisations of Bayesian networks. The most promising circuit solutions were implemented on the PPATC test chip, fabricated in a standard 90 nm CMOS process, and verified in experiments.
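For reference, the sum-product updates that such analogue circuits realise are the standard factor-graph message equations (generic form, not specific to this thesis):

\[
\mu_{x\to f}(x)=\prod_{h\in n(x)\setminus\{f\}}\mu_{h\to x}(x),
\qquad
\mu_{f\to x}(x)=\sum_{\sim\{x\}} f(X)\prod_{y\in n(f)\setminus\{x\}}\mu_{y\to f}(y),
\]

where \(n(\cdot)\) denotes the neighbours of a node, \(X\) the set of arguments of factor \(f\), and \(\sum_{\sim\{x\}}\) the sum over all of those arguments except \(x\). Analogue current-mode multipliers such as the Gilbert circuit mentioned above are natural building blocks for the products appearing in these updates.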
63

Effective Bayesian inference for sparse factor analysis models

Sharp, Kevin John January 2011 (has links)
We study how to perform effective Bayesian inference in high-dimensional sparse Factor Analysis models with a zero-norm, sparsity-inducing prior on the model parameters. Such priors represent a methodological ideal, but Bayesian inference in such models is usually regarded as impractical. We test this view. After empirically characterising the properties of existing algorithmic approaches, we use techniques from statistical mechanics to derive a theory of optimal learning in the restricted setting of sparse PCA with a single factor. Finally, we describe a novel 'Dense Message Passing' algorithm (DMP) which achieves near-optimal performance on synthetic data generated from this model. DMP exploits properties of high-dimensional problems to operate successfully on a densely connected graphical model. Similar algorithms have been developed in the statistical physics community and previously applied to inference problems in coding and sparse classification. We demonstrate that DMP outperforms both a newly proposed variational hybrid algorithm and two other recently published algorithms (SPCA and emPCA) on synthetic data, while it explains at least the same amount of variance, for a given level of sparsity, in two gene expression datasets used in previous studies of sparse PCA. A significant potential advantage of DMP is that it provides an estimate of the marginal likelihood which can be used for hyperparameter optimisation. We show that, for the single factor case, this estimate exhibits good qualitative agreement both with theoretical predictions and with the hyperparameter posterior inferred by a collapsed Gibbs sampler. Preliminary work on an extension to inference of multiple factors indicates its potential for selecting an optimal model from amongst candidates which differ both in numbers of factors and their levels of sparsity.
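As a point of reference, the single-factor sparse generative model with a zero-norm (spike-and-slab) prior referred to above can be sketched as follows; this is my illustration under standard assumptions, with all parameter values hypothetical.

```python
# Single-factor sparse PCA / factor analysis generative model with a
# zero-norm (spike-and-slab) prior on the loadings (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
d, n, rho, noise_var = 500, 200, 0.05, 1.0   # hypothetical sizes and sparsity

support = rng.random(d) < rho                # zero-norm prior: sparse support
w = np.where(support, rng.normal(0.0, 1.0, d), 0.0)   # sparse loading vector
f = rng.normal(0.0, 1.0, n)                  # latent factor scores
X = np.outer(w, f) + rng.normal(0.0, np.sqrt(noise_var), (d, n))

print(f"nonzero loadings: {support.sum()} of {d}")
```

Inference then amounts to recovering the support and values of w from X alone, which is where message-passing algorithms such as DMP come in.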
64

Message Passing Approaches to Compressive Inference Under Structured Signal Priors

Ziniel, Justin A. January 2014 (has links)
No description available.
65

Dynamic cavity method and problems on graphs

Lokhov, Andrey Y. 14 November 2014 (has links)
A large number of optimization, inverse, combinatorial and out-of-equilibrium problems arising in the statistical physics of complex systems allow for a convenient representation in terms of disordered interacting variables defined on a certain network. Although no universal recipe for treating these problems exists, recent years have seen serious progress in understanding and quantifying a number of hard problems on graphs. A particular role has been played by concepts borrowed from the physics of spin glasses and field theory, which have proved extremely successful in describing the statistical properties of complex systems and in developing efficient algorithms for concrete problems. In the first part of the thesis, we study out-of-equilibrium spreading problems on networks. Using the dynamic cavity method on time trajectories, we show how to derive dynamic message-passing equations for a large class of models with unidirectional dynamics, the key property that makes the problem solvable.
These equations are asymptotically exact for locally tree-like graphs and generally provide a good approximation for real-world networks. We illustrate the approach by applying the dynamic message-passing equations for the susceptible-infected-recovered model to the inverse problem of inferring the epidemic origin. In the second part of the manuscript, we address the optimization problem of finding optimal planar matching configurations on a line. Making use of field-theory techniques and combinatorial arguments, we characterize a topological phase transition that occurs in the simple Bernoulli model of disordered matching. As an application to the physics of RNA secondary structures, we discuss the relation of the perfect-imperfect matching transition to the known molten-glass transition at low temperatures, and suggest generalized models that incorporate a one-to-one correspondence between the contact matrix and the nucleotide sequence, thus giving sense to the notion of effective non-integer alphabets.
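For concreteness, the forward inference that the source-inference problem builds on can be sketched with the discrete-time dynamic message-passing equations for the SIR model; the following is my reconstruction of their published form, with hypothetical parameter values.

```python
# Dynamic message passing (DMP) for discrete-time SIR on a graph:
# lam = per-step transmission probability, mu = recovery probability.
import numpy as np

def dmp_sir(edges, n, source, lam, mu, T):
    msgs = [(k, i) for k, i in edges] + [(i, k) for k, i in edges]
    idx = {e: a for a, e in enumerate(msgs)}            # directed edge -> index
    nbrs = [[] for _ in range(n)]
    for k, i in msgs:
        nbrs[i].append(k)
    PS0 = np.ones(n)
    PS0[source] = 0.0                                   # source starts infected
    theta = np.ones(len(msgs))                          # P(no signal yet passed k -> i)
    phi = np.array([1.0 - PS0[k] for k, i in msgs])     # P(k infectious, not yet transmitted to i)
    PS_cav = np.array([PS0[k] for k, i in msgs])        # cavity P(k susceptible)
    for _ in range(T):
        theta = theta - lam * phi                       # transmission attempt this step
        PS_new = np.array([
            PS0[k] * np.prod([theta[idx[(j, k)]] for j in nbrs[k] if j != i])
            for k, i in msgs])
        phi = (1.0 - lam) * (1.0 - mu) * phi + (PS_cav - PS_new)
        PS_cav = PS_new
    # marginal probability that each node is still susceptible at time T
    return np.array([PS0[i] * np.prod([theta[idx[(j, i)]] for j in nbrs[i]])
                     for i in range(n)])

# toy chain 0-1-2-3 seeded at node 0
print(dmp_sir([(0, 1), (1, 2), (2, 3)], n=4, source=0, lam=0.5, mu=0.3, T=5))
```

On a tree these marginals are exact; source inference then scores candidate origins by how well the DMP marginals match an observed snapshot of the epidemic.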
66

Modern Stereo Correspondence Algorithms: Investigation and Evaluation

Olofsson, Anders January 2010 (has links)
Many different approaches have been taken towards solving the stereo correspondence problem and great progress has been made within the field during the last decade. This is mainly thanks to newly evolved global optimization techniques and better ways to compute pixel dissimilarity between views. The most successful algorithms are based on approaches that explicitly model smoothness assumptions made about the physical world, with image segmentation and plane fitting being two frequently used techniques.

Within the project, a survey of state-of-the-art stereo algorithms was conducted and the theory behind them is explained. Techniques found interesting were implemented for experimental trials, and an algorithm aiming to achieve state-of-the-art performance was implemented and evaluated. For several cases, state-of-the-art performance was reached.

To keep down the computational complexity, an algorithm relying on local winner-take-all optimization, image segmentation and plane fitting was compared against minimizing a global energy function formulated on pixel level. Experiments show that the local approach can in several cases match the global approach, but that problems sometimes arise, especially when large areas that lack texture are present. Such problematic areas are better handled by the explicit modeling of smoothness in global energy minimization.

Lastly, disparity estimation for image sequences was explored and some ideas on how to use temporal information were implemented and tried. The ideas mainly relied on motion detection to determine parts that are static in a sequence of frames. Stereo correspondence for sequences is a rather new research field, and there is still a lot of work to be done.
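As a small illustration of the local winner-take-all baseline discussed above (my sketch, not the project's implementation): a sum-of-absolute-differences (SAD) matching cost aggregated over a square window, followed by a per-pixel argmin over disparity.

```python
# Local winner-take-all stereo: SAD cost over a window, argmin over disparity.
import numpy as np
from scipy.ndimage import uniform_filter

def wta_disparity(left, right, max_disp, radius=3):
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # left pixel (y, x) is compared against right pixel (y, x - d)
        diff = np.abs(left[:, d:] - right[:, : w - d])
        cost[d, :, d:] = uniform_filter(diff, size=2 * radius + 1)
    return np.argmin(cost, axis=0)            # winner-take-all disparity map

left = np.random.default_rng(3).random((64, 96))
right = np.roll(left, -4, axis=1)             # synthetic horizontal shift of 4 px
print(np.bincount(wta_disparity(left, right, 8).ravel()).argmax())  # ~4
```

The failure mode the abstract mentions is visible here: in textureless regions the SAD cost is nearly flat across disparities, so the argmin is arbitrary, whereas a global energy's smoothness term would propagate reliable disparities into such regions.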
67

Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

Zhang, Daili 26 March 2010 (has links)
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system-level requirements: robustness, flexibility, reusability, and scalability. Corresponding to these four requirements, four major challenges arise. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method, as an implementation of distributed intelligent control, has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, focusing on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it groups the components at the same level into modules and designs common interfaces for all components in the same module; third, it replicates critical agents and organizes the replications into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle the uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, the engine balances communication cost against the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system.
However, in a real system, sub-Bayesian networks serving as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing the survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms with a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system designs of a simplified ship chilled water system and a notional ship chilled water system are demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
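The fully factorized BK approximation mentioned above admits a compact sketch (my illustration under simplifying assumptions, not the dissertation's code): propagate the belief through one DBN time slice exactly, condition on the evidence, then project the joint back onto a product of per-variable marginals.

```python
# One fully factorized Boyen-Koller (BK) update step (illustrative sketch).
import numpy as np

def bk_step(marginals, transition, evidence_lik):
    """marginals:    list of 1-D arrays, factored belief per state variable
    transition:   function mapping a joint belief array to the next-step joint
    evidence_lik: array shaped like the joint, likelihood of the observation"""
    joint = marginals[0]
    for m in marginals[1:]:                   # rebuild the (approximate) joint
        joint = np.multiply.outer(joint, m)
    joint = transition(joint) * evidence_lik  # exact propagation + conditioning
    joint /= joint.sum()
    # projection: retain only the per-variable marginals
    return [joint.sum(axis=tuple(a for a in range(joint.ndim) if a != i))
            for i in range(joint.ndim)]
```

Boyen and Koller showed that, under mixing assumptions, the error introduced by this repeated projection stays bounded over time, which is what makes it reasonable to update local beliefs more frequently than the global system belief.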
68

Coding techniques for information-theoretic strong secrecy on wiretap channels

Subramanian, Arunkumar 29 August 2011 (has links)
Traditional solutions to information security in communication systems act in the application layer and are oblivious to the effects in the physical layer. Physical-layer security methods, of which information-theoretic security is a special case, try to extract security from the random effects in the physical layer. In information-theoretic security, there are two asymptotic notions of secrecy: weak and strong secrecy. This dissertation investigates the problem of information-theoretic strong secrecy on the binary erasure wiretap channel (BEWC) with a specific focus on designing practical codes. The codes designed in this work are based on analysis and techniques from error-correcting codes. In particular, the duals of certain low-density parity-check (LDPC) codes are shown to achieve strong secrecy in a coset coding scheme. First, we analyze the asymptotic block-error rate of short-cycle-free LDPC codes when they are transmitted over a binary erasure channel (BEC) and decoded using the belief propagation (BP) decoder. Under certain conditions, we show that the asymptotic block-error rate falls according to an inverse square law in block length, which is shown to be a sufficient condition for the dual codes to achieve strong secrecy. Next, we construct large-girth LDPC codes using algorithms from graph theory and show that the asymptotic bit-error rate of these codes follows a sub-exponential decay as the block length increases, which is a sufficient condition for strong secrecy. The secrecy rates achieved by the duals of large-girth LDPC codes are shown to be an improvement over those of the duals of short-cycle-free LDPC codes.
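On the BEC, BP decoding reduces to iterative erasure-filling (peeling): any check node connected to exactly one erased bit resolves it. A minimal sketch, with a hypothetical small parity-check matrix as the usage example:

```python
# Peeling (BP) decoder for an LDPC-style code on the binary erasure channel.
import numpy as np

def bec_peel(H, rx):
    """H: parity-check matrix with 0/1 entries;
    rx: received word with bits in {0, 1} and -1 marking erasures."""
    bits = rx.copy()
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = np.flatnonzero((row == 1) & (bits == -1))
            if len(erased) == 1:               # this check resolves one unknown
                known = np.flatnonzero((row == 1) & (bits != -1))
                bits[erased[0]] = bits[known].sum() % 2
                progress = True
    return bits                                # any remaining -1 => decoder stalled

# toy example: three parity checks, all-zero codeword, two erasures
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
rx = np.array([0, 0, -1, 0, 0, -1, 0])
print(bec_peel(H, rx))                         # recovers the erased zeros
```

On the BEC, a block error under BP corresponds to this procedure stalling with erasures remaining, which is the event whose probability the asymptotic analysis tracks.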
69

Data Mining Meets HCI: Making Sense of Large Graphs

Chau, Duen Horng 01 July 2012 (has links)
We have entered the age of big data. Massive datasets are now common in science, government and enterprises. Yet, making sense of these data remains a fundamental challenge. Where do we start our analysis? Where to go next? How to visualize our findings? We answer these questions by bridging Data Mining and Human-Computer Interaction (HCI) to create tools for making sense of graphs with billions of nodes and edges, focusing on: (1) Attention Routing: we introduce this idea, based on anomaly detection, that automatically draws people’s attention to interesting areas of the graph to start their analyses. We present three examples: Polonium unearths malware from 37 billion machine-file relationships; NetProbe fingers bad guys who commit auction fraud. (2) Mixed-Initiative Sensemaking: we present two examples that combine machine inference and visualization to help users locate next areas of interest: Apolo guides users to explore large graphs by learning from few examples of user interest; Graphite finds interesting subgraphs, based on only fuzzy descriptions drawn graphically. (3) Scaling Up: we show how to enable interactive analytics of large graphs by leveraging Hadoop, staging of operations, and approximate computation. This thesis contributes to data mining, HCI, and importantly their intersection, including: interactive systems and algorithms that scale; theories that unify graph mining approaches; and paradigms that overcome fundamental challenges in visual analytics. Our work is making an impact on academia and society: Polonium protects 120 million people worldwide from malware; NetProbe made headlines on CNN, WSJ and USA Today; Pegasus won an open-source software award; Apolo helps DARPA detect insider threats and prevent exfiltration. We hope our Big Data Mantra “Machine for Attention Routing, Human for Interaction” will inspire more innovations at the crossroad of data mining and HCI.
70

Belief Propagation and Algorithms for Mean-Field Combinatorial Optimisations

Khandwawala, Mustafa January 2014 (has links) (PDF)
We study combinatorial optimization problems on graphs in the mean-field model, which assigns independent and identically distributed random weights to the edges of the graph. Specifically, we focus on two generalizations of minimum weight matching on graphs. The first problem, minimum cost edge cover, finds application in a computational linguistics problem of semantic projection. The second problem, minimum cost many-to-one matching, appears as an intermediate optimization step in the restriction scaffold problem applied to shotgun sequencing of DNA. For the minimum cost edge cover on a complete graph on n vertices, where the edge weights are independent exponentially distributed random variables, we show that the expectation of the minimum cost converges to a constant as n → ∞. For the minimum cost many-to-one matching on an n x m complete bipartite graph, scaling m as ⌈n/α⌉ for some fixed α > 1, we find the limit of the expected minimum cost as a function of α. For both problems, we show that a belief propagation algorithm converges asymptotically to the optimal solution. The belief propagation algorithm yields a near-optimal solution with lower complexity than the best known algorithms designed for optimality in worst-case settings. Our proofs use the machinery of the objective method and local weak convergence, ideas developed by Aldous for proving the ζ(2) limit for minimum cost bipartite matching. We use belief propagation as a constructive proof technique to supplement the objective method. Recursive distributional equations (RDEs) arise naturally in the objective method approach. In a class of RDEs that arise as extensions of the minimum weight matching and travelling salesman problems, we prove existence and uniqueness of a fixed point distribution, and characterize its domain of attraction.
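For concreteness, the flavour of belief propagation at issue can be sketched for min-cost perfect matching on a complete bipartite graph with i.i.d. random edge costs; this is my adaptation (to minimization) of the max-product matching updates of Bayati, Shah and Sharma, with hypothetical parameter values.

```python
# Min-sum BP for minimum cost perfect matching on a complete bipartite graph.
import numpy as np

def loo_min(A, axis=0):
    """Leave-one-out minimum along `axis`: entry r is the min with index r removed."""
    part = np.partition(A, 1, axis=axis)
    m1 = np.expand_dims(np.take(part, 0, axis=axis), axis)  # smallest
    m2 = np.expand_dims(np.take(part, 1, axis=axis), axis)  # second smallest
    return np.where(A == m1, m2, m1)

def bp_min_matching(c, n_iters=100):
    """c[i, j]: cost of matching left node i to right node j."""
    n = c.shape[0]
    N = np.zeros((n, n))                     # N[j, i]: message right j -> left i
    for _ in range(n_iters):
        M = c - loo_min(N, axis=0).T         # M[i, j]: message left i -> right j
        N = (c - loo_min(M, axis=0)).T
    return np.argmin(N, axis=0)              # matched right node for each left i

rng = np.random.default_rng(2)
c = rng.exponential(1.0, (8, 8))             # i.i.d. exponential costs (mean-field model)
print(bp_min_matching(c))
```

When the optimal matching is unique, these iterations are known to converge to it; the thesis extends this style of argument to the edge cover and many-to-one matching generalizations.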
