101

Modélisation multi-échelles et calculs parallèles appliqués à la simulation de l'activité neuronale / Multiscale modeling and parallel computations applied to the simulation of neuronal activity

Bedez, Mathieu 18 December 2015 (has links)
Computational neuroscience has produced mathematical and computational tools for building and then simulating models that reproduce the behaviour of certain components of the brain at the cellular scale. Such models help in understanding the physical and biochemical interactions between neurons, rather than aiming at a faithful reproduction of cognitive functions as in work on artificial intelligence. Building models that describe the brain as a whole, by homogenizing microscopic data, is more recent, because the geometric complexity of the brain's structures must be taken into account; a long reconstruction effort is therefore required before simulations can be run. Mathematically, the various models are described by systems of ordinary differential equations and partial differential equations. The main difficulty with these simulations is that the solution time can become very large when high accuracy is required on both the temporal and spatial scales. The purpose of this work is to study the models describing the electrical activity of the brain, using innovative techniques for parallelizing the computations, thereby saving time while obtaining highly accurate results. Four main threads address this problem: a description of the models, a presentation of the parallelization tools, and applications to two macroscopic models.
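
The abstract does not name a specific parallelization scheme, so as a purely illustrative hint of what parallel-in-time computation of an ODE looks like, here is a minimal sketch of the parareal iteration on a toy linear ODE (the model, propagators, and parameters are all hypothetical, not taken from the thesis):

```python
import numpy as np

def f(u):
    return -u  # simple linear decay standing in for a neuronal ODE

def coarse(u0, t0, t1):
    # one explicit Euler step over the whole window (cheap propagator)
    return u0 + (t1 - t0) * f(u0)

def fine(u0, t0, t1, n=100):
    # many small Euler steps (accurate propagator; these solves are
    # independent per window, which is where the parallelism lives)
    dt = (t1 - t0) / n
    u = u0
    for _ in range(n):
        u = u + dt * f(u)
    return u

T, N = 1.0, 8                       # horizon split into N time windows
ts = np.linspace(0.0, T, N + 1)
u = np.empty(N + 1); u[0] = 1.0

# initial coarse sweep
for i in range(N):
    u[i + 1] = coarse(u[i], ts[i], ts[i + 1])

# parareal corrections: fine solves run independently per window
for _ in range(3):
    fine_vals = [fine(u[i], ts[i], ts[i + 1]) for i in range(N)]    # parallelizable
    coarse_old = [coarse(u[i], ts[i], ts[i + 1]) for i in range(N)]
    new = u.copy()
    for i in range(N):
        new[i + 1] = coarse(new[i], ts[i], ts[i + 1]) + fine_vals[i] - coarse_old[i]
    u = new

print(u[-1], np.exp(-T))  # converges toward the exact solution
```
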
102

Low-density Parity-Check decoding Algorithms / Low-density Parity-Check avkodare algoritm

Pirou, Florent January 2004 (has links)
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, the efficient VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this Master's thesis report, following a background on error-correcting coding, we describe low-density parity-check codes and their decoding algorithm, as well as the requirements and architectures of LDPC decoder implementations.
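
As a minimal illustration of the parity-check principle behind LDPC codes (not code from the thesis; the matrix below is a toy example, far smaller and denser than a real LDPC matrix):

```python
import numpy as np

# Toy binary parity-check matrix H (hypothetical, for illustration only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(word):
    # A word is a valid codeword iff every parity check is satisfied,
    # i.e. H @ word = 0 over GF(2).
    return H @ word % 2

codeword = np.array([1, 0, 1, 1, 1, 0])
print(syndrome(codeword))                      # all zeros -> valid
received = codeword.copy(); received[2] ^= 1   # flip one bit
print(syndrome(received))                      # nonzero checks flag the error
```
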
103

Efficient Message Passing Decoding Using Vector-based Messages

Grimnell, Mikael, Tjäder, Mats January 2005 (has links)
The family of Low-Density Parity-Check (LDPC) codes is a strong candidate for Forward Error Correction (FEC) in future communication systems due to its strong error-correction capability. Most LDPC decoders use the Message Passing algorithm, an iterative algorithm that passes messages between variable nodes and check nodes. Only recently has computation power become sufficient to make Message Passing on LDPC codes feasible. Although locally simple, LDPC codes are usually large, which increases the required computation power. Earlier work on LDPC codes has concentrated on the binary Galois field, GF(2), but it has been shown that codes over higher-order fields have better error-correction capability. However, the most efficient LDPC decoder, the Belief Propagation decoder, suffers a quadratic complexity increase when moving to higher-order Galois fields. Transmission with M-PSK signalling is a common technique to increase spectral efficiency; the information is carried by the phase angle of the signal.

The focus of this Master's thesis is on simplifying Message Passing decoding when the inputs come from M-PSK signals transmitted over an AWGN channel. Symbols from higher-order Galois fields were mapped to M-PSK signals, since M-PSK is very bandwidth-efficient and the information lies in the angle of the signal. Several simplifications of Belief Propagation have been developed and tested. The most promising is the Table Vector Decoder, a Message Passing decoder that uses a table-lookup technique for the check-node operations and vector summation for the variable-node operations. The table lookup approximates the check-node operation of a Belief Propagation decoder, while vector summation serves as an equivalent of the variable-node operation. Monte Carlo simulations have shown that the Table Vector Decoder can achieve performance close to Belief Propagation. Its capability depends on the number of reconstruction points and their placement. The main advantage of the Table Vector Decoder is that its complexity is unaffected by the Galois field used; instead, there is a memory requirement that depends on the desired number of reconstruction points.
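
The thesis's Table Vector Decoder itself is not reproduced here; as a baseline for the message passing it simplifies, the sketch below implements standard min-sum decoding for a binary toy code (the matrix H and the LLR values are invented for illustration; positive LLR favours bit 0):

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def decode(llr, iters=10):
    m, n = H.shape
    msg = np.zeros((m, n))                       # check->variable messages
    for _ in range(iters):
        # variable-to-check: total belief minus the incoming edge message
        total = llr + msg.sum(axis=0)
        v2c = np.where(H, total - msg, 0.0)
        # check-to-variable (min-sum): sign product and minimum magnitude
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                sign = np.prod(np.sign(v2c[i, others]))
                msg[i, j] = sign * np.min(np.abs(v2c[i, others]))
        hard = (llr + msg.sum(axis=0)) < 0       # negative LLR -> bit 1
        if not (H @ hard % 2).any():             # all checks satisfied
            break
    return hard.astype(int)

# noisy LLRs for the codeword 101110 with an unreliable bit 2
llr = np.array([-2.0, +1.5, +0.3, -2.0, -1.8, +2.2])
print(decode(llr))   # recovers [1 0 1 1 1 0]
```
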
106

Decentralized Estimation Under Communication Constraints

Uney, Murat 01 August 2009 (has links)
In this thesis, we consider the problem of decentralized estimation under communication constraints in the context of Collaborative Signal and Information Processing. Motivated by sensor-network applications, our concern is a high volume of data collected at distinct locations, possibly in diverse modalities, together with the spatially distributed nature and the resource limitations of the underlying system. Designing processing schemes that match the constraints imposed by the system while providing reasonable accuracy has been a major challenge; we are particularly interested in the tradeoff between estimation performance and the utilization of communications subject to energy and bandwidth constraints. One remarkable approach to decentralized inference in sensor networks is to exploit graphical models together with message-passing algorithms. In this framework, the so-called information graph of the problem is constructed and then mapped onto the underlying network structure, which delivers the messages in accordance with the schedule of the inference algorithm. However, it is challenging to provide a design perspective that addresses the tradeoff between estimation accuracy and the cost of communications. Another approach has been to perform the estimation at a fusion center based on quantized information provided by the peripherals, where the fusion and quantization rules are sought while taking a restricted set of communication constraints into account. We consider two classes of in-network processing strategies that cover a broad range of constraints and yield tractable Bayesian risks capturing the cost of communications as well as the penalty for estimation errors. A rigorous design setting is obtained in the form of a constrained optimization problem over these Bayesian risks. Such processing schemes, and the structures their solutions exhibit, have previously been studied in the context of decentralized detection, where a decision among finitely many choices is made. We adopt this framework for the estimation problem. In this case, however, computationally infeasible solutions arise, involving integral operators that are in general impossible to evaluate exactly. In order not to compromise the fidelity of the model, we develop an approximation framework using Monte Carlo methods and obtain particle representations and approximate computational schemes for both the in-network processing strategies and the solution schemes of the design problem. In doing so, we can produce approximate strategies for decentralized estimation networks under the communication constraints captured by the framework, including the cost. The proposed Monte Carlo optimization procedures operate in a scalable and efficient manner and can produce results for any family of distributions of concern, provided that samples can be generated from the marginals. In addition, this approach enables a quantification of the tradeoff between estimation accuracy and the cost of communications through a parameterized Bayesian risk.
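
As a loose illustration of the kind of quantize-then-fuse strategy and Monte Carlo risk evaluation the abstract describes (all thresholds, costs, and priors below are invented placeholders, not the thesis's design):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(y, thresholds):
    # peripheral node: map a noisy observation to a small message index
    return int(np.searchsorted(thresholds, y))

def fuse(msgs, levels):
    # fusion center: average the reconstruction levels of the messages
    return np.mean([levels[m] for m in msgs])

thresholds = np.array([-1.0, 0.0, 1.0])      # 2-bit quantizer
levels = np.array([-1.5, -0.5, 0.5, 1.5])    # reconstruction points
lam = 0.01                                   # price per transmitted bit

def bayes_risk(n_sensors=3, n_mc=20000):
    # Monte Carlo estimate of a Bayesian risk combining squared
    # estimation error with a communication cost term
    x = rng.normal(0.0, 1.0, n_mc)                       # prior samples
    risk = 0.0
    for xi in x:
        y = xi + rng.normal(0.0, 0.5, n_sensors)         # sensor noise
        msgs = [quantize(v, thresholds) for v in y]
        xhat = fuse(msgs, levels)
        risk += (xhat - xi) ** 2 + lam * 2 * n_sensors   # error + comms
    return risk / n_mc

print(bayes_risk())
```
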
107

Execution of Distributed Database Queries on an HPC System

Onder, Ibrahim Seckin 01 January 2010 (has links)
The increasing performance of computers and the ability to connect them with high-speed communication networks make distributed database systems an attractive research area. In this study, we evaluate the communication and data-processing capabilities of an HPC machine. We calculate accurate cost formulas for high-volume data communication between processing nodes and experimentally measure sorting times. A left-deep query-plan executor has been implemented and used experimentally to execute plans generated by two different genetic algorithms for a distributed database environment, using the message-passing paradigm, in order to show that a parallel system can provide scalable performance as the number of nodes used for storing database relations and for processing increases. We compare the performance of the plans generated by the genetic algorithms with optimal plans generated by an exhaustive search algorithm. Our results verify that the optimal plans are better than those of the genetic algorithms, as expected.
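
A sketch of the flavor of cost formula involved, assuming a simple latency-plus-bandwidth model (the constants and the join-selectivity rule are made-up placeholders, not the thesis's calibrated formulas):

```python
ALPHA = 1e-4   # per-message startup latency, seconds (hypothetical)
BETA = 1e9     # sustained bandwidth, bytes/second (hypothetical)

def comm_cost(num_bytes):
    # time to ship an intermediate relation between processing nodes
    return ALPHA + num_bytes / BETA

def left_deep_plan_cost(relation_sizes, selectivity=1e-7):
    # cost a left-deep join plan: the running intermediate result stays
    # put and each remaining relation is shipped to it in turn
    total = 0.0
    intermediate = relation_sizes[0]
    for size in relation_sizes[1:]:
        total += comm_cost(size)                          # ship next relation
        intermediate = selectivity * intermediate * size  # toy join size
    return total, intermediate

cost, final_size = left_deep_plan_cost([5e6, 2e6, 8e6])   # bytes per relation
print(cost, final_size)
```
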
108

Digit-Online LDPC Decoding

Marshall, Philip A. Unknown Date
No description available.
109

Cumulative Distribution Networks: Inference, Estimation and Applications of Graphical Models for Cumulative Distribution Functions

Huang, Jim C. 01 March 2010 (has links)
This thesis presents a class of graphical models for directly representing the joint cumulative distribution function (CDF) of many random variables, called cumulative distribution networks (CDNs). Unlike graphical models for probability density and mass functions, in a CDN the marginal probabilities for any subset of variables are obtained by computing limits of functions in the model. We will show that the conditional independence properties in a CDN are distinct from the conditional independence properties of directed, undirected and factor graph models, but include the conditional independence properties of bidirected graphical models. As a result, CDNs are a parameterization for bidirected models that allows us to represent complex statistical dependence relationships between observable variables. We will provide a method for constructing a factor graph model with additional latent variables for which graph separation of variables in the corresponding CDN implies conditional independence of the separated variables in both the CDN and in the factor graph with the latent variables marginalized out. This will then allow us to construct multivariate extreme value distributions for which both a CDN and a corresponding factor graph representation exist. In order to perform inference in such graphs, we describe the 'derivative-sum-product' (DSP) message-passing algorithm, where messages correspond to derivatives of the joint cumulative distribution function. We will then apply CDNs to the problem of learning to rank, or estimating parametric models for ranking, where CDNs provide a natural means with which to model multivariate probabilities over ordinal variables such as pairwise preferences. We will show that many previous probability models for rank data, such as the Bradley-Terry and Plackett-Luce models, can be viewed as particular types of CDN. Applications of CDNs will be described for the problems of ranking players in multiplayer team-based games, document retrieval, and discovering regulatory sequences in computational biology, using the above methods for inference and estimation of CDNs.
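
A minimal sketch of the CDN idea, assuming a single bivariate extreme-value CDF factor (a standard logistic/Gumbel model chosen for illustration; theta is a made-up dependence parameter, and the finite-difference derivative merely stands in for the DSP algorithm's symbolic messages):

```python
import numpy as np

THETA = 2.0  # hypothetical dependence parameter

def factor(x, y, theta=THETA):
    # bivariate extreme-value CDF (Gumbel / logistic model)
    return np.exp(-(np.exp(-theta * x) + np.exp(-theta * y)) ** (1.0 / theta))

def joint_cdf(x1, x2):
    # one-factor CDN: F(x1, x2) is just the factor itself; larger CDNs
    # multiply one such CDF factor per clique of the graph
    return factor(x1, x2)

def marginal_cdf(x1, big=50.0):
    # marginalization in a CDN = taking the limit of the CDF in the
    # other arguments, approximated here by a large value
    return joint_cdf(x1, big)

def density(x1, x2, h=1e-4):
    # mixed derivative d^2 F / dx1 dx2 via finite differences; the DSP
    # algorithm organizes exactly these derivatives as messages
    return (joint_cdf(x1 + h, x2 + h) - joint_cdf(x1 + h, x2 - h)
            - joint_cdf(x1 - h, x2 + h) + joint_cdf(x1 - h, x2 - h)) / (4 * h * h)

print(marginal_cdf(0.0))   # ~ exp(-1), the standard Gumbel CDF at 0
print(density(0.0, 0.0))
```
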
110

Algorithmique distribuée asynchrone avec une majorité de pannes / Asynchronous distributed computing with a majority of crashes

Bonnin, David 24 November 2015 (has links)
In distributed computing, the asynchronous message-passing model with crashes is well known and used in many papers because of its realism; it is simple enough to be used and rich enough to represent many real problems. In this model, n processes communicate by exchanging messages, but without any bound on communication delays: a message may take an arbitrarily long time to reach its destination. Moreover, up to f of the n processes may crash, and thus permanently stop working. These crashes are undetectable because of the asynchrony of the system, and they limit what can be achieved in this model. In many cases, the known results in such systems require a strict minority of crashes; this applies, for example, to the implementation of atomic registers and to the solution of renaming. This majority-of-crashes barrier, explained by the CAP theorem, applies to many problems, and as a result the asynchronous message-passing model with a majority of crashes is little studied and rather poorly understood. It is therefore interesting to investigate what can be done in this setting. This thesis seeks a better understanding of the model with a majority of crashes, through two main problems. First, we study the implementation of shared objects similar to the usual registers, by defining x-colored register banks and α-registers. Second, the renaming problem is extended to k-redundant renaming, in both its one-shot and long-lived versions, and likewise the shared objects called splitters are extended to k-splitters.
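
The majority barrier mentioned above can be seen in one line of arithmetic: ABD-style register emulations wait for acknowledgements from n - f processes, and two such quorums are guaranteed to intersect only when f < n/2. A trivial check (illustrative arithmetic only, not code from the thesis):

```python
def quorums_intersect(n, f):
    # two quorums of size n - f share at least one process iff 2(n - f) > n,
    # which simplifies to f < n/2 -- the strict-minority condition
    return 2 * (n - f) > n

for n, f in [(5, 2), (5, 3), (7, 3), (7, 4)]:
    print(f"n={n}, f={f}: intersection guaranteed: {quorums_intersect(n, f)}")
```
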
