201 |
Métaheuristiques hybrides distribuées et massivement parallèles / Hybrid, distributed and massively parallel metaheuristics. Abdelkafi, Omar, 07 November 2016 (has links)
Many optimization problems specific to different industrial and academic sectors (energy, chemistry, transport, etc.) require increasingly effective solution methods. To meet these needs, the aim of this thesis is to develop a library of hybrid, distributed and massively parallel metaheuristics. We first studied the traveling salesman problem and its resolution by the ant colony method in order to establish the hybridization and parallelization techniques. Two further optimization problems were then addressed: the quadratic assignment problem (QAP) and the zeolite structure problem (ZSP). For the QAP, several variants based on an iterative tabu search with adaptive diversification were proposed; the aim of these proposals is to study the impact of data exchange, diversification strategies and cooperation methods. Our best variant is compared with six of the leading works in the literature. For the ZSP, two new formulations of the objective function, based on a penalty-and-reward principle, are proposed to evaluate the potential of the zeolite structures found. Two hybrid, parallel genetic algorithms are proposed to generate stable zeolite structures. Our algorithms have so far generated six stable topologies, three of which are not listed on the SC-IZA website or in the Atlas of Prospective Zeolite Structures.
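To make the QAP component concrete, here is a minimal sketch of an iterative tabu search for the QAP on a made-up 4x4 instance. It is an illustration only, under assumed parameters (tabu tenure, iteration count); the adaptive diversification, data exchange and parallel cooperation studied in the thesis are not reproduced here.

```python
# Minimal tabu search sketch for the QAP (illustrative only; hypothetical instance).
import random

def qap_cost(perm, flow, dist):
    """Cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def tabu_search(flow, dist, iters=500, tenure=5, seed=0):
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    best, best_cost = perm[:], qap_cost(perm, flow, dist)
    tabu = {}  # swap (i, j) -> iteration until which it stays forbidden
    for it in range(iters):
        candidates = []
        for i in range(n):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]
                c = qap_cost(perm, flow, dist)
                perm[i], perm[j] = perm[j], perm[i]
                # Allow non-tabu moves, plus tabu moves that beat the best (aspiration).
                if tabu.get((i, j), -1) < it or c < best_cost:
                    candidates.append((c, i, j))
        if candidates:
            _, i, j = min(candidates)
        else:  # every move is tabu and none aspirates: take a random swap
            i, j = sorted(rng.sample(range(n), 2))
        perm[i], perm[j] = perm[j], perm[i]
        tabu[(i, j)] = it + tenure
        c = qap_cost(perm, flow, dist)
        if c < best_cost:
            best, best_cost = perm[:], c
    return best, best_cost

# Made-up 4x4 flow and distance matrices, for illustration only.
flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
dist = [[0, 22, 53, 53], [22, 0, 40, 62], [53, 40, 0, 55], [53, 62, 55, 0]]
print(tabu_search(flow, dist))
```

A parallel variant would run several such searches concurrently and exchange elite permutations between them, which is the kind of cooperation the thesis evaluates.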
|
202 |
Modélisation multi-échelles et calculs parallèles appliqués à la simulation de l'activité neuronale / Multiscale modeling and parallel computations applied to the simulation of neuronal activity. Bedez, Mathieu, 18 December 2015 (has links)
Computational neuroscience has produced mathematical and computational tools for building and then simulating models of the behaviour of certain components of the brain at the cellular level. These models are useful for understanding the physical and biochemical interactions between neurons, rather than for faithfully reproducing cognitive functions as in work on artificial intelligence. Building models that describe the brain as a whole, by homogenizing microscopic data, is more recent, because the geometric complexity of the structures making up the brain must be taken into account; a long reconstruction effort is therefore needed before simulations can be run. From a mathematical point of view, the models are described by systems of ordinary differential equations and partial differential equations. The major difficulty with these simulations is that the solution time can become very long when high accuracy is required on both the temporal and the spatial scales. The purpose of this study is to investigate models describing the electrical activity of the brain using innovative techniques for parallelizing the computations, thereby saving time while obtaining very accurate results. Four main parts address this problem: a description of the models, an explanation of the parallelization tools, and applications to two macroscopic models.
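As a toy illustration of the kind of computation being parallelized (not the thesis' homogenized models or its parallelization tools), the sketch below integrates a set of independent FitzHugh-Nagumo neurons with an explicit Euler scheme and distributes them across worker processes; the parameter values, step size and input currents are assumptions made for the example.

```python
# Toy illustration only: explicit-Euler integration of independent FitzHugh-Nagumo
# neurons, distributed over processes with multiprocessing.
from multiprocessing import Pool

def simulate_neuron(args):
    i_ext, dt, steps = args
    v, w = -1.0, 1.0                        # membrane-like and recovery variables
    for _ in range(steps):
        dv = v - v ** 3 / 3.0 - w + i_ext   # FitzHugh-Nagumo dynamics
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
    return v

if __name__ == "__main__":
    inputs = [(0.3 + 0.01 * k, 0.01, 100_000) for k in range(32)]  # 32 neurons
    with Pool() as pool:                    # one neuron per worker task
        final_voltages = pool.map(simulate_neuron, inputs)
    print(final_voltages[:4])
```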
|
203 |
Low-density Parity-Check decoding Algorithms / Low-density Parity-Check avkodare algoritm. Pirou, Florent, January 2004 (has links)
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, an effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this master's thesis report, following an error-coding background, we describe low-density parity-check codes and their decoding algorithm, as well as the requirements and architectures of LDPC decoder implementations.
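For readers unfamiliar with the decoding algorithm mentioned above, here is a minimal, textbook-style sum-product (belief propagation) decoder for a binary LDPC code. It is a generic sketch, not the decoder architectures studied in the thesis, and the small parity-check matrix and LLR values at the end are made up for illustration.

```python
# Minimal sum-product (belief propagation) decoder for a binary LDPC code.
import numpy as np

def decode_ldpc(H, llr, max_iter=50):
    """H: (m, n) parity-check matrix over GF(2); llr[i] = log P(bit i = 0) / P(bit i = 1)."""
    m, n = H.shape
    msg_vc = np.tile(llr, (m, 1)) * H               # variable-to-check messages
    for _ in range(max_iter):
        # Check-node update: tanh rule, excluding the target edge.
        t = np.tanh(np.where(H == 1, msg_vc / 2.0, 1.0))
        t = np.where(np.abs(t) < 1e-12, 1e-12, t)   # guard against division by zero
        prod = np.prod(t, axis=1, keepdims=True)
        msg_cv = 2.0 * np.arctanh(np.clip(prod / t, -0.999999, 0.999999)) * H
        # Variable-node update: channel LLR plus all other incoming check messages.
        total = llr + msg_cv.sum(axis=0)
        msg_vc = (total - msg_cv) * H
        hard = (total < 0).astype(int)              # tentative hard decision
        if not np.any(H @ hard % 2):                # all parity checks satisfied
            return hard
    return hard

# Tiny made-up example: (7,4) Hamming parity-check matrix and noisy channel LLRs.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.5, -0.3, 3.1, 1.9, 2.2, 2.8, 2.4])  # one weakly received bit
print(decode_ldpc(H, llr))
```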
|
204 |
Efficient Message Passing Decoding Using Vector-based Messages. Grimnell, Mikael; Tjäder, Mats, January 2005 (links)
The family of Low Density Parity Check (LDPC) codes is a strong candidate for Forward Error Correction (FEC) in future communication systems due to its strong error-correction capability. Most LDPC decoders use the Message Passing algorithm for decoding, an iterative algorithm that passes messages between its variable nodes and check nodes. Only recently has computation power become sufficient to make Message Passing on LDPC codes feasible. Although locally simple, LDPC codes are usually large, which increases the required computation power. Earlier work on LDPC codes has concentrated on the binary Galois field, GF(2), but it has been shown that codes from higher-order fields have better error-correction capability. However, the most efficient LDPC decoder, the Belief Propagation decoder, suffers a squared complexity increase when moving to higher-order Galois fields. Transmission over a channel with M-PSK signalling is a common technique to increase spectral efficiency; the information is transmitted as the phase angle of the signal.

The focus of this Master's thesis is on simplifying Message Passing decoding with inputs from M-PSK signals transmitted over an AWGN channel. Symbols from higher-order Galois fields were mapped to M-PSK signals, since M-PSK is very bandwidth-efficient and the information can be found in the angle of the signal. Several simplifications of Belief Propagation have been developed and tested. The most promising is the Table Vector Decoder, a Message Passing decoder that uses a table-lookup technique for check-node operations and vector summation for variable-node operations. The table lookup approximates the check-node operation of a Belief Propagation decoder, and vector summation is used as an equivalent to the variable-node operation. Monte Carlo simulations have shown that the Table Vector Decoder can achieve performance close to Belief Propagation. Its capability depends on the number of reconstruction points and their placement. The main advantage of the Table Vector Decoder is that its complexity is unaffected by the Galois field used; instead, there is a memory requirement that depends on the desired number of reconstruction points.
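As a rough illustration of the two node operations described above (and not the decoder developed in the thesis), the sketch below works with log-domain message vectors over GF(4): the variable-node operation is a plain vector summation, and the check-node operation is approximated by a lookup into a small table precomputed at a handful of assumed reconstruction points. Edge coefficients, extrinsic exclusion and normalization are deliberately omitted to keep the sketch short.

```python
# Hypothetical toy over GF(4): the flavour of the Table Vector Decoder's node
# operations, not the thesis' implementation. Messages are length-4 log-prob vectors.
import itertools
import numpy as np

GF4_ADD = np.array([[0, 1, 2, 3],   # addition in GF(4): bitwise XOR of symbol indices
                    [1, 0, 3, 2],
                    [2, 3, 0, 1],
                    [3, 2, 1, 0]])

def variable_node(channel_log, incoming):
    """Simplified variable-node operation: vector summation of log-domain messages."""
    return channel_log + sum(incoming)

def exact_check_node(m1, m2):
    """Exact degree-3 check-node output (log domain) for two incoming messages."""
    p1, p2 = np.exp(m1), np.exp(m2)
    out = np.array([sum(p1[b] * p2[GF4_ADD[a, b]] for b in range(4)) for a in range(4)])
    return np.log(out)

# Assumed reconstruction points: a coarse set of "typical" message vectors.
POINTS = [np.log(np.full(4, 0.25)),                                  # uninformative
          *(np.log(np.where(np.arange(4) == s, 0.7, 0.1)) for s in range(4))]

def nearest_point(msg):
    return min(range(len(POINTS)), key=lambda k: np.sum((msg - POINTS[k]) ** 2))

# Precomputed check-node table indexed by the quantized incoming pair.
TABLE = {(i, j): exact_check_node(POINTS[i], POINTS[j])
         for i, j in itertools.product(range(len(POINTS)), repeat=2)}

def table_check_node(m1, m2):
    """Approximate check-node operation via table lookup."""
    return TABLE[(nearest_point(m1), nearest_point(m2))]
```

The number and placement of the reconstruction points govern the quality of the approximation, mirroring the tradeoff reported above.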
|
205 |
Axiological Investigations. Olson, Jonas, January 2005 (links)
The subject of this thesis is formal axiology, i.e., the discipline that deals with structural and conceptual questions about value. The main focus is on intrinsic or final value. The thesis consists of an introduction and six free-standing essays. The purpose of the introduction is to give a general background to the discussions in the essays. The introduction is divided into five sections. Section 1 outlines the subject matter and sketches the methodological framework. Section 2 discusses the supervenience of value, and how my use of that notion squares with the broader methodological framework. Section 3 defends the concept of intrinsic or final value. Section 4 discusses issues in value typology; particularly how intrinsic value relates to final value. Section 5 summarises the essays and provides some specific backgrounds to their respective themes.

The six essays are thematically divided into four categories: The first two deal with specific issues concerning analyses of value. Essay 1 is a comparative discussion of competing approaches in this area. Essay 2 discusses, and proposes a solution to, a significant problem for the so called ‘buck-passing’ analysis of value. Essay 3 discusses the ontological nature of the bearers of final value, and defends the view that they are particularised properties, or tropes. Essay 4 defends conditionalism about final value, i.e., the idea that final value may vary according to context. The last two essays focus on some implications of the formal axiological discussion for normative theory: Essay 5 discusses the charge that the buck-passing analysis prematurely resolves the debate between consequentialism and deontology; essay 6 suggests that conditionalism makes possible a reconciliation between consequentialism and moral particularism.
|
209 |
Civil Law Claims on the Enforcement of Competition Rules: A Comparative Study of US, EU and Turkish Laws. Bulbul, Asli, 01 December 2006 (links) (PDF)
Private enforcement, which primarily represents individuals' right to claim damages arising from violations of competition law, supplements public enforcement and ensures compensation of individual loss. However, in the legal systems examined in this study, other than that of the USA, private enforcement of competition law has fallen behind public enforcement. Recognizing this, the European Commission has recently focused on the enhancement and facilitation of private enforcement in Community competition law. The lag in private enforcement stems mainly from cultural and traditional differences in the understanding of liability law between the Anglo-Saxon and Continental traditions: Anglo-Saxon law is inclined to leave the matter to individual action, whereas Continental law favours strengthening regulatory mechanisms. More specific obstacles to the improvement of private enforcement include, among others, the indefinite legal basis of claims, the complex economic analysis involved in stating a case, courts' lack of technical knowledge, the unclear relationship between the judiciary and competition authorities, difficulties in proving damage and causality, and the absence of facilitating procedural mechanisms such as class actions, treble damages and discovery rights. In the Community law context, peculiar problems arising from the co-existence of different national laws are also very likely to be encountered. Additionally, implementation of Community competition law by national authorities may weaken the Single Market objective. Throughout this study, we present possible solutions by depicting all of these problems.
|
210 |
Decentralized Estimation Under Communication Constraints. Uney, Murat, 01 August 2009 (links) (PDF)
In this thesis, we consider the problem of decentralized estimation under communication constraints in the context of Collaborative Signal and Information Processing. Motivated by sensor network applications, our concern is a high volume of data collected at distinct locations, possibly in diverse modalities, together with the spatially distributed nature and the resource limitations of the underlying system. Designing processing schemes that match the constraints imposed by the system while providing reasonable accuracy has been a major challenge; we are particularly interested in the tradeoff between estimation performance and the utilization of communications subject to energy and bandwidth constraints.

One remarkable approach for decentralized inference in sensor networks is to exploit graphical models together with message passing algorithms. In this framework, after the so-called information graph of the problem is constructed, it is mapped onto the underlying network structure, which is responsible for delivering the messages in accordance with the schedule of the inference algorithm. However, it is challenging to provide a design perspective that addresses the tradeoff between estimation accuracy and the cost of communications. Another approach has been to perform the estimation at a fusion center based on quantized information provided by the peripherals, in which the fusion and quantization rules are sought while taking a restricted set of the communication constraints into account.

We consider two classes of in-network processing strategies which cover a broad range of constraints and yield tractable Bayesian risks that capture the cost of communications as well as the penalty for estimation errors. A rigorous design setting is obtained in the form of a constrained optimization problem utilizing these Bayesian risks. Such processing schemes, together with the structures their solutions exhibit, have previously been studied in the context of decentralized detection, in which a decision out of finitely many choices is made.

We adopt this framework for the estimation problem. In this case, however, computationally infeasible solutions arise, involving integral operators that are in general impossible to evaluate exactly. In order not to compromise the fidelity of the model, we develop an approximation framework using Monte Carlo methods and obtain particle representations and approximate computational schemes for both the in-network processing strategies and the solution schemes to the design problem. In doing so, we can produce approximating strategies for decentralized estimation networks under communication constraints captured by the framework, including their cost. The proposed Monte Carlo optimization procedures operate in a scalable and efficient manner and can produce results for any family of distributions of concern, provided that samples can be produced from the marginals. In addition, this approach enables a quantification of the tradeoff between estimation accuracy and the cost of communications through a parameterized Bayesian risk.
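As a loose illustration of the kind of Monte Carlo approximation of a Bayesian risk described above (not the strategies or optimization procedures developed in the thesis), the sketch below estimates, by sampling, a risk that adds a squared estimation error to a per-bit communication penalty for a single quantizing sensor; the scalar Gaussian model, quantizer, fusion rule and cost weight are all assumptions made for the example.

```python
# Loose sketch: Monte Carlo estimate of a Bayesian risk combining squared error
# with a per-bit communication penalty, for a single quantizing sensor.
import numpy as np

rng = np.random.default_rng(0)

def quantize(y, bits, lo=-4.0, hi=4.0):
    """Uniform quantizer: return the centre of the bin containing y."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((y - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

def monte_carlo_risk(bits, n_samples=100_000, noise_std=0.5, comm_weight=0.01):
    """Estimate E[(x - xhat)^2] + comm_weight * bits by sampling (x, y)."""
    x = rng.standard_normal(n_samples)                   # prior: x ~ N(0, 1)
    y = x + noise_std * rng.standard_normal(n_samples)   # sensor observation
    q = quantize(y, bits)                                # message sent to the fusion center
    xhat = q / (1.0 + noise_std ** 2)                    # crude linear fusion rule
    return np.mean((x - xhat) ** 2) + comm_weight * bits

# Sweep the rate to expose the accuracy / communication-cost tradeoff.
for bits in (1, 2, 3, 4, 6, 8):
    print(bits, round(monte_carlo_risk(bits), 4))
```

Sweeping a single rate parameter only hints at the tradeoff; the thesis optimizes over the in-network processing strategies themselves via the parameterized Bayesian risk.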
|