291

Coherent network error correction / 網絡編碼與糾錯 (Network coding and error correction) / CUHK electronic theses & dissertations collection

January 2008 (has links)
Network error correction provides a new way to correct errors in network communications by extending the strength of classical error-correcting codes from the point-to-point model to networks. This thesis considers a number of fundamental problems in coherent network error correction.

First, the error correction and detection capabilities of a network code are completely characterized by a parameter which, when the network code is linear and the weight measure on the error vectors is the Hamming weight, is equivalent to the minimum Hamming distance. Our results imply that for a linear network code with the Hamming weight as the weight measure on the error vectors, the capability of the code is fully characterized by a single minimum distance. By contrast, for a nonlinear network code, two different minimum distances are needed to characterize its capabilities for error correction and for error detection. This leads to the surprising discovery that for a nonlinear network code, the number of correctable errors can be more than half the number of detectable errors. (For classical algebraic codes, the number of correctable errors is always the largest integer not greater than half the number of detectable errors.)

Based on the weight properties of network codes, we present refined versions of the Hamming bound, the Singleton bound and the Gilbert-Varshamov bound for linear network codes. We give two different algorithms for constructing network codes with minimum distance constraints, both of which achieve the refined Singleton bound. The first algorithm finds a codebook based on a given set of local encoding kernels defining a linear network code. The second algorithm finds a set of local encoding kernels based on a given classical error-correcting code satisfying a certain minimum distance requirement.

We further define equivalence classes of weight measures with respect to a general channel model. Specifically, for any given channel, the minimum weight decoders for two different weight measures are equivalent if the two weight measures belong to the same equivalence class. In the special case of network coding, we study four weight measures and show that all four are in the same equivalence class for linear network codes; hence they are all equivalent for error correction and detection under minimum weight decoding.

Yang, Shenghao. / Adviser: Raymond W.H. Yeung. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3708. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 89-93). / Abstracts in English and Chinese. / School code: 1307.
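For reference, the classical relation invoked in the parenthetical above can be written out, together with one common rendering of the refined Singleton bound for linear network codes; the notation here (ω for the source message dimension, m_t for the max-flow to sink node t) is an assumption of this sketch and should be checked against the thesis itself.

```latex
% Classical codes: minimum distance d gives
%   detectable errors:  d - 1
%   correctable errors: floor((d - 1) / 2)
t_{\mathrm{detect}} = d - 1, \qquad
t_{\mathrm{correct}} = \left\lfloor \tfrac{d - 1}{2} \right\rfloor
% Refined Singleton bound at sink t (assumed notation:
% \omega = source dimension, m_t = max-flow from source to t):
d_{\min}(t) \;\le\; m_t - \omega + 1
```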
292

Information transfer in open quantum systems

Levi, Elliott Kendrick January 2017 (has links)
This thesis covers open quantum systems and information transfer in the face of dissipation and disorder, studied through numerical simulation.

In Chapter 3 we present work on an open quantum system comprising a two-level system, a single bosonic mode and a dissipative environment, with the bosonic mode included in the exact system treatment. This model allows us to understand the environment's role in small energy-transfer systems. We observe how the coupling strength between the two-level system and the mode interplays with the form of the spectral density characterising the environment, affecting the system's coherent behaviour. We find that strong coupling, together with a spectral density resonantly peaked at the two-level system's oscillation frequency, enhances the system's coherent oscillatory dynamics.

Chapter 4 focusses on a physically motivated study of chain and ladder spin geometries and their use for entanglement transfer between qubits. We consider a nitrogen-vacancy-centre qubit implementation with nitrogen-impurity spin channels and demonstrate how matrix product operator techniques can be used to simulate this physical system. We investigate how coupling parameters and environmental decay rates affect transfer efficiency. We then simulate, in turn, the effects of missing channel spins and of disorder in the spin-spin coupling, and conclude by highlighting where each of the considered channel geometries outperforms the other.

The work in Chapter 5 is an investigation into the feasibility of routing entanglement between distant qubits in 2D spin networks. We no longer consider a specific physical implementation, but keep in mind the effects of dissipative environments on entanglement-transfer systems. Starting with a single sending qubit-ancilla pair and multiple addressable receivers, we show it is possible to target a specific receiver and establish transferred entanglement between it and the sender's ancilla through eigenstate-tunnelling techniques. We proceed to show that eigenstate-tunnelling-mediated entanglement transfer can be achieved simultaneously from two senders across one spin network.
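To make the Chapter 3 setup concrete, the following is a minimal sketch (not the thesis code) of a two-level system coupled to a single bosonic mode with a dissipative environment, evolved with QuTiP's Lindblad master-equation solver; all parameter values are illustrative assumptions.

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor, sigmaz, sigmam, mesolve

N = 10                                   # mode truncation (assumed)
wq, wc, g, kappa = 1.0, 1.0, 0.2, 0.05   # illustrative parameters

a = tensor(qeye(2), destroy(N))          # bosonic mode operator
sz = tensor(sigmaz(), qeye(N))           # two-level system operators
sm = tensor(sigmam(), qeye(N))

# Jaynes-Cummings-type Hamiltonian: TLS + mode + exchange coupling
H = 0.5 * wq * sz + wc * a.dag() * a + g * (sm.dag() * a + sm * a.dag())

# Dissipation acting on the mode, standing in for the environment
c_ops = [np.sqrt(kappa) * a]

psi0 = tensor(basis(2, 0), basis(N, 0))  # excited TLS, empty mode
tlist = np.linspace(0, 50, 500)

# Track the TLS excitation to see coherent oscillation vs. damping
result = mesolve(H, psi0, tlist, c_ops, e_ops=[sm.dag() * sm])
```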
293

Adaptation from interactions between metabolism and behaviour : self-sensitive behaviour in protocells

Egbert, Matthew January 2012 (has links)
This thesis considers the relationship between adaptive behaviour and metabolism, using theoretical arguments supported by computational models to demonstrate mechanisms of adaptation that are uniquely available to systems based upon the metabolic organisation of self-production. It is argued that, by being sensitive to its metabolic viability, an organism can respond to the quality of its environment with respect to its metabolic well-being. This makes possible simple but powerful 'self-sensitive' adaptive behaviours such as "If I am healthy now, keep doing the same as I have been doing; otherwise, do something else." This strategy provides several adaptive benefits, including the ability to respond appropriately to phenomena never previously experienced by the organism nor by any of its ancestors; the ability to integrate different environmental influences to produce an appropriate response; and sensitivity to the organism's present context and history of experience.

Computational models are used to demonstrate these capabilities, as well as the possibility that self-sensitive adaptive behaviour can facilitate the adaptive evolution of populations of self-sensitive organisms through (i) processes similar to the Baldwin effect, (ii) an increased likelihood of speciation events, and (iii) automatic behavioural adaptation to changes in the organism itself (such as genetic changes). In addition to these theoretical contributions, a computational model of self-sensitive behaviour is presented that recreates chemotaxis patterns observed in bacteria such as Azospirillum brasilense and Campylobacter jejuni. The models also suggest new explanations for previously unexplained asymmetric distributions of bacteria performing aerotaxis.

More broadly, the work advocates further research into the relationship between behaviour and the metabolic organisation of self-production, an organisational property shared by all life. It also serves as an example of how abstract models that target theoretical concepts rather than natural phenomena can play a valuable role in the scientific endeavour.
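The quoted strategy is simple enough to sketch directly. The toy agent below (an illustrative construction, not the thesis model) keeps its current action while its viability is not declining and picks a random new action otherwise, echoing bacterial run-and-tumble behaviour; the viability measure and environment are assumptions.

```python
import random

def self_sensitive_step(viability, prev_viability, action, actions):
    """One step of the rule: if health is not declining, keep the current
    behaviour; otherwise switch to a randomly chosen alternative."""
    if viability >= prev_viability:
        return action
    return random.choice(actions)

# Toy environment: viability is higher the closer the agent is to a
# resource at position 0 (an illustrative choice).
position, action = 10.0, 1.0
actions = [1.0, -1.0]
prev_v = -abs(position)
for _ in range(100):
    position += action
    v = -abs(position)
    action = self_sensitive_step(v, prev_v, action, actions)
    prev_v = v
print(round(position, 1))  # typically ends up oscillating near the resource
```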
294

The locus and source of verbal associations

Lazendic, Goran, Psychology, Faculty of Science, UNSW January 2006 (has links)
In this dissertation an attempt was made to uncover the source of verbal associations. The investigation focused on establishing the locus of representation for associative relationships in the cognitive system and whether this locus is different from that for semantic relationships. A picture naming task and an object decision task were used within the standard priming paradigm, in which the target is preceded by a prime. A dual-level model was proposed in which associative relatedness is represented at a lemma level that connects the lexical form representation of a word to its semantic information. According to this model, an interaction between associative and categorical relatedness should occur in picture naming, but not in object decision, when primes and targets share both relationships, and this is what was observed.

To investigate the mechanisms of associative priming, asymmetrically associated prime-target pairs were used to create two situations. In the forward priming condition the target was an associate of the prime (e.g., brick-house), and in the backward priming condition the prime was an associate of the target (e.g., baby-rattle). Unexpectedly, facilitation was observed for backward priming at the short SOA in picture naming. Because no effect was observed for this condition in the object decision task, and given that forward priming produced facilitation in both tasks, spreading activation was upheld as the mechanism for associative priming.

In order to investigate whether the source of the relationship between associates might lie in their latent semantic content, the impact of instrument relationships (e.g., grinder-coffee), script relationships (e.g., zoo-tiger), and proximity in multidimensional semantic space were also investigated in the picture naming task. Items that were close in semantic space, but did not share any semantic relationships, produced the same priming pattern as category co-ordinates in picture naming (i.e., interference), while instrumental and script relationships did not produce a priming pattern matching either that observed for associative or for categorical relatedness. These results were taken to indicate that the source of associative relationships is the co-occurrence of words in the language, which further supported the main claim of a dual-level model in which information about verbal associations is stored outside semantic memory.
295

Analysis and improvement of genetic algorithms using concepts from information theory.

Milton, John January 2009 (has links)
Evolutionary algorithms are based on the principles of biological evolution (Bremermann et al., 1966; Fraser, 1957; Box, 1957). Genetic algorithms are a class of evolutionary algorithm applicable to the optimisation of a wide range of problems because they do not assume that the problem to be optimised is differentiable or convex. Potential solutions to a problem are encoded by allele sequences (genes) on an artificial genome in a manner analogous to biological DNA. Populations of these artificial genomes are then tested and bred together, combining artificial genetic material through the operations of crossover and mutation of genes, so that encoded solutions which more completely optimise the problem flourish and weaker solutions die out. Genetic algorithms are applied to a very broad range of problems in a variety of industries including financial modeling, manufacturing, data mining, engineering, design and science. Some examples are:
• Traveling Salesman Problems, such as vehicle routing,
• scheduling problems, such as multiprocessor scheduling, and
• packing problems, such as shipping container operations.
However, relative to the total volume of papers on genetic algorithms, few have focused on the theoretical foundations and on identifying techniques to build effective genetic algorithms. Recent research has tended to focus on industry applications rather than on design techniques or parameter setting for genetic algorithms. There are of course exceptions to these observations. Nevertheless, the exceptions generally focus on a particular parameter or operator in relative isolation and do not attempt to find a foundation, approach or model which underpins them all. The objective of this Thesis is to establish theoretically sound methods for estimating appropriate parameter settings and structurally appropriate operators for genetic algorithms. The Thesis observes a link between some fundamental ideas in information theory and the relative frequency of alleles in a population. This observation leads to a systematic approach to determining optimum values for genetic algorithm parameters and to the use of generational operators such as mutation, selection, crossover and termination criteria. The practical significance of the Thesis is that the outcomes form theoretically justified guidelines for researchers and practitioners.

The Thesis establishes a model for the analysis of genetic algorithm behaviour by applying fundamental concepts from information theory. The use of information theory grounds the model and contributions in a well-established mathematical framework, making them reliable and reproducible. The model and techniques contribute to the field of genetic algorithms by providing a clear and practical basis for algorithm design and tuning. Two ideas are central to the approach taken: firstly, that evolutionary processes encode information into a population by altering the relative frequency of alleles; secondly, that the key difference between a genetic algorithm and other algorithms is the generational operators, selection and crossover. Hence the model maximises a population's information as represented by the relative frequency of solution alleles in the population, encourages the accumulation of these alleles and maximises the number of generations able to be processed. Information theory is applied to characterise the information sources used for mutation as well as to define selection thresholds in ranked populations.
The importance of crossover in distributing alleles throughout a population and in promoting the accumulation of information in populations is analysed, while the Shannon–McMillan theorem is applied to identify practical termination criteria. Central to this analysis is the concept of ideal alleles, those symbols in the appropriate loci which form an optimal solution, and the associated solution density of the population. The term solution density is introduced to refer to the relative frequency of ideal alleles in the population at a particular generation; solution density so defined represents a measure of a population's fitness. By analysing the key genetic operators in terms of their effect on solution density, this Thesis identifies ten contributions:
• a model for the analysis of genetic algorithm behaviour inspired by information theory;
• a static selection threshold in ranked populations;
• a dynamic selection threshold in ranked populations;
• a maximum limit on the number of loci participating in epistasis, beyond which more epistatic loci degrade directed search;
• a practical limit on the amount of crossover that is useful;
• an optimal crossover section length;
• a cumulative scoring method for identifying solution density;
• an entropy profile of ranked lists;
• a practical termination criterion, based on the Shannon–McMillan theorem, in terms of most probable individuals; and
• an alternative genome representation which incorporates job-shop scheduling problem knowledge in the genome rather than in the algorithm's generational operators.
Each of these contributions is validated by simulations, benchmark problems and application to a real-world problem.
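As one way to ground the terminology, the sketch below computes the solution density of a population (relative frequency of ideal alleles) and a per-locus entropy profile, with a toy convergence test; the binary encoding and the threshold are assumptions of this illustration, not the thesis's actual procedure.

```python
import numpy as np

def solution_density(population, ideal):
    """Relative frequency of ideal alleles across the population.

    population: (n_individuals, n_loci) array of alleles
    ideal:      (n_loci,) array holding the optimal allele at each locus
    """
    return np.mean(population == ideal)

def locus_entropy(population):
    """Shannon entropy (bits) of the allele distribution at each locus."""
    n, L = population.shape
    entropies = np.zeros(L)
    for j in range(L):
        _, counts = np.unique(population[:, j], return_counts=True)
        p = counts / n
        entropies[j] = -np.sum(p * np.log2(p))
    return entropies

# Toy usage with a binary genome (assumed encoding).
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(50, 20))
ideal = np.ones(20, dtype=int)
print(solution_density(pop, ideal))    # ~0.5 for a random population
print(locus_entropy(pop).mean())       # ~1 bit/locus when alleles are 50/50

# Illustrative termination test: stop once almost no allele diversity
# remains (the 0.05-bit threshold is an assumption).
converged = locus_entropy(pop).mean() < 0.05
```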
296

A hybrid approach to expert and model based effort estimation

Baker, Daniel Ryan. January 1900 (has links)
Thesis (M.S.)--West Virginia University, 2007. / Title from document title page. Document formatted into pages; contains viii, 121 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 112-121).
298

A simple RLS-POCS solution for reduced complexity ADSL impulse shortening

Helms, Sheldon J. 03 September 1999 (has links)
With the rise of the World Wide Web, the need for high-speed data communications has grown tremendously. Several access techniques have been proposed which utilize the existing copper twisted-pair cabling. Of these, the xDSL family, particularly ADSL and VDSL, has shown great promise in providing broadband or near-broadband access over common telephone lines. A critical component of ADSL and VDSL systems is the guard band needed to eliminate the interference caused by previously transmitted blocks. This guard band must come in the form of redundant samples at the start of every transmit block, and must be at least as long as the channel impulse response. Since the required guard band length is much greater than the length of the actual transmitted samples, techniques to shorten the channel impulse response must be considered. In this thesis, a new algorithm based on the RLS error-minimization and POCS optimization techniques is applied to the channel impulse-shortening problem in an ADSL environment. As will be shown, the proposed algorithm provides a much better solution, with a minimal increase in complexity, compared to existing LMS techniques. / Graduation date: 2000
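For orientation, here is a minimal sketch of the standard RLS recursion that the proposed algorithm builds on (the POCS projection step and the impulse-shortening cost function are thesis-specific and not reproduced here); the variable names and the toy channel are assumptions.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One step of standard recursive least squares.

    w: (N,) filter weights         P: (N, N) inverse-correlation estimate
    x: (N,) input vector           d: desired sample
    lam: forgetting factor
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w, P, e

# Toy usage: identify a short FIR channel from noisy observations.
rng = np.random.default_rng(1)
h = np.array([0.8, -0.3, 0.1])       # illustrative channel
N = len(h)
w, P = np.zeros(N), np.eye(N) * 100.0
u = rng.standard_normal(1000)
for n in range(N, len(u)):
    x = u[n - N:n][::-1]             # most recent sample first
    d = h @ x + 0.01 * rng.standard_normal()
    w, P, _ = rls_update(w, P, x, d)
# w now approximates h
```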
299

Classification context in a machine learning approach to predicting protein secondary structure

Langford, Bill T. 13 May 1993 (has links)
An important problem in molecular biology is to predict the secondary structure of proteins from their primary structure. The primary structure of a protein is its sequence of amino acid residues. The secondary structure is an abstract description of the shape of the folded protein, with regions identified as alpha helix, beta strand, and random coil. Existing methods of secondary structure prediction examine a short segment of the primary structure and predict the secondary structure class (alpha, beta, coil) of the individual residue centered in that segment. The last few years of research have failed to improve these methods beyond the level of 65% correct predictions.

This thesis investigates whether these methods can be improved by permitting them to examine externally supplied predictions for the secondary structure of other residues in the segment. The externally supplied predictions are called the "classification context," because they provide contextual information about the secondary structure classifications of neighboring residues. The classification context could be provided by an existing algorithm that makes initial secondary structure predictions, which a second algorithm then takes as input in an attempt to improve the predictions.

A series of experiments on both real and simulated classification context was performed to measure the possible improvement obtainable from classification context. The results showed that the classification context provided by current algorithms does not yield improved performance when used as input by those same algorithms. However, if the classification context is generated by randomly damaging the correct classifications, substantial performance improvements are possible. Even small amounts of randomly damaged correct context improve performance. / Graduation date: 1994
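The "randomly damaged correct context" experiment is easy to state concretely. The sketch below (an illustration, not the thesis code) flips each true secondary-structure label to a random class with probability p and appends the damaged labels of neighbouring residues to a residue's window features; the encoding and window half-width are assumptions.

```python
import numpy as np

CLASSES = np.array([0, 1, 2])  # alpha, beta, coil (assumed encoding)

def damage_context(true_labels, p, rng):
    """Replace each correct label with a random class with probability p."""
    labels = true_labels.copy()
    flip = rng.random(len(labels)) < p
    labels[flip] = rng.choice(CLASSES, size=int(flip.sum()))
    return labels

def add_context_features(window_features, context, i, half_width=3):
    """Append the (damaged) classifications of residues neighbouring
    residue i to its feature vector; half_width is an assumption."""
    idx = np.arange(i - half_width, i + half_width + 1)
    neigh = np.take(context, idx, mode='clip')   # clip at sequence ends
    neigh = np.delete(neigh, half_width)         # exclude residue i itself
    return np.concatenate([window_features, neigh])

# Toy usage: damage the labels of a 30-residue sequence at a 20% rate.
rng = np.random.default_rng(0)
true_labels = rng.choice(CLASSES, size=30)
context = damage_context(true_labels, p=0.2, rng=rng)
features = add_context_features(np.zeros(21), context, i=5)
```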
300

Coding and decoding for time-discrete amplitude-continuous memoryless channels

January 1963 (has links)
[by] Jacob Ziv. / "Submitted to the Dept. of Electrical Engineering, M.I.T., January 8, 1962, in partial fulfillment of the requirements for the degree of Doctor of Science." Errata: p. 100-101. / Bibliography: p. 99.
