281

Analysis and improvement of genetic algorithms using concepts from information theory.

Milton, John January 2009 (has links)
Evolutionary algorithms are based on the principles of biological evolution (Bremermann et al., 1966; Fraser, 1957; Box, 1957). Genetic algorithms are a class of evolutionary algorithms applicable to the optimisation of a wide range of problems because they do not assume that the problem to be optimised is differentiable or convex. Potential solutions to a problem are encoded by allele sequences (genes) on an artificial genome in a manner analogous to biological DNA. Populations of these artificial genomes are then tested and bred together, combining artificial genetic material through crossover and mutation of genes, so that encoded solutions which more completely optimise the problem flourish and weaker solutions die out. Genetic algorithms are applied to a very broad range of problems in a variety of industries including financial modeling, manufacturing, data mining, engineering, design and science. Some examples are:
• Traveling Salesman Problems such as vehicle routing,
• Scheduling Problems such as multiprocessor scheduling, and
• Packing Problems such as shipping container operations.
However, relative to the total volume of papers on genetic algorithms, few have focused on the theoretical foundations and the identification of techniques to build effective genetic algorithms. Recent research has tended to focus on industry applications rather than design techniques or parameter setting for genetic algorithms. There are of course exceptions to these observations; nevertheless, the exceptions generally focus on a particular parameter or operator in relative isolation and do not attempt to find a foundation, approach or model which underpins them all. The objective of this Thesis is to establish theoretically sound methods for estimating appropriate parameter settings and structurally appropriate operators for genetic algorithms. The Thesis observes a link between some fundamental ideas in information theory and the relative frequency of alleles in a population. This observation leads to a systematic approach to determining optimum values for genetic algorithm parameters and to the use of generational operators such as mutation, selection and crossover, as well as termination criteria. The practical significance of the Thesis is that the outcomes form theoretically justified guidelines for researchers and practitioners. The Thesis establishes a model for the analysis of genetic algorithm behaviour by applying fundamental concepts from information theory. The use of information theory grounds the model and contributions in a well-established mathematical framework, making them reliable and reproducible. The model and techniques contribute to the field of genetic algorithms by providing a clear and practical basis for algorithm design and tuning. Two ideas are central to the approach taken. Firstly, that evolutionary processes encode information into a population by altering the relative frequency of alleles. Secondly, that the key difference between a genetic algorithm and other algorithms is its generational operators, selection and crossover. Hence the model maximises a population's information as represented by the relative frequency of solution alleles in the population, encourages the accumulation of these alleles and maximises the number of generations able to be processed. Information theory is applied to characterise the information sources used for mutation, as well as to define selection thresholds in ranked populations.
The importance of crossover in distributing alleles throughout a population and in promoting the accumulation of information in populations is analysed, while the Shannon–McMillan theorem is applied to identify practical termination criteria. The concept of ideal alleles, those symbols at the appropriate loci which form an optimal solution, and the associated solution density of the population are central to this analysis. The term solution density is introduced to refer to the relative frequency of ideal alleles in the population at a particular generation; solution density so defined represents a measure of a population's fitness. By analysing the key genetic operators in terms of their effect on solution density, this Thesis identifies ten contributions:
• A model for the analysis of genetic algorithm behaviour inspired by information theory.
• A static selection threshold in ranked populations.
• A dynamic selection threshold in ranked populations.
• A maximum limit on the number of loci participating in epistasis, beyond which additional epistatic loci degrade directed search.
• A practical limit on the amount of crossover that is sufficient.
• An optimal crossover section length.
• A cumulative scoring method for identifying solution density.
• An entropy profile of ranked lists.
• A practical termination criterion, based on the Shannon–McMillan theorem, of most probable individuals.
• An alternative genome representation which incorporates job–shop scheduling problem knowledge in the genome rather than in the algorithm's generational operators.
Each of these contributions is validated by simulations, benchmark problems and application to a real–world problem.
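As a rough illustration of the two quantities this abstract builds on (allele frequency as a carrier of information, and solution density as the relative frequency of ideal alleles), the following Python sketch computes both for a binary-encoded population. It is a minimal reading of the definitions given above, not the thesis' own code; the all-ones optimum and the population sizes are assumptions made for the toy example.

```python
import numpy as np

def allele_entropy(population):
    """Shannon entropy (bits) of the allele frequency at each locus of a
    binary-encoded population (rows = individuals, columns = loci)."""
    p1 = population.mean(axis=0)                   # frequency of allele '1' per locus
    p = np.clip(np.stack([1 - p1, p1]), 1e-12, 1)  # avoid log(0)
    return -(p * np.log2(p)).sum(axis=0)

def solution_density(population, ideal):
    """Relative frequency of 'ideal' alleles in the population, i.e. the
    fraction of loci carrying the symbol that belongs to the optimum."""
    return (population == ideal).mean()

# Toy usage: a random population measured against an assumed all-ones optimum.
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(50, 20))            # 50 individuals, 20 loci
ideal = np.ones(20, dtype=int)                     # hypothetical optimal genome
print(solution_density(pop, ideal))                # ~0.5 for a random population
print(allele_entropy(pop).mean())                  # ~1 bit/locus before selection acts
```

Under this reading, selection and crossover should drive the solution density up and the per-locus entropy down as the population accumulates information about the optimum.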
282

Nonlinear intersubband dynamics in semiconductor nanostructures

Wijewardane, Harshani Ovamini, January 2007 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2007. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on December 17, 2007). Vita. Includes bibliographical references.
284

Differential expression of isoforms of PSD-95 binding protein (GKAP/SAPAP1) during rat brain development / PSD-95結合蛋白質(GKAP/SAPAP1)のラット脳発育過程における発現の多様性

川嶋, 望 25 March 1998 (has links)
Includes co-authors. Co-authors: Takamiya Kogo (高宮考悟), Sun Jie (孫傑), Kitabatake Akira (北畠顕), Sobue Kenji (祖父江憲治). / Hokkaido University (北海道大学) / Doctorate / Medicine
285

Randomly Coalescing Random Walk in Dimension $\ge$ 3

jvdberg@cwi.nl 09 July 2001 (has links)
No description available.
286

Automatic isochoric apparatus for PVT and phase equilibrium studies of natural gas mixtures

Zhou, Jingjun 15 May 2009 (has links)
We have developed a new automatic apparatus for the measurement of the phase equilibrium and pVT properties of natural gas mixtures in our laboratory. Based on the isochoric method, the apparatus can operate at temperatures from 200 K to 500 K at pressures up to 35 MPa, and yields absolute results in fully automated operation. Temperature measurements are accurate to 10 mK and pressure measurements are accurate to 0.002 MPa. The isochoric method utilizes pressure versus temperature measurements along an isomole and detects phase boundaries by locating the change in the slope of the isochores. The experimental data from four gas samples show that cubic equations of state, such as Peng-Robinson and Soave-Redlich-Kwong, have 1-20% errors in predicting hydrocarbon mixture dew points. The data also show that the AGA 8-DC92 equation of state has errors as large as 0.6% when predicting hydrocarbon mixture densities extrapolated beyond its normal composition range.
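The isochoric method described above locates a phase boundary from the change in slope of p(T) along an isochore. As a sketch of that idea (not the apparatus' actual data-reduction software), the following Python snippet fits two straight lines with a scanned breakpoint and reports the temperature where they intersect; the isochore data are synthetic and purely illustrative.

```python
import numpy as np

def locate_phase_boundary(T, p):
    """Estimate a phase-boundary temperature from isochoric p(T) data by
    fitting two straight lines and scanning for the breakpoint that
    minimises the total squared residual (the dew point appears as a
    change in the slope of the isochore)."""
    best = (np.inf, None)
    for k in range(3, len(T) - 3):                        # at least 3 points per segment
        r = 0.0
        for Ts, ps in ((T[:k], p[:k]), (T[k:], p[k:])):
            coef = np.polyfit(Ts, ps, 1)
            r += np.sum((np.polyval(coef, Ts) - ps) ** 2)
        if r < best[0]:
            best = (r, k)
    k = best[1]
    # The intersection of the two fitted lines gives the boundary temperature.
    a1, b1 = np.polyfit(T[:k], p[:k], 1)
    a2, b2 = np.polyfit(T[k:], p[k:], 1)
    return (b2 - b1) / (a1 - a2)

# Synthetic isochore with a slope change near 260 K (illustrative numbers only).
T = np.linspace(220, 300, 41)
p = np.where(T < 260, 5.0 + 0.020 * (T - 220), 5.8 + 0.045 * (T - 260))
print(locate_phase_boundary(T, p))   # ~260 K
```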
287

Estimating Rio Grande wild turkey densities in Texas

Locke, Shawn Lee 02 June 2009 (has links)
Rio Grande wild turkeys (Meleagris gallopavo intermedia) are a highly mobile, wide-ranging, and secretive species located throughout the arid regions of Texas. As a result of declines in turkey abundance within the Edwards Plateau and other areas, the Texas Parks and Wildlife Department initiated a study to evaluate methods for estimating Rio Grande wild turkey abundance. Unbiased methods for determining wild turkey abundance have long been desired, and although several different methods have been examined, few have been successful. The study objectives were to: (1) review current and past methods for estimating turkey abundance, (2) evaluate the use of portable thermal imagers to estimate roosting wild turkeys in three ecoregions, and (3) determine the effectiveness of distance sampling from the air and ground to estimate wild turkey densities in the Edwards Plateau Ecoregion of Texas. Based on the literature review and the decision matrix, I selected two methods for field evaluation (i.e., an infrared camera for detecting roosting turkeys and distance sampling from the air and ground). I conducted eight ground and aerial forward-looking infrared (FLIR) surveys (4 Edwards Plateau, 3 Rolling Plains, and 1 Gulf Prairies and Marshes) of roost sites during the study. In the three regions evaluated, I was unable to aerially detect roosting turkeys using the portable infrared camera due to the altitudinal restrictions required for safe helicopter flight and a lack of thermal contrast. A total of 560 km of aerial transects and 10 road-based transects (800 km) were also conducted in the Edwards Plateau, but neither method yielded a sufficient sample size to generate an unbiased estimate of turkey abundance. Aerial and ground distance sampling were limited by terrain and dense vegetation, and aerial FLIR surveys by a lack of thermal contrast. Study results suggest aerial FLIR and ground applications to estimate Rio Grande wild turkeys are of limited value in Texas. In my opinion, a method for estimating Rio Grande wild turkey densities on a regional scale does not currently exist. Therefore, the Texas Parks and Wildlife Department should reconsider estimating trends or using indices to monitor turkey numbers on a regional scale.
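For readers unfamiliar with line-transect distance sampling, the Python sketch below shows the conventional half-normal estimator of density from perpendicular detection distances. It is a generic textbook form of the method, not the author's analysis, and the detection counts, distances, and transect length are hypothetical.

```python
import numpy as np

def line_transect_density(perp_distances_m, transect_length_km):
    """Half-normal line-transect density estimate (detections per km^2).

    Assumes an unbounded half-normal detection function g(x) = exp(-x^2 / (2*sigma^2)),
    whose maximum-likelihood scale satisfies sigma^2 = mean(x^2); the effective
    strip width is then sigma * sqrt(pi/2)."""
    x = np.asarray(perp_distances_m, dtype=float)
    sigma = np.sqrt(np.mean(x ** 2))                # MLE of the half-normal scale (m)
    esw_km = sigma * np.sqrt(np.pi / 2) / 1000.0    # effective strip width (km)
    n = len(x)
    return n / (2.0 * transect_length_km * esw_km)

# Illustrative numbers only: 30 detections along 800 km of road transects.
rng = np.random.default_rng(1)
distances = np.abs(rng.normal(0, 60, size=30))      # perpendicular distances in metres
print(line_transect_density(distances, 800.0))      # detections (birds or flocks) per km^2
```

The study's central difficulty is visible in the formula: with too few detections (small n), the estimate and its variance become unusable, which is why neither the aerial nor the ground transects produced an unbiased abundance estimate.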
288

Double Ended Guillotine Break in a Prismatic Block VHTR Lower Plenum Air Ingress Scenario

Hartley, Jessica August 2011 (has links)
The double-ended guillotine break leading to density-driven air ingress has been identified as a low-probability yet high-consequence event for the Very High Temperature Reactor (VHTR). The lower plenum of the VHTR contains the core support structure and is composed of graphite. During an air ingress event, oxidation of the graphite structure under high-temperature conditions in an oxygen-containing environment could degrade the integrity of the core support structure. Following this large break, air from the reactor containment will begin to enter the lower plenum via two mechanisms: diffusion or density-driven stratified flow. The large difference in time scales between the mechanisms leads to the need for high-fidelity experimental studies to investigate the dominant air ingress mechanism. A scaled test facility has been designed and built that allows the acquisition of velocity measurements during stratification after a pipe break. A non-intrusive optical measurement technique, Particle Image Velocimetry (PIV), provides full-field velocity profiles of the two species. The data allow a more developed understanding of the fundamental flow features, the development of improved models, and possible mitigation strategies in such a scenario. Two brine-water experiments were conducted with different break locations. Flow fronts were analyzed, and the findings showed that the flow has a constant speed through the pipe after the initial lock exchange. The time at which the flow enters the lower plenum is an important factor because it provides the window of opportunity for mitigation strategies in an actual reactor scenario. For both cases the flow of the heavier-density liquid (simulating air ingress from the reactor containment) from the pipe enters the reactor vessel in under 6 seconds. The diffusion velocity and the heavy flow front of the stratified flow layer were compared for the SF6/He gas case. Diffusion plays less of a role as a transport mechanism in comparison to the density-driven stratified flow, since the velocity of diffusion is two orders of magnitude smaller than the velocity of the stratified flow. This is the reason density-driven stratified flow must be investigated following a LOCA. These investigations provided high-quality data for CFD validation so that these models can depict the basic phenomena occurring in an air ingress scenario.
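The separation of time scales between the two mechanisms can be illustrated with an order-of-magnitude comparison: a diffusive time of roughly L^2/D against a lock-exchange gravity-current front moving at about Fr*sqrt(g'*H). The Python sketch below works through that comparison; every number in it is an illustrative assumption, not a parameter of the test facility or the reactor.

```python
import numpy as np

# Order-of-magnitude comparison of the two ingress mechanisms discussed above.
# All values below are illustrative assumptions, not facility parameters.
L = 2.0               # broken-pipe length to the lower plenum, m
D = 2.0e-5            # binary gas diffusion coefficient, m^2/s
H = 0.5               # opening height, m
g = 9.81              # gravitational acceleration, m/s^2
drho_over_rho = 0.5   # relative density difference, containment air vs. coolant
Fr = 0.5              # lock-exchange Froude number for an energy-conserving current

t_diffusion = L ** 2 / D                          # characteristic diffusion time, s
u_front = Fr * np.sqrt(g * drho_over_rho * H)     # stratified-flow front speed, m/s
t_stratified = L / u_front                        # time for the front to reach the plenum, s

print(f"diffusion  ~ {t_diffusion:,.0f} s")
print(f"stratified ~ {t_stratified:.1f} s (front speed {u_front:.2f} m/s)")
```

Even with generous assumptions, the stratified front arrives in seconds while pure diffusion would take hours, which is consistent with the abstract's conclusion that the density-driven mechanism dominates.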
289

Nanostructures based on cyclic C6

Kuzmin, Stanislav 07 May 2013 (has links)
The properties of a new family of carbon structures based on stacked cyclic C6 rings and intercalated cyclic C6 structures, (C6)n and (C6)nMen-1, have been studied theoretically using ab initio DFT (Density Functional Theory). Calculations of the structural, electronic, and vibrational properties of a range of these molecules have been carried out using the DFT techniques that correspond best to experimental results. The chemical and structural stability of structures based on stacks of cyclic C6 has also been estimated for pure carbon molecules (C6)n and for metal-organic sandwich molecules intercalated with Fe and Ru atoms; these have (C6)nFen-1 and (C6)nRun-1 compositions, respectively. These structures are predicted to show a variety of new electronic, vibrational and magnetic properties. Ultra-small-diameter tubular molecules are also found to have unique rotational electron states and high atomic-orbital pi-sigma hybridization, giving rise to a high density of electron states. All phonons in these structures have collinear wave vectors, leading to an ultrahigh density of phonon states in dominant modes and suggesting that some of these structures may exhibit superconductivity. These properties, as well as a predicted high electron mobility, make these structures promising as components in nanoelectronics. Experiments using femtosecond laser pulses for the irradiation of organic liquids suggest that such structures may appear under certain conditions. In particular, a new type of iron carbide has been found in these experiments.
290

Understanding the electronic structure of LiFePO4 and FePO4

Hunt, Adrian 01 February 2007
This thesis has detailed the extensive analysis of the XAS and RIXS spectra of LiFePO4 and FePO4, with the primary focus on LiFePO4. One of the primary motivations for this study was to understand the electronic structure of the two compounds and, in particular, shed some light on the nature of electron correlation within the samples. Two classes of band structure calculations have come to light. One solution uses the Hubbard U parameter, and this solution exhibits a band gap of about 4 eV. Other solutions that use standard DFT electron correlation functionals yield band gaps between 0 and 1.0 eV.

The RIXS spectra of LiFePO4 and FePO4 were analyzed using Voigt function fitting, an uncommon practice for RIXS spectra. Each of the spectra was fit to a series of Voigt functions in an attempt to localize the peaks within the spectra. These peaks were determined to be RIXS events, and the energetic centers of these peaks were compared to a small band gap band structure calculation. The results of the RIXS analysis strongly indicate that the small gap solution is correct. This was a surprising result, given that LiFePO4 is an ionic, insulating transition metal oxide, showing all of the usual traits of a Mott-type insulator.

This contradiction was explained in terms of polaron formation. Polarons can severely distort the lattice, which changes the local charge density. This changes the local DOS such that the DOS probed by XAS or RIXS experiments is not necessarily in the ground state. In particular, polaron formation can reduce the band gap. Thus, the agreement between the small gap solution and experiment is false, in the sense that the physical assumptions that formed the basis of the small gap calculations do not reflect reality. Polaronic distortion was also tentatively put forward as an explanation for the discrepancy between partial fluorescence yield, total fluorescence yield, and total electron yield measurements of the XAS spectra of LiFePO4 and FePO4.
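As a minimal sketch of the kind of Voigt-function peak localisation described above (not the author's code), the following Python snippet fits a sum of Voigt profiles to a synthetic spectrum with SciPy. The number of peaks, their positions, and the noise level are assumptions made for the illustration.

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def multi_voigt(x, *params):
    """Sum of Voigt peaks; params are flattened (amplitude, center, sigma, gamma) groups."""
    y = np.zeros_like(x)
    for a, c, s, g in np.reshape(params, (-1, 4)):
        y += a * voigt_profile(x - c, s, g)
    return y

# Illustrative only: two synthetic peaks on an energy-loss axis (eV) with noise.
x = np.linspace(-10, 2, 400)
true = [1.0, -6.0, 0.4, 0.3,   0.6, -2.5, 0.5, 0.2]
y = multi_voigt(x, *true) + np.random.default_rng(2).normal(0, 0.01, x.size)

guess = [1.0, -6.5, 0.5, 0.5,   0.5, -2.0, 0.5, 0.5]
popt, _ = curve_fit(multi_voigt, x, y, p0=guess)
print(np.reshape(popt, (-1, 4)))   # fitted (amplitude, center, sigma, gamma) per peak
```

The fitted peak centers play the role of the "energetic centers" compared against the band structure calculation in the analysis above.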
