About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Evoluční návrh kombinačních obvodů na počítačovém clusteru / Evolutionary Design of Combinational Circuits on Computer Cluster

Pánek, Richard January 2015 (has links)
This master's thesis deals with evolutionary algorithms and how to use them to design combinational circuits. Genetic programming, and Cartesian Genetic Programming (CGP) in particular, is the most suitable approach for this type of task. The thesis further deals with computation on computer clusters and how evolutionary algorithms can run on them; island models combined with CGP are the best fit for this kind of computation. A new recombination operator for CGP is then designed to improve these models, implemented, and tested on a computer cluster.
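As context for the combination described above, here is a minimal sketch of (1+λ) evolution of CGP genotypes running on several islands with ring migration, solving a toy 2-input XOR problem. The gate set, all parameters, and the migration scheme are illustrative assumptions; the sketch does not implement the thesis's new recombination operator.

```python
import random

FUNCS = [lambda a, b: a & b, lambda a, b: a | b, lambda a, b: a ^ b]
N_INPUTS, N_NODES = 2, 10

def random_genotype():
    # Each node: (function index, two connection genes that may point at
    # any earlier node or primary input); one output gene at the end.
    genes = []
    for i in range(N_NODES):
        max_src = N_INPUTS + i
        genes.append((random.randrange(len(FUNCS)),
                      random.randrange(max_src), random.randrange(max_src)))
    return genes, random.randrange(N_INPUTS + N_NODES)

def evaluate(genotype, inputs):
    genes, out = genotype
    values = list(inputs)                     # feed-forward evaluation
    for f, a, b in genes:
        values.append(FUNCS[f](values[a], values[b]))
    return values[out]

def fitness(genotype):
    # Toy target: 2-input XOR over all input combinations.
    return sum(evaluate(genotype, (a, b)) == (a ^ b)
               for a in (0, 1) for b in (0, 1))

def mutate(genotype):
    genes, out = genotype
    i = random.randrange(N_NODES)             # point-mutate one node
    max_src = N_INPUTS + i
    genes = list(genes)
    genes[i] = (random.randrange(len(FUNCS)),
                random.randrange(max_src), random.randrange(max_src))
    return genes, out

islands = [[random_genotype() for _ in range(5)] for _ in range(4)]
for gen in range(200):
    for isl in islands:
        parent = max(isl, key=fitness)        # (1+lambda) within each island
        isl[:] = [parent] + [mutate(parent) for _ in range(4)]
    if gen % 20 == 0:                         # periodic ring migration
        best = [max(isl, key=fitness) for isl in islands]
        for k, isl in enumerate(islands):
            isl[-1] = best[(k - 1) % len(islands)]
print(max(fitness(max(isl, key=fitness)) for isl in islands))
```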
202

[pt] ESTUDO SOBRE CARACTERIZAÇÃO DE RESERVATÓRIOS POR PROGRAMAÇÃO GENÉTICA / [en] STUDIES ON RESERVOIR CHARACTERIZATION VIA GENETIC PROGRAMMING

Jeff Maynard Guillen 15 February 2016 (has links)
In the field of oil exploration and production, a great deal of investment is allocated to reducing the risks associated with low production levels, which can be minimized through accurate oil reservoir characterization. A valuable source of information can be extracted from 3D seismic data obtained from the field under study. The economic cost of acquiring this data for the whole reservoir is relatively low compared to direct sampling through well drilling. However, since the relationship between seismic data and reservoir properties is considered ambiguous, the seismic data must be integrated with reliable information, such as that obtained by well logging. Making use of abundant seismic data and the scarce yet accurate measurements from existing wells, this study developed a system based on Genetic Programming (GP) to geologically characterize an oil reservoir. GP is an evolutionary computation technique capable of estimating non-linear relationships between input and output parameters through an explicit symbolic expression. To extract additional information from the seismic records, seismic attributes are calculated, which facilitate the identification of stratigraphic and structural characteristics of the subsurface represented indirectly by the seismic data. Moreover, a seismic inversion method is used to estimate the acoustic impedance, an auxiliary variable derived from seismic data calibrated by well logs. The seismic attributes, together with the acoustic impedance, are used to estimate geological properties. This workflow was tested on a real reservoir of considerable geological complexity. Through GP, the relationship between seismic-derived data and the field's porosity was represented satisfactorily, demonstrating that GP is a viable alternative for geological reservoir characterization. Afterwards, the reservoir was divided into clusters according to geophysical properties, which allowed the construction of specialized GP estimators for each zone.
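To make the GP mechanism concrete, the sketch below evolves an explicit symbolic expression mapping two assumed seismic-derived attributes to porosity on synthetic data. The attribute names, the toy target relation, and all GP settings are illustrative assumptions, not the study's configuration.

```python
import math
import operator
import random

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}  # no '/': avoids div-by-zero
VARS = ['impedance', 'amplitude']             # assumed seismic-derived inputs

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:   # leaf: a variable or constant
        return random.choice(VARS + [round(random.uniform(-1, 1), 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env[tree] if isinstance(tree, str) else tree

def error(tree, samples):
    # Root-mean-square error against "well log" porosity measurements.
    return math.sqrt(sum((evaluate(tree, env) - porosity) ** 2
                         for env, porosity in samples) / len(samples))

def mutate(tree):
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(2)                 # replace a whole subtree
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

# Synthetic calibration data; true relation: porosity = 0.5*impedance - 0.2.
samples = []
for _ in range(50):
    env = {'impedance': random.uniform(0, 1), 'amplitude': random.uniform(0, 1)}
    samples.append((env, 0.5 * env['impedance'] - 0.2))

population = [random_tree() for _ in range(100)]
for _ in range(50):
    population.sort(key=lambda t: error(t, samples))
    survivors = population[:20]               # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]
best = min(population, key=lambda t: error(t, samples))
print(best, error(best, samples))             # an explicit symbolic expression
```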
203

Integrated Software Pipelining

Eriksson, Mattias January 2009 (has links)
In this thesis we address the problem of integrated software pipelining for clustered VLIW architectures. The phases that are integrated and solved as one combined problem are: cluster assignment, instruction selection, scheduling, register allocation, and spilling. As a first step we describe two methods for integrated code generation of basic blocks. The first method is optimal and based on integer linear programming. The second method is a heuristic based on genetic algorithms. We then extend the integer linear programming model to modulo scheduling. To the best of our knowledge, this is the first time the modulo scheduling problem has been solved optimally for clustered architectures with instruction selection and cluster assignment integrated. We also show that optimal spilling is closely related to optimal register allocation when the register files are clustered. In fact, optimal spilling is as simple as adding an additional virtual register file representing the memory, with transfer instructions to and from this register file corresponding to stores and loads. Our algorithm for modulo scheduling iteratively considers schedules with an increasing number of schedule slots. A problem with such an iterative method is that if the initiation interval is not equal to the lower bound, there is no way to determine whether the found solution is optimal or not. We have proven that for a class of architectures that we call transfer free, we can set an upper bound on the schedule length; that is, we can prove when a found modulo schedule with an initiation interval larger than the lower bound is optimal. Experiments have been conducted to show the usefulness and limitations of our optimal methods. For the basic block case we compare the optimal method to the heuristic based on genetic algorithms. This work has been supported by the Swedish national graduate school in computer science (CUGS) and Vetenskapsrådet (VR).
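As a concrete illustration of the iterative scheme described above, the sketch below computes the standard resource-based lower bound on the initiation interval (ResMII) and searches candidate intervals upward from it. The feasibility test is passed in as a callable stand-in; in the thesis that role is played by the integrated integer linear program.

```python
import math

def res_mii(op_counts, unit_counts):
    # Resource-constrained lower bound on the initiation interval: the
    # busiest resource class forces at least ceil(ops/units) cycles
    # between successive loop iterations.
    return max(math.ceil(ops / units)
               for ops, units in zip(op_counts, unit_counts))

def iterative_modulo_schedule(solve_for_ii, lower_bound, upper_bound):
    # Try candidate initiation intervals upward from the lower bound. A
    # solution found at ii == lower_bound is certainly optimal; above it,
    # optimality needs an extra argument such as the thesis's
    # schedule-length bound for transfer-free architectures.
    for ii in range(lower_bound, upper_bound + 1):
        schedule = solve_for_ii(ii)
        if schedule is not None:
            return ii, schedule
    return None

# Example: 6 ALU ops on 2 ALUs and 3 memory ops on 1 load/store unit
# give ResMII = max(ceil(6/2), ceil(3/1)) = 3.
lb = res_mii([6, 3], [2, 1])
# Dummy solver standing in for the ILP: pretend feasibility starts at II=4.
print(iterative_modulo_schedule(lambda ii: "ok" if ii >= 4 else None, lb, 10))
```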
204

Technika ALPS v kartézském genetickém programování / ALPS Technique in Cartesian Genetic Programming

Stanovský, Peter January 2009 (has links)
This work gives a brief overview of soft computing and of approaches to NP-hard problems. It deals especially with evolutionary algorithms and their basic types. The next part studies Cartesian genetic programming (CGP), an evolutionary algorithm used mainly in the evolution of digital circuits, symbolic regression, and similar tasks. A special chapter is devoted to the Age-Layered Population Structure (ALPS), a new technique that addresses premature convergence by dividing the population into subpopulations according to an age criterion. By maintaining sufficient diversity, it achieves substantially better solutions than classical evolutionary algorithms. This thesis proposes two ways of incorporating the ALPS technique into CGP. Tests were then carried out on classic problems typically solved with evolutionary algorithms, both with and without the ALPS technique. The experimental results section discusses the benefit of using ALPS in CGP compared to classic CGP.
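A minimal sketch of the age-layer mechanism follows: every individual carries an age, each layer only holds individuals up to its age limit, over-age offspring migrate upward, and the bottom layer is periodically reseeded with random newcomers. The toy fitness function, genome, and limits are assumptions, and the integration with CGP is omitted.

```python
import random

AGE_LIMITS = [5, 10, 20, 40, float('inf')]            # one limit per layer
LAYER_SIZE = 8

def random_individual():
    return {'genome': [random.random() for _ in range(10)], 'age': 0}

def fitness(ind):
    return sum(ind['genome'])                         # toy maximisation task

def mutate(parent):
    child = {'genome': list(parent['genome']), 'age': parent['age']}
    child['genome'][random.randrange(10)] = random.random()
    return child                                      # inherits parent's age

layers = [[random_individual() for _ in range(LAYER_SIZE)]
          for _ in AGE_LIMITS]

for generation in range(1, 101):
    for li, layer in enumerate(layers):
        for ind in layer:                             # everyone ages
            ind['age'] += 1
        child = mutate(max(layer, key=fitness))
        # Over-age offspring move up a layer instead of staying put.
        target = li + 1 if (child['age'] > AGE_LIMITS[li]
                            and li + 1 < len(layers)) else li
        dest = layers[target]
        worst = min(range(len(dest)), key=lambda k: fitness(dest[k]))
        dest[worst] = child                           # replace the weakest
    if generation % 10 == 0:                          # fresh random blood
        layers[0] = [random_individual() for _ in range(LAYER_SIZE)]

print(max(fitness(max(layer, key=fitness)) for layer in layers))
```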
205

Evoluční návrh obvodů na úrovni tranzistorů / Evolutionary Circuit Design at the Transistor Level

Žaloudek, Luděk Unknown Date (has links)
This project deals with the evolutionary design of electronic circuits, with an emphasis on digital circuits. It describes the theoretical basis for evolutionary circuit design on computer systems, including an explanation of genetic programming and evolutionary strategies, the possible design levels of electronic circuits, an overview of CMOS technology, and an overview of the most important evolutionary circuit design methods, such as development and Cartesian Genetic Programming (CGP). Next, a new method for designing digital circuits at the transistor level, based on CGP, is introduced, together with a design system implementing it. Finally, the experiments performed with this system are described and evaluated.
206

Evolving user-specific emotion recognition model via incremental genetic programming / 漸進型遺伝的プログラミングによるユーザ特定型の感情認識モデルの進化に関する研究

Yusuf, Rahadian 22 March 2017 (has links)
This thesis proposes a methodology for evolving an emotion recognition model targeted at a specific user by means of incremental genetic programming. Using genetic programming, which represents solutions as trees of features, a user-adaptive emotion recognition model was evolved from the output of a pervasive sensor that captures facial expression data, including temporal information. To cope with the non-determinism, lack of generalization, and overfitting of genetic programming, an incremental genetic programming method that unfolds evolution step by step was developed. / This research proposes a model to tackle challenges common in emotion recognition based on facial expression. First, we use a pervasive sensor and environment, enabling natural expressions of the user, as opposed to the unnatural expressions found in large datasets. Second, the model analyzes relevant temporal information, unlike much other research. Third, we employ a user-specific approach and adaptation to the user. We also show that the model evolved by genetic programming can be analyzed to understand how it actually works, rather than being a black box. / Doctor of Philosophy in Engineering / Doshisha University
207

A Comparative Analysis Between Context-based Reasoning (CxBR) And Contextual Graphs (CxGs)

Lorins, Peterson Marthen 01 January 2005 (has links)
Context-based Reasoning (CxBR) and Contextual Graphs (CxGs) involve the modeling of human behavior in autonomous and decision-support situations in which optimal human decision-making is of utmost importance. Both formalisms use the notion of contexts to allow the implementation of intelligent agents equipped with a context-sensitive knowledge base. However, CxBR uses a set of discrete contexts, implying that models created using CxBR operate within one context during any given time interval. CxGs use a continuous context-based representation of a given problem-solving scenario for decision-support processes. Both formalisms use contexts dynamically, continuously switching between contexts as needed. This thesis identifies a synergy between the two formalisms by examining their similarities and differences. It became clear during the research that each paradigm was designed with a very specific family of problems in mind: CxBR best implements models of autonomous agents in an environment, while CxGs are best suited to decision-support settings that require the development of decision-making procedures. Cross-applications were implemented in each formalism, and the results are discussed.
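To illustrate the discrete-context idea on the CxBR side, here is a minimal sketch of an agent that is always in exactly one active context, where transition rules hand control from one context to another. The contexts, predicates, and driving scenario are illustrative assumptions, not the thesis's models.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Context:
    name: str
    act: Callable[[dict], str]                        # behavior while active
    transitions: list = field(default_factory=list)   # (predicate, next name)

def cruise(state):
    return "hold lane at speed limit"

def overtake(state):
    return "pull out and pass"

contexts = {
    'Highway': Context('Highway', cruise,
                       [(lambda s: s['slow_car_ahead'], 'Overtake')]),
    'Overtake': Context('Overtake', overtake,
                        [(lambda s: not s['slow_car_ahead'], 'Highway')]),
}

active = contexts['Highway']                          # exactly one active context
for state in ({'slow_car_ahead': False},
              {'slow_car_ahead': True},
              {'slow_car_ahead': False}):
    for predicate, next_name in active.transitions:   # transition rules fire first
        if predicate(state):
            active = contexts[next_name]
            break
    print(active.name, '->', active.act(state))
```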
208

Integrated Data Fusion And Mining (IDFM) Technique For Monitoring Water Quality In Large And Small Lakes

Vannah, Benjamin 01 January 2013 (has links)
Monitoring water quality on a near-real-time basis to address water resources management and public health concerns in coupled natural systems and the built environment is by no means an easy task. Furthermore, this emerging societal challenge will continue to grow, due to the ever-increasing anthropogenic impacts upon surface waters. For example, urban growth and agricultural operations have led to an influx of nutrients into surface waters, stimulating harmful algal bloom formation, and stormwater runoff from urban areas contributes to the accumulation of total organic carbon (TOC) in surface waters. TOC in surface waters is a known precursor of disinfection byproducts in drinking water treatment, and microcystin is a potent hepatotoxin produced by the bacteria Microcystis, which can form expansive algal blooms in eutrophied lakes. Due to the ecological impacts and human health hazards posed by TOC and microcystin, it is imperative that municipal decision makers and water treatment plant operators are equipped with a rapid and economical means to track and measure these substances. Remote sensing is an emergent solution for monitoring and measuring changes to the earth's environment, allowing large regions anywhere on the globe to be observed on a frequent basis. This study demonstrates a prototype near-real-time early warning system using Integrated Data Fusion and Mining (IDFM) techniques with the aid of both multispectral (Landsat and MODIS) and hyperspectral (MERIS) satellite sensors to determine spatiotemporal distributions of TOC and microcystin. Landsat satellite imagery has high spatial resolution but suffers from a long overpass interval of 16 days. On the other hand, free coarse-resolution sensors with daily revisit times, such as MODIS, are incapable of providing detailed water quality information because of their low spatial resolution. This issue can be resolved by using data or sensor fusion techniques, an instrumental part of IDFM, in which the high spatial resolution of Landsat and the high temporal resolution of MODIS imagery are fused and analyzed by a suite of regression models to optimally produce synthetic images with both high spatial and temporal resolutions. The same techniques are applied to the hyperspectral sensor MERIS with the aid of the MODIS ocean color bands to generate fused images with enhanced spatial, temporal, and spectral properties. The performance of the data mining models derived using fused hyperspectral and fused multispectral data is quantified using four statistical indices. The second task compared traditional two-band models against more powerful data mining models for TOC and microcystin prediction. The use of IDFM is illustrated for monitoring microcystin concentrations in Lake Erie (large lake) and for TOC monitoring in Harsha Lake (small lake). Analysis confirmed that data mining methods surpassed two-band models at accurately estimating TOC and microcystin concentrations in lakes, and the more detailed spectral reflectance data offered by hyperspectral sensors produced a noticeable increase in accuracy for the retrieval of water quality parameters.
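The core fusion idea can be shown in a few lines: learn a regression between a coincident high-spatial (Landsat-like) and high-temporal (MODIS-like) image pair, then apply it to a later coarse scene to synthesize an image with both resolutions, from which water-quality parameters are mined. Everything below runs on synthetic arrays; the band relation, coefficients, and reflectance-to-TOC map are illustrative assumptions, not IDFM's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Day 0: a coincident pair (the MODIS band resampled to the Landsat grid).
landsat_t0 = rng.uniform(0.0, 0.3, (60, 60))          # 30 m reflectance
modis_t0 = (0.8 * landsat_t0 + 0.05
            + 0.01 * rng.standard_normal((60, 60)))   # coarse-sensor analogue

# Fusion step: fit landsat ~ a * modis + b by least squares.
a, b = np.polyfit(modis_t0.ravel(), landsat_t0.ravel(), 1)

# Day 7: only MODIS overpasses; synthesize the missing Landsat-like scene.
modis_t7 = modis_t0 + 0.05                            # reflectance has changed
synthetic_t7 = a * modis_t7 + b                       # high spatial AND temporal

# Data-mining stand-in: a linear reflectance-to-TOC map (assumed, not IDFM's).
toc_t7 = 12.0 * synthetic_t7 + 1.5                    # mg/L, illustrative
print(synthetic_t7.shape, round(float(toc_t7.mean()), 2))
```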
209

FALCONET: Force-feedback Approach For Learning From Coaching And Observation Using Natural And Experiential Training

Stein, Gary 01 January 2009 (has links)
Building an intelligent agent model from scratch is a difficult task, so it would be preferable to have an automated process perform it. There have been many manual and automatic techniques; however, each has issues with obtaining, organizing, or making use of the data. Additionally, it can be difficult to get perfect data or, once the data is obtained, impractical to get a human subject to explain why some action was performed. Because of these problems, machine learning from observation emerged to produce agent models based on observational data. Learning from observation uses unobtrusive and purely observable information to construct an agent that behaves similarly to the observed human. Typically, an observational system builds an agent based only on prerecorded observations. This type of system works well with respect to agent creation, but lacks the ability to be trained and updated on-line. To overcome these deficiencies, the proposed system adds an augmented force-feedback mode of training that senses the agent's intentions haptically. Furthermore, because not all possible situations can be observed or directly trained, a third stage of learning from practice is added, in which the agent gains additional knowledge for a particular mission. These stages of learning mimic the natural way a human might learn a task: first watching the task being performed, then being coached to improve, and finally practicing to self-improve. The hypothesis is that a system that is initially trained using human recorded data (Observational), then tuned and adjusted using force-feedback (Instructional), and then allowed to perform the task in different situations (Experiential) will be better than any individual step or combination of steps.
210

Contextualizing Observational Data For Modeling Human Performance

Trinh, Viet 01 January 2009 (has links)
This research focuses on the ability to contextualize observed human behaviors in an effort to automate the process of tactical human performance modeling through learning from observation. This effort to contextualize human behavior is aimed at minimizing the role and involvement of the knowledge engineers required to build intelligent Context-based Reasoning (CxBR) agents. More specifically, the goal is to automatically discover the context in which a human actor is situated when performing a mission, to facilitate the learning of such CxBR models. This research is derived from the contextualization problem left open in Fernlund's work on using the Genetic Context Learner (GenCL) to model CxBR agents from observed human performance [Fernlund, 2004]. To accomplish context discovery, this research proposes two contextualization algorithms: Contextualized Fuzzy ART (CFA) and Context Partitioning and Clustering (COPAC). The former is a more naive approach utilizing the well-known Fuzzy ART strategy, while the latter is a robust algorithm developed on the principles of CxBR. Using Fernlund's original five drivers, the CFA and COPAC algorithms were tested and evaluated on their ability to effectively contextualize each driver's individualized set of behaviors into well-formed and meaningful context bases, as well as to generate high-fidelity agents through integration with Fernlund's GenCL algorithm. The resulting set of agents was able to capture and generalize each driver's individualized behaviors.
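For the CFA side, here is a compact sketch of the Fuzzy ART loop it builds on: complement-coded inputs, a choice function that ranks categories, a vigilance test that accepts or rejects the best match, and fast-learning weight updates. The parameters and the toy 2-D behavior snapshots are illustrative assumptions, not the thesis's feature set.

```python
import numpy as np

def fuzzy_art(inputs, rho=0.75, alpha=0.001, beta=1.0):
    weights = []                                      # one vector per category
    labels = []
    for x in inputs:
        i = np.concatenate([x, 1.0 - x])              # complement coding
        # Rank existing categories by the choice function |i^w| / (alpha + |w|).
        order = sorted(range(len(weights)), key=lambda j: -(
            np.minimum(i, weights[j]).sum() / (alpha + weights[j].sum())))
        for j in order:
            m = np.minimum(i, weights[j])             # fuzzy AND (min)
            if m.sum() / i.sum() >= rho:              # vigilance test
                weights[j] = beta * m + (1 - beta) * weights[j]
                labels.append(j)
                break
        else:                                         # nothing passed:
            weights.append(i.copy())                  # recruit a new category
            labels.append(len(weights) - 1)
    return labels

# Four toy 2-D behavior snapshots, two per underlying context.
data = np.array([[0.10, 0.10], [0.12, 0.09], [0.90, 0.85], [0.88, 0.90]])
print(fuzzy_art(data))                                # e.g. [0, 0, 1, 1]
```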
