About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

ArtDev3D: An Artificial Development System

Høye, Johan January 2006 (has links)
<p>Evolutionary algorithms (EAs) are a class of population-based stochastic search algorithms which have proven to be powerful tools for optimization problems where the search space is complex, contains many local optima, and is so large that an exhaustive search is not possible. One application area where EAs have great potential is the design of electronic circuits. However, for this type of task each proposed solution typically requires such a large representation that an EA approach becomes infeasible because of the immense computational power it would demand. This limitation of EAs is known as the scalability problem: EAs perform well on problems requiring a small solution representation, but as the required representation size grows they quickly become too computationally expensive to be useful. Numerous approaches to the scalability problem have been proposed. One of the more promising is inspired by the way nature copes with scaling: development, the process by which an organism grows from a single fertilized cell into a multi-cellular being. By adapting some of the mechanisms of development to a computer program, the EA can evolve a relatively small genome which, when developed (i.e. decompressed) using this program, represents a solution. There are, however, some problems with this approach. One issue is that biological development is such a complex process that implementing it in all its detail is neither feasible nor desirable, so a decision must be made about which mechanisms to implement and which to leave out. Another issue is the increased difficulty of evolving a good solution. This occurs because EAs depend on gradual refinement of the solution to be effective, but with this approach a small change in the genome may lead to a large change in the corresponding solution.
This is because there is no longer a direct correspondence between the genotype space and the solution space, so what is adjacent in genotype space may be far apart in solution space. Even though gradual refinement is achieved in genotype space, the changes in the corresponding solution space may appear more or less random. A novel artificial development system, designed and implemented from scratch, is presented in this thesis. A new system was built because, although a number of such systems have already been implemented, they are all at the experimental stage, and this system is thought to be a useful supplement to the existing ones, providing more material on which to base an understanding of what may be useful in an artificial development system. An explorative approach was taken in which the implemented system was put through a number of tests to investigate its capabilities. First, the system's ability to develop a varied set of shapes was investigated. Secondly, four parameters were tested for their effect on the system's ability to develop good solutions: the initial number of neighbours, the number of chemical types used (both part of a precondition), the number of cell types available to the system, and the degree of symmetry in the target shapes. The experiments showed that the system is able to develop a number of shapes, and for each of the four investigated parameters there are indications of a profound effect on the system's ability to develop a given target.</p>
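The genotype-to-phenotype decompression described above can be illustrated with a toy sketch (not the thesis's ArtDev3D system; the rewrite-rule genome and all names here are invented for illustration): a tiny genome of rewrite rules is repeatedly applied to a seed, so a short genome "develops" into a much larger phenotype, and a one-symbol change in a rule can reshape the entire result.

```python
# Hypothetical sketch of a developmental genotype-phenotype mapping.
# The genome is a short list of (symbol, expansion) rewrite rules;
# development repeatedly rewrites a seed string, so a small genome
# decompresses into a large phenotype.

def develop(genome, seed="a", steps=4):
    """Decompress a genome of rewrite rules into a phenotype string."""
    rules = dict(genome)
    s = seed
    for _ in range(steps):
        # Apply every matching rule in parallel, one pass per step.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

genome = [("a", "ab"), ("b", "a")]   # two rules encode the whole phenotype
phenotype = develop(genome)          # "abaababa": 8 symbols from 2 rules
```

Swapping the first rule for `("a", "b")` collapses the phenotype to a single symbol, which illustrates why neighbouring genotypes can map to distant solutions.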
82

Facilitating Prototyping of New Technology in a Development Environment: A Designer's Perspective

Evjen, Bjørn Sand January 2004 (has links)
<p>When new technologies are developed, they are often reserved for a select few, since great complexity can raise the threshold so high that only domain experts can fully understand the technology. For a technology to be made available to a larger number of people, it may become necessary to lower that threshold. The challenge then becomes simplifying the technology and presenting it in a more "digestible" form.</p><p>A large part of the activity within technological development can be said to be related to simplifying technological complexity. In today's technology-intensive society, simplification is essential for a technology to achieve any significant adoption. There may be competition between several similar technologies, and it need not be the most technically optimal solution but just as often the most user-friendly one that wins adoption.</p><p>The background for this thesis was that I wanted to examine how new technology could be simplified so that it became more accessible to non-experts. I wanted to see whether the technology could be presented in a comprehensible way in a development environment so that functional prototypes could be designed quickly. For the technology to be more accessible, its complexity had to be reduced so that the threshold for trying out the new technology was lowered.</p>
83

Distributed Knowledge in Case-Based Reasoning : Knowledge Sharing and Reuse within the Semantic Web

Fidjeland, Mikael Kirkeby January 2006 (has links)
<p>The Semantic Web is an emerging framework for data reuse and sharing. By giving data clear semantics, it allows machine processing of this information. Semantic Web technologies range from simple metadata to domain models using the Web Ontology Language (OWL). Much of the semantics of OWL stems from the Description Logics branch of Knowledge Representation. Case-Based Reasoning (CBR) uses specific knowledge in the form of cases to solve problems. The Creek system is a knowledge-intensive approach to CBR that combines specific knowledge with general domain knowledge. Part of this work is to define an OWL vocabulary for Creek: the concepts used to describe case bases and domain models, and how these concepts are related to each other. Using this vocabulary, a knowledge model from a Creek system can be described in OWL and shared with others on the Semantic Web. We also examine how domain ontologies can be reused and imported into a Creek knowledge model. We propose a design for Creek to operate in the context of the Semantic Web and its distributed knowledge. Part of the design is tested by implementation.</p>
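The idea of describing a case in a shareable vocabulary can be sketched as subject-predicate-object triples, the data model underlying OWL and RDF. This is purely illustrative: the namespace, the `hasFinding`/`hasSolution` property names, and the case layout are invented here, not the thesis's actual Creek vocabulary.

```python
# Illustrative sketch: one CBR case expressed as RDF-style triples so it
# could, in principle, be shared on the Semantic Web. All vocabulary terms
# below are hypothetical placeholders.

CREEK = "http://example.org/creek#"   # made-up namespace

def describe_case(case_id, findings, solution):
    """Build RDF-like (subject, predicate, object) triples for one case."""
    s = CREEK + case_id
    triples = [(s, "rdf:type", CREEK + "Case")]
    triples += [(s, CREEK + "hasFinding", f) for f in findings]
    triples.append((s, CREEK + "hasSolution", solution))
    return triples

triples = describe_case("case42", ["fever", "cough"], "influenza")
```

A real implementation would serialize such triples with an RDF library and anchor the property names in a published OWL ontology so other systems can interpret them.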
84

Reducing catastrophic forgetting in neural networks using slow learning

Vik, Mikael Eikrem January 2006 (has links)
<p>This thesis describes a connectionist approach to learning and long-term memory consolidation, inspired by empirical studies on the roles of the hippocampus and neocortex in the brain. The existence of complementary learning systems is due to demands posed on our cognitive system by the nature of our experiences. It has been shown that dual-network architectures utilizing information transfer can successfully avoid the phenomenon of catastrophic forgetting involved in multiple sequence learning. The experiments involve a Reverberated Simple Recurrent Network which is trained on multiple sequences, with the memory being reinforced by means of self-generated pseudopatterns. My focus is on how differentiated learning speed affects the level of forgetting, without explicit training on the data used to form the existing memory.</p>
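The pseudopattern mechanism mentioned above can be sketched in a few lines (a simplification, not the thesis's Reverberated Simple Recurrent Network; the stand-in "network" here is just a function): random inputs are fed to the old network, and the resulting self-generated input/output pairs are interleaved with training on new data so the old memory is rehearsed without access to the original training set.

```python
import random

# Minimal sketch of pseudopattern rehearsal for reducing catastrophic
# forgetting. The "network" is any callable; a trained model would be
# used in practice.

def make_pseudopatterns(network, n_inputs, count, rng):
    """Sample random inputs and label them with the old network's outputs."""
    patterns = []
    for _ in range(count):
        x = [rng.random() for _ in range(n_inputs)]
        y = network(x)            # self-generated target encodes old memory
        patterns.append((x, y))
    return patterns

old_net = lambda x: [sum(x)]      # stand-in for a previously trained network
rng = random.Random(0)
pseudo = make_pseudopatterns(old_net, 3, 5, rng)
```

During consolidation these pairs would be mixed into the new training batches, so updates that serve the new sequence are constrained to keep reproducing the old network's behaviour.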
85

Learning and Evolution in Complex Fitness Landscapes

Karlsen, Ero Stig January 2007 (has links)
<p>The Baldwin effect is the notion that lifetime adaptation can speed up evolution by 1) identifying good traits and 2) inscribing those traits genetically in the population through genetic assimilation. This thesis investigates the Baldwin effect by giving an introduction to its history and its current status in evolutionary biology, and by reviewing some important experiments on the Baldwin effect in artificial life. It is shown that the Baldwin effect is perceived differently in the two fields: in evolutionary biology the phenomenon is surrounded by controversy, while the approach in artificial life seems more straightforward. Numerous computer simulations of the Baldwin effect have been conducted, and most report positive findings. I argue that the Baldwin effect has been interpreted differently in the literature, and that a more well-defined approach is needed. An experiment is performed in which the effect of learning on evolution is observed in fitness landscapes of different complexity and with different learning costs. It is shown that the choice of operators and parameter settings is important when assessing the Baldwin effect in computer simulations. In particular I find that mutation has an important impact on the Baldwin effect. I argue that today's computer simulations are too abstract to serve as empirical evidence for the Baldwin effect, but that they can nevertheless be valuable indications of the phenomenon in nature. To assure the soundness of experiments on the Baldwin effect, the assumptions and choices made in the implementations need to be clarified and critically discussed. One important aspect is to compare the different experiments and their interpretations in an attempt to assess the coherence between the different simulations.</p>
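The interplay of lifetime learning and learning cost described above is commonly modelled along these lines (a generic sketch with made-up parameters, not the thesis's actual experiment): an individual's selective fitness is what it can reach by a few learning steps of local search, minus a cost per step actually used, so learning smooths the landscape that selection sees.

```python
import random

# Hedged sketch of a Baldwin-effect fitness evaluation on bit strings.
# The landscape, step counts, and cost are illustrative placeholders.

def raw_fitness(bits):
    """Innate fitness: a simple count-of-ones toy landscape."""
    return sum(bits)

def learned_fitness(bits, steps, cost, rng):
    """Fitness after lifetime learning: hill-climb by single bit flips,
    paying `cost` for each accepted learning step."""
    current, used = list(bits), 0
    for _ in range(steps):
        trial = list(current)
        j = rng.randrange(len(trial))
        trial[j] ^= 1                       # one lifetime adaptation
        if raw_fitness(trial) > raw_fitness(current):
            current = trial                 # keep improvements only
            used += 1
    return raw_fitness(current) - cost * used
```

With `cost` near zero, learning masks genetic differences; with a high cost, selection favours genotypes that are innately good, which is the pressure behind genetic assimilation.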
86

Dialogue Learning in CCBR

Vartdal, Hans Arne January 2007 (has links)
<p>In the field of palliative care there is a need to create adaptive questionnaires to minimize the patient's "cognitive load" when acquiring data on the patient's subjective experience of pain. A conversational case-based reasoning (CCBR) system can be used as a basis for such questionnaires, with dialogue learning as a method for reducing the number of questions asked without deterioration of the data quality. In this thesis, methods for question ranking, dialogue inference, and dialogue learning have been reviewed. A case-based reasoning framework is introduced and improved, and based on this, a CCBR system with an extension for dialogue learning has been designed and implemented. The result was tested with well-known datasets, as well as new data from a survey on patients' experience of pain. Evaluation shows that dialogue learning can be used to reduce the number of questions asked, but also reveals some problems with the automatic evaluation of solutions found using query-biased similarity measures.</p>
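One common question-ranking heuristic in CCBR can be sketched as follows (a generic illustration with invented case data, not the thesis's framework): the next question is the one whose possible answers split the remaining candidate cases most evenly, measured by the entropy of the answer distribution.

```python
import math

# Sketch of entropy-based question ranking for a conversational CBR dialogue.
# The case attributes and values below are hypothetical.

def answer_entropy(cases, question):
    """Entropy of the answer distribution for `question` over the cases."""
    counts = {}
    for case in cases:
        a = case.get(question)
        counts[a] = counts.get(a, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def rank_questions(cases, questions):
    """Most discriminating (highest-entropy) questions first."""
    return sorted(questions, key=lambda q: answer_entropy(cases, q), reverse=True)

cases = [{"pain": "sharp", "site": "back"},
         {"pain": "dull",  "site": "back"},
         {"pain": "sharp", "site": "back"},
         {"pain": "dull",  "site": "leg"}]
```

Here `"pain"` splits the four cases 2-2 while `"site"` splits them 3-1, so `"pain"` is asked first; asking the high-entropy question first is what lets the dialogue terminate in fewer questions.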
87

Incrementally Evolving a Dynamic Neural Network for Tactile-Olfactory Insect Navigation

Thuv, Øyvin Halfdan January 2007 (has links)
<p>This Master's thesis gives a thorough description of a study carried out in the Self-Organizing Systems group at NTNU. Much AI research in recent years has moved towards increased use of representationless strategies such as simulated neural networks. One technique for creating such networks is to evolve them using simulated Darwinian evolution. This is a powerful technique, but it is often limited by the computer resources available. One way to speed up evolution is to focus the evolutionary search on a narrower range of solutions. It is, for example, possible to favour evolution of a specific "species" by initializing the search with a specialized set of genes. A disadvantage of doing this is of course that many other solutions (or "species") are disregarded, so that good solutions may in theory be lost. It is therefore necessary to find focusing strategies that are generally applicable and (with high probability) only disregard solutions that are considered unimportant. Three different ways of focusing evolutionary search for cognitive behaviours are merged and evaluated in this thesis. On a macro level, incremental evolution is applied to partition the evolutionary search. On a micro level, two specific properties of the chosen neural network model (CTRNNs) are exploited: seeding initial populations with center-crossing neural networks and/or bifurcative neurons. The techniques are compared to standard, naive evolutionary searches by applying them to the evolution of simulated neural networks for the walking and control of a six-legged mobile robot, a problem simple enough to be satisfactorily understood, but complex enough to be a challenge for a traditional evolutionary search.</p>
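The CTRNN model named above has a standard state equation, tau_i dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i, which can be integrated with a simple Euler step. The weights and parameters below are arbitrary placeholders, not values from the thesis.

```python
import math

# Euler-integration sketch of a continuous-time recurrent neural network
# (CTRNN). Parameters here are illustrative only.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, tau, w, theta, inputs, dt=0.01):
    """One Euler step of tau_i dy_i/dt = -y_i + sum_j w[j][i]*sigma(y_j + theta_j) + I_i."""
    n = len(y)
    new_y = []
    for i in range(n):
        net = sum(w[j][i] * sigmoid(y[j] + theta[j]) for j in range(n))
        dy = (-y[i] + net + inputs[i]) / tau[i]
        new_y.append(y[i] + dt * dy)
    return new_y

# Two mutually excitatory neurons, one receiving a constant input.
y = ctrnn_step([0.0, 0.0], [1.0, 1.0],
               [[0.0, 1.0], [1.0, 0.0]],   # w[j][i]: no self-connections
               [0.0, 0.0], [0.5, 0.0])
```

The "center-crossing" seeding mentioned in the abstract corresponds to choosing each theta_j so that the neuron's sigmoid operates around its most sensitive midpoint, which is believed to make the initial population dynamically richer.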
88

Evolving a 2D Model of an Eye using CPPNs

Storsveen, Anders January 2008 (has links)
<p>This thesis uses CPPNs to evolve 2D models of an eye. The models are graded by a fitness function rewarding high information retrieval. The thesis shows resulting models with interesting properties that are similar in form to real-world eyes.</p>
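A CPPN (compositional pattern-producing network) is queried as a function of spatial coordinates, so a 2D model falls out of sampling it over a grid. The sketch below hard-codes one radially symmetric composed function in place of an evolved network; the function, grid size, and span are all invented for illustration.

```python
import math

# CPPN-flavoured sketch: a composed function of (x, y), queried over a grid,
# yields a 2D pattern with built-in regularities such as radial symmetry.

def cppn(x, y):
    """Stand-in for an evolved network: ripples decaying from the centre."""
    r = math.sqrt(x * x + y * y)
    return math.sin(3.0 * r) * math.exp(-r)

def render(size=5, span=2.0):
    """Sample the CPPN on a size x size grid over [-span, span]^2."""
    grid = []
    for i in range(size):
        row = []
        for j in range(size):
            x = span * (2.0 * j / (size - 1) - 1.0)
            y = span * (2.0 * i / (size - 1) - 1.0)
            row.append(cppn(x, y))
        grid.append(row)
    return grid

pattern = render()
```

In an evolved CPPN the composed functions (sine, Gaussian, identity, ...) and their wiring are what evolution searches over; the coordinate-based encoding is why regular, eye-like structures come cheaply.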
89

Document Clustering (Dokument-klynging)

Galåen, Magnus January 2008 (has links)
<p>As document searching becomes more and more important with the rapid growth of today's document bases, document clustering also becomes more important. Some of the most commonly used document clustering algorithms today are purely statistical in nature. Other algorithms have emerged, addressing some of the issues with numerical algorithms and claiming to be better. This thesis compares two well-known algorithms, Elliptic K-Means and Suffix Tree Clustering, in terms of speed and quality. It is shown that Elliptic K-Means performs better in speed, while Suffix Tree Clustering (STC) performs better in quality. It is further shown that STC performs better on real web data using small portions of relevant text (snippets) than using the full document. It is also shown that a threshold value for base cluster merging is unnecessary. As STC is shown to perform adequately in speed when running on snippets only, it is concluded that STC is the better algorithm for the purpose of search-results clustering.</p>
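The base-cluster idea behind STC can be sketched in simplified form (shared single words instead of a real suffix tree of shared phrases; the snippets and threshold are made up): every term occurring in two or more snippets induces a candidate cluster.

```python
# Simplified sketch of Suffix Tree Clustering's base clusters. A real STC
# implementation builds a suffix tree over shared *phrases*; single shared
# words are used here only to keep the illustration short.

def base_clusters(snippets, min_docs=2):
    """Map each term shared by >= min_docs snippets to the snippet indices."""
    by_term = {}
    for idx, text in enumerate(snippets):
        for term in set(text.lower().split()):
            by_term.setdefault(term, set()).add(idx)
    return {t: docs for t, docs in by_term.items() if len(docs) >= min_docs}

snips = ["apple pie recipe", "apple tart recipe", "jaguar speed"]
clusters = base_clusters(snips)   # {"apple": {0, 1}, "recipe": {0, 1}}
```

Full STC then merges base clusters whose document sets overlap heavily; the thesis's finding that the merge threshold is unnecessary concerns that second stage.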
90

A Flexible Platform for Comparison of Artificial Development Models : Ngene - An Artificial Development Framework

Nguyen, Tommy Anh Tuan January 2008 (has links)
<p>In recent years, artificial development has been introduced to evolutionary algorithms as a means to overcome the scalability problem. Though in its early stages, it has shown a lot of promise, and many studies have been conducted to improve our understanding of the methodology. Though many have been successful, there have also been contradictory results. Further studies have been difficult because the results were obtained using not only different models but different platforms as well, so any comparison at this point is full of uncertainties simply because there are too many factors to consider. I wish to contribute to the field of artificial development with this thesis. However, it contains no comparison of development models and no new model; I leave such tasks to others. Instead, a platform for building development models is introduced, whose purpose is made clear in the course of the thesis. Two prominent models are picked out and ported to this platform, and it is shown that not only is it possible to re-implement these models, it is also possible to reproduce their results. This demonstrates the flexibility of the framework as well as the benefits of using it.</p>
