51

Improving sliding-block puzzle solving using meta-level reasoning

Spaans, Ruben Grønning January 2010 (has links)
In this thesis, we develop a meta-reasoning system based on case-based reasoning (CBR) that solves sliding-block puzzles. The meta-reasoning system is built on top of a search-based sliding-block puzzle solver developed as part of the specialization project at NTNU. As part of the thesis work, we study existing literature on automatic puzzle solving and state space search, as well as the use of reasoning and meta-level reasoning applied to puzzles and games. The literature study forms the theoretical foundation for the development of the meta-reasoning system. The system is further enhanced with a meta-control cycle that uses randomized search to generate new cases to apply to puzzles. In addition, we explore several ways of improving the underlying solver: solving hard puzzles by reusing solutions for easier variants, and developing a more memory-efficient representation of puzzle configurations. We evaluate our system and show that it offers a slight improvement over solving the puzzles with a set of general cases, as well as a vast improvement for a few isolated test cases, although its performance is slightly behind the hand-tuned parameters found in the specialization project. We conclude by identifying parts of the system where improvements can be made and suggesting other promising areas for further research.
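The thesis code is not reproduced here, but the core CBR step can be sketched as nearest-neighbour retrieval of solver parameters from previously solved puzzles. The feature and parameter names below are hypothetical placeholders, not the thesis' actual case representation:

```python
# A minimal sketch of CBR-style retrieval of solver parameters, assuming a
# hypothetical feature vector per puzzle and a case base built from previously
# solved puzzles; feature and parameter names are placeholders.
from dataclasses import dataclass

@dataclass
class Case:
    features: tuple          # e.g. (width, height, number_of_pieces)
    solver_params: dict      # e.g. {"heuristic_weight": 2.0, "beam_width": 500}

def retrieve_params(case_base, query_features):
    """Return the solver parameters of the most similar stored case."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_case = min(case_base, key=lambda c: distance(c.features, query_features))
    return best_case.solver_params

case_base = [
    Case((4, 5, 10), {"heuristic_weight": 1.5, "beam_width": 200}),
    Case((6, 6, 18), {"heuristic_weight": 3.0, "beam_width": 800}),
]
print(retrieve_params(case_base, (5, 5, 12)))   # picks the closer of the two cases
```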
52

Adaptive Robotics : A behavior-based system for control of mobile robots

Johansen, Maria January 2010 (has links)
This report explores behavior-based robotics and relevant AI techniques. A system for autonomous control of mobile robots inspired by behavior-based robotics, in particular Rodney Brooks' subsumption architecture, has been implemented and adapted for use in a multi-agent environment. The system is modular and flexible, allowing for easy addition and removal of system parts. A weight-based command fusion approach is taken to action selection, making it possible to satisfy multiple behaviors simultaneously.
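As a rough illustration of weight-based command fusion in general (a sketch, not the implemented system; behavior names, sensor keys and constants are made up):

```python
# A sketch of weight-based command fusion: every behavior proposes a
# (linear_velocity, angular_velocity) command together with a weight, and the
# fused command is the weighted average of all proposals.
def avoid_obstacle(sensors):
    close = sensors["front_distance"] < 0.5
    # propose a sharp turn; the weight is high only when an obstacle is near
    return (0.1, 1.2), (0.9 if close else 0.1)

def go_to_goal(sensors):
    # steer toward the goal bearing at cruising speed
    return (0.6, 0.5 * sensors["goal_bearing"]), 0.6

def fuse(proposals):
    """Command fusion (not arbitration): all behaviors contribute proportionally."""
    total_weight = sum(w for _, w in proposals)
    linear = sum(cmd[0] * w for cmd, w in proposals) / total_weight
    angular = sum(cmd[1] * w for cmd, w in proposals) / total_weight
    return linear, angular

sensors = {"front_distance": 0.3, "goal_bearing": -0.4}
print(fuse([behavior(sensors) for behavior in (avoid_obstacle, go_to_goal)]))
```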
53

Dynamic Scheduling for Autonomous Robotics

Ellefsen, Kai Olav January 2010 (has links)
This project report describes a hybrid genetic algorithm that works as a schedule generator for a complex robotic harvesting task. The task is set in a dynamic environment with a robotic opponent, making responsiveness of the planning algorithm particularly important. To solve this task, many previous scheduling algorithms were studied. Genetic algorithms have been used successfully in many dynamic scheduling tasks, due to their ability to incrementally adapt and optimize solutions when changes are made to the environment. Many of the previous approaches also used a separate heuristic to quickly adapt solutions to the new environment, making the algorithm more responsive. In addition, the study of previous work revealed the importance of population diversity when making a responsive genetic algorithm. Implementation was based on a genetic algorithm developed as the author's fifth-year specialization project for solving a static version of the same task. This algorithm was hybridized with a powerful local search technique that proved essential in generating good solutions for the complex harvesting task. Extending the algorithm to a dynamically changing environment required several adaptations and extensions to make it more responsive: a fast-response heuristic for immediate adaptation to environmental changes, a decrease in genotype size to speed up local searches, and a contingency planning module intended to solve problems before they arise. Experiments showed that the implemented dynamic planner successfully adapted its plans to a changing environment, clearly improving on running a static plan. Further experiments also showed that the dynamic planner handled erroneous time estimates in its simulator module well. Experiments on contingency planning gave no clear results, but indicated that using computational resources for planning ahead may be a good choice if the contingency to plan for is carefully selected. As no unequivocal results were obtained, further studies combining genetic algorithms and contingency planning may be an interesting task for future efforts.
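A minimal sketch of the hybrid (memetic) scheme described above, applied to a toy permutation-scheduling objective rather than the harvesting task itself; all problem data and parameters are illustrative:

```python
# A permutation GA whose offspring are improved by a swap-based local search.
# The objective is a toy sum of completion times, standing in for the
# harvesting task; population size and generation count are illustrative.
import random

def fitness(schedule, durations):
    elapsed, total = 0, 0
    for job in schedule:
        elapsed += durations[job]
        total += elapsed            # sum of completion times (lower is better)
    return total

def local_search(schedule, durations):
    """Swap pairs of jobs whenever the swap improves fitness."""
    best = list(schedule)
    for i in range(len(best)):
        for j in range(i + 1, len(best)):
            candidate = list(best)
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if fitness(candidate, durations) < fitness(best, durations):
                best = candidate
    return best

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + [job for job in b if job not in a[:cut]]

durations = [random.randint(1, 9) for _ in range(8)]
population = [random.sample(range(8), 8) for _ in range(20)]
for _ in range(50):
    population.sort(key=lambda s: fitness(s, durations))
    parents = population[:10]
    children = [local_search(crossover(random.choice(parents), random.choice(parents)),
                             durations)
                for _ in range(10)]
    population = parents + children

best = min(population, key=lambda s: fitness(s, durations))
print(best, fitness(best, durations))
```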
54

Distribuert database for posisjonslagring / Distributed database for location storage

Nygaard, Eirik Alderslyst January 2010 (has links)
Social interaction is a necessity in today's society, and people want to know where their friends are. This thesis describes a system that receives location information from users and distributes it to the intended recipients. The system is divided into zones, each tied to a physical area. A user has a home zone that is responsible for always holding the user's most recent information, such as who should receive the location updates the user submits. When a user moves across zones, the necessary user information is transferred to the new zone so that it can take over responsibility for distributing all information. The home zone still keeps track of where the user is, and when a user moves between two zones it ensures that the new zone receives the correct information and that the old zone is told that the user has left it.
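One possible reading of the zone and home-zone handover described above can be sketched as follows; class and field names are hypothetical and not taken from the thesis implementation:

```python
# Each zone stores user records; when a user crosses into a new zone the record
# is handed over, while the home zone keeps track of which zone currently
# serves the user. Purely illustrative, in-memory, single-process.
class Zone:
    def __init__(self, name):
        self.name = name
        self.users = {}            # user_id -> {"subscribers": [...], "location": ...}
        self.current_zone_of = {}  # home-zone bookkeeping: user_id -> serving zone

    def update_location(self, user_id, location):
        record = self.users[user_id]
        record["location"] = location
        for subscriber in record["subscribers"]:
            print(f"{self.name}: notify {subscriber}: {user_id} is at {location}")

    def hand_over(self, user_id, new_zone, home_zone):
        new_zone.users[user_id] = self.users.pop(user_id)  # transfer the record
        home_zone.current_zone_of[user_id] = new_zone      # home zone stays informed

home = Zone("oslo")    # acts as home zone for "alice"
away = Zone("bergen")
home.users["alice"] = {"subscribers": ["bob"], "location": None}
home.current_zone_of["alice"] = home

home.update_location("alice", (59.91, 10.75))
home.hand_over("alice", away, home)        # alice moves into the bergen zone
away.update_location("alice", (60.39, 5.32))
```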
55

Extraction-Based Automatic Summarization : Theoretical and Empirical Investigation of Summarization Techniques

Sizov, Gleb January 2010 (has links)
A summary is a shortened version of a text that contains the main points of the original content. Automatic summarization is the task of generating a summary by a computer. For example, given a collection of news articles for the last week, an automatic summarizer is able to create a concise overview of the important events. This summary can be used as a replacement for the original content or help to identify the events that a person is particularly interested in. Potentially, automatic summarization can save a lot of time for people who deal with a large amount of textual information. The straightforward way to generate a summary is to select several sentences from the original text and organize them in a way that creates a coherent text. This approach is called extraction-based summarization and is the topic of this thesis. Extraction-based summarization is a complex task that consists of several challenging subtasks. The essential part of the extraction-based approach is identification of sentences that contain important information. This can be done using graph-based representations and centrality measures that exploit similarities between sentences to identify the most central ones. This thesis provides a comprehensive overview of methods used in extraction-based automatic summarization. In addition, several general natural language processing issues, such as feature selection and text representation models, are discussed with regard to automatic summarization. Part of the thesis is dedicated to graph-based representations and centrality measures used in extraction-based summarization. Theoretical analysis is reinforced with experiments using the summarization framework implemented for this thesis. The task for the experiments is query-focused multi-document extraction-based summarization, that is, summarization of several documents according to a user query. The experiments investigate several approaches to this task as well as the use of different representation models, similarity measures and centrality measures. The obtained results indicate that the use of graph centrality measures significantly improves the quality of generated summaries. Among the variety of centrality measures, degree-based ones perform better than path-based measures. The best performance is achieved when centralities are combined with redundancy removal techniques that prevent inclusion of similar sentences in a summary. Experiments with representation models reveal that a simple local term count representation performs better than the distributed representation based on latent semantic analysis, which indicates that further investigation of distributed representations with regard to automatic summarization is necessary. The implemented system performs quite well compared with the systems that participated in the DUC 2007 summarization competition. Nevertheless, manual inspection of the generated summaries reveals some flaws of the implemented summarization mechanism that can be addressed by introducing advanced algorithms for sentence simplification and sentence ordering.
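As an illustration of the degree-centrality-plus-redundancy-removal approach the experiments favoured, here is a small sketch using raw term counts and cosine similarity; the thresholds, tokenization and example sentences are assumptions, not the thesis' configuration:

```python
# Degree-centrality sentence extraction with simple redundancy removal.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def summarize(sentences, k=2, edge_threshold=0.1, redundancy_threshold=0.5):
    vectors = [Counter(s.lower().replace(".", "").split()) for s in sentences]
    # degree centrality: how many other sentences are sufficiently similar
    degree = [sum(1 for j in range(len(vectors))
                  if j != i and cosine(vectors[i], vectors[j]) > edge_threshold)
              for i in range(len(vectors))]
    chosen = []
    for i in sorted(range(len(sentences)), key=lambda i: -degree[i]):
        # skip sentences too similar to what is already in the summary
        if all(cosine(vectors[i], vectors[j]) < redundancy_threshold for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return [sentences[i] for i in sorted(chosen)]

articles = ["The flood damaged the old bridge.",
            "Officials say the bridge was damaged by the flood.",
            "A new bridge is planned for next year.",
            "Residents were evacuated before the flood."]
print(summarize(articles))
```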
56

Using Artificial Neural Networks To Forecast Financial Time Series

Aamodt, Rune January 2010 (has links)
This thesis investigates the application of artificial neural networks (ANNs) for forecasting financial time series (e.g. stock prices). The theory of technical analysis dictates that there are repeating patterns in the historic prices of stocks, and that identifying these patterns can help in forecasting future price developments. A system was therefore developed which contains several "agents", each producing recommendations on the stock price based on some aspect of technical analysis theory. It was then tested whether ANNs, using these recommendations as inputs, could be trained to forecast stock price fluctuations with some degree of precision and reliability. The predictions of the ANNs were evaluated by calculating the Pearson correlation between the predicted and actual price changes, and the "hit rate" (how often the predicted and the actual change had the same sign). Although somewhat mixed overall, the empirical results seem to indicate that at least some of the ANNs were able to learn enough useful features to have significant predictive power. Tests were performed with ANNs forecasting over different time frames, including intraday. The predictive performance was seen to decline on the shorter time scales.
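The two evaluation measures mentioned above are easy to state concretely; the numbers below are made-up predicted and actual price changes used only to show the computation:

```python
# Pearson correlation and hit rate (sign agreement) between predicted and
# actual price changes; the data here is illustrative, not from the thesis.
import numpy as np

predicted = np.array([0.4, -0.2, 0.1, -0.5, 0.3])
actual    = np.array([0.6, -0.1, -0.2, -0.4, 0.2])

pearson = np.corrcoef(predicted, actual)[0, 1]
hit_rate = np.mean(np.sign(predicted) == np.sign(actual))

print(f"Pearson correlation: {pearson:.2f}")
print(f"Hit rate: {hit_rate:.0%}")
```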
57

Multimodal Volume to Volume Registration between Ultrasound and MRI

Ryen, Tommy January 2006 (has links)
This master's thesis considers implementation of automated multimodal volume-to-volume registration of images, in order to provide neurosurgeons with valuable information for planning and intraoperative guidance. The focus has been on medical images from magnetic resonance (MR) and ultrasound (US) for use in surgical guidance. Prototype implementations for MRI-to-US registration have been proposed and tested, using registration methods available in the Insight Toolkit (ITK). Mattes' Mutual Information has been used as the similarity metric, based on unpreprocessed angiographic volumes from both modalities. Only rigid transformations have been studied, and both Gradient Descent and Evolutionary optimizers have been examined. The applications have been tested on clinical data from relevant surgical operations. The best results were obtained using an evolutionary (1+1) optimizer for translational transformations only. This application was both fast and accurate. The other applications, using variants of Gradient Descent optimizers, proved to be significantly slower, inaccurate and more difficult to parameterize. Experience showed that registration of angiographic volumes is easier to accomplish than registration of volumes with other weightings, due to their more similar characteristics. Angiographic images are also readily evaluated using volume renderings, but other methods should be constructed to provide a less subjective measure of success for the registration procedures. The obtained results indicate that automatic volume-to-volume registration of angiographic images from MRI and US, using Mattes' Mutual Information and an Evolutionary optimizer, should be feasible for the neuronavigation system considered here, with sufficient accuracy. Further development includes parameter tuning of the applications to possibly achieve increased accuracy. Additionally, a non-rigid registration application should be developed to account for local deformations during surgery, along with additional tools for performing accurate validation of registration results.
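A generic sketch of the (1+1) evolutionary strategy behind the best-performing optimizer, applied here to a toy translation-only objective rather than Mattes' mutual information on actual MR/US volumes; the metric and step-size constants are illustrative assumptions, not ITK code:

```python
# (1+1) evolutionary strategy: mutate the current translation, keep the
# mutation if the similarity improves, and grow or shrink the search radius
# accordingly.
import numpy as np

rng = np.random.default_rng(0)
true_offset = np.array([3.0, -2.0, 1.0])   # unknown translation we try to recover

def similarity(offset):
    # toy metric: higher is better; a real system would resample the moving
    # volume with this offset and evaluate an image-to-image metric
    return -np.sum((offset - true_offset) ** 2)

offset = np.zeros(3)
radius, grow, shrink = 1.0, 1.5, 0.8
best_score = similarity(offset)
for _ in range(300):
    candidate = offset + rng.normal(scale=radius, size=3)
    score = similarity(candidate)
    if score > best_score:                 # accept the mutation, widen the search
        offset, best_score, radius = candidate, score, radius * grow
    else:                                  # reject it, narrow the search
        radius *= shrink

print("estimated translation:", np.round(offset, 2))
```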
58

Automatic Optimization of MPI Applications : Turning Synchronous Calls Into Asynchronous

Natvig, Thorvald January 2006 (has links)
The availability of cheap computers with outstanding single-processor performance, coupled with Ethernet and the development of open MPI implementations, has led to a drastic increase in the number of HPC clusters. This, in turn, has led to many new HPC users. Ideally, all users are proficient programmers who always optimize their programs for the specific architecture they are running on. In practice, users only invest enough effort that their program runs correctly. While we would like to teach all HPC users how to be better programmers, we realize most users consider HPC a tool and would like to focus on their application problem. To this end, we present a new method for automatically optimizing any application's communication. By protecting the memory associated with MPI_Send, MPI_Recv and MPI_Sendrecv requests, we can let the request continue in the background as MPI_Isend or MPI_Irecv while the application is allowed to continue in the belief that the request is finished. Once the data is accessed by the application, our protection ensures we wait for the background transfer to finish before allowing the application to continue. Also presented is an alternate method with less overhead, based on recognizing series of requests made between computation phases. We allow the requests in such a chain to overlap with each other, and once the end of such a chain of requests is reached, we wait for all the requests to complete. All of this is done without any user intervention at all. The method can be dynamically injected at runtime, which makes it applicable to any MPI program in binary form. We have implemented a 2D parallel red-black SOR PDE solver, which due to its alternating red and black cell transfers represents a "worst case" communication pattern for MPI programs with 2D data domain decomposition. We show that our new method greatly improves the efficiency of this application on a cluster, yielding performance close to that of manual optimization.
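The effect of the chain-of-requests transformation can be sketched by hand in mpi4py: issue a phase's exchanges as non-blocking requests so they overlap, then wait for the whole chain before touching the data. The thesis performs this transparently at the binary level via memory protection; the ring halo exchange below is only an illustration of the resulting communication pattern:

```python
# Run with e.g.: mpirun -np 4 python halo_chain.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

send_buf = np.full(1024, rank, dtype=np.float64)
recv_left = np.empty(1024, dtype=np.float64)
recv_right = np.empty(1024, dtype=np.float64)

# chain of overlapping requests instead of back-to-back blocking Sendrecv calls
requests = [
    comm.Isend(send_buf, dest=left, tag=0),
    comm.Isend(send_buf, dest=right, tag=1),
    comm.Irecv(recv_left, source=left, tag=1),
    comm.Irecv(recv_right, source=right, tag=0),
]
MPI.Request.Waitall(requests)  # end of the chain: now it is safe to read the halos

if rank == 0:
    print("halo from left neighbour:", recv_left[0])
```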
59

Virtual Reality Techniques in the Domain of Surgical Simulators

Haaland, Terje Sanderud January 2006 (has links)
Virtual reality based surgical simulators offer an elegant approach to enhancing traditional training in surgery. For interactive surgery simulation to be useful, however, several requirements need to be fulfilled. Good visualisation is needed. The physical behavior of an organ must be modeled realistically. Finally, a device capable of force feedback is needed to realize the possibility of "feeling" a virtual object. In this thesis, a basic prototype was developed to demonstrate all the concepts needed in a surgical simulator. The study aimed at finding a suitable architecture and design for the development of a surgical simulation application that is conceptually clear and easy to comprehend. Moreover, it was considered important that the prototype provides a good basis for further experimentation. The main focus was on finding a satisfactory method that demonstrates the main concepts while keeping the complexity as low as possible. In the developed prototype, the visual impression of 3D is present, the haptic feedback works satisfactorily, and the physical modelling proved to be feasible for simulating a virtual object. The object-oriented design resulted in a compact and clear application, where changes in the implementation can be applied locally without unwanted implications elsewhere in the code. Due to these qualities, implementing multi-resolution and cutting was an easy task; only minor changes to limited parts of the application were needed. This shows its suitability as a starting point for future experimentation and demonstration of concepts in the field of surgical simulation.
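A minimal sketch of the kind of physical model and force-feedback loop a prototype like this can use: a single mass-spring-damper node driven by a hypothetical tool force and integrated with explicit Euler; all parameters are illustrative and not from the thesis:

```python
# One deformable node pulled by a tool force; the spring reaction is what a
# haptic device would render back to the user.
import numpy as np

position = np.array([0.0, 0.0, 0.0])   # deformable node
velocity = np.zeros(3)
rest = np.zeros(3)                      # rest position the spring pulls back to
mass, stiffness, damping, dt = 0.05, 40.0, 0.8, 0.001

def step(tool_force):
    global position, velocity
    spring_force = -stiffness * (position - rest) - damping * velocity
    acceleration = (spring_force + tool_force) / mass
    velocity = velocity + dt * acceleration
    position = position + dt * velocity
    return -spring_force                # reaction force sent to the haptic device

for _ in range(1000):
    feedback = step(tool_force=np.array([0.0, 0.0, -0.5]))  # tool pressing down

print("deformed position:", np.round(position, 4))
print("feedback force:", np.round(feedback, 3))
```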
60

Reduction of search space using group-of-experts and RL.

Anderson, Tore Rune January 2007 (has links)
This thesis tests the group-of-experts regime in the context of reinforcement learning, with the aim of reducing the search space used in reinforcement learning. Having tested this approach at different abstraction levels, the hypothesis is that using it to reduce the search space is best done at a high abstraction level. Although reinforcement learning has many advantages in certain settings, and is a preferred technique in many different contexts, it still has its challenges. This architecture does not solve these, but suggests a way of dealing with the curse of dimensionality, the scaling problem within reinforcement learning systems.
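One common way to realize a group-of-experts reduction of the search space, offered only as a hedged illustration of the general idea rather than the thesis' actual regime: give each tabular Q-learning expert a projection of the full state, so each table stays small, and combine their value estimates when acting:

```python
# Two Q-learning "experts", each seeing only one component of a 2-D grid state,
# so each table has SIZE entries per action instead of SIZE**2. Their Q-values
# are summed when selecting an action. Environment and hyperparameters are toy
# assumptions.
import random

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
SIZE, GOAL = 8, (7, 7)
alpha, gamma, epsilon = 0.2, 0.95, 0.1

# one table per expert: expert 0 sees x only, expert 1 sees y only
q_tables = [[[0.0] * len(ACTIONS) for _ in range(SIZE)] for _ in range(2)]

def combined_q(state, a):
    return q_tables[0][state[0]][a] + q_tables[1][state[1]][a]

def choose(state):
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: combined_q(state, a))

def step(state, a):
    dx, dy = ACTIONS[a]
    nxt = (min(max(state[0] + dx, 0), SIZE - 1), min(max(state[1] + dy, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else -0.01)

for _ in range(2000):
    state = (0, 0)
    while state != GOAL:
        a = choose(state)
        nxt, reward = step(state, a)
        target = reward + gamma * max(combined_q(nxt, b) for b in range(len(ACTIONS)))
        delta = target - combined_q(state, a)
        # each expert updates its own slice of the state toward the joint target
        for expert, component in ((0, state[0]), (1, state[1])):
            q_tables[expert][component][a] += alpha * delta / 2
        state = nxt

print("greedy action from (0, 0):",
      ACTIONS[max(range(len(ACTIONS)), key=lambda a: combined_q((0, 0), a))])
```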
