11

Automatisk visuelt inspeksjonssystem / Automatic Visual Inspection System

Bårdsen, Per Gunnar January 2006 (has links)
This master's thesis is aimed at a practical implementation of an automatic visual inspection system. Given a series of training images, the goal is for the system to classify images of objects as either approved or rejected. The work has placed great emphasis on making the system work for general objects. The system is implemented in Microsoft Visual Studio .NET 2003 C++, and important elements of the work are described in this report. The results appear promising, as the inspection system on average classifies 91% correctly on the 8 image sets it has been tested with. Plans for further development nevertheless give hope of improving the system considerably; these plans are presented as future work at the end of the report.
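The train-then-classify idea the abstract describes can be illustrated with a minimal sketch; the template construction, tolerance, and threshold below are illustrative assumptions, not the thesis' actual method:

```python
import numpy as np

def train_template(train_images):
    """Build a mean-image template from approved training images.
    train_images: list of equally-sized grayscale arrays."""
    stack = np.stack([img.astype(np.float64) for img in train_images])
    return stack.mean(axis=0), stack.std(axis=0).mean()

def classify(image, template, tolerance, threshold=3.0):
    """Approve if the mean absolute deviation from the template stays
    within `threshold` tolerances; otherwise reject."""
    deviation = np.abs(image.astype(np.float64) - template).mean()
    return "approved" if deviation <= threshold * tolerance else "rejected"
```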
12

Improving sliding-block puzzle solving using meta-level reasoning

Spaans, Ruben Grønning January 2010 (has links)
In this thesis, we develop a meta-reasoning system based on CBR which solves sliding-block puzzles. The meta-reasoning system is built on top of a search-based sliding-block puzzle solving program which was developed as part of the specialization project at NTNU. As part of the thesis work, we study existing literature on automatic puzzle solving methods and state space search, as well as the use of reasoning and meta-level reasoning applied to puzzles and games. The literature study forms the theoretical foundation for the development of the meta-reasoning system. The meta-reasoning system is further enhanced by adding a meta-control cycle which uses randomized search to generate new cases to apply to puzzles. In addition, we explore several ways of improving the underlying solver program: trying to solve hard puzzles by using the solutions for easier variants, and developing a more memory-efficient representation of puzzle configurations. We evaluate the results of our system and show that it offers a slight improvement over solving the puzzles with a set of general cases, and a vast improvement for a few isolated test cases, although its performance falls slightly behind the hand-tuned parameters we found in the specialization project. We conclude our work by identifying parts of our system where improvement can be made, and by suggesting other promising areas for further research.
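The CBR retrieval step at the heart of such a meta-reasoning system can be sketched as follows; the puzzle features and solver parameters are invented for the example, not taken from the thesis:

```python
import math

# A "case" pairs puzzle features with solver parameters that worked well.
cases = [
    ({"width": 4, "height": 5, "blocks": 10}, {"heuristic_weight": 2.0}),
    ({"width": 3, "height": 3, "blocks": 8},  {"heuristic_weight": 1.2}),
]

def distance(a, b):
    """Euclidean distance between two feature dictionaries with equal keys."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve(features):
    """CBR retrieval: reuse parameters from the most similar stored case."""
    return min(cases, key=lambda case: distance(case[0], features))[1]

print(retrieve({"width": 4, "height": 4, "blocks": 9}))
```

A meta-control cycle as described in the abstract would then add newly generated (features, parameters) pairs back into `cases`.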
13

Adaptive Robotics : A behavior-based system for control of mobile robots

Johansen, Maria January 2010 (has links)
This report explores behavior-based robotics and relevant AI techniques. A system for autonomous control of mobile robots inspired by behavior-based robotics, in particular Rodney Brooks' subsumption architecture, has been implemented and adapted for use in a multi-agent environment. The system is modular and flexible, allowing for easy addition and removal of system parts. A weight-based command fusion approach is taken to action selection, making it possible to satisfy multiple behaviors simultaneously.
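As a rough illustration of the weight-based command fusion the abstract describes (behavior names, commands, and weights are invented for the example):

```python
def fuse_commands(behaviors):
    """Weight-based command fusion: each behavior votes for a motor command
    (speed, turn) with a weight; the fused command is the weighted average,
    so several behaviors can be satisfied at once (unlike pure subsumption,
    where a higher layer fully suppresses a lower one)."""
    total = sum(weight for _, weight in behaviors)
    speed = sum(cmd[0] * w for cmd, w in behaviors) / total
    turn = sum(cmd[1] * w for cmd, w in behaviors) / total
    return speed, turn

# Hypothetical behaviors: ((speed, turn), weight)
avoid_obstacle = ((0.1, 0.8), 0.7)   # slow down, turn hard
go_to_target = ((1.0, -0.2), 0.3)    # full speed, slight turn
print(fuse_commands([avoid_obstacle, go_to_target]))
```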
14

Dynamic Scheduling for Autonomous Robotics

Ellefsen, Kai Olav January 2010 (has links)
This project report describes a hybrid genetic algorithm that works as a schedule generator for a complex robotic harvesting task. The task is set in a dynamic environment with a robotic opponent, making responsiveness of the planning algorithm particularly important. To solve this task, many previous scheduling algorithms were studied. Genetic algorithms have been used successfully in many dynamic scheduling tasks, due to their ability to incrementally adapt and optimize solutions when changes are made to the environment. Many of the previous approaches also used a separate heuristic to quickly adapt solutions to the new environment, making the algorithm more responsive. In addition, the study of previous work revealed the importance of population diversity when making a responsive genetic algorithm. Implementation was based on a genetic algorithm made as the author's fifth-year specialization project for solving a static version of the same task. This algorithm was hybridized with a powerful local search technique that proved essential in generating good solutions for the complex harvesting task. Extending the algorithm to work in a dynamically changing environment required several adaptations and extensions to make it more responsive: a fast-response heuristic for immediate adaptation to environmental changes, a decrease in genotype size to speed up local searches, and a contingency planning module intended to solve problems before they arise. Experiments showed that the implemented dynamic planner successfully adapted its plans to a changing environment, with clear improvements compared to running a static plan. Further experiments also showed that the dynamic planner handled erroneous time estimates in its simulator module well. Experiments on contingency planning gave no clear results, but indicated that using computational resources for planning ahead may be a good choice if the contingency to plan for is carefully selected. As no unequivocal results were obtained, further study of combining genetic algorithms and contingency planning may be an interesting task for future efforts.
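The general shape of such a hybrid (memetic) genetic algorithm can be sketched as below; the operators and the toy fitness function are illustrative assumptions, not the thesis' scheduler:

```python
import random

def hybrid_ga(fitness, mutate, local_search, population, generations=100):
    """Memetic skeleton: GA for exploration, local search to polish offspring.
    On an environment change, re-run with the old population as the seed so
    the algorithm adapts incrementally instead of restarting from scratch."""
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:len(population) // 2]
        offspring = [local_search(mutate(random.choice(parents)), fitness)
                     for _ in range(len(population) - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

# Tiny usage: maximize f(x) = -(x - 3)^2 (illustrative only)
f = lambda x: -(x - 3) ** 2
mut = lambda x: x + random.gauss(0, 1)
ls = lambda x, fit: max((x, x + 0.1, x - 0.1), key=fit)  # one hill-climb step
print(hybrid_ga(f, mut, ls, [random.uniform(-10, 10) for _ in range(20)]))
```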
15

Extraction-Based Automatic Summarization : Theoretical and Empirical Investigation of Summarization Techniques

Sizov, Gleb January 2010 (has links)
A summary is a shortened version of a text that contains the main points of the original content. Automatic summarization is the task of generating a summary by a computer. For example, given a collection of news articles from the last week, an automatic summarizer is able to create a concise overview of the important events. This summary can be used as a replacement for the original content, or help to identify the events that a person is particularly interested in. Potentially, automatic summarization can save a lot of time for people who deal with large amounts of textual information. The straightforward way to generate a summary is to select several sentences from the original text and organize them in a way that creates a coherent text. This approach is called extraction-based summarization and is the topic of this thesis. Extraction-based summarization is a complex task that consists of several challenging subtasks. The essential part of the extraction-based approach is identification of sentences that contain important information. This can be done using graph-based representations and centrality measures that exploit similarities between sentences to identify the most central ones. This thesis provides a comprehensive overview of methods used in extraction-based automatic summarization. In addition, several general natural language processing issues, such as feature selection and text representation models, are discussed with regard to automatic summarization. Part of the thesis is dedicated to graph-based representations and centrality measures used in extraction-based summarization. Theoretical analysis is reinforced with experiments using the summarization framework implemented for this thesis. The task for the experiments is query-focused multi-document extraction-based summarization, that is, summarization of several documents according to a user query. The experiments investigate several approaches to this task, as well as the use of different representation models and similarity and centrality measures. The obtained results indicate that use of graph centrality measures significantly improves the quality of generated summaries. Among the variety of centrality measures, the degree-based ones perform better than path-based measures. The best performance is achieved when centralities are combined with redundancy removal techniques that prevent inclusion of similar sentences in a summary. Experiments with representation models reveal that a simple local term count representation performs better than the distributed representation based on latent semantic analysis, which indicates that further investigation of distributed representations with regard to automatic summarization is necessary. The implemented system performs quite well compared with the systems that participated in the DUC 2007 summarization competition. Nevertheless, manual inspection of the generated summaries demonstrates some flaws of the implemented summarization mechanism that can be addressed by introducing advanced algorithms for sentence simplification and sentence ordering.
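The winning combination the abstract reports, degree centrality plus redundancy removal, can be sketched in a few lines; the vector representation and the redundancy threshold are illustrative assumptions:

```python
import numpy as np

def summarize(sentences, vectors, k=3, redundancy=0.7):
    """Degree-centrality extraction with redundancy removal: score each
    sentence by its summed cosine similarity to all others (degree in the
    similarity graph), then greedily pick central sentences that are not
    too similar to those already chosen.
    vectors: one row per sentence, e.g. local term counts."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = unit @ unit.T                                  # cosine similarities
    centrality = sim.sum(axis=1)                         # degree centrality
    chosen = []
    for i in np.argsort(-centrality):
        if all(sim[i, j] < redundancy for j in chosen):  # redundancy filter
            chosen.append(i)
        if len(chosen) == k:
            break
    return [sentences[i] for i in sorted(chosen)]        # original order
```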
16

Using Artificial Neural Networks To Forecast Financial Time Series

Aamodt, Rune January 2010 (has links)
This thesis investigates the application of artificial neural networks (ANNs) for forecasting financial time series (e.g. stock prices). The theory of technical analysis dictates that there are repeating patterns in the historic prices of stocks, and that identifying these patterns can help in forecasting future price developments. A system was therefore developed which contains several "agents", each producing recommendations on the stock price based on some aspect of technical analysis theory. It was then tested whether ANNs, using these recommendations as inputs, could be trained to forecast stock price fluctuations with some degree of precision and reliability. The predictions of the ANNs were evaluated by calculating the Pearson correlation between the predicted and actual price changes, and the "hit rate" (how often the predicted and the actual change had the same sign). Although somewhat mixed overall, the empirical results seem to indicate that at least some of the ANNs were able to learn enough useful features to have significant predictive power. Tests were performed with ANNs forecasting over different time frames, including intraday. The predictive performance was seen to decline on the shorter time scales.
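The two evaluation metrics named in the abstract are straightforward to compute; a minimal sketch with made-up numbers:

```python
import numpy as np

def evaluate(predicted, actual):
    """Pearson correlation between predicted and actual price changes,
    and hit rate (fraction of predictions with the correct sign)."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    pearson = np.corrcoef(predicted, actual)[0, 1]
    hit_rate = np.mean(np.sign(predicted) == np.sign(actual))
    return pearson, hit_rate

print(evaluate([0.5, -0.2, 0.1, -0.4], [0.3, -0.1, -0.2, -0.5]))
```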
17

Multimodal Volume to Volume Registration between Ultrasound and MRI

Ryen, Tommy January 2006 (has links)
This master's thesis considers implementation of automated multimodal volume-to-volume registration of images, in order to provide neurosurgeons with valuable information for planning and intraoperative guidance. Focus has been on medical images from magnetic resonance (MR) and ultrasound (US) for use in surgical guidance. Prototype implementations for MRI-to-US registration have been proposed and tested, using registration methods available in the Insight Toolkit (ITK). Mattes' Mutual Information has been the similarity metric, applied to unpreprocessed angiographic volumes from both modalities. Only rigid transformations have been studied, and both gradient descent and evolutionary optimizers have been examined. The applications have been tested on clinical data from relevant surgical operations. The best results were obtained using an evolutionary (1+1) optimizer for translational transformations only; this application was both fast and accurate. The other applications, using variants of gradient descent optimizers, proved to be significantly slower, less accurate, and more difficult to parameterize. Registration of angiographic volumes proved easier to accomplish than registration of volumes with other weightings, due to their more similar characteristics. Angiographic images are also readily evaluated using volume renderings, but other methods should be constructed to provide a less subjective measure of success for the registration procedures. The obtained results indicate that automatic volume-to-volume registration of angiographic images from MRI and US, using Mattes' Mutual Information and an evolutionary optimizer, should be feasible for the neuronavigational system considered here, with sufficient accuracy. Further development includes parameter tuning of the applications to possibly achieve increased accuracy. Additionally, a non-rigid registration application should be developed to account for local deformations during surgery, along with additional tools for accurate validation of registration results.
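The thesis used ITK from C++; a comparable pipeline, Mattes' Mutual Information with a (1+1) evolutionary optimizer over a translation-only transform, can be sketched with the SimpleITK Python wrapper (file names and optimizer settings are placeholders, not the thesis' actual parameterization):

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("mri_angio.mhd", sitk.sitkFloat32)   # placeholder paths
moving = sitk.ReadImage("us_angio.mhd", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsOnePlusOneEvolutionary(numberOfIterations=200,
                                         epsilon=1.5e-4, initialRadius=1.0)
reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))
reg.SetInterpolator(sitk.sitkLinear)

# Translation-only rigid registration, as in the thesis' best-performing setup
transform = reg.Execute(fixed, moving)
print(transform)
```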
18

Reduction of search space using group-of-experts and RL.

Anderson, Tore Rune January 2007 (has links)
This thesis tests the group-of-experts regime in the context of reinforcement learning, with the aim of reducing the search space used in reinforcement learning. Having tested this approach at different abstraction levels, the hypothesis is that it reduces the search space best when applied at a high abstraction level. Although reinforcement learning has many advantages in certain settings, and is a preferred technique in many different contexts, it still has its challenges. This architecture does not solve these, but suggests a way of dealing with the curse of dimensionality, the scaling problem within reinforcement learning systems.
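One way to read the group-of-experts idea is a gating function that routes each state to a specialized learner, so no single table covers the full state space; the sketch below is generic tabular Q-learning under that assumption, not the thesis' exact architecture:

```python
from collections import defaultdict

class ExpertGroup:
    """A gating function maps each state to one expert; each expert learns
    Q-values over only its own region of the state space."""
    def __init__(self, gate, alpha=0.1, gamma=0.9):
        self.gate, self.alpha, self.gamma = gate, alpha, gamma
        self.experts = defaultdict(lambda: defaultdict(float))  # expert -> Q-table

    def update(self, state, action, reward, next_state, next_actions):
        """Standard Q-learning update, applied within the routed expert."""
        q = self.experts[self.gate(state)]
        q_next = self.experts[self.gate(next_state)]
        best = max((q_next[(next_state, a)] for a in next_actions), default=0.0)
        q[(state, action)] += self.alpha * (reward + self.gamma * best
                                            - q[(state, action)])
```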
19

Automatic stock market trading based on Technical Analysis

Larsen, Fredrik January 2007 (has links)
The theory of technical analysis suggests that future stock price development can be foretold by analyzing historical price fluctuations and identifying repetitive patterns. A computerized system, able to produce trade recommendations based on different aspects of this theory, has been implemented. The system utilizes trading agents, trained using machine learning techniques, capable of producing unified buy and sell signals. It has been evaluated using actual trade data from the Oslo Børs stock exchange over the period 1999-2006. Compared to the simple strategy of buying and holding, some of the agents have proven to yield good results, both during years with extremely good stock market returns and during times of recession. In spite of the positive performance, anomalous results do exist and call for cautious use of the system's recommendations. Combining them with fundamental analysis appears to be a safe approach to achieving successful stock market trading.
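A minimal sketch of one such agent and of combining several into a unified signal; the moving-average rule, window sizes, and weights are illustrative assumptions (in the thesis the combination was learned, not hand-set):

```python
def sma(prices, n):
    """Simple moving average over the last n prices."""
    return sum(prices[-n:]) / n

def ma_crossover_agent(prices, short=10, long=50):
    """One technical-analysis agent: +1 (buy) when the short moving average
    is above the long one, -1 (sell) when below."""
    return 1 if sma(prices, short) > sma(prices, long) else -1

def unified_signal(agent_signals, weights):
    """Combine several agents' recommendations into one buy/sell signal."""
    score = sum(s * w for s, w in zip(agent_signals, weights))
    return "buy" if score > 0 else "sell"

print(unified_signal([1, -1, 1], [0.5, 0.3, 0.2]))
```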
20

Software agents applied in oil production

Engmo, Lise, Hallen, Lene January 2007 (has links)
The current increase in instrumentation of oil production facilities leads to a higher availability of real-time sensor data. This enables better control and optimisation of the production, but current computer systems do not necessarily provide enough operator support for handling and utilising this information. It is believed that agent technology may be of use in this scenario, and the goal of this Master's thesis is to implement a proof-of-concept which demonstrates the suitability of such solutions in this domain. The agent system is developed using the Prometheus methodology for the design and the JACK Intelligent Agents framework for the implementation. A regular Java program which simulates the running of a very simplified oil field is also developed. The simulator provides the environment in which the agent system is put to the test. The agents' objective is to maximise the oil production while keeping the system within its operational envelope. Their performance is compared with results obtained by a human operator working towards the same goal in the same environment. The metrics we use are the number of critical situations which occur, their duration, and the total amount of oil produced. The results indicate that the agents react faster to changes in the environment and therefore manage to reduce the number and duration of critical situations, as well as producing more oil. This suggests a possibility of introducing an agent system to deal with some aspects of the production system control. This may reduce the information load on the operator, giving him/her more time to concentrate on situations which the agents are not able (or not allowed) to handle on their own.
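The maximise-production-within-the-envelope objective can be caricatured as a single reactive control cycle; everything below is an illustrative assumption (the thesis' agents were BDI agents built with JACK, not this simple loop):

```python
def control_step(sensors, choke, limits, step=0.05):
    """One reactive cycle of a hypothetical production agent: back off the
    choke when any reading approaches its envelope limit, otherwise open it
    gradually to increase production."""
    near_limit = any(sensors[k] > 0.9 * limits[k] for k in limits)
    return max(0.0, choke - step) if near_limit else min(1.0, choke + step)

choke = 0.5
choke = control_step({"pressure": 180.0, "temperature": 60.0},
                     choke, {"pressure": 200.0, "temperature": 90.0})
print(choke)
```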
