1

Dancing Robots

Tidemann, Axel January 2006 (has links)
This Master's thesis implements a multiple paired models architecture that is used to control a simulated robot. The architecture consists of several modules, each holding a paired forward/inverse model. The inverse model takes as input the current and desired state of the system and outputs motor commands that will achieve the desired state. The forward model takes as input the current state and the motor commands acting on the environment and outputs the predicted next state. The models are paired because the output of the inverse model is fed into the forward model. A weighting mechanism based on how well the forward model predicts determines how much a module influences the total motor control. The architecture is a slight tweak of the HAMMER and MOSAIC architectures of Demiris and Wolpert, respectively. The robot is to imitate dance moves that it sees. Three experiments are done; in the first two the robot imitates another robot, whereas in the third the robot imitates a movement pattern gathered from human data. The pattern was obtained using a Pro Reflex tracking system. After training the multiple paired models architecture, the performance and self-organization of the different modules are analyzed, and shortcomings of the architecture are pointed out along with directions for future work. The main result of this thesis is that the architecture does not self-organize as intended; instead it finds its own way of separating the input space into different modules. This is most likely due to a problem with the learning of the responsibility predictor of the modules. This problem must be solved for the architecture to work as designed, and it is a good starting point for future work.
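A minimal sketch of the module-weighting idea described above, where each module's influence follows from its forward model's prediction error (the linear models, dimensions, and Gaussian responsibility rule are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

class PairedModule:
    """One forward/inverse pair; linear models keep the sketch minimal."""
    def __init__(self, dim, rng):
        self.W_inv = rng.normal(scale=0.1, size=(dim, 2 * dim))  # inverse model
        self.W_fwd = rng.normal(scale=0.1, size=(dim, 2 * dim))  # forward model

    def motor_command(self, state, desired):
        return self.W_inv @ np.concatenate([state, desired])

    def predict_next(self, state, command):
        return self.W_fwd @ np.concatenate([state, command])

def responsibilities(modules, state, desired, next_state, sigma=1.0):
    """MOSAIC-style weighting: modules whose forward models predict the
    observed next state well receive a larger share of motor control."""
    errs = []
    for m in modules:
        u = m.motor_command(state, desired)
        errs.append(np.sum((next_state - m.predict_next(state, u)) ** 2))
    lik = np.exp(-np.array(errs) / (2 * sigma ** 2))
    return lik / lik.sum()

rng = np.random.default_rng(0)
mods = [PairedModule(dim=3, rng=rng) for _ in range(4)]
s, d, s_next = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
lam = responsibilities(mods, s, d, s_next)
u_total = sum(l * m.motor_command(s, d) for l, m in zip(lam, mods))  # blended command
```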
2

GeneTUC : Event extraction from TQL logic

Søvik, Harald January 2006 (has links)
As Natural Language Processing systems converge on a high percentage of successful deep parses, parse success alone is an incomplete measure of the "intelligence" exhibited by the system. Because systems apply different grammars, dictionaries and programming languages, the internal representation of parsed text often differs from system to system, making it difficult to compare performance and exchange useful data such as tagged corpora or semantic interpretations. This report describes how semantically annotated corpora can be used to measure the quality of Natural Language Processing systems. A corpus produced by the GENIA project (event-annotated abstracts from MEDLINE) was used as the gold standard. This corpus was sparse (19 abstracts), so manual methods were employed to produce a mapping from the native GeneTUC knowledge format (TQL) to GENIA events. This mapping was used to evaluate the events extracted by GeneTUC. We were able to attain a recall of 67% and an average precision of 33% on the training data. These results suggest that the mapping is inadequate. On test data, the recall was 28% and the average precision 21%. Because events are a new feature in NLP applications, there are no large corpora that can be used for automated rule learning. The conclusion is that at least a partial mapping from TQL to GENIA events exists, and that larger corpora and AI methods should be applied to refine the mapping rules. In addition, we discovered that this mapping can be of use for the extraction of protein-protein interactions.
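A minimal sketch of the event-level evaluation described above (the tuple representation of events and the exact-match criterion are illustrative assumptions, not the TQL or GENIA formats):

```python
def precision_recall(predicted, gold):
    """Score extracted events against gold annotations.
    Events are hashable tuples, e.g. (trigger, theme); exact match only."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    return precision, recall

# Hypothetical events mapped from TQL output vs. GENIA-style gold events.
predicted = [("activate", "NF-kappaB"), ("bind", "IL-2")]
gold = [("activate", "NF-kappaB"), ("inhibit", "TNF-alpha"), ("bind", "IL-2")]
p, r = precision_recall(predicted, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=1.00 recall=0.67
```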
3

Automatic diagnosis of ultrasound images using standard view planes of fetal anatomy

Ødegård, Jan, Østen, Anders January 2006 (has links)
The use of ultrasound has revolutionised clinical fetal examinations. The possibility of detecting congenital abnormalities at an early stage of the pregnancy is highly important in order to maximise the chances of correcting a defect before it becomes life-threatening. The problems with the routine procedure are its complexity and the fact that it requires extensive knowledge of fetal anatomy. Because of the lack of training among midwives, especially in less developed countries, the results of the examinations are often limited. In addition, the quality of the ultrasound equipment is often restricted. These limitations imply the need for a standardised examination procedure that decreases the amount of time required, as well as an automatic method for proposing a diagnosis of the fetus. This thesis proposes a solution for automatically making a diagnosis based on the contents of extracted ultrasound images. Based on the concept of standard view planes, a list of predefined images of the fetus is obtained during routine ultrasonography. These images contain the most important organs to examine, so the most common congenital abnormalities are detectable in this set. To analyse the images, medical domain knowledge must be obtained and stored to enable reasoning about the findings in the ultrasound images. The findings are extracted through segmentation, and each object is given a unique description. An organ database is developed to store descriptions of known organs so that the extracted objects can be recognised. Once the organs have been identified, a case-based reasoning (CBR) system is applied to analyse the total contents of one standard view plane. The CBR system uses knowledge from the medical domain as well as previously solved problems to identify possible abnormalities in the case describing the standard view plane. When a solution is obtained, it is stored for later retrieval. This increases the reliability of future examinations, because the knowledge base is constantly expanding. The use of standard view planes ensures an efficient procedure, and the amount of training needed to learn it is minimised because the extraction and analysis of the contents of each standard view plane are automatic. The midwife only has to learn which standard view planes to obtain, not how to analyse their contents.
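A minimal sketch of the retrieve step of a CBR cycle like the one described above (the case attributes, similarity measure, and example diagnoses are illustrative assumptions):

```python
def similarity(a, b):
    """Fraction of matching attribute values between two cases."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base, new_case):
    """Return the stored case most similar to the new view-plane description."""
    return max(case_base, key=lambda c: similarity(c["features"], new_case))

case_base = [
    {"features": {"organ": "heart", "chambers": 4, "symmetric": True},
     "diagnosis": "normal four-chamber view"},
    {"features": {"organ": "heart", "chambers": 3, "symmetric": False},
     "diagnosis": "possible septal defect"},
]
new_case = {"organ": "heart", "chambers": 3, "symmetric": False}
best = retrieve(case_base, new_case)
print(best["diagnosis"])  # possible septal defect
```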
4

A Shared Memory Structure for Cooperative Problem Solving

Røssland, Kari January 2006 (has links)
The contribution of this thesis is a framework architecture for cooperative distributed problem solving in multiagent systems using a shared memory structure. Our shared memory structure, the TEAM SPACE, coordinates the problem-solving process, which is based on a plan in the form of a hierarchy of decomposed tasks.
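The abstract does not describe the TEAM SPACE operations themselves, so the following is only a speculative sketch of a blackboard-style shared memory in which agents claim and complete decomposed tasks:

```python
import threading

class TaskBoard:
    """Blackboard-style shared memory: agents claim and complete subtasks."""
    def __init__(self, tasks):
        self._lock = threading.Lock()
        self._status = {t: "open" for t in tasks}  # open -> claimed -> done

    def claim(self, agent):
        with self._lock:
            for task, status in self._status.items():
                if status == "open":
                    self._status[task] = f"claimed:{agent}"
                    return task
            return None  # nothing left to do

    def complete(self, task):
        with self._lock:
            self._status[task] = "done"

board = TaskBoard(["decompose goal", "solve subtask A", "solve subtask B"])
t = board.claim("agent-1")   # agent-1 picks up the first open task
board.complete(t)
```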
5

Implementation and evaluation of Norwegian Analyzer for use with DotLucene

Olsen, Bjørn Harald January 2006 (has links)
This work focuses on improving the retrieval performance of search in Norwegian document collections. The initiator of the thesis, InfoFinder Norge, desired a Norwegian analyzer for DotLucene, since the standard analyzer used before did not support stopword elimination and stemming for the Norwegian language. The Norwegian analyzer and the standard analyzer were used in turn on the same document collection during indexing and querying, and the respective results were compared to detect efficiency improvements. An evaluation method based on Term Relevance Sets was investigated and used on DotLucene with the two analyzer approaches. The Term Relevance Sets methodology was also compared with common measures for relevance judging and found useful for the evaluation of IR systems. The evaluation results for the Norwegian analyzer and the standard analyzer gave clear indications that stopword elimination and stemming for Norwegian documents improve retrieval efficiency. Term Relevance Set-based evaluation was found reliable by comparing its results with precision measurements. Precision increased by 16% with the Norwegian analyzer compared to the standard analyzer, which has no content-preprocessing support for Norwegian. Term Relevance Set evaluation with 10 on-topic terms and 10 off-topic terms gave a 44% increase in tScore. The results show that counting term occurrences in the content of retrieved documents can be used to gain confidence that documents are either relevant or not relevant.
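A minimal sketch of Term Relevance Set scoring in the spirit described above (the abstract does not give the tScore formula, so the on-topic minus off-topic counting rule here is an illustrative assumption):

```python
def trel_score(doc_text, on_topic, off_topic):
    """Count on-topic vs off-topic term occurrences in a retrieved document;
    a positive score suggests relevance, a negative one suggests noise."""
    words = doc_text.lower().split()
    on = sum(words.count(t) for t in on_topic)
    off = sum(words.count(t) for t in off_topic)
    return on - off

on_topic = ["stemming", "stopword", "analyzer"]
off_topic = ["football", "weather", "recipe"]
doc = "The analyzer applies stemming and stopword removal before indexing"
print(trel_score(doc, on_topic, off_topic))  # 3
```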
6

Simulations of imitative learning

Barakat, Firas Risnes January 2006 (has links)
This Master's thesis presents simulations within the field of imitative learning. The thesis starts with a review of the work done in my depth study, looking at imitative learning in general. Further, forward and inverse models are studied, and a case study of an article by Wolpert et al. is done. An architecture by Tani et al. using a recurrent neural network with parametric bias (RNNPB) and a PID controller is presented, and later simulated using MATLAB and the breve simulation environment. It is tested whether the RNNPB is suitable for imitative learning. The first experiment was quite successful, and interesting results were discovered; the second experiment was less successful. Generally, it was confirmed that the RNNPB is able to reproduce actions, interact with the environment, and indicate situations using the parametric bias (PB). It was also observed that the PB values tend to reflect common characteristics in similar training patterns. A comparison between the forward/inverse model and the RNNPB model was done: the former appears to be more modular and a predictor of the consequences of actions, while the latter predicts sequences and is able to represent the situation it is in. The work done to connect MATLAB and breve is also presented.
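A minimal sketch of the parametric-bias idea behind the RNNPB (the Elman-style recurrence, layer sizes, and rollout are illustrative assumptions; training of the weights and PB values is omitted):

```python
import numpy as np

def rnnpb_step(x, h, pb, params):
    """One Elman-style recurrent step; the parametric bias vector `pb`
    is a per-sequence input that modulates the learned dynamics."""
    W_x, W_h, W_pb, W_out = params
    h_new = np.tanh(W_x @ x + W_h @ h + W_pb @ pb)
    return W_out @ h_new, h_new

rng = np.random.default_rng(0)
n_in, n_hid, n_pb, n_out = 2, 8, 2, 2
params = (rng.normal(scale=0.3, size=(n_hid, n_in)),
          rng.normal(scale=0.3, size=(n_hid, n_hid)),
          rng.normal(scale=0.3, size=(n_hid, n_pb)),
          rng.normal(scale=0.3, size=(n_out, n_hid)))

h = np.zeros(n_hid)
pb = np.array([0.5, -0.5])   # different PB values select different behaviors
x = np.zeros(n_in)
for _ in range(5):           # roll the network out; feed predictions back in
    x, h = rnnpb_step(x, h, pb, params)
```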
7

Marvin - Intelligent Corridor Guide

Hartvigsen, Ole Kristian January 2006 (has links)
Intelligent helpers are becoming increasingly popular as computer systems are used in new areas and by new users every day. Programs and robots that communicate with users in a human-like way offer friendlier and easier use, especially for systems used by a random selection of people who should not need prior knowledge of the interface. This project considers an intelligent helping system that performs a specific human-like task in a real-world environment. The system is named Marvin and is intended to be a guide for people who are unfamiliar with a building. Imagine entering a building full of hallways and doors, not knowing where to go, and having a robot greet you. You can speak to the robot as if it were a human being, and it will give you the information you need or even lead you to the place where you want to go. In this project, a prototype simulator of Marvin is implemented for the third floor of the building of the Department of Computer and Information Science at the Norwegian University of Science and Technology. Questions and requests to Marvin can be made in written natural language, and the program answers with natural language sentences, additional map presentations, and simulated robot movement.
8

User Interface for 3D Visualization with Emphasis on Combined Voxel and Surface Representation : Design Report

Lyngset, Runar Ylvisåker January 2006 (has links)
The thesis presents a user interface design aimed at the scenario where a dual representation of a volume is desired: certain parts of the volume are emphasized using surface graphics while the rest is rendered using direct volume rendering techniques. A typical situation in which this configuration can prove useful is when studying images acquired for medical purposes. Sometimes the user wants to identify and represent an organ as an opaque surface in an otherwise partly opaque visualization of the volume data set. The design is based on the visualization library VTK along with Trolltech Qt, a C++ GUI toolkit. The choice of VTK as the visualization library was made after evaluating similar systems. The report includes a state-of-the-art chapter, the requirements for the system, the system design, and the results achieved after implementing the design.
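A minimal sketch of such a dual representation using modern VTK's Python bindings (the thesis used VTK with Qt in C++; the reader class, file name, iso-value, and transfer function here are illustrative assumptions):

```python
import vtk

# Load a volume; the file name is a placeholder for a real CT/MR dataset.
reader = vtk.vtkMetaImageReader()
reader.SetFileName("scan.mha")  # hypothetical input file

# Direct volume rendering for the surrounding, partly transparent anatomy.
volume_mapper = vtk.vtkSmartVolumeMapper()
volume_mapper.SetInputConnection(reader.GetOutputPort())
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(500, 0.15)     # mostly transparent context
volume_prop = vtk.vtkVolumeProperty()
volume_prop.SetScalarOpacity(opacity)
volume = vtk.vtkVolume()
volume.SetMapper(volume_mapper)
volume.SetProperty(volume_prop)

# Opaque iso-surface for the organ of interest.
contour = vtk.vtkContourFilter()
contour.SetInputConnection(reader.GetOutputPort())
contour.SetValue(0, 800)        # iso-value chosen for illustration
surface_mapper = vtk.vtkPolyDataMapper()
surface_mapper.SetInputConnection(contour.GetOutputPort())
surface_mapper.ScalarVisibilityOff()
actor = vtk.vtkActor()
actor.SetMapper(surface_mapper)

# Render both representations in the same scene.
renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```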
9

Benchmarking Catastrophic Forgetting in Neural Networks

Moe-Helgesen, Ole-Marius January 2006 (has links)
Catastrophic forgetting is a behavior seen in artificial neural networks (ANNs) when new information overwrites old in such a way that the old information is no longer usable. Since this happens very rapidly in ANNs, it leads both to major practical problems and to problems with using ANNs as models of the human brain. In this thesis I approach the problem from the practical viewpoint and attempt to provide rules, guidelines, datasets and analysis methods that can help researchers better analyze new ANN models in terms of catastrophic forgetting and thus lead to better solutions. I suggest two methods of analysis that measure the overlap between input patterns in the input space, and show strong indications that these measurements can predict whether a back-propagation network will retain information well or poorly. I also provide source code, implemented in MATLAB, for analyzing datasets with both the newly suggested measurements and existing ones, and for running experiments that measure catastrophic forgetting.
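The thesis ships MATLAB code; as an illustrative sketch only, one plausible overlap measure of the kind described above can be written as follows (the cosine-similarity choice is an assumption, not the thesis's measure):

```python
import numpy as np

def mean_pattern_overlap(A, B):
    """Average pairwise cosine similarity between two sets of input patterns
    (rows). Higher overlap is one plausible predictor of interference when
    set B is trained after set A."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float(np.mean(A @ B.T))

rng = np.random.default_rng(0)
old_task = rng.random((20, 8))
similar_new = old_task + 0.05 * rng.normal(size=(20, 8))       # overlaps heavily
distinct_new = rng.random((20, 8)) * np.array([1, 1, 1, 1, 0, 0, 0, 0])

print(mean_pattern_overlap(old_task, similar_new))    # high overlap
print(mean_pattern_overlap(old_task, distinct_new))   # lower overlap
```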
10

Automatic recognition of unwanted behavior

Løvlie, Erik Sundnes January 2006 (has links)
The use of video surveillance in public areas is ever increasing. With that increase, it becomes impractical to continue using humans to view and respond to the surveillance video streams, due to the massive amount of information that must be processed. If one hopes to use surveillance to avoid personal injuries, damage to property and so forth, rather than merely as a forensic tool after the fact, humans must be replaced by artificial intelligence. This thesis examines the whole process of recognizing unwanted human behaviors in video taken by surveillance cameras. An overview of the state of the art in automated security and human behavior recognition is given. Algorithms for motion detection and tracking are described and implemented. The motion detection algorithm uses background subtraction and can deal with large amounts of random noise; it also detects and removes cast shadows. The tracking algorithm uses a spatial occupancy overlap test between the predicted positions of tracked objects and the current foreground blobs. Merges and splits are handled by grouping and ungrouping objects and recovering trajectories using the distance between predicted positions and foreground blobs. Behaviors that are unwanted in most public areas are discussed, and a set of such concrete behaviors is described. New algorithms for recognizing chasing/fleeing scenarios and people lying on the floor are then presented. A real-time intelligent surveillance system capable of recognizing these behaviors has been implemented, and results from analyzing real video sequences are presented. The thesis concludes with a discussion of the advantages and disadvantages of the presented algorithms, and suggestions for future research.
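A minimal sketch of background subtraction with a running-average background model (the thesis's algorithm additionally handles heavy noise and cast shadows; the parameters and synthetic frames here are illustrative assumptions):

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background model; alpha controls adaptation speed."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, threshold=25):
    """Pixels far from the background model are flagged as moving objects."""
    return np.abs(frame.astype(float) - bg) > threshold

rng = np.random.default_rng(0)
bg = rng.integers(0, 40, size=(120, 160)).astype(float)   # static scene + noise
frame = bg.copy()
frame[40:80, 60:90] += 120                                 # a "person" enters
mask = foreground_mask(bg, frame)                          # detect before updating
bg = update_background(bg, frame)                          # then adapt the model
print(mask.sum(), "foreground pixels detected")
```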
