161

Learning robot soccer with UCT

Holen, Vidar, Marøy, Audun January 2008 (has links)
Upper Confidence bounds applied to Trees, or UCT, has shown promise for reinforcement learning problems in different kinds of games, but most of the work has been on turn-based games and single-agent scenarios. In this project we test the feasibility of using UCT in an action-filled multi-agent environment, namely the RoboCup simulated soccer league. Through a series of experiments we test both low-level and high-level approaches. We were forced to conclude that low-level approaches are infeasible and that, while high-level learning is possible, cooperative multi-agent planning did not emerge.
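
UCT drives its search by treating every tree node as a multi-armed bandit and choosing children with the UCB1 bound. A minimal sketch of that selection rule (the Node layout and the exploration constant c are illustrative assumptions, not taken from the thesis):

```python
import math

class Node:
    """One state in the UCT search tree."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0          # times this node was visited
        self.total_reward = 0.0  # sum of rollout rewards backed up here

def ucb1(child, parent_visits, c=1.414):
    """UCB1 score: exploitation term plus exploration bonus."""
    if child.visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = child.total_reward / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def select_child(node):
    """UCT tree policy: descend through the child maximizing UCB1."""
    return max(node.children, key=lambda ch: ucb1(ch, node.visits))
```

A full agent alternates selection, expansion, rollout, and backpropagation of the reward; in a real-time domain like simulated soccer, the number of iterations per decision is bounded by the server's decision cycle.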
162

Intelligent agents in computer games

Løland, Karl Syvert January 2008 (has links)
In this project we examine whether an intelligent agent can learn how to play a computer game using the same inputs and outputs as a human. An agent architecture is chosen, implemented, and tested on a standard first-person shooter game to see if it can learn how to play the game and find a goal in it. We conclude the report by discussing potential improvements to the current implementation.
163

Tracking for Outdoor Mobile Augmented Reality : Further development of the Zion Augmented Reality Application

Strand, Tor Egil Riegels January 2008 (has links)
This report deals with providing tracking for an outdoor mobile augmented reality system, the Zion Augmented Reality Application (ZionARA). ZionARA is meant to display a virtual recreation of a 13th-century castle on the site where it once stood, through an augmented reality head-mounted display. Mobile outdoor augmented/mixed reality puts special demands on what kind of equipment is practical. After briefly evaluating the existing tracking methods, a solution based on GPS and an augmented inertial rotation tracker is evaluated further by trying it out in a real setting. While standard unaugmented GNSS trackers are unable to provide the necessary level of accuracy, a differential GPS receiver is found to be capable of delivering good enough coordinates. The final result is a new version of ZionARA that actually allows a person to walk around the site of the castle and see the castle as it most likely once stood. Source code and data files for ZionARA are provided on a supplemental disc.
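
The abstract does not spell out the pose computation, but a typical outdoor-AR pipeline of this kind combines the (differential) GPS fix, converted into a local metric frame, with the orientation from the inertial tracker. A rough sketch under those assumptions (the equirectangular approximation and all names are illustrative):

```python
import math
import numpy as np

EARTH_RADIUS = 6378137.0  # metres (WGS-84 equatorial radius)

def gps_to_local_enu(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Approximate east-north-up offset (metres) from a reference point;
    accurate enough over the few hundred metres of a castle site."""
    east = math.radians(lon - ref_lon) * EARTH_RADIUS * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * EARTH_RADIUS
    return np.array([east, north, alt - ref_alt])

def camera_pose(position_enu, rotation):
    """4x4 rigid-body transform for the virtual camera: the inertial
    tracker supplies the 3x3 rotation, the GPS fix the translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = position_enu
    return pose

# Example: a user a few metres from the site origin, facing straight
# ahead (identity rotation); coordinates are made up.
p = gps_to_local_enu(63.42705, 10.39520, 45.0, 63.42700, 10.39500, 44.0)
print(camera_pose(p, np.eye(3)))
```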
164

Early warnings of critical diagnoses

Alvestad, Stig January 2009 (has links)
A disease that is left untreated for a long period is more likely to have negative consequences for the patient. Even though the general practitioner is able to discover the disease quickly in most cases, there are patients who should have been discovered earlier. Electronic patient records store time-stamped health information about patients, recorded by the health personnel treating them. This makes it possible to do a retrospective analysis to determine whether there was sufficient information to make the diagnosis earlier than the general practitioner actually did. Classification algorithms from the machine learning domain can utilise large collections of electronic patient records to build models that predict whether a patient will get a disease or not. These models could be used to gain more knowledge about these diseases, and in a long-term perspective they could become a support for the general practitioner in daily practice. The purpose of this thesis is to design and implement a software system that can predict whether a patient will get a disease in the near future, before the general practitioner even suspects that the patient might have it. A further objective is to use this system to identify the warning signs on which the predictions are based, and to analyse the usefulness of the predictions and the warning signs. The diseases asthma, type 2 diabetes and hypothyroidism were selected as test cases for our methodology. A set of suspicion indicators, signalling that the general practitioner has suspected the disease, is identified in an iterative process. These suspicion indicators are then used to limit the information available to the classification algorithms, and that information is used to build prediction models with different classification algorithms. The prediction models are evaluated in terms of various performance measures, and the models themselves are analysed manually. Experiments are conducted to find favourable parameter values for the information extraction process. Because relatively few patients have the test-case diseases, the oversampling technique SMOTE is used to generate additional synthetic patients with these diseases. A set of suspicion indicators has been identified in cooperation with domain experts. The availability of warning signs decreases as the information available to the classifier diminishes, while the performance of the classifiers is not affected to the same degree. Applying the SMOTE oversampling technique improves the results for the prediction models. There is little difference between the performance of the various classification algorithms. The improved problem formulation results in models that are more valid than before. A number of events used to predict the test cases have been identified, but their real-world importance remains to be evaluated by domain experts. The performance of the prediction models can be misleading in terms of practical usefulness. SMOTE is a promising technique for generating additional data, but the evaluation techniques used here are not good enough to draw firm conclusions.
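
SMOTE synthesizes new minority-class samples by interpolating between a minority sample and its nearest minority neighbours. A minimal sketch using the imbalanced-learn library (the random features and the random-forest classifier are stand-ins; the thesis's actual features and classifiers differ):

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per patient, columns derived from
# time-stamped events in the electronic record (values here are random).
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = (rng.random(1000) < 0.05).astype(int)  # rare positive diagnosis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the training fold only, so that no
# synthetic patients leak into the evaluation data.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Keeping the synthetic samples out of the test fold speaks directly to the abstract's caveat that evaluating models trained on SMOTE-generated data is delicate.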
165

Utilizing GPUs for Real-Time Visualization of Snow

Eidissen, Robin January 2009 (has links)
A real-time implementation is achieved, including a GPU-based fluid solver and particle simulation. Snow buildup is implemented on a height-mapped terrain.
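
As a rough illustration of the buildup idea only (a CPU/numpy stand-in, not the thesis's GPU implementation; all names and constants are assumptions), falling particles can be deposited into a snow-depth layer over the height map:

```python
import numpy as np

rng = np.random.default_rng(0)
terrain = rng.random((64, 64)) * 10.0  # hypothetical height map (metres)
snow = np.zeros_like(terrain)          # accumulated snow depth per cell

def step(positions, velocities, dt=0.016):
    """Advance falling snow particles one frame and deposit any that hit
    the surface onto the snow-depth layer."""
    positions += velocities * dt
    x = np.clip(positions[:, 0].astype(int), 0, terrain.shape[1] - 1)
    y = np.clip(positions[:, 1].astype(int), 0, terrain.shape[0] - 1)
    ground = terrain[y, x] + snow[y, x]
    landed = positions[:, 2] <= ground
    np.add.at(snow, (y[landed], x[landed]), 0.001)  # each flake adds depth
    positions[landed, 2] = 30.0  # respawn landed flakes above the terrain
    return positions

n = 10_000
positions = np.column_stack([rng.random(n) * 64, rng.random(n) * 64,
                             rng.random(n) * 30])
velocities = np.tile([0.5, 0.0, -2.0], (n, 1))  # light wind, steady fall
for _ in range(600):
    positions = step(positions, velocities)
```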
166

Adaptive Robotics

Fjær, Dag Henrik, Massali, Kjeld Karim Berg January 2009 (has links)
This report explores continuous-time recurrent neural networks (CTRNNs) and their utility in the field of adaptive robotics. The networks herein are evolved in a simulated environment and evaluated on a real robot. The evolved CTRNNs are presented with simple cognitive tasks and the results are analyzed in detail.
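
For reference, the standard CTRNN integrates each neuron's state y_i as tau_i * dy_i/dt = -y_i + sum_j W_ij * sigmoid(y_j + theta_j) + I_i. A small forward-Euler sketch of those dynamics (parameter ranges are illustrative; the thesis evolves W, tau and theta rather than sampling them):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One forward-Euler step of the standard CTRNN dynamics:
        tau_i * dy_i/dt = -y_i + sum_j W_ij * sigmoid(y_j + theta_j) + I_i
    """
    dydt = (-y + W @ sigmoid(y + theta) + I) / tau
    return y + dt * dydt

# Illustrative 3-neuron network with random parameters.
rng = np.random.default_rng(1)
n = 3
y = np.zeros(n)
W = rng.uniform(-5.0, 5.0, (n, n))
tau = rng.uniform(0.5, 2.0, n)
theta = rng.uniform(-1.0, 1.0, n)
for _ in range(1000):
    y = ctrnn_step(y, W, tau, theta, I=np.zeros(n))
```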
167

Early Warnings of Corporate Bankruptcies Using Machine Learning Techniques

Gogstad, Jostein, Øysæd, Jostein January 2009 (has links)
The tax history of a company is used to predict corporate bankruptcy using Bayesian inference. Our model combines Naive Bayesian classification with Gaussian Processes. Based on a sample of 1184 companies, we conclude that the Naive Bayes-Gaussian Process model successfully forecasts corporate bankruptcies with high accuracy. A comparison is performed with the current system in place at one of the largest banks in Norway. We present evidence that our classification model, based solely on tax data, is better than the model currently in place.
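
The abstract does not detail how the two models are combined; one plausible reading, sketched below with scikit-learn, is to train both and average their predicted probabilities (the random features and the averaging scheme are assumptions, not necessarily the authors' design):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.gaussian_process import GaussianProcessClassifier

# Hypothetical tax-history features: one row per company, e.g. figures
# from several consecutive filing periods (values here are random).
rng = np.random.default_rng(0)
X = rng.random((1184, 8))
y = (rng.random(1184) < 0.1).astype(int)  # 1 = later went bankrupt

nb = GaussianNB().fit(X, y)
gp = GaussianProcessClassifier(random_state=0).fit(X, y)

# Average the two predicted probabilities of bankruptcy.
p_bankrupt = 0.5 * (nb.predict_proba(X)[:, 1] + gp.predict_proba(X)[:, 1])
flagged = p_bankrupt > 0.5
```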
168

MicroRNAs and Transcriptional Control

Skaland, Even January 2009 (has links)
Background: MicroRNAs are small non-coding transcripts that have regulatory roles in the genome. Cis natural antisense transcripts (cis-NATs) are transcripts overlapping a sense transcript at the same locus in the genome, but on the opposite strand. Such antisense transcripts are thought to have regulatory roles, and the hypothesis is that miRNAs might bind to them and thereby activate the overlapping sense transcript. Aim of study: Two aims were identified for this project: (1) investigate whether the non-coding transcripts of cis-NATs show significant enrichment for conserved miRNA seed sites, and (2) correlate miRNA expression with the expression of the sense side of targeted cis-NAT pairs. Results: Seed sites within such antisense transcripts show significant enrichment, suggesting that miRNAs might actually bind to them. There is a significant negative correlation between the expression of mir-28 and the expression of its targeted antisense transcripts, whereas the other miRNAs show no significant correlations. Also, the 3'UTR of the sense side of cis-NAT pairs is longer and more conserved than that of random transcripts. Conclusion: This work has strengthened the hypothesis that miRNAs might bind to such antisense transcripts.
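
A canonical seed site is the reverse complement, in the target sequence, of miRNA positions 2-8. A small sketch of counting such sites (the sequences below are made up for illustration; real mature sequences, e.g. for mir-28, would come from miRBase and transcript sequences from Ensembl):

```python
COMPLEMENT = str.maketrans("AUCG", "UAGC")

def seed_match(mirna):
    """Reverse complement of miRNA positions 2-8: a canonical 7-mer site."""
    return mirna[1:8].translate(COMPLEMENT)[::-1]

def count_seed_sites(transcript, mirna):
    """Naive count of seed-site occurrences in an (antisense) transcript."""
    return transcript.count(seed_match(mirna))

# Made-up RNA sequences, written 5' to 3'.
mirna = "UAGCACCAUCUGAAAUCGGUUA"
transcript = "CCAUGGUGCUAAUGGUGCUACG"
print(count_seed_sites(transcript, mirna))  # prints 2
```

An enrichment analysis would compare such counts against a background of shuffled or randomly drawn transcripts of matched length and composition.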
169

Practical use of Block-Matching in 3D Speckle Tracking

Nielsen, Karl Espen January 2009 (has links)
In this thesis, optimizations for speckle tracking are integrated into an existing framework for real-time tracking of deformable subdivision surfaces. This is employed in the segmentation of the left ventricle (LV) in 3D echocardiography. The main purpose of the project was to optimize the efficiency of material point tracking, thus leading to a more robust estimation of the LV myocardial deformation field. Block-matching is the most time-consuming part of speckle tracking, and the corresponding algorithms used in this thesis are optimized based on a Single Instruction Multiple Data (SIMD) model in order to achieve data-level parallelism. The SIMD model is implemented using Streaming SIMD Extensions (SSE) to improve the processing time for computing the sum of absolute differences, one possible metric for block-matching purposes. Furthermore, a study is conducted to optimize the parameters associated with speckle tracking with regard to both accuracy and computation time. This is tested using simulated data sets of infarcted ventricles in 3D echocardiography. More specifically, the tests examine how the size of kernel blocks and search windows affects the accuracy and processing time of the tracking. The performance of kernel blocks specified in Cartesian and beamspace coordinates is also compared. Finally, tracking accuracy is compared and measured in different regions (apical, mid-level and basal segments) of the LV.
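
The inner loop being optimized computes the sum of absolute differences (SAD) between a kernel block and every candidate block in the search window. A plain 2-D numpy sketch of that loop (the thesis works on 3-D volumes and vectorizes the SAD with SSE; sizes and names here are illustrative):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def block_match(prev_frame, next_frame, y, x, block=9, search=7):
    """Exhaustively scan a (2*search+1)^2 window in next_frame for the
    displacement of the kernel block centred at (y, x) in prev_frame.
    Assumes (y, x) lies far enough from the image border."""
    half = block // 2
    ref = prev_frame[y - half:y + half + 1, x - half:x + half + 1]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = next_frame[y + dy - half:y + dy + half + 1,
                              x + dx - half:x + dx + half + 1]
            score = sad(ref, cand)
            if score < best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx
```

The kernel-block and search-window sizes traded off in the thesis correspond to the `block` and `search` parameters here: larger values improve robustness at a steep cost in SAD evaluations.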
170

Throughput Computing on Future GPUs

Hovland, Rune Johan January 2009 (has links)
The general-purpose computing capabilities of the Graphics Processing Unit (GPU) have recently been given a great deal of attention by the High-Performance Computing (HPC) community. By allowing massively parallel applications to run efficiently on commodity graphics cards, "personal supercomputers" are now available in desktop versions at a low price. For some applications, speedups of 70 times that of a single CPU implementation have been achieved. Among the most popular GPUs are those based on the NVIDIA Tesla architecture, which allows relatively easy development of GPU applications using the NVIDIA CUDA programming environment. While the GPU is gaining interest in the HPC community, others are more reluctant to embrace it as a computational device. The focus on throughput and large data volumes separates Information Retrieval (IR) from HPC, since for IR it is critical to process large amounts of data efficiently, a task the GPU currently does not excel at. Only recently has the IR community begun to explore the possibilities, and an implementation of a search engine for the GPU was published in April 2009. This thesis analyzes how GPUs can be improved to better suit large-data-volume applications. Current graphics cards have a bottleneck in the transfer of data between the host and the GPU. One approach to resolving this bottleneck is to include the host memory as part of the GPU's memory hierarchy. We develop a theoretical model, and based on this model the expected performance improvement for high-data-volume applications is shown for both computationally bound and data-transfer-bound applications. The performance improvement for an existing search engine is also given, based on the theoretical model. For this case, the improvements would result in a speedup between 1.389 and 1.874 for the various query types supported by the search engine.
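
The general shape of such a model can be sketched as simple arithmetic over bandwidths and compute rates. The version below is a generic illustration with made-up numbers, not the thesis's actual model or its 1.389-1.874 result:

```python
def transfer_model_speedup(data_bytes, flops, pcie_gbps, mem_gbps, gpu_gflops):
    """Back-of-the-envelope bound: how much faster an application runs if
    the explicit host-to-GPU copy over PCIe is replaced by direct access
    to host memory that can be overlapped with computation."""
    t_compute = flops / (gpu_gflops * 1e9)
    t_pcie = data_bytes / (pcie_gbps * 1e9)   # copy first, then compute
    t_direct = data_bytes / (mem_gbps * 1e9)  # streamed during compute
    return (t_pcie + t_compute) / max(t_direct, t_compute)

# A data-transfer-bound example: 1 GiB streamed with little arithmetic
# per byte (all numbers are illustrative).
print(transfer_model_speedup(2**30, 1e9, pcie_gbps=5.0,
                             mem_gbps=25.0, gpu_gflops=900.0))
```

In this regime the bound is set almost entirely by the ratio of memory-bus bandwidth to PCIe bandwidth, which is exactly why transfer-bound applications such as search engines stand to gain the most.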
