101

Information Visualisation and the Electronic Health Record : Visualising Collections of Patient Histories from General Practice

Nordbø, Stein Jakob January 2006
This thesis investigates the question: "How can we use information visualisation to support retrospective, explorative analysis of collections of patient histories?" Building on experience from previous projects, we put forth our answer to the question by making the following contributions:

* Reviewing relevant literature.
* Proposing a novel design for visual exploration of collections of histories, motivated by a specific problem within general practice health care and by existing work in the field of information visualisation. This includes both presentation and interactive navigation of the data.
* Describing a query language and associated algorithms for specifying temporal patterns in a patient history.
* Developing an interactive prototype to demonstrate our design, and performing a preliminary case study. This case study is not rigorous enough to support conclusions about the feasibility of the design, but it forms a foundation for improvements to the prototype and further evaluation at a later stage.

We envision that our design can be instrumental in exploring experiences in terms of treatment processes. In addition, we believe that the visualisation can be useful to researchers looking at data to be statistically evaluated, in order to discover new hypotheses or get ideas for the best analysis strategies. Our main conclusion is that the proposed design seems promising, and we will develop our results further through a research project during the summer and autumn of 2006.
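The query language itself is not reproduced in the abstract. As an illustration only, a minimal sketch of matching an ordered event pattern with a time-window constraint against a single patient history could look like the following; the history format, event codes, and `max_gap` parameter are all invented for the example:

```python
from datetime import date, timedelta

# A patient history as a chronologically ordered list of (date, event) pairs.
history = [
    (date(2005, 1, 10), "diagnosis:hypertension"),
    (date(2005, 2, 3),  "prescription:ACE-inhibitor"),
    (date(2005, 6, 20), "measurement:blood-pressure"),
]

def matches(history, pattern, max_gap=timedelta(days=90)):
    """Return True if the events in `pattern` occur in order in `history`,
    with at most `max_gap` between consecutive matched events."""
    idx, last_when = 0, None
    for when, event in history:
        if event == pattern[idx] and (last_when is None or when - last_when <= max_gap):
            idx, last_when = idx + 1, when
            if idx == len(pattern):
                return True
    return False

print(matches(history, ["diagnosis:hypertension", "prescription:ACE-inhibitor"]))  # True
```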
102

Real-Time Wavelet Filtering on the GPU

Nielsen, Erik Axel Rønnevig January 2007
The wavelet transform is used in several applications, including signal enhancement, compression (e.g. JPEG2000), and content analysis (e.g. FBI fingerprinting). Its popularity is due to fast access to high-pass details at various levels of granularity. In this thesis, we present a novel algorithm for computing the discrete wavelet transform using consumer-level graphics hardware (GPUs). Our motivation for looking at the wavelet transform is to speed up the image enhancement calculation used in ultrasound processing. Ultrasound imaging has for many years been one of the most popular medical diagnostic tools. However, with the recent introduction of 3D ultrasound, the combination of a huge increase in data and a real-time requirement has made fast image enhancement techniques very important. Our new methods achieve a speedup of up to 30x compared to SIMD-optimised CPU-based implementations, and are up to three times faster than earlier proposed GPU implementations. The speedup was made possible by analysing the underlying hardware and tailoring the algorithms to fit the GPU better than earlier efforts did; for example, we avoid the lookup tables and dependent texture fetches that slowed those efforts down. In addition, we use advanced GPU features like multiple render targets and texture source mirroring to minimise the number of texture fetches. We also show that by using the GPU, it is possible to offload the CPU, reducing its load from 29% to 1%. This is especially interesting for cardiac ultrasound scanners, since they have a real-time requirement of up to 50 fps. The wavelet method developed in this thesis is so successful that GE Healthcare is including it in their next generation of cardiac ultrasound scanners, to be released later this year. With our proposed method, high-definition television (HDTV) denoising and other data-intensive wavelet filtering applications can be done in real time.
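The thesis's shader-based implementation is not shown in the abstract. As a plain CPU-side sketch of the transform being accelerated, one level of the 1-D Haar wavelet transform in the lifting formulation can be written as follows; the choice of the Haar filter bank is an assumption made for brevity, not necessarily the wavelet used in the ultrasound pipeline:

```python
import numpy as np

def haar_dwt_1d(signal):
    """One level of the 1-D Haar wavelet transform via lifting:
    returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    detail = odd - even             # predict step: high-pass details
    approx = even + detail / 2.0    # update step: low-pass averages
    return approx, detail

approx, detail = haar_dwt_1d([4, 6, 10, 12, 8, 6, 5, 5])
# approx = [5., 11., 7., 5.], detail = [2., 2., -2., 0.]
```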
103

Experiments with hedonism, anticipation and reason in synaptic learning of facial affects : A neural simulation study within Connectology

Knutsen, Håvard Tautra January 2007
Connectology consists of three basic principles, each with its own synaptic learning mechanism: Hedonism (the Skinner synapse), Anticipation (the Pavlov synapse) and Reason (the Hume synapse). This project studies the potential and weaknesses of these mechanisms in visual facial affect recognition. By exploiting the principles of hedonism, a supervision mechanism was created with the purpose of guiding the Pavlov synapses' learning towards the goal of facial affect recognition. Experiments showed that the network performed very poorly and could not recognize facial affects. A deeper study of the supervision mechanism found a severe problem with its operation. An alternative supervision scheme was created, outside the principles of Connectology, to facilitate testing of the Pavlov synapses in a supervised setting. The Pavlov synapses performed very well: they correctly anticipated all affects, although one of the four affects could not be discriminated from the others. The problem with discriminating the fourth affect was not a problem with the Pavlov learning mechanism, but rather with the neuronal representation of the affects. Hume synapses were then introduced in the hidden cluster to facilitate the forming of neuronal concepts of the different facial affects in different areas of the cluster. These representations, if successfully formed, should allow the Pavlov synapses to both anticipate and discriminate between all facial affects. The forming of concepts did not happen, so the Hume synapse did not contribute to better results, but rather degraded them. The conclusion is that the Pavlov synapse lends itself well to supervised learning; further work is needed to create a functioning supervision mechanism within the principles of Connectology, and the application of the Hume synapse also calls for further study.
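The Connectology learning rules themselves are not reproduced in the abstract. As a loose, generic illustration of anticipatory (Pavlovian) associative learning, a Rescorla-Wagner-style weight update is sketched below; the function name, learning rate, and vector encoding are all invented for the example and may differ substantially from the thesis's actual synapse rules:

```python
# Generic sketch of anticipatory (Pavlovian) learning as a Rescorla-Wagner-style
# update; the actual Connectology synapse mechanisms are defined in the thesis.
def pavlov_update(weights, stimulus, outcome, rate=0.1):
    """Nudge weights so the weighted stimulus comes to predict the outcome."""
    prediction = sum(w * s for w, s in zip(weights, stimulus))
    error = outcome - prediction          # anticipation error drives the change
    return [w + rate * error * s for w, s in zip(weights, stimulus)]

weights = [0.0, 0.0]
for _ in range(50):                       # repeated pairing of stimulus and outcome
    weights = pavlov_update(weights, stimulus=[1.0, 0.0], outcome=1.0)
print(weights)  # first weight approaches 1.0: the stimulus now anticipates the outcome
```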
104

Surface Reconstruction and Stereoscopic Video Rendering from Laser Scan Generated Point Cloud Data

Langø, Hans Martin, Tylden, Morten January 2007
This thesis studies the process of creating three-dimensional objects from point clouds. The main goal was to process a point cloud of the Nidaros Cathedral, primarily as a pilot project to establish a standard procedure for future projects with similar goals. The main challenges were two-fold: processing the data and creating stereoscopic videos presenting it. The approach to solving these problems included studying earlier work on similar subjects, learning the relevant algorithms and tools, and finding the best procedures through trial and error. This resulted in a visually pleasing model of the cathedral, as well as a stereoscopic video demonstrating it from all angles. The outcome of the thesis is a pilot project demonstrating the different operations needed to overcome the challenges encountered during the work. The focus has been on presenting the procedures in such a way that they can be used in future projects of a similar nature.
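The abstract does not name the reconstruction tools that were used. As a hypothetical sketch of the general point-cloud-to-mesh step, the open-source Open3D library could be used roughly as follows; the library choice, file names, and parameter values are all placeholders, not the thesis's actual pipeline:

```python
import open3d as o3d  # hypothetical library choice; the thesis's tools are not named

# Load a laser-scan point cloud, estimate normals, and reconstruct a surface mesh.
pcd = o3d.io.read_point_cloud("cathedral_scan.ply")   # placeholder file name
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                     # Poisson surface reconstruction
o3d.io.write_triangle_mesh("cathedral_mesh.ply", mesh)
```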
105

Duplicate Detection with PMC -- A Parallel Approach to Pattern Matching

Leland, Robert January 2007
Fuzzy duplicate detection is an integral part of data cleansing. It consists of finding a set of duplicate records, correctly identifying the original or most representative record, and removing the rest. Internet usage and the availability and collectability of data are increasing, so we gain access to more and more data. Much of this data is collected from, and entered by, humans, which introduces noise: typing mistakes, spelling discrepancies, varying schemas, abbreviations, and more. Because of this, data cleansing and approximate duplicate detection are now more important than ever. In fuzzy matching, records are usually compared by measuring the edit distance between them. This leads to problems with large data sets, where many record comparisons must be made, so previous solutions have found ways to cut down the number of records to compare. This is often done by creating a key on which the records are sorted, with the intention of placing similar records near each other. There are several downsides to this; for example, one must sort and search through potentially large amounts of data several times to catch duplicates accurately. This project differs in that it presents an approach that takes advantage of a multiple instruction stream, multiple data stream (MIMD) architecture called a Pattern Matching Chip (PMC), which allows large numbers of parallel character comparisons. This makes it possible to run fuzzy matching against the entire data set very quickly, removing the need for the clustering and rearranging of data that can often lead to omitted duplicates (false negatives). The main point of this thesis is to test the viability of this approach to duplicate detection, examining its performance, potential, and scalability.
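The PMC hardware performs the character comparisons in parallel; as a plain software sketch of the underlying fuzzy-matching step, pairwise edit-distance comparison against a threshold can be written as follows (the records and threshold are invented for the example, and the hardware parallelism is not modelled):

```python
# Software sketch of fuzzy duplicate detection by edit distance.
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

records = ["Jon Smith", "John Smith", "Jane Doe"]
threshold = 2
duplicates = [(a, b) for i, a in enumerate(records)
              for b in records[i + 1:] if edit_distance(a, b) <= threshold]
print(duplicates)  # [('Jon Smith', 'John Smith')]
```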
106

Experimentation with inverted indexes for dynamic document collections

Bjørklund, Truls Amundsen January 2007
This report aims to assess the efficiency of various inverted indexes when the indexed document collection is dynamic. To achieve this goal, we experiment with three different overall structures: remerge, hierarchical indexes, and a naive B-tree index. An efficiency model is also developed, and the estimates it produces for each structure are compared to the actual results. We introduce two modifications to existing methods. The first is a new scheme for accumulating an index in memory during sort-based inversion. Even though the memory characteristics of this modified scheme are attractive, our experiments suggest that other proposed methods are more efficient. We also introduce a modification to the hierarchical indexes which makes them more flexible. Tf-idf is used as the ranking scheme in all tested methods, and approximations to this scheme are suggested to make it more efficient in an updatable index. We conclude that in our implementation, the hierarchical index with the modification we have suggested performs best overall. We also conclude that the tf-idf ranking scheme is not well suited to updatable indexes: the major problem is that it becomes difficult to make documents searchable immediately without sacrificing update speed.
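For illustration only, a minimal in-memory inverted index with tf-idf ranking might look like the sketch below; the on-disk structures the report actually compares (remerge, hierarchical, B-tree) are not modelled, and all names are invented:

```python
import math
from collections import defaultdict

class InvertedIndex:
    """Toy in-memory inverted index with tf-idf scoring."""
    def __init__(self):
        self.postings = defaultdict(dict)   # term -> {doc_id: term frequency}
        self.doc_count = 0

    def add(self, doc_id, text):
        self.doc_count += 1
        for term in text.lower().split():
            self.postings[term][doc_id] = self.postings[term].get(doc_id, 0) + 1

    def search(self, term):
        docs = self.postings.get(term.lower(), {})
        if not docs:
            return []
        idf = math.log(self.doc_count / len(docs))
        scores = {doc: tf * idf for doc, tf in docs.items()}
        return sorted(scores.items(), key=lambda kv: -kv[1])

index = InvertedIndex()
index.add(1, "inverted indexes for dynamic collections")
index.add(2, "dynamic document collections change often")
print(index.search("inverted"))  # [(1, 0.693...)] -- term appears only in doc 1
```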
107

Learning robot soccer with UCT

Holen, Vidar, Marøy, Audun January 2008
Upper Confidence bounds applied to Trees, or UCT, has shown promise for reinforcement learning problems in different kinds of games, but most of the work has been on turn-based games and single-agent scenarios. In this project we test the feasibility of using UCT in an action-filled multi-agent environment, namely the RoboCup simulated soccer league. Through a series of experiments we test both low-level and high-level approaches. We were forced to conclude that low-level approaches are infeasible and that, while high-level learning is possible, cooperative multi-agent planning did not emerge.
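The core of UCT is the UCB1 selection rule applied at each tree node. A minimal sketch, assuming per-child reward and visit statistics are tracked elsewhere in the search tree, might look like this:

```python
import math

# Sketch of the UCB1 rule at the heart of UCT: choose the child that maximises
# average reward plus an exploration bonus.
def uct_select(children, exploration=1.41):
    """children: list of (total_reward, visit_count); returns the chosen index."""
    total_visits = sum(visits for _, visits in children)
    def ucb1(reward, visits):
        if visits == 0:
            return float("inf")   # always try unvisited actions first
        return reward / visits + exploration * math.sqrt(
            math.log(total_visits) / visits)
    return max(range(len(children)), key=lambda i: ucb1(*children[i]))

# The second action has the better average reward and is selected.
print(uct_select([(3.0, 10), (6.0, 10)]))  # 1
```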
108

Implementation and testing of shadow tags in the M5 simulator

Vinsnesbakk, Sigmund January 2008
The performance gap between CPU and main memory is a limiting factor for the performance of computers. Caches are used to bridge this gap, and they give higher performance if the correct blocks are in place when the CPU needs them. Prefetching is a technique that tries to fetch the correct blocks into the cache before the CPU references them. Prefetching can be implemented in software or hardware: software prefetching is static and cannot be adjusted at runtime, while hardware prefetching can be adjusted dynamically. Shadow tag based prefetching is a scheme for dynamically adjusting the configuration of a hardware prefetcher, based on statistics retrieved from cache performance counters. Shadow tag based prefetching was tested on a uniprocessor architecture in my fifth-year specialization project, on the SimpleScalar simulator, and gave an increase in performance on the SPEC CPU2000 benchmark suite. Uniprocessors represent the past in computer architecture: chip multiprocessors (CMPs) are the new standard, as they provide higher throughput with lower design complexity and power consumption. There is therefore a need for a shadow tag implementation on a CMP simulator. Shadow tags are regular cache address tags that are kept in a shadow cache, without the corresponding data arrays. Different prefetching configurations are tested on the shadow tags to see how they perform compared to the configuration currently used by the L2 cache prefetcher, and the best configuration is applied to the L2 cache prefetcher dynamically. M5 is a complex and flexible object-oriented simulator platform in which software objects simulate the behavior of hardware units. In this project I extend M5 with shadow tags, which involves extending the Bus and Cache implementations in M5. The shadow tag extension is intended to be used by other students and researchers as well, and has been validated on a uniprocessor and a CMP architecture.
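As a rough illustration of the shadow tag idea, the sketch below keeps a tag-only cache (no data arrays) and counts the hits a given configuration would have achieved; the geometry, replacement policy, and address stream are invented for the example and do not reflect the M5 implementation:

```python
# Sketch: a tag-only shadow cache is simulated per candidate prefetcher
# configuration, and the best-performing one is periodically applied to
# the real prefetcher.
class ShadowTags:
    def __init__(self, num_sets, ways, block_bits=6):
        self.block_bits, self.num_sets = block_bits, num_sets
        self.sets = [[] for _ in range(num_sets)]   # per-set LRU list of tags
        self.ways, self.hits, self.accesses = ways, 0, 0

    def access(self, addr):
        """Record a hit or miss against the shadow tags; no data is stored."""
        block = addr >> self.block_bits
        index, tag = block % self.num_sets, block // self.num_sets
        s = self.sets[index]
        self.accesses += 1
        if tag in s:
            self.hits += 1
            s.remove(tag)
        elif len(s) >= self.ways:
            s.pop(0)                                # evict the LRU tag
        s.append(tag)                               # insert/refresh as MRU

shadow = ShadowTags(num_sets=64, ways=8)
for addr in range(0, 1 << 16, 64):                  # a synthetic address stream
    shadow.access(addr)
print(shadow.hits / shadow.accesses)                # hit rate under this geometry
```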
109

Intelligent agents in computer games

Løland, Karl Syvert January 2008
In this project we examine whether or not an intelligent agent can learn how to play a computer game using the same inputs and outputs as a human. An agent architecture is chosen, implemented, and tested on a standard first-person shooter game to see if it can learn how to play the game and find a goal in it. We conclude the report by discussing potential improvements to the current implementation.
110

Tracking for Outdoor Mobile Augmented Reality : Further development of the Zion Augmented Reality Application

Strand, Tor Egil Riegels January 2008
This report deals with providing tracking for an outdoor mobile augmented reality system, the Zion Augmented Reality Application (ZionARA). ZionARA is meant to display a virtual recreation of a 13th-century castle, on the site where it once stood, through an augmented reality head-mounted display. Mobile outdoor augmented/mixed reality puts special demands on what kind of equipment is practical. After a brief evaluation of existing tracking methods, a solution based on GPS and an augmented inertial rotation tracker is evaluated further by trying it out in a real setting. While standard unaugmented GNSS trackers are unable to provide the necessary level of accuracy, a differential GPS receiver is found to be capable of delivering good enough coordinates. The final result is a new version of ZionARA that actually allows a person to walk around the site of the castle and see the castle as it most likely once stood. Source code and data files for ZionARA are provided on a supplemental disc.
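Anchoring a virtual model at a real-world site requires mapping GPS fixes into a local metre-based frame. The sketch below uses a simple equirectangular approximation; ZionARA's actual coordinate handling is not described in the abstract, and the coordinates shown are invented:

```python
import math

EARTH_RADIUS = 6_371_000.0  # metres, spherical approximation

def gps_to_local(lat, lon, origin_lat, origin_lon):
    """Equirectangular approximation: adequate over a castle-sized site."""
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    north = d_lat * EARTH_RADIUS
    east = d_lon * EARTH_RADIUS * math.cos(math.radians(origin_lat))
    return east, north

# A fix a few metres north-east of the site origin:
print(gps_to_local(63.42702, 10.39512, 63.42700, 10.39510))  # ~ (0.99, 2.22)
```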
