51. Protein Remote Homology Detection using Motifs made with Genetic Programming. Håndstad, Tony. January 2006.
<p>A central problem in computational biology is the classification of related proteins into functional and structural classes based on their amino acid sequences. Several methods exist to detect related sequences when the level of sequence similarity is high, but for very low levels of sequence similarity the problem remains an unsolved challenge. Most recent methods use a discriminative approach and train support vector machines to distinguish related sequences from unrelated sequences. One successful approach is to base a kernel function for a support vector machine on shared occurrences of discrete sequence motifs. Still, many protein sequences fail to be classified correctly for lack of a suitable set of motifs. We introduce a motif kernel based on discrete sequence motifs, where the motifs are synthesised using genetic programming. The motifs are evolved to discriminate between different families of evolutionary origin. The motif matches in the sequence data sets are then used to compute kernels for support vector machine classifiers that are trained to discriminate between related and unrelated sequences. When tested on two updated benchmarks, the method yields significantly better results than several other proven methods of remote homology detection. The superiority of the kernel is especially visible on the problem of classifying sequences to the correct fold. A rich set of motifs made specifically for each SCOP superfamily makes it possible to classify more sequences correctly than with previous motif-based methods.</p>
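The motif-kernel idea above can be sketched in a few lines: represent each sequence by a vector of motif-occurrence counts and take normalised inner products as the kernel. This is a toy stand-in (plain substrings instead of GP-evolved motifs; the motif set and sequences below are made up), not the thesis's actual pipeline:

```python
import numpy as np

def motif_hits(sequence, motifs):
    """Count occurrences of each discrete motif (plain substrings here,
    standing in for the evolved GP motifs) in a protein sequence."""
    return np.array([sequence.count(m) for m in motifs], dtype=float)

def motif_kernel(seqs_a, seqs_b, motifs):
    """Kernel matrix of inner products between motif-hit vectors,
    cosine-normalised so that K(x, x) = 1."""
    A = np.array([motif_hits(s, motifs) for s in seqs_a])
    B = np.array([motif_hits(s, motifs) for s in seqs_b])
    K = A @ B.T
    norms_a = np.sqrt(np.einsum("ij,ij->i", A, A))
    norms_b = np.sqrt(np.einsum("ij,ij->i", B, B))
    return K / np.outer(np.maximum(norms_a, 1e-12), np.maximum(norms_b, 1e-12))

motifs = ["AC", "GW", "KL"]        # hypothetical motif set
seqs = ["ACKLAC", "GWGW", "KLKL"]  # hypothetical sequences
K = motif_kernel(seqs, seqs, motifs)
```

A matrix like `K` would then be handed to an SVM that accepts precomputed kernels.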
52. Information Visualisation and the Electronic Health Record: Visualising Collections of Patient Histories from General Practice. Nordbø, Stein Jakob. January 2006.
<p>This thesis investigates the question: "How can we use information visualisation to support retrospective, explorative analysis of collections of patient histories?" Building on experience from previous projects, we put forth our answer to the question by making the following contributions:
* Reviewing relevant literature.
* Proposing a novel design for visual exploration of collections of histories, motivated by a specific problem within general practice health care and by existing work in the field of information visualisation. This includes both presentation and interactive navigation of the data.
* Describing a query language and associated algorithms for specifying temporal patterns in a patient history.
* Developing an interactive prototype to demonstrate our design, and performing a preliminary case study. The case study is not rigorous enough to support conclusions about the feasibility of the design, but it forms a foundation for improvements of the prototype and further evaluation at a later stage.
We envision that our design can be instrumental in exploring experiences in terms of treatment processes. In addition, we believe that the visualisation can be useful to researchers looking at data to be statistically evaluated, in order to discover new hypotheses or get ideas for the best analysis strategies. Our main conclusion is that the proposed design seems promising, and we will further develop our results through a research project during the summer and autumn of 2006.</p>
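The temporal-pattern query language is only described abstractly above. A minimal sketch of the idea, matching an ordered list of event types with a bound on the gap between consecutive matched events, could look like this (the event names and gap semantics are illustrative assumptions, not the thesis's actual language):

```python
def matches(history, pattern, max_gap):
    """Greedily check whether the ordered events in `pattern` occur in
    `history` (a time-sorted list of (day, event) pairs) with at most
    `max_gap` days between consecutive matched events. A toy stand-in
    for a temporal query language over patient histories."""
    i, last_day = 0, None
    for day, event in history:
        if i == len(pattern):
            break
        if event == pattern[i] and (last_day is None or day - last_day <= max_gap):
            last_day = day
            i += 1
    return i == len(pattern)

# Hypothetical patient history: (day, event) pairs.
history = [(0, "diagnosis"), (3, "prescription"), (30, "follow-up")]
```

A real query language would also need disjunction, negation, and interval events; this shows only the core "A then B within t" primitive.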
53. Real-Time Wavelet Filtering on the GPU. Nielsen, Erik Axel Rønnevig. January 2007.
<p>The wavelet transform is used for several applications including signal enhancement, compression (e.g. JPEG2000), and content analysis (e.g. FBI fingerprinting). Its popularity is due to fast access to high-pass details at various levels of granularity. In this thesis, we present a novel algorithm for computing the discrete wavelet transform using consumer-level graphics hardware (GPUs). Our motivation for looking at the wavelet transform is to speed up the image enhancement calculation used in ultrasound processing. Ultrasound imaging has for many years been one of the most popular medical diagnostic tools. However, with the recent introduction of 3D ultrasound, the combination of a huge increase in data and a real-time requirement has made fast image enhancement techniques very important. Our new methods achieve a speedup of up to 30 compared to SIMD-optimised CPU-based implementations, and are up to three times faster than earlier proposed GPU implementations. The speedup was made possible by analysing the underlying hardware and tailoring the algorithms to fit the GPU better than earlier efforts did. For example, we avoid the lookup tables and dependent texture fetches that slowed down those efforts. In addition, we use advanced GPU features like multiple render targets and texture source mirroring to minimise the number of texture fetches. We also show that by using the GPU, it is possible to offload the CPU, reducing its load from 29% to 1%. This is especially interesting for cardiac ultrasound scanners, since they have a real-time requirement of up to 50 fps. The wavelet method developed in this thesis is so successful that GE Healthcare is including it in their next generation of cardiac ultrasound scanners, to be released later this year. With our proposed method, high-definition television (HDTV) denoising and other data-intensive wavelet filtering applications can be done in real time.</p>
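The discrete wavelet transform at the core of this work can be illustrated on the CPU with one lifting step of the Haar wavelet, the simplest wavelet family (the thesis targets GPU shaders and presumably richer filter banks; this sketch shows only the underlying transform):

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar DWT via lifting: a predict step computes the
    detail (high-pass) coefficients, an update step computes approximation
    (low-pass) coefficients that preserve the signal mean."""
    even, odd = signal[0::2], signal[1::2]
    detail = odd - even            # predict: difference of neighbours
    approx = even + detail / 2     # update: pairwise averages
    return approx, detail

def haar_inverse(approx, detail):
    """Invert the lifting steps in reverse order and re-interleave."""
    even = approx - detail / 2
    odd = even + detail
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

signal = np.array([9.0, 7.0, 3.0, 5.0])
approx, detail = haar_level(signal)
```

Repeating `haar_level` on the approximation band gives the multi-level transform; on a GPU, each lifting step maps naturally to a fragment-shader pass over a texture.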
54. Surface Reconstruction and Stereoscopic Video Rendering from Laser Scan Generated Point Cloud Data. Langø, Hans Martin; Tylden, Morten. January 2007.
<p>This thesis studies the process of creating three-dimensional objects from point clouds. The main goal was to process a point cloud of the Nidaros Cathedral, mainly as a pilot project to create a standard procedure for future projects with similar goals. The main challenges were two-fold: processing the data, and creating stereoscopic videos presenting it. The approach to solving these problems included studying earlier work on similar subjects, learning the relevant algorithms and tools, and finding the best procedures through trial and error. This resulted in a visually pleasing model of the cathedral, as well as a stereoscopic video demonstrating it from all angles. The conclusion of the thesis is a pilot project demonstrating the different operations needed to overcome the challenges encountered during the work. The focus has been on presenting the procedures in such a way that they might be used in future projects of a similar nature.</p>
55. Experimentation with inverted indexes for dynamic document collections. Bjørklund, Truls Amundsen. January 2007.
<p>This report aims to assess the efficiency of various inverted indexes when the indexed document collection is dynamic. To achieve this goal, we experiment with three different overall structures: remerge, hierarchical indexes, and a naive B-tree index. An efficiency model is also developed, and the estimates it produces for each structure are compared to the actual results. We introduce two modifications to existing methods. The first is a new scheme for accumulating an index in memory during sort-based inversion. Even though the memory characteristics of this modified scheme are attractive, our experiments suggest that other proposed methods are more efficient. We also introduce a modification to the hierarchical indexes which makes them more flexible. Tf-idf is used as the ranking scheme in all tested methods, and approximations to it are suggested to make it more efficient in an updatable index. We conclude that in our implementation, the hierarchical index with our suggested modification performs best overall. We also conclude that the tf-idf ranking scheme is not fit for updatable indexes: the major problem is that it becomes difficult to make documents searchable immediately without sacrificing update speed.</p>
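The tf-idf update problem the abstract mentions is easy to see in a minimal in-memory inverted index: every added document changes the collection size and the document frequencies, so any precomputed tf-idf scores go stale and must be recomputed at query time. A toy sketch (not the thesis's implementation, which is disk-based):

```python
import math
from collections import defaultdict

class InvertedIndex:
    """Minimal in-memory inverted index with tf-idf ranking, recomputing
    idf at query time so that updates never leave stale scores behind."""
    def __init__(self):
        self.postings = defaultdict(list)   # term -> [(doc_id, tf)]
        self.n_docs = 0

    def add(self, doc_id, text):
        self.n_docs += 1
        counts = defaultdict(int)
        for term in text.lower().split():
            counts[term] += 1
        for term, tf in counts.items():
            self.postings[term].append((doc_id, tf))

    def search(self, query):
        scores = defaultdict(float)
        for term in query.lower().split():
            plist = self.postings.get(term, [])
            if not plist:
                continue
            idf = math.log(self.n_docs / len(plist))  # recomputed per query
            for doc_id, tf in plist:
                scores[doc_id] += tf * idf
        return sorted(scores.items(), key=lambda kv: -kv[1])

idx = InvertedIndex()
idx.add(1, "fast inverted index")
idx.add(2, "b-tree index")
idx.add(3, "fast car")
results = idx.search("fast index")
```

In a disk-resident index this per-query recomputation is exactly what becomes expensive, which is why the thesis resorts to approximations of tf-idf.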
56. Implementation and testing of shadow tags in the M5 simulator. Vinsnesbakk, Sigmund. January 2008.
<p>The performance gap between CPU and main memory is a limiting factor for the performance of computers. Caches are used to bridge this gap, and they give higher performance if the correct blocks are in place when the CPU needs them. Prefetching is a technique that tries to fetch the correct blocks into the cache before the CPU references them. Prefetching can be implemented in software and hardware. Software prefetching is static and cannot be adjusted at runtime; hardware prefetching can be adjusted dynamically at runtime. Shadow tag based prefetching is a scheme for dynamically adjusting the configuration of a hardware prefetcher, based on statistics retrieved from cache performance counters. Shadow tag based prefetching was tested on a uniprocessor architecture in my fifth-year specialization project, on the SimpleScalar simulator. This gave an increase in performance on the SPEC CPU2000 benchmark suite. Uniprocessors represent the past in computer architecture. Chip-multiprocessors (CMP) are the new standard as they provide higher throughput with lower design complexity and power consumption. There is therefore a need for a shadow tag implementation on a CMP simulator. Shadow tags are regular cache address tags that are kept in a shadow cache; the shadow tags do not have the corresponding data arrays. Different prefetching configurations are tested on the shadow tags to see how they perform compared to the configuration used by the L2 cache prefetcher, and the best configuration is applied to the L2 cache prefetcher dynamically. M5 is a complex and flexible simulator platform based on object-orientation, in which software objects simulate the behavior of hardware units. I extend M5 with shadow tags in this project. This involves extending the Bus and Cache implementations in M5. The shadow tag extension is intended to be used also by other students and researchers. The extension has been validated on a uniprocessor and a CMP architecture.</p>
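The shadow-tag idea can be sketched with a tag-only LRU directory per candidate prefetch configuration: replay the access stream against each directory, count the hits each configuration would have produced, and pick the winner. This toy model (sequential prefetching, a single cache level, and a made-up trace) illustrates only the principle, not the M5 extension:

```python
from collections import OrderedDict

class TagOnlyCache:
    """A tag array with LRU replacement and no data lines: the essence of
    a shadow-tag structure used to score one candidate prefetch config."""
    def __init__(self, n_lines, prefetch_dist):
        self.tags = OrderedDict()   # block -> None, in LRU order
        self.n_lines = n_lines
        self.dist = prefetch_dist
        self.hits = 0

    def _touch(self, block):
        """Return True on hit; insert the tag (evicting LRU) on miss."""
        if block in self.tags:
            self.tags.move_to_end(block)
            return True
        self.tags[block] = None
        if len(self.tags) > self.n_lines:
            self.tags.popitem(last=False)
        return False

    def access(self, block):
        if self._touch(block):
            self.hits += 1
        if self.dist:                       # sequential prefetch, tags only
            self._touch(block + self.dist)

def best_config(trace, n_lines, candidates):
    """Replay the block-address trace against one shadow directory per
    candidate prefetch distance and return the distance with most hits."""
    shadows = [TagOnlyCache(n_lines, d) for d in candidates]
    for block in trace:
        for s in shadows:
            s.access(block)
    return max(shadows, key=lambda s: s.hits).dist
```

On a purely sequential trace, a prefetch distance of 1 converts every access after the first into a hit, so the selector picks it over no prefetching.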
57. Tracking for Outdoor Mobile Augmented Reality: Further development of the Zion Augmented Reality Application. Strand, Tor Egil Riegels. January 2008.
<p>This report deals with providing tracking to an outdoor mobile augmented reality system, the Zion Augmented Reality Application. ZionARA is meant to display a virtual recreation of a 13th century castle on the site where it once stood, through an augmented reality head-mounted display. Mobile outdoor augmented/mixed reality puts special demands on what kind of equipment is practical. After briefly evaluating the existing tracking methods, a solution based on GPS and an augmented inertial rotation tracker is further evaluated by trying it out in a real setting. While standard unaugmented GNSS trackers are unable to provide the necessary level of accuracy, a differential GPS receiver is found to be capable of delivering good enough coordinates. The final result is a new version of ZionARA that actually allows a person to walk around the site of the castle and see the castle as it most likely once stood. Source code and data files for ZionARA are provided on a supplemental disc.</p>
58. Profiling and Optimizing a Seismic Application on Modern Architectures: Profiling for performance. Bach, Daniel Andreas. January 2008.
<p>In this thesis, we discuss several profilers and use selected ones to optimize a seismic application for StatoilHydro, Norway's main oil company. Parallelization techniques are also discussed. The application scans multiple traces of seismic data and removes unwanted multiples (noise). Seismic applications are central in simulations that aid geophysicists in finding oil reservoirs. The motivation for selecting this particular application, Adafil, is that it needs to be faster to be useful in practice. Our work gives several useful general hints for how to parallelize and optimize such applications for modern architectures. The application is profiled using several tools, singling out three hotspots. This thesis shows that the application suffers L2 cache misses which can be avoided with prefetching. The work also shows that specific parts of the code, among others one containing a convolution algorithm, can benefit greatly from using an FFT to lower the complexity from O(n^2) to O(n log n), and that leveraging the adaptive implementations of FFTW leads to a significant speedup of the application.</p>
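The FFT-based convolution speedup mentioned above is easy to demonstrate: direct convolution costs O(n^2), while transforming both signals, multiplying pointwise, and inverse-transforming costs O(n log n). A small sketch using NumPy's FFT in place of FFTW:

```python
import numpy as np

def conv_direct(x, h):
    """O(n^2) direct full convolution, like the hotspot in the profiled code."""
    n = len(x) + len(h) - 1
    out = np.zeros(n)
    h = np.asarray(h)
    for i, xi in enumerate(x):
        out[i:i + len(h)] += xi * h
    return out

def conv_fft(x, h):
    """O(n log n) convolution: pointwise product in the frequency domain,
    zero-padded to the full output length to avoid circular wrap-around."""
    n = len(x) + len(h) - 1
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)   # a stand-in for one seismic trace
h = rng.standard_normal(32)    # a stand-in for the filter kernel
```

FFTW additionally tunes the transform plan to the machine at runtime, which is the "adaptive implementation" the thesis leverages.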
59. Utilizing GPUs for Real-Time Visualization of Snow. Eidissen, Robin. January 2009.
<p>A real-time implementation is achieved, including a GPU-based fluid solver and particle simulation. Snow buildup is implemented on a height-mapped terrain.</p>
60. Techniques and Tools for Optimizing Codes on Modern Architectures: A Low-Level Approach. Jensen, Rune Erlend. January 2009.
<p>This thesis describes novel techniques and test implementations for optimizing numerically intensive codes. Our main focus is on how given algorithms can be adapted to run efficiently on modern microprocessors, exploring several architectural features including instruction selection and access patterns related to having several levels of cache. Our approach is also shown to be relevant for multicore architectures. Our primary target applications are linear algebra routines in the form of matrix multiply with dense matrices. We analyze how current compilers, microprocessors, and common optimization techniques (like loop tiling and data relocation) interact. A tunable assembly code generator is developed, built, and tested on a basic BLAS level-3 routine to side-step some of the performance issues of modern compilers. Our generator has been tested on both the Intel Pentium 4 and Intel Core 2 processors. For the Pentium 4, a 10.8% speed-up is achieved over ATLAS's rank2k, and a 17% speed-up over MKL's implementation for 4000-by-4032 matrices. On the Core 2, we optimize our code for 2000-by-2000 matrices and achieve a 24% and a 5% speed-up over ATLAS and MKL, respectively, with our multi-threaded implementation. Decent speed-ups are shown for other matrix sizes as well. Considering that our implementation is far from fully tuned, we consider these results very respectable.</p>
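Loop tiling, one of the techniques discussed, restructures a matrix multiply so that each pair of sub-blocks is reused while it is cache-resident. A high-level sketch (the block size and the use of NumPy for the inner block kernel are illustrative; the thesis works at the assembly level):

```python
import numpy as np

def matmul_tiled(A, B, tile=64):
    """Loop-tiled dense matrix multiply: iterate over tile-sized blocks so
    the A and B sub-blocks of each inner product stay cache-resident.
    NumPy slicing clips at the edges, so ragged border tiles work too."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must agree"
    C = np.zeros((n, p))
    for i in range(0, n, tile):
        for k in range(0, m, tile):
            for j in range(0, p, tile):
                # Block update: C[i,j] += A[i,k] @ B[k,j] on tile slices.
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 80))
B = rng.standard_normal((80, 60))
C = matmul_tiled(A, B, tile=32)
```

In a real BLAS kernel the tile size is chosen per cache level and the innermost block is hand-scheduled, which is what the thesis's assembly code generator automates.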