101

A Classifier for Microprocessor Processing Site Prediction in Human MicroRNAs

Helvik, Snorre Andreas January 2006 (has links)
MicroRNAs are ~22 nt long non-coding RNA sequences that play a central role in gene regulation. As microRNAs are transiently expressed and not necessarily present when RNA from tissue samples is sequenced, bioinformatics is an important part of microRNA discovery. Most computational approaches to microRNA discovery are based on conservation between human and other species. Recent results, however, estimate that around 350 microRNAs are unique to human. There is therefore a need for methods that use characteristics of the primary microRNA transcript to predict microRNA candidates. The main problem with such methods is that many of the characteristics of the primary microRNA transcript are correlated with the location where the Microprocessor complex cleaves the primary microRNA into the precursor, which is unknown until the candidate is experimentally verified. This work presents a method based on support vector machines (SVMs) for Microprocessor processing site prediction in human microRNAs. The SVM correctly predicts the processing site for 43% of the known human microRNAs and performs well at distinguishing random hairpins from microRNAs. The processing site SVM is useful for microRNA discovery in two ways. First, the predicted processing sites can be used to build an SVM with more distinct features and thus increase the accuracy of microRNA gene predictions. Second, it generates information that can be used to predict microRNA candidates directly, such as the score differences between a candidate's potential and predicted processing sites. Preliminary results show that an SVM that uses the predictions from the processing site SVM and is trained explicitly to separate microRNAs from random hairpins performs better than current prediction-based approaches. This illustrates the potential gain of using processing site predictions in microRNA gene prediction.
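To make the approach concrete, here is a minimal sketch of processing-site prediction as classification, assuming a scikit-learn-style SVM: every candidate position in a hairpin is scored, and the highest-scoring position is returned. The window-based nucleotide-frequency features are illustrative placeholders, not the thesis's actual feature set.

```python
# Sketch only: score candidate Microprocessor processing sites with an SVM
# and return the highest-scoring position. Features are hypothetical.
import numpy as np
from sklearn.svm import SVC

def site_features(seq, pos, window=5):
    """Nucleotide frequencies in a window around a candidate cut site."""
    local = seq[max(0, pos - window):pos + window]
    return [local.count(base) / max(len(local), 1) for base in "ACGU"]

# Toy training data: (hairpin sequence, known processing-site position).
train = [("ACGUACGUACGUACGUACGU", 8), ("UGCAUGCAUGCAUGCAUGCA", 10)]
X, y = [], []
for seq, true_pos in train:
    for pos in range(5, len(seq) - 5):             # candidate positions
        X.append(site_features(seq, pos))
        y.append(1 if pos == true_pos else 0)      # positive = true site
clf = SVC(kernel="rbf").fit(np.array(X), y)

def predict_site(seq):
    """Return the candidate position with the highest SVM decision value."""
    candidates = list(range(5, len(seq) - 5))
    scores = clf.decision_function(
        np.array([site_features(seq, p) for p in candidates]))
    return candidates[int(np.argmax(scores))]
```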
102

Protein Remote Homology Detection using Motifs made with Genetic Programming

Håndstad, Tony January 2006 (has links)
A central problem in computational biology is the classification of related proteins into functional and structural classes based on their amino acid sequences. Several methods exist to detect related sequences when the level of sequence similarity is high, but for very low levels of sequence similarity the problem remains an unsolved challenge. Most recent methods use a discriminative approach and train support vector machines to distinguish related sequences from unrelated sequences. One successful approach is to base a kernel function for a support vector machine on shared occurrences of discrete sequence motifs. Still, many protein sequences fail to be classified correctly for lack of a suitable set of motifs. We introduce a motif kernel based on discrete sequence motifs, where the motifs are synthesised using genetic programming. The motifs are evolved to discriminate between different families of evolutionary origin. The motif matches in the sequence data sets are then used to compute kernels for support vector machine classifiers that are trained to discriminate between related and unrelated sequences. When tested on two updated benchmarks, the method yields significantly better results than several other proven methods of remote homology detection. The superiority of the kernel is especially visible on the problem of classifying sequences to the correct fold. A rich set of motifs made specifically for each SCOP superfamily makes it possible to classify more sequences correctly than with previous motif-based methods.
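As an illustration of how a motif-occurrence kernel can be computed, here is a small sketch in which plain regular expressions stand in for the genetically programmed motifs (the patterns below are invented for the example). The kernel value for two sequences is the dot product of their motif-match count vectors, and the resulting Gram matrix can be handed to an SVM with a precomputed kernel.

```python
# Sketch of a motif-occurrence kernel; the motifs are placeholders,
# not motifs evolved by genetic programming.
import re
import numpy as np

motifs = [r"G.G..G", r"C..C", r"[ILV]{3}"]          # illustrative patterns

def motif_vector(seq):
    """Count non-overlapping matches of each motif in the sequence."""
    return np.array([len(re.findall(m, seq)) for m in motifs], dtype=float)

def motif_kernel(seq_a, seq_b):
    """Kernel = dot product of motif-match count vectors."""
    return float(motif_vector(seq_a) @ motif_vector(seq_b))

seqs = ["GAGTTGCAAC", "ILVILVGAGAAG"]
gram = np.array([[motif_kernel(a, b) for b in seqs] for a in seqs])
# `gram` can be passed to sklearn.svm.SVC(kernel="precomputed").
```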
103

Information Visualisation and the Electronic Health Record : Visualising Collections of Patient Histories from General Practice

Nordbø, Stein Jakob January 2006 (has links)
This thesis investigates the question: "How can we use information visualisation to support retrospective, explorative analysis of collections of patient histories?" Building on experience from previous projects, we put forth our answer to the question by making the following contributions:

* Reviewing relevant literature.
* Proposing a novel design for visual exploration of collections of histories, motivated by a specific problem within general practice health care and by existing work in the field of information visualisation. This includes both presentation and interactive navigation of the data.
* Describing a query language and associated algorithms for specifying temporal patterns in a patient history.
* Developing an interactive prototype to demonstrate our design, and performing a preliminary case study. This case study is not rigorous enough to draw conclusions about the feasibility of the design, but it forms a foundation for improvements to the prototype and further evaluation at a later stage.

We envision that our design can be instrumental in exploring experiences in terms of treatment processes. In addition, we believe that the visualisation can be useful to researchers looking at data to be statistically evaluated, in order to discover new hypotheses or get ideas for the best analysis strategies. Our main conclusion is that the proposed design seems promising, and we will further develop our results through a research project during the summer and autumn of 2006.
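The query language itself is only summarised above, so the following sketch shows one plausible primitive such a language needs: testing whether one event is followed by another within a bounded time window, over a patient history stored as (day, event) pairs. The event names and the rule are invented for illustration.

```python
# Hypothetical temporal-pattern primitive: "a followed by b within max_gap days".
def followed_by(history, a, b, max_gap):
    """True if event `a` occurs and event `b` follows within `max_gap` days."""
    events = sorted(history)                        # order by day
    for day_a, ev_a in events:
        if ev_a != a:
            continue
        for day_b, ev_b in events:
            if ev_b == b and 0 < day_b - day_a <= max_gap:
                return True
    return False

history = [(0, "diagnosis"), (14, "prescription"), (90, "follow-up")]
print(followed_by(history, "diagnosis", "prescription", 30))   # True
print(followed_by(history, "prescription", "follow-up", 30))   # False (76-day gap)
```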
104

Real-Time Wavelet Filtering on the GPU

Nielsen, Erik Axel Rønnevig January 2007 (has links)
The wavelet transform is used for several applications including signal enhancement, compression (e.g. JPEG2000), and content analysis (e.g. FBI fingerprinting). Its popularity is due to fast access to high-pass details at various levels of granularity. In this thesis, we present a novel algorithm for computing the discrete wavelet transform using consumer-level graphics hardware (GPUs). Our motivation for looking at the wavelet transform is to speed up the image enhancement calculation used in ultrasound processing. Ultrasound imaging has for many years been one of the most popular medical diagnostic tools. However, with the recent introduction of 3D ultrasound, the combination of a huge increase in data and a real-time requirement has made fast image enhancement techniques very important. Our new methods achieve a speedup of up to 30 times compared to SIMD-optimised CPU-based implementations, and up to three times compared to earlier proposed GPU implementations. The speedup was made possible by analysing the underlying hardware and tailoring the algorithms to fit the GPU better than earlier efforts did. For example, we avoid the lookup tables and dependent texture fetches that slowed down those efforts. In addition, we use advanced GPU features like multiple render targets and texture source mirroring to minimise the number of texture fetches. We also show that by using the GPU, it is possible to offload the CPU, reducing its load from 29% to 1%. This is especially interesting for cardiac ultrasound scanners, since they have a real-time requirement of up to 50 fps. The wavelet method developed in this thesis is so successful that GE Healthcare is including it in their next generation of cardiac ultrasound scanners, which will be released later this year. With our proposed method, high-definition television (HDTV) denoising and other data-intensive wavelet filtering applications can be done in real time.
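The GPU-specific optimisations above depend on hardware details, but the underlying computation is easy to show. Below is a CPU reference sketch of one level of the discrete wavelet transform, using Haar filters for brevity (the thesis does not necessarily use Haar): the signal is split into low-pass approximation and high-pass detail coefficients.

```python
# CPU reference sketch of one discrete wavelet transform level (Haar).
import numpy as np

def haar_dwt_level(signal):
    """Split a signal into low-pass (approximation) and high-pass (detail) halves."""
    signal = np.asarray(signal, dtype=float)
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2)    # low-pass coefficients
    detail = (even - odd) / np.sqrt(2)    # high-pass coefficients
    return approx, detail

approx, detail = haar_dwt_level([4, 6, 10, 12, 8, 6, 5, 5])
# Filtering (e.g. denoising) thresholds `detail` before inverting the transform;
# recursing on `approx` yields the coarser levels.
```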
105

Experiments with hedonism, anticipation and reason in synaptic learning of facial affects : A neural simulation study within Connectology

Knutsen, Håvard Tautra January 2007 (has links)
Connectology consists of three basic principles, each with its own synaptic learning mechanism: Hedonism (the Skinner synapse), Anticipation (the Pavlov synapse) and Reason (the Hume synapse). This project studies the potential and weaknesses of these mechanisms in visual facial affect recognition. By exploiting the principles of hedonism, a supervision mechanism was created with the purpose of guiding the Pavlov synapses' learning towards the goal of facial affect recognition. Experiments showed that the network performed very poorly and could not recognize facial affects. A deeper study of the supervision mechanism found a severe problem with its operation. An alternative supervision scheme was created, outside the principles of Connectology, to facilitate testing of the Pavlov synapses in a supervised setting. The Pavlov synapses performed very well: the synapses correctly anticipated all affects; however, one of the four affects could not be discriminated from the others. The problem with discriminating the fourth affect was not a problem with the Pavlov learning mechanism, but rather with the neuronal representation of the affects. Hume synapses were then introduced in the hidden cluster. This was done to facilitate the forming of neuronal concepts of the different facial affects in different areas of the cluster. These representations, if successfully formed, should allow the Pavlov synapses to both anticipate and discriminate between all facial affects. The forming of concepts did not happen, and thus the Hume synapse did not contribute to better results, but rather degraded them. The conclusion is that the Pavlov synapse lends itself well to learning by supervision, that further work is needed to create a functioning supervision mechanism within the principles of Connectology, and that the application of the Hume synapse also calls for further study.
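Connectology's actual update rules are not given here, so the following is only a generic sketch of anticipation-style (Pavlov) synaptic learning under assumed dynamics: a weight strengthens when presynaptic activity is paired with postsynaptic activity and decays slowly otherwise. The learning rate and decay constants are invented for illustration, not the thesis's mechanism.

```python
# Generic sketch of a Pavlov-style (anticipatory) weight update;
# the rule and constants are assumptions.
def pavlov_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen on paired pre/post activity; decay slowly without pairing."""
    if pre_active and post_active:
        return weight + rate * (1.0 - weight)    # move toward full strength
    return weight * (1.0 - 0.01)                 # slow decay

w = 0.0
for _ in range(20):                              # repeated pairing of stimuli
    w = pavlov_update(w, pre_active=True, post_active=True)
print(round(w, 3))                               # approaches 1.0 with pairing
```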
106

Surface Reconstruction and Stereoscopic Video Rendering from Laser Scan Generated Point Cloud Data

Langø, Hans Martin, Tylden, Morten January 2007 (has links)
This thesis studies the process of creating three-dimensional objects from point clouds. The main goal was to process a point cloud of the Nidaros Cathedral, mainly as a pilot project to create a standard procedure for future projects with similar goals. The main challenges were twofold: processing the data and creating stereoscopic videos presenting it. The approach to solving the problems included studying earlier work on similar subjects, learning the relevant algorithms and tools, and finding the best procedures through trial and error. This resulted in a visually pleasing model of the cathedral, as well as a stereoscopic video demonstrating it from all angles. The conclusion of the thesis is a pilot project demonstrating the different operations needed to overcome the challenges encountered during the work. The focus has been on presenting the procedures in such a way that they might be used in future projects of a similar nature.
107

Duplicate Detection with PMC -- A Parallel Approach to Pattern Matching

Leland, Robert January 2007 (has links)
Fuzzy duplicate detection is an integral part of data cleansing. It consists of finding a set of duplicate records, correctly identifying the original or most representative record, and removing the rest. Internet usage, data availability, and data collection are all increasing, so more and more data is accessible. A lot of this data is collected from, and entered by, humans, which causes noise in the data from typing mistakes, spelling discrepancies, varying schemas, abbreviations, and more. Because of this, data cleansing and approximate duplicate detection are now more important than ever. In fuzzy matching, records are usually compared by measuring the edit distance between two records. This leads to problems with large data sets, where there are many record comparisons to be made, so previous solutions have found ways to cut down on the number of records to be compared. This is often done by creating a key on which records are then sorted, with the intention of placing similar records near each other. There are several downsides to this; for example, you need to sort and search through potentially large amounts of data several times to catch duplicate data accurately. This project differs in that it presents an approach which takes advantage of a multiple instruction stream, multiple data stream (MIMD) architecture called a Pattern Matching Chip (PMC), which allows large amounts of parallel character comparisons. This makes it possible to do fuzzy matching against the entire data set very quickly, removing the need for clustering and rearranging of the data, which can often lead to omitted duplicates (false negatives). The main point of this work is to test the viability of this approach for duplicate detection, examining its performance, potential, and scalability.
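The PMC performs these character comparisons in parallel hardware; what it accelerates per record pair is, in essence, the standard edit distance. For reference, a minimal sequential sketch:

```python
# Sequential reference for the comparison the PMC parallelises:
# Levenshtein edit distance between two records.
def edit_distance(a, b):
    """Dynamic-programming edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Records are flagged as fuzzy duplicates when the distance is small:
print(edit_distance("Jon Smith", "John Smith"))   # 1
```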
108

Experimentation with inverted indexes for dynamic document collections

Bjørklund, Truls Amundsen January 2007 (has links)
This report aims to assess the efficiency of various inverted indexes when the indexed document collection is dynamic. To achieve this goal, we experiment with three different overall structures: remerge, hierarchical indexes, and a naive B-tree index. An efficiency model is also developed, and the resulting estimates for each structure are compared to the actual results. We introduce two modifications to existing methods. The first is a new scheme for accumulating an index in memory during sort-based inversion. Even though the memory characteristics of this modified scheme are attractive, our experiments suggest that other proposed methods are more efficient. We also introduce a modification to the hierarchical indexes which makes them more flexible. Tf-idf is used as the ranking scheme in all tested methods, and approximations to this scheme are suggested to make it more efficient in an updatable index. We conclude that in our implementation, the hierarchical index with our suggested modification performs best overall. We also conclude that the tf-idf ranking scheme is not well suited to updatable indexes: the major problem is that it becomes difficult to make documents searchable immediately without sacrificing update speed.
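To see why tf-idf causes trouble for updatable indexes, consider a minimal in-memory inverted index sketch: the idf factor depends on the total document count and on each term's document frequency, so every insertion silently changes the scores of existing postings. The structure below is illustrative, not the report's implementation.

```python
# Minimal inverted index with tf-idf scoring (illustrative only).
import math
from collections import defaultdict

index = defaultdict(dict)      # term -> {doc_id: term frequency}
doc_count = 0

def add_document(doc_id, text):
    global doc_count
    doc_count += 1             # note: changes idf for *every* indexed term
    for term in text.lower().split():
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

def tf_idf(term, doc_id):
    postings = index.get(term, {})
    if doc_id not in postings:
        return 0.0
    idf = math.log(doc_count / len(postings))
    return postings[doc_id] * idf

add_document(1, "inverted index structures")
add_document(2, "dynamic document collections need dynamic index updates")
print(tf_idf("inverted", 1))   # tf=1, idf=log(2/1)
```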
109

Learning robot soccer with UCT

Holen, Vidar, Marøy, Audun January 2008 (has links)
Upper Confidence bounds applied to Trees (UCT) has shown promise for reinforcement learning problems in different kinds of games, but most of the work has been on turn-based games and single-agent scenarios. In this project we test the feasibility of using UCT in an action-filled multi-agent environment, namely the RoboCup simulated soccer league. Through a series of experiments we test both low-level and high-level approaches. We were forced to conclude that low-level approaches are infeasible, and that while high-level learning is possible, cooperative multi-agent planning did not emerge.
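For reference, a minimal sketch of the UCT selection rule the project builds on: each child of a tree node is scored by its average reward plus an exploration bonus that shrinks with repeated visits, and unvisited children are tried first. The exploration constant is a conventional choice, not the project's tuned value.

```python
# Sketch of UCT child selection (UCB1 applied to tree nodes).
import math

def uct_select(children, c=1.4):
    """children: list of (total_reward, visit_count); returns index of best child."""
    parent_visits = sum(visits for _, visits in children)
    def score(stats):
        reward, visits = stats
        if visits == 0:
            return float("inf")                   # always try unvisited moves first
        exploit = reward / visits                 # average reward
        explore = c * math.sqrt(math.log(parent_visits) / visits)
        return exploit + explore
    return max(range(len(children)), key=lambda i: score(children[i]))

print(uct_select([(3.0, 10), (1.0, 2), (0.0, 0)]))   # 2: the unvisited child
```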
110

Implementation and testing of shadow tags in the M5 simulator

Vinsnesbakk, Sigmund January 2008 (has links)
The performance gap between the CPU and main memory is a limiting factor for computer performance. Caches are used to bridge this gap, and they give higher performance if the correct blocks are in place when the CPU needs them. Prefetching is a technique that tries to fetch the correct blocks into the cache before the CPU references them. Prefetching can be implemented in software or hardware: software prefetching is static and cannot be adjusted at runtime, while hardware prefetching can be adjusted dynamically at runtime. Shadow tag based prefetching is a scheme for dynamically adjusting the configuration of a hardware prefetcher, based on statistics retrieved from cache performance counters. Shadow tag based prefetching was tested on a uniprocessor architecture in my fifth-year specialization project, on the SimpleScalar simulator, and gave an increase in performance on the SPEC CPU2000 benchmark suite. Uniprocessors represent the past in computer architecture. Chip multiprocessors (CMPs) are the new standard, as they provide higher throughput with lower design complexity and power consumption. There is therefore a need for a shadow tag implementation on a CMP simulator. Shadow tags are regular cache address tags that are kept in a shadow cache; the shadow tags do not have the corresponding data arrays. Different prefetching configurations are tested on the shadow tags to see how they perform compared to the prefetching configuration used in the L2 cache prefetcher, and the best configuration is applied to the L2 cache prefetcher dynamically. M5 is a complex and flexible simulator platform based on object orientation, in which software objects simulate the behavior of hardware units. In this project I extend M5 with shadow tags, which involves extending the Bus and Cache implementations in M5. The shadow tag extension is intended to be used also by other students and researchers, and has been validated on a uniprocessor and a CMP architecture.
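To illustrate the idea, here is a simplified sketch of a shadow tag store: a tag-only LRU structure that records which block addresses would be resident under a trial configuration, so the hit rate of an alternative prefetcher setup can be measured without duplicating the data arrays. The fully associative LRU organisation is a simplification of real cache tag arrays.

```python
# Simplified shadow-tag store: tags without data arrays, used to measure
# the hit rate a trial prefetch configuration *would* achieve.
from collections import OrderedDict

class ShadowTags:
    def __init__(self, num_blocks):
        self.tags = OrderedDict()                 # block address -> (no data)
        self.num_blocks = num_blocks
        self.hits = self.accesses = 0

    def access(self, block_addr):
        self.accesses += 1
        if block_addr in self.tags:
            self.hits += 1
            self.tags.move_to_end(block_addr)     # refresh LRU position
        else:
            self.tags[block_addr] = None
            if len(self.tags) > self.num_blocks:
                self.tags.popitem(last=False)     # evict least recently used tag

    def hit_rate(self):
        return self.hits / self.accesses if self.accesses else 0.0

# The configuration whose shadow tags show the best hit rate is then
# applied to the real L2 prefetcher at runtime.
```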
