111

Profiling and Optimizing a Seismic Application on Modern Architectures : Profiling for performance

Bach, Daniel Andreas January 2008 (has links)
In this thesis, we discuss several profilers and use selected ones to optimize a seismic application for StatoilHydro, Norway's main oil company. Parallelization techniques are also discussed. The application scans multiple traces of seismic data and removes unwanted multiples (noise). Seismic applications are central in simulations that aid geophysicists in finding oil reservoirs. The motivation for selecting this particular application, Adafil, is that it needs to be faster to be useful in practice. Our work gives several useful general hints for how to parallelize and optimize such applications for modern architectures. The application is profiled using several tools, singling out three hotspots. This thesis shows that the application incurs L2 cache misses which can be avoided with prefetching. The work also shows that specific parts of the code, among others one containing a convolution algorithm, benefit greatly from using the FFT to lower the complexity of these parts from O(n^2) to O(n log n), and that leveraging the adaptive implementations of FFTW leads to a significant speedup of the application.
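
To make the complexity argument concrete, here is a minimal NumPy sketch of replacing direct O(n^2) convolution with an O(n log n) FFT-based one. This is a generic illustration, not the Adafil/FFTW code itself; the trace and wavelet are synthetic placeholders.

```python
import numpy as np

def direct_convolution(signal, kernel):
    """Direct convolution (quadratic-time for comparable lengths),
    analogous to the hotspot described above."""
    return np.convolve(signal, kernel, mode="full")

def fft_convolution(signal, kernel):
    """O(n log n) convolution via the convolution theorem."""
    n = len(signal) + len(kernel) - 1
    size = 1 << (n - 1).bit_length()        # round up to a power of two
    spectrum = np.fft.rfft(signal, size) * np.fft.rfft(kernel, size)
    return np.fft.irfft(spectrum, size)[:n]

trace = np.random.randn(4096)               # synthetic seismic trace
wavelet = np.random.randn(128)              # synthetic filter/wavelet
assert np.allclose(direct_convolution(trace, wavelet),
                   fft_convolution(trace, wavelet))
```

An adaptive library such as FFTW additionally tunes its transform plans to the hardware at hand, which is the property the abstract refers to.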
112

Early warnings of critical diagnoses

Alvestad, Stig January 2009 (has links)
A disease which is left untreated for a longer period is more likely to cause negative consequences for the patient. Even though the general practitioner is able to discover the disease quickly in most cases, there are patients who should have been discovered earlier. Electronic patient records store time-stamped health information about patients, recorded by the health personnel treating the patient. This makes it possible to do a retrospective analysis in order to determine whether there was sufficient information to make the diagnosis earlier than the general practitioner actually did. Classification algorithms from the machine learning domain can utilise large collections of electronic patient records to build models which can predict whether a patient will get the disease or not. These models could be used to gain more knowledge about these diseases, and in a long-term perspective they could become a support for the general practitioner in daily practice. The purpose of this thesis is to design and implement a software system which can predict whether a patient will get a disease in the near future or not. The system should attempt to predict the disease before the general practitioner even suspects that the patient might have it. Further, the objective is to use this system to identify the warning signs used to make the predictions, and to analyse the usefulness of the predictions and the warning signs. The diseases asthma, type 2 diabetes and hypothyroidism have been selected as the test cases for our methodology. A set of suspicion-indicators, which indicate that the general practitioner has suspected the disease, is identified in an iterative process. These suspicion-indicators are subsequently used to limit the information available to the classification algorithms. This information is then used to build prediction models using different classification algorithms. The prediction models are evaluated in terms of various performance measures, and the models themselves are analysed manually. Experiments are conducted in order to find favourable parameter values for the information extraction process. Because there are relatively few patients who have the test-case diseases, the oversampling technique SMOTE is used to generate additional synthetic patients with these diseases. A set of suspicion-indicators has been identified in cooperation with domain experts. The availability of warning signs decreases as the information available to the classifier diminishes, while the performance of the classifiers is not affected to such a large degree. Applying the SMOTE oversampling technique improves the results for the prediction models. There is not much difference between the performance of the various classification algorithms. The improved problem formulation results in models which are more valid than before. A number of events which are used to predict the test-case diseases have been identified, but their real-world importance remains to be evaluated by domain experts. The performance of the prediction models can be misleading in terms of practical usefulness. SMOTE is a promising technique for generating additional data, but the evaluation techniques used here are not good enough to draw any conclusions.
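
As a rough sketch of the oversampling step described above, the following applies SMOTE inside a cross-validation pipeline so that synthetic patients are only generated from training folds. The feature matrix, labels, and choice of classifier are placeholders, not the thesis's patient-record features or models.

```python
import numpy as np
from imblearn.over_sampling import SMOTE        # imbalanced-learn
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per patient, columns derived from
# time-stamped events recorded before any suspicion-indicator occurred.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (rng.random(1000) < 0.05).astype(int)       # rare positive class: has the disease

pipeline = Pipeline([("smote", SMOTE(random_state=0)),
                     ("clf", RandomForestClassifier(random_state=0))])
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(scores.mean())
```

Evaluating directly on oversampled data would give overly optimistic figures, which echoes the abstract's caveat about the evaluation techniques.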
113

Utilizing GPUs for Real-Time Visualization of Snow

Eidissen, Robin January 2009 (has links)
A real-time implementation is achieved, including a GPU-based fluid solver and particle simulation. Snow buildup is implemented on a height-mapped terrain.
114

Adaptive Robotics

Fjær, Dag Henrik, Massali, Kjeld Karim Berg January 2009 (has links)
This report explores continuous-time recurrent neural networks (CTRNNs) and their utility in the field of adaptive robotics. The networks herein are evolved in a simulated environment and evaluated on a real robot. The evolved CTRNNs are presented with simple cognitive tasks and the results are analyzed in detail.
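
For reference, a minimal sketch of the standard CTRNN dynamics, tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i, integrated with forward Euler. The weights, biases, and time constants below are random placeholders standing in for evolved parameters, not values from the report.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, weights, biases, tau, external_input, dt=0.01):
    """One forward-Euler step of the standard CTRNN equations."""
    dydt = (-y + weights.T @ sigmoid(y + biases) + external_input) / tau
    return y + dt * dydt

# Three fully connected neurons with placeholder parameters.
rng = np.random.default_rng(1)
n = 3
weights = rng.normal(scale=2.0, size=(n, n))   # weights[i, j]: from neuron i to neuron j
biases = rng.normal(size=n)
tau = np.full(n, 0.5)                          # time constants
y = np.zeros(n)                                # neuron states
for _ in range(1000):
    y = ctrnn_step(y, weights, biases, tau, external_input=np.array([0.5, 0.0, 0.0]))
print(sigmoid(y + biases))                     # firing rates after settling
```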
115

Early Warnings of Corporate Bankruptcies Using Machine Learning Techniques

Gogstad, Jostein, Øysæd, Jostein January 2009 (has links)
The tax history of a company is used to predict corporate bankruptcies using Bayesian inference. Our developed model uses a combination of Naive Bayesian classification and Gaussian Processes. Based on a sample of 1184 companies, we conclude that the Naive Bayes-Gaussian Process model successfully forecasts corporate bankruptcies with high accuracy. A comparison is performed with the current system in place at one of the largest banks in Norway. We present evidence that our classification model, based solely on tax data, is better than the model currently in place.
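
A hedged sketch of the Naive Bayes part of such a classifier, using synthetic stand-ins for tax-history features; how the thesis combines this with Gaussian Processes is not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical tax-history features per company (e.g. yearly reported revenue,
# tax paid, arrears); the thesis's actual feature set is not shown here.
rng = np.random.default_rng(42)
X = rng.normal(size=(1184, 6))
y = (rng.random(1184) < 0.1).astype(int)       # 1 = went bankrupt

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = GaussianNB().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```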
116

MicroRNAs and Transcriptional Control

Skaland, Even January 2009 (has links)
Background: MicroRNAs are small non-coding transcripts that have regulatory roles in the genome. Cis natural antisense transcripts (cis-NATs) are transcripts overlapping a sense transcript at the same locus in the genome, but on the opposite strand. Such antisense transcripts are thought to have regulatory roles, and the hypothesis is that miRNAs might bind to them and thus activate the overlapping sense transcript. Aim of study: The following two aims have been identified for this project: (1) investigate whether the non-coding transcripts of cis-NATs show significant enrichment for conserved miRNA seed sites, and (2) correlate miRNA expression with expression of the sense side of targeted cis-NAT pairs. Results: Seed sites within such antisense transcripts show significant enrichment, suggesting that miRNAs might actually bind to them. There is a significant negative correlation between the expression of mir-28 and the expression of its targeted antisense transcripts, whereas the other miRNAs show no significant correlations. Also, the 3'UTRs of the sense side of cis-NAT pairs are longer and more conserved than those of random transcripts. Conclusion: This work has strengthened the hypothesis that miRNAs might bind to such antisense transcripts.
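
As a small illustration of what counting canonical seed sites involves (occurrences of the reverse complement of miRNA positions 2-8 in a transcript), with made-up sequences rather than the project's data:

```python
def seed_match_count(mirna, transcript):
    """Count 7mer seed sites: occurrences of the reverse complement of
    miRNA nucleotides 2-8 in the transcript (both sequences 5'->3', RNA)."""
    complement = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]                                    # positions 2-8 (0-based slice)
    site = "".join(complement[b] for b in reversed(seed))
    return sum(transcript.startswith(site, i)
               for i in range(len(transcript) - len(site) + 1))

mirna = "UGACGUACGUACGUACGUACGU"        # hypothetical miRNA, not a real miRBase entry
antisense = "AAGUACGUCAAGUACGUCAA"      # toy antisense transcript fragment
print(seed_match_count(mirna, antisense))   # -> 2
```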
117

Practical use of Block-Matching in 3D Speckle Tracking

Nielsen, Karl Espen January 2009 (has links)
In this thesis, optimizations for speckle tracking are integrated into an existing framework for real-time tracking of deformable subdivision surfaces, employed in the segmentation of the left ventricle (LV) in 3D echocardiography. The main purpose of the project was to optimize the efficiency of material point tracking, leading to a more robust estimation of the LV myocardial deformation field. Block-matching is the most time-consuming part of speckle tracking, and the corresponding algorithms used in this thesis are optimized using a Single Instruction Multiple Data (SIMD) model in order to achieve data-level parallelism. The SIMD model is implemented with Streaming SIMD Extensions (SSE) to reduce the processing time for computing the sum of absolute differences, one possible metric for block-matching purposes. Furthermore, a study is conducted to optimize the parameters associated with speckle tracking with regard to both accuracy and computation time. This is tested on simulated data sets of infarcted ventricles in 3D echocardiography. More specifically, the tests examine how the size of kernel blocks and search windows affects the accuracy and processing time of the tracking. The study also compares the performance of kernel blocks specified in Cartesian and beamspace coordinates. Finally, tracking accuracy is compared and measured in different regions (apical, mid-level and basal segments) of the LV.
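
For context, a plain scalar sketch of block matching with the sum of absolute differences on synthetic frames. The thesis accelerates this inner computation with SSE intrinsics; the NumPy version below only illustrates the algorithm itself.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def best_match(kernel, frame, top_left, search_radius):
    """Exhaustive block matching: slide the kernel block over a search window
    in `frame` centred on `top_left`; return the displacement with minimal SAD."""
    kh, kw = kernel.shape
    y0, x0 = top_left
    best_disp, best_score = (0, 0), float("inf")
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y and 0 <= x and y + kh <= frame.shape[0] and x + kw <= frame.shape[1]:
                score = sad(kernel, frame[y:y + kh, x:x + kw])
                if score < best_score:
                    best_disp, best_score = (dy, dx), score
    return best_disp, best_score

rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
frame1 = np.roll(frame0, shift=(2, -1), axis=(0, 1))     # synthetic motion of (2, -1)
kernel = frame0[20:28, 20:28]
print(best_match(kernel, frame1, top_left=(20, 20), search_radius=4))   # ((2, -1), 0)
```

The SAD over a kernel block is a long chain of independent byte-wise subtractions and additions, which is exactly the pattern SSE instructions vectorize well.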
118

Throughput Computing on Future GPUs

Hovland, Rune Johan January 2009 (has links)
The general-purpose computing capabilities of the Graphics Processing Unit (GPU) have recently been given a great deal of attention by the High-Performance Computing (HPC) community. By allowing massively parallel applications to run efficiently on commodity graphics cards, "personal supercomputers" are now available in desktop versions at a low price. For some applications, speedups of 70 times that of a single CPU implementation have been achieved. Among the most popular GPUs are those based on the NVIDIA Tesla architecture, which allows relatively easy development of GPU applications using the NVIDIA CUDA programming environment. While the GPU is gaining interest in the HPC community, others are more reluctant to embrace the GPU as a computational device. The focus on throughput and large data volumes separates Information Retrieval (IR) from HPC, since for IR it is critical to process large amounts of data efficiently, a task at which the GPU currently does not excel. Only recently has the IR community begun to explore the possibilities, and an implementation of a search engine for the GPU was published in April 2009. This thesis analyzes how GPUs can be improved to better suit large-data-volume applications. Current graphics cards have a bottleneck in the transfer of data between the host and the GPU. One approach to resolving this bottleneck is to include the host memory as part of the GPU's memory hierarchy. We develop a theoretical model, and based on this model, the expected performance improvement for high-data-volume applications is shown for both computationally bound and data-transfer-bound applications. The performance improvement for an existing search engine is also given based on the theoretical model. For this case, the improvements would result in a speedup of between 1.389 and 1.874 for the various query types supported by the search engine.
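
To illustrate the kind of reasoning such a model involves, here is a toy cost model under assumed PCIe bandwidth and GPU throughput figures; it is not the thesis's actual model, only a sketch of why low arithmetic intensity makes a workload transfer-bound.

```python
def batch_time(data_gb, flops_per_byte, pcie_gb_per_s=5.0, gpu_gflops=900.0):
    """Toy model: time per batch = PCIe transfer time + GPU compute time.
    The bandwidth and throughput numbers are assumed, not measured."""
    t_transfer = data_gb / pcie_gb_per_s
    t_compute = data_gb * flops_per_byte / gpu_gflops   # GB * flop/byte = GFLOP
    return t_transfer, t_compute

# Low arithmetic intensity (IR-like) versus high intensity (HPC-like): the share
# of time spent moving data over PCIe is what makes the former transfer-bound.
for flops_per_byte in (2, 2000):
    t_transfer, t_compute = batch_time(4, flops_per_byte)
    print(flops_per_byte, round(t_transfer / (t_transfer + t_compute), 2))
```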
119

Structured data extraction: separating content from noise on news websites

Arizaleta, Mikel January 2009 (has links)
In this thesis, we have treated the problem of separating content from noise on news websites. We have approached this problem by using TiMBL, a memory-based learning software package. We have studied the relevance of similarity in the training data and the effect of data size on the performance of the extractions.
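
As a loose sketch of memory-based (k-nearest-neighbour) classification of page blocks into content versus noise, in the spirit of TiMBL; the features and training rows below are illustrative guesses, not the thesis's feature set.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features per page block: text length, link density, vertical position.
X_train = np.array([
    [1200, 0.02, 0.40],   # long text, few links, mid-page   -> content
    [  90, 0.80, 0.05],   # short, link-heavy, top of page   -> noise (menu)
    [ 950, 0.05, 0.50],   # article body                     -> content
    [  60, 0.70, 0.95],   # footer                           -> noise
])
y_train = ["content", "noise", "content", "noise"]

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(model.predict([[800, 0.03, 0.60]]))       # -> ['content']
```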
120

Seismic Shot Processing on GPU

Johansen, Owe January 2009 (has links)
Today’s petroleum industry demands an ever increasing amount of computational resources. Seismic processing applications in use by these types of companies have generally been using large clusters of compute nodes, whose only computing resource has been the CPU. However, using Graphics Processing Units (GPUs) for general-purpose programming is these days becoming increasingly popular in the high-performance computing area. In 2007, the NVIDIA corporation launched their framework for developing GPU-utilizing computational algorithms, known as the Compute Unified Device Architecture (CUDA), and a wide variety of research areas have since adopted this framework for their algorithms. This thesis looks at the applicability of GPU techniques and CUDA for off-loading some of the computational workload in a seismic shot modeling application provided by StatoilHydro to modern GPUs. This work builds on our recent project that looked at providing checkpoint restart for this MPI-enabled shot modeling application. In this thesis, we demonstrate that the inherent data parallelism in the core finite-difference computations also makes our application well suited for GPU acceleration. Using CUDA, we show that we could efficiently port our application, and through further refinements achieve significant performance increases. Benchmarks done on two different systems in the NTNU IDI (Department of Computer and Information Science) HPC lab are included. One system is an Intel Core2 Quad Q9550 @2.83GHz with 4GB of RAM and an NVIDIA GeForce GTX280 and NVIDIA Tesla C1060 GPU. Our second testbed was an Intel Core i7 Extreme (965 @3.20GHz) with 12GB of RAM hosting an NVIDIA Tesla S1070 (4x NVIDIA Tesla C1060). On this hardware, speedups by factors of 8 to 14.79 compared to the original sequential code are achieved, confirming the potential of GPU computing in applications similar to the one used in this thesis.
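
To show why the core finite-difference computations are data parallel, here is a generic 2-D acoustic wave stencil in NumPy (not StatoilHydro's shot modeling code): every output cell depends only on a small neighbourhood, so all cells can be updated concurrently, which is exactly what a CUDA kernel exploits.

```python
import numpy as np

def wave_step(u_prev, u_curr, velocity, dt, dx):
    """One explicit time step of the 2-D acoustic wave equation using a
    five-point Laplacian; boundaries wrap (np.roll), acceptable for a toy."""
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr) / dx ** 2
    return 2.0 * u_curr - u_prev + (velocity * dt) ** 2 * lap

# Toy shot: point source in the middle of a homogeneous velocity model.
n, dt, dx, c = 256, 1e-3, 5.0, 2000.0          # c*dt/dx = 0.4, within the CFL limit
u_prev = np.zeros((n, n))
u_curr = np.zeros((n, n))
u_curr[n // 2, n // 2] = 1.0
for _ in range(200):
    u_prev, u_curr = u_curr, wave_step(u_prev, u_curr, c, dt, dx)
print(float(np.abs(u_curr).max()))
```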
