  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

MicroRNAs and Transcriptional Control

Skaland, Even January 2009 (has links)
<p>Background: MicroRNAs are small non-coding transcripts with regulatory roles in the genome. Cis natural antisense transcripts (cis-NATs) are transcripts that overlap a sense transcript at the same locus but on the opposite strand. Such antisense transcripts are thought to have regulatory roles, and the hypothesis is that miRNAs might bind to them and thereby activate the overlapping sense transcript. Aim of study: Two aims were identified for this project: (1) to investigate whether the non-coding transcripts of cis-NATs show significant enrichment for conserved miRNA seed sites, and (2) to correlate miRNA expression with expression of the sense side of targeted cis-NAT pairs. Results: Seed sites within such antisense transcripts were significantly enriched, suggesting that miRNAs might actually bind to them. There is a significant negative correlation between the expression of mir-28 and the expression of its targeted antisense transcripts, whereas the other miRNAs show no significant correlations. In addition, the 3’UTRs of the sense side of cis-NAT pairs are longer and more conserved than those of random transcripts. Conclusion: This work has strengthened the hypothesis that miRNAs might bind to such antisense transcripts.</p>
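The negative-correlation result can be illustrated with a minimal sketch. The expression values below are invented for illustration, not data from the thesis, and the Pearson coefficient stands in for whichever correlation statistic the author used:

```python
import numpy as np

# Hypothetical expression levels across six samples (not thesis data):
# mir-28 falls while its targeted antisense transcript rises.
mir28 = np.array([8.1, 7.4, 6.9, 6.2, 5.5, 4.8])
antisense = np.array([2.0, 2.6, 3.1, 3.9, 4.4, 5.2])

# Pearson correlation coefficient; a value near -1 reflects the kind of
# strong negative relationship the thesis reports for mir-28.
r = np.corrcoef(mir28, antisense)[0, 1]
```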
62

Practical use of Block-Matching in 3D Speckle Tracking

Nielsen, Karl Espen January 2009 (has links)
<p>In this thesis, optimizations for speckle tracking are integrated into an existing framework for real-time tracking of deformable subdivision surfaces, employed in the segmentation of the left ventricle (LV) in 3D echocardiography. The main purpose of the project was to optimize the efficiency of material point tracking, leading to a more robust estimation of the LV myocardial deformation field. Block-matching is the most time-consuming part of speckle tracking, and the corresponding algorithms used in this thesis are optimized using a Single Instruction Multiple Data (SIMD) model in order to achieve data-level parallelism. The SIMD model is implemented using Streaming SIMD Extensions (SSE) to improve the processing time of the sum of absolute differences, one possible metric for block matching. Furthermore, a study is conducted to optimize the parameters associated with speckle tracking with regard to both accuracy and computation time. This is tested using simulated 3D echocardiography data sets of infarcted ventricles. More specifically, the tests examine how the size of kernel blocks and search windows affects the accuracy and processing time of the tracking, and compare the performance of kernel blocks specified in Cartesian and beamspace coordinates. Finally, tracking accuracy is measured and compared across different regions (apical, mid-level and basal segments) of the LV.</p>
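As a sketch of the block-matching step, the plain sum-of-absolute-differences search below shows what the SSE-optimized kernel computes. The 2D layout, window size and names are illustrative assumptions, and NumPy's vectorized subtraction plays the role the SIMD instructions play in the thesis:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

def best_match(kernel, frame, top, left, search=4):
    """Exhaustive search of a (2*search+1)^2 window around (top, left) in
    `frame` for the offset minimising the SAD against `kernel`."""
    kh, kw = kernel.shape
    best_offset, best_score = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidate blocks that fall outside the frame.
            if y < 0 or x < 0 or y + kh > frame.shape[0] or x + kw > frame.shape[1]:
                continue
            score = sad(kernel, frame[y:y + kh, x:x + kw])
            if best_score is None or score < best_score:
                best_offset, best_score = (dy, dx), score
    return best_offset, best_score
```

The inner SAD is the hot spot the thesis vectorizes with SSE; everything around it is bookkeeping.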
63

Throughput Computing on Future GPUs

Hovland, Rune Johan January 2009 (has links)
<p>The general-purpose computing capabilities of the Graphics Processing Unit (GPU) have recently received a great deal of attention from the High-Performance Computing (HPC) community. By allowing massively parallel applications to run efficiently on commodity graphics cards, "personal supercomputers" are now available in desktop versions at a low price. For some applications, speedups of 70 times over a single-CPU implementation have been achieved. Among the most popular GPUs are those based on the NVIDIA Tesla architecture, which allows relatively easy development of GPU applications using the NVIDIA CUDA programming environment. While the GPU is gaining interest in the HPC community, others are more reluctant to embrace it as a computational device. The focus on throughput and large data volumes separates Information Retrieval (IR) from HPC, since for IR it is critical to process large amounts of data efficiently, a task at which the GPU currently does not excel. Only recently has the IR community begun to explore the possibilities, and an implementation of a search engine for the GPU was published in April 2009. This thesis analyzes how GPUs can be improved to better suit large-data-volume applications. Current graphics cards have a bottleneck in the transfer of data between the host and the GPU. One approach to resolving this bottleneck is to include the host memory as part of the GPU's memory hierarchy. We develop a theoretical model, and based on this model the expected performance improvement for high-data-volume applications is shown for both computationally bound and data-transfer-bound applications. The performance improvement for an existing search engine is also derived from the theoretical model. For this case, the improvements would result in a speedup of between 1.389 and 1.874 for the various query types supported by the search engine.</p>
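A toy version of such a model (not the thesis's actual model) makes the two bound types concrete: if the host-to-GPU transfer can proceed concurrently with computation, total time drops from the sum of the two phases to the slower of them. All numbers and the full-overlap assumption are illustrative:

```python
def overlap_speedup(data_gb, bandwidth_gbs, compute_s):
    """Speedup from overlapping the host-to-GPU transfer with computation,
    relative to copying everything first and then computing.
    A hypothetical model for illustration, not the thesis's model."""
    transfer_s = data_gb / bandwidth_gbs
    baseline = transfer_s + compute_s          # copy, then compute
    overlapped = max(transfer_s, compute_s)    # phases proceed concurrently
    return baseline / overlapped

# A transfer-bound case: 8 GB over a 4 GB/s link, with 1 s of compute.
s = overlap_speedup(8.0, 4.0, 1.0)
```

In this toy model the speedup approaches 2 only when the two phases are balanced, and falls toward 1 when either phase dominates, which is why transfer-bound and compute-bound applications are analyzed separately.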
64

Seismic Shot Processing on GPU

Johansen, Owe January 2009 (has links)
<p>Today’s petroleum industry demands an ever increasing amount of computational resources. Seismic processing applications used by these companies have generally run on large clusters of compute nodes whose only computing resource has been the CPU. However, using Graphics Processing Units (GPU) for general-purpose programming is becoming increasingly popular in the high performance computing area. In 2007, the NVIDIA corporation launched its framework for developing GPU-based computational algorithms, known as the Compute Unified Device Architecture (CUDA), and a wide variety of research areas have since adopted this framework for their algorithms. This thesis looks at the applicability of GPU techniques and CUDA for off-loading some of the computational workload in a seismic shot modeling application, provided by StatoilHydro, to modern GPUs. This work builds on our recent project that provided checkpoint restart for this MPI-enabled shot modeling application. In this thesis, we demonstrate that the inherent data parallelism in the core finite-difference computations also makes our application well suited for GPU acceleration. Using CUDA, we show that we could port our application efficiently, and through further refinements achieve significant performance increases. Benchmarks done on two different systems in the NTNU IDI (Department of Computer and Information Science) HPC-lab are included. One system is an Intel Core2 Quad Q9550 @2.83GHz with 4GB of RAM and an NVIDIA GeForce GTX280 and NVIDIA Tesla C1060 GPU. Our second testbed was an Intel Core i7 Extreme (965 @3.20GHz) with 12GB of RAM hosting an NVIDIA Tesla S1070 (4x NVIDIA Tesla C1060). On this hardware, speedups up to a factor of 8-14.79 compared to the original sequential code are achieved, confirming the potential of GPU computing in applications similar to the one used in this thesis.</p>
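The data parallelism in the finite-difference core can be sketched with a generic 7-point stencil: each output cell depends only on its six neighbours from the previous time step, so every cell can be updated independently, exactly the pattern CUDA maps one thread per cell onto. The coefficients are placeholders, not those of the StatoilHydro application:

```python
import numpy as np

def stencil_step(p, c0=-6.0, c1=1.0):
    """One 7-point finite-difference step on a 3D grid (interior cells only).
    Each output cell is a weighted sum of itself and its six axis neighbours;
    no cell depends on another cell's *output*, so all updates are independent."""
    q = np.zeros_like(p)
    q[1:-1, 1:-1, 1:-1] = (
        c0 * p[1:-1, 1:-1, 1:-1]
        + c1 * (p[:-2, 1:-1, 1:-1] + p[2:, 1:-1, 1:-1]
                + p[1:-1, :-2, 1:-1] + p[1:-1, 2:, 1:-1]
                + p[1:-1, 1:-1, :-2] + p[1:-1, 1:-1, 2:])
    )
    return q
```

NumPy's slicing expresses the whole-grid update in one shot, much as a CUDA kernel would with one thread per interior cell.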
65

Seismic Data Compression and GPU Memory Latency

Haugen, Daniel January 2009 (has links)
<p>The gap between processing performance and memory bandwidth is still increasing. To compensate for this gap, various techniques have been used, such as a memory hierarchy with faster memory closer to the processing unit. Other techniques that have been tested include the compression of data prior to a memory transfer. Bandwidth limitations exist not only at low levels within the memory hierarchy, but also between the central processing unit (CPU) and the graphics processing unit (GPU), suggesting the use of compression to mask the gap. Seismic datasets are often very large, e.g. several terabytes. This thesis explores compression of seismic data to hide the bandwidth limitation between the CPU and the GPU for seismic applications. The compression method considered is subband coding, with both run-length encoding (RLE) and Huffman encoding as compressors of the quantized data. These methods have been shown, in CPU implementations, to give very good compression ratios for seismic data. A proof-of-concept implementation for decompression of seismic data on GPUs is developed. It consists of three main components: first, the subband synthesis filter, reconstructing the input data processed by the subband analysis filter; second, the inverse quantizer, generating an output close to the input given to the quantizer; finally, the decoders, decompressing the compressed data using Huffman and RLE. The results of our implementation show that the seismic data compression algorithm investigated is probably not suited to hiding the bandwidth limitation between CPU and GPU, because the steps taken to do the decompression are likely slower than a simple memory copy of the uncompressed seismic data. It is primarily the decompressors that are the limiting factor, but in our implementation the subband synthesis is also limiting.
The sequential nature of the decompression algorithms used makes them difficult to parallelize in a way that uses the processing units of the GPU efficiently. Several suggestions for future work are given, along with results showing how our GPU implementation can be very useful for compressing data to be sent over a network. Our compression results give a compression factor between 27 and 32, and an SNR of 24.67dB for a cube of dimension 64^3. A speedup of 2.5 for the synthesis filter compared to the CPU implementation is achieved (2029.00/813.76 ≈ 2.5). Although not currently suited for GPU-CPU compression, our implementations indicate</p>
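The sequential dependence is easy to see in run-length decoding: each run's output offset is the prefix sum of all earlier run lengths, so a naive decoder must walk the runs in order. A minimal sketch (generic RLE, not the thesis's exact bitstream format):

```python
def rle_decode(runs):
    """Decode a list of (value, run_length) pairs sequentially.
    Each run's output offset depends on every earlier run length, which is
    what makes naive RLE decoding hard to parallelize; a parallel variant
    would first compute a prefix sum of the lengths to find the offsets."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out
```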
66

Simulation of Fluid Flow Through Porous Rocks on Modern GPUs

Aksnes, Eirik Ola January 2009 (has links)
<p>It is important for the petroleum industry to investigate how fluids flow inside the complicated geometries of porous rocks, in order to improve oil production. The lattice Boltzmann method can be used to calculate a porous rock's ability to transport fluids (permeability). However, this method is computationally intensive and hence calls for High Performance Computing (HPC). Modern GPUs are becoming interesting and important platforms for HPC. In this thesis, we show how to implement the lattice Boltzmann method on modern GPUs using the NVIDIA CUDA programming environment. Our work is done in collaboration with Numerical Rocks AS and the Department of Petroleum Engineering at the Norwegian University of Science and Technology. To better evaluate our GPU implementation, a sequential CPU implementation is first prepared. We then develop our GPU implementation and test both implementations using three porous data sets with known permeabilities provided by Numerical Rocks AS. Our simulations of fluid flow achieve high performance on modern GPUs, showing that it is possible to calculate the permeability of porous rocks at simulation sizes up to 368^3, which fits into the 4 GB memory of the NVIDIA Quadro FX 5800 card. The performance of the CPU and GPU implementations is measured in MLUPS (million lattice node updates per second). Both implementations achieve their highest performance using single floating-point precision, with maximum performances of 1.59 MLUPS and 184.30 MLUPS respectively. Techniques for reducing round-off errors are also discussed and implemented.</p>
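The MLUPS metric used to compare the two implementations is simply the number of lattice-node updates divided by runtime. A sketch (the grid size and timing below are made up, not the thesis's measurements):

```python
def mlups(nx, ny, nz, iterations, seconds):
    """Million lattice-node updates per second: every node of an
    nx*ny*nz grid is updated once per lattice Boltzmann iteration."""
    return nx * ny * nz * iterations / seconds / 1e6

# e.g. a 100^3 grid, 100 iterations, 1 s of wall time
rate = mlups(100, 100, 100, 100, 1.0)
```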
67

Tetrahedral mesh for needle insertion

Syvertsen, Rolf Anders January 2007 (has links)
This is a Master’s thesis on how to make a tetrahedral mesh for use in a needle insertion simulator. It also describes how the simulator can be built, and how to improve it to make it as realistic as possible. The medical simulator uses a haptic device, a haptic scene graph and a FEM for realistic soft tissue deformation and interaction. In this project a tetrahedral mesh is created from a polygon model, and the mesh is then loaded into the HaptX haptic scene graph. The objects in the mesh are made as separate haptic objects and given a simple haptic surface so that they can be touched. No code has been implemented for the Hybrid Condensed FEM that is described.
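One basic validity check when building a tetrahedral mesh from a polygon model is that every element has non-zero signed volume (and positive volume under consistent winding); degenerate or inverted tetrahedra break FEM assembly. A small sketch of that check, independent of the thesis's actual meshing tool:

```python
def tet_volume(a, b, c, d):
    """Signed volume of the tetrahedron (a, b, c, d), each vertex an
    (x, y, z) triple: one sixth of the scalar triple product of the
    edge vectors from d. Zero means degenerate; the sign encodes winding."""
    u = [a[i] - d[i] for i in range(3)]
    v = [b[i] - d[i] for i in range(3)]
    w = [c[i] - d[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return det / 6.0
```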
68

Seismic processing using Parallel 3D FMM

Borlaug, Idar January 2007 (has links)
This thesis develops and tests a 3D Fast Marching Method (FMM) algorithm and applies it to seismic simulations. The FMM, originally developed by Sethian, is a general method for monotonically advancing fronts; it calculates the first arrival time for an advancing front or wave. FMM methods are used for a variety of applications, including fatigue cracks in materials, lymph node segmentation in CT images, computing skeletons and centerlines in 3D objects, and finding salt formations in seismic data. Finding salt formations in seismic data is important for the oil industry, as oil often flows towards gaps in the soil below a salt formation. It is therefore important to map the edges of the salt formation, and for this the FMM can be used: it creates a first-arrival-time map, which makes it easier to see the edges of the salt formation. Herrmann developed a 3D parallel FMM algorithm tested on waves of constant velocity. We implemented and tested his algorithm, but since seismic data typically exhibit a large variation in velocities, optimizations were needed to make the algorithm scale. By optimizing the border exchange and eliminating many of the rollbacks, we developed and implemented a much improved 3D FMM which achieved close to theoretical performance for up to at least 256 nodes on the current supercomputer at NTNU. Other methods, such as different domain decompositions for better load balancing and running more FMM picks simultaneously, are also discussed.
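The front propagation at the heart of the FMM can be sketched via its narrow-band data structure, a min-heap of tentative arrival times. The version below is a Dijkstra-style simplification on a 2D grid (edge costs of 1/speed) rather than the true upwind eikonal update the FMM uses, and all names are illustrative:

```python
import heapq

def first_arrival(speed, src):
    """Propagate a front from grid cell `src` over a 2D speed map,
    returning first arrival times. Dijkstra-style sketch of the FMM's
    narrow-band loop: always expand the cell with the smallest
    tentative time, so accepted times are final (monotone front)."""
    rows, cols = len(speed), len(speed[0])
    inf = float("inf")
    t = [[inf] * cols for _ in range(rows)]
    t[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > t[r][c]:
            continue  # stale heap entry, already improved
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 / speed[nr][nc]
                if nd < t[nr][nc]:
                    t[nr][nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return t
```

In the parallel setting the thesis describes, each node owns a subdomain and must exchange border times; a neighbour's update can invalidate already-accepted cells, which is what forces the rollbacks the authors worked to eliminate.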
69

Real-Time Simulation and Visualization of Large Sea Surfaces

Løset, Tarjei Kvamme January 2007 (has links)
The open ocean is the setting for enterprises that require extensive monitoring, planning and training. In the offshore industry, virtual environments have been embraced to improve such processes. The presented work focuses on real-time simulation and visualization of open seas: very large water surfaces dominated by wind-driven waves, but also influenced by watercraft activity and offshore installations. The implemented system treats sea surfaces as periodic elevation fields, obtained by synthesis from statistically sampled frequency spectra. Apparent repeating structures across a surface, due to this periodic nature, are avoided by decomposing the elevation field synthesis, using two or more discrete spectra with different frequency scales. A GPU-based water solver is also included. Its implementation features a convenient input interface which exploits hardware rasterization both for efficiency and to supply the algorithm with arbitrary data, e.g. smooth, connected deflective paths. Finally, polygonal representations of visible ocean regions are obtained using a GPU-accelerated tessellation scheme suitable for wave fields. The result is realistic, unbounded ocean surfaces with natural distributions of wind-driven waves, avoiding the artificial periodicity associated with previous similar techniques. Further, the simulation allows for superposed boat wakes and surface obstacles in regions of interest. With the proposed tessellation scheme, the visualization is economical with regard to data transfer, conforming to the goal of delivering highly interactive rendering rates.
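The decomposition idea (superposing spectra at different frequency scales so the tile period stops being visible) can be sketched in one dimension. The wavelengths and amplitudes are arbitrary illustration values; a real implementation would synthesize each field from a sampled spectrum with an inverse FFT:

```python
import numpy as np

def elevation(x, t=0.0):
    """Superpose two periodic wave fields whose periods (64.0 and 7.3
    units) are incommensurate, so the combined surface repeats far less
    often than either field alone. All constants are illustrative."""
    coarse = 0.8 * np.sin(2.0 * np.pi * x / 64.0 + 0.5 * t)
    fine = 0.2 * np.sin(2.0 * np.pi * x / 7.3 + 1.7 * t)
    return coarse + fine

x = np.arange(0.0, 64.0, 0.5)
e1 = elevation(x)
e2 = elevation(x + 64.0)  # one coarse period later: no longer identical
```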
70

Implementing LOD for physically-based real-time fire rendering

Tangvald, Lars January 2007 (has links)
In this paper, I present a framework for implementing level of detail (LOD) for a 3D physically based fire rendering running on the GPU. While realistic fire rendering that runs in real time exists, it is generally not used in real-time applications such as games, due to the high cost of running such a rendering. Most research into the rendering of fire is concerned only with the fire itself, and not with how it can best be included in larger scenes with a multitude of other complex objects. I present methods for increasing the efficiency of a physically based fire rendering without harming its visual quality, by dynamically adjusting the detail level of the fire according to its importance for the current view. I adapt and use methods created both for LOD and for other areas to alter the detail level of the visualization and simulation of the fire. The desired detail level is calculated by evaluating conditions such as visibility and distance from the viewpoint, and is then used to adjust the detail level of the visualization and simulation of the fire. The implementation of the framework could not be completed in time, but a number of tests were run to determine the effect of the different methods used. These results indicate that by making adjustments to the simulation and visualization of the fire, large performance gains can be achieved without significantly harming the visual quality of the fire rendering.
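The selection step can be sketched as a small scoring function. The thresholds, the falloff constant and the discrete levels are hypothetical stand-ins for the view conditions the paper evaluates:

```python
def lod_level(distance, visible, max_level=4, falloff=20.0):
    """Map view conditions to a discrete detail level for the fire:
    an invisible fire gets level 0 (cheapest simulation, no rendering),
    and detail falls off with distance from the viewpoint. All constants
    here are illustrative, not taken from the paper."""
    if not visible:
        return 0
    level = max_level - int(distance / falloff)
    return max(1, min(max_level, level))
```

The returned level would then select, e.g., the simulation grid resolution and the number of rendered slices for the fire volume.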
