41

Multimodal Behaviour Generation Frameworks in Virtual Heritage Applications : A Virtual Museum at Sverresborg

Stokes, Michael James January 2009 (has links)
This master's thesis proposes that multimodal behaviour generation frameworks are an appropriate way to increase the believability of animated characters in virtual heritage applications. To investigate this proposal, an existing virtual museum guide application developed by the author is extended by integrating the Behaviour Markup Language (BML) and the open-source BML realiser SmartBody. The architectural and implementation decisions involved in this process are catalogued and discussed. The integration of BML and SmartBody results in a dramatic improvement in the quality of character animation in the application, as well as greater flexibility and extensibility, including the ability to create scripted sequences of behaviour for multiple characters in the virtual museum. The successful integration confirms that multimodal behaviour generation frameworks have a place in virtual heritage applications.
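As a point of reference, the sketch below composes a small BML block asking a guide character to speak, gesture and nod. The element names follow published BML examples, but exact attributes and the delivery mechanism vary by realiser, and the `send_to_realiser` helper is a hypothetical placeholder, not the thesis's integration code.

```python
# Illustrative only: a minimal BML block for a museum guide character.
# Element names follow published BML examples; attributes and transport are
# realiser-specific, and send_to_realiser is a hypothetical stand-in.

def send_to_realiser(bml_xml: str) -> None:
    """Hypothetical stand-in for the channel that delivers BML to SmartBody."""
    print(bml_xml)

guide_greeting = """\
<bml id="bml1">
  <speech id="s1"><text>Welcome to the Sverresborg virtual museum.</text></speech>
  <gesture id="g1" lexeme="BEAT" start="s1:start"/>
  <head id="h1" lexeme="NOD" start="s1:end"/>
</bml>
"""

send_to_realiser(guide_greeting)
```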
42

Modeling Communication on Multi-GPU Systems

Spampinato, Daniele January 2009 (has links)
Coupling commodity CPUs with modern GPUs yields heterogeneous systems that are cheap and deliver high performance with impressive FLOPS counts. The recent evolution of GPGPU models and technologies makes these systems even more appealing as compute devices for a range of HPC applications, including image processing, seismic processing and other physical modeling, as well as linear programming applications. In fact, graphics vendors such as NVIDIA and AMD are now targeting HPC with some of their products. Due to the power and frequency walls, the trend is now to use multiple GPUs on a given system, much like one finds multiple cores on CPU-based systems. However, deepening the resource hierarchy widens the spectrum of factors that may impact the performance of the system. The lack of good models for GPU-based, heterogeneous systems also makes it harder to understand which factors impact performance the most. The goal of this thesis is to analyze such factors by investigating and benchmarking NVIDIA's multi-GPU solution, the NVIDIA Tesla S1070 Computing System. This system combines four T10 GPUs, making up to 4 TFLOPS of computational power available. Based on a comparative study of fundamental parallel computing models and on the specific heterogeneous features exposed by the system, we define a test space for performance analysis. As a case study, we develop a red-black SOR PDE solver for Laplace equations with Dirichlet boundaries, well known for requiring constant communication to exchange neighboring data. To aid both design and analysis, we propose a model for multi-GPU systems targeting communication between the several GPUs. The main variables exposed by the benchmark application are: domain size and shape, kind of data partitioning, number of GPUs, width of the borders to exchange, kernels to use, and kind of synchronization between the GPU contexts. Among other results, the framework is able to point out the most critical bounds of the S1070 system when dealing with applications like the one in our case study. We show that the multi-GPU system greatly benefits from using all its four GPUs on very large data volumes. Our results show that four GPUs are almost four times faster than a single GPU, and twice as fast as two. Our analysis outcomes also allow us to refine our static communication model, enriching it with regression-based predictions.
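For readers unfamiliar with the case study, the following is a minimal single-device NumPy sketch of a red-black SOR sweep for Laplace's equation with Dirichlet boundaries. In the multi-GPU setting each device would hold one partition of the grid and exchange its border rows after every sweep; the partitioning and exchange details are assumptions here, not the thesis implementation.

```python
import numpy as np

def red_black_sor_sweep(u: np.ndarray, omega: float = 1.8) -> None:
    """One red-black SOR sweep for Laplace's equation on a 2D grid.
    Dirichlet boundary values sit in the outermost rows/columns and are
    never updated. The grid is updated in place."""
    for parity in (0, 1):                          # one colour at a time
        for i in range(1, u.shape[0] - 1):
            # first interior column j with (i + j) % 2 == parity
            start = 1 if (i + parity) % 2 == 1 else 2
            j = np.arange(start, u.shape[1] - 1, 2)
            gauss_seidel = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                   u[i, j - 1] + u[i, j + 1])
            u[i, j] += omega * (gauss_seidel - u[i, j])

# Toy domain: zero interior with a hot top edge as the Dirichlet boundary.
grid = np.zeros((64, 64))
grid[0, :] = 100.0
for _ in range(500):
    red_black_sor_sweep(grid)
```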
43

FPGA realization of a public key block cipher

Fjellskaalnes, Stig January 2009 (has links)
This report will cover the physical realization of a public key algorithm based on multivariate quadratic quasigroups. The intention is that this implementation will use real keys and data. Efforts are also taken to reduce area cost as much as possible. The solution will be described and analyzed, which will show whether the measures were successful or not.
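For context on the underlying primitive: a quasigroup is an operation table forming a Latin square, so every "multiplication" can be undone. The toy sketch below illustrates the quasigroup string transformation that the MQQ scheme generalises to multivariate quadratic form; it is an illustration of the concept only, not the cipher realised in the thesis.

```python
# Toy quasigroup of order 4 given as a Latin square: every symbol occurs
# exactly once per row and column, so Q[a][b] = c can always be inverted for b.
Q = [
    [2, 1, 0, 3],
    [3, 0, 1, 2],
    [1, 2, 3, 0],
    [0, 3, 2, 1],
]

def e_transform(leader: int, message: list[int]) -> list[int]:
    """Quasigroup string e-transformation: each output symbol feeds the next step."""
    out, prev = [], leader
    for m in message:
        prev = Q[prev][m]
        out.append(prev)
    return out

def d_transform(leader: int, cipher: list[int]) -> list[int]:
    """Inverse transformation: recover m from prev and Q[prev][m] by row lookup."""
    out, prev = [], leader
    for c in cipher:
        m = Q[prev].index(c)     # works because each row is a permutation
        out.append(m)
        prev = c
    return out

msg = [0, 3, 1, 2, 2, 0]
assert d_transform(1, e_transform(1, msg)) == msg
```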
44

Parallel Techniques for Estimation and Correction of Aberration in Medical Ultrasound Imaging

Herikstad, Åsmund January 2009 (has links)
Medical ultrasound imaging is a great diagnostic tool for physicians because of its noninvasive nature. It is performed by directing ultrasonic sound into tissue and visualizing the echo signal. Aberration in the reflected signal is caused by inhomogeneous tissue varying the speed of sound, which results in a blurring of the image. Dr. Måsøy and Dr. Varslot at NTNU have developed an algorithm for estimating and correcting ultrasound aberration. This algorithm adaptively estimates the aberration and adjusts the next transmitted signal to account for it, resulting in a clearer image. This master's thesis focuses on developing a parallelized version of this algorithm. Since NVIDIA CUDA (Compute Unified Device Architecture) is an architecture oriented towards general-purpose computations on the GPU (Graphics Processing Unit), it also examines how suitable the parallelization is for modern GPUs. The goal is to use the GPU to off-load the CPU, with the aim of achieving real-time calculation of the correction filter. The creation of the ultrasound image is examined, including how the aberrations come into being. Next, we look at how the algorithm can be implemented efficiently on the GPU, using NVIDIA's FFT (fast Fourier transform) library as well as several computational kernels developed to run on the GPU. Our findings show that the algorithm is highly parallelizable and achieves a speedup of over 5x when implemented on the GPU. This is, however, not fast enough for real-time correction, but taking into account suggestions for overcoming the limitations encountered, the study shows great promise for future work.
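The sketch below shows, in heavily simplified form, one classical way to estimate per-channel arrival-time fluctuations by FFT-based cross-correlation against a reference beam, with the next transmit event pre-delayed by the negative of the estimate. The FFT-heavy inner loop is why an FFT library matters for a GPU port; the estimation filter developed by Måsøy and Varslot is more sophisticated than this illustration.

```python
import numpy as np

def channel_delays(rf: np.ndarray, fs: float) -> np.ndarray:
    """Estimate per-channel arrival-time fluctuations (seconds) by FFT-based
    cross-correlation against the beamformed reference signal.
    rf has shape (channels, samples). Heavily simplified illustration."""
    reference = rf.mean(axis=0)
    n = rf.shape[1]
    ref_f = np.fft.rfft(reference, 2 * n)          # zero-pad to avoid wraparound
    delays = np.empty(rf.shape[0])
    for ch in range(rf.shape[0]):
        xcorr = np.fft.irfft(np.fft.rfft(rf[ch], 2 * n) * np.conj(ref_f))
        lag = np.argmax(np.roll(xcorr, n)) - n     # centre the lag axis
        delays[ch] = lag / fs
    return delays

# The corrected transmit event pre-delays each element by the negative of its
# estimated fluctuation so the wavefront realigns inside the body.
rf_frame = np.random.randn(64, 2048)               # placeholder RF data
tx_delays = -channel_delays(rf_frame, fs=40e6)
```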
45

Linear Programming on the Cell/BE

Eldhuset, Åsmund January 2009 (has links)
Linear programming is a form of mathematical optimisation in which one seeks to optimise a linear function subject to linear constraints on the variables. It is a very versatile tool that has many important applications, one of them being modelling of production and trade in the petroleum industry. The Cell Broadband Engine, developed by IBM, Sony and Toshiba, is an innovative multicore architecture that has already been proven to have great potential for high-performance computing. However, developing applications for the Cell/BE is challenging, particularly due to the low-level memory management that is mandated by the architecture, and because careful optimisation by hand is often required to get the most out of the hardware. In this thesis, we investigate the opportunities for implementing a parallel solver for sparse linear programs on the Cell/BE. A parallel version of the standard simplex method is developed, and the ASYNPLEX algorithm by Hall and McKinnon is partially implemented on the Cell/BE. We have met substantial challenges when it comes to numerical stability, and this has prevented us from spending sufficient time on Cell/BE-specific optimisation and support for large data sets. Our implementations can therefore only be regarded as proofs of concept, but we provide analyses and discussions of several aspects of the implementations, which may guide future work on this topic.
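As a serial reference point for what the thesis parallelises, a dense tableau version of the standard simplex method (maximising c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0) fits in a few lines; it exposes the pricing and pivoting steps that parallel variants such as ASYNPLEX distribute across processors. This textbook sketch ignores degeneracy handling and sparsity and is not the Cell/BE code.

```python
import numpy as np

def simplex_max(c, A, b):
    """Maximise c @ x s.t. A @ x <= b, x >= 0, assuming b >= 0 so the slack
    variables give an immediate feasible basis. Dense textbook tableau."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))               # [A | I | b] over [-c | 0 | 0]
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = -np.asarray(c, dtype=float)
    basis = list(range(n, n + m))
    while True:
        col = int(np.argmin(T[-1, :-1]))           # pricing: most negative reduced cost
        if T[-1, col] >= -1e-9:
            break                                  # optimal
        ratios = np.full(m, np.inf)                # ratio test
        pos = T[:m, col] > 1e-9
        ratios[pos] = T[:m, -1][pos] / T[:m, col][pos]
        row = int(np.argmin(ratios))
        if not np.isfinite(ratios[row]):
            raise ValueError("LP is unbounded")
        T[row] /= T[row, col]                      # pivot
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row] = col
    x = np.zeros(n)
    for r, var in enumerate(basis):
        if var < n:
            x[var] = T[r, -1]
    return x, T[-1, -1]

# maximise 3x + 5y s.t. x <= 4, 2y <= 12, 3x + 2y <= 18  ->  x=2, y=6, objective 36
x_opt, obj = simplex_max([3, 5], np.array([[1.0, 0], [0, 2], [3, 2]]), [4, 12, 18])
```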
46

Compression in XML search engines

Natvig, Ola January 2010 (has links)
The structure of XML documents can be used by search engines to answer structured queries or to provide better relevancy. Several index structures exist for search in XML data. This study focuses on inverted lists with dictionary-coded path types and dewey-coded path instances. The dewey-coded path index is large, but can be compressed. This study examines query processing with indexes encoded using the well-known integer coding methods VByte and PFor(delta), as well as methods tailored for the dewey index. Intersection queries and structural queries are evaluated. In addition to standard document-level skipping, skip operations for path types are implemented and evaluated. Four extensions over plain PFor methods are proposed and tested. Path type sorting sorts dewey codes on their path types and stores all deweys from one path type together. Column-wise dewey storage stores the deweys in columns instead of rows. Prefix coding, a well-known method, is adapted to the column-wise dewey storage, and a dynamic column-wise method chooses between row-wise and column-wise storage based on the compressed data. Experiments are performed on an XML collection based on Wikipedia. Queries are generated from the TREC 06 efficiency task query trace. Several different types of structural queries have been executed. Experiments show that column-wise methods perform well on both intersection and structural queries. The dynamic column-wise scheme is in most cases the best, and creates the smallest index. Special-purpose skipping for path types makes some queries extremely fast and can be implemented with only a limited storage footprint. The performance of in-memory search with multi-threaded query execution is limited by memory bandwidth.
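VByte, one of the baseline codes benchmarked here, stores an integer seven bits at a time and uses the eighth bit as a terminator flag, which makes decoding simple but puts an 8-bit floor under every value, unlike block codes such as PFor(delta). A minimal sketch, assuming the common convention where the high bit marks the last byte of each value:

```python
def vbyte_encode(numbers):
    """Encode non-negative integers, 7 bits per byte; the high bit marks the
    final byte of each value (one common VByte convention)."""
    out = bytearray()
    for n in numbers:
        while n >= 128:
            out.append(n & 0x7F)
            n >>= 7
        out.append(n | 0x80)
    return bytes(out)

def vbyte_decode(data):
    values, n, shift = [], 0, 0
    for byte in data:
        if byte & 0x80:                            # final byte of this value
            values.append(n | ((byte & 0x7F) << shift))
            n, shift = 0, 0
        else:
            n |= byte << shift
            shift += 7
    return values

gaps = [5, 1, 1, 300, 2, 70000]                    # e.g. d-gaps of a posting list
assert vbyte_decode(vbyte_encode(gaps)) == gaps
```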
47

The Educational Game Editor : The Design of a Program for Making Educational Computer Games

Tollefsrud, John Ola January 2006 (has links)
This report is about computer game based learning, how to make a program for making educational games, the possibilities of using a hypermedia structure for storage of the data in an educational game, and different learning theories related to computer game based learning. The first part covers the learning theories behaviourism, cognitivism, constructivism, socio-constructivism, and situated learning. The different theories are related to learning games, and a classification of game based learning is also given. Hypermedia is a smart and efficient way of organizing data, and is a relevant solution for use in education and games. The relationship between data, information and wisdom is central, and how the hypermedia base is constructed and which information structures it supports are described. The advantages and limitations of the use of hypermedia in education are discussed, and examples of use, as in OPSYS and the Mobile instruction system, are given. There exist some computer games for use in higher education, and some of them are described. To make a good educational game, certain requirements have to be fulfilled, covering both game design aspects and learning aspects. The main part of the report is about the Educational Game Editor. The idea is to design a program for making computer games for use in education. Before the design, the Software Requirements Specification is presented, containing functional and quality requirements, and scenarios to exemplify the requirements. The conceptual design of the program gives an overall description and describes the phases of creating a game and the elements the game consists of: file management, object management, Library, and Tools. The main architectural drivers are usability and availability: the program must be easy to use, be stable, and not crash. An example of making a simple game about the history of Trondheim explains how to use the program step by step, and gives a good guide for users to make their own games.
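The conceptual design names the elements a game in the editor consists of: file management, object management, a Library, and Tools. The sketch below shows one hypothetical way such an object model and its file management could be represented; every class and field name here is an illustrative assumption, not the design from the report.

```python
# Hypothetical object model for games built in the editor (illustrative only).
from dataclasses import dataclass, field

@dataclass
class GameObject:
    name: str
    image: str                       # path to a Library asset
    question: str = ""               # optional quiz text attached to the object
    answer: str = ""

@dataclass
class Scene:
    title: str
    objects: list[GameObject] = field(default_factory=list)

@dataclass
class EducationalGame:
    title: str
    scenes: list[Scene] = field(default_factory=list)

    def save(self, path: str) -> None:
        """File management: persist the game definition so the editor can reopen it."""
        import dataclasses, json
        with open(path, "w", encoding="utf-8") as fh:
            json.dump(dataclasses.asdict(self), fh, indent=2)

game = EducationalGame("The History of Trondheim")
game.scenes.append(Scene("Nidaros Cathedral",
                         [GameObject("cathedral", "library/cathedral.png",
                                     "When was the cathedral founded?", "1070")]))
game.save("trondheim_game.json")
```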
48

Flexible Discovery of Modules with Distance Constraints

Lekang, Øystein January 2006 (has links)
Many authors argue that finding single transcription factor binding sites is not enough to be able to make predictions with regard to regulation in eukaryotic genes, as is the case with simpler prokaryotes. With eukaryotes, combinations of transcription factors must be modeled as a composite motif or module, in some cases even with a restriction on the distance between individual sites, or within the module. The goal of this work is to create a module discovery tool capable of using both deterministic patterns and position weight matrices as input, and of imposing restrictions on distance, and then to use the tool for module discovery and evaluate the results.
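A minimal sketch of the kind of search such a tool performs: score every window of a sequence against a position weight matrix, match a deterministic pattern, and report pairs of hits whose spacing satisfies a distance constraint as candidate modules. The matrix values, thresholds and pairing rule below are illustrative assumptions, not those of the thesis.

```python
# PWM scanning plus a distance-constrained pairing of hits (toy example).
PWM_A = {  # made-up log-odds-like scores per position for a 4-long site
    'A': [1.2, -0.5, 0.8, -1.0],
    'C': [-0.7, 1.1, -0.9, 0.3],
    'G': [-0.3, -0.8, 1.0, 1.4],
    'T': [0.1, 0.2, -1.1, -0.6],
}
PATTERN_B = "TATA"                   # a deterministic pattern used alongside the PWM

def pwm_hits(seq, pwm, threshold):
    width = len(next(iter(pwm.values())))
    return [i for i in range(len(seq) - width + 1)
            if sum(pwm[seq[i + k]][k] for k in range(width)) >= threshold]

def pattern_hits(seq, pattern):
    return [i for i in range(len(seq) - len(pattern) + 1)
            if seq[i:i + len(pattern)] == pattern]

def modules(seq, min_gap=5, max_gap=30):
    """Pairs of (PWM hit, pattern hit) whose start-to-start distance lies
    within [min_gap, max_gap] -- a toy stand-in for a module definition."""
    a_sites = pwm_hits(seq, PWM_A, threshold=2.5)
    b_sites = pattern_hits(seq, PATTERN_B)
    return [(a, b) for a in a_sites for b in b_sites if min_gap <= b - a <= max_gap]

print(modules("ACGGTTACATATAGGACGGCCTATAACGT"))
```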
49

Analysis of fibre cross sections : Developing methods for image processing and visualisation utilising the GPU

Bergquist, Jørgen, Titlestad, Helge January 2006 (has links)
Modern graphics processing units, GPUs, have evolved into high-performance processors with programmable vertex and pixel shaders. With these new abilities a new subfield of research, dubbed GPGPU for General Purpose computing on the GPU, has emerged in areas such as oil exploration, processing of sound effects, neural networks, cryptography and image processing. As the GPUs' functionality and performance keep increasing, more programmers are drawn to their computational power. To understand the performance of paper materials, a detailed characterisation of the fibre cross-sections is necessary. Using scanning electron microscopy, SEM, fibres embedded in epoxy are depicted. These images have to be analysed and quantified. In this master's thesis we explore the possibility of taking advantage of today's generation of GPUs when analysing digital images of fibre cross-sections. We implemented common algorithms such as the median filter, the SUSAN smoothing filter and various mathematical morphology operations using the high-level shading language OpenGL Shading Language, GLSL. When measured against equivalent image processing operations run on the CPU, we found our GPU solution to perform about the same: the operations themselves run much faster on the GPU, but due to the overhead of binding FBOs, initialising shader programs and transferring data between the CPU and the GPU, the end result is about the same for the GPU and CPU implementations. We have deliberately worked with commodity hardware to see what one can gain by just replacing the graphics card in the engineers' PCs. With newer hardware the results would tilt heavily towards the GPU implementations. We have concluded that making a paper fibre cross-section analysis program based on GPU image processing with commodity hardware is indeed feasible, and would give some benefits to user interactivity. But it is also harder to implement because the field is still young, with immature compilers and debugging tools and few solid libraries.
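For context, the median filter is the simplest of the operations ported to GLSL: each output pixel becomes the median of its neighbourhood, which removes speckle while preserving edges. A generic CPU reference version of the 3x3 case, of the kind GPU shaders are typically measured against (not the thesis code), fits in a few NumPy lines:

```python
import numpy as np

def median_filter_3x3(image: np.ndarray) -> np.ndarray:
    """CPU reference 3x3 median filter; edges are handled by replicating
    the border pixels before taking the window median."""
    padded = np.pad(image, 1, mode="edge")
    # stack the nine shifted views of the image and take the median per pixel
    windows = np.stack([padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
                        for dy in range(3) for dx in range(3)])
    return np.median(windows, axis=0)

# Toy stand-in for a fibre cross-section image: noisy field with a bright blob.
rng = np.random.default_rng(0)
img = rng.random((256, 256))
img[100:150, 100:150] += 2.0
smoothed = median_filter_3x3(img)
```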
50

A Classifier for Microprocessor Processing Site Prediction in Human MicroRNAs

Helvik, Snorre Andreas January 2006 (has links)
MicroRNAs are ~22 nt long non-coding RNA sequences that play a central role in gene regulation. As microRNAs are temporary and not necessarily expressed when RNA from tissue samples is sequenced, bioinformatics is an important part of microRNA discovery. Most computational microRNA discovery approaches are based on conservation between human and other species. Recent results, however, estimate that there exist around 350 microRNAs unique to human. There is therefore a need for methods that use characteristics of the primary microRNA transcript to predict microRNA candidates. The main problem with such methods is, however, that many of the characteristics in the primary microRNA transcript are correlated with the location where the Microprocessor complex cleaves the primary microRNA into the precursor, which is unknown until the candidate is experimentally verified. This work presents a method based on support vector machines (SVM) for Microprocessor processing site prediction in human microRNAs. The SVM correctly predicts the processing site for 43% of the known human microRNAs and performs well at distinguishing random hairpins from microRNAs. The processing site SVM is useful for microRNA discovery in two ways. First, the predicted processing sites can be used to build an SVM with more distinct features and, thus, increase the accuracy of the microRNA gene predictions. Second, it generates information that can be used to predict microRNA candidates directly, such as the score differences between the candidate's potential and predicted processing sites. Preliminary results show that an SVM that uses the predictions from the processing site SVM and is trained explicitly to separate microRNAs and random hairpins performs better than current prediction-based approaches. This illustrates the potential gain of using the processing site predictions in microRNA gene prediction.
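The shape of such a classifier can be sketched with scikit-learn's SVC on made-up sequence-derived features; the feature set (e.g. local base composition and hairpin structure around each candidate cut site) and training data in the thesis are far richer, so this only illustrates the train-and-score loop for picking the best-scoring candidate processing site on a hairpin.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Made-up feature vectors: one row per candidate processing site on a hairpin
# (illustrative placeholders, not the thesis features).
X_train = rng.random((200, 16))
y_train = rng.integers(0, 2, 200)        # 1 = true Microprocessor cut site

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# For a new hairpin, score every candidate position and report the highest
# scoring one as the predicted processing site.
candidates = rng.random((30, 16))
scores = clf.predict_proba(candidates)[:, 1]
predicted_site = int(np.argmax(scores))
```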
