About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

[Lecture Games] Python programming game

Johnsen, Andreas Lyngstad, Ushakov, Georgy January 2011 (has links)
Pythia is a programming game that allows the player to change pieces of their environment through use of the programming language Python. The idea is that the game could be used as a part of teaching simple programming to first-year university students. The game should be fun enough for the students to keep playing, teach enough for it to earn a place as a teaching tool, and it should be usable by all students. It should also be possible for a teacher to create their own content for the game. Pythia was implemented by extending the Python interpreter Jython and building a game around it. The game was rendered using a simple hardware acceleration library. A simple story was invented and there was some research on learning and programming in games. A set of levels was made, matching the story and introducing puzzles related to simple programming. These levels were used in testing to collect data on usability, entertainment, and learning. There were also tests of the performance of the game on several systems, and an evaluation was made of creating content for the game. The game has potential for being used to teach programming to first-year students, as testers found it to be both fun and educational. We do not know whether it would be possible to use it in practice, as it does not currently run on thin clients. If students can run it, we feel that it should be possible for teachers to create puzzles that serve the teaching goal.
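The core mechanic described above — player-written Python mutating pieces of the game world — can be sketched in a few lines. Everything here (the `Door` class, the whitelisted-environment approach) is our own illustration, not Pythia's actual code:

```python
# Hypothetical sketch of a Pythia-style puzzle: the player's script is executed
# against a restricted environment exposing only selected game objects.
class Door:
    def __init__(self):
        self.open = False

    def unlock(self):
        self.open = True

def run_player_code(source, world):
    # Expose only whitelisted objects; strip builtins as a (naive) sandbox.
    env = {"__builtins__": {}, **world}
    exec(source, env)  # the player's code mutates the exposed objects
    return world

world = {"door": Door()}
run_player_code("door.unlock()", world)
print(world["door"].open)  # True: the player's one-liner opened the door
```

A real game would need a far stricter sandbox than an emptied `__builtins__`, which is a known-insufficient barrier against hostile code; for a classroom tool it merely illustrates the interaction model.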
132

Innendørs kart og navigering : 3D visualisering og relasjoner til eksterne data / Indoor Maps and Navigation : 3D visualization and relations to external data

Meidell, Jon Villy Selnes January 2011 (has links)
The goal of this thesis was to demonstrate a concept for indoor navigation with 3D map display on the Android platform. Based on available documentation, a selection of code and application examples, and hands-on exploration of an Android device, observations were made that made it possible to develop an application for this platform. The project has given good insight into the Android platform and programming with OpenGL ES, and has given me much new knowledge.
133

Energy Aware RTOS for EFM32

Spalluto, Angelo January 2011 (has links)
Power consumption is a major concern for portable and battery-operated devices. Recently, new low-power techniques have been used to achieve acceptable autonomy in battery-powered systems. FreeRTOS is a real-time kernel designed especially for embedded low-power MCUs. Energy Micro develops and sells energy-friendly microcontrollers based on the industry-leading ARM Cortex-M3 32-bit architecture. The aim of this thesis is to propose a new FreeRTOS Tickless Framework solution that exploits the power modes provided by the EFM32. Three different solutions have been proposed: FreeRTOS RTC, FreeRTOS Tickless with prescaling, and FreeRTOS Tickless without prescaling. The simulations showed that the Tickless Framework saves from 15x to 44x more energy than the original version of FreeRTOS. Using a self-made benchmark, the battery (1500 mAh) lifetime was increased from 11 days to 487 days.
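The battery figures quoted in the abstract can be checked with simple capacity arithmetic. The average current draws below are derived from the stated 1500 mAh capacity and lifetimes, not measured values:

```python
# Back-of-the-envelope check of the reported battery lifetimes.
CAPACITY_MAH = 1500.0

def lifetime_days(avg_current_ma):
    return CAPACITY_MAH / avg_current_ma / 24.0

def avg_current_ma(days):
    return CAPACITY_MAH / (days * 24.0)

# Average draw implied by 11 days (periodic tick) vs 487 days (tickless sleep):
i_tick = avg_current_ma(11)       # ~5.7 mA
i_tickless = avg_current_ma(487)  # ~0.13 mA
print(round(i_tick / i_tickless, 1))       # 44.3
print(round(lifetime_days(i_tickless)))    # 487
```

The implied ~44x reduction in average current matches the upper end of the 15x-44x savings range reported for the Tickless Framework.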
134

Real-Time Rigid Body Interactions

Fossum, Fredrik January 2011 (has links)
Rigid body simulations are useful in many areas, most notably video games and computer animation. However, the requirements for accuracy and performance vary greatly between applications. In this project we combine methods and techniques from different sources to implement a rigid body simulation. The simulation uses a particle representation to approximate objects, with the intent of reaching better performance at the cost of accuracy. We simulate cubes in order to showcase the behavior of our simulation, and also to highlight its flaws. We also write a graphical interface for our simulation using OpenGL, which allows us to move and zoom around the simulation and choose whether to render cube geometry or the particle representations. We show that our simulation behaves in a realistic way, and when running our simulation on a CPU we are able to simulate several hundred cubes in real time. We use OpenCL to accelerate our simulation on a GPU, and take advantage of OpenCL/OpenGL interoperability to increase performance. Our OpenCL implementation achieves speedups of up to 12x compared to the CPU version, and is able to simulate thousands of cubes in real time.
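The particle representation mentioned above reduces cube-cube collision to many cheap particle-particle tests. The following is our own minimal sketch of that idea, using a simple penalty (spring) force; the thesis implementation and its constants will differ:

```python
# Sketch: approximate each cube by a grid of particles; collision response is
# a penalty force summed over overlapping particle pairs (illustrative only).
import itertools

def cube_particles(center, half, n=2):
    # n^3 particles spanning the cube; n=2 gives the 8 corners.
    cs = [-half + 2 * half * i / (n - 1) for i in range(n)]
    return [tuple(c + d for c, d in zip(off, center))
            for off in itertools.product(cs, repeat=3)]

def collide(pa, pb, radius=0.5, k=100.0):
    # Net penalty force on body A from particle pairs closer than 2*radius.
    force = [0.0, 0.0, 0.0]
    for a in pa:
        for b in pb:
            d = [x - y for x, y in zip(a, b)]
            dist = sum(c * c for c in d) ** 0.5
            overlap = 2 * radius - dist
            if overlap > 0 and dist > 0:
                for i in range(3):
                    force[i] += k * overlap * d[i] / dist
    return force

a = cube_particles((0.0, 0.0, 0.0), 0.5)
b = cube_particles((1.2, 0.0, 0.0), 0.5)  # nearby neighbor along +x
fx, fy, fz = collide(a, b)
print(fx < 0)  # True: the net force pushes cube A away, in the -x direction
```

The accuracy/performance trade-off is visible even here: a coarse particle grid makes contact detection trivially parallel (each pair is independent, which is what maps well to OpenCL), but contact geometry is only approximate.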
135

The Lattice Boltzmann Simulation on Multi-GPU Systems

Valderhaug, Thor Kristian January 2011 (has links)
The Lattice Boltzmann Method (LBM) is widely used to simulate different types of flow, such as water, oil and gas, in porous reservoirs. In the oil industry it is commonly used to estimate petrophysical properties of porous rocks, such as permeability. To achieve the required accuracy it is necessary to use large simulation models requiring large amounts of memory. The method is highly data-intensive, making it suitable for offloading to the GPU. However, the limited amount of memory available on modern GPUs severely limits the size of the datasets that can be simulated. In this thesis, we increase the size of the datasets that can be simulated by using techniques that lower the memory requirement while retaining numerical precision. These techniques improve the size that can be simulated on a single GPU by about 20 times for datasets with 15% porosity. We then develop multi-GPU simulations for different hardware configurations using OpenCL and MPI to investigate how LBM scales when simulating large datasets. The performance of the implementations is measured using three porous rock datasets provided by Numerical Rocks AS. By connecting two Tesla S2070s to a single host we are able to achieve a speedup of 1.95 compared to using a single GPU. For large datasets we are able to completely hide the host-to-host communication in a cluster configuration, showing that LBM scales well and is suitable for simulation on a cluster with GPUs. The correctness of the implementations is confirmed against an analytically known flow, and against three datasets with known permeability, also provided by Numerical Rocks AS.
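To make the method concrete, here is a single-cell BGK collision step of the common D2Q9 lattice Boltzmann scheme, verifying that collision conserves mass. This is our own minimal sketch of the standard formulation, not the thesis kernels (which are 3D, OpenCL, and memory-optimized):

```python
# One BGK collision step for a single D2Q9 cell (illustrative sketch).
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    cu = c @ u            # projection of each lattice velocity onto u
    usq = u @ u
    return rho * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def collide(f, tau=0.6):
    rho = f.sum()         # density: zeroth moment
    u = (f @ c) / rho     # velocity: first moment
    return f - (f - equilibrium(rho, u)) / tau  # relax toward equilibrium

# Perturbed distribution for one cell, then one collision step:
f = equilibrium(1.0, np.array([0.05, 0.0])) * (1 + 0.01 * np.arange(9))
f_new = collide(f)
print(np.isclose(f.sum(), f_new.sum()))  # True: collision conserves mass
```

Because each cell's collision is independent (only the streaming step touches neighbors), the method is, as the abstract notes, highly data-parallel and a natural fit for GPUs.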
136

Parallel Algorithms for Neuronal Spike Sorting

Bergheim, Thomas Stian, Skogvold, Arve Aleksander Nymo January 2011 (has links)
Neurons communicate through electrophysiological signals, which may be recorded using electrodes inserted into living tissue. When a neuron emits a signal, it is referred to as a spike, and an electrode can detect spikes from multiple neurons. Neuronal spike sorting is the process of classifying the spike activity based on which neuron each spike signal is emitted from. Advances in technology have introduced better recording equipment, which allows the recording of many neurons at the same time. However, clustering software is lagging behind. Currently, spike sorting is often performed semi-manually by experts, with computer assistance, in a drastically reduced feature space. This makes the clustering prone to subjectivity. Automating the process will make classification much more efficient, and may produce better results. Implementing accurate and efficient spike sorting algorithms is therefore increasingly important. We have developed parallel implementations of superparamagnetic clustering, a novel clustering algorithm, as well as of k-means clustering, which serves as a useful comparison. Several feature extraction methods have been implemented to test various input distributions with the clustering algorithms. To assess the quality of the results from the algorithms, we have also implemented different cluster quality measures. Our implementations have been benchmarked and found to scale well, both with increased problem sizes and when run on multi-core processors. The results from our cluster quality measurements are inconclusive, and we identify this as a problem related to the subjectivity in the manually classified datasets. To better assess the utility of the algorithms, comparisons with intracellular recordings should be performed.
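A toy version of the k-means comparison pipeline described above: synthetic spike waveforms from two "neurons" (differing only in amplitude) are sorted with plain k-means. All names, shapes, and noise levels are our own illustration:

```python
# Toy spike sorting: two synthetic spike shapes clustered with k-means.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, init_idx, iters=10):
    # Deterministic initialization from chosen waveforms avoids empty clusters.
    centers = X[init_idx].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels

# Two "neurons": same Gaussian spike shape, amplitudes 1.0 and 2.0, plus noise.
t = np.linspace(0, 1, 32)
shape = np.exp(-((t - 0.3) ** 2) / 0.01)
spikes = np.vstack([a * shape + 0.05 * rng.standard_normal(32)
                    for a in [1.0] * 50 + [2.0] * 50])

labels = kmeans(spikes, init_idx=[0, 99])  # seed one center per neuron
same_a = len(set(labels[:50])) == 1
same_b = len(set(labels[50:])) == 1
print(same_a and same_b and labels[0] != labels[99])  # True: clusters match neurons
```

Real recordings are far harder: spike shapes overlap, drift over time, and the "ground truth" used for validation is itself a manual classification, which is exactly the subjectivity problem the abstract identifies.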
137

Introducing SimiLite : Enabling Similarity Retrieval in SQL

Veøy, Kristian January 2011 (has links)
This project has implemented SimiLite, a plug-in to SQLite which enables the use of metric indices in SQL tables. SimiLite can easily be extended with different indices, and the indices LAESA and SSSTree have been implemented and verified. This project has also implemented a framework for easy comparison of the indices within SimiLite. It was found that while SimiLite causes a slowdown of about 5-10x compared to the reference solution for a cheap metric, this balances out quickly once the cost of the metric increases.
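The pivot-filtering idea behind LAESA, one of the two indices mentioned above, can be sketched briefly: distances from every object to a few pivots give triangle-inequality lower bounds, so most metric evaluations can be skipped. This is our own toy version (in real LAESA the object-pivot distances are precomputed offline, and the search is more refined):

```python
# Toy LAESA-style nearest-neighbor search using pivot lower bounds.
def laesa_nn(query, objects, pivots, dist):
    d_qp = [dist(query, p) for p in pivots]
    # Lower bound for each object via the triangle inequality:
    #   d(q, o) >= |d(q, p) - d(o, p)| for every pivot p.
    lower = [max(abs(dq - dist(o, p)) for dq, p in zip(d_qp, pivots))
             for o in objects]
    best, best_d, evals = None, float("inf"), 0
    for i in sorted(range(len(objects)), key=lambda i: lower[i]):
        if lower[i] >= best_d:
            break  # all remaining candidates have even larger lower bounds
        d = dist(query, objects[i])  # the (potentially expensive) metric call
        evals += 1
        if d < best_d:
            best, best_d = objects[i], d
    return best, best_d, evals

dist = lambda x, y: abs(x - y)
objs = list(range(0, 100, 3))
best, bd, evals = laesa_nn(40, objs, pivots=[0, 99], dist=dist)
print(best, evals)  # the true neighbor 39, with far fewer metric calls than len(objs)
```

This also explains the trade-off the abstract reports: the bookkeeping adds constant overhead (the 5-10x slowdown for a cheap metric), but each avoided `dist` call pays off as soon as the metric itself is expensive.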
138

Using the Signature Quadratic Form Distance for Music Information Retrieval

Hitland, Håkon Haugdal January 2011 (has links)
This thesis is an investigation into how the signature quadratic form distance can be used to search in music. Using the method applied to images by Beecks, Uysal and Seidl as a starting point, I create feature signatures from sound clips by clustering features from their frequency representations. I compare three different feature types, based on Fourier coefficients, mel-frequency cepstrum coefficients (MFCCs), and the chromatic scale. Two search applications are considered. First, an audio fingerprinting system, where a music file is located by a short recorded clip from the song. I run experiments to see how the system's parameters affect the search quality, and show that it achieves some robustness to noise in the queries, though less so than comparable state-of-the-art methods. Second, a query-by-humming system, where humming or singing by one user is used to search in humming or singing by other users. Here none of the tested feature types achieve satisfactory search performance. I identify and discuss some possible limitations of the selected feature types for this task. I believe that this thesis demonstrates the versatility of the feature clustering approach, and may serve as a starting point for further research.
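The signature quadratic form distance compares two feature signatures (cluster centroids with weights) in its textbook form: SQFD(S, T)² = (wₛ | −wₜ) A (wₛ | −wₜ)ᵀ, where A holds pairwise similarities between all centroids. A minimal sketch with a Gaussian similarity function (our choice of kernel and parameters, not necessarily the thesis configuration):

```python
# Minimal signature quadratic form distance between two feature signatures.
import numpy as np

def sqfd(cs, ws, ct, wt, alpha=1.0):
    C = np.vstack([cs, ct])                    # all centroids, concatenated
    w = np.concatenate([ws, -np.asarray(wt)])  # second signature's weights negated
    d2 = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    A = np.exp(-alpha * d2)                    # Gaussian similarity matrix
    return float(np.sqrt(max(float(w @ A @ w), 0.0)))

sig_a = (np.array([[0.0, 0.0], [1.0, 1.0]]), np.array([0.5, 0.5]))
sig_b = (np.array([[0.0, 0.0], [2.0, 2.0]]), np.array([0.5, 0.5]))
print(round(sqfd(*sig_a, *sig_a), 6))  # 0.0: identical signatures, zero distance
print(sqfd(*sig_a, *sig_b) > 0.0)      # True: differing signatures
```

A key practical property, relevant to the clustering approach: the two signatures may have different numbers of clusters, since A is built over the union of centroids.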
139

Optimizing a High Energy Physics (HEP) Toolkit on Heterogeneous Architectures

Lindal, Yngve Sneen January 2011 (has links)
A desired trend within high energy physics is to increase particle accelerator luminosities, leading to the production of more collision data and higher probabilities of finding interesting physics results. A central data analysis technique used to determine whether results are interesting or not is the maximum likelihood method, and the corresponding evaluation of the negative log-likelihood, which can be computationally expensive. As the amount of data grows, it is important to benefit from the parallelism in modern computers. This, in essence, means exploiting vector registers and all available cores on CPUs, as well as utilizing co-processors such as GPUs. This thesis describes the work done to optimize and parallelize a prototype of a central data analysis tool within the high energy physics community. The work consists of optimizations for multicore processors and GPUs, as well as a mechanism to balance the load between both CPUs and GPUs with the aim of fully exploiting the power of modern commodity computers. We explore the OpenCL standard thoroughly and give an overview of its limitations when used in a large real-world software package. We reach a single-core speedup of ~7.8x compared to the original implementation of the toolkit for the physical model we use throughout this thesis. On top of that follows an increase of ~3.6x with 4 threads on a commodity Intel processor, as well as almost perfect scalability on NUMA systems when thread affinity is applied. GPUs give varying speedups depending on the complexity of the physics model used. With our model, price-comparable GPUs give a speedup of ~2.5x compared to a modern Intel CPU utilizing 8 SMT threads. The balancing mechanism is based on real timings of each device, and works optimally for large workloads, when the API calls to the OpenCL implementation impose a small overhead and when computation timings are accurate.
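The computational kernel in question is the negative log-likelihood: a sum of independent per-event terms, which is exactly what makes vectorization and GPU offload attractive. A minimal sketch with a Gaussian model and made-up data (the thesis toolkit uses far more complex physics models):

```python
# Negative log-likelihood of a Gaussian model over a set of events.
import math

def nll(data, mu, sigma):
    # -log L = -sum_i log pdf(x_i); each event's term is independent of the
    # others, so the sum parallelizes trivially across cores or GPU threads.
    norm = math.log(sigma * math.sqrt(2 * math.pi))
    return sum(0.5 * ((x - mu) / sigma) ** 2 + norm for x in data)

data = [0.1, -0.2, 0.05, 0.3]
print(nll(data, mu=0.0, sigma=1.0) < nll(data, mu=1.0, sigma=1.0))
# True: the better-fitting parameters give a lower NLL
```

A maximum likelihood fit repeats this evaluation for many candidate parameter values inside a minimizer, which is why the per-evaluation cost dominates as datasets grow.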
140

Profiling, Optimization and Parallelization of a Seismic Inversion Code

Stinessen, Bent Ove January 2011 (has links)
Modern chip multi-processors offer increased computing power through hardware parallelism. However, for applications to exploit this parallelism, they have to be either designed for or adapted to the new processor architectures. Seismic processing applications usually handle large amounts of data that are well suited for the task-level parallelism found in multi-core shared-memory computer systems. In this thesis, a large production code for seismic inversion is profiled and analyzed to find areas of the code suitable for parallel optimization. These code fragments are then optimized through parallelization and by using highly optimized multi-threaded libraries. Our parallelization of the linearized AVO seismic inversion algorithm used in the application scales up to 24 cores, with almost linear speedup up to 16 cores, on a quad twelve-core AMD Opteron system. Overall, our optimization efforts result in a performance increase of about 60% on a dual quad-core AMD Opteron system. The optimization efforts are guided by the Seven Dwarfs taxonomy and its proposed benchmarks; this thesis thus serves as a case study of their applicability to real-world applications. This work was done in collaboration with Statoil and builds on previous work by Andreas Hysing, a former HPC-Lab master's student, and by the author.
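Scaling claims like "almost linear up to 16 cores" can be sanity-checked with Amdahl's law: with serial fraction s, the speedup on n cores is 1 / (s + (1 − s)/n). The serial fraction below is assumed for illustration, not a figure from the thesis:

```python
# Amdahl's-law speedup for a given serial fraction and core count.
def amdahl(serial_frac, cores):
    return 1.0 / (serial_frac + (1.0 - serial_frac) / cores)

print(round(amdahl(0.01, 16), 1))  # 13.9: even 1% serial work bends the curve
```

This is why profiling comes first in the workflow described above: the serial fraction left after optimization, not the core count, ultimately caps the achievable speedup.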
