  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Using the Signature Quadratic Form Distance for Music Information Retrieval

Hitland, Håkon Haugdal January 2011 (has links)
This thesis is an investigation into how the signature quadratic form distance can be used to search in music. Using the method used for images by Beecks, Uysal and Seidl as a starting point, I create feature signatures from sound clips by clustering features from their frequency representations. I compare three different feature types, based on Fourier coefficients, mel frequency cepstrum coefficients (MFCCs), and the chromatic scale. Two search applications are considered. First, an audio fingerprinting system, where a music file is located by a short recorded clip from the song. I run experiments to see how the system's parameters affect the search quality, and show that it achieves some robustness to noise in the queries, though less so than comparable state-of-the-art methods. Second, a query-by-humming system where humming or singing by one user is used to search in humming/singing by other users. Here none of the tested feature types achieves satisfactory search performance. I identify and discuss some possible limitations of the selected feature types for this task. I believe that this thesis serves to demonstrate the versatility of the feature clustering approach, and may serve as a starting point for further research.
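The signature quadratic form distance itself is compact enough to sketch. A minimal Python sketch, assuming feature signatures given as cluster centroids with weights and a Gaussian similarity function; the `alpha` parameter and the 2-D centroids are illustrative assumptions, not the thesis's actual audio features:

```python
import numpy as np

def sqfd(cent1, w1, cent2, w2, alpha=1.0):
    """Signature Quadratic Form Distance between two feature signatures.

    Each signature is a set of cluster centroids with weights. The weights
    of the second signature are negated and both signatures are compared
    through a similarity matrix over all centroid pairs.
    """
    cents = np.vstack([cent1, cent2])      # all centroids of both signatures
    w = np.concatenate([w1, -w2])          # weights of S1, negated weights of S2
    # Gaussian similarity between every pair of centroids
    d2 = ((cents[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
    A = np.exp(-alpha * d2)
    # clamp tiny negative rounding errors before the square root
    return float(np.sqrt(max(w @ A @ w, 0.0)))

c = np.array([[0.0, 0.0], [1.0, 1.0]])
w = np.array([0.5, 0.5])
# identical signatures have (numerically) zero distance
print(sqfd(c, w, c, w) < 1e-6)  # → True
```

The same function works for signatures with different numbers of clusters per signature, which is the point of the signature representation over fixed-length histograms.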
52

Optimizing a High Energy Physics (HEP) Toolkit on Heterogeneous Architectures

Lindal, Yngve Sneen January 2011 (has links)
A desired trend within high energy physics is to increase particle accelerator luminosities, leading to production of more collision data and higher probabilities of finding interesting physics results. A central data analysis technique used to determine whether results are interesting or not is the maximum likelihood method, and the corresponding evaluation of the negative log-likelihood, which can be computationally expensive. As the amount of data grows, it is important to benefit from the parallelism in modern computers. This, in essence, means exploiting vector registers and all available cores on CPUs, as well as utilizing co-processors such as GPUs. This thesis describes the work done to optimize and parallelize a prototype of a central data analysis tool within the high energy physics community. The work consists of optimizations for multicore processors and GPUs, as well as a mechanism to balance the load between both CPUs and GPUs with the aim of fully exploiting the power of modern commodity computers. We explore the OpenCL standard thoroughly and give an overview of its limitations when used in a large real-world software package. We reach a single-core speedup of ∼7.8x compared to the original implementation of the toolkit for the physical model we use throughout this thesis. On top of that follows an increase of ∼3.6x with 4 threads on a commodity Intel processor, as well as almost perfect scalability on NUMA systems when thread affinity is applied. GPUs give varying speedups depending on the complexity of the physics model used. With our model, price-comparable GPUs give a speedup of ∼2.5x compared to a modern Intel CPU utilizing 8 SMT threads. The balancing mechanism is based on real timings of each device and works optimally for large workloads, when the API calls to the OpenCL implementation impose a small overhead and when computation timings are accurate.
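The computational core here, the negative log-likelihood, is a sum of independent per-event terms, which is exactly what makes it amenable to vectorization and GPU offload. A hedged sketch with a simple Gaussian model (the model and data are illustrative stand-ins; the actual toolkit evaluates far more complex physics models):

```python
import numpy as np

def nll_gauss(data, mu, sigma):
    """Negative log-likelihood of a Gaussian model over all events.

    Each event contributes an independent log-density term, so the sum
    can be split across SIMD lanes, CPU cores, or GPU work-items.
    """
    log_pdf = -0.5 * ((data - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return -np.sum(log_pdf)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 100_000)
# the NLL is smaller near the true parameters than away from them
print(nll_gauss(data, 0.0, 1.0) < nll_gauss(data, 0.5, 1.0))  # → True
```

A maximum likelihood fit repeats this evaluation many times while a minimizer varies the parameters, which is why the per-evaluation cost dominates and is worth parallelizing.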
53

Profiling, Optimization and Parallelization of a Seismic Inversion Code

Stinessen, Bent Ove January 2011 (has links)
Modern chip multi-processors offer increased computing power through hardware parallelism. However, for applications to exploit this parallelism, they have to be either designed for or adapted to the new processor architectures. Seismic processing applications usually handle large amounts of data that are well suited for the task-level parallelism found in multi-core shared memory computer systems. In this thesis, a large production code for seismic inversion is profiled and analyzed to find areas of the code suitable for parallel optimization. These code fragments are then optimized through parallelization and by using highly optimized multi-threaded libraries. Our parallelizations of the linearized AVO seismic inversion algorithm used in the application scale up to 24 cores, with almost linear speedup up to 16 cores, on a quad twelve-core AMD Opteron system. Overall, our optimization efforts result in a performance increase of about 60 % on a dual quad-core AMD Opteron system. The optimization efforts are guided by the Seven Dwarfs taxonomy and its proposed benchmarks. This thesis thus serves as a case study of their applicability to real-world applications. This work is done in collaboration with Statoil and builds on previous work by Andreas Hysing, a former HPC-Lab master student, and by the author.
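The task-level parallelism described above comes from seismic traces being processed independently of each other. A minimal sketch of that decomposition, assuming a Unix-like fork start method; the per-trace function is a hypothetical stand-in, not the actual linearized AVO inversion:

```python
from concurrent.futures import ProcessPoolExecutor

def invert_trace(trace):
    # hypothetical stand-in for the per-trace linearized AVO inversion;
    # any pure function of a single trace parallelizes the same way
    return sum(x * x for x in trace)

def invert_all(traces, workers=4):
    # independent traces fan out across cores: no shared state, no locks
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(invert_trace, traces))

if __name__ == "__main__":
    traces = [[0.1 * i, 0.2 * i] for i in range(8)]
    print(invert_all(traces) == [invert_trace(t) for t in traces])  # → True
```

Because each task is pure, the parallel result must match the serial one, which makes this decomposition easy to validate before tuning thread counts or swapping in optimized libraries.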
54

Brukerinvolvering i smidig utvikling: Utfordringer og muligheter / User Involvement in Agile Development: Challenges and Possibilities

Sandven, Håvard January 2011 (has links)
Agile software development has in recent years become a highly popular method for building IT systems. This thesis examines how user experience work can be integrated into an agile development process. The study is in two parts and builds on the work carried out in the autumn of 2010 in the pre-study <i>Brukervennlighet i smidig systemutvikling</i> (usability in agile software development). The first part of the thesis looks at the interplay between customer and supplier in the three projects studied in the pre-study, with particular attention to user involvement. Information was gathered through interviews with the project managers of the development projects and the corresponding customer representatives. The study shows that a development project is a highly complex process with many elements to manage. One important finding is that the customer representative plays a very central role: good collaboration between the customer representative and the developers, with good two-way communication, is required for the development project to succeed. Building on the findings about the interplay between customer and developers, the second part of the thesis examines the general possibilities and challenges between the disciplines of user experience and agile development. This involved interviewing representatives from the Norwegian agile development and user experience communities. The interviews showed strong engagement around these issues in both communities. Current best practice is to always involve user experience early in the development process. In addition, it is often useful to let user experience work run one iteration ahead of development, since user experience practitioners want a holistic overview while developers want to focus on details. The study also shows that challenges remain in the interplay between user experience practitioners and agile developers, but that the field is on the right track towards good collaboration. This reflects the fact that agile development is still relatively young and in a maturing phase, searching for the optimal process.
55

Self-Organization in Artificial Neural Networks with Biologically Inspired Spike-Rate Learning

Hjellvik, Anders January 2011 (has links)
Artificial intelligence and learning is a growing field. There are many ways of making a computer program learn; in most cases one has a specific problem to solve and does not really care how it is solved. This thesis has a specific problem, but the main focus is on how it is solved. One of the most exciting ways to learn is by the so-called unsupervised learning methods, where programs/agents learn without any human interaction. Psychologists and neurologists have long tried to understand how the human brain works, but due to its complexity some obstacles remain before we will be able to simulate its different functionalities. This thesis is an attempt to get one step closer to solving the problem of how learning happens and memories form. If we were able to simulate human learning in a machine, there is no telling where it could end. Jørn Hokland has put forward three learning rules that may describe how learning happens. These rules will be examined and then used in an artificial neural network with the intention of controlling a simulated robot. Artificial neural networks (ANNs) are more or less inspired by the biological neural networks (BNNs) found in humans and animals. As we will see, this thesis seeks to be one of the more biologically inspired.
56

Reusing External Library Components in the Creek CBR System

Stiklestad, Erik January 2007 (has links)
The Creek system has an architecture that facilitates combined case-based and model-based reasoning. The jColibri system, developed by the CBR group of Universidad Complutense in Madrid, contains a library of CBR system components intended for sharing and reuse. The system also contains an ontology (CBROnto) of CBR tasks and methods for explicit modelling of CBR systems, in addition to general CBR terminology. In this master's degree project, Creek and jColibri are compared with the aim of developing a mechanism for importing jColibri components into Creek, so that they can be integrated into a running Creek system. The mechanism is exemplified through the selection of a few specific components, and the integration of these components into an implemented demonstrator system. In addition, the efforts needed to bring Creek into the jColibri framework are identified.
57

Efficient Processes and Transparent Information Flow in Supply Chain Through Use of RFID

Sørensen, Anita January 2007 (has links)
RFID is an up-and-coming technology holding promise of closing information gaps in the supply chain. Information is probably the biggest driver of performance in supply chains today, and information control is seen as a huge advantage in management. This report addresses how RFID may contribute to improving management and the supply chain within warehouse management. Through a case study of three Norwegian businesses, the report identifies processes in management and logistics. An evaluation of where a theoretical RFID implementation may have impact shows that RFID increases data and data collection in almost all identified processes. Both management and logistics in general have the potential of automating several processes, and the collected data results in increased information sharing. If businesses handle the identified challenges in the technology, RFID has huge possibilities for improving warehouse management and the supply chain in general.
58

Path Rasterizer for OpenVG

Liland, Eivind Lyngsnes January 2007 (has links)
Vector graphics provide smooth, resolution-independent images and are used for user interfaces, illustrations, fonts and more in a wide range of applications. During the last years, handheld devices have become increasingly powerful and feature-rich. It is expected that an increasing number of devices will contain dedicated GPUs (graphics processing units) capable of high quality 3d graphics for games. It is of interest to use the same hardware for accelerating vector graphics. OpenVG is a new API for vector graphics rendering on a wide range of devices from desktop to handheld. Implementations can use different algorithms and ways of accelerating the rendering process in hardware, transparent to the user application. State of the art vector graphics solutions perform much processing in the CPU, transfer large amounts of vertex and polygon data from the CPU to the GPU, and generally use the GPU in a suboptimal way. More efficient approaches are desirable. Recently developed algorithms provide efficient curve rendering with little CPU overhead and a significant reduction in vertex and polygon count. Some issues remain before the approach can be used for rendering in an OpenVG implementation. This thesis builds on these algorithms to develop an approach that can be used for a conformant OpenVG implementation. A number of issues, mainly related to precision, robustness and missing features, are identified. Solutions are suggested and either implemented in a prototype or left as future work. Preliminary tests compare the new approach to traditional approximation with line segments. Vertex and triangle count, as well as the simulated tile list counts, are lowered significantly, and CPU overhead from subdivision is avoided or greatly reduced in many common cases. CPU overhead from tessellation is eliminated through the use of an improved stencil buffer technique. Data-sets with different properties show varying amounts of improvement from the new approach. For some data-sets, vertex and triangle count is lowered by up to 70% and subdivision is completely avoided, while for others there is no improvement.
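The "recently developed algorithms" for curve rendering with low vertex counts are presumably in the style of Loop and Blinn's implicit test (an assumption on my part, not stated in the abstract): each quadratic Bézier segment is drawn as a single triangle with canonical (u, v) coordinates at its vertices, and the per-fragment work reduces to a sign test instead of tessellating the curve into line segments. A minimal sketch of that test:

```python
def inside_quadratic(u, v):
    """Loop-Blinn style implicit test for a quadratic Bezier segment.

    The control points are assigned canonical coordinates (0,0), (1/2,0)
    and (1,1); under that parameterization the curve maps onto u^2 = v,
    and a fragment lies on the curve's concave side exactly when
    u^2 - v < 0. The GPU interpolates (u, v) across the triangle.
    """
    return u * u - v < 0

# the curve's own midpoint (u=1/2, v=1/4) satisfies u^2 = v: not inside
print(inside_quadratic(0.5, 0.25))  # → False
# a fragment above the curve is inside
print(inside_quadratic(0.5, 0.5))   # → True
```

Because the test is evaluated per fragment, the CPU only needs to emit one triangle per curve segment, which is where the reported reductions in vertex and triangle count come from.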
59

Realtime capture and streaming of gameplay experiences

Amlie, Kristian January 2007 (has links)
Today's games are social on a level that could only be imagined before. With modern games putting a stronger emphasis on social networking than ever before, the identity in the game often becomes on par with the one in real life. Yet many games lack really strong networking tools, especially where networking between players of different games is concerned. Geelix is a project that tries to enhance the social gaming aspect by providing sharing of what has been aptly named gaming experiences. The motivation for this goal is to enable stronger support for letting friends take part in your online, or even offline, experiences. The belief is that sharing gaming experiences is a key element in building strong social networks in games. This master's thesis was written in relation to the Geelix project, with a focus on enhancing the Geelix live sharing experience with advanced methods for video compression and streaming.
60

Intelligent Sliding Doors

Fosstveit, Håvar Aambø January 2012 (has links)
You can see sliding doors everywhere, be it at the grocery store or the hospital. Today these doors are mostly based on naive motion sensing, and hence not very intelligent in deciding whether or not to open. I propose a solution that replaces the traditional sensor with the more sophisticated Microsoft Kinect depth mapping sensor, allowing for skeletal tracking and feature extraction. I have applied hidden Markov models to the behavioural features to understand the human intentions. Combined with a few simple rules, this solution proved accurate in 4 out of 5 cases in understanding the user's intention in a controlled laboratory test.
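HMM-based intention recognition of this kind can be sketched with the forward algorithm: score the observed feature sequence under one model per intention and open the door for the likelier hypothesis. All transition and emission numbers below are hypothetical illustrations, not the thesis's trained models:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

# shared state dynamics, different emission models for the two intentions
# (observation symbol 0 = moving toward the door, 1 = moving laterally)
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_approach = np.array([[0.9, 0.1], [0.6, 0.4]])
B_passby = np.array([[0.2, 0.8], [0.4, 0.6]])

obs = [0, 0, 0, 0, 0, 0]  # a track steadily heading for the door
ll_a = forward_loglik(obs, pi, A, B_approach)
ll_p = forward_loglik(obs, pi, A, B_passby)
print("open" if ll_a > ll_p else "stay closed")  # → open
```

In practice the observation symbols would come from discretized Kinect skeletal features (position, heading, speed), and the "few simple rules" mentioned above would veto or confirm the HMM decision.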
