31

Design and Evaluation of a Recommender System for Course Selection

Unelsrød, Hans Fredrik January 2011 (has links)
In this thesis we construct a recommender system for course selection in higher education, specifically at the Norwegian University of Science and Technology. Part of what makes our approach novel compared with existing solutions is that we weight each user in the collaborative filtering process based on their chosen degree subject (major) and on whether or not the two users being compared are friends. We also combine collaborative filtering and content-based recommendations in a hybrid solution. Another novel aspect of our solution is that we build our system on top of an existing website for rating courses, which gives us unique access to a dataset containing thousands of user ratings of courses.
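The weighting idea described above can be sketched as follows. This is an illustrative outline, not the thesis's actual code: the course names, the multiplier values (1.5 for a shared major, 1.3 for friendship) and the function names are all made up for the example.

```python
# Sketch: user-user similarity for collaborative filtering, boosted when
# two users share a major or are friends. Weights are illustrative.
from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity over the courses both users have rated.

    a and b are dicts mapping course code -> rating."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[c] * b[c] for c in common)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def weighted_sim(a, b, major_a, major_b, friends, w_major=1.5, w_friend=1.3):
    """Scale the base similarity when users share a major or are friends."""
    s = cosine_sim(a, b)
    if major_a == major_b:
        s *= w_major
    if friends:
        s *= w_friend
    return s
```

Two users with identical ratings, the same major and a friendship link would then end up with similarity 1.0 × 1.5 × 1.3 = 1.95 instead of 1.0, so their ratings count for more in each other's predictions.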
32

Adaptive Aggregation of Recommender Systems

Bjørkøy, Olav Frihagen January 2011 (has links)
In the field of artificial intelligence, recommender systems are methods for predicting the relevance of items to users. The items can be just about anything, for example documents, articles, movies, music, events or other users. Recommender systems examine data such as ratings, query logs, user behavior and social connections to predict what each user will think of each item.

Modern recommender systems combine multiple standard recommenders in order to leverage disjoint patterns in the available data. By combining different methods, complex predictions that rely on much evidence can be made. These aggregations can for example be done by estimating weights that result in an optimal combination. However, we posit that these systems have an important weakness: there is an underlying, misplaced subjectivity to relevance prediction. Each chosen recommender system reflects one view of how users and items should be modeled. We believe the selection of recommender methods should be made automatically, based on their predicted accuracy for each user and item. After all, a system that insists on being adaptive in one particular way is not really adaptive at all.

This thesis presents a novel method for prediction aggregation that we call adaptive recommenders. Multiple recommender systems are combined on a per-user and per-item basis by estimating their individual accuracy in the current context. This is done by creating a secondary set of error-estimating recommenders. The core insight is that standard recommenders can be used to estimate the accuracy of other recommenders. As far as we know, this type of adaptive prediction aggregation has not been done before.

Prediction aggregation (combining scores) is tested in a recommendation scenario. Rank aggregation (sorting result lists) is tested in a personalized search scenario. Our initial results are promising and show that adaptive recommenders can outperform both standard recommenders and simple aggregation methods.
We will also discuss the implications and limitations of our results.
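The per-user, per-item weighting described in this abstract can be sketched roughly as below. This is a hedged illustration of the general idea, not the thesis's implementation: the recommenders and error estimators are stubbed as plain callables, and the inverse-error weighting scheme is one plausible choice, not necessarily the one the thesis uses.

```python
# Sketch: combine several recommenders' predictions, weighting each by
# its estimated accuracy for this particular (user, item) pair. In the
# thesis the error estimators are themselves standard recommenders
# trained on observed prediction errors; here they are simple callables.

def adaptive_predict(user, item, recommenders, error_estimators):
    """Weighted average of predictions, weight = 1 / (estimated error + eps)."""
    eps = 1e-6  # avoid division by zero for a perfectly accurate recommender
    weights = [1.0 / (est(user, item) + eps) for est in error_estimators]
    preds = [rec(user, item) for rec in recommenders]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, preds)) / total
```

A recommender whose estimated error for this user and item is low dominates the combination, so the aggregate tracks whichever method is predicted to be accurate in the current context.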
33

Energy Aware RTOS for EFM32

Spalluto, Angelo January 2011 (has links)
Power consumption is a major concern for portable or battery-operated devices. Recently, new low-power techniques have been used to achieve acceptable autonomy in battery-powered systems. FreeRTOS is a real-time kernel designed especially for embedded low-power MCUs. Energy Micro develops and sells energy-friendly microcontrollers based on the industry-leading ARM Cortex-M3 32-bit architecture. The aim of this thesis is to propose a new FreeRTOS Tickless Framework that exploits the power modes provided by the EFM32. Three solutions are proposed: FreeRTOS RTC, FreeRTOS Tickless with prescaling, and FreeRTOS Tickless without prescaling. The simulations showed that the Tickless Framework saves 15x to 44x more energy than the original version of FreeRTOS. Using a self-made benchmark, battery (1500 mAh) lifetime increased from 11 days to 487 days.
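The reported battery figures can be sanity-checked with simple arithmetic. This assumes the full 1500 mAh capacity is usable at a constant average draw, which real batteries do not quite deliver, so it is a back-of-the-envelope check only.

```python
# Back-of-the-envelope check of the reported lifetimes: average current
# implied by draining a 1500 mAh battery over the stated number of days.

def avg_current_ma(capacity_mah, lifetime_days):
    """Average current draw (mA) implied by capacity and lifetime."""
    return capacity_mah / (lifetime_days * 24)

tick = avg_current_ma(1500, 11)       # original tick-based FreeRTOS
tickless = avg_current_ma(1500, 487)  # proposed Tickless Framework
```

The implied averages are roughly 5.7 mA versus 0.13 mA, a ratio of about 44x, which is consistent with the upper end of the 15x to 44x savings the abstract reports.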
34

Real-Time Rigid Body Interactions

Fossum, Fredrik January 2011 (has links)
Rigid body simulations are useful in many areas, most notably video games and computer animation. However, the requirements for accuracy and performance vary greatly between applications. In this project we combine methods and techniques from different sources to implement a rigid body simulation. The simulation uses a particle representation to approximate objects with the intent of reaching better performance at the cost of accuracy. We simulate cubes in order to showcase the behavior of our simulation, and also to highlight its flaws.

We also write a graphical interface for our simulation using OpenGL which allows us to move and zoom around our simulation, and choose whether to render cube geometry or the particle representations. We show how our simulation behaves in a realistic way, and when running our simulation on a CPU we are able to simulate several hundred cubes in real-time. We use OpenCL to accelerate our simulation on a GPU, and take advantage of OpenCL/OpenGL interoperability to increase performance. Our OpenCL implementation achieves speedups up to 12 compared to the CPU version, and is able to simulate thousands of cubes in real-time.
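A common way to realize the particle representation mentioned above is a penalty force between overlapping particles from different bodies. The sketch below shows one such linear spring model; the function name, the spring constant and the pure-Python form are illustrative assumptions, not the thesis's (CPU/OpenCL) code.

```python
# Sketch: penalty contact force between two particles of equal radius.
# When particles from different bodies overlap, a linear spring pushes
# them apart in proportion to the overlap depth.
import math

def contact_force(p1, p2, radius, k=100.0):
    """Force on p1 from contact with p2; zero when not overlapping."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    dz = p2[2] - p1[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    overlap = 2 * radius - dist
    if overlap <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)      # particles are apart (or coincident)
    scale = -k * overlap / dist     # directed away from p2
    return (scale * dx, scale * dy, scale * dz)
```

Because each particle only needs its neighbors within a fixed radius, this force evaluation is embarrassingly parallel, which is what makes the approach map well onto OpenCL.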
35

The Lattice Boltzmann Simulation on Multi-GPU Systems

Valderhaug, Thor Kristian January 2011 (has links)
The Lattice Boltzmann Method (LBM) is widely used to simulate different types of flow, such as water, oil and gas in porous reservoirs. In the oil industry it is commonly used to estimate petrophysical properties of porous rocks, such as the permeability. To achieve the required accuracy it is necessary to use big simulation models requiring large amounts of memory. The method is highly data intensive, making it suitable for offloading to the GPU. However, the limited amount of memory available on modern GPUs severely limits the size of the datasets that can be simulated.

In this thesis, we increase the size of the datasets possible to simulate using techniques that lower the memory requirement while retaining numerical precision. These techniques improve the size possible to simulate on a single GPU by about 20 times for datasets with 15% porosity. We then develop multi-GPU simulations for different hardware configurations using OpenCL and MPI to investigate how LBM scales when simulating large datasets.

The performance of the implementations is measured using three porous rock datasets provided by Numerical Rocks AS. By connecting two Tesla S2070s to a single host we are able to achieve a speedup of 1.95, compared to using a single GPU. For large datasets we are able to completely hide the host-to-host communication in a cluster configuration, showing that LBM scales well and is suitable for simulation on a cluster with GPUs. The correctness of the implementations is confirmed against an analytically known flow, and three datasets with known permeability, also provided by Numerical Rocks AS.
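One standard memory-reduction technique for porous media LBM is to store distribution values only for fluid (pore) nodes rather than for the whole lattice. The sketch below estimates that saving; it is an illustration under assumed parameters (D3Q19 lattice, 4-byte values) and captures only part of the ~20x improvement the thesis reports, which combines several techniques.

```python
# Sketch: memory footprint of an LBM distribution array, dense vs.
# sparse (fluid-nodes-only) storage. D3Q19 has 19 values per node.

def lbm_bytes(nx, ny, nz, porosity, q=19, bytes_per_val=4, sparse=True):
    """Bytes needed for one copy of the distribution functions."""
    nodes = nx * ny * nz
    fluid = nodes * porosity if sparse else nodes
    return int(fluid * q * bytes_per_val)

dense = lbm_bytes(512, 512, 512, 0.15, sparse=False)   # ~10.2 GB
sparse = lbm_bytes(512, 512, 512, 0.15, sparse=True)   # ~1.5 GB
```

At 15% porosity, sparse storage alone cuts memory by a factor of about 1/0.15 ≈ 6.7; further gains come from reduced-precision storage and in-place streaming schemes.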
36

Use of Mobile Devices and Multitouch Technologies to Enhance Learning Experiences

Solheim, Bendik January 2011 (has links)
The goal of this thesis was to investigate the use of mobile devices with multitouch capabilities in the learning of procedural knowledge. A system, consisting of three prototypes, was to be implemented as a way of examining our two hypotheses:

H1: By using a conceptual model close to how the human mind perceives objects, we can increase consistency both in the creation of new user manuals and in the learning process.

H2: By taking advantage of multitouch technologies we can introduce a more natural way of interacting with virtual representations of real-life objects.

A lot of research was conducted on the use of a conceptual model containing information on the physical attributes and the procedural knowledge to back our applications, and on how this could best be realized. Existing technologies for creating 3D models were investigated, but were quickly discarded due to the unique representation needed to successfully integrate the model with GOMS. The research process concluded that an application for describing new devices would have to be developed as well.

Three applications were developed to investigate our hypotheses: an application for describing the aspects of a device, written for Mac OS; a server for communicating with Prolog over TCP, written in Java; and an application for displaying the device and allowing for interaction, written for the iOS platform. The final versions of these three prototypes made it possible to create objects consisting of cubes, store them on the server, and render them in the mobile application.

The report concludes by discussing the utility of our prototype with regard to the hypotheses. Although not in its optimal state, the prototype demonstrates the utility of pure gestural interfaces, and how well-established technologies such as Prolog and GOMS can be used to empower them. Finally, interesting extensions and further work based on this thesis are proposed, demonstrating its versatility.
37

Semi-automatic Test Case Generation

Undheim, Olav January 2011 (has links)
In the European research project CESAR, requirements can be specified using templates called boilerplates. Each statement of a requirement consists of a boilerplate with inserted values for its attributes. By choosing attribute values from a domain ontology, a consistent language can be achieved. This thesis seeks to use the combination of boilerplates and a domain ontology in a semi-automatic test generation process.

There are multiple ways to automate the test generation process, with various degrees of automation involved. One option is to use the boilerplates and the domain ontology to create a test model that can be used to generate tests. Another option is to use the information from the domain ontology to assist the user when he creates tests. In this thesis, the latter option is investigated and a tool named WikiTest is developed. WikiTest uses Semantic MediaWiki and Semantic Forms to utilize the ontology and assist the user in the test creation process. Using a Cucumber syntax, the tests can be specified in a relatively free format that does not sacrifice the ability to automate test execution. An experiment is conducted, and the results show that WikiTest is easier to use and leads to higher test case quality than the alternatives. Being able to inspect the domain ontology while creating tests did not give the same results as when the ontology was integrated directly into the tool.
38

Testing of safety mechanisms in software-intensive systems

Bjørgan, Arne January 2011 (has links)
As software systems are increasingly used to control critical infrastructure, transportation systems and factory equipment, the use of proper testing methods has become more important. Systems that can cause harm to people, equipment or the environment they operate in are called safety-critical systems.

The suppliers of safety-critical systems make use of safety analysis methods to investigate possible hazards. The output from the analysis is the set of possible causes and effects of the hazards found. These results form a large part of the basis for writing safety requirements for the system.

The safety requirements should be tested thoroughly to avoid accidents. It is important that the right testing technique is applied to these systems. The consequences of a system failure can be very high, so it is crucial to use a testing technique whose approach best fits safety testing. This thesis presents an experiment that looks into these questions. The experiment also investigates how the barrier model and safety analysis results help in writing test cases for such systems.
39

hACME game : Administrative Interface

Berrum, Christian Grønnet, Johnsen, Morten Weel January 2011 (has links)
hACME game is a game-based learning tool for teaching software security. The game is intended to help raise awareness of and interest in the subject of software security. The purpose of the game is to make future software developers aware of how important security is. A key feature of the game is how it measures the participant's progress and use of hints. This thesis focuses on implementing an administrative interface for hACME game, enabling easier access to statistics and results for admin users. The foundation for the thesis is a project written in the autumn semester of 2010 by Christian Berrum and Morten Weel Johnsen, which contained a conceptual design for the administrative interface.

The result is a fully functional implementation of the administrative interface, consisting of functionality for performing management tasks and getting statistics about the game and user progress. The thesis also covers the implementation of questionnaires in order to get information about the players' learning process.
40

Evolutionary Music Composition : A Quantitative Approach

Jensen, Johannes Høydahl January 2011 (has links)
Artificial Evolution has shown great potential in the musical domain. One task in which Evolutionary techniques have shown special promise is the automatic creation or composition of music. However, a major challenge faced when constructing evolutionary music composition systems is finding a suitable fitness function. Several approaches to fitness have been tried. The most common is interactive evaluation. However, major efficiency challenges with such an approach have inspired the search for <i>automatic</i> alternatives.

In this thesis, a music composition system is presented for the evolution of novel melodies. Motivated by the repetitive nature of music, a <i>quantitative</i> approach to automatic fitness is pursued. Two techniques are explored that both operate on frequency distributions of musical events. The first builds on <i>Zipf's Law</i>, which captures the scaling properties of music. Statistical <i>similarity</i> governs the second fitness function and incorporates additional domain knowledge learned from existing music pieces.

Promising results show that pleasant melodies can emerge through the application of these techniques. The melodies are found to exhibit several favourable musical properties, including rhythm, melodic locality and motifs.
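A Zipf-based fitness function of the kind this abstract describes can be sketched as follows. This is a simplified illustration, not the thesis's implementation: it fits the slope of log(frequency) versus log(rank) for counts of some musical event (e.g. pitch intervals) and rewards closeness to the ideal Zipfian slope of -1.

```python
# Sketch: score a melody's event-count distribution by how closely its
# rank-frequency plot follows Zipf's Law (log-log slope of -1).
import math

def zipf_slope(counts):
    """Least-squares slope of log(frequency) against log(rank)."""
    freqs = sorted(counts, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def zipf_fitness(counts):
    """1.0 for a perfectly Zipfian distribution (slope -1), lower otherwise."""
    return 1.0 / (1.0 + abs(zipf_slope(counts) + 1.0))
```

Counts that decay exactly as 1/rank (e.g. 60, 30, 20, 15, 12, 10) score a fitness of 1.0, while flatter or steeper distributions score lower; such a score can be plugged straight into an evolutionary loop as an automatic fitness function.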
