271. Improving Player Performance by Developing Gaze Aware Games. Navarro, Diego. January 2014
Context. Eye tracking technology has been applied to video games mainly as an offline analysis tool or as an input for game control. Nevertheless, the application of eye tracking systems to video games is considered to be in an infant state and requires further development. The following study explores a different approach to how eye tracking systems can be used for video game interaction. Objectives. By implementing a gaze-based interaction technique, a gaze-aware space shooting game will be developed in order to provide in-game assistance that could improve player performance. Method. With the help of a Tobii REX eye tracking system, a set of 26 volunteers played two video games in a controlled environment. Both games had the same mechanics and elements, but only one of them implemented the gaze-based interaction technique. Player performance was measured as the time the players needed to finish each game. A statistical significance analysis was done in order to determine whether the testing data provided sufficient evidence to conclude a performance improvement. Results. The results showed a reduction in the time needed to finish the gaze-aware prototype, with an average time difference of 74.03 seconds, exceeding a confidence level of 99.9% when the testing data was submitted to a paired t-test. Also, the majority of the players chose the gaze-aware game as the most enjoyable, in terms of their personal preferences. Conclusions. The testing results provided sufficient evidence to conclude that the gaze-aware game improved the performance of all of the selected participants. This study provides a starting point for further development of eye tracking systems as a task-assisting method in video game interaction.
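The paired t-test used in the abstract above can be sketched in a few lines. The per-player times below are illustrative values, not the study's actual measurements; only the structure of the analysis (paired differences, t statistic, 99.9% threshold) follows the description.

```python
import math
from statistics import mean, stdev

# Hypothetical per-player completion times (seconds) on both prototypes --
# illustrative values only, not the study's data.
baseline_times = [310, 295, 402, 350, 288, 365, 330, 410, 300, 340]
gaze_aware_times = [240, 230, 310, 280, 225, 290, 260, 330, 235, 270]

# Paired t-test: each player acts as their own control, so we test the
# per-player differences against zero.
diffs = [b - g for b, g in zip(baseline_times, gaze_aware_times)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Two-tailed critical value for df = 9 at the 0.001 level (99.9% confidence).
T_CRIT_999 = 4.781
print(f"mean difference: {mean(diffs):.1f} s, t = {t_stat:.2f}")
print("significant at 99.9%:", abs(t_stat) > T_CRIT_999)
```

The same computation is available directly as `scipy.stats.ttest_rel` when SciPy is at hand.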

272. Code Reviewer Recommendation: A Context-Aware Hybrid Approach. Strand, Anton; Gunnarsson, Markus. January 2019
Background. Code reviewing is a commonly used practice in software development. It refers to the process of reviewing new code changes, commonly before they are merged with the code base. However, in order to perform the review, developers need to be assigned to that task. The problems with manual assignment include a time-consuming selection process, a limited pool of known candidates, and a risk of high reuse of the same reviewers (high workload). Objectives. This thesis aims to address the above issues with a recommendation system. The idea is to receive feedback from experienced developers in order to expand upon identified reviewer factors, which can be used to determine the suitability of developers as reviewers for a given change, and to develop and implement a solution that uses some of the most promising reviewer factors. The solution can later be deployed and validated through user and reviewer feedback in a real large-scale project. The developed recommendation system is named Carrot. Methods. An improvement case study was conducted at Ericsson. Reviewer factors were identified through a literature review and semi-structured interviews. Carrot's usability was validated through static analysis, user feedback, and static validation. Results. The results show that Carrot can help identify adequate non-obvious reviewers and be of great assistance to new developers. There are mixed opinions on Carrot's ability to assist with workload balancing and reduction of review lead time. Recommendations can be computed in a production environment in less than a quarter of a second. Conclusions. The implemented and validated approach indicates possible usefulness in performing recommendations, but could benefit significantly from further improvements. Many of the problems seen with the recommendations appear to result from corner cases that are not handled by the calculations.
The problems would benefit considerably from further analysis and testing.
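A recommender built on "reviewer factors" of this kind typically reduces to a weighted score per candidate. The sketch below is a minimal illustration of that idea; the factor names, weights, and data are assumptions, not Carrot's actual model.

```python
# Minimal weighted-factor reviewer scoring. Factors and weights are
# illustrative assumptions, not Carrot's implementation.
WEIGHTS = {"file_ownership": 0.4, "recent_activity": 0.3, "current_workload": 0.3}

def score(candidate):
    # Workload counts against the candidate, to spread reviews more evenly.
    return (WEIGHTS["file_ownership"] * candidate["file_ownership"]
            + WEIGHTS["recent_activity"] * candidate["recent_activity"]
            - WEIGHTS["current_workload"] * candidate["current_workload"])

def recommend(candidates, k=3):
    ranked = sorted(candidates, key=score, reverse=True)
    return [c["name"] for c in ranked[:k]]

devs = [
    {"name": "alice", "file_ownership": 0.9, "recent_activity": 0.6, "current_workload": 0.8},
    {"name": "bob",   "file_ownership": 0.5, "recent_activity": 0.9, "current_workload": 0.1},
    {"name": "carol", "file_ownership": 0.7, "recent_activity": 0.4, "current_workload": 0.2},
]
print(recommend(devs, k=2))
```

Note how the workload term lets a less senior but less loaded developer outrank the obvious owner, which matches the stated goal of surfacing non-obvious reviewers.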

273. Trajectory Planning for Autonomous Underwater Vehicles: A Stochastic Optimization Approach. Albarakati, Sultan. 30 August 2020
In this dissertation, we develop a new framework for 3D trajectory planning of Autonomous Underwater Vehicles (AUVs) in realistic ocean scenarios. The work is divided into three parts. In the first part, we provide a new approach for deterministic trajectory planning in a steady current, described using Ocean General Circulation Model (OGCM) data. We apply Non-Linear Programming (NLP) to the time-optimal trajectory planning problem. To demonstrate the effectiveness of the resulting model, we consider the optimal-time trajectory planning of an AUV operating in the Red Sea and the Gulf of Aden. In the second part, we generalize our 3D trajectory planning framework to time-dependent ocean currents. We also extend the framework to accommodate multi-objective criteria, focusing specifically on the Pareto front between time and energy. To assess the effectiveness of the extended framework, we initially test the methodology in idealized settings. The scheme is then demonstrated on time-energy trajectory planning problems in the Gulf of Aden. In the last part, we account for uncertainty in the ocean current field, which is described by an ensemble of flow realizations. The proposed approach is based on a non-linear stochastic programming methodology that uses a risk-aware objective function accounting for the full variability of the flow ensemble. We formulate stochastic problems that aim to minimize a risk measure of the travel time or energy consumption, using a flexible methodology that enables the user to explore various objectives, ranging seamlessly from risk-neutral to risk-averse. The capabilities of the approach are demonstrated using steady and transient currents. Advanced visualization tools have further been designed to present the simulation results.
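One common way to make an ensemble objective "range from risk-neutral to risk-averse" is to replace the mean with a tail risk measure such as conditional value-at-risk (CVaR). The abstract does not name the specific measure used, so CVaR here is an illustrative choice, and the travel times are made-up values.

```python
from statistics import mean

def cvar(values, alpha=0.8):
    """Conditional value-at-risk: mean of the worst (1 - alpha) tail."""
    tail = sorted(values)[int(alpha * len(values)):]
    return mean(tail)

# Hypothetical travel times (hours) of one candidate trajectory under an
# ensemble of flow realizations -- illustrative values only.
travel_times = [10.2, 9.8, 11.5, 10.0, 14.7, 10.4, 9.9, 13.1, 10.1, 10.3]

risk_neutral = mean(travel_times)       # expected travel time
risk_averse = cvar(travel_times, 0.8)   # mean of the worst 20% of realizations
print(f"risk-neutral: {risk_neutral:.2f} h, CVaR(0.8): {risk_averse:.2f} h")
```

A risk-averse planner optimizing CVaR would penalize the trajectory's bad realizations (here the 13.1 h and 14.7 h outcomes) even when the average looks good.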

274. System-Level Techniques for Temperature-Aware Energy Optimization. Bao, Min. January 2010
Energy consumption has become one of the main design constraints in today’s integrated circuits. Techniques for energy optimization, from circuit-level up to system-level, have been intensively researched. The advent of large-scale integration with deep sub-micron technologies has led to both high power densities and high chip working temperatures. At the same time, leakage power is becoming the dominant power consumption source of circuits, due to continuously lowered threshold voltages, as technology scales. In this context, temperature is an important parameter. One aspect, of particular interest for this thesis, is the strong inter-dependency between leakage and temperature. Apart from leakage power, temperature also has an important impact on circuit delay and, implicitly, on the frequency, mainly through its influence on carrier mobility and threshold voltage. For power-aware design techniques, temperature has become a major factor to be considered. In this thesis, we address the issue of system-level energy optimization for real-time embedded systems taking temperature aspects into consideration. We have investigated two problems in this thesis: (1) Energy optimization via temperature-aware dynamic voltage/frequency scaling (DVFS). (2) Energy optimization through temperature-aware idle time (or slack) distribution (ITD). For the above two problems, we have proposed off-line techniques where only static slack is considered. To further improve energy efficiency, we have also proposed online techniques, which make use of both static and dynamic slack. Experimental results have demonstrated that considerable improvement of the energy efficiency can be achieved by applying our temperature-aware optimization techniques. Another contribution of this thesis is an analytical temperature analysis approach which is both accurate and sufficiently fast to be used inside an energy optimization loop.
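The leakage-temperature inter-dependency described above is a feedback loop: leakage rises with temperature, and temperature rises with total power, so the steady state is a fixed point. The sketch below iterates a linearized version of that loop; all coefficients are illustrative assumptions, not the thesis' temperature analysis.

```python
# Fixed-point sketch of the leakage/temperature inter-dependency.
# All constants are illustrative assumptions (linearized leakage model).
P_DYN = 10.0        # dynamic power (W)
T_AMB = 45.0        # ambient temperature (C)
R_TH = 2.0          # thermal resistance (C/W)
LEAK_0 = 2.0        # leakage power at ambient temperature (W)
LEAK_SLOPE = 0.02   # leakage increase per degree (W/C)

def steady_state(p_dyn):
    temp = T_AMB
    for _ in range(100):  # fixed-point iteration; contracts quickly here
        leak = LEAK_0 + LEAK_SLOPE * (temp - T_AMB)   # leakage at current temp
        temp = T_AMB + R_TH * (p_dyn + leak)          # temp from total power
    return temp, leak

temp, leak = steady_state(P_DYN)
print(f"steady-state temperature: {temp:.1f} C, leakage: {leak:.2f} W")
```

Lowering the voltage/frequency reduces `p_dyn`, which lowers the steady-state temperature and, through the loop, the leakage as well; this compounding effect is what makes temperature-aware DVFS and idle-time distribution pay off.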

275. Safety Aware Platooning of Automated Electric Transport Vehicles. Jackson, Spencer Scott. 1 May 2013
Safety is a paramount concern when considering the implementation of an automated highway where computers control the vehicles. Even with computer-fast reaction times there is inevitably some delay, and if vehicles do not follow at safe distances, emergency braking maneuvers can cause dangerous collisions. This research investigates situations that might cause automated vehicles to collide dangerously, and what standards the system design must meet to keep passengers safe.
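The core safe-distance question can be sketched with standard braking kinematics: the follower travels during its reaction delay, then both vehicles brake to a stop. This is textbook physics, not the thesis' model, and the parameter values are illustrative.

```python
def min_safe_gap(v, delay, b_follow, b_lead):
    """Minimum following gap (m) so the follower stops before hitting a
    leader that brakes at full force. Standard kinematics sketch; the
    parameters below are illustrative, not taken from the thesis."""
    travel_during_delay = v * delay               # distance covered before braking
    follow_stop = v**2 / (2 * b_follow)           # follower's braking distance
    lead_stop = v**2 / (2 * b_lead)               # leader's braking distance
    return max(0.0, travel_during_delay + follow_stop - lead_stop)

# 25 m/s (90 km/h), 50 ms controller delay, follower brakes slightly
# worse (6 m/s^2) than the leader (8 m/s^2).
gap = min_safe_gap(v=25.0, delay=0.05, b_follow=6.0, b_lead=8.0)
print(f"minimum safe gap: {gap:.2f} m")
```

The asymmetry matters: if the follower can brake at least as hard as the leader, the required gap shrinks to roughly the delay distance, which is why heterogeneous braking capability is one of the dangerous situations worth investigating.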

276. Java API-Aware Code Generation Engine: A Prototype. Vijyapurpu, Chandra Sekhar. 1 May 2012
Software reuse enhances a programmer's productivity and reduces programming errors. Improving software reuse through libraries and frameworks is a vast problem area. This thesis offers an approach to two sub-problems within this area: identifying the right library components, and offering code snippets that use the components correctly. The Java API-Aware Code Generation Engine, or JAGE for short, is a prototype system that demonstrates the feasibility of generating semantically valid code snippets consisting of method calls to classes in the J2SDK library.
Developers often search for sample code snippets that describe how to use a library. This thesis describes the design and implementation of JAGE, which allows software developers to use an English sentence to generate helpful code snippets in Java. This thesis also discusses related concepts in natural-language processing, including ontology, WordNet, and object-orientation, in the area of automatic code snippet generation.
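The input/output shape of such a system (English sentence in, API-using snippet out) can be illustrated with a toy keyword lookup. JAGE's actual pipeline is ontology- and WordNet-based NLP, which this dictionary stands in for; the keyword sets and snippets are assumptions.

```python
import re

# Toy stand-in for an English-to-snippet engine. The real JAGE pipeline
# uses NLP over an ontology; this lookup only shows the interface shape.
SNIPPETS = {
    ("read", "file"): 'BufferedReader r = new BufferedReader(new FileReader("in.txt"));',
    ("sort", "list"): "Collections.sort(myList);",
}

def suggest(query):
    # Normalize the sentence to a bag of lowercase words.
    words = set(re.findall(r"[a-z]+", query.lower()))
    for keywords, snippet in SNIPPETS.items():
        if words.issuperset(keywords):
            return snippet
    return None

print(suggest("How do I sort a list?"))
```

The hard parts JAGE addresses start exactly where this toy stops: resolving synonyms ("order a collection"), choosing among overloaded APIs, and emitting snippets that are semantically valid in context.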

277. Business Environment-Aware Management of Service-Based Business Processes. Bouchaala Charfeddine, Olfa. 30 September 2016
Continuous business environment changes urge companies to adapt their processes from a business environment point of view. Indeed, companies struggle to find a balance between adapting their processes and remaining competitive. While the imperative nature of business processes is too rigid for run-time adaptation, purely rule-based, declarative business processes are very time-consuming. Hybrid approaches in turn try to reconcile the two, aiming to meet market requirements; nevertheless, they still require an effort to align business logic and process logic. Therefore, in this thesis, we focus on business environment-aware management of service-based business processes (SBPs), aiming to conciliate imperative and declarative approaches. Our challenge is to develop a hybrid management approach that preserves industry standards for describing and managing SBPs while minimizing designers' efforts. Based on a semantic modeling of the business environment, business processes, and their relationships, together with a control-dependency analysis of business processes, we synthesize a controller, itself modeled as a process, connected to the business process to be monitored and configured at run time. We validated the feasibility of our management approach by implementing the framework Business Environment-Aware Management for Service-based Business Processes (BEAM4SBP). Experiments show the efficiency of our approach with respect to other BEAM approaches.

278. Study of the Scalability of Segmentation and Classification Algorithms to Process Massive Datasets for Remote Sensing Applications. Lassalle, Pierre. 6 November 2015
Recent Earth observation spatial missions will provide optical images at very high spectral, spatial and temporal resolution, representing a huge amount of data. The objective of this research is to propose innovative algorithms to process such massive datasets efficiently on resource-constrained devices. Developing new efficient algorithms that ensure results identical to those obtained without the memory limitation is a challenging task. The first part of this thesis focuses on the adaptation of segmentation algorithms when the input satellite image cannot be stored in main memory. A naive solution consists of dividing the input image into tiles and segmenting each tile independently; the final result is built by grouping the segmented tiles together. Applying this strategy turns out to be suboptimal, since it modifies the resulting segments compared to those obtained from segmentation without tiling. A deep study of region-merging segmentation algorithms allows us to develop a tile-based, scalable solution that segments images of arbitrary size while ensuring results identical to those obtained without tiling. The feasibility of the solution is shown by segmenting several very high resolution Pléiades images requiring gigabytes of memory to store. The second part of the thesis focuses on supervised learning methods when the training dataset cannot be stored in memory. In this thesis, we study the Random Forest algorithm, which consists of building an ensemble of decision trees. Several solutions have been proposed to adapt this algorithm to massive training datasets, but they remain either approximate, because the memory limitation restricts the algorithm's visibility to a small portion of the training data, or inefficient, because they need many read and write accesses to the hard disk. To solve those issues, we propose an exact solution ensuring the visibility of the algorithm over the whole training dataset while minimizing read and write accesses to the hard disk. The running time is analysed while varying the size of the training dataset, and shows that our proposed solution is very competitive with existing solutions and can be used to process hundreds of gigabytes of data.

279. Power, Performance and Energy Models and Systems for Emergent Architectures. Song, Shuaiwen. 10 April 2013
Massive parallelism combined with complex memory hierarchies and heterogeneity in high-performance computing (HPC) systems form a barrier to efficient application and architecture design. The performance achievements of the past must continue over the next decade to address the needs of scientific simulations. However, building an exascale system by 2022 that uses less than 20 megawatts will require significant innovations in power and performance efficiency.
A key limitation of past approaches is a lack of power-performance policies allowing users to quantitatively bound the effects of power management on the performance of their applications and systems. Existing controllers and predictors use policies fixed by a knowledgeable user to opportunistically save energy and minimize performance impact. While the qualitative effects are often good and the aggressiveness of a controller can be tuned to try to save more or less energy, the quantitative effects of tuning and setting opportunistic policies on performance and power are unknown. In other words, the controller will save energy and minimize performance loss in many cases but we have little understanding of the quantitative effects of controller tuning. This makes setting power-performance policies a manual trial and error process for domain experts and a black art for practitioners. To improve upon past approaches to high-performance power management, we need to quantitatively understand the effects of power and performance at scale.
In this work, I have developed theories and techniques to quantitatively understand the relationship between power and performance for high performance systems at scale. For instance, our system-level iso-energy-efficiency model analyzes, evaluates and predicts the performance and energy use of data intensive parallel applications on multi-core systems. This model allows users to study the effects of machine- and application-dependent characteristics on system energy efficiency. Furthermore, this model helps users isolate root causes of energy or performance inefficiencies and develop strategies for scaling systems to maintain or improve efficiency. I have also developed methodologies which can be extended and applied to model modern heterogeneous architectures such as GPU-based clusters to improve their efficiency at scale. (Ph.D.)
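The flavor of such a system-level model can be shown with a back-of-the-envelope sketch: runtime follows Amdahl's law in the core count, and energy combines per-core power with that runtime. All constants and the power model itself are illustrative assumptions, not the iso-energy-efficiency model from this work.

```python
# Back-of-the-envelope energy-scaling sketch (illustrative constants only):
# Amdahl runtime plus a crude per-core power model.
SERIAL_FRACTION = 0.05   # non-parallelizable share of the work
T1 = 100.0               # single-core runtime (s)
P_STATIC = 5.0           # static power per core (W)
P_DYNAMIC = 15.0         # dynamic power per active core (W)

def runtime(cores):
    # Amdahl's law: serial part is unaffected by adding cores.
    return T1 * (SERIAL_FRACTION + (1 - SERIAL_FRACTION) / cores)

def energy(cores):
    # Every core burns static power for the whole run; dynamic power is
    # (crudely) modeled as fully drawn on all cores.
    return cores * (P_STATIC + P_DYNAMIC) * runtime(cores)

for n in (1, 4, 16, 64):
    print(f"{n:3d} cores: time {runtime(n):7.2f} s, energy {energy(n):9.0f} J")
```

Even this toy model reproduces the qualitative tension the work studies: runtime keeps falling with more cores while total energy rises once the serial fraction dominates, so energy efficiency at scale hinges on machine- and application-dependent parameters.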

280. A Language-Based Approach to Robust Context-Aware Software. Inoue, Hiroaki. 26 March 2018
Doctoral dissertation, Doctor of Informatics, Kyoto University, Graduate School of Informatics, Department of Communications and Computer Engineering; affiliated with the Collaborative Graduate Program in Design. Examining committee: Prof. Atsushi Igarashi, Prof. Toru Ishida, Prof. Akihiro Yamamoto.