461
Understanding strategic adaptation in dual-task situations as cognitively bounded rational behavior. Janssen, C. P. January 2012
In this thesis I explored when people interleave attention in dual-task settings. The hypothesis is that people try to perform in a cognitively bounded rational way. Performance is limited by constraints that come from the task environment and from cognition. If, given these constraints, multiple strategies for interleaving tasks are available, then people will interleave tasks in a way that aligns with their local priority objective (Chapter 3), or that maximizes the value of an objective payoff function that evaluates performance (Chapter 4). This hypothesis was tested using a combination of experimental studies and computational cognitive models. Across a series of studies, the interplay between different constraints was investigated. In Chapters 5 and 6, I developed mathematical models to identify which task combinations in general allow for “ideal payoff manipulations” in studies of task interleaving. The work contributed to the existing literature in four ways: (1) it provided an overarching theory of skilled human dual-task performance and tested this in relatively applied settings, (2) the theory was formalized in computational cognitive models that can predict the performance of unobserved strategies and that can bracket the (optimal) performance space, (3) linear and logarithmic tasks were identified as an ideal combination for achieving ideal payoff manipulations, and (4) results demonstrated that in multitasking situations attention is not necessarily interleaved solely at chunk boundaries and other “natural breakpoints”, but that this depends on a person’s priorities. The work has implications for driver distraction research, in that it helps in systematically understanding the performance trade-offs that people face when multitasking. Moreover, the modeling framework could be used for model-based evaluation of new mobile interfaces. Finally, the demonstration that priorities can strongly influence multitasking performance highlights the importance of public safety campaigns that emphasize awareness of driver safety. Limitations and further implications are discussed.
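To make the notion of an objective payoff function concrete, here is a minimal sketch of scoring interleaving strategies in a dialing-while-steering setting. All numbers, strategy outcomes and function names are illustrative assumptions of mine, not parameters from the thesis.

```python
# A minimal sketch: evaluate task-interleaving strategies against an objective
# payoff function. Values are hypothetical, not taken from the thesis.

def payoff(total_dial_time, max_lane_deviation,
           time_cost=1.0, deviation_cost=50.0):
    """Higher is better: fast dialing, small lane deviation."""
    return -(time_cost * total_dial_time + deviation_cost * max_lane_deviation)

# Each strategy: dial k digits per glance before returning attention to steering.
# Assumed outcomes: dial time grows with more interleaving; lane deviation grows
# with longer glances away from the road.
strategies = {
    1: (14.0, 0.3),   # interleave after every digit: slow but safe
    3: (10.5, 0.7),   # interleave at chunk boundaries of the number
    11: (8.0, 1.8),   # dial all digits in one glance: fast but unsafe
}

best = max(strategies, key=lambda k: payoff(*strategies[k]))
for k, (t, dev) in strategies.items():
    print(f"digits per glance={k:2d}  payoff={payoff(t, dev):7.1f}")
print("best strategy:", best, "digits per glance")
```

Under a payoff that weights safety heavily, the strategy ranking shifts toward frequent interleaving, which mirrors the thesis's point that priorities, not just natural breakpoints, shape where attention is interleaved.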
462
Search based software project management. Ren, J. January 2013
This thesis investigates the application of the Search Based Software Engineering (SBSE) approach to the field of Software Project Management (SPM). With SBSE approaches, a pool of candidate solutions to an SPM problem is automatically generated and gradually evolved to be increasingly more desirable. The thesis is motivated by the observation from industrial practice that it is much more helpful to the project manager to provide insightful knowledge than exact solutions. We investigate whether SBSE approaches can aid project managers in decision making by not only providing them with desirable solutions, but also illustrating insightful “what-if” scenarios during the phases of project initiation, planning and enactment. SBSE techniques can automatically “evolve” solutions to software requirement elicitation, project staffing and scheduling problems. However, the current state-of-the-art computer-aided software project management tools remain limited in several respects. First, software requirement engineering is plagued by problems associated with unreliable estimates. Estimates made early are assumed to be accurate, but projects are estimated and executed in an environment filled with uncertainties that may lead to delays or disruptions. Second, software project scheduling and staffing are two closely related problems that have been studied separately by most published research in the field of computer-aided software project management, yet software project managers are usually confronted with the complex trade-offs and correlations between scheduling and staffing. Last, full attendance of the required staff is usually assumed once staff have been assigned to the project, but the execution of a project is subject to staff absences, caused by sickness and turnover, for example. This thesis makes the following main contributions: (1) introducing an automated SBSE approach to sensitivity analysis for requirement elicitation, which helps to achieve more accurate estimates by directing extra estimation effort towards error-sensitive requirements and budgets; (2) demonstrating that co-evolutionary approaches can simultaneously co-evolve solutions for both work package sequencing and project team sizing, yielding better results on these two interrelated problems than random search and single-population evolutionary algorithms; (3) presenting co-evolutionary approaches that can guide the project manager in anticipating and ameliorating the impact of staff absence; (4) investigating seven sets of real-world data on software requirements and software project plans, revealing general insights into, as well as exceptions to, our approach in practice; and (5) establishing a tool that implements the above concepts. These contributions support the thesis that automated SBSE tools can be beneficial for solution generation and, most importantly, for providing insightful knowledge for decision making in the practice of software project management.
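To illustrate the flavour of the search-based approach, the sketch below evolves work-package sequences with a simple single-population evolutionary algorithm, the baseline the thesis compares against; the effort values, team size and fitness function are my own illustrative assumptions, and the thesis's co-evolutionary setup additionally evolves team size in a second population.

```python
import random

# Illustrative sketch, not the thesis's implementation: evolve an ordering of
# work packages so that a fixed-size team finishes as early as possible.

EFFORT = [8, 3, 5, 13, 2, 7, 11, 4]   # person-days per work package (assumed)
TEAM = 3                               # fixed team size for this sketch

def makespan(order):
    free = [0.0] * TEAM                       # next-free time per team member
    for wp in order:
        i = free.index(min(free))             # greedy: earliest-available member
        free[i] += EFFORT[wp]
    return max(free)

def evolve(generations=200, pop_size=30):
    pop = [random.sample(range(len(EFFORT)), len(EFFORT)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                # lower makespan = fitter
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            a, b = random.sample(range(len(child)), 2)   # swap mutation
            child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("best order:", best, "makespan:", makespan(best))
```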
463
Logic design testing. Kaposi, A. A. January 1971
This thesis concerns the testing of the design of logic networks. It is shown that conventional test methods, such as hardware testing and computer simulation, fail to satisfy the test requirements of modern logic networks. A new method is devised, consisting of a series of computer-based modular tests, which permit the comprehensive verification of designs. The feasibility of the new method has been demonstrated on a prototype system. The work is based on a systems engineering approach which permits viewing the problems of logic design as particular cases of the more general problems of designing large interactive engineering systems. The systems approach also permits the extension of the methods described in this thesis to other areas of engineering. As part of the thesis, a framework of systems engineering concepts is constructed.
464
A study of time and energy efficient algorithms for parallel and heterogeneous computing. Ojiaku, Jude-Thaddeus. January 2016
This PhD project is motivated by the need to achieve better and more energy-efficient computing through the use of parallelism and heterogeneous systems. Our contribution consists of both theoretical results and in-depth, comprehensive empirical studies that aim to provide more insight into parallel and heterogeneous computing. Our first problem is a theoretical one that focuses on the scheduling of a special category of jobs known as deteriorating jobs. Such jobs require more effort to complete if postponed to a later time. They are intended to model several industrial processes, including steel production, fire-fighting and financial management. We study the problem in the context of parallel machine scheduling in an online setting where jobs have arbitrary release times. Our main results show that List Scheduling is $(1+b_{max})$-competitive and that no deterministic algorithm is better than $(1+b_{max})^{1-\frac{1}{m}}$-competitive, where $b_{max}$ is the largest deteriorating rate and $m$ is the number of machines. We also extend our results to online deterministic algorithms and show that no deterministic online algorithm is better than $(1+b_{max})$-competitive. Our next study concerns the scheduling of $n$ jobs with precedence constraints on $m$ parallel machines. We are interested in the chain precedence constraint, where each job can have at most one predecessor and at most one successor. The jobs are modelled as directed acyclic graphs, where nodes represent jobs and edges represent the precedence constraints between them. Each job has a strict deadline that must be met. The parallel machines are considered to be unrelated, and a communication network connects each pair of machines. Executing jobs on the machines, as well as communicating across the network, incurs costs in the form of time and energy; these costs are given by cost matrices covering processing and communication. The goal is to construct a feasible schedule that minimizes the total energy required to execute the chain of jobs on the machines, such that all deadlines are met. We present a dynamic programming solution to the problem that leads to a pseudo-polynomial time algorithm with running time $O(nm^2d_{max})$, where $d_{max}$ is the largest deadline, and we show that the algorithm computes an optimal schedule whenever one exists. We then proceed to a similar problem that involves scheduling jobs to minimize flow time plus energy. This problem is based on a dynamic speed scaling heuristic in the literature, called AJC, which adjusts the speed of a processor based on the number of \emph{active jobs}. We present a comprehensive empirical study covering several job selection, speed selection and processor allocation heuristics, in both single-processor and multiprocessor settings. Our main goal is to investigate the viability of designing a fixed-speed counterpart to AJC that is much simpler and less computationally intensive. We also evaluate the performance of this fixed-speed heuristic and compare it with that of AJC.
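As a concrete illustration of the first study's setting, the sketch below simulates online List Scheduling of deteriorating jobs on parallel machines. The deterioration model is my assumption (the simple linear model common in this line of work, in which a job with rate b started at time s runs for b·s time units, so it completes at s(1+b)); release times are kept at least 1 so durations are positive.

```python
import heapq

# Minimal sketch of List Scheduling for online jobs with linear deterioration.
# Jobs are (release_time, b); each is assigned greedily to the machine that can
# start it earliest.

def list_schedule(jobs, m):
    machines = [0.0] * m                 # heap of next-free times, one per machine
    heapq.heapify(machines)
    for release, b in sorted(jobs):      # handle jobs in release order
        free = heapq.heappop(machines)
        start = max(release, free)
        finish = start * (1 + b)         # duration b*start under linear deterioration
        heapq.heappush(machines, finish)
    return max(machines)                 # makespan

jobs = [(1.0, 0.5), (1.2, 0.1), (2.0, 0.9), (2.5, 0.3), (3.0, 0.7)]
print("makespan on 2 machines:", round(list_schedule(jobs, 2), 3))
```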
Our fourth and final study involves the use of the graphics processing unit (GPU) as an accelerator for compute-intensive tasks. The GPU has become a very popular processor for heterogeneous computing, from both an economic and a performance standpoint. Firstly, we contribute to the development of a bioinformatics tool, called GapsMis, by implementing a heterogeneous version that uses graphics processors for acceleration. GapsMis is a tool designed for the alignment of sequences, such as protein and DNA sequences, and allows for the insertion of gaps in the alignment. We then present a case study that aims to highlight the various aspects, including benefits and challenges, involved in developing heterogeneous applications that are vendor-agnostic. To do this we select four algorithms as case studies, including GapsMis and the algorithm presented in our second problem; the other two algorithms are based on Velocity-Verlet integration and the Fruchterman-Reingold force-based method for graph layout. We make use of the Open Computing Language (OpenCL) and C++ to implement the algorithms on a range of graphics processors from Advanced Micro Devices (AMD) and NVIDIA Corporation. We evaluate several factors that can affect the performance of these applications on each platform. We also compare the performance of our algorithms in a multi-GPU setting and against single-core and multi-core CPU implementations. Furthermore, several metrics are defined to capture different aspects of performance, including the execution time of application kernel(s), the execution time of the application including communication, throughput, power and energy consumption.
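The portability point above can be seen in a minimal OpenCL host program. The sketch below uses the pyopencl bindings rather than the C++ of the thesis, and the vector-add kernel is purely illustrative (none of this is the thesis's GapsMis code): the same host code runs unchanged on AMD and NVIDIA devices.

```python
import numpy as np
import pyopencl as cl

# Minimal vendor-agnostic OpenCL example: add two vectors on whatever
# OpenCL device is available.

KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
"""

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

ctx = cl.create_some_context()          # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, KERNEL).build()
prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print("max error:", np.max(np.abs(out - (a + b))))
```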
465
Majority problems in distributed systems and clustering in structured graphs. Hamilton, D. D. January 2017
This thesis studies algorithms in the research areas of distributed computing and machine learning. More precisely, it contains research into various communication protocols in different settings of distributed computing, accompanied by analysis of protocol performance in time and space. These protocols are designed to operate in analogous environments using different models of communication, primarily population protocols and random walk variants. In our settings we aim to use as little memory as possible, achieving lightweight protocols that are powerful in their capabilities, in both randomized and deterministic forms. We also propose a novel verification technique which enables multi-step protocols to work in synergy. Such protocols generally never terminate but instead converge, which makes it difficult to disseminate their results throughout the network for use in dependent processes; with the proposed verification technique, protocols can become adaptive and can be stacked into a chain of dependent processes. We also provide experimental analysis in a subarea of machine learning, unsupervised clustering algorithms. Drawing inspiration from the agglomerative techniques of classical hierarchical clustering, as well as phylogenetic tree building methods, we provide a comprehensive study and evaluation of a new method for agglomeratively combining `similar' data into clusters, based on the general consensus of taxonomy and the evaluation of clustering mechanisms.
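To give a flavour of the lightweight protocols described above, here is a minimal simulation of the well-known three-state approximate-majority population protocol (due to Angluin et al.), not one of the thesis's own protocols: agents hold one of two opinions or a blank state, a random scheduler picks interacting pairs, and the initial majority wins with high probability using only constant memory per agent.

```python
import random

# States: 'X' and 'Y' are the two opinions, 'B' is blank/undecided.
# Rules: when X meets Y, the responder is blanked; a blank adopts the
# opinion of the agent it meets.

def interact(initiator, responder):
    if {initiator, responder} == {"X", "Y"}:
        return initiator, "B"
    if responder == "B" and initiator in ("X", "Y"):
        return initiator, initiator
    if initiator == "B" and responder in ("X", "Y"):
        return responder, responder
    return initiator, responder

def run(n_x, n_y, seed=0):
    rng = random.Random(seed)
    pop = ["X"] * n_x + ["Y"] * n_y
    interactions = 0
    while len(set(pop)) > 1:                    # until consensus
        i, j = rng.sample(range(len(pop)), 2)   # random scheduler
        pop[i], pop[j] = interact(pop[i], pop[j])
        interactions += 1
    return pop[0], interactions

winner, steps = run(60, 40)
print(f"converged to {winner} after {steps} interactions")
```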
466
Timed and choreographical multiparty session types theory. Yang, Weizhen. January 2016
Multiparty session types (MPSTs) are a formalism for describing protocols among multiple participants from a global point of view. They are used to check type safety and deadlock freedom for programs. In real-time distributed systems, however, protocol descriptions often include time-related specifications, such as deadlines and timeouts, which impose time restrictions on interactions to guarantee quality of service, as well as parallel computation for improving efficiency. The original theory of multiparty session types lacks the ability to express time and parallel actions in the global description. We therefore propose the theory of timed choreographical session types to solve this problem. To express time-related specifications, we associate time constraints and clock resets with asynchronous message passing in global types and local types. We define the labelled transition systems of timed global and local types, and prove that soundness and completeness are preserved through projection. We add a delay primitive to the session calculus to obtain a minimal extension of the asynchronous π-calculus, and prove type preservation and subject reduction theorems. We also formalise the theorem of time error-freedom and the properties of feasibility and wait-freedom, to guarantee that no deadlock is caused by time errors. Additionally, we show that timed global session types enjoy the progress property if the global types stripped of time constraints and resets are verified to progress. In the theory of choreographical session types, for the purpose of precise thread management, explicit fork/merge operations for parallel composition and choice/join operations for branching are defined in both global and local session types and in the session calculus. To ensure well-behaved communications, we formalise the definition of well-formedness for global types and present sufficient conditions for verifying well-formedness. Based on the choreographical session calculus, we define the operational semantics and typing system for processes and prove subject reduction and type safety theorems. When modelling scenarios of concurrent real-time systems, it is natural to combine the formalisms of timed session types and choreographical session types. We therefore add time assertions to labelled message-passing operations in choreographical global and local types, and a delay primitive to the session calculus. Thanks to the orthogonality of time and parallel composition, we readily prove subject reduction and time error-freedom within the timed choreographical context. To handle the complexity introduced in global types, we present the progress enablement and time progress properties to guarantee that processes will progress.
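To give a flavour of what a timed global type looks like, here is a schematic two-party example in simplified notation; this is my own illustration, not the thesis's exact syntax. A clock constraint guards each message and a reset may accompany it.

```latex
% Schematic timed global type: A must send within 3 time units on its clock
% x_A, resetting x_A; B must reply within 5 units on its clock x_B.
\[
  G \;=\; \mathtt{A} \to \mathtt{B} : \langle \mathit{int} \rangle
            \{\, x_\mathtt{A} \le 3,\; x_\mathtt{A} := 0 \,\}.\;
          \mathtt{B} \to \mathtt{A} : \langle \mathit{ok} \rangle
            \{\, x_\mathtt{B} \le 5 \,\}.\;
          \mathsf{end}
\]
```

Projection then yields local types for A and B carrying the same constraints, and the labelled transition systems mentioned above relate the timed behaviour of the global type to that of the projected local types.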
467
New algorithms and mathematical tools for phylogenetics beyond trees. Scholz, Guillaume. January 2018
Phylogenetic trees and networks are mathematical structures for representing the evolutionary history of a set of taxa. The need for methods to build such structures from various types of data, as well as the need to understand the story these data may tell, gives rise to exciting new challenges for mathematics and computer science. This thesis presents recent advances in both directions. It features new mathematical methodology for reconstructing phylogenetic networks, and new computational tools for inferring complex evolutionary scenarios. These come with a thorough analysis assessing their attractiveness in terms of their theoretical properties. The thesis expands on previous results, which are themselves briefly reviewed, and concludes with potentially interesting further research questions.
468
Heuristic multicast routing algorithms in WSNs with incomplete network knowledge. Katerinchuk, Valeri. January 2018
No description available.
469
Visual speech synthesis using dynamic visemes and deep learning architectures. Thangthai, Ausdang. January 2018
The aim of this work is to improve, over existing methods, the naturalness of visual speech synthesis produced automatically from a linguistic input. Firstly, and most importantly, we investigate the most suitable speech units for visual speech synthesis. We propose the use of dynamic visemes instead of phonemes or static visemes, and find that dynamic visemes generate better visual speech than either phoneme or static viseme units; the best performance is obtained by a combined phoneme-dynamic viseme system. Secondly, we examine the most appropriate model, comparing the hidden Markov model (HMM) with deep learning models that include feedforward and recurrent structures in one-to-one, many-to-one and many-to-many architectures. Results suggest that frame-by-frame synthesis with the deep learning approach outperforms state-based synthesis with HMM approaches, and that an encoder-decoder many-to-many architecture is better than the one-to-one and many-to-one architectures. Thirdly, we explore the importance of contextual features that include information at varying linguistic levels, from the frame level up to the utterance level. We find that frame-level information is the most valuable feature, as it avoids discontinuities in the visual feature sequence and produces a smooth and realistic animation output. Fourthly, we find that the two most common objective measures, correlation and root mean square error, are not able to indicate the realism and naturalness of human-perceived quality. We introduce an alternative objective measure and show that the global variance is a better indicator of human perception of quality. Finally, we propose a novel method to convert a given text input and phoneme transcription into a dynamic viseme transcription when a reference dynamic viseme sequence is not available. Subjective preference tests confirm that our proposed method is able to produce animations that are statistically indistinguishable from animations produced using reference data.
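The global-variance comparison can be illustrated with a small numerical sketch. The arrays, shapes and the over-smoothing factor below are illustrative assumptions of mine: the point is that an over-smoothed trajectory can look acceptable by RMSE while its global variance reveals the lost naturalness.

```python
import numpy as np

def global_variance(feats):
    """feats: (n_frames, n_dims) visual feature trajectory -> (n_dims,) GV."""
    return np.var(feats, axis=0)

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(300, 20))   # ground-truth features
oversmoothed = 0.4 * reference                     # shrunken dynamics

# RMSE is moderate, but GV exposes the deflated variance of the synthesis.
rmse = np.sqrt(np.mean((reference - oversmoothed) ** 2))
gv_ratio = global_variance(oversmoothed) / global_variance(reference)
print(f"RMSE: {rmse:.3f}")
print(f"mean GV ratio (synth/ref): {gv_ratio.mean():.2f}  (1.0 = natural variance)")
```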
470
SAFEL: a Situation-Aware Fear Learning model. Rizzi Raymundo, Caroline. January 2017
This thesis proposes a novel and robust online adaptation mechanism for threat prediction and prevention, capable of taking complex contextual and temporal information into consideration in its internal learning processes. The proposed mechanism is a hybrid cognitive computational model named SAFEL (Situation-Aware FEar Learning), which integrates machine learning algorithms with concepts of situation-awareness from expert systems to simulate both the cued and contextual fear-conditioning phenomena. SAFEL is inspired by well-known neuroscience findings on the brain's mechanisms of fear learning and memory, and provides autonomous robots with the ability to predict undesirable or threatening situations. SAFEL's ultimate goal is to allow autonomous robots to perceive intricate elements and relationships in their environment, learn with experience through autonomous environmental exploration, and adapt at execution time to environmental changes and threats. SAFEL consists of a hybrid architecture composed of three modules, each based on a different approach and inspired by a different region (or function) of the brain involved in fear learning. These modules are: the Amygdala Module (AM), the Hippocampus Module (HM) and the Working Memory Module (WMM). The AM learns and detects environmental threats, while the HM makes sense of the robot's context. The WMM is responsible for combining and associating the two types of information processed by the AM and the HM. More specifically, the AM simulates the cued conditioning phenomenon by creating associations between co-occurring aversive and neutral environmental stimuli; it represents the kernel of emotional appraisal and threat detection in SAFEL's architecture. The HM, in turn, handles environmental information at a higher level of abstraction and complexity than the AM, depicting the robot's situation as a whole. The information managed by the HM embeds in a unified representation the temporal interactions of multiple stimuli in the environment. Finally, the WMM simulates the contextual conditioning phenomenon by creating associations between the contextual memory formed in the HM and the emotional memory formed in the AM, thus giving emotional meaning to the contextual information acquired in past experiences. Ultimately, any previously experienced pattern of contextual information triggers the retrieval of that stored contextual memory and its emotional meaning from the WMM, warning the robot that an undesirable situation is likely to happen in the near future. The main contribution of this work, as compared to the state of the art, is a domain-independent mechanism for online learning and adaptation that combines a fear-learning model with the concept of temporal context, focused on real-world applications in autonomous robotics. SAFEL successfully integrates a symbolic rule-based paradigm for situation management with machine learning algorithms for memorizing and predicting environmental threats to the robot based on complex temporal context. SAFEL has been evaluated in several experiments that analysed the performance of each module separately. Finally, we conducted a comprehensive case study in the robot soccer scenario to evaluate the collective work of all modules as a whole. This case study also analyses the extent to which the emotional feedback of SAFEL can improve the intelligent behaviour of a robot in a practical real-world situation, where adaptive skills and fast, flexible decision-making are crucial.
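The three-module architecture described above can be summarised in a structural sketch. This is my own simplification for illustration, not SAFEL's actual code: the learning rule, the set representation of situations and all class and method names are assumptions.

```python
# Structural sketch of the SAFEL-style architecture: the AM learns
# stimulus-threat associations (cued conditioning), the HM summarises the
# current situation, and the WMM links HM contexts to AM appraisals
# (contextual conditioning) to warn of likely threats.

class AmygdalaModule:
    def __init__(self):
        self.assoc = {}                        # stimulus -> running threat score

    def update(self, stimuli, aversive):       # cued conditioning
        for s in stimuli:
            old = self.assoc.get(s, 0.0)
            self.assoc[s] = old + 0.3 * ((1.0 if aversive else 0.0) - old)

    def appraise(self, stimuli):
        return max((self.assoc.get(s, 0.0) for s in stimuli), default=0.0)

class HippocampusModule:
    def situation(self, recent_stimuli):       # unified temporal context
        return frozenset(recent_stimuli)

class WorkingMemoryModule:
    def __init__(self):
        self.context_fear = {}                 # situation -> emotional meaning

    def associate(self, situation, appraisal): # contextual conditioning
        self.context_fear[situation] = appraisal

    def predict(self, situation):
        return self.context_fear.get(situation, 0.0)

am, hm, wmm = AmygdalaModule(), HippocampusModule(), WorkingMemoryModule()
am.update(["loud_noise", "red_light"], aversive=True)
ctx = hm.situation(["loud_noise", "red_light"])
wmm.associate(ctx, am.appraise(["loud_noise", "red_light"]))
print("threat expected in this context:", wmm.predict(ctx) > 0.2)
```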