31 |
Reducing deadline miss rate for grid workloads running in virtual machines: a deadline-aware and adaptive approach. Khalid, Omer. January 2011.
This thesis explores three major areas of research: integration of virtualization into scientific grid infrastructures, evaluation of the virtualization overhead on the performance of HPC grid jobs, and optimization of job execution times to increase throughput by reducing the job deadline miss rate. Integrating virtualization into the grid to deploy on-demand virtual machines for jobs, in a way that is transparent to end users and has minimal impact on the existing system, poses a significant challenge. This involves creating virtual machines, decompressing the operating system image, adapting the virtual environment to satisfy the software requirements of the job, constantly updating the job state once it is running without modifying the batch system or existing grid middleware, and finally bringing the host machine back to a consistent state. To facilitate this research, an existing, in-production pilot job framework was modified to deploy virtual machines on demand on the grid, using the virtualization administrative domain to handle all I/O and so increase network throughput. This approach limits the change impact on the existing grid infrastructure while leveraging the execution and performance isolation capabilities of virtualization for job execution. This work led to an evaluation of various scheduling strategies used by the Xen hypervisor, measuring the sensitivity of job performance to the amount of CPU and memory allocated under various configurations. However, virtualization overhead is also a critical factor in determining job execution times. Grid jobs have a diverse set of requirements for machine resources such as CPU, memory, and network, and have inter-dependencies on other jobs in meeting their deadlines, since the input of one job can be the output of a previous job. A novel resource provisioning model was devised to decrease the impact of virtualization overhead on job execution. Finally, dynamic deadline-aware optimization algorithms were introduced, using exponential smoothing and rate limiting to predict job failure rates based on static and dynamic virtualization overhead. Statistical techniques were also integrated into the optimization algorithm to flag jobs that are at risk of missing their deadlines and to take preventive action, increasing overall job throughput.
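A minimal sketch of the kind of deadline-aware optimisation described above, assuming a single-exponential smoother and a simple per-cycle rate limit; the class name, job fields, and thresholds are illustrative assumptions, not the algorithm defined in the thesis:

```python
# Illustrative sketch: flag grid jobs at risk of missing their deadlines by
# exponentially smoothing an observed miss-rate signal, with a simple rate
# limit on preventive actions per cycle. All names, job fields, and thresholds
# are assumptions for illustration only.

class DeadlineRiskMonitor:
    def __init__(self, alpha=0.3, max_flags_per_cycle=5):
        self.alpha = alpha                              # smoothing factor
        self.max_flags_per_cycle = max_flags_per_cycle  # rate limiting
        self.smoothed_miss_rate = 0.0

    def update(self, observed_miss_rate):
        """Single exponential smoothing of the observed deadline-miss rate."""
        self.smoothed_miss_rate = (self.alpha * observed_miss_rate
                                   + (1.0 - self.alpha) * self.smoothed_miss_rate)
        return self.smoothed_miss_rate

    def flag_at_risk_jobs(self, jobs):
        """Flag jobs whose predicted slack is negative, up to the rate limit."""
        flagged = []
        for job in jobs:
            # Inflate the remaining runtime estimate by the smoothed overhead.
            predicted_runtime = job["runtime_estimate"] * (1.0 + self.smoothed_miss_rate)
            slack = job["deadline"] - (job["elapsed"] + predicted_runtime)
            if slack < 0 and len(flagged) < self.max_flags_per_cycle:
                flagged.append(job["id"])   # candidate for preventive action
        return flagged
```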
|
32 |
User experience, performance, and social acceptability: usable multimodal mobile interaction. Williamson, Julie R. January 2012.
This thesis explores the social acceptability of multimodal interaction in public places with respect to acceptance, adoption and appropriation. Previous work in multimodal interaction has mainly focused on recognition and detection issues without thoroughly considering the willingness of users to adopt these kinds of interactions in their everyday lives. This thesis presents a novel approach to user experience that is theoretically motivated by phenomenology, practiced with mixed methods, and analysed based on dramaturgical metaphors. In order to explore the acceptance of multimodal interfaces, this thesis presents three studies that look at users' initial reactions to multimodal interaction techniques: a survey study focusing on gestures, an on-the-street user study, and a follow-up survey study looking at gesture and voice-based interaction. The investigation of multimodal interaction adoption is explored through two studies: an in situ user study of a performative interface and a focus group study using experience prototypes. This thesis explores the appropriation of multimodal interaction by demonstrating the complete design process of a multimodal interface using the performative approach to user experience presented in this thesis. Chapter 3 looks at users' initial reactions to and acceptance of multimodal interactions. The results of the first survey identified location and audience as factors that influence how individuals behave in public places. Participants in the on-the-street study described the visual aspects of the gestures as playful, cool, or embarrassing aspects of interaction, and discussed how gestures could be hidden as everyday actions. These results begin to explain why users accepted or rejected the gestures from the first survey. The second survey demonstrated that the presence of familiar spectators made interaction significantly more acceptable. This result indicates that performative interaction could be made more acceptable by interfaces that support collaborative or social interaction. Chapter 4 explores how users place interactions into a usability context for use in real world settings. In the first user study, participants took advantage of the wide variety of possible performances and created a wide variety of input, from highly performative to hidden actions, based on location. The ability of this interface to support flexible interactions allowed users to present the purpose of their actions differently based on the immediately co-located spectators. Participants in the focus group study discussed how they would go about placing multimodal interactions into real world contexts, using three approaches: relationship to the device, personal meaning, and relationship to functionality. These results demonstrate how users view interaction within a usability context and how that might affect social acceptability. Chapter 5 examines appropriation of multimodal interaction through the completion of an entire design process. The results of an initial survey were used as a baseline of comparison from which to design the following focus group study. Participants in the focus groups had similar motives for accepting multimodal interactions, although the ways in which these were expressed resulted in very different preferences. The desire to use technology in a comfortable and satisfying way meant different things in these different settings.
During the ‘in the wild’ user study, participants adapted performance in order to make interaction acceptable in different contexts. In some cases, performance was hidden in public places or shared with familiar spectators in order to successfully incorporate interaction into public places.
|
33 |
Designing interfaces in public settings. Reeves, Stuart. January 2009.
The rapidly increasing reach of computation into our everyday public settings presents new and significant challenges for the design of interfaces. One key feature of these settings is the increased presence of third parties to interaction, watching or passing by as conduct with an interface takes place. This thesis assumes a performative perspective on interaction in public, presenting a framework derived from four empirical studies of interaction in a diverse series of public places (museums and galleries, city streets and funfairs) as well as observations on a variety of computer science, art and sociological literatures. As these settings are explored, a number of basic framework concepts are built up:
* The first study chapter presents a deployment of an interactive exhibit within an artistic installation, introducing a basic division of roles and the ways in which visitors may be seen as 'audience' to manipulations of interactive devices by 'participants'. It also examines how visitors in an audience role may transition to active participant and vice versa.
* The second study chapter describes a storytelling event that employed a torch-based interface. This chapter makes a distinction between non-professional and professional members of settings, contrasting the role of 'actor' with that of participants.
* The third study chapter examines a series of scientific and artistic performance events that broadcast live telemetry data from a fairground ride to a watching audience. The study expands the roles introduced in previous chapters by making a further distinction between 'behind-the-scenes' settings, in which 'orchestrators' operate, and 'centre-stage' settings, in which actors present the rider's experience to the audience.
* The final study chapter presents a performance art game conducted on city streets, in which participants follow a series of often ambiguous clues in order to reach their goal. This chapter introduces a further 'front-of-house' setting, the notion of a circumscribing performance 'frame' in which the various roles are situated, and the additional role of the 'bystander' as part of this.
These observations are brought together into a design framework which analyses other literature to complement the earlier studies. This framework seeks to provide a new perspective on, and language for, human-computer interaction (HCI), introducing a series of sensitising concepts, constraints and strategies for design that may be employed in order to approach the various challenges presented by interaction in public settings.
|
34 |
Building a secured XML real-time interactive data exchange architecture. Rabadi, Yousef. January 2011.
Nowadays, TCP and UDP are the most widely used transport protocols for carrying XML data messages between different services. XML data security is always a major concern, especially when using the internet cloud. Common XML encryption techniques encrypt the private sections of an XML file as an entire block of text and apply the encryption directly to them. Man-in-the-middle attackers and cryptanalysts can gather statistical information and tap, sniff, hack, inject and abuse XML data messages. The purpose of this study is to introduce an architecture for a new approach to exchanging XML data files between different services, in order to minimise the risk of alteration, data loss, data abuse or data misuse of critical XML business data during transmission, by applying vertical partitioning to XML files. Another aim is to create a virtual environment within the internet cloud prior to data transmission, in order to make better use of the communication medium, raise transmission performance and resource utilisation, and spread the partitioned (shredded) XML file across several paths through multiple agents that form a multipath virtual network. Virtualisation of the cloud network infrastructure, to take advantage of its scalability, operational efficiency and control of data flow, is also considered in this architecture. A customised UDP protocol, together with a set of modules in RIDX, adds reliable (lossless) multicast data transmission to all nodes in a virtual cloud network. A comparative study was made to measure the performance of the Real-time Interactive Data Exchange (RIDX) system using the RIDX UDP protocol against the standard TCP protocol. Scaling from 4 nodes up to 10 nodes in the domain, the results showed enhanced performance using the RIDX architecture over the standard TCP protocol.
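A minimal sketch of the vertical partitioning idea, assuming a simple policy that separates elements marked as sensitive into their own fragment; the tag names and two-fragment split are illustrative, and RIDX's actual partitioning rules, agents, and multipath transport are not reproduced here:

```python
# Illustrative sketch of vertically partitioning an XML message so that
# sensitive fields travel separately from the rest. Only top-level children
# are split here; the tag names and the two-fragment split are assumptions.
import copy
import xml.etree.ElementTree as ET

def vertical_partition(xml_text, sensitive_tags):
    root = ET.fromstring(xml_text)
    public_part = copy.deepcopy(root)
    private_part = ET.Element(root.tag)

    # Move elements with sensitive tags out of the public fragment.
    for child in list(public_part):
        if child.tag in sensitive_tags:
            public_part.remove(child)
    for child in root:
        if child.tag in sensitive_tags:
            private_part.append(copy.deepcopy(child))

    # Each fragment would then be sent along a different path/agent.
    return ET.tostring(public_part), ET.tostring(private_part)

message = "<order><item>book</item><card>4111111111111111</card></order>"
public_frag, private_frag = vertical_partition(message, {"card"})
```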
|
35 |
Constraint based program transformation theory. Natelberg, Stefan. January 2009.
The FermaT Transformation Engine is an industrial strength toolset for the migration of Assembler and Cobol based legacy systems to C. It uses an intermediate language and several dozen mathematically proven transformations to raise the abstraction level of source code or to restructure and simplify it as needed. The program transformation process with this toolset is semi-automated, which means that a maintainer has not only to apply one transformation after another but also to evaluate each transformation result. This can be a very difficult task, especially if the given program is very large and many transformations have to be applied. Moreover, it cannot be guaranteed that a transformation target will be achieved, because this relies on the decisions taken by the respective maintainer, which in turn are based on his or her personal knowledge. Even a small mistake can lead to failure of the entire program transformation process, which usually causes an extensive and time-consuming backtrack. Furthermore, it is difficult to compare the results of different transformation sequences applied to the same program. To put it briefly, the manual approach is inflexible and often hard to use, especially for maintainers with little knowledge of transformation theory. Different approaches already exist to solve these well-known problems and to simplify the accessibility of the FermaT Transformation Engine. One recently presented approach is based on a particular prediction technique, whereas another is based on various search tactics. Both intend to automate the program transformation process. However, these approaches solve some problems but not without introducing others. On the one hand, the prediction-based approach is very fast but often unable to provide a transformation sequence which achieves the defined program transformation targets. The results depend heavily on the algorithms which analyse the given program and on the knowledge available to make the right decisions during the program transformation process. On the other hand, the search-based approach usually finds suitable results in terms of the given target, but only for small programs and short transformation sequences. It is simply not possible to perform an extensive search on a large-scale program in reasonable time. To solve the described problems and to extend the operating range of the FermaT Transformation Engine, this thesis proposes a constraint-based program transformation system. The approach is semi-automated and provides the possibility to outline an entire program transformation process on the basis of constraints and transformation schemes. In this context, a constraint is a condition which has to be satisfied at some point during the application of a transformation sequence, whereas a transformation scheme defines the search space, which consists of a set of transformation sequences. After the constraints and the scheme have been defined, the system uses a unique knowledge-based prediction technique followed by a particular search tactic to reduce the number of transformation sequences within the search space and to find a transformation sequence which is applicable and which satisfies the given constraints. Moreover, it is possible to describe these transformation schemes with the aid of a formal language. The presented thesis will provide a definition and a classification of constraints for program transformations.
It will discuss the capabilities and effects of transformations and their value in defining transformation sets. The modelling of program transformation processes with the aid of transformation schemes, which in turn are based on finite automata, will be presented, and the inclusion of constraints into these schemes will be explained. A formal language to describe transformation schemes will be introduced, and the automated construction of these schemes from the language will be shown. Furthermore, the thesis will discuss a unique prediction technique which uses the capabilities of transformations, an evaluation of transformation sequences on the basis of transformation effects, and a particular search tactic which is related to linear and tree search tactics. The practical value of the presented approach will be proven with the aid of three medium-scale case studies. The first will show how to raise the abstraction level of a program, the second will show how to decrease the complexity of a particular program, and the third will show how to increase the execution speed of a selected program. Moreover, the work will be summarised and evaluated on the basis of the research questions. Its limitations will be disclosed and some suggestions for future work will be made.
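A minimal sketch of a transformation scheme modelled as a finite automaton with constraints attached to states, as described above; the transformation names, metrics, and constraint predicates are invented for illustration and are not FermaT's catalogue or API:

```python
# Illustrative sketch: a transformation scheme as a finite automaton whose
# transitions are transformation names, with constraints attached to states.
# Transformation names, metrics, and thresholds are assumptions.

scheme = {
    "start":  {"remove_redundant_vars": "s1", "simplify_conditionals": "s1"},
    "s1":     {"merge_blocks": "accept"},
    "accept": {},
}

constraints = {
    "s1":     lambda metrics: metrics["mccabe"] <= 20,   # must hold when s1 is reached
    "accept": lambda metrics: metrics["loc"] < 1000,     # final target constraint
}

def sequences(state="start", prefix=()):
    """Enumerate the transformation sequences described by the scheme."""
    if not scheme[state]:
        yield prefix
    for transformation, next_state in scheme[state].items():
        yield from sequences(next_state, prefix + (transformation,))

def satisfies(sequence, apply_and_measure):
    """Check the constraint attached to each state reached along the sequence.
    `apply_and_measure` applies a transformation prefix and returns program
    metrics; it stands in for applying the real transformations."""
    state = "start"
    for i, transformation in enumerate(sequence):
        state = scheme[state][transformation]
        metrics = apply_and_measure(sequence[:i + 1])
        if state in constraints and not constraints[state](metrics):
            return False
    return True
```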
|
36 |
Extension to models of coincident failure in multiversion software. Salako, Kizito Oluwaseun. January 2012.
Fault-tolerant architectures for software-based systems have been used in various practical applications, including flight control systems for commercial airliners (e.g. Airbus A340, A310) as part of an aircraft's so-called fly-by-wire flight control system [1], the control systems for autonomous spacecraft (e.g. the Cassini-Huygens Saturn orbiter and probe) [2], rail interlocking systems [3] and nuclear reactor safety systems [4, 5]. The use of diverse, independently developed, functionally equivalent software modules in a fault-tolerant configuration has been advocated as a means of achieving highly reliable systems from relatively less reliable system components [6, 7, 8, 9]. In this regard it had been postulated that [6] "The independence of programming efforts will greatly reduce the probability of identical software faults occurring in two or more versions of the program." Experimental evaluation demonstrated that, despite the independent creation of such versions, positive failure correlation between the versions can be expected in practice [10, 11]. The conceptual models of Eckhardt et al. [12] and Littlewood et al. [13], referred to as the EL model and LM model respectively, were instrumental in pointing out sources of uncertainty that determine both the size and sign of such failure correlation. In particular, there are two important sources of uncertainty. The process of developing software: given sufficiently complex system requirements, the particular software version that will be produced from such a process is not known with certainty; consequently, complete knowledge of what the failure behaviour of the software will be is also unknown. The occurrence of demands during system operation: during system operation it may not be certain which demand a system will receive next from the environment. To explain failure correlation between multiple software versions the EL model introduced the notion of difficulty: that is, given a demand that could occur during system operation, there is a chance that a given software development team will develop a software component that fails when handling that demand as part of the system. A demand with an associated high probability of the developed software failing to handle it correctly is considered a "difficult" demand for a development team; a low probability of failure would suggest an "easy" demand. In the EL model different development teams, even when isolated from each other, are identical in how likely they are to make mistakes while developing their respective software versions. Consequently, despite the teams possibly creating software versions that fail on different demands, in developing their respective versions the teams find the same demands easy, and the same demands difficult. The implication of this is that the versions developed by the teams do not fail independently; if one observes the failure of one team's version, this could indicate that the version failed on a difficult demand, thus increasing one's expectation that the second team's version will also fail on that demand. Succinctly put, due to correlated "difficulties" between the teams across the demands, "independently developed software cannot be expected to fail independently". The LM model takes this idea a step further by illustrating, under rather general practical conditions, that negative failure correlation is also possible: possible, because the teams may be sufficiently diverse in which demands they find "difficult".
This in turn implies better reliability than would be expected under naive assumptions of failure independence between software modules built by the respective teams. Although these models provide such insight, they also pose questions yet to be answered.
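A short numerical illustration of the standard EL and LM formulations discussed above, where a "difficulty" function gives the probability that a team's version fails on each demand; the demand profile and difficulty values are invented for illustration:

```python
# Numerical illustration of the EL and LM models (standard formulation,
# illustrative numbers only). theta_A(x) and theta_B(x) are the "difficulty"
# functions: the probability that a version developed by team A (resp. B)
# fails on demand x.
import numpy as np

p_demand = np.full(5, 0.2)                        # demand profile (uniform here)
theta_A = np.array([0.001, 0.002, 0.05, 0.001, 0.002])
theta_B_EL = theta_A                              # EL: both teams share one difficulty function
theta_B_LM = np.array([0.05, 0.002, 0.001, 0.002, 0.001])  # LM: teams may differ

pfd_A = np.sum(p_demand * theta_A)                # marginal probability of failure on demand

# Probability that both versions fail on the same (random) demand:
both_independent = pfd_A * np.sum(p_demand * theta_B_EL)     # naive independence assumption
both_EL = np.sum(p_demand * theta_A * theta_B_EL)            # >= independent value (positive correlation)
both_LM = np.sum(p_demand * theta_A * theta_B_LM)            # can be < independent value (negative correlation)
```

With these numbers, both_EL is roughly four times the naive independence value, while both_LM falls well below it, mirroring the positive and negative correlation cases described in the abstract.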
|
37 |
Garbage collection optimization for non uniform memory access architectures. Alnowaiser, Khaled Abdulrahman. January 2016.
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness which exists naturally in 'rooted sub-graphs': connected objects comprising a root object and its reachable set. Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism. A garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects in the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite. In addition, the evaluation involves the widely used SPECjbb benchmark, a Neo4J graph database Java benchmark, and an artificial benchmark. The results for the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average 15% performance improvement. Furthermore, this performance gain is shown to be a result of improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines. The policy uses outdated assumptions and generates a constant thread count; in fact, the Hotspot JVM still uses this policy in the production version. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring the optimal number of garbage collection threads yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique which uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average 21% improvement in garbage collection performance for the DaCapo benchmarks.
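A minimal sketch of the node-local stealing preference described in the third contribution; the data structures and the random victim choice are assumptions for illustration, not Hotspot's actual task-stealing implementation:

```python
# Illustrative sketch of NUMA-aware work stealing during a parallel GC trace:
# a collector thread prefers to steal references from sibling threads on the
# same NUMA node before falling back to threads on remote nodes.
import random

def steal_work(thread_id, node_of, work_queues):
    """Pick a victim queue, preferring threads on the same NUMA node.
    `node_of` maps GC thread ids to NUMA node ids; `work_queues` maps
    thread ids to lists of pending object references (both assumed shapes)."""
    my_node = node_of[thread_id]
    local = [t for t in work_queues
             if t != thread_id and node_of[t] == my_node and work_queues[t]]
    remote = [t for t in work_queues
              if t != thread_id and node_of[t] != my_node and work_queues[t]]
    for candidates in (local, remote):
        if candidates:
            victim = random.choice(candidates)
            return work_queues[victim].pop()      # steal one object reference
    return None                                   # nothing left to steal
```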
|
38 |
Applications of information sharing for code generation in process virtual machines. Kyle, Stephen Christopher. January 2016.
As the backbone of many computing environments today, it is important that process virtual machines be both performant and robust in mobile, personal desktop, and enterprise applications. This thesis focusses on code generation within these virtual machines, particularly addressing situations where redundant work is being performed. The goal is to exploit information sharing in order to improve the performance and robustness of virtual machines that are accelerated by native code generation. First, the thesis investigates the potential to share generated code between multiple threads in a dynamic binary translator used to perform instruction set simulation. This is done through a code generation design that allows native code to be executed by any simulated core, and by adding a mechanism to share native code regions between threads. This is shown to improve the average performance of multi-threaded benchmarks by 1.4x when simulating 128 cores on a quad-core host machine. Secondly, the ahead-of-time code generation system used for executing Android applications is improved through the use of profiling. The thesis investigates the potential for profiles produced by individual users of applications to be shared and merged together to produce a generic profile that still provides substantial benefit for a new user, who is then able to skip the expensive profiling phase. These profiles can not only be used for selective compilation to reduce code-size and installation time, but can also be used for focussed optimisation of vital code regions of an application in order to improve overall performance. With selective compilation applied to a set of popular Android applications, code-size can be reduced by 49.9% on average, while installation time can be reduced by 31.8%, with only an average 8.5% increase in the amount of sequential runtime required to execute the collected profiles. The thesis also shows that, among the tested users, the use of a crowd-sourced and merged profile does not significantly affect their estimated performance loss from selective compilation (0.90x-0.92x) in comparison to when they perform selective compilation with their own unique profile (0.93x). Furthermore, by proposing a new, more powerful code generator for Android's virtual machine, these same profiles can be used to perform focussed optimisation, which preliminary results show to increase runtime performance across a set of common Android benchmarks by 1.46x-10.83x. Finally, in such a situation where a new code generator is being added to a virtual machine, it is also important to test the code generator for correctness and robustness. The methods of execution of a virtual machine, such as interpreters and code generators, must share a set of semantics about how programs must be executed, and this can be exploited in order to improve testing. This is done through the application of domain-aware binary fuzzing and differential testing within Android's virtual machine. The thesis highlights a series of actual code generation and verification bugs that were found in Android's virtual machine using this testing methodology, and compares the proposed approach to other state-of-the-art fuzzing techniques.
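A minimal sketch of crowd-sourced profile merging and selective compilation as described above; the profile format (method to invocation count) and the coverage threshold are assumptions, not Android's actual profile files or dex2oat policy:

```python
# Illustrative sketch: merge per-user method profiles into a generic profile,
# then select only the hot methods for ahead-of-time compilation.
from collections import Counter

def merge_profiles(user_profiles):
    """Each profile maps method name -> invocation count; merging sums counts."""
    merged = Counter()
    for profile in user_profiles:
        merged.update(profile)
    return merged

def select_for_compilation(merged_profile, coverage=0.9):
    """Compile the smallest set of methods covering `coverage` of invocations."""
    total = sum(merged_profile.values())
    selected, covered = [], 0
    for method, count in merged_profile.most_common():
        if covered / total >= coverage:
            break
        selected.append(method)
        covered += count
    return selected

# Example: two users exercise the app differently; the merged profile still
# picks out the commonly hot methods for selective compilation.
users = [{"onDraw": 900, "parseFeed": 50}, {"onDraw": 700, "playTrack": 300}]
hot_methods = select_for_compilation(merge_profiles(users))
```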
|
39 |
The role of simulation in developing and designing applications for 2-class motor imagery brain-computer interfaces. Quek, Melissa. January 2013.
A Brain-Computer Interface (BCI) can be used by people with severe physical disabilities such as Locked-in Syndrome (LiS) as a channel of input to a computer. The time-consuming nature of setting up and using a BCI, together with individual variation in performance and limited access to end users, makes it difficult to employ techniques such as rapid prototyping and user centred design (UCD) in the design and development of applications. This thesis proposes a design process which incorporates the use of simulation tools and techniques to improve the speed and quality of designing BCI applications for the target user group. Two different forms of simulation can be distinguished: offline simulation aims to make predictions about a user's performance in a given application interface given measures of their baseline control characteristics, while online simulation abstracts properties of interaction with a BCI system which can be shown to, or used by, a stakeholder in real time. Simulators that abstract properties of BCI control at different levels are useful for different purposes. Demonstrating the use of offline simulation, Chapter 3 investigates the use of finite state machines (FSMs) to predict the time to complete tasks given a particular menu hierarchy, and compares offline predictions of task performance with real data in a spelling task. Chapter 5 aims to explore the possibility of abstracting a user's control characteristics from a typical calibration task to predict performance in a novel control paradigm. Online simulation encompasses a range of techniques from low-fidelity prototypes built using paper and cardboard, to computer simulation models that aim to emulate the feel of control of using a BCI without actually needing to put on the BCI cap. Chapter 4 details the development and evaluation of a high fidelity BCI simulator that models the control characteristics of a BCI based on the motor-imagery (MI) paradigm. The simulation tools and techniques can be used at different stages of the application design process to reduce the level of involvement of end users while at the same time striving to employ UCD principles. It is argued that prioritising the level of involvement of end users at different stages in the design process is an important strategy for design: end user input is paramount particularly at the initial user requirements stage, where the goals that are important for the end user of the application can be ascertained. The interface and specific interaction techniques can then be iteratively developed through both real and simulated BCI with people who have no or less severe physical disabilities than the target end user group, and evaluations can be carried out with end users at the final stages of the process. Chapter 6 provides a case study of using the simulation tools and techniques in the development of a music player application. Although the tools discussed in the thesis specifically concern a 2-class Motor Imagery BCI which uses the electroencephalogram (EEG) to extract brain signals, the simulation principles can be expected to apply to a range of BCI systems.
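A minimal sketch of the flavour of offline prediction mentioned above for Chapter 3; this is not the thesis's FSM model but a simpler random-walk stand-in, and the menu depth, accuracy values, and trial duration are all assumptions:

```python
# Illustrative sketch of offline simulation: estimate how long a user with a
# given 2-class selection accuracy would take to reach a target item in a
# binary menu hierarchy, assuming each wrong selection costs one undo step.
# Constants and the random-walk model are assumptions, not the thesis's FSM.

def expected_selections(depth, accuracy):
    """Expected selections to descend `depth` levels when correct selections
    move down one level and errors cost one step back (biased random walk)."""
    net_progress = accuracy - (1.0 - accuracy)   # expected levels gained per selection
    if net_progress <= 0:
        return float("inf")                      # no reliable progress at this accuracy
    return depth / net_progress

trial_seconds = 5.0      # assumed duration of one motor-imagery trial
depth = 3                # e.g. 8 items reachable through a binary menu hierarchy
for accuracy in (0.65, 0.75, 0.85, 0.95):
    t = expected_selections(depth, accuracy) * trial_seconds
    print(f"accuracy {accuracy:.2f}: ~{t:.0f} s to complete the task")
```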
|
40 |
Distributed Kalman filter techniques in sensor networks. Διπλαράκος, Αναστάσιος (Diplarakos, Anastasios). 04 September 2013.
This thesis develops techniques for implementing a distributed Kalman filter in a sensor network (WSN). In recent years these networks have seen rapid growth due to their numerous applications across many fields of human activity. The problem addressed here is estimating the state of a stochastic process monitored by the network. The sensor nodes that make up such networks usually have limited sensing capabilities, which means that no individual node can produce a good state estimate from its own measurements alone. Various techniques have been proposed to solve this kind of problem, such as the centralized Kalman filter or several decentralized approaches, but their high computational cost renders them impractical, especially for networks with a large number of nodes. In this work, using the theory of consensus algorithms as a tool, we construct low-complexity algorithms for implementing a distributed Kalman filter, assuming that all nodes are peers (peer-to-peer architectures), that each node communicates only with its neighbours, and that no fusion centers exist. We present three different iterative algorithms based on two different approaches, and finally simulate and evaluate the performance of each of them.
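A minimal sketch of the average-consensus step that underlies such distributed filters, in which peer nodes repeatedly average with their neighbours and no fusion center is needed; the network topology, step size, and iteration count are illustrative, and this is not one of the three algorithms presented in the thesis:

```python
# Illustrative sketch of the consensus step used in distributed Kalman
# filtering: nodes iteratively blend their local values with their neighbours'
# so every node converges toward the network-wide average without a fusion
# center. Topology, step size, and iteration count are assumptions.

neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a simple line network
epsilon = 0.3                                          # consensus step size (< 1/max degree)

def average_consensus(local_values, iterations=50):
    x = dict(local_values)
    for _ in range(iterations):
        x = {i: x[i] + epsilon * sum(x[j] - x[i] for j in neighbours[i])
             for i in x}
    return x

# Each node holds a local scalar contribution (e.g. from its own measurement);
# after consensus, every node's value approaches the network mean (here 1.0).
local_measurements = {0: 1.2, 1: 0.8, 2: 1.1, 3: 0.9}
fused = average_consensus(local_measurements)
```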
|