  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Scalable coordination of distributed in-memory transactions

Emerson, Ryan January 2016 (has links)
Coordinating transactions involves ensuring serializability in the presence of concurrent data accesses. Accomplishing this in a scalable manner for distributed in-memory transactions is the aim of this thesis. To this end, the work makes three contributions. It first experimentally demonstrates that transaction latency and throughput scale considerably well when an atomic multicast service is offered to transaction nodes by a crash-tolerant ensemble of dedicated nodes, and that using such a service is the most scalable approach compared to practices advocated in the literature. Secondly, we design, implement and evaluate a crash-tolerant and non-blocking atomic broadcast protocol, called ABcast, which is then used as the foundation for building the aforementioned multicast service. ABcast is a hybrid protocol, consisting of a pair of primary and backup protocols executing in parallel. The primary protocol is a deterministic atomic broadcast protocol that provides high performance when node crashes are absent, but blocks in their presence until a group membership service detects such failures. The backup protocol, Aramis, is a probabilistic protocol that does not block in the event of node crashes and allows message delivery to continue post-crash until the primary protocol is able to resume. Aramis avoids blocking by assuming that message delays remain within a known bound with a high probability that can be estimated in advance, provided that recent delay estimates are used to (i) continually adjust that bound and (ii) regulate flow control. Aramis delivers broadcasts in total order with a probability that can be tuned to be close to 1; comprehensive evaluations show that this probability can be 99.99% or more. Finally, we assess the effect of low-probability order violations on implementing various isolation levels commonly considered in transaction systems. Together, these three contributions advance the state of the art in two major ways: (i) identifying a service-based approach to transactional scalability, and (ii) establishing a practical alternative to the complex Paxos-style approach to building such a service, by using novel but simple protocols and open-source software frameworks.
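To make the Aramis-style probabilistic ordering concrete, the following is a minimal Python sketch, not the thesis's implementation: timestamped broadcasts are delivered in send-order only after an adaptively estimated delay bound has elapsed. The class and method names, the quantile target and the default bound are illustrative assumptions.

```python
import time
import heapq
import itertools
from collections import deque

class ProbabilisticDeliverer:
    """Sketch of the Aramis-style idea: assume message delays stay below a
    bound, estimated from recent delays, with high probability, and deliver
    messages in send-timestamp order only once that bound has elapsed."""

    def __init__(self, quantile=0.9999, window=1000):
        self.quantile = quantile                   # target probability of ordered delivery
        self.recent_delays = deque(maxlen=window)  # sliding window of observed delays
        self.pending = []                          # min-heap keyed by send timestamp
        self._seq = itertools.count()              # tie-breaker for equal timestamps

    def delay_bound(self):
        # Continually re-estimate the bound as a high quantile of recent delays.
        if not self.recent_delays:
            return 0.05                            # conservative default (assumed)
        ordered = sorted(self.recent_delays)
        idx = min(int(len(ordered) * self.quantile), len(ordered) - 1)
        return ordered[idx]

    def on_receive(self, send_ts, msg):
        self.recent_delays.append(time.time() - send_ts)
        heapq.heappush(self.pending, (send_ts, next(self._seq), msg))

    def deliverable(self):
        # Deliver a message only after its delay bound has passed, so that,
        # with high probability, no earlier-stamped message is still in flight.
        cutoff = time.time() - self.delay_bound()
        out = []
        while self.pending and self.pending[0][0] <= cutoff:
            out.append(heapq.heappop(self.pending)[2])
        return out
```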
22

Investigations of methods of linkage synthesis

Zanini, J. C. January 1975 (has links)
No description available.
23

Markov analysis of operating system techniques

Stewart, William James January 1974 (has links)
No description available.
24

Utilising a human centered design (HCD) approach for enhancing government systems in Saudi Arabia through usability evaluation from the user's perspective

Baglin, Faisal January 2015 (has links)
When aiming to successfully improve an existing software system, usability evaluation methods (UEMs) and user experience (UX) are key aspects for consideration. UEMs identify the level of usability of a system by assessing: (1) the extent to which it is easy and pleasant for the user (Cockton, 2012); (2) the specific effects of the system user interface (UI) on the user; and (3) any other problems that the system may have (Dix et al., 2007). Considering UX, on the other hand, places usability in context by providing a comprehensive understanding of the users' perceptions during and after their interactions with a specific system (Kuniavsky, 2010). Undoubtedly, in most countries, a wide range of services, activities, and procedures are supported by government systems (Buie and Murray, 2012). However, because of the lack of consideration of usability requirements and the limited attention given to UX in system development (Downey and Rosales, 2011), a significant number of these government systems were designed without taking human-centred design guidelines into account (Johnson et al., 2005). Consequently, the success of these systems varies widely in terms of their usability (Downey and Rosales, 2011). In some cases, they fail to provide an effective, efficient, and generally positive UX to people who interact with government systems from the outside, such as citizens, or to those who work for the government on the inside, such as employees (Buie and Murray, 2012). The research problem in this thesis addresses how UEMs, techniques, and tools can be integrated and developed to support the redesign and enhancement of current government systems (legacy systems) in a developing country. More specifically, the main aim of the research reported in this thesis is to develop a way of proposing appropriate methods for evaluating the usability of current internal systems in the Saudi government context. To this end, three studies were conducted. Human-Centred Design (HCD) was adopted as the general approach for the thesis because HCD is concerned with integrating users' opinions into the software development process in order to achieve a usable system (Spencer, 2004). In addition, a mixed method of quantitative and qualitative approaches was used in all of the studies. The first study evaluated the usability of a current internal system of a governmental organisation in Saudi Arabia, the Visa Issuance (VI) system, from the actual users' points of view in order to identify the strengths and weaknesses of the system. A usability evaluation query technique was employed for collecting data via a survey targeting 135 participants who were users of the VI system. The survey used both qualitative and quantitative instruments, namely a questionnaire and semi-structured interviews. In the second study, an experimental approach was applied and a comparative usability test was conducted between the current VI system and a suggested prototype design, which was developed based on the outcomes of the usability evaluation in the first study. The results of this study showed improvements in the quality of the system (usefulness), the information, and the interface. After analysing these results, an iterative method was used in the third study to redesign the suggested prototype. Another comparison test was then conducted between the two versions of the prototype, and the results indicated an enhancement in UX with the new version. This research developed a methodological framework for the usability evaluation of current government systems which involves query techniques and user testing methods. It was formulated by combining different methods for guiding the redesign process, and testing was conducted throughout the entire research project. The results indicated that using query techniques as a preliminary step provides a quick, simple, and cost-effective way of identifying the usability problem areas in the VI government system. Furthermore, the developed framework could be beneficial in raising awareness and acceptance of the established methods among governmental organisations in other contexts, helping them enhance their software systems effectively and improve the UX. It is hoped that this awareness of fundamental usability methods could lead to developments in Information and Communication Technology (ICT) for all communities (Holzinger, 2005), so that the advantages of making certain improvements could be shared with others. In addition, the outcomes of the two experiments conducted in this research provide lessons that are valuable in the usability testing domain. In this regard, the results are expected to assist and support usability practitioners and system developers who are concerned with improving the usability of existing internal software systems, and with planning and conducting usability testing sessions in government organisations, through utilising such guidance about UEMs.
25

User aspects in synchronous visualisation of multiple photo streams

Zargham, Sam January 2016 (has links)
Photo sharing is becoming a common way of maintaining closeness and relationships with friends and family, and it can evoke pleasurable, enjoyable and exciting experiences. People have fun when sharing photos containing pleasant scenes or friends caught doing something interesting. There has been a recent increase in studies that focus on the visualisation and sharing of photos using online services, or sharing in the home environment using different digital technologies. Although previous studies have focussed on the important issues of photo sharing and visualisation, there is a dearth of research aimed at designing applications that enable people to share and visualise multiple photo streams originating from multiple sources, such as different people or capture devices. In addition, there is a lack of research that links new applications for photo sharing with user experience and the applications' value to the user. This thesis first offers a new design for synchronous sharing and visualisation of multiple photo streams using temporal and social metadata. Different features, called transition modes, were also added to the system to give a better experience of using it. The experience of photo sharing, however, does not exist without any connection to people or events; it is a social experience depending on people, places and time. Hence, an experimental study was conducted with twenty users, and the results demonstrate high user demand for concurrent presentation of multiple media streams, as well as recommended transitions for extending its potential. In the second phase of this thesis, the temporal aspects of multiple photo streams, such as manual transition, continuity detection and user-desired time, were designed and implemented. The results of the subsequent user study demonstrate good comprehension of the users' own and shared photo streams, and of their temporal structure, even when presented at relatively high speeds. Users were easily able to contextualise events, recall specific photos and find them using the proposed interface. The final interface was built on the lessons learned from the first two phases of this study. In this version, users were able to share their photos in real time and see them on an ambient display. Our final system for real-time photo sharing on an ambient display was tested in three different trials with three different user groups consisting of extended family, close friends and workplace colleagues. The results showed high user interest among the extended family members and in the workplace environment.
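As an illustration of the temporal-synchronisation idea, several per-person photo streams can be interleaved by their capture timestamps into a single timeline; the data model and function names below are assumptions for illustration, not the thesis's actual design.

```python
import heapq
from dataclasses import dataclass
from datetime import datetime

@dataclass(order=True)
class Photo:
    taken_at: datetime      # temporal metadata (e.g. EXIF capture time)
    owner: str = ""         # social metadata (who shared the photo)
    path: str = ""

def synchronised_timeline(*streams):
    """Merge several per-person photo streams (each already sorted by capture
    time) into one timeline, so they can be visualised side by side in sync."""
    return list(heapq.merge(*streams, key=lambda p: p.taken_at))

# Illustrative usage with two hypothetical streams:
alice = [Photo(datetime(2016, 7, 1, 10, 0), "alice", "a1.jpg"),
         Photo(datetime(2016, 7, 1, 12, 30), "alice", "a2.jpg")]
bob = [Photo(datetime(2016, 7, 1, 11, 15), "bob", "b1.jpg")]
for photo in synchronised_timeline(alice, bob):
    print(photo.taken_at, photo.owner, photo.path)
```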
26

Inductive limits of operator systems

Mawhinney, Linda January 2016 (has links)
The aim of this thesis is to study inductive limits of operator systems. We begin by formalising the notion of the inductive limit for several categories related to, and including, the category of operator systems. Subsequently, we observe how this structure interacts with other important operator system structures, including tensor products, quotients and C*-extensions. Finally, we apply these results to inductive limits of graph operator systems, which enables the construction of an infinite graph operator system. Using this approach, we extend known results about graph operator systems to infinite graph operator systems.
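For readers unfamiliar with the construction, the general shape of an inductive (direct) limit that the thesis formalises for operator systems is, roughly, the standard categorical one; the sketch below leaves the unital completely positive connecting maps and the operator-system-specific details implicit.

```latex
% Standard direct-limit shape (a sketch; the thesis gives the precise
% operator-system formulation).
\[
  \mathcal{S}_1 \xrightarrow{\;\phi_1\;} \mathcal{S}_2 \xrightarrow{\;\phi_2\;}
  \mathcal{S}_3 \xrightarrow{\;\phi_3\;} \cdots, \qquad
  \phi_{n,m} := \phi_{m-1}\circ\cdots\circ\phi_n \quad (n\le m),
\]
\[
  \varinjlim (\mathcal{S}_n,\phi_n) = \mathcal{S}, \qquad
  \phi_{n,\infty} : \mathcal{S}_n \to \mathcal{S}, \qquad
  \phi_{m,\infty}\circ\phi_{n,m} = \phi_{n,\infty},
\]
% with the universal property that any compatible family of morphisms
% $\psi_n:\mathcal{S}_n\to\mathcal{T}$ (i.e.\ $\psi_m\circ\phi_{n,m}=\psi_n$)
% factors uniquely through $\mathcal{S}$.
```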
27

Applications of weighting sequence models in system identification, adaptation and on-line computer control

Abbosh, F. G. January 1974 (has links)
No description available.
28

The effects of synchronisation and other forestry commissioning constraints on vehicle routing problem solution methods

Kent, Edward January 2016 (has links)
No description available.
29

Machine learning in compilers

Leather, Hugh January 2011 (has links)
Tuning a compiler so that it produces optimised code is a difficult task because modern processors are complicated; they have a large number of components operating in parallel and each is sensitive to the behaviour of the others. Building analytical models on which optimisation heuristics can be based has become harder as processor complexity has increased, and this trend is bound to continue as the world moves towards further heterogeneous parallelism. Compiler writers need to spend months to get a heuristic right for any particular architecture, and these days compilers often support a wide range of disparate devices. Whenever a new processor comes out, even if derived from a previous one, the compiler’s heuristics will need to be retuned for it. This is, typically, too much effort and so, in fact, most compilers are out of date. Machine learning has been shown to help; by running example programs, compiled in different ways, and observing how those ways affect program run-time, automatic machine learning tools can predict good settings with which to compile new, as yet unseen programs. The field is nascent, but has already demonstrated significant results and promises a day when compilers will be tuned for new hardware without the need for months of compiler experts’ time. Many hurdles still remain, however, and while experts no longer have to worry about the details of heuristic parameters, they must instead spend their time on the details of the machine learning process to get the full benefits of the approach. This thesis aims to remove some of the aspects of machine learning based compilers for which human experts are still required, paving the way for a completely automatic, retuning compiler. First, we tackle the most conspicuous area of human involvement: feature generation. In all previous machine learning work for compilers, the features, which describe the important aspects of each example to the machine learning tools, must be constructed by an expert. Should that expert choose features poorly, they will miss crucial information without which the machine learning algorithm can never excel. We show not only that we can automatically derive good features, but that these features outperform those of human experts. We demonstrate our approach on loop unrolling, and find that we do better than previous work, obtaining XXX% of the available performance, more than the XXX% of the previous state of the art. Next, we demonstrate a new method to efficiently capture the raw data needed for machine learning tasks. The iterative compilation on which machine learning in compilers depends is typically time-consuming, often requiring months of compute time. The underlying processes are also noisy, so most prior works fall into two categories: those which attempt to gather clean data by executing a large number of times, and those which ignore the statistical validity of their data to keep experiment times feasible. Our approach, on the other hand, guarantees clean data while adapting to the experiment at hand, needing an order of magnitude less work than prior techniques.
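The statistical machinery in the thesis is more involved, but the general idea of adapting the number of measurements to the observed noise can be sketched as follows; the confidence level, tolerance and run limits are assumed values for illustration, not the thesis's parameters.

```python
import math
import statistics

def measure_until_confident(run_benchmark, rel_tol=0.01, z=1.96,
                            min_runs=3, max_runs=1000):
    """Sequential-sampling sketch: keep re-running a noisy benchmark until the
    confidence interval on the mean run-time is tighter than rel_tol, rather
    than using a fixed (and usually excessive) number of repetitions."""
    samples = []
    while len(samples) < max_runs:
        samples.append(run_benchmark())          # one timed execution
        if len(samples) < min_runs:
            continue
        mean = statistics.mean(samples)
        sem = statistics.stdev(samples) / math.sqrt(len(samples))
        if z * sem <= rel_tol * mean:             # interval narrow enough: stop early
            break
    return statistics.mean(samples), len(samples)
```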
30

Speeding up dynamic compilation : concurrent and parallel dynamic compilation

Bohm, Igor January 2013 (has links)
The main challenge faced by a dynamic compilation system is to detect and translate frequently executed program regions into highly efficient native code as fast as possible. To reduce dynamic compilation latency effectively, a dynamic compilation system must improve its workload throughput, i.e. compile more application hotspots per unit time. As the time spent on dynamic compilation adds to the overall execution time, the dynamic compiler is often decoupled and operates in a separate thread, independent of the main execution loop, to reduce the overhead of dynamic compilation. This thesis proposes innovative techniques aimed at effectively speeding up dynamic compilation. The first contribution is a generalised region recording scheme optimised for program representations that require dynamic code discovery (e.g. binary program representations). The second contribution reduces dynamic compilation cost by incrementally compiling several hot regions in a concurrent and parallel task farm. Altogether, the combination of generalised light-weight code discovery, large translation units, dynamic work scheduling, and concurrent and parallel dynamic compilation ensures timely and efficient processing of compilation workloads. Compared to state-of-the-art dynamic compilation approaches, speedups of up to 2.08 are demonstrated for industry-standard benchmarks such as BioPerf, SPEC CPU 2006, and EEMBC. Next, innovative applications of the proposed dynamic compilation scheme to speed up architectural and micro-architectural performance modelling are demonstrated. The main contribution in this context is to exploit runtime information to dynamically generate optimised code that accurately models architectural and micro-architectural components. Consequently, compilation units are larger and more complex, resulting in increased compilation latencies. Such large and complex compilation units present an ideal use case for our concurrent and parallel dynamic compilation infrastructure. We demonstrate that our novel micro-architectural performance modelling is faster than state-of-the-art FPGA-based simulation, whilst providing the same level of accuracy.
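A decoupled, parallel compilation task farm of the kind described here can be sketched as follows; the class, callback names and worker count are assumptions for illustration rather than the thesis's implementation.

```python
import queue
import threading

class CompilationTaskFarm:
    """Sketch of a decoupled dynamic-compilation task farm: the main execution
    loop enqueues hot regions and keeps running, while worker threads compile
    regions in parallel and install the resulting native code."""

    def __init__(self, compile_region, install_code, workers=4):
        self.compile_region = compile_region   # region -> native code (assumed callback)
        self.install_code = install_code       # makes compiled code visible to dispatch
        self.hot_regions = queue.Queue()
        self.threads = [threading.Thread(target=self._worker, daemon=True)
                        for _ in range(workers)]
        for t in self.threads:
            t.start()

    def enqueue(self, region):
        # Called from the main execution loop when a region becomes hot;
        # returns immediately, so interpretation is never blocked by compilation.
        self.hot_regions.put(region)

    def _worker(self):
        while True:
            region = self.hot_regions.get()
            native = self.compile_region(region)
            self.install_code(region, native)
            self.hot_regions.task_done()
```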
