About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

On verification and controller synthesis for probabilistic systems at runtime

Ujma, Mateusz January 2015 (has links)
Probabilistic model checking is a technique employed for verifying the correctness of computer systems that exhibit probabilistic behaviour. A related technique is controller synthesis, which generates controllers that guarantee the correct behaviour of the system. Not all controllers can be generated offline, as the relevant information may only be available when the system is running; for example, the reliability of services may vary over time. In this thesis, we propose a framework based on controller synthesis for stochastic games at runtime. We model systems using stochastic two-player games parameterised with data obtained from monitoring of the running system. One player represents the controllable actions of the system, while the other player represents the hostile uncontrollable environment. The goal is to synthesise, for a given property specification, a controller for the first player that wins against all possible actions of the environment player. Initially, controller synthesis is invoked for the parameterised model and the resulting controller is applied to the running system. The process is repeated at runtime when changes in the monitored parameters are detected, whereby a new controller is generated and applied. To ensure the practicality of the framework, we focus on three important aspects of it: performance, robustness, and scalability. We propose an incremental model construction technique to improve the performance of runtime synthesis. In many cases, changes in monitored parameters are small and models built for consecutive parameter values are similar. We exploit this and incrementally build a model for the updated parameters by reusing the previous model, effectively saving time. To address robustness, we develop a technique called permissive controller synthesis. Permissive controllers generalise classical controllers by allowing the system to choose from a set of actions instead of just one. By using a permissive controller, a computer system can quickly adapt to a situation where an action becomes temporarily unavailable while still satisfying the property of interest. We tackle the scalability of controller synthesis with a learning-based approach. We develop a technique based on real-time dynamic programming which, by generating random trajectories through a model, synthesises an approximately optimal controller. We guide the generation using heuristics and can guarantee that, even in cases where we only explore a small part of the model, we still obtain a correct controller. We develop a full implementation of these techniques and evaluate it on a large set of case studies from the PRISM benchmark suite, demonstrating significant performance gains in most cases. We also illustrate the working of the framework on a new case study of an open-source stock monitoring application.
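As an illustration of the kind of computation the synthesis step performs, the following is a minimal sketch (in Python; not the thesis's implementation) of value iteration for a stochastic two-player reachability game: the controller player maximises the probability of reaching a target set while the environment player minimises it, and a memoryless controller is read off from the converged values. The representation and all names are hypothetical.

```python
def value_iteration(states, actions, trans, owner, target, eps=1e-8):
    """trans[s][a] -> list of (prob, next_state); owner[s] in {1, 2}.

    Player 1 (the controller) maximises the probability of reaching
    `target`; player 2 (the environment) minimises it.
    """
    v = {s: 1.0 if s in target else 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in target:
                continue
            vals = [sum(p * v[t] for p, t in trans[s][a]) for a in actions[s]]
            new = max(vals) if owner[s] == 1 else min(vals)
            delta = max(delta, abs(new - v[s]))
            v[s] = new
        if delta < eps:
            break
    # read off a deterministic, memoryless controller for player 1
    controller = {s: max(actions[s],
                         key=lambda a: sum(p * v[t] for p, t in trans[s][a]))
                  for s in states if owner[s] == 1 and s not in target}
    return v, controller
```

In the runtime loop described above, re-running this computation on a model rebuilt from the latest monitored parameters corresponds to regenerating the controller; the thesis's incremental construction and learning-based techniques replace the naive full sweep used here.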
32

A modular architecture for systematic text categorisation

Barnes, Andrew James January 2013 (has links)
This work examines and attempts to overcome issues caused by the lack of formal standardisation when defining text categorisation techniques and detailing how they might be appropriately integrated with each other. Despite text categorisation's long history, the concept of automation is relatively new, coinciding with the evolution of computing technology and the subsequent increase in the quantity and availability of electronic textual data. Nevertheless, insufficient descriptions of the diverse algorithms discovered have led to an acknowledged ambiguity when trying to accurately replicate methods, which has made reliable comparative evaluations impossible. Existing interpretations of general data mining and text categorisation methodologies are analysed in the first half of the thesis and common elements are extracted to create a distinct set of significant stages. Their possible interactions are logically determined and a unique universal architecture is generated that encapsulates all complexities and highlights the critical components. A variety of text-related algorithms are also comprehensively surveyed and grouped according to which stage they belong to, in order to demonstrate how they can be mapped. The second part reviews several open-source data mining applications, placing an emphasis on their ability to handle the proposed architecture, their potential for expansion and their text processing capabilities. Finding these inflexible and too elaborate to be readily adapted, designs for a novel framework are introduced that focus on rapid prototyping through lightweight customisations and reusable atomic components. As a consequence of the inadequacies of existing options, a rudimentary implementation is realised along with a selection of text categorisation modules. Finally, a series of experiments is conducted that validates the feasibility of the outlined methodology and the importance of its composition, whilst also establishing the practicality of the framework for research purposes. The simplicity of the experiments and the results gathered clearly indicate the potential benefits that can be gained when a formalised approach is utilised.
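To make the staged decomposition concrete, here is a hedged sketch of the idea: each stage is an interchangeable component behind a uniform interface, so alternative algorithms can be swapped in per stage without disturbing the rest of the pipeline. The stage names and interface are illustrative, not the thesis's framework API.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "on", "of", "and"}

def tokenise(doc):
    return doc.lower().split()

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def bag_of_words(tokens):
    return Counter(tokens)

def run_pipeline(doc, stages):
    # each stage consumes the previous stage's output, so stages can be
    # reordered, replaced, or extended independently of one another
    for stage in stages:
        doc = stage(doc)
    return doc

features = run_pipeline("The cat sat on the mat",
                        [tokenise, remove_stopwords, bag_of_words])
print(features)  # Counter({'cat': 1, 'sat': 1, 'mat': 1})
```

Replacing `bag_of_words` with, say, a TF-IDF weighting stage changes one component while every other stage and the driver remain untouched, which is the replicability property the thesis argues the literature lacks.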
33

Extending old languages for new architectures

White, Leo January 2014 (has links)
Architectures evolve quickly. The number of transistors available to chip designers doubles every 18 months, allowing increasingly complex architectures to be developed on a single chip. Power dissipation issues have forced chip designers to look for new ways to use the transistors at their disposal. This situation inevitably leads to new architectural features on a fairly regular basis. Enabling programmers to benefit from these new architectural features can be problematic. Since architectures change frequently, and compilers last for a long time, it is clear that compilers should be designed to be extensible. This thesis argues that to support evolving architectures a compiler should support the creation of high-level language extensions. In particular, it must support extending the compiler's middle-end. We describe the design of EMCC, a C compiler that allows extension of its front-, middle- and back-ends. OpenMP is an extension to the C programming language to support parallelism. It has recently added support for task-based parallelism, a dynamic form of parallelism made popular by Cilk. However, implementing task-based parallelism efficiently requires much more involved program transformation than the simple static parallelism originally supported by OpenMP. We use EMCC to create an implementation of OpenMP, with particular focus on efficient implementation of task-based parallelism. We also demonstrate the benefits of supporting high-level analysis through an extended middle-end, by developing and implementing an interprocedural analysis that improves the performance of task-based parallelism by allowing tasks to share stacks. We develop a novel generalisation of logic programming that we use to concisely express this analysis, and use this formalism to demonstrate that the analysis can be executed in polynomial time. Finally, we design extensions to OpenMP to support heterogeneous architectures.
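OpenMP tasks are expressed as C pragmas; as a language-neutral sketch of the dynamic task model discussed above (in Python rather than C, so the pragmas themselves are not shown), the following spawns work items that are only discovered while earlier items execute — precisely what makes task-based parallelism require more involved program transformation than static loop parallelism. The tree and worker count are arbitrary.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

tree = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": ["e"], "e": []}

def visit(node):
    return node.upper()          # stand-in for real per-node work

results = []
with ThreadPoolExecutor(max_workers=4) as pool:
    pending = {pool.submit(visit, "root"): "root"}
    while pending:
        done = next(as_completed(pending))   # wait for any task to finish
        node = pending.pop(done)
        results.append(done.result())
        for child in tree[node]:             # children are discovered only now,
            pending[pool.submit(visit, child)] = child   # so tasks are dynamic
print(sorted(results))           # ['A', 'B', 'C', 'D', 'E', 'ROOT']
```

A static schedule cannot be computed up front here, since the task set depends on results produced at runtime; this is the property that motivates the stack-sharing analysis described above.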
34

Cloud computing: a need for abstraction to manage adaptation as an orthogonal concern

Daubert, Erwan 24 May 2013 (has links)
Cloud Computing has become one of the major paradigms of computing; it provides computing resources in the form of services accessible over the Internet. These services are generally organised into three types, or layers, known as the SPI model (Software, Platform, Infrastructure). In the same way as "standard" applications, Cloud services must be able to adapt autonomously in order to take account of changes in their environment. There is a great deal of work on dynamic adaptation, such as server consolidation and energy saving, but this work is generally specific to one of the layers and does not take the others into account. Yet, as Kephart et al. argued in 2000, even when adaptations are a priori independent of one another, they have an impact on the overall computing system in which they are applied. Consequently, an adaptation at the infrastructure layer can have an impact at the platform or application layers. The objective of this thesis is to provide support for adaptation that manages it as a concern orthogonal to the different layers, so as to ensure the coherence and effectiveness of the adaptation. To this end, we propose an abstraction capable of representing all of the layers, which serves as a support for defining reconfigurations. This abstraction relies on Model at Runtime (M@R) techniques, which carry the tools used at design time over to runtime, so that a new configuration can be defined, validated and applied while the system itself is executing. To show the usability of this abstraction, we present three experiments that demonstrate the extensibility and genericity of our solution, show that the performance overhead on the system (infrastructure or platform) is low, and show that the abstraction enables multi-layer adaptation.
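As a hedged sketch of the model-at-runtime principle described above: a single model represents all three layers, a candidate reconfiguration is built and validated on a copy of the model, and only a configuration that passes cross-layer consistency checks would be pushed to the running system. The model contents and validity rules here are invented for illustration.

```python
import copy

model = {
    "infrastructure": {"vm1": {"cpu": 4}, "vm2": {"cpu": 2}},
    "platform":       {"jvm1": {"host": "vm1"}},
    "software":       {"app1": {"runs_on": "jvm1"}},
}

def valid(m):
    # cross-layer consistency: every platform node must sit on a live VM,
    # and every application on a live platform node (hypothetical rules)
    return (all(p["host"] in m["infrastructure"] for p in m["platform"].values())
        and all(a["runs_on"] in m["platform"] for a in m["software"].values()))

def reconfigure(m, change):
    candidate = copy.deepcopy(m)
    change(candidate)                 # build the target configuration offline
    if not valid(candidate):
        raise ValueError("reconfiguration rejected: cross-layer inconsistency")
    return candidate                  # only now would it be pushed to the system

# removing vm1 alone is rejected, because jvm1 (platform layer) still needs it
def remove_vm1(m):
    del m["infrastructure"]["vm1"]

try:
    model = reconfigure(model, remove_vm1)
except ValueError as e:
    print(e)
```

The point of the abstraction is visible even at this scale: an adaptation issued at the infrastructure layer is checked against the platform and software layers before anything changes in the running system.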
35

On the probability of perfection of software-based systems

Zhao, Xingyu January 2016 (has links)
The probability of perfection has become of interest with the realization of its role in the reliability assessment of software-based systems. It is not only important on its own, but also in the reliability assessment of 1-out-of-2 diverse systems. By "perfection", we mean that the software will never fail in a specific operating environment. If we assume that failures of a software system can occur if and only if it contains faults, then perfection means that the system is "fault-free". Such perfection is possible for sufficiently simple software; however, it can never be certain, so the interest lies in claims about the probability of perfection. In this thesis, firstly two different probabilities of perfection – an objective parameter characterizing a population property and a subjective confidence in the perfection of the specific software of interest – are distinguished and discussed. Then a conservative Bayesian method is used to make claims about the probability of perfection from various types of evidence, i.e. failure-free testing evidence, process evidence and formal proof evidence. Also, a "quasi-perfection" notion is introduced as a potentially useful approach to cover some shortcomings of the perfection models. A possible framework to incorporate the various models is discussed at the end. There are generally two themes in this thesis: tackling the failure dependence issue in the reliability assessment of 1-out-of-2 diverse systems at both the aleatory and epistemic levels; and relaxing the well-known difficulty of specifying complete Bayesian priors by reasoning with only partial priors. Both are solved at the price of conservatism. In summary, this thesis provides three parallel sets of (quasi-)perfection models which could be used individually as a conservative end-to-end argument reasoning from various types of evidence to the reliability of a software-based system. Although in some cases the models here provide very conservative results, ways are proposed of dealing with the excessive conservatism. In other cases, the very conservative results could serve as warnings/support to safety engineers/regulators in the face of claims based on reasoning that is less rigorous than the reasoning in this thesis.
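As a toy illustration of Bayesian reasoning about perfection (a deliberately simple two-point prior, not one of the thesis's conservative partial-prior models): place prior probability p0 on the software being perfect and 1 − p0 on it having a known probability of failure per demand q; observing n failure-free demands then shifts the posterior towards perfection.

```python
def posterior_perfection(p0, q, n):
    """Posterior probability of perfection after n failure-free demands,
    under a two-point prior: perfect (pfd = 0) with prob p0, else pfd = q."""
    like_perfect = 1.0              # a perfect system never fails
    like_imperfect = (1 - q) ** n   # survival probability if imperfect
    return (p0 * like_perfect
            / (p0 * like_perfect + (1 - p0) * like_imperfect))

print(posterior_perfection(p0=0.5, q=1e-3, n=4600))  # ~0.99
```

The thesis's conservative models address the harder, realistic case in which such a complete prior cannot honestly be specified and only partial prior constraints are available.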
36

Correlation of affiliate performance against web evaluation metrics

Miehling, Mathew J. January 2014 (has links)
Affiliate advertising is changing the way that people do business online. Retailers are now offering incentives to third-party publishers for advertising goods and services on their behalf in order to capture more of the market. Online advertising spending has already overtaken that of traditional advertising in all other channels in the UK and is slated to do so worldwide as well [1]. In this highly competitive industry, the livelihood of a publisher is intrinsically linked to their web site performance. Understanding the strengths and weaknesses of a web site is fundamental to improving its quality and performance. However, the definition of performance may vary between different business sectors or even different sites in the same sector. In the affiliate advertising industry, the measure of performance is generally linked to the fulfilment of advertising campaign goals, which often equates to the ability to generate revenue or brand awareness for the retailer. This thesis aims to explore the correlation of web site evaluation metrics to the business performance of a company within an affiliate advertising programme. In order to explore this correlation, an automated evaluation framework was built to examine a set of web sites from an active online advertising campaign. A purpose-built web crawler examined over 4,000 sites from the advertising campaign in approximately 260 hours, gathering data to be used in the examination of URL similarity, URL relevance, search engine visibility, broken links, broken images and presence on a blacklist. The gathered data was used to calculate a score for each of the features, which were then combined to create an overall HealthScore for each publisher. The evaluated metrics focus on the categories of domain and content analysis. From the performance data available, it was possible to calculate the business performance of the 234 active publishers using the number of sales and click-throughs they achieved. When the HealthScores and performance data were compared, the HealthScore was able to predict a publisher's performance with 59% accuracy.
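A minimal sketch of the scoring step described above: each evaluated feature yields a score, and the scores are combined into a single site-level HealthScore. The feature names follow the abstract, but the [0, 1] scale, the example scores and the equal weighting here are hypothetical.

```python
def health_score(scores, weights):
    """Weighted average of per-feature scores; each score is in [0, 1],
    where higher means healthier (e.g. fewer broken links)."""
    total = sum(weights.values())
    return sum(scores[f] * w for f, w in weights.items()) / total

scores = {"url_similarity": 0.8, "url_relevance": 0.7, "search_visibility": 0.6,
          "broken_links": 0.9, "broken_images": 1.0, "blacklisted": 1.0}
weights = {f: 1.0 for f in scores}   # equal weighting, purely illustrative

print(round(health_score(scores, weights), 3))  # 0.833
```

Correlating such a score against sales and click-through data per publisher is then what yields the 59% predictive accuracy reported above.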
37

Direct methods for deductive verification of temporal properties in continuous dynamical systems

Sogokon, Andrew January 2016 (has links)
This thesis is concerned with the problem of formal verification of correctness specifications for continuous and hybrid dynamical systems. Our main focus will be on developing and automating general proof principles for temporal properties of systems described by non-linear ordinary differential equations (ODEs) under evolution constraints. The proof methods we consider will work directly with the differential equations and will not rely on the explicit knowledge of solutions, which are in practice rarely available. Our ultimate goal is to increase the scope of formal deductive verification tools for hybrid system designs. We give a comprehensive survey and comparison of available methods for checking set invariance in continuous systems, which provides a foundation for safety verification using inductive invariants. Building on this, we present a technique for constructing discrete abstractions of continuous systems in which spurious transitions between discrete states are entirely eliminated, thereby extending previous work. We develop a method for automatically generating inductive invariants for continuous systems by efficiently extracting reachable sets from their discrete abstractions. To reason about liveness properties in ODEs, we introduce a new proof principle that extends and generalizes methods that have been reported previously and is highly amenable to use as a rule of inference in a deductive verification calculus for hybrid systems. We will conclude with a summary of our contributions and directions for future work.
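To illustrate the "direct" style of reasoning (working with the differential equations rather than their solutions), the following sketch uses sympy to check a standard sufficient condition for set invariance on a toy system of ours, not one from the thesis: if the Lie derivative of g along the vector field is nonnegative everywhere, then g never decreases along trajectories, so the set {g ≥ 0} is invariant.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (-y, x - y)          # vector field: x' = -y, y' = x - y
g = 1 - x**2 - y**2      # candidate invariant: the closed unit disc {g >= 0}

# Lie derivative of g along f: the rate of change of g along trajectories
lie_g = sp.simplify(f[0] * sp.diff(g, x) + f[1] * sp.diff(g, y))
print(lie_g)                 # 2*y**2
print(lie_g.is_nonnegative)  # True: g never decreases, so the disc is invariant
```

No closed-form solution of the ODE is needed; only symbolic differentiation and a sign condition, which is what makes such proof rules amenable to automation in a deductive verification tool.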
38

Exploiting tightly-coupled cores

Bates, Daniel January 2014 (has links)
As we move steadily through the multicore era, and the number of processing cores on each chip continues to rise, parallel computation becomes increasingly important. However, parallelising an application is often difficult because of dependencies between different regions of code which require cores to communicate. Communication is usually slow compared to computation, and so restricts the opportunities for profitable parallelisation. In this work, I explore the opportunities provided when communication between cores has a very low latency and low energy cost. I observe that there are many different ways in which multiple cores can be used to execute a program, allowing more parallelism to be exploited in more situations, and also providing energy savings in some cases. Individual cores can be made very simple and efficient because they do not need to exploit parallelism internally. The communication patterns between cores can be updated frequently to reflect the parallelism available at the time, allowing better utilisation than specialised hardware which is used infrequently. In this dissertation I introduce Loki: a homogeneous, tiled architecture made up of many simple, tightly-coupled cores. I demonstrate the benefits in both performance and energy consumption which can be achieved with this arrangement and observe that it is also likely to have lower design and validation costs and be easier to optimise. I then determine exactly where the performance bottlenecks of the design are, and where the energy is consumed, and look into some more-advanced optimisations which can make parallelism even more profitable.
39

Causal effects of wiki site design on anxiety and usability

Cowan, Benjamin Richard January 2011 (has links)
Within society, Information Technology (IT) is becoming pervasive. Nowhere is this more pronounced than in Higher Education, where IT is almost ubiquitously used. Current developments have also seen Web 2.0 tools such as wikis being used in pedagogical contexts. Research in computer anxiety has identified that the quality of initial experience may be important in the onset of anxiety towards IT. However, the concept of computer anxiety is too vague to reflect likely reactions to specific IT scenarios, especially in interactions with social technology such as wikis. Although wikis are growing in popularity, little is known about users' emotional reactions towards contributing to them, how their experiences shape these emotions, or the users' view of usability beyond that mentioned in qualitative research. Owing to the interface, social and flexible nature of wikis, users may be anxious towards editing. This research aims to offer causal insight into the influence of wiki site design characteristics on anxiety towards wiki editing and on users' usability evaluation of wiki editing experiences. Three experiment-based studies are presented addressing the effects of site characteristics such as in-built training spaces (i.e. tutorials and sandboxes commonly used on wikis), user editing identity, and aspects inherent to wiki sites such as content flexibility, on the anxiety felt by users in editing scenarios and users' usability rating of their editing experiences. The research also aimed to identify whether initial experiences affected anxiety about further editing, as suggested by computer anxiety research, or whether emotions are only affected during the editing experience. The findings of the initial study on in-built training spaces suggest that the concept of wiki anxiety measured in this research more accurately reflects anxiety experienced during interaction than computer anxiety does. Additionally, in-built training spaces using tutorials were seen to lead to better first experiences for novice users of the wiki markup interface than the alternatives (sandbox training spaces and no training). Similarly, the presence of a tutorial reduced wiki anxiety during interaction but did not affect anxiety towards future editing. From these findings the work advanced to study the effect of identity salience on wiki anxiety during editing and on wiki usability, focusing on contributing content using a user group with experience of editing wikis. This was to explore the effect of wiki characteristics on user experience variables beyond the first-exposure anxiety likely in novice users. The research found that participants were less anxious when editing the wiki anonymously than when editing under a pseudonym or their full name. There was, however, no effect of identity salience on usability rating. Additionally, the type of edit conducted by participants, in terms of addition or deletion and replacement of content, did not have a significant effect on either anxiety during editing or usability evaluation. Further research exploring the effect of flexibility and other user behaviour on user anxiety and usability evaluation when contributing subsequently found no significant effect of flexibility on the wiki user experience variables. The work demonstrates that successful empirical evaluation of the wiki user editing experience can be achieved and can lead to important causal insight into the effects of wiki site design on the users' experience. It also identifies aspects of the site that can lead to the reduction of anxiety towards editing during interaction and influence usability rating of the system.
40

Memory consistency directed cache coherence protocols for scalable multiprocessors

Elver, Marco Iskender January 2016 (has links)
The memory consistency model, which formally specifies the behavior of the memory system, is used by programmers to reason about parallel programs. From a hardware design perspective, weaker consistency models permit various optimizations in a multiprocessor system: this thesis focuses on designing and optimizing the cache coherence protocol for a given target memory consistency model. Traditional directory coherence protocols are designed to be compatible with the strictest memory consistency model, sequential consistency (SC). When they are used for chip multiprocessors (CMPs) that provide more relaxed memory consistency models, such protocols turn out to be unnecessarily strict. Usually, this comes at the cost of scalability, in terms of per-core storage for sharer tracking, which poses a problem with the increasing number of cores in today's CMPs, most of which are no longer sequentially consistent. The recent convergence towards programming-language-based relaxed memory consistency models has sparked renewed interest in lazy cache coherence protocols. These protocols exploit synchronization information by enforcing coherence only at synchronization boundaries via self-invalidation. As a result, such protocols do not require sharer tracking, which benefits scalability. On the downside, such protocols are only readily applicable to a restricted set of consistency models, such as Release Consistency (RC), which expose synchronization information explicitly. In particular, existing architectures with stricter consistency models (such as x86) cannot readily make use of lazy coherence protocols without either adapting the protocol to satisfy the stricter consistency model, or changing the architecture's consistency model to (a variant of) RC, typically at the expense of backward compatibility. The first part of this thesis explores both these options, with a focus on a practical approach satisfying backward compatibility. Because of the wide adoption of Total Store Order (TSO) and its variants in x86 and SPARC processors, and the existing parallel programs written for these architectures, we first propose TSO-CC, a lazy cache coherence protocol for the TSO memory consistency model. TSO-CC does not track sharers and instead relies on self-invalidation and detection of potential acquires (in the absence of explicit synchronization), using per-cache-line timestamps to efficiently and lazily satisfy the TSO memory consistency model. Our results show that TSO-CC achieves, on average, performance comparable to a MESI directory protocol, while TSO-CC's storage overhead per cache line scales logarithmically with increasing core count. Next, we propose an approach for the x86-64 architecture, which is a compromise between retaining the original consistency model and using a more storage-efficient lazy coherence protocol. First, we propose a mechanism to convey synchronization information via a simple ISA extension, while retaining backward compatibility with legacy codes and older microarchitectures. Second, we propose RC3 (based on TSO-CC), a scalable cache coherence protocol for RCtso, the resulting memory consistency model. RC3 does not track sharers and relies on self-invalidation on acquires. To satisfy RCtso efficiently, the protocol reduces self-invalidations transitively using per-L1 timestamps only. RC3 outperforms a conventional lazy RC protocol by 12%, achieving performance comparable to a MESI directory protocol for RC-optimized programs. RC3's storage overhead per cache line scales logarithmically with increasing core count and reduces on-chip coherence storage overheads by 45% compared to TSO-CC. Finally, it is imperative that hardware adheres to the promised memory consistency model. Indeed, consistency-directed coherence protocols cannot be verified against conventional coherence definitions (e.g. SWMR), and few existing verification methodologies apply. Furthermore, as the full consistency model is used as a specification, the protocols' interaction with other components of a system (e.g. the pipeline) must not be neglected in the verification process. Therefore, verifying a system with such protocols in the context of interacting components is even more important than before. One common way to do this is by executing tests, where specific threads of instruction sequences are generated and their executions are checked for adherence to the consistency model. It would be extremely beneficial to execute such tests under simulation, i.e. when the functional design implementation of the hardware is being prototyped. Most prior verification methodologies, however, target post-silicon environments, which, when used for simulation-based memory consistency verification, would be too slow. We propose McVerSi, a test generation framework for fast memory consistency verification of a full-system design implementation under simulation. Our primary contribution is a Genetic Programming (GP) based approach to memory consistency test generation, which relies on a novel crossover function that prioritizes memory operations contributing to non-determinism, thereby increasing the probability of uncovering memory consistency bugs. To guide tests towards exercising as much logic as possible, the simulator's reported coverage is used as the fitness function. Furthermore, we increase test throughput by making the test workload simulation-aware. We evaluate our proposed framework using the Gem5 cycle-accurate simulator in full-system mode with Ruby (with configurations that use Gem5's MESI protocol, and our proposed TSO-CC together with an out-of-order pipeline). We discover two new bugs in the MESI protocol due to the faulty interaction of the pipeline and the cache coherence protocol, highlighting that even conventional protocols should be verified rigorously in the context of a full system. Crucially, these bugs would not have been discovered through individual verification of the pipeline or the coherence protocol. We study 11 bugs in total. Our GP-based test generation approach finds all bugs consistently, therefore providing much higher guarantees compared to alternative approaches (pseudo-random test generation and litmus tests).
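As a hedged sketch of the crossover idea just described (our own simplified representation, not McVerSi's implementation): tests are per-thread lists of memory operations, and the crossover point in each thread is biased towards operations on addresses shared between threads, since those are the operations contributing to non-determinism.

```python
import random

# an operation is (kind, addr); addresses touched by more than one thread
# are the "racy" ones that can expose consistency bugs
def racy_addrs(test):
    seen = {}
    for tid, thread in enumerate(test):
        for _, addr in thread:
            seen.setdefault(addr, set()).add(tid)
    return {a for a, tids in seen.items() if len(tids) > 1}

def crossover(parent_a, parent_b, bias=4.0):
    shared = racy_addrs(parent_a) | racy_addrs(parent_b)
    child = []
    for ta, tb in zip(parent_a, parent_b):
        # weight each cut point: cuts landing on a racy op are `bias` times
        # more likely than cuts landing elsewhere
        weights = [bias if i < len(ta) and ta[i][1] in shared else 1.0
                   for i in range(len(ta) + 1)]
        cut = random.choices(range(len(ta) + 1), weights=weights)[0]
        child.append(ta[:cut] + tb[cut:])
    return child

t0 = [[("st", "x"), ("ld", "y")], [("st", "y"), ("ld", "x")]]
t1 = [[("st", "x"), ("st", "z")], [("ld", "z"), ("ld", "x")]]
print(crossover(t0, t1))
```

In the full framework, children produced this way are executed under simulation and selected by the simulator's reported coverage, closing the GP loop described above.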
