About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Domain-specific languages for massively parallel processors

Cartey, Luke January 2013 (has links)
Massively parallel processors provide significantly higher peak-performance figures than other forms of general-purpose processor. However, this comes at a cost to the developer, who must deal with an increasingly complicated piece of hardware, for which applications need to be tweaked and optimised to achieve high performance. Domain-specific languages have been proposed as a potential solution to this complexity problem: generating GPU applications from high-level, declarative specifications. This thesis explores two related questions: firstly, whether it is practical to synthesise GPU applications from such high-level languages; and secondly, how we can simplify the creation of such DSLs. We propose a novel approach whereby, rather than considering single domains, we consider collections of collaborative domains in order to share common features and thus reduce the cost of development. We achieve this using a DSLs-within-a-DSL approach: a custom-designed host language into which extensions may be embedded. In order to ground our approach in a real case study, we propose, design and develop a DSLs-within-a-DSL framework for bioinformatics. We use a restricted recursive functional language as the host language, and embed new DSLs into this language. Importantly, we describe how we can use a combination of novel and adopted automatic parallelisation techniques to synthesise a massively parallel program for a GPU. This automatic parallelisation, achieved through the discovery of a schedule and through program synthesis techniques using the polyhedral model, interacts productively with our embedded extensions. To further simplify development, we provide a series of customisable heuristics for choosing GPU parameters such as the block size (number of threads), the grid size, and the location of data structures in the memory hierarchy. This encapsulates GPU expertise within the compiler itself. We finally demonstrate that the combination of these techniques results in applications with competitive performance, at much lower development cost and with greater flexibility than comparable hand-coded applications.
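To give a flavour of the kind of GPU-parameter heuristic such a compiler can encapsulate, the sketch below derives a 1-D launch configuration from a problem size. The rules of thumb used here (warp-multiple block sizes, a default block of 256 threads) are illustrative assumptions, not the thesis's actual heuristics.

```python
# Illustrative block/grid-size heuristic for a 1-D GPU kernel launch:
# round the block size to a multiple of the warp size (32 threads) and
# derive the grid size from the problem size. A rule-of-thumb sketch,
# not the heuristics implemented in the thesis.
WARP_SIZE = 32

def launch_config(n_elements, max_threads_per_block=1024, target_block=256):
    """Pick (grid_size, block_size) for a kernel over n_elements items."""
    block = min(target_block, max_threads_per_block)
    block = max(WARP_SIZE, (block // WARP_SIZE) * WARP_SIZE)  # multiple of warp size
    grid = (n_elements + block - 1) // block                  # ceiling division
    return grid, block

print(launch_config(10_000))   # (40, 256): 40 blocks of 256 threads
```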
2

AMBER : a domain-aware template-based system for data extraction

Cheng, Wang January 2015 (has links)
The web is the greatest information source in human history, yet finding all offers for flats with gardens in London, Paris, and Berlin, or all restaurants open after a screening of the latest blockbuster, remains a hard task, as that data is not easily amenable to processing. Extracting web data into databases for easier processing has been a resource-intensive process, requiring human supervision for every source from which to extract. This has been changing with approaches that replace human annotators with automated annotations. Such approaches have been successfully applied in restricted settings such as single-attribute extraction or domains with significant redundancy among sources. Multi-attribute objects are often presented on (i) result pages, where multiple objects appear on a single page as lists, tables or grids, each with its most important attributes and a summary description, and (ii) detail pages, where each page provides a detailed list of attributes and a long description for a single entity, often in rich format. Result and detail pages each have their own advantages: extracting objects from result pages is orders of magnitude faster than from detail pages, and the links to detail pages are often only accessible through result pages; detail pages, on the other hand, carry a complete list of attributes and a full description of the entity. Early web data extraction approaches required manual annotations for each web site to reach high accuracy, while a number of domain-independent approaches focused only on unsupervised repeated-structure segmentation. The former are limited in scalability and automation, while the latter lack accuracy. Recent automated data extraction systems are often informed with an ontology and a set of object and attribute recognisers; however, they have focused on extracting simple objects with few attributes from single-entity pages and have avoided result pages. We present AMBER, an automatic ontology-based multi-attribute object extraction system that deals with both result and detail pages, achieves very high accuracy (>96%) with zero site-specific supervision, and solves practical issues that arise in real-life data extraction tasks. AMBER is also an important component of DIADEM, the first automatic full-site extraction system able to extract structured data from different domains without site-specific supervision, which has been tested through a large-scale evaluation (>10,000 sites). On the result page side, AMBER achieves high accuracy through a novel domain-aware, path-based template discovery algorithm, and integrates annotations for all parts of the extraction, from identifying the primary list of objects, over segmenting the individual objects, to aligning the attributes. Yet AMBER is able to tolerate significant noise in the annotations, by combining them with a novel algorithm for finding regular structures based on XPath expressions that capture regular tree structures. On the detail page side, AMBER seamlessly integrates boilerplate removal, dynamic-list identification and page-dissimilarity calculation to identify the data region, then employs a set of fairly simple and cheaply computable features for attribute extraction. In addition, AMBER is the first approach to combine result page and detail page extraction, by integrating attributes extracted from result pages with the attributes found on corresponding detail pages.
AMBER is able to identify attributes of objects with near-perfect accuracy and to extract dozens of attributes with >96% accuracy across several domains, even in the presence of significant noise. It outperforms uninformed, automated approaches by a wide margin if given an ontology. Even in the absence of an ontology, AMBER outperforms most previous systems on record segmentation.
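To illustrate the flavour of path-based template discovery, the sketch below groups the elements of a result page by their root-to-node tag path and treats heavily repeated paths as candidate record templates. This is a deliberately simplified illustration, not AMBER's actual algorithm; the function name and the repetition threshold are assumptions.

```python
# Minimal sketch of path-based template discovery on a result page:
# elements sharing the same root-to-node tag path many times over are
# candidates for the repeated record structure. Not AMBER's algorithm.
from collections import defaultdict
from lxml import html

def find_repeated_records(page_source, min_repeats=5):
    tree = html.fromstring(page_source)
    buckets = defaultdict(list)
    for el in tree.iter():
        if not isinstance(el.tag, str):
            continue  # skip comments and processing instructions
        ancestors = reversed(list(el.iterancestors()))
        path = "/".join(a.tag for a in ancestors) + "/" + el.tag
        buckets[path].append(el)
    # Paths matched by many nodes indicate a regular, repeated structure.
    return {p: els for p, els in buckets.items() if len(els) >= min_repeats}
```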
3

Semantics and refinement for a concurrent object oriented language

Monteiro Borba, Paulo Henrique January 1995 (has links)
FOOPS is a concurrent object oriented specification language with an executable subset. In this thesis we propose an extension of FOOPS with features for specifying systems of distributed and autonomous objects. This extension supports most features of concurrent object oriented programming, including classes of objects with associated methods and attributes, object identity, dynamic object creation and deletion, overloading, polymorphism, inheritance with overriding, dynamic binding, concurrency, nondeterminism, atomic execution, evaluation of method expressions as background processes, and object protection. The main contribution of this thesis is to develop a framework for supporting formal development of software in the extension of FOOPS mentioned above. In particular, we introduce a structural operational semantics for FOOPS, a notion of refinement for concurrent object oriented programs, congruence properties of refinement of FOOPS programs, and tools for mechanising refinement proofs. The operational semantics is the core of the formal definition of FOOPS. It is used to define notions of refinement for FOOPS states, programs, and specifications. Those notions and associated proof techniques for proving refinement are used to illustrate stepwise formal development of programs in FOOPS. The congruence properties of refinement (with respect to some of FOOPS operators) justify compositional development of software in FOOPS. The tools help to validate the framework introduced in this thesis and motivate its use in practice.
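The congruence results mentioned above can be stated schematically; the following is the standard shape of such a property (a sketch of the general form, not the thesis's exact statement, which covers only some of the FOOPS operators):

```latex
% Refinement as a precongruence (schematic form; the thesis establishes this
% only for some of the FOOPS operators):
P \sqsubseteq Q \;\Longrightarrow\; C[P] \sqsubseteq C[Q]
\qquad \text{for every context } C[\cdot] \text{ built from the operators concerned.}
```

It is this property that licenses compositional development: a component may be refined in isolation and substituted into a larger program without re-proving the whole.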
4

Specification, implementation and verification of refactorings

Schaefer, Max January 2010 (has links)
Refactoring is the process of reorganising or restructuring code by means of behaviour-preserving program transformations, themselves called refactorings. Most modern development environments come with built-in support for refactoring in the form of automated refactorings that the user can perform at the push of a button. Implementing refactorings is notoriously complex, however, and even state-of-the-art implementations have very low standards of correctness and can introduce subtle changes of behaviour into refactored programs. In this thesis, we develop concepts and techniques that make it possible to give concise, modular specifications of refactorings. These specifications are precise enough to cover all details of the object language, and thus give rise to full-featured, high-quality refactoring implementations. Their modularity, on the other hand, makes them amenable to formal proof, and hence opens the door to the rigorous verification of refactorings. We discuss a disciplined approach to maintaining name bindings and avoiding name capture by treating the binding from a name to the declaration it refers to as a dependency that the refactoring has to preserve. This approach readily generalises to other types of dependencies for capturing control flow, data flow and synchronisation behaviour. To implement complex refactorings, it is often helpful for the refactoring to internally work on a richer language with language extensions that make the transformation easier to express. We show how this allows the decomposition of refactorings into small microrefactorings that can be specified, implemented and verified in isolation. We evaluate our approach by giving specifications and implementations of many commonly used refactorings that are concise, yet match the implementations in the popular Java development environment Eclipse in terms of features, and outperform them in terms of correctness. We give detailed informal correctness proofs for some of our specifications, which are greatly aided by their modular structure. Finally, we discuss a rigorous formalisation of the central name binding framework used by most of our specifications in the theorem prover Coq, and show how its correctness can be established mechanically.
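The "binding as a dependency" idea can be made concrete in a small, self-contained model: scopes are dicts from names to declaration identities, a use site carries its lexical scope chain, and a rename is accepted only if every use still resolves to the same declaration afterwards. This is an illustrative sketch under those toy assumptions, not the thesis's specification or Eclipse's machinery.

```python
# Toy model of binding preservation for a rename refactoring.
# A scope chain is a list of dicts (outermost first) mapping names to
# declaration ids; a use site is (scope_chain, name).

def resolve(scope_chain, name):
    for scope in reversed(scope_chain):  # innermost scope wins
        if name in scope:
            return scope[name]
    return None

def rename_is_safe(use_sites, old, new, decl):
    """Rename declaration `decl` from `old` to `new`; safe iff every use
    resolves to the same declaration before and after (no capture)."""
    before = [resolve(chain, name) for chain, name in use_sites]
    after = []
    for chain, name in use_sites:
        new_chain = [{(new if (k == old and v == decl) else k): v
                      for k, v in scope.items()} for scope in chain]
        # Uses bound to the renamed declaration follow it to the new name.
        new_name = new if (name == old and resolve(chain, name) == decl) else name
        after.append(resolve(new_chain, new_name))
    return before == after

outer, inner = {"y": 1}, {"x": 2}
uses = [([outer, inner], "x"), ([outer, inner], "y")]
assert not rename_is_safe(uses, "x", "y", 2)   # outer y would be captured
assert rename_is_safe(uses, "x", "z", 2)       # fresh name: all bindings kept
```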
5

A refinement calculus for Z

Cavalcanti, Ana January 1997 (has links)
The lack of a method for developing programs from Z specifications is a difficulty that is now widely recognised. As a contribution to solving this problem, we present ZRC, a refinement calculus based on Morgan's work that incorporates the Z notation and follows its style and conventions. Other refinement techniques have been proposed for Z; ZRC builds upon some of them, but distinguishes itself in that it is completely formalised. Like several other refinement techniques, ZRC is formalised in terms of weakest preconditions. In order to define the semantics of its language, ZRC-L, we construct a weakest-precondition semantics for Z based on a relational semantics proposed by the Z standards panel. The resulting definition is not unexpected, but its construction provides evidence for its suitability and, additionally, establishes connections between predicate transformers and two different relational models. The weakest-precondition semantics of the remaining constructs of ZRC-L justifies several assumptions that permeate the formalisation of Morgan's refinement calculus. Based on the semantics of ZRC-L, we derive all the laws of ZRC. Typically, the refinement of a schema in ZRC begins with the application of a conversion law that translates it into a notation convenient for refinement, and proceeds with the application of refinement laws. The conversion laws of ZRC formalise the main strategies and rules of translation available in the literature; its set of refinement laws is extensive and includes support for procedures, parameters, recursion, and data refinement. Morgan and Back have proposed different formalisations of procedures and parameters in the context of refinement techniques. We investigate a surprising and intricate relationship between these works and the substitution operator that renames the free variables of a program, and reveal an inconsistency in Morgan's calculus. Back's approach does not suffer from this inconsistency, but he does not present refinement laws. We benefit from both works and use a model based on Back's formalism to derive refinement laws similar to those in Morgan's calculus. Furthermore, we derive additional laws that formalise Morgan's approach to recursion. Three case studies illustrate the application of ZRC. They show that ZRC can be useful as a technique of formal program development, but they are by no means enough to ascertain the general adequacy of its conversion and refinement laws. Indeed, since Z does not enforce a specific style of structuring specifications, it is likely that new laws will prove useful for particular system specifications: two of our case studies exemplify this situation. Our hope is that ZRC and its formalisation will encourage further investigation into the refinement of Z specifications and the proper justification of any emerging strategies or techniques.
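As background, refinement in a weakest-precondition setting is standardly characterised as follows (this is the general notion underlying calculi such as ZRC, not its Z-specific laws):

```latex
% Q refines P when Q establishes every postcondition that P establishes:
P \sqsubseteq Q \;\iff\; \forall \psi .\; \big(\, wp(P, \psi) \Rightarrow wp(Q, \psi) \,\big)
```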
6

Modulation of target cells induced by Crimean-Congo hemorrhagic fever virus : its contribution to the pathogenesis of the disease

De Oliveira Rodrigues, Raquel 11 January 2012 (has links)
Crimean-Congo hemorrhagic fever virus (CCHFV) is a widely distributed tick-borne member of the Nairovirus genus (family Bunyaviridae), with an estimated average mortality rate of 30% in humans. CCHFV causes a severe hemorrhagic disease in infected patients that includes, among other symptoms, acute inflammation and liver lesions. Mononuclear phagocytes, hepatocytes and endothelial cells have been described as the main target cells in human clinical and post-mortem studies as well as in mouse-model studies. We analysed the in vitro cellular response of host antigen-presenting cells (APCs) and human hepatocytes. To better elucidate the pathogenesis of CCHFV, we then compared the response of these cells after infection with Dugbe virus (DUGV), a mildly pathogenic nairovirus genetically close to CCHFV. In order to improve DUGV detection in vitro and in epidemiological field studies, we also developed a real-time molecular assay to detect and quantify DUGV. We found that CCHFV induced an inflammatory response in both types of APC tested; however, DUGV induced a stronger cytokine/chemokine response in these target cells than CCHFV. Our results suggest that CCHFV is able to selectively inhibit the activation of some inflammatory mediators during in vitro infection, and that the differences between the CCHFV and DUGV cellular responses could be relevant to pathogenesis. When we infected hepatocytes in vitro with CCHFV, we observed that the virus induced ER stress, activated IL-8 secretion and modulated both the mitochondrial and death-receptor pathways of apoptosis.
When we compared this cellular response with that induced by DUGV, we found that the most striking difference was the absence of apoptosis. These differences could, in part, explain the role of the liver in the pathogenesis induced by CCHFV.
7

Efficient approaches to simulating individual-based cell population models

Harvey, Daniel Gordon January 2013 (has links)
Computational modelling of populations of cells has been applied to further understanding in a range of biological fields, from cell sorting to tumour development. The ability to analyse the emergent population-level effects of variation at the cellular and subcellular level makes it a powerful approach. As more detailed models have been proposed, the demand for computational power has increased. While developments in microchip technology continue to increase the power of individual compute units available to the research community, the use of parallel computing offers an immediate increase in available computing power. To make full use of parallel computing technology it is necessary to develop specialised algorithms. To that end, this thesis is concerned with the development, implementation and application of a novel parallel algorithm for the simulation of an off-lattice individual-based model of a population of cells. We first use the Message Passing Interface to develop a parallel algorithm for the overlapping spheres model which we implement in the Chaste software library. We draw on approaches for parallelising molecular dynamics simulations to develop a spatial decomposition approach to dividing data between processors. By using functions designed for saving and loading the state of simulations, our implementation allows for the parallel simulation of all subcellular models implemented in Chaste, as well as cell-cell interactions that depend on any of the cell state variables. Our implementation allows for faithful replication of model cells that migrate between processors during a simulation. We validate our parallel implementation by comparing results with the extensively tested serial implementation in Chaste. While the use of the Message Passing Interface means that our algorithm may be used on shared- and distributed-memory systems, we find that parallel performance is limited due to high communication costs. To address this we apply a series of optimisations that improve the scaling of our algorithm both in terms of compute time and memory consumption for given benchmark problems. To demonstrate an example application of our work to a biological problem, we extend our algorithm to enable parallel simulation of the Subcellular Element Model (S.A. Sandersius and T.J. Newman. Phys. Biol., 5:015002, 2008). By considering subcellular biomechanical heterogeneity we study the impact of a stiffer nuclear region within cells on the initiation of buckling of a compressed epithelial layer. The optimised parallel algorithm decreases computation time for a single simulation in this study by an order of magnitude, reducing computation time from over a week to a single day.
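A minimal sketch of the spatial-decomposition idea using mpi4py is given below: the domain is cut into horizontal strips, each process owns the cells in its strip, and cells migrate between processes when they cross a strip boundary. The strip decomposition, the cell representation and the function names are illustrative assumptions, not Chaste's actual parallel implementation.

```python
# Sketch of spatial decomposition with cell migration between MPI ranks.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
DOMAIN_HEIGHT = 1.0
strip = DOMAIN_HEIGHT / size   # rank r owns y in [r * strip, (r + 1) * strip)

def owner(cell):
    """Rank responsible for a cell, from the y-coordinate of its centre."""
    y = cell["pos"][1] % DOMAIN_HEIGHT
    return min(int(y / strip), size - 1)

def migrate(cells):
    """After a timestep, send each cell to the rank that owns its position."""
    outgoing = [[] for _ in range(size)]
    for cell in cells:
        outgoing[owner(cell)].append(cell)
    incoming = comm.alltoall(outgoing)   # object-based all-to-all exchange
    return [c for bucket in incoming for c in bucket]
```

In a full implementation each rank would also exchange a halo of cells near its boundaries each timestep, so that short-range cell-cell forces can be computed without owning the neighbouring cells; that communication cost is exactly what the optimisations described above target.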
8

Program analysis with interpolants

Weissenbacher, Georg January 2010 (has links)
This dissertation discusses novel techniques for interpolation-based software model checking, an approximate method which uses Craig interpolation to compute invariants of programs. Our work addresses two aspects of program analyses based on model checking: verification (the construction of correctness proofs for programs) and falsification (the detection of counterexamples that violate the specification). In Hoare's calculus, a proof of correctness comprises assertions which establish that a program adheres to its specification. The principal challenge is to derive appropriate assertions and loop invariants. Contemporary software verification tools use Craig interpolation (as opposed to traditional predicate transformers such as the weakest precondition) to derive approximate assertions. The performance of the model checker is contingent on the Craig interpolants computed. We present novel interpolation techniques which provide the following advantages over existing methods. Firstly, the resulting interpolants are sound with respect to the bit-level semantics of programs, which is an improvement over interpolation systems that use linear arithmetic over the reals to approximate bit-vector arithmetic and/or do not support bit-level operations. Secondly, our interpolation systems afford us a choice of interpolants and enable us to fine-tune their logical strength and structure. In contrast, existing procedures are limited to a single ad-hoc choice of an interpolant. Interpolation-based verification tools are typically forced to refine an initial approximation repeatedly in order to achieve the accuracy required to establish or refute the correctness of a program. The detection of a counterexample containing a repetitive construct may necessitate one refinement step (involving the computation of additional interpolants) for each iteration of the loop. We present a heuristic that aims to avoid the repeated and computationally expensive construction of interpolants, thus enabling the detection of deeply buried defects such as buffer overflows. Finally, we present an implementation of our techniques and evaluate them on a set of standardised device driver and buffer overflow benchmarks.
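For reference, the standard definition of a Craig interpolant as used in software model checking is the following (general background, not the specific interpolation systems of the thesis):

```latex
% For an unsatisfiable conjunction A \land B, an interpolant I satisfies:
A \Rightarrow I, \qquad
I \land B \ \text{is unsatisfiable}, \qquad
\mathrm{Vars}(I) \subseteq \mathrm{Vars}(A) \cap \mathrm{Vars}(B)
```

Because $I$ is expressed only over the shared variables, it serves as a compact over-approximation of the states reachable through $A$ that still suffices to rule out the error path through $B$; this is what makes interpolants usable as candidate assertions and loop invariants.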
9

Stream fusion : practical shortcut fusion for coinductive sequence types

Coutts, Duncan January 2011 (has links)
In functional programming it is common practice to build modular programs by composing functions where the intermediate values are data structures such as lists or arrays. A desirable optimisation for programs written in this style is to fuse the composed functions and thereby eliminate the intermediate data structures and their associated runtime costs. Stream fusion is one such fusion optimisation that can eliminate intermediate data structures, including lists, arrays and other abstract data types that can be viewed as coinductive sequences. The fusion transformation can be applied fully automatically by a general purpose optimising compiler. The stream fusion technique itself has been presented previously and many practical implementations exist. The primary contributions of this thesis address the issues of correctness and optimisation: whether the transformation is correct and whether the transformation is an optimisation. Proofs of shortcut fusion laws have typically relied on parametricity by making use of free theorems. Unfortunately, most functional programming languages have semantics for which classical free theorems do not hold unconditionally; additional side conditions are required. In this thesis we take an approach based not on parametricity but on data abstraction. Using this approach we prove the correctness of stream fusion for lists -- encompassing the fusion system as a whole, not merely the central fusion law. We generalise this proof to give a framework for proving the correctness of stream fusion for any abstract data type that can be viewed as a coinductive sequence and give as an instance of the framework, a simple model of arrays. The framework requires that each fusible function satisfies a simple data abstraction property. We give proofs of this property for several standard list functions. Previous empirical work has demonstrated that stream fusion can be an optimisation in many cases. In this thesis we take a more universal view and consider the issue of optimisation independently of any particular implementation or compiler. We make a semi-formal argument that, subject to certain syntactic conditions on fusible functions, stream fusion on lists is strictly an improvement, as measured by the number of allocations of data constructors. This detailed analysis of how stream fusion works may be of use in writing fusible functions or in developing new implementations of stream fusion.
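The core stream representation can be sketched in a few lines. The following Python stand-in mirrors the step-function structure of the Haskell original, with Yield/Skip/Done results (the Skip result is what lets functions like filter avoid recursion); it is an illustration of the idea, not the thesis's implementation.

```python
# A stream is an unfold: a step function plus a state. map/filter compose by
# wrapping the step function, so no intermediate list is ever built.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Stream:
    step: Callable[[Any], tuple]  # state -> ("yield", x, s') | ("skip", s') | ("done",)
    state: Any

def from_list(xs):
    def step(i):
        return ("yield", xs[i], i + 1) if i < len(xs) else ("done",)
    return Stream(step, 0)

def smap(f, s):
    def step(st):
        r = s.step(st)
        return ("yield", f(r[1]), r[2]) if r[0] == "yield" else r
    return Stream(step, s.state)

def sfilter(p, s):
    def step(st):
        r = s.step(st)
        if r[0] == "yield" and not p(r[1]):
            return ("skip", r[2])   # Skip keeps filter non-recursive
        return r
    return Stream(step, s.state)

def to_list(s):
    out, st = [], s.state
    while True:
        r = s.step(st)
        if r[0] == "done":
            return out
        if r[0] == "yield":
            out.append(r[1])
        st = r[-1]

# The whole pipeline runs in a single loop, with no intermediate lists:
evens_squared = smap(lambda x: x * x, sfilter(lambda x: x % 2 == 0, from_list([1, 2, 3, 4])))
print(to_list(evens_squared))   # [4, 16]
```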
10

Formal methods for the analysis of wireless network protocols

Fruth, Matthias January 2011 (has links)
In this thesis, we present novel software technology for the analysis of wireless networks, an emerging area of computer science. To address the widely acknowledged lack of formal foundations in this field, we use probabilistic model checking, a formal method for verification and performance analysis. In contrast to testing and simulation, it systematically explores the full state space and therefore allows reasoning about all possible behaviours of a system. This thesis contributes to the design, modelling, and analysis of ad-hoc networks and randomised distributed coordination protocols. First, we present a new hybrid approach that effectively combines probabilistic model checking with state-of-the-art models from the simulation community in order to improve the reliability of design and analysis of wireless sensor networks and their protocols. We describe algorithms for the automated generation of models for both analysis methods, and their implementation in a tool. Second, we study spatial properties of wireless sensor networks, mainly with respect to Quality of Service and energy properties. Third, we investigate the contention resolution protocol of the networking standard ZigBee. We build a generic stochastic model for this protocol and analyse its Quality of Service and energy properties. Furthermore, we assess the applicability of different interference models. Fourth, we explore slot allocation protocols, which serve as a bandwidth allocation mechanism for ad-hoc networks. We build a generic model for this class of protocols, study real-world protocols, and optimise protocol parameters with respect to Quality of Service and energy constraints. We combine this with the novel formalisms for wireless communication and interference models, and finally we optimise local (node) and global (network) routing policies. This is the first application of probabilistic model checking both to protocols of the ZigBee standard and to protocols for slot allocation.
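To illustrate the style of query probabilistic model checking answers, the sketch below computes a reachability probability ("eventually transmit successfully") on a toy backoff chain. The chain and its parameters are illustrative assumptions, not the ZigBee models from the thesis.

```python
# Toy DTMC for a contention-style protocol: each attempt succeeds with
# probability p; on collision the sender backs off and retries, up to
# max_retries. Probabilistic model checkers answer queries such as
# P=? [ F success ] on such chains; here the chain is simple enough to
# solve by direct summation over attempts.
def prob_eventual_success(p_success, max_retries):
    prob, p_reach_attempt = 0.0, 1.0
    for _ in range(max_retries + 1):
        prob += p_reach_attempt * p_success      # succeed at this attempt
        p_reach_attempt *= (1.0 - p_success)     # otherwise back off and retry
    return prob

# e.g. 60% success chance per attempt, at most 3 retries:
print(prob_eventual_success(0.6, 3))   # 0.9744
```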
