11

Monotonicity in shared-memory program verification

Kaiser, Alexander January 2013
Predicate abstraction is a key enabling technology for applying model checkers to programs written in mainstream languages. It has been used very successfully for debugging sequential system-level C code. Although model checking was originally designed for analysing concurrent systems, there is little evidence of fruitful applications of predicate abstraction to shared-variable concurrent software. The goal of the present thesis is to close this gap. We propose an algorithmic solution implementing predicate abstraction that targets safety properties in non-recursive programs executed by an unbounded number of threads, which communicate via shared memory or higher-level mechanisms, such as mutexes and broadcasts. As system-level code makes frequent use of such primitives, their correct usage is critical to ensure reliability. Monotonicity - the property that thread actions remain executable when other threads are added to the current global state - is a natural and common feature of human-written concurrent software. It is also useful: if every thread's memory is finite, monotonicity often guarantees the decidability of safety properties even when the number of running threads is unspecified. In this thesis, we show that the process of obtaining finite-data thread abstractions for model checking is not always compatible with monotonicity. Predicate-abstracting certain mainstream asynchronous software, such as the ticket busy-wait lock algorithm, results in non-monotone multi-threaded Boolean programs, despite the monotonicity of the input program: the monotonicity is lost in the abstraction. As a result, the unbounded-thread Boolean programs do not give rise to well quasi-ordered systems [1], for which sound and complete safety checking algorithms are available. In fact, safety checking turns out to be undecidable for the obtained class of abstract programs, despite the finiteness of the individual threads' state spaces. Our solution is to restore monotonicity in the abstract program, using an inexpensive closure operator that precisely preserves all safety properties of the (non-monotone) abstract program without the closure. As a second contribution, we present a novel, sound and complete, yet empirically much improved algorithm for verifying abstractions, applicable to general well quasi-ordered systems. Our approach is to gradually widen the set of safety queries during the search with program states that involve fewer threads and are thus easier to decide, and that are likely to finalise the decision on earlier queries. To counter the negative impact of "bad guesses", i.e. program states that turn out to be feasible, the search is supported by a parallel engine that generates such states; these are never selected for widening. We present an implementation of our techniques and extensive experiments on multi-threaded C programs, including device driver code from FreeBSD and Solaris. The experiments demonstrate that, by exploiting monotonicity, model checking techniques - enabled by predicate abstraction - scale to realistic programs, even of a few thousand lines of multi-threaded C code.
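For readers unfamiliar with it, the ticket busy-wait lock mentioned above is a standard algorithm; the C sketch below (illustrative only, not code from the thesis) renders it with C11 atomics. Each thread draws a ticket with an atomic fetch-and-increment and spins until its number is served, so entry is first-come-first-served while the two counters grow without bound.

    #include <stdatomic.h>

    /* Illustrative ticket lock, not code from the thesis. */
    typedef struct {
        atomic_uint next_ticket;   /* next ticket to hand out           */
        atomic_uint now_serving;   /* ticket currently allowed to enter */
    } ticket_lock;

    void ticket_acquire(ticket_lock *l) {
        unsigned me = atomic_fetch_add(&l->next_ticket, 1); /* draw a ticket */
        while (atomic_load(&l->now_serving) != me)
            ;                                  /* busy-wait for our turn */
    }

    void ticket_release(ticket_lock *l) {
        atomic_fetch_add(&l->now_serving, 1);  /* admit the next waiter */
    }

Verifying code of this kind for an arbitrary number of competing threads is precisely the setting in which the thesis shows predicate abstraction losing monotonicity.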
12

Application of software engineering methodologies to the development of mathematical biological models

Gill, Mandeep Singh January 2013
Mathematical models have been used to capture the behaviour of biological systems, from low-level biochemical reactions to multi-scale whole-organ models. Models are typically based on experimentally-derived data, attempting to reproduce the observed behaviour through mathematical constructs, e.g. using Ordinary Differential Equations (ODEs) for spatially-homogeneous systems. These models are developed and published as mathematical equations, yet are of such complexity that they necessitate computational simulation. This computational model development is often performed in an ad hoc fashion by modellers who lack extensive software engineering experience, resulting in brittle, inefficient model code that is hard to extend and reuse. Several Domain Specific Languages (DSLs) exist to aid capturing such biological models, including CellML and SBML; however, these DSLs are designed to facilitate model curation rather than to simplify model development. We present research into the application of techniques from software engineering to this domain, starting with the design, development and implementation of a DSL, termed Ode, to aid the creation of ODE-based biological models. This introduces features beneficial to model development, such as model verification and reproducible results. We compare and contrast model development to large-scale software development, focussing on extensibility and reuse. This work results in a module system that enables the independent construction and combination of model components. We further investigate the use of software engineering processes and patterns to develop complex modular cardiac models. Model simulation is increasingly computationally demanding, thus models are often created in complex low-level languages such as C/C++. We introduce a highly-efficient, optimising native-code compiler for Ode that generates custom, model-specific simulation code and allows use of our structured modelling features without degrading performance. Finally, in certain contexts the stochastic nature of biological systems becomes relevant. We introduce stochastic constructs to the Ode DSL that enable models to use Stochastic Differential Equations (SDEs), the Stochastic Simulation Algorithm (SSA), and hybrid methods. These use our native-code implementation and demonstrate highly-efficient stochastic simulation, which is beneficial as stochastic simulation is highly computationally intensive. We introduce a further DSL to model ion channels declaratively, demonstrating the benefits of DSLs in the biological domain. This thesis demonstrates the application of software engineering methodologies, and in particular DSLs, to facilitate the development of both deterministic and stochastic biological models. We demonstrate their benefits with several features that enable the construction of large-scale, reusable and extensible models. This is accomplished whilst providing efficient simulation, creating new opportunities for biological model development, investigation and experimentation.
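To give a flavour of what generated ODE simulation code boils down to, the C sketch below integrates a hypothetical two-variable model with a fixed-step forward Euler loop. The model, rates and step size are invented for illustration; the Ode compiler emits model-specific code in this spirit, though its actual solvers and output handling will differ.

    #include <stdio.h>

    /* Hypothetical toy model: dx/dt = -0.5x + y, dy/dt = 0.1x - y. */
    static double fx(double x, double y) { return -0.5 * x + y; }
    static double fy(double x, double y) { return 0.1 * x - y; }

    int main(void) {
        double x = 1.0, y = 0.0;           /* initial conditions */
        const double dt = 0.001;           /* fixed step size    */
        for (double t = 0.0; t < 10.0; t += dt) {
            double dx = fx(x, y), dy = fy(x, y);
            x += dt * dx;                  /* forward Euler update */
            y += dt * dy;
        }
        printf("x(10) = %g, y(10) = %g\n", x, y);
        return 0;
    }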
13

Model-driven development of information systems

Wang, Chen-Wei January 2012
The research presented in this thesis is aimed at developing reliable information systems through the application of model-driven and formal techniques. These are techniques in which a precise, formal model of system behaviour is exploited as source code. As such a model may be more abstract, and more concise, than source code written in a conventional programming language, it should be easier and more economical to create, to analyse, and to change. The quality of the model can be assured through certain kinds of formal analysis, and the model can be fixed accordingly if necessary. Most valuably, the model serves as the basis for the automated generation or configuration of a working system. This thesis makes four research contributions. The first involves the analysis of a proposed modelling language targeted at the model-driven development of information systems. Logical properties of the language are derived, as are properties of its compiled form - a guarded substitution notation. The second involves the extension of this language, and its semantics, to permit the description of workflows on information systems. Workflows described in this way may be analysed to determine, in advance of execution, the extent to which their concurrent execution may introduce the possibility of deadlock or blocking: a condition that, in this context, is synonymous with a failure to achieve the specified outcome. The third contribution concerns the validation of models written in this language by adapting existing techniques of software testing to the analysis of design models. A methodology is presented for checking model consistency, on the basis of a generated test suite, against the intended requirements. The fourth and final contribution is the presentation of an implementation strategy for the language, targeted at standard relational databases, and an argument for its correctness, based on a simple, set-theoretic semantics for structure and operations.
14

Program synthesis from domain specific object models

Faitelson, David January 2008
Automatically generating a program from its specification eliminates a large source of errors that is often unavoidable in a manual approach. While a general-purpose code generator is impossible to build, it is possible to build a practical code generator for a specific domain. This thesis investigates the theory behind Booster - a domain-specific, object-based specification language and automatic code generator. The domain of Booster is information systems - systems that consist of a rich object model in which the objects refer to each other to form a complicated network of associations. The operations of such systems are conceptually simple (changing the attributes of objects, adding or removing new objects, and creating or destroying associations), but they are tricky to implement correctly. The thesis focuses on the theoretical foundation of the Booster approach, in particular on three contributions: semantics, model completion, and code generation. The semantics of a Booster model is a single abstract data type (ADT) in which the invariants and the methods of all the classes in the model are promoted to the level of the ADT. This is different from the traditional view that considers each class as a separate ADT. The thesis argues that the Booster semantics is a better model of object-oriented systems. The second important contribution is the idea of model completion - a process that augments the postconditions of methods with additional predicates that follow from the system's invariant and the method's original intention. The third contribution describes a simple but effective code generation technique that is based on interpreting postconditions as executable statements and uses weakest preconditions to ensure that the generated code refines its specification.
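As a hand-worked illustration of that last point (a hypothetical example, not Booster output): given a toy class with invariant balance >= 0 and a method whose postcondition is balance' = balance - amount, interpreting the postcondition as an assignment and guarding it with the weakest precondition wp(balance := balance - amount, balance >= 0), which simplifies to balance >= amount, yields C code that refines the specification.

    /* Hypothetical toy example; class invariant: balance >= 0. */
    typedef struct { int balance; } Account;

    int withdraw(Account *a, int amount) {
        if (a->balance < amount)     /* wp-derived guard fails           */
            return 0;                /* refuse; state is left unchanged  */
        a->balance -= amount;        /* postcondition as executable code */
        return 1;
    }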
15

Towards a knowledge management methodology for articulating the role of hidden knowledges

Smith, Simon Paul January 2012
Knowledge Management Systems are deployed in organisations of all sizes to support the coordination and control of a range of intellectual assets, and the low-cost infrastructures made available by the shift to 'cloud computing' look set only to increase the speed and pervasiveness of this move. However, their implementation has not been without its problems, and the development of novel interventions capable of supporting the mundane work of everyday organisational settings has ultimately been limited. A common source of trouble for those formulating such systems is said to be that some proportion of the knowledge held by a setting's members is hidden from the undirected view of both The Organisation and its analysts - typically characterised as tacit knowledge - and can therefore go unnoticed during the design and deployment of new technologies. Notwithstanding its utility, overuse of this characterisation has resulted in the inappropriate labelling of a disparate assortment of phenomena, some of which might be more appropriately re-specified as 'hidden knowledges': a standpoint which seeks to acknowledge their unspoken character without making any unwarranted claims regarding their cognitive status. Approaches which focus on the situated and contingent properties of the actual work carried out by a setting's members - such as ethnomethodologically informed ethnography - have shown significant promise as a mechanism for transforming the role played by members' practices into an explicit topic of study. Specifically, they have proven particularly adept at noticing those aspects of members' work that might ordinarily be hidden from an undirected view, such as the methodic procedures through which we can sometimes mean more than we can say in-just-so-many-words. Here - within the context of gathering the requirements for new Knowledge Management Systems to support the reuse of existing knowledge - the findings from the application of just such an approach are presented in the form of a Pattern Language for Knowledge Management Systems: a descriptive device that lends itself to articulating the role that such hidden knowledges play in everyday work settings. By combining these three facets, this work shows that it is possible to take a more meaningful approach towards noticing those knowledges which might ordinarily be hidden from view, and to apply our new understanding of them to the design of Knowledge Management Systems that actively engage with the knowledgeable work of a setting's members.
16

Development and application of image analysis techniques to study structural and metabolic neurodegeneration in the human hippocampus using MRI and PET

Bishop, Courtney Alexandra January 2012
Despite the association between hippocampal atrophy and a vast array of highly debilitating neurological diseases, such as Alzheimer’s disease and frontotemporal lobar degeneration, tools to accurately and robustly quantify the degeneration of this structure still largely elude us. In this thesis, we firstly evaluate previously-developed hippocampal segmentation methods (FMRIB’s Integrated Registration and Segmentation Tool (FIRST), Freesurfer (FS), and three versions of a Classifier Fusion (CF) technique) on two clinical MR datasets, to gain a better understanding of the modes of success and failure of these techniques, and to use this acquired knowledge for subsequent method improvement (e.g., FIRSTv3). Secondly, a fully automated, novel hippocampal segmentation method is developed, termed Fast Marching for Automated Segmentation of the Hippocampus (FMASH). This combined region-growing and atlas-based approach uses a 3D Sethian Fast Marching (FM) technique to propagate a hippocampal region from an automatically-defined seed point in the MR image. Region growth is dictated by both subject-specific intensity features and a probabilistic shape prior (or atlas). Following method development, FMASH is thoroughly validated on an independent clinical dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), with an investigation of the dependency of such atlas-based approaches on their prior information. In response to our findings, we subsequently present a novel label-warping approach to effectively account for the detrimental effects of using cross-dataset priors in atlas-based segmentation. Finally, a clinical application of MR hippocampal segmentation is presented, with a combined MR-PET analysis of wholefield and subfield hippocampal changes in Alzheimer’s disease and frontotemporal lobar degeneration. This thesis therefore contributes both novel computational tools and valuable knowledge for further neurological investigations in both the academic and the clinical field.
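For orientation, fast marching propagates a front outward from a seed point, visiting grid points in order of arrival time, much as Dijkstra's algorithm does. The C sketch below shows that skeleton on a toy 2D grid with uniform speed; it is illustrative only, since FMASH operates on 3D MR volumes, uses a proper Eikonal update with intensity- and atlas-driven speeds, and would replace the naive minimum extraction here with a heap.

    #include <stdio.h>
    #include <float.h>

    #define W 8
    #define H 8

    int main(void) {
        double T[H][W];                 /* arrival time of the front */
        int done[H][W] = {{0}};
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                T[y][x] = DBL_MAX;
        T[3][3] = 0.0;                  /* automatically-defined seed point */

        for (int it = 0; it < W * H; it++) {
            /* Pick the unvisited point with the smallest arrival time. */
            int bx = -1, by = -1;
            double best = DBL_MAX;
            for (int y = 0; y < H; y++)
                for (int x = 0; x < W; x++)
                    if (!done[y][x] && T[y][x] < best) {
                        best = T[y][x]; bx = x; by = y;
                    }
            if (bx < 0) break;
            done[by][bx] = 1;
            /* Relax the 4-neighbours; speed is 1 everywhere in this toy. */
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            for (int k = 0; k < 4; k++) {
                int nx = bx + dx[k], ny = by + dy[k];
                if (nx >= 0 && nx < W && ny >= 0 && ny < H &&
                    T[by][bx] + 1.0 < T[ny][nx])
                    T[ny][nx] = T[by][bx] + 1.0;
            }
        }
        printf("arrival time at corner (0,0): %.0f\n", T[0][0]);
        return 0;
    }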
17

Techniques and tools for the verification of concurrent systems

Palikareva, Hristina January 2012
Model checking is an automatic formal verification technique for establishing the correctness of systems. It has been widely used in industry for analysing and verifying complex safety-critical systems in application domains such as avionics, medicine and computer security, where manual testing is infeasible and even minor errors could have dire consequences. In our increasingly parallelised world, concurrency has become pivotal and is seamlessly woven into programming paradigms; it remains, however, extremely challenging to model concurrent systems and to establish the correctness of their intended behaviour. Tools for model checking concurrent systems face severe limitations due to scalability problems arising from the need to examine all possible interleavings (schedules) of the executions of parallel components. Moreover, concurrency poses additional challenges to model checking, giving rise to phenomena such as nondeterminism, deadlock and livelock. In this thesis we focus on adapting and developing novel model-checking techniques for concurrent systems in the setting of the process algebra CSP and its primary model checker FDR. CSP allows for compact modelling and precise analysis of event-based concurrency, grounded on synchronous message passing as the fundamental mechanism of inter-component communication. In particular, we investigate techniques based on symbolic model checking, static analysis and abstraction, all of them exploiting the compositionality inherent in CSP and aiming to increase the scale of systems that can be tractably analysed. Firstly, we investigate symbolic model-checking techniques based on Boolean satisfiability (SAT), which we adapt for the traces model of CSP. We tailor bounded model checking (BMC), which can be used for bug detection, and temporal k-induction, which aims at establishing the inductiveness of properties and is capable of both finding bugs and establishing the correctness of systems. Secondly, we propose a static analysis framework for establishing livelock freedom of CSP processes, with lessons for other concurrent formalisms. As opposed to traditional exhaustive state-space exploration, our framework employs a system of rules on the syntax of a process to calculate a sound approximation of its fair/co-fair sets of events. The rules either safely classify a process as livelock-free or report inconclusiveness, thereby trading accuracy for speed. Finally, we develop a series of abstraction/refinement schemes for the traces, stable-failures and failures-divergences models of CSP and embed them into a fully automated and compositional CEGAR framework. For each of these techniques we present an implementation and an experimental evaluation on a set of CSP benchmarks.
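As a point of reference for the first of these techniques: bounded model checking unrolls a system's transition relation to a fixed depth k and asks a SAT solver whether any execution of at most k steps can violate the property. The C sketch below mimics that bounded exploration explicitly, on an invented toy counter system; it is purely illustrative, as the thesis works symbolically on CSP processes rather than on C programs.

    #include <stdio.h>
    #include <string.h>

    #define LIMIT 10   /* safety property: the counter never reaches LIMIT */

    int main(void) {
        int reach[LIMIT + 1] = {0};
        reach[0] = 1;                          /* initial state: counter 0 */
        for (int depth = 1; depth <= 8; depth++) {        /* bound k = 8 */
            int next[LIMIT + 1] = {0};
            for (int s = 0; s <= LIMIT; s++) {
                if (!reach[s]) continue;
                next[s] = 1;                              /* stutter */
                if (s + 2 <= LIMIT) next[s + 2] = 1;      /* step s -> s+2 */
            }
            memcpy(reach, next, sizeof reach);
            if (reach[LIMIT]) {                /* counterexample found */
                printf("property violated within %d steps\n", depth);
                return 1;
            }
        }
        printf("no violation up to the bound\n");
        return 0;
    }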
18

Architecting the deployment of cloud-hosted services for guaranteeing multitenancy isolation

Ochei, Laud Charles January 2017
In recent years, software tools used for Global Software Development (GSD) processes (e.g., continuous integration, version control and bug tracking) have increasingly been deployed in the cloud to serve multiple users. Multitenancy is an important architectural property in cloud computing in which a single instance of an application is used to serve multiple users. There are two key challenges in implementing multitenancy: (i) ensuring isolation either between multiple tenants accessing the service or between components designed (or integrated) with the service; and (ii) resolving trade-offs between varying degrees of isolation between tenants or components. The aim of this thesis is to investigate how to architect the deployment of cloud-hosted services while guaranteeing the required degree of multitenancy isolation. Existing approaches for architecting the deployment of cloud-hosted services to serve multiple users have paid little attention to evaluating the effect of the varying degrees of multitenancy isolation on the required performance, resource consumption and access privileges of tenants (or components). Approaches for isolating tenants (or components) are usually implemented at the lower layers of the cloud stack and often apply to the entire system, not to individual tenants (or components). This thesis adopts a multimethod research strategy to provide a set of novel approaches for addressing these problems. Firstly, a taxonomy of deployment patterns and a general process, CLIP (CLoud-based Identification process for deployment Patterns), were developed for guiding architects in using the taxonomy to select applicable cloud deployment patterns (together with the supporting technologies) for deploying services to the cloud. Secondly, an approach named COMITRE (COmponent-based approach to Multitenancy Isolation Through request RE-routing) was developed together with supporting algorithms, and then applied to three case studies to empirically evaluate the varying degrees of isolation between tenants enabled by multitenancy patterns for three different cloud-hosted GSD processes, namely continuous integration, version control, and bug tracking. After that, a synthesis of findings from the three case studies was carried out to provide an explanatory framework and new insights about varying degrees of multitenancy isolation. Thirdly, a model-based decision support system, together with four variants of a metaheuristic solution, was developed for solving the model to provide an optimal solution for deploying components of a cloud-hosted application with guarantees for multitenancy isolation. By creating and applying the taxonomy, it was learnt that most deployment patterns are related and can be implemented by combining them with others, for example, in hybrid deployment scenarios to integrate data residing in multiple clouds. It has been argued that a shared component is better for reducing resource consumption, while a dedicated component is better at avoiding performance interference. However, as the experimental results show, there are certain GSD processes where that might not necessarily be so, for example, in version control, where additional copies of files are created in the repository, thus consuming more disk space. Over time, performance begins to degrade as more time is spent searching across many files on the disk.
Extensive performance evaluation of the model-based decision support system showed that the optimal solutions obtained had low variability and percent deviation, and were produced with low computational effort when compared to a given target solution.
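A minimal sketch of the re-routing idea, under invented names and a two-level isolation model (the COMITRE approach and its supporting algorithms are substantially richer): each incoming request is routed either to a shared component instance or to a tenant-specific one, trading resource consumption against performance interference.

    #include <stdio.h>

    /* Hypothetical two-level isolation model; all names are invented. */
    typedef enum { SHARED, DEDICATED } isolation_level;

    typedef struct {
        int id;
        isolation_level isolation;  /* degree of isolation the tenant needs */
    } tenant;

    /* Route a request to a component instance according to the
       tenant's required degree of isolation. */
    const char *route_request(const tenant *t) {
        if (t->isolation == DEDICATED)
            return "dedicated-instance";  /* avoids interference, costs more */
        return "shared-instance";         /* lower resource consumption      */
    }

    int main(void) {
        tenant a = {1, SHARED}, b = {2, DEDICATED};
        printf("tenant %d -> %s\n", a.id, route_request(&a));
        printf("tenant %d -> %s\n", b.id, route_request(&b));
        return 0;
    }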
19

The work of the beginning foreign language teacher and teaching tools: a path towards understanding development?

Martiny, Francieli Freudenberger 27 February 2015
The main goal of this research is to investigate the role that tools play in the professional genre that is typical of the activity of beginning teachers, as represented during the discursive activity fostered by the Instruction to the Double. This discussion aims at understanding the development of these teachers through their representations of the tools which are specific to teaching work. The study finds its theoretical support in the proposals of Sociodiscursive Interactionism (BRONCKART, 1999; 2008a) - an area that assigns to language a fundamental role in the constitution and functioning of human psychic functions - and the Work Sciences, particularly the Clinic of Activity (CLOT, 2007; 2010), which contributes to the understanding of teaching activity as work. The notion of development is built on the works of Vygotsky (1997c; 2007; 2009), as complemented by Bronckart (2008c; 2013). Likewise, the definition of teaching tools draws on the analysis of Rabardel's (1995) proposal concerning the Instrumental Theory and puts forward a functional perspective that takes action as its founding element. The data of this interpretative and participant research were generated through the device called Instruction to the Double, conducted with three beginning teachers - Taylor, Lina and Celestin - during their first year of activity as Foreign Language teachers. Over one academic year, these teachers took part in 19 sessions between the researcher/double and the beginning teacher/instructor, each complemented by comments written by the teachers in response to the face-to-face interaction. Data analysis drew on the notion of textual architecture, as stated by Bronckart (2008c), focusing on the identification of Action Figures (BULEA, 2010) and of the functions of tools in the activity represented by the teachers. The main results point to the identification of tools that are typical of the work of these beginning teachers and to the (re)significations attributed to them during the process of activity analysis promoted by the Instruction to the Double. These outcomes lead to the perception of linguistic and textual aspects that may be identified as evidence of development. Experience building is thereby characterised, following these discussions, as a movement that may be identified in the texts and that is grounded in singular experiences permeated by conflicts arising from subjective and collective aspects.
20

Reasoning with !-graphs

Merry, Alexander January 2013
The aim of this thesis is to present an extension to the string graphs of Dixon, Duncan and Kissinger that allows the finite representation of certain infinite families of graphs and graph rewrite rules, and to demonstrate that a logic can be built on this to allow the formalisation of inductive proofs in the string diagrams of compact closed and traced symmetric monoidal categories. String diagrams provide an intuitive method for reasoning about monoidal categories. However, this does not prevent those using them from making mistakes in proofs. To this end, there is a project (Quantomatic) to build a proof assistant for string diagrams, at least for those based on categories with a notion of trace. The development of string graphs has provided a combinatorial formalisation of string diagrams, laying the foundations for this project. The prevalence of commutative Frobenius algebras (CFAs) in quantum information theory, a major application area of these diagrams, has led to the use of variable-arity nodes as a shorthand for normalised networks of Frobenius algebra morphisms, so-called "spider notation". This notation greatly eases reasoning with CFAs, but string graphs are inadequate to properly encode this reasoning. This dissertation firstly extends string graphs to allow variable-arity nodes to be represented at all, and then introduces !-box notation - and structures to encode it - to represent string graph equations containing repeated subgraphs, where the number of repetitions is arbitrary. This can be used to represent, for example, the "spider law" of CFAs, allowing two spiders to be merged, as well as the much more complex generalised bialgebra law that can arise from two interacting CFAs. This work then demonstrates how we can reason directly about !-graphs, viewed as (typically infinite) families of string graphs. Of particular note is the presentation of a form of graph-based induction, allowing the formal encoding of proofs that previously could only be represented as a mix of string diagrams and explanatory text.
