201 |
Cardiac mechanical model personalisation and its clinical applications
Xi, Jiahe, January 2013
An increasingly important research area within the field of cardiac modelling is the development and study of methods for model-based parameter estimation from clinical measurements of cardiac function. This provides a powerful approach to the quantification of cardiac function, with the potential ultimately to improve the stratification and treatment of individuals with pathological myocardial mechanics. In particular, the diastolic function (i.e., blood filling) of the left ventricle (LV) is affected by its capacity for relaxation, that is, the decay of residual active tension (AT): impaired decay of residual AT limits the relaxation of the LV chamber, which in turn affects its compliance (or its reciprocal, stiffness). The clinical determination of these two factors, corresponding to the diastolic residual AT and the passive constitutive parameters (stiffness) in the cardiac mechanical model, is thus essential for assessing LV diastolic function. However, these parameters are difficult to assess in vivo, and the traditional criteria for diagnosing diastolic dysfunction are subject to many limitations and controversies. In this context, the objective of this study is to develop clinically applicable, model-based methodologies to estimate in vivo, from 4D imaging measurements and LV cavity pressure recordings, these clinically relevant parameters (passive stiffness and diastolic residual AT) in computational cardiac mechanical models, enabling the quantification of key clinical indices that characterise cardiac diastolic dysfunction.

Firstly, a sequential data assimilation framework has been developed, covering various types of existing Kalman filters, as outlined in chapter 3. Based on these developments, chapter 4 demonstrates that the novel reduced-order unscented Kalman filter can accurately retrieve homogeneous and regionally varying constitutive parameters from synthetic noisy motion measurements. This work has been published in Xi et al. 2011a.

Secondly, this thesis has investigated the development of methods that can be applied in clinical practice, which has in turn introduced additional difficulties and opportunities. It presents what is, to the best of our knowledge, the first study in the literature to estimate human constitutive parameters using clinical data, and demonstrates for the first time that, while an end-diastolic MR measurement does not constrain the mechanical parameters uniquely, it does provide a potentially robust indicator of myocardial stiffness. This work has been published in Xi et al. 2011b. However, an unresolved issue in patients with diastolic dysfunction is that the estimation of myocardial stiffness cannot be decoupled from the diastolic residual AT, because of impaired ventricular relaxation during diastole. To address this problem, chapter 6 presents the first study to estimate diastolic parameters of the LV from cine and tagged MRI measurements and LV cavity pressure recordings while separating the passive myocardial constitutive properties from the diastolic residual AT. We apply this framework to three clinical cases, and the results show that the estimated constitutive parameters and residual active tension appear to be promising candidates for delineating healthy and pathological cases. This work has been published in Xi et al. 2012a. Nevertheless, the need to acquire LV pressure measurements invasively limits the wide application of this approach.

Chapter 7 addresses this issue by analysing the feasibility of using two kinds of non-invasively available pressure measurements for the purpose of inverse parameter estimation. This work has been submitted for publication as Xi et al. 2012b.
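To make the estimation machinery concrete, here is a minimal sketch of joint parameter estimation with an unscented Kalman filter, in the spirit of (but far simpler than) the reduced-order filter used in the thesis. Everything in it is an invented toy: a scalar stiffness k observed through a hypothetical linear pressure-volume relation V = V0 + p/k, with k estimated in log space so the sigma points stay positive.

```python
# Toy sketch: unscented Kalman update for a scalar "stiffness" parameter.
# Not the thesis's reduced-order filter; model, names and values are invented.
import numpy as np

def sigma_points(mean, var, kappa=2.0):
    # standard unscented transform for a one-dimensional state
    spread = np.sqrt((1.0 + kappa) * var)
    pts = np.array([mean, mean + spread, mean - spread])
    w = np.array([kappa, 0.5, 0.5]) / (1.0 + kappa)
    return pts, w

def ukf_update(theta_mean, theta_var, pressure, v_obs, v0=10.0, r=0.05**2):
    # theta = log(stiffness); observation model: V = v0 + p * exp(-theta)
    pts, w = sigma_points(theta_mean, theta_var)
    preds = v0 + pressure * np.exp(-pts)          # push sigma points through model
    y = np.dot(w, preds)
    p_yy = np.dot(w, (preds - y) ** 2) + r        # innovation variance
    p_xy = np.dot(w, (pts - theta_mean) * (preds - y))  # cross covariance
    gain = p_xy / p_yy
    return theta_mean + gain * (v_obs - y), theta_var - gain ** 2 * p_yy

rng = np.random.default_rng(0)
k_true, theta, var = 4.0, 0.0, 1.0
for p in np.linspace(0.5, 3.0, 30):
    v_obs = 10.0 + p / k_true + 0.05 * rng.standard_normal()
    theta, var = ukf_update(theta, var, p, v_obs)
print(f"estimated stiffness {np.exp(theta):.2f} (true {k_true})")
```

The log-parameterisation is a common device in such inverse problems: it enforces positivity of the physical parameter without constraining the filter itself.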
|
202 |
Monotonicity in shared-memory program verification
Kaiser, Alexander, January 2013
Predicate abstraction is a key enabling technology for applying model checkers to programs written in mainstream languages. It has been used very successfully for debugging sequential system-level C code. Although model checking was originally designed for analysing concurrent systems, there is little evidence of fruitful applications of predicate abstraction to shared-variable concurrent software. The goal of the present thesis is to close this gap. We propose an algorithmic solution implementing predicate abstraction that targets safety properties in non-recursive programs executed by an unbounded number of threads, which communicate via shared memory or higher-level mechanisms such as mutexes and broadcasts. As system-level code makes frequent use of such primitives, their correct usage is critical to reliability.

Monotonicity - the property that thread actions remain executable when other threads are added to the current global state - is a natural and common feature of human-written concurrent software. It is also useful: if every thread's memory is finite, monotonicity often guarantees the decidability of safety properties even when the number of running threads is unspecified. In this thesis, we show that the process of obtaining finite-data thread abstractions for model checking is not always compatible with monotonicity. Predicate-abstracting certain mainstream asynchronous software, such as the ticket busy-wait lock algorithm, results in non-monotone multi-threaded Boolean programs despite the monotonicity of the input program: the monotonicity is lost in the abstraction. As a result, the unbounded-thread Boolean programs do not give rise to well quasi-ordered systems [1], for which sound and complete safety checking algorithms are available. In fact, safety checking turns out to be undecidable for the resulting class of abstract programs, despite the finiteness of the individual threads' state spaces. Our solution is to restore monotonicity in the abstract program, using an inexpensive closure operator that precisely preserves all safety properties of the (non-monotone) abstract program without the closure.

As a second contribution, we present a novel, sound and complete, yet empirically much improved algorithm for verifying abstractions, applicable to general well quasi-ordered systems. Our approach is to gradually widen the set of safety queries during the search with program states that involve fewer threads, and are thus easier to decide, and that are likely to settle the decision on earlier queries. To counter the negative impact of "bad guesses", i.e. program states that turn out to be feasible, the search is supported by a parallel engine that generates such states; these are never selected for widening.

We present an implementation of our techniques and extensive experiments on multi-threaded C programs, including device driver code from FreeBSD and Solaris. The experiments demonstrate that, by exploiting monotonicity, model checking techniques - enabled by predicate abstraction - scale to realistic programs of even a few thousand lines of multi-threaded C code.
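The "sound and complete safety checking algorithms" available for well quasi-ordered systems can be illustrated with backward coverability on a toy monotone counter system. The sketch below is our own illustrative encoding, not the thesis's algorithm or benchmarks: global states are vectors counting threads per local state plus a lock token, the bad states form an upward-closed set given by a minimal basis, and the loop computes minimal predecessors until a fixed point.

```python
# Toy backward coverability for a monotone (well quasi-ordered) counter system:
# a Petri-net style mutex where threads cycle idle -> trying -> critical,
# guarded by a lock token. Invented example, not the thesis's implementation.

# places: (idle, trying, critical, lock_free); a transition consumes u, produces v
TRANSITIONS = [
    ((1, 0, 0, 0), (0, 1, 0, 0)),  # request: idle -> trying
    ((0, 1, 0, 1), (0, 0, 1, 0)),  # acquire: trying + lock -> critical
    ((0, 0, 1, 0), (1, 0, 0, 1)),  # release: critical -> idle + lock
]

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def min_pred(b, u, v):
    # least x with x >= u (transition enabled) and x - u + v >= b (successor covers b)
    return tuple(max(ui, bi - vi + ui) for ui, bi, vi in zip(u, b, v))

def backward_coverability(bad_basis, initial_covers):
    basis, frontier = set(bad_basis), set(bad_basis)
    while frontier:
        new = {min_pred(b, u, v) for b in frontier for u, v in TRANSITIONS}
        new = {p for p in new if not any(leq(m, p) for m in basis)}
        basis |= new
        frontier = new
    return any(initial_covers(m) for m in basis)

# initial states: any number of idle threads and one free lock, i.e. (n, 0, 0, 1)
unsafe = backward_coverability(
    bad_basis={(0, 0, 2, 0)},          # two threads in the critical section
    initial_covers=lambda m: m[1] == 0 and m[2] == 0 and m[3] <= 1,
)
print("mutual exclusion violated:", unsafe)   # expect: False
```

Termination is what monotonicity buys: by Dickson's lemma the basis cannot grow by an infinite antichain of vectors, so the fixed point is always reached.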
|
203 |
A fictitious domain approach for hybrid simulations of eukaryotic chemotaxis
Seguis, Jean-Charles, January 2013
Chemotaxis, the phenomenon through which cells respond to external chemical signals, is one of the most important and most universally observed processes in nature, and it has been the object of considerable modelling effort over recent decades. The chemotaxis models available in the literature cannot, however, reconcile the dynamics of external chemical signals with the intracellular signalling pathways that lead to the cells' response, because the cell models they employ do not distinguish between the extracellular and intracellular domains. The work presented in this dissertation aims to resolve this issue. We set up a numerical hybrid simulation framework containing such a description, enabling the coupling of models for phenomena occurring at the extracellular and intracellular levels. Mathematically, this is achieved by the use of the fictitious domain method for finite elements, which allows the simulation of partial differential equations on evolving domains. In order to make the modelling of the membrane binding of chemical signals possible, we derive a suitable fictitious domain method for elliptic problems with Robin boundary conditions. We also present ways to minimise the computational cost of such simulations, deriving a suitable preconditioner for the linear systems resulting from the Robin fictitious domain method, as well as an efficient algorithm for computing fictitious-domain-specific linear operators. Lastly, we discuss the use of a simpler cell model from the literature and match it with our own model. Our numerical experiments show the relevance of the matching, as well as the stability and accuracy of the numerical scheme presented in the thesis.
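As a minimal illustration of the fictitious domain idea (using simple volume penalisation and finite differences, not the Robin-boundary finite element method derived in the thesis; all numbers are invented), the sketch below embeds the physical domain omega = (0.25, 0.75) in a fixed grid on (0, 1) and penalises the solution outside omega, so that omega could move without any remeshing of the background grid.

```python
# Fictitious domain via volume penalisation on -u'' = f, a 1D illustration.
# Not the thesis's Robin finite element formulation; parameters are invented.
import numpy as np

n, eps = 400, 1e-8
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
inside = (x > 0.25) & (x < 0.75)          # the embedded physical domain omega

# assemble -u'' + (1/eps) * chi_outside * u = f on (0,1) with u(0) = u(1) = 0
main = 2.0 / h**2 + np.where(inside, 0.0, 1.0 / eps)
A = (np.diag(main)
     + np.diag(-np.ones(n - 1) / h**2, 1)
     + np.diag(-np.ones(n - 1) / h**2, -1))
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0
f = np.where(inside, 1.0, 0.0)
f[0] = f[-1] = 0.0

u = np.linalg.solve(A, f)
# inside omega this approximates -u'' = 1 with u = 0 on the embedded boundary,
# whose exact solution is u(x) = (x - 0.25)(0.75 - x)/2; compare at x = 0.5:
print(u[n // 2], (0.5 - 0.25) * (0.75 - 0.5) / 2)
```

The penalty term plays the role the Lagrange multiplier or boundary functional plays in the proper finite element treatment; the thesis's contribution is precisely a fictitious domain formulation that handles Robin conditions on the embedded boundary.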
|
204 |
Application of software engineering methodologies to the development of mathematical biological models
Gill, Mandeep Singh, January 2013
Mathematical models have long been used to capture the behaviour of biological systems, from low-level biochemical reactions to multi-scale whole-organ models. Models are typically based on experimentally derived data, attempting to reproduce the observed behaviour through mathematical constructs, e.g. using Ordinary Differential Equations (ODEs) for spatially homogeneous systems. These models are developed and published as mathematical equations, yet are of such complexity that they necessitate computational simulation. This computational model development is often performed in an ad hoc fashion by modellers who lack extensive software engineering experience, resulting in brittle, inefficient model code that is hard to extend and reuse. Several Domain Specific Languages (DSLs) exist to aid in capturing such biological models, including CellML and SBML; however, these DSLs are designed to facilitate model curation rather than to simplify model development.

We present research into the application of techniques from software engineering to this domain, starting with the design, development and implementation of a DSL, termed Ode, to aid the creation of ODE-based biological models. This introduces features beneficial to model development, such as model verification and reproducible results. We compare and contrast model development with large-scale software development, focussing on extensibility and reuse. This work results in a module system that enables the independent construction and combination of model components. We further investigate the use of software engineering processes and patterns to develop complex modular cardiac models. Model simulation is increasingly computationally demanding, so models are often written in complex low-level languages such as C/C++. We introduce a highly efficient, optimising native-code compiler for Ode that generates custom, model-specific simulation code and allows the use of our structured modelling features without degrading performance. Finally, in certain contexts the stochastic nature of biological systems becomes relevant. We introduce stochastic constructs to the Ode DSL that enable models to use Stochastic Differential Equations (SDEs), the Stochastic Simulation Algorithm (SSA), and hybrid methods. These use our native-code implementation and deliver highly efficient stochastic simulation, which matters because stochastic simulation is highly computationally intensive. We also introduce a further DSL for modelling ion channels declaratively, demonstrating the benefits of DSLs in the biological domain.

This thesis demonstrates the application of software engineering methodologies, and in particular DSLs, to facilitate the development of both deterministic and stochastic biological models. We demonstrate their benefits with several features that enable the construction of large-scale, reusable and extensible models, whilst providing efficient simulation and creating new opportunities for biological model development, investigation and experimentation.
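As a toy illustration of what a modelling DSL provides (this is an invented Python analogue, not Ode's actual syntax or implementation), the sketch below declares a model as named ODE right-hand sides and "compiles" the declaration into a function usable by any solver, keeping the model description separate from simulation details.

```python
# Miniature embedded ODE "DSL": a model is a dict of named equations over the
# state variables, compiled into a generic right-hand-side function.
import math

def compile_model(equations):
    """equations: dict state_var -> RHS written as a Python expression string."""
    names = list(equations)
    code = [compile(expr, f"<d{var}/dt>", "eval") for var, expr in equations.items()]
    def rhs(state):
        env = dict(zip(names, state), math=math)
        return [eval(c, {"__builtins__": {}}, env) for c in code]
    return names, rhs

def euler(rhs, y0, dt, steps):
    y = list(y0)
    for _ in range(steps):
        dy = rhs(y)
        y = [yi + dt * di for yi, di in zip(y, dy)]
    return y

# a two-variable FitzHugh-Nagumo-style excitable cell model
names, rhs = compile_model({
    "v": "v - v**3 / 3 - w + 0.5",        # fast voltage-like variable
    "w": "0.08 * (v + 0.7 - 0.8 * w)",    # slow recovery variable
})
print(dict(zip(names, euler(rhs, [-1.0, 1.0], dt=0.01, steps=5000))))
```

The separation shown here is the point of the DSL approach: the declaration can be verified, modularised and recompiled (for instance to native code, as in the thesis) without touching the solver.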
|
205 |
Efficient simulation of cardiac electrical propagation using adaptive high-order finite elements
Arthurs, Christopher J., January 2013
This thesis investigates the high-order hierarchical finite element method, also known as the finite element p-version, as a computationally efficient technique for generating numerical solutions to the cardiac monodomain equation. We first present it as a uniform-order method, and through an a priori error bound we explain why the associated cardiac cell model must be thought of as a PDE and approximated to high order in order to obtain the accuracy of which the p-version is capable. We perform simulations demonstrating that the achieved error agrees very well with the a priori error bound and that, in terms of solution accuracy for the time taken to solve the linear system arising in the finite element discretisation, the method is more efficient than the state-of-the-art piecewise linear finite element method. We show that piecewise linear FEM actually introduces quite significant amounts of error into the numerical approximations, particularly in the direction perpendicular to the cardiac fibres with physiological conductivity values, and that, without resorting to extremely fine meshes with elements considerably smaller than 70 micrometres, it cannot be used to obtain high-accuracy solutions. In contrast, the p-version can produce extremely high-accuracy solutions on meshes with elements around 300 micrometres in diameter at these conductivities.

Noting that most of the numerical error is due to under-resolving the wavefront in the transmembrane potential, we also construct an adaptive high-order scheme which controls the error locally in each element by adjusting the finite element polynomial basis degree, using an analytically derived a posteriori error estimation procedure. This naturally tracks the location of the wavefront, concentrating computational effort where it is needed most and increasing computational efficiency. The scheme is controlled by a user-defined error tolerance parameter, which sets the target error within each element as a proportion of the local magnitude of the solution as measured in the H^1 norm. This numerical scheme is tested on a variety of problems in one, two and three dimensions, and is shown to provide excellent error control and to be likely capable of boosting the efficiency of cardiac simulation by an order of magnitude.

The thesis amounts to a proof of concept of the increased efficiency of adaptive high-order finite elements in single-threaded cardiac simulation, and indicates that the performance of the method should also be investigated in parallel, where it can be expected to provide considerable further improvement. In general, the selection of a suitable preconditioner is key to ensuring efficiency; we make use of a variety of possibilities, including one which can be expected to scale very well in parallel, making this an excellent candidate method for increasing the efficiency of cardiac simulation on high-performance computing facilities.
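For reference, the cardiac monodomain equation that the thesis discretises can be written as follows. The notation is the standard one (surface-to-volume ratio chi, membrane capacitance C_m, conductivity tensor sigma, ionic current I_ion with cell-model state u) and may differ from the thesis's own symbols.

```latex
% Cardiac monodomain model, standard form:
\begin{align*}
  \chi \Bigl( C_m \frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V,\mathbf{u}) \Bigr)
    &= \nabla \cdot \bigl( \boldsymbol{\sigma} \, \nabla V \bigr), \\
  \frac{\partial \mathbf{u}}{\partial t} &= \mathbf{f}(V,\mathbf{u}).
\end{align*}
```

The second system is the cell model; the abstract's point is that in the error analysis it must be treated as a (spatially uncoupled) PDE so that both V and u can be approximated to high order, otherwise the cell-model error dominates and the p-version's accuracy is lost.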
|
206 |
The mathematical structure of non-locality and contextuality
Mansfield, Shane, January 2013
Non-locality and contextuality are key features of quantum mechanics that distinguish it from classical physics. We aim to develop a deeper, more structural understanding of these phenomena, underpinned by robust and elegant mathematical theory, with a view to providing clarity and new perspectives on conceptual and foundational issues. A general framework for logical non-locality is introduced and used to prove that 'Hardy's paradox' is complete for logical non-locality in all (2,2,l) and (2,k,2) Bell scenarios; consequences are that Bell states are the only entangled two-qubit states that are not logically non-local, and that Hardy non-locality can be witnessed with certainty in a tripartite quantum system. A number of developments of the unified sheaf-theoretic approach to non-locality and contextuality are considered, including the first application of cohomology as a tool for studying these phenomena: we find cohomological witnesses corresponding to many of the classic no-go results, and completely characterise contextuality for large families of Kochen-Specker-like models. A connection with the problem of the existence of perfect matchings in k-uniform hypergraphs is explored, leading to new results on the complexity of deciding contextuality. A refinement of the sheaf-theoretic approach is found that captures partial approximations to locality/non-contextuality and allows Bell models to be constructed from models of more general kinds which are equivalent in terms of non-locality/contextuality. Progress is made on bringing recent results on the nature of the wavefunction within the scope of the logical and sheaf-theoretic methods. Computational tools that complement the research programme are developed for quantifying contextuality and finding generalised Bell inequalities for any measurement scenario; this also leads to a proof that local ontological models with 'negative probabilities' generate the no-signalling polytopes for all Bell scenarios.
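Logical non-locality admits a very direct computational reading. The sketch below (illustrative Python; the support table is the standard Hardy model for a (2,2,2) Bell scenario, not anything specific to the thesis) checks whether a possible joint outcome extends to a global deterministic assignment consistent with the supports of all measurement contexts, which is the sheaf-theoretic criterion for locality of that outcome.

```python
# Logical non-locality of the Hardy model: the event (0,0) in context (a,b)
# is possible but admits no consistent global assignment.
from itertools import product

CONTEXTS = {
    ("a", "b"):   {(0, 0), (0, 1), (1, 0), (1, 1)},
    ("a", "b'"):  {(0, 1), (1, 0), (1, 1)},   # (0,0) impossible
    ("a'", "b"):  {(0, 1), (1, 0), (1, 1)},   # (0,0) impossible
    ("a'", "b'"): {(0, 0), (0, 1), (1, 0)},   # (1,1) impossible
}
MEASUREMENTS = ["a", "a'", "b", "b'"]

def global_assignments():
    for values in product([0, 1], repeat=len(MEASUREMENTS)):
        yield dict(zip(MEASUREMENTS, values))

def consistent(g):
    # g restricted to every context must land in that context's support
    return all((g[x], g[y]) in outcomes for (x, y), outcomes in CONTEXTS.items())

def extendable(context, outcome):
    x, y = context
    return any(consistent(g) for g in global_assignments()
               if g[x] == outcome[0] and g[y] == outcome[1])

print("Hardy event extends to a global assignment:",
      extendable(("a", "b"), (0, 0)))   # expect: False, i.e. logical non-locality
```

The completeness theorem stated in the abstract says that, in the scenarios named, every instance of logical non-locality reduces to exactly this Hardy pattern.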
|
207 |
Foundations and applications of knowledge representation for structured entities
Magka, Despoina, January 2013
Description Logics form a family of powerful ontology languages widely used by academics and industry experts to capture and intelligently manage knowledge about the world. A key advantage of Description Logics is their amenability to automated reasoning, which enables the deduction of knowledge that has not been explicitly stated. However, in order to ensure the decidability of automated reasoning algorithms, suitable restrictions are usually enforced on the shape of structures expressible in Description Logics. As a consequence, Description Logics fall short of expressive power when it comes to representing cyclic structures, which abound in the life sciences and other disciplines.

The objective of this thesis is to explore ontology languages that are better suited to the representation of structured objects. We suggest that an alternative approach relying on nonmonotonic existential rules provides a promising candidate for modelling such domains. To this end, we have built a comprehensive theoretical and practical framework for the representation of structured entities, along with a surface syntax designed to allow the creation of ontological descriptions in an intuitive way. Our formalism is based on nonmonotonic existential rules and exhibits a favourable balance between expressive power and both computational and empirical tractability. In order to ensure the decidability of reasoning, we introduce a number of acyclicity criteria that strictly generalise many of the existing ones. We also present a novel stratification condition that properly extends 'classical' stratification and allows both definitional and conditional aspects of complex structures to be captured. The applicability of our formalism is supported by a prototypical implementation, which is based on an off-the-shelf answer set solver and is tested over a realistic knowledge base. Our experimental results demonstrate an improvement of up to three orders of magnitude over previous evaluation efforts, and also expose numerous modelling errors in a manually curated biochemical knowledge base.

Overall, we believe that our work lays the practical and theoretical foundations of an ontology language well suited to the representation of structured objects. From a modelling point of view, our approach could stimulate the adoption of a different and expressive reasoning paradigm for which robustly engineered, mature reasoners are available, and could thus pave the way for the representation of a broader spectrum of knowledge. At the same time, our theoretical contributions reveal useful insights into logic-based knowledge representation and reasoning. Our results should therefore be of value to ontology engineers and knowledge representation researchers alike.
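To give a flavour of reasoning with existential rules, here is a toy skolem chase (an invented illustration restricted to single-atom rule bodies, not the thesis's answer-set-solver-based implementation): existential variables in rule heads are witnessed by fresh labelled nulls, and each rule fires at most once per matching fact, so this acyclic rule set terminates.

```python
# Toy skolem chase for existential rules with single-atom bodies.
# Rules and predicates are invented for illustration.
from itertools import count

fresh = count()

def chase(facts, rules):
    facts, fired = set(facts), set()
    changed = True
    while changed:
        changed = False
        for i, (body_pred, head) in enumerate(rules):
            for fact in list(facts):
                if fact[0] != body_pred or (i, fact) in fired:
                    continue
                fired.add((i, fact))             # fire each rule once per match
                derived = head(fact[1:], lambda: f"_n{next(fresh)}")
                if not derived <= facts:
                    facts |= derived
                    changed = True
    return facts

def molecule_head(args, null):   # molecule(X) -> exists Y. hasAtom(X,Y), atom(Y)
    y = null()
    return {("hasAtom", args[0], y), ("atom", y)}

def atom_head(args, null):       # atom(Y) -> hasMass(Y)
    return {("hasMass", args[0])}

rules = [("molecule", molecule_head), ("atom", atom_head)]
for fact in sorted(chase({("molecule", "caffeine")}, rules)):
    print(fact)
```

Acyclicity criteria of the kind the thesis generalises are exactly conditions guaranteeing that such a chase cannot keep inventing nulls forever; nonmonotonic features (negation as failure) are what the answer set solver adds on top.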
|
208 |
Bifibrational duality in non-abelian algebra and the theory of databases
Weighill, Thomas, 2014
Thesis (MSc)--Stellenbosch University, 2014.

In this thesis we develop a self-dual categorical approach to some topics in non-abelian algebra, which is based on replacing the framework of a category with that of a category equipped with a functor to it. We also make some first steps towards a possible link between this theory and the theory of databases in computer science. Both of these theories are based around the study of Grothendieck bifibrations and their generalisations. The main results in this thesis concern correspondences between certain structures on a category which are relevant to the study of categories of non-abelian group-like structures, and functors over that category. An investigation of these correspondences leads to a system of dual axioms on a functor, which can be considered as a solution to the proposal of Mac Lane in his 1950 paper "Duality for Groups" that a self-dual setting for formulating and proving results for groups be found. The part of the thesis concerned with the theory of databases is based on a recent approach by Johnson and Rosebrugh to views of databases and the view update problem.
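For reference, the standard definitions behind the abstract's central notion are as follows (usual notation, not necessarily the author's):

```latex
% Cartesian morphisms, fibrations and bifibrations (standard definitions).
A morphism $\varphi \colon e' \to e$ in $\mathbb{E}$ is \emph{cartesian}
with respect to a functor $p \colon \mathbb{E} \to \mathbb{B}$ if for every
$\psi \colon e'' \to e$ in $\mathbb{E}$ and every $g \colon p(e'') \to p(e')$
in $\mathbb{B}$ with $p(\varphi) \circ g = p(\psi)$, there is a unique
$\chi \colon e'' \to e'$ such that $p(\chi) = g$ and
$\varphi \circ \chi = \psi$. The functor $p$ is a \emph{(Grothendieck)
fibration} if every $f \colon b \to p(e)$ in $\mathbb{B}$ has a cartesian
lift, i.e. a cartesian $\varphi \colon e' \to e$ with $p(\varphi) = f$; it is
an \emph{opfibration} if
$p^{\mathrm{op}} \colon \mathbb{E}^{\mathrm{op}} \to \mathbb{B}^{\mathrm{op}}$
is a fibration, and a \emph{bifibration} if it is both.
```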
|
209 |
Embedding an object calculus in the unifying theories of programming
Smith, Michael Anthony, January 2010
Hoare and He's Unifying Theories of Programming (UTP) provides a rich model of programs as relational predicates. This theory is intended to provide a single framework in which any programming paradigms, languages and features can be modelled, compared and contrasted. The UTP already has models for several programming formalisms, such as imperative programming, higher-order programming (e.g. programming with procedures), several styles of concurrent programming (or reactive systems), class-based object-orientation, and transaction processing. We believe that the UTP ought to be able to represent all significant computer programming language formalisms in order for it to be considered a unifying theory. One gap in the UTP work is object-based object-orientation, such as that presented in Abadi and Cardelli's untyped object calculi (sigma-calculi). These sigma-calculi provide a prominent formalism for object-based object-oriented (OO) programs, modelling programs as objects. We address this gap in this dissertation by presenting an embedding of an Abadi-Cardelli-style object calculus in the UTP. More formally, the thesis that this dissertation argues is that it is possible to provide an object-based object-orientation to the UTP, with value- and reference-based objects, and a fully abstract model of references. We make three contributions to our area of study: first, we extend the UTP with a notion of object-based object-orientation, in contrast with the existing class-based models; second, we provide an alternative model of pointers (references) for the UTP that supports both value-based compound values (e.g. objects) and references (pointers), in contrast to existing UTP pointer models in which compound values are reference-based; and third, we model an Abadi-Cardelli notion of an object in the UTP, and thus demonstrate that it can unify this style of object formalism.
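The object-based style being embedded can be conveyed by a small Python rendition of an Abadi-Cardelli-style object (an informal sketch, not the UTP embedding itself): an object is simply a record of methods abstracted on self, and the primitive operations are method invocation and functional method update.

```python
# Sigma-calculus objects, informally: o.l invokes method l with self bound to
# o; o.l := sigma(x) b functionally replaces method l. Illustration only.
class SigmaObject:
    def __init__(self, **methods):
        self.methods = dict(methods)              # label -> function of self

    def invoke(self, label):
        return self.methods[label](self)          # o.l  ==  body[self := o]

    def update(self, label, method):
        new = dict(self.methods, **{label: method})
        return SigmaObject(**new)                 # o.l := sigma(x) b

# the classic one-place storage cell: get returns contents, set overrides get
cell = SigmaObject(
    get=lambda self: 0,
    set=lambda self: lambda v: self.update("get", lambda _self: v),
)
cell2 = cell.invoke("set")(42)
print(cell.invoke("get"), cell2.invoke("get"))    # 0 42
```

Note that update is functional (it returns a new object), matching the calculus; it is exactly this interplay of values and references for objects that the dissertation's fully abstract model of references addresses.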
|
210 |
Program synthesis from domain specific object models
Faitelson, David, January 2008
Automatically generating a program from its specification eliminates a large source of errors that is often unavoidable in a manual approach. While a general-purpose code generator is impossible to build, it is possible to build a practical code generator for a specific domain. This thesis investigates the theory behind Booster — a domain-specific, object-based specification language and automatic code generator. The domain of Booster is information systems — systems that consist of a rich object model in which the objects refer to each other to form a complicated network of associations. The operations of such systems are conceptually simple (changing the attributes of objects, adding or removing objects, and creating or destroying associations) but tricky to implement correctly. The thesis focuses on the theoretical foundation of the Booster approach, and in particular on three contributions: semantics, model completion, and code generation. The semantics of a Booster model is a single abstract data type (ADT) in which the invariants and the methods of all the classes in the model are promoted to the level of the ADT. This differs from the traditional view, which considers each class a separate ADT; the thesis argues that the Booster semantics is a better model of object-oriented systems. The second important contribution is the idea of model completion — a process that augments the postconditions of methods with additional predicates that follow from the system's invariant and the method's original intention. The third contribution is a simple but effective code generation technique that interprets postconditions as executable statements and uses weakest preconditions to ensure that the generated code refines its specification.
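The weakest-precondition technique behind the third contribution is easy to sketch. The following illustrative Python (the statement encoding and example program are invented, not Booster's) computes wp for assignment, sequencing and conditionals over sympy predicates, so that "precondition implies wp(program, postcondition)" certifies that generated code refines its specification.

```python
# Minimal weakest-precondition calculator: wp(x := e, Q) = Q[x/e],
# wp(s1; s2, Q) = wp(s1, wp(s2, Q)), and the usual rule for conditionals.
import sympy as sp

def wp(stmt, post):
    kind = stmt[0]
    if kind == "assign":                      # ("assign", var, expr)
        _, var, expr = stmt
        return post.subs(var, expr)
    if kind == "seq":                         # ("seq", s1, s2)
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    if kind == "if":                          # ("if", cond, s_then, s_else)
        _, cond, s_then, s_else = stmt
        return sp.And(sp.Implies(cond, wp(s_then, post)),
                      sp.Implies(sp.Not(cond), wp(s_else, post)))
    raise ValueError(kind)

x, y = sp.symbols("x y")
prog = ("seq", ("assign", x, x + 1),
               ("if", x > y, ("assign", y, x), ("assign", x, y)))
# wp simplifies to True: the program always establishes y = max(x, y)
print(sp.simplify(wp(prog, sp.Eq(y, sp.Max(x, y)))))
```

Model completion fits naturally in front of this: strengthening postconditions with consequences of the invariant gives wp more to work with, which is what makes the naive "execute the postcondition" strategy viable.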
|