71
Higher-order model checking with traversals / Neatherway, Robin Philip. January 2014 (has links)
Higher-order recursion schemes are a powerful model of functional computation that grew out of traditional recursive program schemes and generalisations of grammars. It is common to view recursion schemes as generators of possibly-infinite trees, which Ong showed to have a decidable monadic second order theory and opened the door to applications in verification. Kobayashi later presented an intersection type characterisation of the model checking problem, on which most subsequent applied work is based. In recent work, recursion schemes have been considered to play a role similar to Boolean programs in verification of first-order imperative programs: a natural target for abstraction of programs with very large or infinite data domains. In this thesis we focus on the development of model checking algorithms for variants of recursion schemes. We start our contributions with a model checking algorithm inspired by the fully abstract game semantics of recursion schemes, but specified as a goal-directed approach to intersection type inference, that offers a unification of the views of Ong and Kobayashi. We build on this largely theoretical contribution with two orthogonal extensions and practical implementations. First, we develop a new extension of recursion schemes: higher-order recursion schemes with cases, which add non-determinism and a case construct operating over a finite data domain. These additions provide us with a more natural and succinct target for abstraction from functional programs: encoding data using functions inevitably results in an increase in the order and arity of the scheme, which have a direct impact on the worst-case complexity of the problem. We characterise the model checking problem using a novel intersection and union type system and give a practical algorithm for type inference in this system. We have carried out an empirical evaluation of the implementation --- the tool TravMC --- using a variety of problem instances from the literature and a new suite of problem instances derived via an abstraction-refinement procedure from functional programs. Second, we extend our approach from safety properties to all properties expressible in monadic second order logic using alternating parity tree automata as our specification language. We again provide an implementation and an empirical evaluation, which shows that despite the challenges accompanying liveness properties our tool scales beyond the current state of the art.
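As a hedged illustration of the formalism (a standard textbook-style example, not one drawn from the thesis), a recursion scheme is a set of typed rewrite rules over non-terminals; the scheme below, with binary terminal a, unary terminal b and nullary terminal c, generates a single infinite tree:

    S    ->  F c
    F x  ->  a x (F (b x))

Unfolding from S yields the tree a c (a (b c) (a (b (b c)) ...)), and it is monadic second order properties of trees like this one that the model checking problem decides.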
72
Weak cost automata over infinite trees / Vanden Boom, Michael T. January 2012 (has links)
Cost automata are traditional finite state automata enriched with a finite set of counters that can be manipulated on each transition. Based on the evolution of counter values, a cost automaton defines a function from the set of structures under consideration to the natural numbers extended with infinity, modulo a boundedness relation that ignores exact values but preserves boundedness properties. Historically, variants of cost automata have been used to solve problems in language theory such as the star height problem. They also have a rich theory in their own right as part of the theory of regular cost functions, which was introduced by Colcombet as an extension to the theory of regular languages. It subsumes the classical theory since a language can be associated with the function that maps every structure in the language to 0 and everything else to infinity; it is a strict extension since cost functions can count some behaviour within the input. Regular cost functions have been previously studied over finite words and trees. This thesis extends the theory to infinite trees, where classical parity automata are enriched with a finite set of counters. Weak cost automata, which have priorities {0,1} or {1,2} and an additional restriction on the structure of the transition function, are shown to be equivalent to a weak cost monadic logic. A new notion of quasi-weak cost automata is also studied and shown to arise naturally in this cost setting. Moreover, a decision procedure is given to determine whether or not functions definable using weak or quasi-weak cost automata are equivalent up to the boundedness relation, which also proves the decidability of the weak cost monadic logic over infinite trees. The semantics of these cost automata over infinite trees are defined in terms of cost-parity games which are two-player infinite games where one player seeks to minimize the counter values and satisfy the parity condition, and the other player seeks to maximize the counter values or sabotage the parity condition. The main contributions and key technical results involve proving that certain cost-parity games admit positional or finite-memory strategies. These results also help settle the decidability of some special cases of long-standing open problems in the classical theory. In particular, it is shown that it is decidable whether a regular language of infinite trees is recognizable using a nondeterministic co-Büchi automaton. Likewise, given a Büchi or co-Büchi automaton as input, it is decidable whether or not there is a weak automaton recognizing the same language.
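For readers unfamiliar with the boundedness relation, the standard definition from the theory of regular cost functions (stated here as background, not as a contribution of the thesis) compares functions f, g from structures to the naturals extended with infinity as follows:

    f \preceq g  iff  there is a non-decreasing correction function \alpha : \mathbb{N} \to \mathbb{N}
                      with f(t) \le \alpha(g(t)) for every input t (taking \alpha(\infty) = \infty)
    f \approx g  iff  f \preceq g and g \preceq f

Equivalently, f \preceq g holds iff f is bounded on every set of inputs on which g is bounded, which is why exact values are ignored but boundedness properties are preserved.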
73
Reweighting methods in high dimensional regression / Fang, Zhou. January 2012 (has links)
In this thesis, we focus on the application of covariate reweighting with Lasso-style methods for regression in high dimensions, particularly where p ≥ n. We focus in particular on the case of sparse regression under a priori grouping structures. In such problems, even in the linear case, accurate estimation is difficult. Various authors have suggested ideas such as the Group Lasso and the Sparse Group Lasso, based on convex penalties, or alternatively methods like the Group Bridge, which rely on convergence under repetition to some local minimum of a concave penalised likelihood. We propose in this thesis a methodology that uses concave penalties to inspire a procedure whereupon we compute weights from an initial estimate, and then do a single second reweighted Lasso. This procedure -- the Co-adaptive Lasso -- obtains excellent results in empirical experiments, and we present some theoretical prediction and estimation error bounds. Further, several extensions and variants of the procedure are discussed and studied. In particular, we propose a Lasso-style method of doing additive isotonic regression in high dimensions, the Liso algorithm, and enhance it using the Co-adaptive methodology. We also propose a method of producing rules-based regression estimates for high dimensional non-parametric regression, that often outperforms the current leading method, the RuleFit algorithm. We also discuss extensions involving robust statistics applied to weight computation, repeating the algorithm, and online computation.
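The two-stage recipe described above (weights from an initial estimate, then one reweighted Lasso) can be sketched along the lines of the generic adaptive Lasso; the sketch below is illustrative only and does not reproduce the thesis's specific Co-adaptive weighting (the function name and the gamma and eps choices are assumptions):

    import numpy as np
    from sklearn.linear_model import Lasso

    def reweighted_lasso(X, y, alpha=0.1, gamma=1.0, eps=1e-6):
        # Stage 1: initial Lasso fit to obtain preliminary coefficients.
        init = Lasso(alpha=alpha).fit(X, y)
        # Weights: small initial coefficients receive large penalties.
        w = 1.0 / (np.abs(init.coef_) + eps) ** gamma
        # Stage 2: a weighted Lasso, implemented by rescaling columns.
        Xw = X / w
        fit = Lasso(alpha=alpha).fit(Xw, y)
        return fit.coef_ / w

The column-rescaling step works because minimising ||y - Xb||^2 + alpha * sum_j w_j |b_j| is equivalent to an ordinary Lasso fit on the rescaled design with columns X_j / w_j.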
74
Distributed representations for compositional semantics / Hermann, Karl Moritz. January 2014 (links)
The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A lot of research has been devoted to finding ways of representing the semantics of individual words in vector spaces. Distributional approaches—meaning distributed representations that exploit co-occurrence statistics of large corpora—have proved popular and successful across a number of tasks. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also the structure they are contained in at the phrasal or sentential level. Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is an equally fundamental task of NLP. This dissertation explores methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units. Our underlying hypothesis is that neural models are a suitable vehicle for learning semantically rich representations and that such representations in turn are suitable vehicles for solving important tasks in natural language processing. The contribution of this thesis is a thorough evaluation of our hypothesis, as part of which we introduce several new approaches to representation learning and compositional semantics, as well as multiple state-of-the-art models which apply distributed semantic representations to various tasks in NLP. Part I focuses on distributed representations and their application. In particular, in Chapter 3 we explore the semantic usefulness of distributed representations by evaluating their use in the task of semantic frame identification. Part II describes the transition from semantic representations for words to compositional semantics. Chapter 4 covers the relevant literature in this field. Following this, Chapter 5 investigates the role of syntax in semantic composition. For this, we discuss a series of neural network-based models and learning mechanisms, and demonstrate how syntactic information can be incorporated into semantic composition. This study allows us to establish the effectiveness of syntactic information as a guiding parameter for semantic composition, and answer questions about the link between syntax and semantics. Following these discoveries regarding the role of syntax, Chapter 6 investigates whether it is possible to further reduce the impact of monolingual surface forms and syntax when attempting to capture semantics. Asking how machines can best approximate human signals of semantics, we propose multilingual information as one method for grounding semantics, and develop an extension to the distributional hypothesis for multilingual representations. Finally, Part III summarizes our findings and discusses future work.
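As a minimal sketch of what composing distributed representations can mean (one generic recursive-network-style step, purely illustrative and not any of the thesis's models; the matrix W and the toy lexicon are assumptions):

    import numpy as np

    d = 4                                          # toy embedding dimension
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(d, 2 * d))     # composition matrix (illustrative)

    def compose(left, right):
        # One composition step: concatenate the child vectors,
        # apply a linear map, and squash with tanh.
        return np.tanh(W @ np.concatenate([left, right]))

    # Toy lexicon; in practice these would be learned embeddings.
    vec = {w: rng.normal(size=d) for w in ["neural", "models", "learn", "meaning"]}

    # Compose "(neural models) (learn meaning)" bottom-up along a parse tree.
    np_phrase = compose(vec["neural"], vec["models"])
    vp_phrase = compose(vec["learn"], vec["meaning"])
    sentence  = compose(np_phrase, vp_phrase)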
75
Analysis of sparse systems / Duff, Iain Spencer. January 1972 (has links)
The aim of this thesis is to conduct a general investigation in the field of sparse matrices, to investigate and compare various techniques for handling sparse systems suggested in the literature, to develop some new techniques, and to discuss the feasibility of using sparsity techniques in the solution of overdetermined equations and the eigenvalue problem.
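A modern, hedged illustration of the kind of technique investigated here, using today's SciPy rather than anything available in 1972: an overdetermined sparse least-squares problem can be solved while touching only the stored nonzeros.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(1)
    # A 200 x 50 overdetermined system with ~2% nonzero entries.
    A = sparse_random(200, 50, density=0.02, random_state=1, format="csr")
    b = rng.normal(size=200)

    # LSQR solves min ||Ax - b|| iteratively, never forming the
    # dense normal equations and exploiting A's sparsity throughout.
    x = lsqr(A, b)[0]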
76
Logical abstract interpretation / D'Silva, Vijay Victor. January 2013 (has links)
Logical deduction and abstraction from detail are fundamental, yet distinct aspects of reasoning about programs. This dissertation shows that the combination of logic and abstract interpretation enables a unified and simple treatment of several theoretical and practical topics which encompass the model theory of temporal logics, the analysis of satisfiability solvers, and the construction of Craig interpolants. In each case, the combination of logic and abstract interpretation leads to more general results, simpler proofs, and a unification of ideas from seemingly disparate fields. The first contribution of this dissertation is a framework for combining temporal logics and abstraction. Chapter 3 introduces trace algebras, a new lattice-based semantics for linear and branching time logics. A new representation theorem shows that trace algebras precisely capture the standard trace-based semantics of temporal logics. We prove additional representation theorems to show how structures that have been independently discovered in static program analysis, model checking, and algebraic modal logic, can be derived from trace algebras by abstract interpretation. The second contribution of this dissertation is a framework for proving when two lattice-based algebras satisfy the same logical properties. Chapter 5 introduces functions called subsumption and bisubsumption and shows that these functions characterise logical equivalence of two algebras. We also characterise subsumption and bisubsumption using fixed points and finitary logics. We prove a representation theorem and apply it to derive the transition system analogues of subsumption and bisubsumption. These analogues strictly generalise the well-studied notions of simulation and bisimulation. Our fixed point characterisations also provide a technique to construct property preserving abstractions. The third contribution of this dissertation is abstract satisfaction, an abstract interpretation framework for the design and analysis of satisfiability procedures. We show that formula satisfiability has several different fixed point characterisations, and that satisfiability procedures can be understood as abstract interpreters. Our main result is that the propagation routine in modern SAT solvers is a greatest fixed point computation involving abstract transformers, and that clause learning is an abstract transformer for a form of negation. The final contribution of this dissertation is an abstract interpretation based analysis of algorithms for constructing Craig interpolants. We identify and analyse a lattice of interpolant constructions. Our main result is that existing algorithms are two of three optimal abstractions of this lattice. A second new result we derive in this framework is that the lattice of interpolation algorithms can be ordered by logical strength, so that there is a strongest and a weakest possible construction.
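The claim that propagation is a fixed point computation can be made concrete with a toy unit-propagation loop (a sketch of the folklore algorithm, not of the dissertation's abstract transformers):

    def unit_propagate(clauses, assignment):
        # Repeatedly apply the unit rule until nothing new is forced:
        # an iteration to a fixed point in which the set of candidate
        # values for each variable only ever shrinks.
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                unassigned = [l for l in clause
                              if l not in assignment and -l not in assignment]
                if any(l in assignment for l in clause):
                    continue                     # clause already satisfied
                if not unassigned:
                    return None                  # conflict: clause falsified
                if len(unassigned) == 1:         # unit clause: forced literal
                    assignment.add(unassigned[0])
                    changed = True
        return assignment

    # Clauses as DIMACS-style integers: (x1 or x2), (not x1), (not x2 or x3)
    print(unit_propagate([[1, 2], [-1], [-2, 3]], set()))   # -> {-1, 2, 3}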
77
In silico modelling of tumour-induced angiogenesis / Connor, Anthony J. January 2014 (has links)
Angiogenesis, the process by which new vessels form from existing ones, is a key event in the development of a large and malignant vascularised tumour. A rapid expansion in in vivo and in vitro angiogenesis research in recent years has led to increased knowledge about the processes underlying angiogenesis and to promising steps forward in the development of anti-angiogenic therapies for the treatment of various cancers. However, substantial gaps in knowledge persist and the development of effective treatments remains a major challenge. In this thesis we study tumour-induced angiogenesis within the context of a highly controllable experimental environment: the cornea micropocket assay. Using a multidisciplinary approach that combines experiments, image processing and analysis, and mathematical and computational modelling, we aim to provide mechanistic insight into the action of two angiogenic factors which are known to play central roles during tumour-induced angiogenesis: vascular endothelial growth factor A (VEGF-A) and basic fibroblast growth factor (bFGF). Image analysis techniques are used to extract quantitative data, which are both spatially and temporally resolved, from experimental images. These data are then used to develop and parametrise mathematical models describing the evolution of the corneal vasculature in response to both VEGF-A and bFGF. The first models developed in this thesis are one-dimensional continuum models of angiogenesis in which VEGF-A and/or bFGF are released from a pellet implanted into a mouse cornea. We also use an object-oriented framework, designed to facilitate the re-use and extensibility of hybrid multiscale models of angiogenesis and vascular tumour growth, to develop a complementary three-dimensional hybrid model of the same system. The hybrid model incorporates a new non-local cell sensing model which facilitates the formation of well-perfused and functional vascular networks in three dimensions. The continuum models are used to assess the utility of the cornea micropocket assay as a quantitative assay for angiogenesis, to characterise proposed synergies between VEGF-A and bFGF, and to generate experimentally testable predictions regarding the effect of anti-VEGF-A therapies on bFGF-induced angiogenesis. Meanwhile, the hybrid model is used to provide context for the comparison that is drawn between the continuum models and the data, to study the relative distributions of perfused and unperfused vessels in the evolving neovasculature, and to investigate the impact of tip cell sensing dysregulation on the angiogenic response in the cornea. We have found that by exploiting a close link with quantitative data we have been able to extend the predictive and hypothesis-testing capabilities of our models. As such, this thesis demonstrates the potential for integrating mathematical modelling with image analysis techniques to increase insight into the mechanisms underlying angiogenesis.
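For orientation, a representative one-dimensional continuum model of this general kind (generic in form; the thesis's own equations and parameters may differ) couples an endothelial tip-cell density n(x,t) to a growth-factor concentration c(x,t) through random motility and chemotaxis:

    \partial_t n = D\, \partial_x^2 n - \chi\, \partial_x \left( n\, \partial_x c \right)
    \partial_t c = D_c\, \partial_x^2 c - \lambda\, n c

Here D, \chi, D_c and \lambda denote cell motility, chemotactic sensitivity, growth-factor diffusivity and uptake; all four symbols are assumptions of this sketch rather than the thesis's notation.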
78
Exact and approximate posterior of reliability via the Laplace approximation of the exponentiated Gamma and Weibull distributions / Jorge, Luís Fernando. January 2019 (has links)
Advisor: Fernando Antonio Moala / Committee: Carlos Aparecido dos Santos / Committee: Manoel Ivanildo Silvestre Bezerra / Abstract: Reliability analysis is a well-established area of statistics. Before the advent of Markov chain Monte Carlo (MCMC) methods, the Laplace approximation was widely used for parameter estimation; since then, MCMC has become the more common tool. This work compares the two estimation methods, seeking an approach that produces good results with excellent properties. An important feature of the Laplace approximation is that it yields a closed form for the posterior distribution of the reliability, both for the Weibull and for the exponentiated Gamma (EG) distribution. Conjugate priors are also developed for the parameter and for the reliability of both distributions, a property that simplifies obtaining sample moments and computing intervals. The Weibull distribution is widely used in reliability analysis and in other areas such as climatology and medicine. The EG distribution is less widely used, but it exhibits different hazard behaviours and has only one parameter to be estimated. / Master's
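As background (standard results, not specific to this dissertation): the Weibull reliability at time t, and the univariate Laplace approximation that gives the posterior its closed form, can be written as

    R(t) = \exp\{ -(t/\sigma)^{\beta} \}
    \int e^{-n h(\theta)} \, d\theta \;\approx\; e^{-n h(\hat{\theta})} \sqrt{ \frac{2\pi}{n\, h''(\hat{\theta})} }

where \sigma and \beta are the Weibull scale and shape and \hat{\theta} minimises h; this parametrisation is an assumption of this note rather than the thesis's.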
79
Stability of dynamical systems: a study of the memristor / Moreira, Marília Davoli. January 2014 (has links)
Advisor: Vanessa Avansini Botta Pirani / Committee: Marcelo Messias / Committee: Anna Diva Plasencia Lotufo / Abstract: This work presents a detailed study of the stability of the equilibrium points of mathematical models that represent the behaviour of an electrical circuit containing a memristor alongside the other electrical components. The models consist of systems of third- and fourth-order ordinary differential equations involving piecewise linear functions. Results related to the zeros of polynomials are of fundamental importance in this analysis, since the stability of such systems is determined by the eigenvalues of the coefficient matrix of the system. The Routh-Hurwitz criterion is used throughout. / Master's
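For a third-order system of the kind studied here, the Routh-Hurwitz criterion reduces to simple coefficient inequalities (a standard statement, included for context): all roots of the characteristic polynomial

    p(\lambda) = \lambda^3 + a_2 \lambda^2 + a_1 \lambda + a_0

have negative real part, so that the equilibrium is asymptotically stable, if and only if a_2 > 0, a_0 > 0 and a_2 a_1 > a_0.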
80
Construction of a scale for assessing proficiency in basic mathematical content / Rossi, Paola Rocchi. January 2015 (has links)
Advisor: Aparecida Doniseti Pires de Souza / Co-advisor: Adriano Ferreti Borgatto / Committee: Mariana Curi / Committee: Maria Raquel Miotto Morelatti / Abstract: This work presents an application of Item Response Theory (IRT), specifically the one-dimensional three-parameter logistic model, to the construction of a scale for measuring proficiency in basic mathematical content. The items making up the assessment instrument were developed from a Reference Matrix based on the matrices of the Brazilian Basic Education Assessment System (SAEB). The topics covered include space and form, quantities and measures, and numbers and functions, involving algebra and functions. Calibration of the parameters and estimation of proficiencies were carried out using a Bayesian approach. The results showed that most of the proposed items allow the proficiency of entering students to be assessed, with nine of the thirty-two items making up the test classified as anchor items. However, new items need to be included so that abilities on part of the scale can be better estimated. / Master's
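For context, the three-parameter logistic (3PL) model referred to above gives the probability that examinee i with latent proficiency \theta_i answers item j correctly as

    P(X_{ij} = 1 \mid \theta_i) = c_j + (1 - c_j)\, \frac{1}{1 + e^{-a_j(\theta_i - b_j)}}

with a_j the discrimination, b_j the difficulty and c_j the guessing parameter of item j (standard IRT notation, not taken from the thesis).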