1. Choice, extension, conservation. From transfinite to finite proof methods in abstract algebra. Wessel, Daniel (January 2018)
Maximality principles such as the ones going back to Kuratowski and Zorn ensure the existence of higher-type ideal objects, the use of which is commonly held to be indispensable for mathematical practice. The modern turn towards computational methods, which has a strong influence on contemporary foundational studies, encourages a reassessment, within a constructive framework, of the methodological intricacies that go along with invocations of maximality principles. The common thread running through the chapters of this thesis is the attempt to put the widespread use of ideal objects under constructive scrutiny. The thesis thus walks the tracks of a revised Hilbert's programme, which has inspired a renewed approach to constructive algebra by finitary means and for which Scott's entailment relations have already been shown to provide a vital and most versatile tool. In this thesis several forms of the Kuratowski-Zorn Lemma are introduced and proved equivalent over constructive set theory; the notion of Jacobson radical is carried over from commutative rings to a general ideal theory for single-conclusion entailment relations; a flexible conservation criterion of Scott for multi-conclusion entailment relations is put into action; elementary and constructive variants of algebraic extension theorems, such as Sikorski's theorem on the injectivity of complete atomic Boolean algebras, are phrased and proved in terms of entailment relations; and a point-free version of Sikora's theorem on spaces of orderings of groups is obtained by revisiting, with syntactical means, some of the classical criteria for groups to be orderable.
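For orientation, the entailment relations mentioned above can be recalled in their standard form (this is the usual presentation from the literature, not a definition specific to this thesis): a multi-conclusion entailment relation in the sense of Scott is a relation $\vdash$ between finite subsets of a base set $S$ satisfying
\[
U \cap V \neq \emptyset \;\Longrightarrow\; U \vdash V \qquad (\text{reflexivity}),
\]
\[
U \vdash V \;\Longrightarrow\; U, U' \vdash V, V' \qquad (\text{monotonicity}),
\]
\[
U \vdash V, b \ \text{ and } \ U, b \vdash V \;\Longrightarrow\; U \vdash V \qquad (\text{transitivity}),
\]
where $U, b$ abbreviates $U \cup \{b\}$; single-conclusion entailment relations, as used for the generalised Jacobson radical, restrict the right-hand side to a single element.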
2. Extended-order algebras and some applications. Della Stella, Maria Emilia (January 2013)
In this PhD thesis we study extended-order algebras and their properties; moreover, we evaluate the possibility of applying them in other mathematical contexts, such as fuzzy implicators and many-valued relations.
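Since fuzzy implicators are mentioned as a target of application, it may help to recall a definition that is common in the literature (the thesis may well adopt a more general notion): a fuzzy implicator is a map $I \colon [0,1]^2 \to [0,1]$, non-increasing in the first argument and non-decreasing in the second, with
\[
I(0,0) = I(1,1) = 1, \qquad I(1,0) = 0.
\]
A standard example is the Łukasiewicz implicator $I(x,y) = \min(1,\, 1 - x + y)$.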
3. Nonstandard Models in Measure Theory and in Functional Analysis. Bottazzi, Emanuele (January 2017)
This thesis is concerned with the study of nonstandard models in measure theory and in functional analysis. In measure theory, we define elementary numerosities, that is, additive measures that take values in a non-Archimedean field and for which the measure of every singleton is 1. We show that, by taking the ratio with a suitable unit of measurement, a non-atomic real-valued measure can be defined from a numerosity, and that every non-atomic measure can be obtained from a numerosity by this procedure. We then use numerosities to develop a model for the probability of infinite sequences of coin tosses coherent with the original ideas of Laplace. In functional analysis, we introduce a space of functions of nonstandard analysis with a formally finite domain that extends both the space of distributions and the space of Young measures. Among the applications of this space of functions, we develop a continuous-in-time, discrete-in-space nonstandard formulation for a class of ill-posed forward-backward parabolic equations, and we study the regularity and asymptotic behaviour of its nonstandard solutions. This approach proves to be a viable alternative to the study of the vanishing viscosity limit of the solution of a pseudoparabolic regularization of the original problem.
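As a rough sketch of the ratio construction described above (the notation here is illustrative, not the thesis's own): if $\mathfrak{n}$ is an elementary numerosity on a set $\Omega$, with values in a non-Archimedean field extending the reals, and $\beta$ is a fixed positive element of that field chosen as unit of measurement, one can set
\[
\mu(E) \;=\; \operatorname{st}\!\left(\frac{\mathfrak{n}(E)}{\beta}\right),
\]
where $\operatorname{st}$ denotes the standard part. For suitable choices of $\beta$ this yields the non-atomic real-valued measures referred to in the abstract.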
4. Query Answering over Contextualized RDF/OWL Knowledge with Expressive Bridge Rules: Decidable Classes. Joseph, Mathew (January 2015)
In this thesis, we study the problem of reasoning and query answering over contextualized knowledge in quad format augmented with expressive forall-existential bridge rules. Such bridge rules contain conjunctions and existentially quantified variables in the head, and are strictly more expressive than the bridge rules considered so far in similar settings. A set of quads together with forall-existential bridge rules is called a quad-system. We show that query answering over quad-systems in their unrestricted form is, in general, undecidable. We propose various subclasses of quad-systems for which query answering is decidable. Context-acyclic quad-systems do not allow the context dependency graph of the bridge rules to have cycles passing through triple-generating (value-generating) contexts, and hence guarantee the chase (deductive closure) to be finite. The csafe, msafe and safe classes of quad-systems restrict the structure of the descendance graph of the Skolem blank nodes generated during the chase process to be a directed acyclic graph (DAG) of bounded depth, and hence have finite chases. RR and restricted RR quad-systems do not allow for the creation of Skolem blank nodes, and hence restrict the chase to be of polynomial size. Besides the undecidability result for unrestricted quad-systems, tight complexity bounds have been established for each of the classes we introduce. We then compare the problems (resp. classes) we address (resp. derive) in this thesis for quad-systems with analogous problems (resp. classes) in the realm of forall-existential rules. We show that the query answering problem over quad-systems is polynomially equivalent to the query answering problem over ternary forall-existential rules, and that the technique of safety we propose is strictly more expressive than existing well-known techniques such as joint acyclicity and model faithful acyclicity, used for decidability guarantees in the realm of forall-existential rules.
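To fix intuitions, here is a toy instance of the kind of bridge rule considered, with invented context and predicate names: writing $c\colon(s,p,o)$ for the quad asserting the triple $(s,p,o)$ in context $c$, a forall-existential bridge rule with a conjunctive head and an existential variable might read
\[
\forall x\,\Bigl( c_1\colon(x,\ \text{rdf:type},\ \text{Employee}) \Bigr) \;\rightarrow\; \exists y\,\Bigl( c_2\colon(x,\ \text{hasBadge},\ y) \;\wedge\; c_2\colon(y,\ \text{rdf:type},\ \text{Badge}) \Bigr).
\]
It is the existential variable $y$ in the head that forces the chase to introduce Skolem blank nodes; the classes above differ precisely in how they restrict the generation of such nodes.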
5. Dealing with Semantic Heterogeneity in Classifications. Maltese, Vincenzo (January 2012)
Many projects have dealt with mappings between classifications, both in the computer science and in the digital library communities. The adopted solutions range from fully manual to fully automatic approaches. Manual approaches are very precise, but automation becomes unavoidable when classifications contain thousands of nodes with millions of candidate correspondences. As a fundamental preliminary step towards automation, S-Match converts classifications into formal ontologies, i.e. lightweight ontologies. Although many solutions to the problem have been offered, with S-Match representing a state-of-the-art matcher with good accuracy and run-time performance, there are still several open problems. In particular, the problems addressed in this thesis include: (a) Run-time performance: due to the high number of calls to the SAT reasoning engine, semantic matching may require exponential time; (b) Maintenance: current matching tools offer poor support to users for the process of creation, validation and maintenance of the correspondences; (c) Lack of background knowledge: the lack of domain-specific background knowledge is one important cause of low recall. As significant progress on (a) and (b), we describe MinSMatch, a semantic matching tool we developed by evolving S-Match, which computes the minimal mapping between two lightweight ontologies. The minimal mapping is the minimal subset of correspondences such that all the others can be efficiently computed from them and are therefore said to be redundant. We provide a formal definition of minimal and, dually, redundant mappings, evidence of the fact that the minimal mapping always exists and is unique, and a correct and complete algorithm for computing it. Our experiments demonstrate a substantial improvement in run-time. Based on this, we also developed a method to support users in the validation task that allows saving up to 99% of the time. We address problem (c) by creating and making use of an extensible diversity-aware knowledge base providing a continuously growing quantity of properly organized knowledge. Our approach is centered on the fundamental notions of domain and context. Domains, developed by adapting the faceted approach from library science, are the main means by which diversity is captured; they allow scaling, as new knowledge can be added as needed. Context allows a better disambiguation of the terms used and reduces the complexity of reasoning at run-time. As proof of the applicability of the approach, we developed the Space domain and applied it in the Semantic Geo-Catalogue (SGC) project.
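A hedged illustration of the redundancy pattern behind minimal mappings (the precise redundancy conditions are those defined in the thesis): once a correspondence $\langle A, B, \sqsubseteq \rangle$ is established between two nodes, any correspondence between a node $A'$ below $A$ in the source and a node $B'$ above $B$ in the target follows by transitivity,
\[
A' \sqsubseteq A, \quad A \sqsubseteq B, \quad B \sqsubseteq B' \;\;\Longrightarrow\;\; A' \sqsubseteq B',
\]
and can therefore be left out of the minimal mapping and recomputed on demand.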
6. Exploiting SAT and SMT Techniques for Automated Reasoning and Ontology Manipulation in Description Logics. Vescovi, Michele (January 2011)
Description Logics (DLs) are a family of logic-based knowledge representation formalisms aimed at representing the knowledge of an application domain in a structured way, one of whose main characteristics is the emphasis on reasoning.
Over the last two decades Description Logics have been widely studied and applied to numerous areas of computer science (including artificial intelligence, formal verification, database theory, natural language processing and distributed computing). In recent years, however, interest in the problem of automated reasoning in Description Logics has grown tremendously because of the explosion of new notable applications in the domains of the Semantic Web and of bio-medical ontologies. The more real-world problems are represented through DL-based ontologies, the more these new applications represent a challenge, due to the efficiency required in handling complex logical constructors (e.g. numerical constraints) and the huge dimensions of such ontologies. For these reasons the development of efficient algorithms and procedures for reasoning in Description Logics has become crucial.
SAT-based technologies, in the meanwhile, have proved to be mature and largely successful in many other automated reasoning fields, first of all in many very hard practical applications of formal verification, often characterized by problems of huge dimension. In the last twenty years, in fact, we have witnessed an impressive advance in the efficiency of SAT techniques, which has brought problems of hundreds of millions of clauses and variables within the reach of the freely available state-of-the-art solvers. As a consequence, many problems have been successfully solved by means of SAT-based techniques, and these approaches are currently the state of the art in the respective communities (above all Model Checking). Furthermore, the progress in SAT-solving techniques, together with concrete needs from real applications, has inspired significant research on richer and more expressive Boolean formalisms, like Satisfiability Modulo Theories (SMT), and on the use of the SAT formalism to solve problems of wider scope (e.g. to compute interpolants, unsatisfiable cores, all-SAT).
The research trends in Description Logics, their manifold practical applications and the quest for efficient and scalable procedures, on the one hand, and the wide variety of mature and efficient techniques offered by the SAT research area, on the other hand, motivated our research. In this thesis we explore the idea of exploiting the power and efficiency of state-of-the-art SAT-based techniques for automated reasoning and ontology manipulation in Description Logics, proposing a convenient alternative to the traditional tableau-based algorithms.
With this aim we propose and develop novel and complete approaches able to solve Description Logic problems as SAT and SMT ones, by means of sound and complete encodings. On top of these encodings we develop new procedures and optimization techniques based on a variety of existing SAT-based formalisms and technologies. In this work, in particular, we focus on three gradually harder reasoning problems in Description Logics, which tackle increasingly more expressive languages or increasingly harder reasoning services, among which we also face non-standard services supporting the debugging of ontologies, such as modularization and axiom pinpointing. We implemented our approaches in tools which integrate with the available SAT/SMT solvers; finally, we show the effectiveness of our novel approaches through very extensive empirical evaluations on benchmarks and ontologies from real applications, in which we compare our performance against the other state-of-the-art available systems.
Notice that any advance in the exploited Boolean reasoning techniques and tools will be freely inherited by our proposed approaches, extending to Description Logics and ontologies the benefits of the great and fast advances in the efficiency of SAT-based techniques.
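As a flavour of what a sound and complete propositional encoding can look like, consider a TBox in a lightweight logic whose axioms have been normalised to the forms $C \sqsubseteq D$ and $C_1 \sqcap C_2 \sqsubseteq D$ (this sketch is only indicative of the general idea and is not the specific encoding developed in the thesis): introducing a propositional variable $p_C$ for every concept name $C$, one maps
\[
C \sqsubseteq D \ \rightsquigarrow\ p_C \rightarrow p_D, \qquad C_1 \sqcap C_2 \sqsubseteq D \ \rightsquigarrow\ p_{C_1} \wedge p_{C_2} \rightarrow p_D,
\]
so that, for such a TBox, a subsumption query $A \sqsubseteq^? B$ reduces to checking whether the resulting Horn clauses together with $p_A$ propositionally entail $p_B$, a task SAT solvers handle very efficiently; richer constructors (existential restrictions, numerical constraints) require additional clauses or a move to SMT.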
7. Scalable Safety and Reliability Analysis via Symbolic Model Checking: Theory and Applications. Mattarei, Cristian (January 2016)
Assuring safety and reliability is fundamental when developing a safety-critical system. Road, naval and avionic transportation; water and gas distribution; nuclear, wind and photovoltaic energy production are only some examples where it is mandatory to guarantee those properties. The continuous increase in the design complexity of safety-critical systems calls for a never-ending search for new and more advanced analytical techniques. In fact, they are required to assure that undesired consequences are highly improbable. In this thesis we introduce a novel methodology able to raise the bar in the area of automated safety and reliability analysis. The proposed approach integrates a series of techniques, based on symbolic model checking, into the current development process of safety-critical systems. Moreover, our methodology and the resulting techniques are then applied to a series of real-world case studies, developed in collaboration with authoritative entities such as NASA and the Boeing Company.
8. Aligning Controlled Vocabularies for Enabling Semantic Matching in a Distributed Knowledge Management System. Morshed, Ahsan (January 2010)
The underlying idea of the Semantic Web is that web content should be expressed not only in natural language but also in a language that can be unambiguously understood, interpreted and used by software agents, thus permitting them to find, share and integrate information more easily. The central notion of the Semantic Web's syntax is that of ontologies: shared vocabularies providing taxonomies of concepts, objects and relationships between them, which describe particular domains of knowledge. A vocabulary stores words, synonyms, word sense definitions (i.e. glosses), and relations between word senses and concepts; such a vocabulary is generally referred to as a Controlled Vocabulary (CV) if the choice or selection of terms is done by domain specialists. A facet is a distinct and dimensional feature of a concept or a term that allows a taxonomy, ontology or CV to be viewed or ordered in multiple ways, rather than in a single way. A facet is clearly defined, mutually exclusive, and composed of collectively exhaustive properties or characteristics of a domain. For example, a collection of rice might be represented using a name facet, a place facet, etc. This thesis presents a methodology for producing mappings between Controlled Vocabularies, based on a technique called "Hidden Semantic Matching". The word "Hidden" indicates that it does not rely on any sort of externally provided background knowledge: the only knowledge exploited comes from the "semantic context" of the CVs being matched. We build a facet for each concept of these CVs, considering more general concepts (broader terms), less general concepts (narrower terms) and related concepts (related terms). Together these form a concept facet (CF), which is then used to boost the matching process.
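To make the last step concrete, a possible reading in set notation (the symbols BT, NT, RT and the similarity measure below are illustrative, not the thesis's exact formulation): for a concept $c$ of a CV one collects
\[
\mathrm{CF}(c) \;=\; \mathrm{BT}(c) \,\cup\, \mathrm{NT}(c) \,\cup\, \mathrm{RT}(c),
\]
the set of its broader, narrower and related terms, and two candidate concepts from the CVs being matched can then be compared, for instance, by the overlap $|\mathrm{CF}(c_1) \cap \mathrm{CF}(c_2)| / |\mathrm{CF}(c_1) \cup \mathrm{CF}(c_2)|$ of their facets, which is one way such context can boost a purely label-based matcher.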
9. Constructivisation through Induction and Conservation. Fellin, Giulio (26 August 2022)
The topic of this thesis lies at the intersection of proof theory and algebraic logic. The main object of discussion, constructive reasoning, was introduced at the beginning of the 20th century by Brouwer, who followed Kant's explanation of the human intuition of spatial forms and time points: these are constructed step by step in a finite process by certain rules, mimicking constructions with straightedge and compass and the construction of the natural numbers, respectively.
The aim of the present thesis is to show how classical reasoning, which admits some forms of indirect reasoning, can be made more constructive. The central tools that we use are induction principles: methods that capture infinite collections of objects by considering their process of generation instead of the whole class. We start by studying the interplay between certain structures that satisfy induction and the calculi for some non-classical logics. We then use inductive methods to prove a few conservation theorems, which contribute to answering the question of which parts of classical logic and mathematics can be made constructive.
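A standard example of the kind of induction principle meant here, recalled for orientation rather than quoted from the thesis, is induction over a well-founded relation $\prec$: to prove that a property $\varphi$ holds of every element, it suffices to show that it holds of $x$ whenever it holds of everything generated before $x$,
\[
\forall x \,\bigl( \forall y \,( y \prec x \rightarrow \varphi(y)) \rightarrow \varphi(x) \bigr) \;\longrightarrow\; \forall x\, \varphi(x).
\]
Induction principles of this shape replace an appeal to the completed infinite collection by an appeal to its process of generation.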
10. Proof-Theoretical Aspects of Well Quasi-Orders and Phase Transitions in Arithmetical Provability. Buriola, Gabriele (11 April 2024)
In this thesis we study the concept of well quasi-order, originally developed in order theory but nowadays transversal to many areas, in the overall context of proof theory, more precisely in reverse mathematics and constructive mathematics. Reverse mathematics, proposed by Harvey Friedman, aims to classify the strength of mathematical theorems by identifying the required axioms. In this framework, we focus on two classical results about well quasi-orders: Kruskal's theorem and Higman's lemma. Concerning the former, we compute the proof-theoretic ordinals of two different versions, establishing their non-equivalence. Regarding the latter, we study, over the base theory RCA0, the relations between Higman's original achievements and some versions of Kruskal's theorem. As for constructive mathematics, which goes back to Brouwer's reflections and rejects the law of excluded middle in favour of more perspicuous reasoning principles, we scrutinize the main definitions of well quasi-order, establishing their constructive nature; moreover, a new constructive proof of Higman's lemma is proposed, paving the way for a systematic analysis of well quasi-orders within constructive means. On top of all this we consider a peculiar phenomenon in proof theory, namely phase transitions in provability. Building upon previous results about provability in Peano Arithmetic, we locate the threshold separating provability and unprovability for statements regarding Goodstein sequences, Hydra games and Ackermannian functions.
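For reference, the classical notion at stake (one of the several equivalent formulations whose constructive content the thesis analyses): a quasi-order $(Q, \leq)$ is a well quasi-order if every infinite sequence $q_0, q_1, q_2, \ldots$ of elements of $Q$ is good, that is,
\[
\exists\, i < j \quad q_i \leq q_j.
\]
Higman's lemma states that finite sequences over a well quasi-ordered alphabet, under the embedding order, again form a well quasi-order, and Kruskal's theorem extends this to finite trees under homeomorphic embedding.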