  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Justifications dans les approches ASP basées sur les règles : application au backjumping dans le solveur ASPeRiX / Justifications in rule-based ASP computations : application to backjumping in the ASPeRiX solver

Beatrix, Christopher 03 November 2016 (has links)
Answer Set Programming (ASP) is a formalism for representing knowledge in Artificial Intelligence by means of a first-order logic program that may contain default negations. In recent years, several efficient solvers have been proposed to compute the solutions of an ASP program, which are called answer sets. We are particularly interested here in the ASPeRiX solver, which instantiates first-order rules on the fly during the computation of answer sets by applying forward chaining on rules from previously determined literals. The study of this solver leads us to consider the notion of justification within a rule-based approach to computing answer sets. Justifications make it possible to explain why certain properties hold. Among them, we focus in particular on failure reasons, which justify why some branches of the search tree do not lead to an answer set. This leads us to implement a version of ASPeRiX with backjumping, which avoids systematically exploring all branches of the search tree by jumping back to the choice point related to the failure, using the information provided by the failure reasons.
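The two ingredients described above, forward chaining of ground rules from already-derived literals and backjumping guided by recorded failure reasons, can be pictured with a minimal sketch; the rule encoding and fixpoint loop below are illustrative assumptions and do not reflect ASPeRiX's actual data structures or its handling of choice points.

    # Minimal sketch of forward chaining over ground rules (illustrative only;
    # a real rule-based ASP computation also manages choice points for default
    # negation and records failure reasons to drive backjumping).
    def forward_chain(rules, facts):
        """rules: list of (positive_body, negative_body, head) with frozenset bodies."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body_pos, body_neg, head in rules:
                if head not in derived and body_pos <= derived and not (body_neg & derived):
                    derived.add(head)
                    changed = True
        return derived

    # Example: the rule  a :- b, not c.  together with the fact b derives a.
    rules = [(frozenset({"b"}), frozenset({"c"}), "a")]
    print(forward_chain(rules, {"b"}))   # {'b', 'a'}

In the backjumping version described in the thesis, a failed branch additionally carries a reason (the choices responsible for the failure), so the search can jump directly back to the relevant choice point instead of backtracking chronologically.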
72

Efficient equational reasoning for the Inst-Gen Framework

Sticksel, Christoph January 2011 (has links)
We can classify several quite different calculi for automated reasoning in first-order logic as instantiation-based methods (IMs). Broadly speaking, unlike in traditional calculi such as resolution, where the first-order satisfiability problem is tackled by deriving logical conclusions, IMs attempt to reduce the first-order satisfiability problem to propositional satisfiability by intelligently instantiating clauses. The Inst-Gen-Eq method is an instantiation-based calculus which is complete for first-order clause logic modulo equality. Its distinctive feature is that it combines first-order reasoning with efficient ground satisfiability checking, which is delegated in a modular way to any state-of-the-art ground solver for satisfiability modulo theories (SMT). The first-order reasoning modulo equality employs a superposition-style calculus which generates the instances needed by the ground solver to refine a model of a ground abstraction or to witness unsatisfiability. The thesis addresses the main issue in the Inst-Gen-Eq method, namely efficient extraction of instances, while providing powerful redundancy elimination techniques. To that end we introduce a novel labelled unit superposition calculus with sets, AND/OR trees and ordered binary decision diagrams (OBDDs) as labels. The different label structures permit redundancy elimination each to a different extent. We prove completeness of redundancy elimination from labels and further integrate simplification inferences based on term rewriting. All presented approaches, in particular the three labelled calculi, are implemented in the iProver-Eq system and evaluated on standard benchmark problems.
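As a rough illustration of the instantiation-based idea, the sketch below grounds a clause set by mapping every variable to a single constant and checks the resulting propositional abstraction by brute force; a real system such as iProver-Eq delegates that check to an SMT solver and uses the returned model to drive instance generation, and the data representation here is an assumption made purely for illustration.

    from itertools import product

    def bottom_abstraction(clauses):
        """Ground abstraction: map every variable (uppercase name) to '_bot'."""
        ground = lambda t: "_bot" if t[0].isupper() else t
        return [frozenset((sign, (pred, tuple(map(ground, args))))
                          for sign, (pred, args) in clause)
                for clause in clauses]

    def ground_sat(clauses):
        """Brute-force satisfiability check of the abstraction; an actual
        implementation delegates this step to a ground SMT solver."""
        atoms = sorted({atom for cl in clauses for _, atom in cl})
        for values in product([False, True], repeat=len(atoms)):
            model = dict(zip(atoms, values))
            if all(any(model[a] == sign for sign, a in cl) for cl in clauses):
                return model
        return None

    # Clauses: {P(X)} and {~P(a), Q(a)}.  The abstraction is satisfiable, so an
    # Inst-Gen-style loop would next look for conflicting instances to add.
    cls = [{(True, ("P", ("X",)))},
           {(False, ("P", ("a",))), (True, ("Q", ("a",)))}]
    print(ground_sat(bottom_abstraction(cls)) is not None)   # True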
73

A Flexible, Natural Deduction, Automated Reasoner for Quick Deployment of Non-Classical Logic

Mukhopadhyay, Trisha 20 March 2019 (has links)
Automated Theorem Provers (ATPs) are software programs which carry out inferences over logico-mathematical systems, often with the goal of finding proofs of some given theorem. ATP systems are enormously powerful computer programs, capable of solving immensely difficult problems. Currently, many automated theorem provers exist, such as E, Vampire, SPASS, ACL2 and Coq. However, the available theorem provers share some common problems: (1) Current ATP systems tend not to find proofs entirely on their own; they need help from human experts to supply lemmas, guide the proof, etc. (2) No single proof system provides a fully automated platform for both First Order Logic (FOL) and Higher Order Logic (HOL). (3) Finally, current proof systems do not offer an easy way to quickly deploy and reason over new logical systems which a logic researcher may want to test. In response to these problems, I introduce the MATR framework. MATR is a platform-independent, codelet-based (independently operating processes) proof system with an easy-to-use Graphical User Interface (GUI), where multiple codelets can be selected based on the formal system desired. MATR provides a platform for different proof strategies, such as deduction and backward reasoning, along with different formal systems such as non-classical logics. It enables users to design their own proof system by selecting from the list of codelets without needing to write an ATP from scratch.
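The codelet architecture described above can be pictured as a set of small, independent inference processes that each scan the current derivation and contribute new formulas; the classes and formula encoding below are hypothetical and are not MATR's actual interfaces.

    # Hypothetical sketch of codelet-style inference: each codelet inspects the
    # set of derived formulas and returns any new formulas it can contribute.
    class AndElimination:
        """Natural-deduction codelet: from ('and', A, B) derive A and B."""
        def run(self, derived):
            new = set()
            for f in derived:
                if isinstance(f, tuple) and f[0] == "and":
                    new.update(f[1:])
            return new - derived

    class ModusPonens:
        """Codelet: from A and ('implies', A, B) derive B."""
        def run(self, derived):
            return {f[2] for f in derived
                    if isinstance(f, tuple) and f[0] == "implies" and f[1] in derived} - derived

    def saturate(codelets, premises):
        """Run the selected codelets until no codelet adds anything new."""
        derived = set(premises)
        while True:
            new = set().union(*(c.run(derived) for c in codelets))
            if not new:
                return derived
            derived |= new

    premises = {("and", "p", ("implies", "p", "q"))}
    print(saturate([AndElimination(), ModusPonens()], premises))
    # derives 'p', ('implies', 'p', 'q') and finally 'q'

Swapping the list of codelets passed to saturate is the analogue of selecting a different proof system, which is the flexibility the framework aims to provide.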
74

Improved First Order Formulation for Buckling Analysis of Functionally Graded Beams

Vallejos, Augusto, Ayala, Shammely, Arciniega, Roman 30 September 2020 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publishing house where it has been published. / In this research, an improved first-order formulation is presented to study the critical buckling load of functionally graded beams. The formulation has five independent variables, compared with the three of the Timoshenko theory. The Trefftz criterion, with incremental and fundamental states, is used to define the stability analysis. Virtual work statements are derived for the finite element model, in which the field variables are interpolated by Lagrange polynomials. The numerical results are compared and verified against other formulations found in the literature. Parametric studies are also carried out on the buckling behavior for different slenderness ratios, power-law indices and boundary conditions. Applications of the model to functionally graded materials show the validity of the present approach.
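For context, the power-law indices varied in the parametric study typically enter through the standard power-law gradation of the material properties across the beam thickness; the formula below is the usual textbook rule (with E_c and E_m the moduli of the two constituents, h the thickness, and n the power-law index), not an equation quoted from the paper.

    E(z) = (E_c - E_m)\left(\frac{z}{h} + \frac{1}{2}\right)^{n} + E_m,
    \qquad -\frac{h}{2} \le z \le \frac{h}{2}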
75

Alloy4PV : un Framework pour la Vérification de Procédés Métiers / Alloy4PV : a Framework for Business Process Verification

Laurent, Yoann 15 January 2015 (has links)
In this thesis, we first carried out a study of the state of the art across different process domains (business, software, military, medical, etc.) in order to identify and categorize the critical properties that can be verified on any process model. This study resulted in a library of generic and configurable properties. We then defined a framework for process verification called Alloy4PV, which uses a subset of UML 2 Activity Diagrams as its process modeling language. For process verification, we (1) defined a formal model of UML 2 Activity Diagrams in first-order logic, based on the fUML semantics (the OMG standard that gives a semantics to a subset of UML 2), (2) implemented this formalization in the Alloy language in order to perform bounded model checking, and (3) automated, in a graphical tool integrated into Eclipse, the ability to express and verify properties over all the perspectives of a process model. This contribution resulted in a tool that is under evaluation by our MerGE project partners and in five publications in conference proceedings.
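To give a flavour of the kind of property such a framework checks, the sketch below explores, up to a bound, the markings of a toy token game over an activity-diagram-like graph and asks whether the final node is reachable; the graph encoding and the one-node-per-step firing rule are simplifying assumptions made for illustration and are far cruder than the fUML semantics formalized in Alloy4PV.

    from collections import deque

    # Toy bounded exploration of a token game on an activity-diagram-like graph.
    # A marking is the frozenset of nodes currently holding a control token.
    def reachable_markings(initial, outgoing, bound):
        """Explore markings reachable within `bound` steps, firing one node at a time."""
        start = frozenset(initial)
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            marking, depth = frontier.popleft()
            if depth == bound:
                continue
            for node in marking:
                nxt = frozenset((marking - {node}) | set(outgoing.get(node, ())))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return seen

    # initial -> task -> final
    outgoing = {"initial": ["task"], "task": ["final"], "final": []}
    marks = reachable_markings({"initial"}, outgoing, bound=5)
    print(any("final" in m for m in marks))   # True: the final node is reachable

Bounded model checking, as performed by Alloy, answers the same kind of question symbolically within a user-supplied bound rather than by explicit enumeration.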
76

Buckling analysis of laminated composite beams by using an improved first order formulation

Ayala, Shammely, Vallejos, Augusto, Arciniega, Roman 01 January 2021 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publishing house where it has been published. / In this work, a finite element model based on an improved first-order formulation (IFSDT) is developed to analyze the buckling phenomenon in laminated composite beams. The formulation has five independent variables and takes thickness stretching into account. Three-dimensional constitutive equations are employed to define the material properties. The Trefftz criterion is used for the stability analysis. The finite element model is derived from the principle of virtual work, with high-order Lagrange polynomials to interpolate the field variables and to prevent shear locking. Numerical results are compared and validated against those available in the literature. Furthermore, a parametric study is presented.
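Once a finite element formulation of this kind is assembled, the critical buckling loads are typically obtained from a generalized eigenvalue problem of the form (K - lambda*Kg) x = 0, where K is the material stiffness matrix and Kg the geometric stiffness matrix. The sketch below shows only that solution step; the tiny matrices are placeholders, not results from the paper.

    import numpy as np
    from scipy.linalg import eigh

    # Illustrative eigen-buckling step on placeholder matrices (a real model
    # would assemble K and Kg from the element-level virtual work statements).
    K  = np.array([[ 4.0, -1.0], [-1.0,  3.0]])   # toy material stiffness matrix
    Kg = np.array([[ 1.0,  0.0], [ 0.0,  0.5]])   # toy geometric stiffness matrix
    lams, modes = eigh(K, Kg)      # generalized symmetric eigenproblem K x = lam Kg x
    print(lams.min())              # smallest multiplier = critical buckling load factor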
77

Linearization and first-order expansion of the rocking motion of rigid blocks stepping on viscoelastic foundation

Palmeri, Alessandro, Makris, N. January 2008 (has links)
In structural mechanics there are several occasions where a linearized formulation of the original nonlinear problem reduces considerably the computational effort for the response analysis. In a broader sense, a linearized formulation can be viewed as a first-order expansion of the dynamic equilibrium of the system about a 'static' configuration; yet caution should be exercised when identifying the 'correct' static configuration. This paper uses as a case study the rocking response of a rigid block stepping on viscoelastic supports, whose non-linear dynamics is the subject of the companion paper, and elaborates on the challenge of identifying the most appropriate static configuration around which a first-order expansion will produce the most dependable results in each regime of motion. For the regime when the heel of the block separates, a revised set of linearized equations is presented, which is an improvement to the unconservative equations published previously in the literature. The associated eigenvalues demonstrate that the characteristics of the foundation do not affect the rocking motion of the block once the heel separates.
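For readers unfamiliar with rocking dynamics, the classical linearization for a slender rigid block rocking on a rigid base, which the paper refines for viscoelastic supports, reads as follows; this is the standard textbook form, not an equation reproduced from the paper.

    \ddot{\theta}(t) = p^{2}\left[\theta(t) - \alpha\,\operatorname{sgn}\theta(t)\right],
    \qquad p^{2} = \frac{m g R}{I_{O}}

Here alpha is the block's slenderness angle, R the distance from the pivot to the centre of mass, and I_O the moment of inertia about the pivot; the viscoelastic-foundation case adds stiffness and damping terms contributed by the supports.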
78

Expressiveness and Succinctness of First-Order Logic on Finite Words

Weis, Philipp P 13 May 2011 (has links)
Expressiveness, and more recently, succinctness, are two central concerns of finite model theory and descriptive complexity theory. Succinctness is particularly interesting because it is closely related to the complexity-theoretic trade-off between parallel time and the amount of hardware. We develop new bounds on the expressiveness and succinctness of first-order logic with two variables on finite words, present a related result about the complexity of the satisfiability problem for this logic, and explore a new approach to the generalized star-height problem from the perspective of logical expressiveness. We give a complete characterization of the expressive power of first-order logic with two variables on finite words. Our main tool for this investigation is the classical Ehrenfeucht-Fraïssé game. Using our new characterization, we prove that the quantifier alternation hierarchy for this logic is strict, settling the main remaining open question about the expressiveness of this logic. A second important question about first-order logic with two variables on finite words concerns the complexity of the satisfiability problem for this logic. Previously it was only known that this problem is NP-hard and in NEXP. We prove a polynomial-size small-model property for this logic, leading to an NP algorithm and thus proving that the satisfiability problem for this logic is NP-complete. Finally, we investigate one of the most baffling open problems in formal language theory: the generalized star-height problem. As of today, we do not even know whether there exists a regular language that has generalized star-height larger than 1. This problem can be phrased as an expressiveness question for first-order logic with a restricted transitive closure operator, and thus allows us to use established tools from finite model theory to attack the generalized star-height problem. Besides our contribution to formalizing this problem in a purely logical form, we have developed several example languages as candidates for languages of generalized star-height at least 2. While some of them still stand as promising candidates, for others we present new results proving that they only have generalized star-height 1.
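As a small illustration of the star-height notion discussed above, the sketch below computes the ordinary star height of a regular expression over a hypothetical AST; the generalized star-height problem studied in the thesis additionally allows complement in expressions, which is precisely what makes it so much harder.

    # Ordinary (not generalized) star height of a regular expression, computed
    # over a small hypothetical AST; generalized star height also permits the
    # complement operator, and whether it can exceed 1 is open.
    def star_height(expr):
        kind = expr[0]
        if kind in ("empty", "epsilon", "letter"):
            return 0
        if kind in ("concat", "union"):
            return max(star_height(expr[1]), star_height(expr[2]))
        if kind == "star":
            return 1 + star_height(expr[1])
        raise ValueError(f"unknown node {kind}")

    # (a*b)* has ordinary star height 2
    expr = ("star", ("concat", ("star", ("letter", "a")), ("letter", "b")))
    print(star_height(expr))   # 2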
79

Strong conceptual completeness and various stability theoretic results in continuous model theory

Albert, Jean-Martin January 2010 (has links)
In this thesis we prove a strong conceptual completeness result for first-order continuous logic. Strong conceptual completeness was proved in 1987 by Michael Makkai for classical first-order logic, and states that it is possible to recover a first-order theory T by looking at functors originating from the category Mod(T) of its models. We then give a brief account of simple theories in continuous logic, and give a proof that the characterization of simple theories using dividing holds in continuous structures. These results are a specialization of well established results for thick cats which appear in [Ben03b] and in [Ben03a]. Finally, we turn to the study of non-archimedean Banach spaces over non-trivially valued fields. We give a natural language and axioms to describe them, and show that they admit quantifier elimination and are ℵ0-stable. We also show that the theory of non-archimedean Banach spaces has only one ℵ1-saturated model in any cardinality. / Thesis / Doctor of Philosophy (PhD)
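For context, "non-archimedean" here refers to normed spaces over a valued field in which the usual triangle inequality is strengthened to the ultrametric (strong triangle) inequality; this is the standard definition, not a statement taken from the thesis.

    \lVert x + y \rVert \le \max\bigl(\lVert x \rVert,\, \lVert y \rVert\bigr),
    \qquad |a + b| \le \max\bigl(|a|,\, |b|\bigr)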
80

Run Length Texture Analysis of Thoracolumbar Fascia Sonographic Images: A Comparison of Subjects With and Without Low Back Pain (LBP)

Al Khafaji, Ghaidaa Ghanim 06 July 2023 (has links)
Low back pain is one of the most common and disabling musculoskeletal disorders worldwide and the third most common reason for surgery in the United States. The lower back, or lumbar region, supports most of the body's weight; it controls spinal movement and stability through the interaction between bones, nerves, muscles, ligaments, and fascia within the lumbar region. Any disorder of those tissues could cause low back pain (LBP); emerging evidence indicates that the thoracolumbar fascia (TLF) is the lower back's most pain-sensitive soft tissue structure. TLF consists of dense connective tissue separated by loose connective tissue, allowing the TLF layers to glide past one another during torso movement. A series of foundational studies found that patients enduring long-term low back pain have different TLF structures than those without LBP. Injuries may result in adhesions and fibrosis, which may cause adjacent dense connective tissue layers to lose independent motion, limiting movement and causing pain. LBP is diagnosed by investigating the patient's medical history to identify symptoms and then examining the patient to determine the cause of the pain. If the pain persists after diagnosis and treatment, further investigation is required; an ultrasound scan is used as the next step. Ultrasound (US) imaging is a non-invasive and instantaneous method to evaluate soft, connective tissue structures such as muscles, tendons, ligaments, and fascia. Even though measuring echo intensity helps evaluate the soft tissues, this method still has limitations in diagnosing LBP; 90% of all LBP patients are diagnosed with non-specific LBP, referred to as pain with no definitive cause. An in-depth investigation of US images could potentially provide more specificity in identifying sources of LBP. By providing information about soft tissue structure, texture analysis could increase US images' diagnostic power. The texture of an ultrasound image is the variation of pixel intensities throughout the region of interest (ROI) that produces different patterns; texture analysis is an approach that quantifies the characteristic variation of pixel intensities within the ROI to describe tissue morphological characteristics. First-order texture analysis, second-order texture analysis, and grey-level run-length texture analysis are types of analysis that can be applied to quantify parameters that describe the features of the texture; the grey-level run-length analysis is usually conducted in four directions across the texture. This study has four objectives. The first objective is to use first-order and second-order analysis to determine texture parameters and determine whether those parameters can differentiate between individuals with and without LBP. The second objective is to use grey-level run-length analysis to quantify texture parameters in four directions (0°, 45°, 90°, 135°) and examine whether those parameters can differentiate between individuals with and without LBP. The third objective is to determine the correlation between the first-order, second-order, and run-length parameters. The fourth objective is to explore how first-order, second-order, and grey-level run-length parameters are affected by US machine settings. A custom-written MATLAB program was developed to quantify first- and second-order texture parameters and grey-level run-length parameters. Using JMP software, each parameter was statistically compared between individuals with and without LBP.
Among nine first- and second-order texture parameters, four showed statistically significant differences between individuals with and without LBP. Among 44 run-length parameters, 9 showed statistically significant differences between individuals with and without LBP. The current study also revealed some strong correlations between the first-order, second-order, and run-length parameters; it also shows that the US machine settings have only minor effects on the three types of parameters. Although the present study was conducted on a relatively small sample size, the results indicate that one direction of grey-level run-length analysis, together with first- and second-order texture analysis, can differentiate between people with and without LBP. / Master of Science / Low back pain (LBP) is one of the most common and disabling musculoskeletal disorders worldwide and the third most common reason for surgery in the United States. Due to LBP's effect on mobility, it is one of the leading causes of absence from work, early retirement, and long-term disability payments. The thoracolumbar fascia (TLF), a connective tissue that stabilizes the trunk, pelvis, and spine, is considered the tissue most sensitive to LBP. LBP diagnosis is based on the patient's medical history to identify symptoms and then on an examination to determine the cause. If the pain persists after diagnosis and treatment, imaging is recommended as the next step. Ultrasound (US) imaging produces a cross-sectional image of the structure and has been used to compare TLF structure in people with and without LBP. Additional analyses must be done to increase the ability of US images to diagnose LBP. In the current project, three types of analysis were performed on the US images: first-order, second-order, and grey-level run-length analyses were used to determine parameters for the images of the two groups of people; selected parameters were found to distinguish between people with and without LBP.
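As an illustration of the grey-level run-length parameters the study quantifies, the sketch below builds a run-length matrix in the 0° direction and computes one classical feature, short-run emphasis; the function names and the toy image are assumptions made for illustration and do not reproduce the author's MATLAB program.

    import numpy as np

    def glrlm_0deg(img, levels):
        """Grey-level run-length matrix along rows (the 0-degree direction).

        Entry [g, r-1] counts runs of length r of grey level g, where a run is
        a maximal sequence of consecutive equal-valued pixels in a row.
        """
        max_run = img.shape[1]
        rlm = np.zeros((levels, max_run), dtype=int)
        for row in img:
            run_val, run_len = row[0], 1
            for px in row[1:]:
                if px == run_val:
                    run_len += 1
                else:
                    rlm[run_val, run_len - 1] += 1
                    run_val, run_len = px, 1
            rlm[run_val, run_len - 1] += 1
        return rlm

    def short_run_emphasis(rlm):
        """SRE = (1/N) * sum_{g,r} RLM[g,r] / r^2, with N the total number of runs."""
        runs = np.arange(1, rlm.shape[1] + 1)
        return (rlm / runs**2).sum() / rlm.sum()

    img = np.array([[0, 0, 1, 1, 1],
                    [2, 2, 2, 2, 0]])
    rlm = glrlm_0deg(img, levels=3)
    print(rlm)                      # run counts per grey level and run length
    print(short_run_emphasis(rlm))  # higher values indicate finer texture

The 45°, 90° and 135° matrices are built the same way by tracing runs along the corresponding pixel directions, and each yields its own set of run-length features.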
