21

Managing and Consuming Completeness Information for RDF Data Sources

Darari, Fariz 04 July 2017 (has links) (PDF)
The ever-increasing amount of Semantic Web data raises the question: how complete is the data? Though data on the Semantic Web is generally incomplete, many parts of it are in fact complete, such as the children of Barack Obama or the crew of Apollo 11. This thesis studies how to manage and consume completeness information about Semantic Web data. In particular, we first discuss how completeness information can guarantee the completeness of query answering. Next, we propose optimization techniques for completeness reasoning and conduct experimental evaluations showing the feasibility of our approaches. We also provide a technique to check the soundness of queries with negation via reduction to query-completeness checking. We further enrich completeness information with timestamps, enabling query answers to be checked for completeness up to a given point in time. We then introduce two demonstrators, CORNER and COOL-WD, showing how our completeness framework can be realized. Finally, we investigate an automated method to generate completeness statements from text on the Web via relation cardinality extraction.
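The core idea — completeness statements guaranteeing complete query answers — can be sketched in a few lines. This is a minimal illustration, not the thesis's formal framework; the entities and the single-pattern query shape are assumptions for the example.

```python
# Minimal sketch: a completeness statement asserts "all triples matching
# this (subject, predicate) pattern are present in the graph". A query whose
# pattern is covered by such a statement is guaranteed a complete answer.
# Entity and property names are illustrative.

graph = {
    ("Apollo11", "crew", "Armstrong"),
    ("Apollo11", "crew", "Aldrin"),
    ("Apollo11", "crew", "Collins"),
}

# Completeness statements: (subject, predicate) pairs asserted complete.
complete = {("Apollo11", "crew")}

def answer_is_complete(subject, predicate):
    """A single-pattern query is complete iff a statement covers its pattern."""
    return (subject, predicate) in complete

assert answer_is_complete("Apollo11", "crew")        # guaranteed complete
assert not answer_is_complete("Apollo11", "backup")  # no guarantee either way
```

In the thesis's setting the patterns are richer (full SPARQL basic graph patterns) and the reasoning is correspondingly more involved; the sketch only shows the shape of the guarantee.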
23

An algebraic framework for reasoning about security

Rajaona, Solofomampionona Fortunat 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2013.

Stepwise development of a program using refinement ensures that the program correctly implements its requirements: the specification of a system is "refined" incrementally to derive an implementable program. The programming space includes both specifications and implementable code, and is ordered by the refinement relation, which obeys certain mathematical laws. Morgan proposed a modification of this "classical" refinement for systems where the confidentiality of some information is critical: programs distinguish between "hidden" and "visible" variables, and refinement must satisfy a security requirement. We first review refinement for classical programs and present Morgan's approach to ignorance-preserving refinement. We introduce the Shadow Semantics, a programming model that captures the essential properties of classical refinement while preserving the ignorance of hidden variables. The model invalidates some classical laws that do not preserve security, while satisfying new ones. Our approach is algebraic: we propose algebraic laws describing the properties of ignorance-preserving refinement, completing previously proposed laws, and we show that these laws are sound in the Shadow Semantics. Finally, following the approach of Hoare and He for classical programs, we give a completeness result for the program algebra of ignorance-preserving refinement.
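The tension between classical refinement and confidentiality that motivates the Shadow Semantics can be illustrated with a toy model — our own simplification, not Morgan's formalism. A program maps a hidden value to a set of possible visible outputs; classical refinement shrinks that set; an attacker's "ignorance" is the set of hidden values still consistent with an observation.

```python
# Toy illustration (not the formal Shadow Semantics): programs map a hidden
# value h to the set of visible outputs they may produce. All names are
# hypothetical.

H = [0, 1]  # possible hidden values

def spec(h):
    return {0, 1}        # output chosen nondeterministically: reveals nothing

def leaky(h):
    return {h}           # fewer outcomes, hence a classical refinement -- but leaks h

def refines(impl, abstr):
    """Classical refinement: every behaviour of impl is allowed by abstr."""
    return all(impl(h) <= abstr(h) for h in H)

def ignorance(prog, observed):
    """Hidden values an attacker still considers possible after seeing `observed`."""
    return {h for h in H if observed in prog(h)}

assert refines(leaky, spec)              # classically a valid refinement
assert ignorance(spec, 0) == {0, 1}      # spec keeps h fully hidden
assert ignorance(leaky, 0) == {0}        # the refinement collapsed the ignorance
```

This is exactly the kind of classically valid but security-breaking step that the modified refinement relation is designed to rule out.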
24

On the Power and Universality of Biologically-inspired Models of Computation

Ivanov, Sergiu 23 June 2015 (has links)
The present thesis considers the problems of computational completeness and universality for several biologically-inspired models of computation: insertion-deletion systems, networks of evolutionary processors, and multiset rewriting systems. The results fall into two major categories: the study of the expressive power of the operations of insertion and deletion, with and without control, and the construction of universal multiset rewriting systems of low descriptional complexity. Insertion and deletion operations consist in adding or removing a subword from a given string if this subword is surrounded by given contexts. The motivation for studying these operations comes from biology, as well as from linguistics and the theory of formal languages.

In the first part of the present work we focus on insertion-deletion systems closely related to RNA editing, which essentially consists in inserting or deleting fragments of RNA molecules. An important feature of RNA editing is that the locus at which the operations are carried out is determined by certain sequences of nucleotides, always situated on the same side of the editing site. In terms of formal insertion and deletion, this phenomenon is modelled by rules that can only check their context on one side and not on the other. We show that allowing one-symbol insertion and deletion rules to check a two-symbol left context enables them to generate all regular languages; moreover, we prove that allowing longer insertion and deletion contexts does not increase the computational power. We further consider insertion-deletion systems with additional control over rule application and show that computational completeness can be achieved by systems with very small rules. The motivation for studying insertion-deletion systems also comes from the domain of computer security, for whose purposes a special kind of insertion-deletion system called leftist grammars was introduced. In this work we propose a novel graphical instrument for visual analysis of the dynamics of such systems.

The second part of the thesis is concerned with the universality problem, which consists in finding a fixed element able to simulate the work of any other computing device. We start with networks of evolutionary processors (NEPs), a computational model inspired by the way genetic information is processed in the living cell, and construct universal NEPs with very few rules. We then focus on multiset rewriting systems, which model the chemical processes running in the biological cell. For historical reasons, we formulate our results in terms of Petri nets: we construct a series of universal Petri nets and give several techniques for reducing the number of places, transitions, and inhibitor arcs, as well as the maximal transition degree. Some of these techniques rely on a generalisation of conventional register machines, proposed in this thesis, which allows multiple register checks and operations to be performed in a single state transition.
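The one-sided-context insertion operation described above can be sketched directly — a hypothetical toy, not the thesis's exact formalism, using a two-symbol left context as in the stated regular-language result.

```python
# Hypothetical toy of left-context insertion: a rule (left, ins) inserts the
# string `ins` immediately after every occurrence of the context `left`.

def apply_insertions(word, rules):
    """All words reachable from `word` in one insertion step."""
    out = set()
    for left, ins in rules:
        for i in range(len(word) - len(left) + 1):
            if word[i:i + len(left)] == left:
                j = i + len(left)
                out.add(word[:j] + ins + word[j:])
    return out

rules = [("ab", "ab")]   # two-symbol left context, one insertion string
words = {"ab"}           # axiom
for _ in range(2):       # two derivation steps
    words |= set().union(*(apply_insertions(w, rules) for w in words))

assert "abab" in words and "ababab" in words   # the derivation grows (ab)+
```

This particular rule set generates the regular language (ab)+; the thesis's result concerns which rule sizes and context lengths suffice to generate every regular language.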
25

CoDEL - A Relationally Complete Language for Database Evolution

Herrmann, Kai, Voigt, Hannes, Behrend, Andreas, Lehner, Wolfgang 02 June 2016 (has links) (PDF)
Software developers adapt to the fast-moving nature of software systems with agile development techniques, but database developers lack the tools and concepts to keep pace: data already existing in a running product must be evolved accordingly, usually by manually written SQL scripts. A promising approach in database research is a declarative database evolution language, which couples schema and data evolution into intuitive operations. Existing database evolution languages focus on usability but do not aim for completeness, although completeness is an indispensable prerequisite for reasonable database evolution, since its absence forces complex and error-prone workarounds. We argue that relational completeness is the appropriate degree of expressiveness for a database evolution language. Building upon an existing language, we introduce CoDEL: we define its semantics using relational algebra, propose a syntax, and show its relational completeness.
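CoDEL's concrete syntax is not reproduced in this abstract. As a hedged illustration of what "coupling schema and data evolution into intuitive operations" means, here is a toy table-partition step in Python; the operation name, table, and predicate are invented for the example.

```python
# Hypothetical sketch of a coupled schema-and-data evolution step in the
# spirit of a declarative evolution language (not CoDEL's actual syntax):
# splitting one table into two by a predicate moves both the schema object
# and its rows in a single operation, instead of hand-written SQL scripts.

def partition(table, predicate):
    """Split rows into (matching, rest); column layout carries over unchanged."""
    hot = [row for row in table if predicate(row)]
    cold = [row for row in table if not predicate(row)]
    return hot, cold

orders = [
    {"id": 1, "status": "open"},
    {"id": 2, "status": "done"},
    {"id": 3, "status": "open"},
]

open_orders, closed_orders = partition(orders, lambda r: r["status"] == "open")
assert [r["id"] for r in open_orders] == [1, 3]
assert [r["id"] for r in closed_orders] == [2]
```

Relational completeness then amounts to the claim that every evolution expressible in relational algebra over the old schema can be written with such operations.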
26

Power functions and exponentials in o-minimal expansions of fields

Foster, T. D. January 2010 (has links)
The principal focus of this thesis is the study of the real numbers regarded as a structure endowed with its usual addition and multiplication and the operations of raising to real powers. For our first main result we prove that any statement in the language of this structure is equivalent to an existential statement, and furthermore that this existential statement can be chosen independently of the concrete interpretations of the real power functions in the statement; i.e. one existential statement will work for any choice of real power functions. This result we call uniform model completeness. For the second main result we introduce the first order theory of raising to an infinite power, which can be seen as the theory of a class of real closed fields, each expanded by a power function with infinite exponent. We note that it follows from the first main theorem that this theory is model-complete, furthermore we prove that it is decidable if and only if the theory of the real field with the exponential function is decidable. For the final main theorem we consider the problem of expanding an arbitrary o-minimal expansion of a field by a non-trivial exponential function whilst preserving o-minimality. We show that this can be done under the assumption that the structure already defines exponentiation on a bounded interval, and a further assumption about the prime model of the structure.
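The notion the first result strengthens can be stated precisely. A theory is model complete when every formula is, modulo the theory, equivalent to an existential one; the thesis's "uniform" version asks for one existential formula that works for every interpretation of the power functions. A standard formulation, in our notation rather than necessarily the thesis's:

```latex
T \text{ is model complete} \iff \text{for every formula } \varphi(\bar{x})
\text{ there is a quantifier-free } \psi(\bar{x},\bar{y}) \text{ with }
T \models \forall \bar{x}\,\bigl(\varphi(\bar{x}) \leftrightarrow \exists \bar{y}\,\psi(\bar{x},\bar{y})\bigr).
```

Uniformity then means that $\psi$ can be chosen independently of which real power functions interpret the function symbols in $\varphi$.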
27

Pragmatic constraints of completeness and linguistics of contributions in text theory and textual organization: development of a heuristic applied to the Bildungsroman

Portugues, Yann 01 December 2011 (has links)
The main purpose of our thesis is to study and uncover a linguistic level superior to that of the sentence, one which must be considered in any characterization and understanding of the nature of a text. Pragmatics, as the science of saying and meaning, is undoubtedly the discipline best placed to characterize this level. In its Gricean conception, it put forward a principle of cooperation and the existence of conversational maxims, without deepening, or even discussing, the level to which this principle and these maxims attach: that of the contribution, which, when one considers the maxim of quantity, is in fact a set of sentences (or utterances) — a set that may occasionally reduce to a single utterance but that, in most cases, defines a linguistic level intermediate between the utterance and the totality of what is said in the exchange. The thesis shows that this rather clandestine emergence of the notion of contribution, and the consequent confusion between the level of the utterance and that of the contribution, must be replaced, among other things: i) by a full recognition of contributions as sets of utterances satisfying pragmatic constraints, among which is a strong constraint of completeness derived from the maxim of quantity, which we adopt as a heuristic; ii) by the recognition that texts are contributions and must be described as such; iii) by the recognition that (macro-)contributions may include (micro-)contributions; iv) by the empirical study, applied to the Bildungsroman, of contributional and textual relevance, textual integration, and textual organization, which brings to light a number of phenomena that characterize the text as such. The thesis thus combines a contribution to the linguistics of contributions with a contributional approach to both text theory and the Bildungsroman.
28

Comprehensiveness of care in the Family Health Strategy: a contribution to an expanded anamnesis

Walter, Josef 15 February 2016 (has links)
The Family Health Strategy (FHS) does not always achieve comprehensiveness, even though this is a structuring principle of the Unified Health System (SUS) and is reflected in the definitions of the World Health Organization (WHO) and in Brazilian law, which expand the concept of health by attributing to the health/disease process the influence of social determinants. The health professions still operate with a reductionist approach, and concepts, definitions, and legislation alone are not enough to raise awareness, much less to change health practices. This work proposes an expanded anamnesis questionnaire focused on patients with multiple complaints, based on the theory of Fernando González Rey, which means working with a historical-cultural approach attentive to subjectivity, and therefore more humanized, in order to explore other ways of intervening in the health/disease process, break with its fragmented positivist approach, and contribute to improving the professional/user relationship in the FHS. To test the effectiveness of the questionnaire, a pilot study was performed; it indicated that, to achieve the expected goals, staff training and changes to the work processes currently found in the FHS are needed.
29

Aspects of the theory of modal functions

Falcão, Pedro Alonso Amaral 10 December 2012 (has links)
We present some aspects of the theory of modal functions, the modal correlate of the theory of truth-functions. While the formulas of classical propositional logic express truth-functions, the formulas of modal propositional logic (S5) express modal functions. We generalize some theorems of the theory of truth-functions to the modal case; in particular, we prove the functional completeness of certain sets of modal functions and define a (new) notion of truth-functional reduct of modal functions, as well as the composition of modal functions in terms of such reducts.
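The classical notion that the thesis generalizes — functional completeness — can be checked concretely in the truth-functional case. The standard example below is our illustration, not taken from the thesis: {NAND} suffices to define NOT, AND, and OR, which is the usual route to showing a set of connectives functionally complete.

```python
# Functional completeness in the classical (truth-functional) case:
# every connective definable from NAND alone. Truth tables are compared
# as tuples of outputs over all 0/1 input rows.
from itertools import product

def nand(a, b):
    return int(not (a and b))

def table(f, arity):
    """Truth table of f as a tuple over all 0/1 inputs, in lexicographic order."""
    return tuple(f(*row) for row in product((0, 1), repeat=arity))

NOT = lambda a: nand(a, a)
AND = lambda a, b: nand(nand(a, b), nand(a, b))
OR  = lambda a, b: nand(nand(a, a), nand(b, b))

assert table(NOT, 1) == (1, 0)
assert table(AND, 2) == (0, 0, 0, 1)
assert table(OR, 2)  == (0, 1, 1, 1)
```

The modal analogue replaces truth tables over {0, 1} with modal functions over S5, where the same kind of closure argument must additionally track what holds across possible worlds.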
30

Project management of collaborative product design: definition of a quantitative indicator

Fleche, Damien 11 December 2015 (has links)
Today, the product design process faces globalized markets and is carried out by geographically distributed, multi-disciplinary teams working collaboratively. In this context, the design process is driven by the integration and optimization of the stakeholders' collaboration. To facilitate the collaborative phases, new management strategies are defined and new information systems are put in place. These information systems are numerous and take various forms, which makes selecting and managing them difficult; yet, for design teams, the choice and management of these tools are key elements of the product design process. This thesis therefore focuses on supporting the management of collaborative engineering projects for the design and development of technical products. Our objective is to help the project leader manage his or her project better by using the most suitable collaborative tool at the right moment. We underline the need for a quantitative, non-intrusive indicator for steering collaborative design phases, so as to avoid purely subjective evaluation. This indicator complements existing approaches to evaluating the suitability of the ongoing collaboration by capturing the impact of collaboration steps on the project's evolution. Its computation relies on a specific metric describing the completeness of the CAD data, tied to the collaborative tools used and the project milestones. Moreover, we show that this new indicator can be integrated into an organizational approach such as PLM, to facilitate data storage and the computation of completeness.
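As a hedged sketch of what a completeness-based indicator might compute — the thesis's actual metric is not reproduced in this abstract, and the attribute names and weights below are invented:

```python
# Hypothetical completeness indicator for CAD data: the weighted fraction of
# expected attributes that are filled in at a given project milestone.

EXPECTED = {"geometry": 3.0, "material": 1.0, "tolerances": 2.0, "metadata": 1.0}

def completeness(model):
    """Ratio in [0, 1] of weighted attributes present and non-empty."""
    total = sum(EXPECTED.values())
    filled = sum(w for attr, w in EXPECTED.items() if model.get(attr))
    return filled / total

draft = {"geometry": "part_v1.step", "metadata": "rev A"}
released = dict(draft, material="AlSi10Mg", tolerances="ISO 2768-m")

assert completeness(draft) == (3.0 + 1.0) / 7.0   # early milestone: partial
assert completeness(released) == 1.0              # release milestone: complete
```

Tracking such a ratio across milestones gives the project leader a quantitative, non-intrusive signal of whether the collaboration steps are actually advancing the design data.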
