About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Σχεδιασμός, ανάλυση, ανάπτυξη και εφαρμογή οντολογιών / Design, analysis, development and application of ontologies

Ζορμπά, Αλεξάνδρα 10 June 2009 (has links)
Μέσα από αυτή την εργασία προσεγγίσαμε την έννοια των οντολογιών και εξετάσαμε μερικά είδη Οντολογιών καθώς και μια υφιστάμενη κατηγοριοποίηση. Αναφέραμε γλώσσες αναπαράστασης Οντολογιών, και αναλύσαμε τις σημαντικότερες από αυτές. Περιγράψαμε συνοπτικά μερικά εργαλεία κατασκευής οντολογιών με μεγαλύτερη έμφαση στο λογισμικό PROTÉGÉ 2000. Θίξαμε θέματα που αφορούν την μηχανική οντολογιών και παρουσιάσαμε περιληπτικά μερικές μεθοδολογίες ανάπτυξης Οντολογιών. Αναλύσαμε σε θεωρητικό επίπεδο τα βασικά βήματα ανάπτυξης μιας οντολογίας μέσα από ένα υπαρκτό παράδειγμα και παρουσιάσαμε την ανάπτυξη μιας οντολογίας που πραγματοποιήθηκε σε περιβάλλον του λογισμικού PROTÉGÉ για τις ανάγκες της παρούσας διπλωματικής. / In this work we approached the concept of ontologies, examined several kinds of ontologies, and reviewed an existing classification. We surveyed ontology representation languages and analysed the most important of them. We briefly described several ontology construction tools, with particular emphasis on the PROTÉGÉ 2000 software. We touched on issues of ontology engineering and summarised several ontology development methodologies. Finally, we analysed at a theoretical level the basic steps of developing an ontology through a real example, and we presented the development of an ontology carried out in the PROTÉGÉ environment for the purposes of this thesis.
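The development steps summarised in the abstract (defining classes, arranging them in a hierarchy, adding properties) can also be carried out programmatically rather than in Protégé. Below is a minimal, hypothetical sketch using the owlready2 Python library; the ontology IRI and all class and property names are illustrative assumptions and do not come from the thesis.

    from owlready2 import get_ontology, Thing, ObjectProperty

    # Hypothetical example: the IRI and every name below are invented.
    onto = get_ontology("http://example.org/university.owl")

    with onto:
        class Person(Thing): pass          # top-level concept
        class Student(Person): pass        # subclass (taxonomy step)
        class Course(Thing): pass
        class enrolledIn(ObjectProperty):  # relation between concepts
            domain = [Student]
            range = [Course]

    onto.save(file="university.owl", format="rdfxml")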
2

Justification based explanation in ontologies

Horridge, Matthew January 2011 (has links)
The Web Ontology Language, OWL, is the latest standard in logic based ontology languages. It is built upon the foundations of highly expressive Description Logics, which are fragments of First Order Logic. These logical foundations mean that it is possible to compute what is entailed by an OWL ontology. The reasons for entailments can range from fairly simple localised reasons through to highly non-obvious reasons. In both cases, without tool support that provides explanations for entailments, it can be very difficult or impossible to understand why an entailment holds. In the OWL world, justifications, which are minimal entailing subsets of ontologies, have emerged as the dominant form of explanation. This thesis investigates justification based explanation techniques. The core of the thesis is devoted to defining and analysing Laconic and Precise Justifications. These are fine-grained justifications whose axioms do not contain any superfluous parts. Optimised algorithms for computing these justifications are presented, and an extensive empirical investigation shows that these algorithms perform well on state-of-the-art, large and expressive bio-medical ontologies. The investigation also highlights the prevalence of superfluity in real ontologies, along with the related phenomenon of justification masking. The practicality of computing Laconic Justifications coupled with the prevalence of non-laconic justifications in the wild indicates that Laconic and Precise justifications are likely to be useful in practice. The work presented in this thesis should be of interest to researchers in the area of knowledge representation and reasoning, and developers of reasoners and ontology editors, who wish to incorporate explanation generation techniques into their systems.
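As background for the notion used above, a justification for an entailment is a minimal subset of the ontology that still entails it. The sketch below illustrates the classic black-box "expand then contract" way of computing a single justification against a pluggable entailment oracle. It is a simplified illustration of the general idea, not the optimised algorithms developed in the thesis, and the toy propositional oracle merely stands in for a DL reasoner.

    def compute_justification(axioms, entails, entailment):
        # Expansion: add axioms until the growing subset entails the target.
        support = []
        for ax in axioms:
            support.append(ax)
            if entails(support, entailment):
                break
        # Contraction: drop every axiom that is not needed for the entailment.
        for ax in list(support):
            candidate = [a for a in support if a != ax]
            if entails(candidate, entailment):
                support = candidate
        return support  # a minimal entailing subset, i.e. one justification

    # Toy oracle: axioms are implications (p, q); (start, goal) is entailed
    # if goal is reachable from start. A real system would call a DL reasoner.
    def toy_entails(axioms, entailment):
        start, goal = entailment
        reachable, changed = {start}, True
        while changed:
            changed = False
            for p, q in axioms:
                if p in reachable and q not in reachable:
                    reachable.add(q)
                    changed = True
        return goal in reachable

    axioms = [("A", "B"), ("B", "C"), ("X", "Y"), ("A", "C")]
    print(compute_justification(axioms, toy_entails, ("A", "C")))
    # -> [('A', 'B'), ('B', 'C')]  (one of the two justifications for A -> C)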
3

Analyse des causes d'échec des projets d'affaires à partir d'études de cas en entreprises, et proposition d'un modèle de domaine en langage UML / Firm-based investigation into business project failure and developing business project management domain using unified modelling language

Wong, Siaw Ming 24 September 2010 (has links)
En dépit des efforts destinés à accroitre la maturité de la profession dans le domaine de la gestion de projet, le taux d’échec des projets d’affaires (par opposition aux projets techniques) reste élevé. On s’est aperçu que les standards actuels en matière de gestion de projet ne prenaient pas en compte les contraintes liées au contexte d’exécution des projets, et que de ce fait, la gestion de projets d’affaires n’avait pas été étudiée en profondeur. L’objectif de ce travail de recherche transdisciplinaire est donc d’abord d’obtenir une meilleure compréhension du sujet en essayant de comprendre pourquoi l’échec d’un projet d’affaires est considéré comme un échec du point de vue de l’organisation, puis de formaliser la connaissance acquise dans un format qui permette par la suite de l’enrichir et de l’appliquer. En nous appuyant sur le modèle des systèmes ouverts, trois études de cas ont été conduites, avec pour objectif d’étudier l’effet modérateur des différents types de structures des organisations et des systèmes d’information pour la gestion de projet sur la relation causale entre la compétence en gestion de projet et le succès des projets d’affaires. Il résulte de ce travail que le succès des projets d’affaires devrait être mesuré en termes de réalisation des objectifs du projet mais aussi de l’organisation. Ce travail a également permis d’identifier les composants essentiels de la gestion de projets d’affaires : (1) “Compétences de base pour la gestion de projet ”; (2) “Gestion intégrée de programme” et (3) “Système d’information intégré pour la gestion de projet”. Dans les trois études de cas, il apparait également de manière déterminante que les facteurs d’ordre organisationnel ont un impact significatif sur la réussite du projet. Une théorie est proposée, qui postule qu’un projet d’affaires a de grandes chances d’échouer s’il n’est pas géré comme une partie intégrante de l’entreprise, en le traitant comme une opération courante au sein de l’entreprise. Cela signifie que la manière dont les projets d’affaires sont gérés aujourd’hui devrait être revue. Le rôle de l’informatique dans l’assistance à la gestion de ces projets devrait également être revu. Et il faudrait sans doute aussi faire une plus grande différence entre les projets d’affaires et les projets « traditionnels » plus techniques. D’autre part, la formalisation de la connaissance acquise au cours de ces études de cas a été effectuée en développant un modèle de domaine à l’aide du langage de modélisation UML. Et l’approche de modélisation du domaine a été élaborée en modifiant l’étape de conceptualisation dans le processus traditionnel d’ingénierie d’ontologie. En prenant comme point de départ le cadre théorique qui prend en compte l’essentiel des composants de la gestion de projets d’affaires, le modèle a été construit en quatre étapes : (1) définition de la portée du travail en développant chaque composant à partir des normes en vigueur ; (2) intégration de ces développements en réutilisant les travaux réalisés et proposés par d’autres chercheurs ; (3) développement et (4) évaluation des spécifications UML décrivant aussi bien les aspects structurels que dynamiques du sujet traité. 
Le fait d’avoir réussi à développer un modèle du domaine et à montrer de quelle manière il pouvait être mis en œuvre directement pour développer un système d’information pour la gestion de projet ainsi que des ontologies portant sur les connaissances liées à la gestion de projet a montré que l’approche consistant à construire une base sémantique commune permettant de travailler à la modélisation de systèmes applicatifs et d’ontologies est à la fois réalisable et valide. De plus, le modèle de domaine proposé peut servir de socle permettant d’accumuler progressivement la connaissance du domaine, dans la mesure où l’approche de modélisation a pris en compte la possibilité d’intégrer des travaux et propositions antérieurs. Ce résultat ouvre de nouvelles perspectives de développement de logiciels s’appuyant sur un modèle de domaine qui est directement issu de travaux de recherch / Despite the efforts to improve the maturity of the project management profession, the failure rate of business projects remains high. It was realized that current project management standards do not take contextual requirements into consideration, and business project management really has not been addressed in totality. The purpose of this interdisciplinary study, therefore, is to obtain a better understanding of the subject matter by investigating why business projects fail from the organization’s perspective, and to specify the acquired knowledge in a format that facilitates future expansion and application. Based upon the open systems model, 3 case studies were conducted to examine the moderating effect of the types of organization structure and Project Management Information System (PMIS) support on the causal relationship between project management competency and business project success. It was found that business project success should be measured in terms of meeting both project and organization objectives; and the essential components of business project management were identified to be (1) “Core business project management competencies”; (2) “Integrated programme management” and (3) “Integrated PMIS”. It was conclusive that organizational factors have a significant impact on the attainment of business project success in all 3 cases; and a theory has been proposed that a business project is likely to fail if it is not managed as an integral part of the business enterprise, with the same emphasis as its business-as-usual operations. This implies that the way business project management is executed today should be reviewed; the role of IT in support of project management work should be reassessed; and a clear distinction between business project management and traditional project management should perhaps be made. The specification of the acquired knowledge, on the other hand, was achieved by developing a domain model using UML, based on a domain modelling approach which was devised by modifying the conceptualization step of the conventional ontology engineering process. Using the theoretical framework that captures the essential business project management components as the starting point, the model was constructed in 4 steps, namely (1) defining the scope of work by expanding each component in the framework using prevailing standards; (2) integrating the defined scope with reusable existing work; (3) developing and (4) testing the UML specifications which describe both structural and behavioural aspects of the subject matter.
The successful creation of the domain model, and the demonstration of how it can be used directly in the development of the desired PMIS and project knowledge ontologies, showed that the approach of building a common semantic foundation to support both application system modelling and ontology modelling is workable and effective. Furthermore, since the modelling approach has a built-in ability to reuse existing work, the domain model can be used as a foundation that accumulates domain knowledge progressively. This opens up a new horizon where software systems could be built based on a domain model that is a direct reflection of basic research findings; as a result, software systems in the future would compete primarily on non-functional qualities.
4

Ontology based code generation for datalogger

Mehalingam, Senthilkumar. January 2006 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Dept. of Computer Science, 2006. / Includes bibliographical references.
5

Systemtechnische Gestaltung der Informationsarchitektur im Entwicklungsprozess / Systems engineering design of the information architecture in the development process

Schulz, Armin Peter. January 2002 (has links)
Revised text of: Dissertation--München--Fakultät für Maschinenwesen der Technischen Universität München, 2001.
6

A framework and methodology for ontology mediation through semantic and syntactic mapping

Muthaiyah, Saravanan. January 2008 (has links)
Thesis (Ph. D.)--George Mason University, 2008. / Vita: p. 177. Thesis director: Larry Kerschberg. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology. Title from PDF t.p. (viewed July 3, 2008). Includes bibliographical references (p. 169-176). Also issued in print.
7

Module-based classification of OWL ontologies

Matentzoglu, Nicolas Alexander January 2016 (has links)
Classification is a core reasoning service provided by most OWL reasoners. Classification in general is hard - up to 2NExpTime for SROIQ(D), the Description Logic underpinning the Web Ontology Language (OWL). While it has been shown that classification is practical for a wide range of inputs, there are still ontologies for which classification takes an unreasonable amount of time for purposes such as ontology engineering (frequent classifications after updates). A natural optimisation strategy is divide and conquer, that is, to decompose the ontology into subsets which are hopefully easier to classify and whose classifications can be combined into a complete classification of the whole ontology. Unfortunately, an arbitrary subset may not be self-contained, i.e. it might be missing information that is needed to determine entailments over its signature. Moreover, such a subset can be potentially harder to classify than the whole ontology. In order to mitigate those problems, classification preserving decompositions (CPDs) must be designed with care so that they support complete classification which is, in practice, more efficient than monolithic classification. Locality-based modules are subsets of an ontology that provide certain guarantees with respect to the entities (concepts, roles) in their signature - in particular, modules are self-contained. In this thesis we explore the use of syntactic locality-based modules for underpinning classification-preserving decompositions. In particular, we empirically explore their potential to avoid subsumption tests and reduce subsumption test hardness, and weigh those benefits against detrimental effects such as overhead (for example, the time it takes to compute the decomposition) and redundancy (a consequence of potentially overlapping chunks in the decomposition). The main contributions of this thesis are an in-depth empirical characterisation of these effects, an extensible framework for observing CPDs in action down to the granularity of individual subsumption tests, a large, public corpus of observations, and its analysis and insights on experimental methodologies around OWL reasoning.
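To make the divide-and-conquer idea concrete, the schematic sketch below shows how a classification-preserving decomposition would be used: classify each chunk separately and take the union of the resulting subsumptions. The decompose and classify_chunk functions are placeholders (for instance, a syntactic locality module extractor and a DL reasoner); this is a simplified illustration of the strategy, not the framework developed in the thesis.

    def classify_by_decomposition(ontology, decompose, classify_chunk):
        """Classify an ontology via a (classification-preserving) decomposition.

        decompose(ontology)   -> iterable of axiom subsets ("chunks")
        classify_chunk(chunk) -> set of (subclass, superclass) pairs
        Both arguments are placeholders for a module extractor and a reasoner."""
        subsumptions = set()
        for chunk in decompose(ontology):
            # Overlapping chunks mean some subsumptions are recomputed:
            # the "redundancy" cost weighed against the benefits above.
            subsumptions |= classify_chunk(chunk)
        # Correctness relies on the decomposition being classification
        # preserving: the union must equal the monolithic classification.
        return subsumptions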
8

The modular structure of an ontology : atomic decomposition and its applications

Del Vescovo, Chiara January 2013 (has links)
Ontologies are descriptions of the knowledge about a domain of interest encoded in computer processable languages, e.g., Description Logics, which are decidable fragments of First Order Logic. The main aim of ontologies is to define unambiguous vocabularies to facilitate knowledge sharing and integration. A critical issue with ontologies consists of their increasing complexity. To address this problem several notions of modularity have recently been proposed. Modularity notions can help in two ways: 1) if we know what sub-part of the ontology we want to work with, obtaining the appropriate module will allow us to work with that sub-part in a principled way; 2) a notion of module might induce a modular structure which allows users to explore the entire ontology in a sensible manner (perhaps finding appropriate sub-parts to work on). However, the most popular notion --- locality based modules --- while excelling at modular extraction, has thus far resisted attempts to induce a modular structure. Indeed, due to their nature, locality based modules tend to occur in unfeasible numbers in ontologies. We tackle this problem by identifying basic building blocks of modules as sets of axioms which 'cling together', that is, sets of axioms such that if any element appears in a module, then all the rest do. This notion of an 'atom' proves key to defining a useful family of locality based modular structures, the (Labelled) Atomic Decompositions ((L)ADs). In this thesis, we define (L)AD and explore its properties. We show that ADs are efficiently computable and, with appropriate labellings, provide a reasonably terse representation of the entire set of locality based modules. From ADs, we are able to distinguish so-called "genuine" modules, i.e., modules that cannot be decomposed further as the union of two or more modules. Finally, we explore several of the applications to which (L)ADs have been applied, including module extraction, ontology comprehension, and modular reasoning.
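As a concrete illustration of the "cling together" notion above, the naive sketch below groups axioms into atoms by checking which modules they occur in: two axioms end up in the same atom exactly when they appear in the same set of modules. Real atomic decompositions are computed far more efficiently and never enumerate all modules; the axiom names and modules here are purely hypothetical.

    from collections import defaultdict

    def atoms(axioms, modules):
        """Group axioms into atoms: axioms that occur in exactly the same
        modules 'cling together'. `modules` is a family of modules, each
        given as a set of axioms (a toy stand-in for locality-based
        module extraction)."""
        occurrence = defaultdict(list)
        for ax in axioms:
            key = frozenset(i for i, m in enumerate(modules) if ax in m)
            occurrence[key].append(ax)
        return list(occurrence.values())

    # Hypothetical axioms a1..a4 and three modules over them:
    modules = [{"a1", "a2"}, {"a1", "a2", "a3"}, {"a4"}]
    print(atoms(["a1", "a2", "a3", "a4"], modules))
    # -> [['a1', 'a2'], ['a3'], ['a4']]   (a1 and a2 always appear together)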
9

Semantic Prioritization of Novel Causative Genomic Variants in Mendelian and Oligogenic Diseases

Boudellioua, Imene 21 March 2019 (has links)
Recent advances in Next Generation Sequencing (NGS) technologies have facilitated the generation of massive amounts of genomic data, which in turn is bringing the promise that personalized medicine will soon become widely available. As a result, there is increasing pressure to develop computational tools to analyze and interpret genomic data. In this dissertation, we present a systematic approach for interrogating patients’ genomes to identify candidate causal genomic variants of Mendelian and oligogenic diseases. To achieve that, we leverage biomedical data available from extensive biological experiments along with machine learning techniques to build predictive models that rival the currently adopted approaches in the field. We integrate a collection of features representing molecular information about the genomic variants and information derived from biological networks. Furthermore, we incorporate genotype-phenotype relations by exploiting semantic technologies and automated reasoning over a cross-species phenotypic ontology network obtained from human, mouse, and zebrafish studies. With our first method, named PhenomeNet Variant Predictor (PVP), we perform an extensive evaluation on a large set of synthetic exomes and genomes covering diverse Mendelian diseases and phenotypes. Moreover, we evaluate PVP on a set of exomes from real patients suffering from congenital hypothyroidism. We show that PVP successfully outperforms state-of-the-art methods and provides a promising tool for accurate variant prioritization for Mendelian diseases. Next, we update the PVP method using a deep neural network architecture as a backbone for learning and illustrate the enhanced performance of the new method, DeepPVP, on synthetic exomes and genomes. Furthermore, we propose OligoPVP, an extension of DeepPVP that prioritizes candidate oligogenic combinations in personal exomes and genomes by integrating knowledge from protein-protein interaction networks, and we evaluate the performance of OligoPVP on synthetic genomes created with known disease-causing digenic combinations. Finally, we discuss some limitations and future steps for extending the applicability of our proposed methods to identify the genetic underpinning of Mendelian and oligogenic diseases.
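The kind of model described above can be pictured as a classifier over per-variant molecular features plus an ontology-derived phenotype-similarity score. The sketch below is only a schematic stand-in for such a pipeline (the thesis methods use richer feature sets and, for DeepPVP, a deep neural network); the feature names and the phenotype_similarity helper are hypothetical assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def rank_variants(variants, patient_phenotypes, model, phenotype_similarity):
        """Rank candidate variants by predicted probability of being causative.
        Each variant is a dict of (hypothetical) molecular features; the
        phenotype_similarity(gene, phenotypes) helper stands in for an
        ontology-based semantic similarity between the gene's associated
        phenotypes and the patient's phenotypes."""
        X = np.array([
            [v["deleteriousness_score"],   # e.g. a pathogenicity prediction
             v["allele_frequency"],        # population frequency
             phenotype_similarity(v["gene"], patient_phenotypes)]
            for v in variants
        ])
        scores = model.predict_proba(X)[:, 1]
        return sorted(zip(scores, variants), key=lambda p: p[0], reverse=True)

    # Training (on labelled causative / non-causative examples) could look like:
    # model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)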
10

Embedding Ontologies Using Category Theory Semantics

Zhapa-Camacho, Fernando 28 March 2022 (has links)
Ontologies are a formalization of a particular domain through a collection of axioms, usually founded on Description Logic. The knowledge encoded in the axioms carries semantic information about the domain, and this fact has motivated the development of methods that capture such knowledge and can therefore perform different tasks such as prediction and similarity computation. Under the same motivation, we present a new method to capture semantic information from an ontology. We explore the logical component of ontologies and their theoretical connections with their counterparts in Category Theory, as Category Theory develops a structural representation of mathematical systems, and the structures found there have strong relationships with logic, grounded in the so-called Curry-Howard-Lambek isomorphism. In this regard, we have developed a method that represents logical axioms as categorical diagrams and uses the commutativity property of such diagrams as a constraint to generate embeddings of ontology classes in R^n. Furthermore, as a contribution in terms of software tools, we developed mOWL: Machine Learning Library With Ontologies. mOWL is a software library that incorporates state-of-the-art methods, mostly from Machine Learning, that utilize ontologies as background knowledge. We rely on mOWL to implement the proposed method and compare it with existing ones.
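To give a flavour of the commutativity constraint mentioned above, the sketch below embeds the objects of a diagram as vectors and the arrows as matrices, and penalizes the disagreement between two paths that should commute. This is a conceptual illustration under those simplifying assumptions, not the actual method or the mOWL implementation described in the thesis.

    import numpy as np

    def commutativity_loss(x, path_a, path_b):
        """Penalize non-commutativity of a diagram: x is the embedding of the
        source object, and path_a / path_b are lists of matrices (arrow
        embeddings) applied in order along two paths with the same endpoints.
        If the diagram commutes, both paths should send x to the same point."""
        def apply_path(vec, path):
            for arrow in path:
                vec = arrow @ vec
            return vec
        diff = apply_path(x, path_a) - apply_path(x, path_b)
        return float(np.sum(diff ** 2))

    # Toy usage: a triangle with arrows f, g and a direct arrow h that should
    # equal the composite g o f (all randomly initialised here).
    rng = np.random.default_rng(0)
    x = rng.normal(size=2)
    f, g, h = (rng.normal(size=(2, 2)) for _ in range(3))
    loss = commutativity_loss(x, [f, g], [h])   # penalizes (g o f)(x) != h(x)
    print(loss)  # an optimiser would drive this towards zero during training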
