181

Evaluating conjunctive and graph queries over the EL profile of OWL 2

Stefanoni, Giorgio January 2015 (has links)
OWL 2 EL is a popular ontology language that is based on the EL family of description logics and supports regular role inclusions, axioms that can capture compositional properties of roles such as role transitivity and reflexivity. In this thesis, we present several novel complexity results and algorithms for answering expressive queries over OWL 2 EL knowledge bases (KBs) with regular role inclusions. We first focus on the complexity of conjunctive query (CQ) answering in OWL 2 EL and show that the problem is PSpace-complete in combined complexity, the complexity measured in the total size of the input. All previously known approaches encode the regular role inclusions using finite automata that can be worst-case exponential in size, and thus are not optimal. In our PSpace procedure, we address this problem by using a novel, succinct encoding of regular role inclusions based on pushdown automata with a bounded stack. Moreover, we strengthen the known PSpace lower complexity bound and show that the problem is PSpace-hard even if we consider only the regular role inclusions as part of the input and the query is acyclic; thus, our algorithm is optimal in knowledge base complexity, the complexity measured in the size of the KB, as well as for acyclic queries. We then study graph queries for OWL 2 EL and show that answering positive, converse-free conjunctive graph queries is PSpace-complete. Thus, from a theoretical perspective, we can add navigational features to CQs over OWL 2 EL without an increase in complexity. Finally, we present a practicable algorithm for answering CQs over OWL 2 EL KBs with only transitive and reflexive composite roles. None of the previously known approaches target transitive and reflexive roles specifically, and so they all run in PSpace and do not provide a tight upper complexity bound. In contrast, our algorithm is optimal: it runs in NP in combined complexity and in PTime in KB complexity.
We also show that answering CQs is NP-hard in combined complexity if the query is acyclic and the KB contains one transitive role, one reflexive role, or nominals—concepts containing precisely one individual.
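The role of transitivity in these complexity results can be illustrated with a toy example. The sketch below, in plain Python rather than a DL reasoner, saturates a transitive role's assertions to a fixpoint and then answers a simple conjunctive query over the saturated ABox; the role and individual names are invented for the example.

```python
# Toy illustration (not the thesis algorithm): answering a simple
# conjunctive query over a KB with one transitive role, by first
# materialising the transitive closure of the role's assertions.

def transitive_closure(pairs):
    """Saturate a set of role assertions under transitivity."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

# ABox: partOf(a, b), partOf(b, c), partOf(c, d)
part_of = {("a", "b"), ("b", "c"), ("c", "d")}
saturated = transitive_closure(part_of)

# CQ: q(x) <- partOf(x, d) -- which individuals are (transitively) part of d?
answers = sorted(x for (x, y) in saturated if y == "d")
```

The naive fixpoint loop is quadratic per pass and only meant to show the semantics; the thesis's point is precisely that such reasoning can be organised to meet tight complexity bounds.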
182

SWoDS: Semantic Web (of Data) Service

Andrade, Leandro José Silva 05 December 2014 (has links)
Created with the initial purpose of essentially connecting HTML documents, the Web has since expanded its capabilities, becoming a highly heterogeneous environment of applications, resources, data, and users interacting with one another. The Semantic Web proposal, combined with Web Services, seeks to establish standards that enable communication between heterogeneous applications on the Web. The Web of Data, another line of Web evolution, provides guidelines (Linked Data) on how to use Semantic Web technologies to publish and define semantic links between data from different sources. However, there is a gap in the integration between applications based on Web Services and Web of Data applications. This gap arises because Web Services are "executed", whereas the Web of Data is "queried". This dissertation therefore presents the Semantic Web (of Data) Service (SWoDS), whose goal is to provide Web Services on top of Linked Data bases. The Semantic Web (of Data) Service can bridge the gap between Web Services and Web of Data applications by making the Web of Data "executable" through Semantic Web Services, thus allowing Linked Data, through SWoDS, to be integrated with Web Services by means of automatic service composition and service discovery operations.
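The core idea, wrapping a query over a Linked Data store as an "executable" service operation, can be sketched as follows. This is an illustrative toy, not the SWoDS implementation: the triples, predicate names, and service operation are all invented, and a real deployment would use SPARQL over an RDF store.

```python
# Hypothetical mini "Web of Data": a set of triples with invented names.
TRIPLES = {
    ("ex:Salvador", "ex:capitalOf", "ex:Bahia"),
    ("ex:Bahia", "ex:partOf", "ex:Brazil"),
    ("ex:Salvador", "ex:population", "2900000"),
}

def match(store, s=None, p=None, o=None):
    """Triple-pattern query: None acts as a wildcard, as in SPARQL."""
    return {t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

def get_population(city):
    """An invented 'service operation' answered by querying the data,
    i.e. the Web of Data made executable as a service."""
    hits = match(TRIPLES, s=city, p="ex:population")
    return next(iter(hits))[2] if hits else None

result = get_population("ex:Salvador")
```

The point of the sketch is the inversion of roles: the service interface is "executed" by the client, while underneath it is answered by "querying" Linked Data.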
183

Semantic Matching for Stream Reasoning

Dragisic, Zlatan January 2011 (has links)
Autonomous systems need to do a great deal of reasoning during execution in order to react in a timely manner to changes in their environment. The data needed for this reasoning process is often provided by a number of sensors. One approach to this kind of reasoning is the evaluation of temporal logical formulas through progression. To evaluate these formulas it is necessary to provide relevant data for each symbol in a formula. Mapping relevant data to symbols in a formula could be done manually; however, as systems become more complex it becomes harder for a designer to explicitly state and maintain this mapping. Automatic support for mapping data from sensors to symbols would therefore make systems more flexible and easier to maintain. DyKnow is a knowledge processing middleware which supports processing data at different levels of abstraction. The output from the processing components in DyKnow takes the form of streams of information. In DyKnow, reasoning over incrementally available data is done by progressing metric temporal logical formulas. A logical formula contains a number of symbols whose values over time must be collected and synchronized in order to determine the truth value of the formula. Mapping symbols in a formula to relevant streams is done manually in DyKnow. The purpose of this matching is, for each variable, to find one or more streams whose content matches the intended meaning of the variable. This thesis analyses and provides a solution to the process of semantic matching. The analysis focuses mostly on how existing semantic technologies such as ontologies can be used in this process. The thesis also analyses how this process can be used for matching symbols in a formula to the content of streams on distributed and heterogeneous platforms. Finally, the thesis presents an implementation in the Robot Operating System (ROS).
The implementation is tested in two case studies, covering a scenario with only a single platform in the system and a scenario with multiple distributed heterogeneous platforms. The conclusions are that semantic matching represents an important step towards fully automated semantic-based stream reasoning. Our solution also shows that semantic technologies are suitable for establishing machine-readable domain models. The use of these technologies made the semantic matching domain- and platform-independent, as all domain- and platform-specific knowledge is specified in ontologies. Moreover, semantic technologies provide support for integrating data from heterogeneous sources, which makes it possible for platforms to use streams from distributed sources.
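The semantic-matching step can be sketched with a toy concept hierarchy and stream annotations. All names below are invented, and this is not the DyKnow/ROS implementation: a formula symbol's concept is matched against each stream's declared concept by walking up the hierarchy, so the matching stays domain-independent as long as the hierarchy lives in an ontology.

```python
# Invented concept hierarchy standing in for an ontology:
# concept -> more general concept.
ONTOLOGY = {
    "SpeedOverGround": "Speed",
    "AirSpeed": "Speed",
    "Altitude": "Position",
}

# Invented stream annotations: stream name -> declared concept.
STREAMS = {
    "/uav1/gps/sog": "SpeedOverGround",
    "/uav1/pitot": "AirSpeed",
    "/uav1/baro": "Altitude",
}

def subsumed_by(concept, target):
    """Walk up the concept hierarchy to test subsumption."""
    while concept is not None:
        if concept == target:
            return True
        concept = ONTOLOGY.get(concept)
    return False

def match_symbol(symbol_concept):
    """Find all streams whose annotation is subsumed by the
    intended concept of a formula symbol."""
    return sorted(name for name, c in STREAMS.items()
                  if subsumed_by(c, symbol_concept))

speed_streams = match_symbol("Speed")
```

A symbol annotated with the concept "Speed" matches both the GPS-derived and the pitot-derived stream, while the barometric altitude stream is rejected.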
184

Från luddig verklighet till strikt formalism : Utveckling av en metod för den semantiska webben / From fuzzy reality to strict formalism: developing a method for the Semantic Web

Hahne, Fredrik, Lindgren, Åsa January 2005 (has links)
The Internet is the world's largest source of information, and it expands every day. It is possible to find almost any kind of information as long as you know how and where to look for it, yet searches still match only the words themselves. In this essay we have tried to find an approach that makes it possible to give words a meaning or a context. As a starting point we used the Socrates method, a method that breaks texts down into their smallest elements and forms activities. We then turned these activities into ontologies by forming general and specific descriptions of the activities. The ontologies are meant to create a common language for humans as well as computers, in which meaning and context are built in. After creating our ontologies we used the Web Ontology Language (OWL), the ontology language considered closest to a standard. OWL was developed for the Semantic Web, which is also the ultimate objective of our essay. The Semantic Web is meant to be an extension of the existing Web that adds comprehension for computers. We have come to realize that the Semantic Web would be a great improvement for both humans and computers, since it would make it much easier to find the information you are looking for.
185

Analysis of Patterns in Handwritten Spelling Errors among Students with Various Specific Learning Disabilities

Winkler, Laura Ann 30 June 2016 (has links)
Students diagnosed with specific learning disabilities struggle with spelling accuracy, but they do so for different reasons. For instance, students with dysgraphia, dyslexia, and oral-written language learning disability (OWL-LD) have distinct areas of weakness in cognitive processing and unique difficulties with the linguistic features necessary for accurate spelling (Silliman & Berninger, 2011). This project considered the spelling errors made by such students to determine if their unique learning profiles lead to distinct misspelling patterns. Academic summaries handwritten by 33 students diagnosed with dysgraphia (n=13), dyslexia (n=15), and OWL-LD (n=5) were analyzed for type/complexity and number of spelling errors. Additionally, the differences in error frequency and complexity were analyzed based on whether academic material had been listened to or read. Misspellings were extracted from the students' essays and evaluated using an unconstrained linguistic scoring system (POMAS). Then, the complexity/severity of each misspelling was computed using a complexity metric (POMplexity). Statistical results revealed that children within the diagnostic categories of dysgraphia, dyslexia, and OWL-LD appear to produce errors that are similar in complexity and frequency. Hence, students with specific learning disabilities do not appear to produce error patterns and frequencies specific to their diagnosis. Additionally, statistical results indicated that all students produced similar numbers of errors in both the reading and listening conditions, indicating that the mode of presentation did not affect spelling accuracy. When spelling errors were analyzed qualitatively, some differences across diagnostic categories and variability within groups were noted. Students with dysgraphia produced misspellings involving a phoneme addition or omission.
Phonological and orthographic errors typical of younger children were characteristic of misspellings produced by students with dyslexia. Individuals with OWL-LD tended to omit essential vowels and were more likely to misspell the same word in multiple ways. Overall, these results indicate that the subcategories of dysgraphia, dyslexia, and OWL-LD represent gradients of impairment within the overarching category of specific learning disabilities. However, even within those subcategories, there is a wide degree of variability. Diagnostic categories, then, may suggest areas of linguistic weakness, but subcategories alone cannot be used for determining the nature of spelling intervention.
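As a rough, hypothetical stand-in for such a complexity metric (this is not the actual POMAS/POMplexity system), one could score how far a misspelling departs from its target with Levenshtein edit distance, so a near-miss scores lower than a severely reduced form:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance:
    minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Invented misspellings of "experience", scored against the target:
scores = {m: edit_distance(m, "experience")
          for m in ["expierence", "exprience", "exprns"]}
```

Under this proxy, the single-letter omission "exprience" scores 1, the transposition-like "expierence" scores 2, and the heavily reduced "exprns" scores higher still, mirroring the intuition that more severe misspellings receive larger complexity values.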
186

Using Semantic Web Technology in Requirements Specifications

Kroha, Petr, Labra Gayo, José Emilio 05 November 2008 (has links) (PDF)
In this report, we investigate how the methods developed for the Semantic Web could be used in capturing, modeling, developing, checking, and validating requirements specifications. Requirements specification is a complex and time-consuming process. The goal is to describe exactly what the user wants and needs before the next phase of the software development cycle begins. Any failure or mistake in a requirements specification is very expensive, because it causes the development of software parts that are not compatible with the real needs of the user and must be reworked later. When the analysis phase of a project starts, analysts have to discuss the problem to be solved with the customer (users, domain experts) and then write down the requirements found in the form of a textual description. This is a form the customer can understand. However, any textual description of requirements can be (and usually is) incorrect, incomplete, ambiguous, and inconsistent. Later on, the analyst specifies a UML model based on the requirements description he wrote earlier. However, users and domain experts cannot validate the UML model, as most of them do not understand (semi-)formal languages such as UML. It is well known that the most expensive failures in software projects have their roots in requirements specifications. Misunderstanding between analysts, experts, users, and customers (stakeholders) is very common and brings projects over budget. The goal of this investigation is to do some (at least partial) checking and validation of the UML model using a predefined domain-specific ontology in OWL, and to perform some checking using assertions in description logic.
As we described in our previous papers, we have implemented a tool containing a module (a computational-linguistics component) that can generate a textual requirements description from the information in UML models, so that the stakeholders can read it and decide whether the analyst's understanding is right or how it differs from their own. We argue that the feedback produced by checking the UML model (via ontologies and OWL DL reasoning) can have an important impact on the quality of the resulting requirements. This report contains a description and explanation of methods developed and used in Semantic Web technology and a proposed concept for their use in requirements specification. It was written during my sabbatical in Oviedo and should serve as a starting point for theses of our students, who will implement the ideas described here and run experiments concerning the efficiency of the proposed method.
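The kind of ontology-based check proposed here can be sketched with a toy example (the domain classes are invented; this is not the authors' tool, and a real system would use an OWL DL reasoner): a requirements model that types an individual with two classes declared disjoint in the domain ontology is flagged as inconsistent.

```python
# Invented domain ontology: class -> direct superclass,
# plus one DisjointClasses axiom.
SUBCLASS = {
    "GoldCustomer": "Customer",
    "Supplier": "Partner",
}
DISJOINT = {("Customer", "Partner")}

def ancestors(cls):
    """The class and all its superclasses."""
    result = {cls}
    while cls in SUBCLASS:
        cls = SUBCLASS[cls]
        result.add(cls)
    return result

def inconsistent(assertions):
    """Return individuals typed by two disjoint classes."""
    bad = []
    for ind, classes in assertions.items():
        closure = set().union(*(ancestors(c) for c in classes))
        if any(a in closure and b in closure for (a, b) in DISJOINT):
            bad.append(ind)
    return bad

# A requirements model asserting that "acme" is both a gold customer
# and a supplier violates the disjointness axiom:
errors = inconsistent({"acme": {"GoldCustomer", "Supplier"},
                       "bob": {"Customer"}})
```

Feedback of this form ("acme cannot be both a Customer and a Partner") is the sort of signal the report proposes feeding back to stakeholders.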
187

Drug repositioning and indication discovery using description logics

Croset, Samuel January 2014 (has links)
Drug repositioning is the discovery of new indications for approved or failed drugs. This practice is commonly done within the drug discovery process in order to adjust or expand the application line of an active molecule. Nowadays, an increasing number of computational methodologies aim at predicting repositioning opportunities in an automated fashion. Some approaches rely on the direct physical interaction between molecules and protein targets (docking) and some methods consider more abstract descriptors, such as a gene expression signature, in order to characterise the potential pharmacological action of a drug (Chapter 1). On a fundamental level, repositioning opportunities exist because drugs perturb multiple biological entities (on- and off-targets), themselves involved in multiple biological processes. Therefore, a drug can play multiple roles or exhibit various modes of action responsible for its pharmacology. The work done for my thesis aims at characterising these various modes and mechanisms of action for approved drugs, using a mathematical framework called description logics. In this regard, I first specify how living organisms can be compared to complex black box machines and how this analogy can help to capture biomedical knowledge using description logics (Chapter 2). Secondly, the theory is implemented in the Functional Therapeutic Chemical Classification System (FTC - https://www.ebi.ac.uk/chembl/ftc/), a resource defining over 20,000 new categories representing the modes and mechanisms of action of approved drugs. The FTC also indexes over 1,000 approved drugs, which have been classified into the mode of action categories using automated reasoning. The FTC is evaluated against a gold standard, the Anatomical Therapeutic Chemical Classification System (ATC), in order to characterise its quality and content (Chapter 3).
Finally, from the information available in the FTC, a series of drug repositioning hypotheses were generated and made publicly available via a web application (https://www.ebi.ac.uk/chembl/research/ftc-hypotheses). A subset of the hypotheses, related to cardiovascular hypertension as well as Alzheimer's disease, is discussed in more detail as an example application (Chapter 4). The work performed illustrates how valuable new biomedical knowledge can be automatically generated by integrating and leveraging the content of publicly available resources using description logics and automated reasoning. The newly created classification (FTC) is a first attempt to formally and systematically characterise the function or role of approved drugs using the concept of mode of action. The open hypotheses derived from the resource are available to the community to analyse and to use in designing further experiments.
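The defined-class style of classification the FTC relies on can be illustrated with a toy example (the drug and target data below are invented, and real FTC classes are evaluated by an OWL reasoner): a drug falls into a mode-of-action category exactly when it perturbs some target involved in the category's biological process.

```python
# Invented drug -> {(effect, target)} assertions.
PERTURBS = {
    "atenolol": {("inhibits", "ADRB1")},
    "propranolol": {("inhibits", "ADRB1"), ("inhibits", "ADRB2")},
}
# Invented target -> biological processes it participates in.
INVOLVED_IN = {"ADRB1": {"heart rate regulation"},
               "ADRB2": {"smooth muscle relaxation"}}

def classify(drug, effect, process):
    """Necessary-and-sufficient membership check: the drug belongs to
    the category iff it has the stated effect on some target involved
    in the stated process."""
    return any(e == effect and process in INVOLVED_IN.get(t, set())
               for (e, t) in PERTURBS[drug])

anti_hr = sorted(d for d in PERTURBS
                 if classify(d, "inhibits", "heart rate regulation"))
```

Both invented beta-blockers are classified as heart-rate-regulation inhibitors, while only one of them also lands in the smooth-muscle category; scaling this membership check to 20,000 defined categories is what automated reasoning provides.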
188

Ontology-Driven Self-Organization of Politically Engaged Social Groups / Ontology-Driven Self-Organization of Politically Engaged Social Groups

Belák, Václav January 2009 (has links)
This thesis deals with the use of knowledge technologies in support of the self-organization of people with shared political goals. It first provides a theoretical background for the development of a social-semantic system intended to support self-organization, and then applies this background in the development of a core ontology and algorithms supporting the self-organization of people. It also presents the design and implementation of a proof-of-concept social-semantic web application that has been built to test our research. The application stores all data in an RDF store and represents them using the core ontology. Descriptions of content are disambiguated using the WordNet thesaurus. Emerging politically engaged groups can establish themselves as local political initiatives, NGOs, or even new political parties. The system may therefore help people easily participate in solving the issues that affect them.
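One simple ingredient of such self-organization support, grouping users whose interests overlap once descriptions have been disambiguated, might be sketched as follows (the users and topics are invented; the thesis itself works over an RDF store with WordNet-based disambiguation):

```python
from itertools import combinations

# Invented users and their disambiguated political-topic interests.
INTERESTS = {
    "ana": {"public transport", "cycling"},
    "ben": {"cycling", "air quality"},
    "eva": {"tax policy"},
}

def candidate_groups(min_shared=1):
    """Pairs of users sharing at least min_shared topics --
    seeds for emerging politically engaged groups."""
    return sorted((a, b) for a, b in combinations(sorted(INTERESTS), 2)
                  if len(INTERESTS[a] & INTERESTS[b]) >= min_shared)

pairs = candidate_groups()
```

Here the shared "cycling" interest links two of the three users, suggesting a seed group, while the third user remains ungrouped.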
189

Impact analysis in description logic ontologies

Goncalves, Joao Rafael Landeiro De sousa January 2014 (has links)
With the growing popularity of the Web Ontology Language (OWL) as a logic-based ontology language, as well as advancements in the language itself, the need for more sophisticated and up-to-date ontology engineering services increases as well. While, for instance, there is active focus on new reasoners and optimisations, other services fall short of advancing at the same rate (it suffices to compare the number of freely-available reasoners with that of ontology editors). In particular, very little is understood about how ontologies evolve over time, and how reasoners' performance varies as the input changes. Given the evolving nature of ontologies, detecting and presenting changes (via a so-called diff) between them is an essential engineering service, especially for version control systems or to support change analysis. In this thesis we address the diff problem for description logic (DL) based ontologies, specifically OWL 2 DL ontologies based on the SROIQ DL. The outcomes are novel algorithms employing both syntactic and semantic techniques to, firstly, detect axiom changes and which terms had their meaning affected between ontologies; secondly, categorise their impact (for example, determining that an axiom is a stronger version of another); and finally, align changes appropriately, i.e., align source and target of axiom changes (so the stronger axiom with the weaker one, from our example), and axioms with the terms they affect. Subsequently, we present a theory of reasoner performance heterogeneity, based on field observations related to reasoner performance variability phenomena. Our hypothesis is that there exist two kinds of performance behaviour: an ontology/reasoner combination can be performance-homogeneous or performance-heterogeneous. Finally, we verify that performance-heterogeneous reasoner/ontology combinations contain small, performance-degrading sets of axioms, which we call hot spots.
We devise a performance hot spot finding technique, and show that hot spots provide a promising basis for engineering efficient reasoners.
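The syntactic layer of such a diff can be sketched in a few lines (toy axioms represented as strings rather than SROIQ, and with no semantic impact analysis): compute the added and removed axioms between two versions, plus the terms those axioms mention.

```python
def diff(old, new):
    """Syntactic ontology diff: axioms added, axioms removed, and the
    terms whose stated meaning was touched by either change."""
    added, removed = new - old, old - new
    # Keywords of the toy axiom syntax are not terms.
    keywords = {"SubClassOf", "and"}
    affected = {term for ax in added | removed for term in ax.split()
                if term not in keywords}
    return added, removed, sorted(affected)

# Two invented versions of a tiny ontology:
v1 = {"Cat SubClassOf Animal", "Dog SubClassOf Animal"}
v2 = {"Cat SubClassOf Animal", "Dog SubClassOf Pet"}
added, removed, affected = diff(v1, v2)
```

A semantic diff of the kind the thesis develops would go further, e.g. recognising when an added axiom is merely a stronger version of a removed one and aligning the pair, which pure set difference cannot see.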
190

Visualization of Ontologies on the Semantic Web / Vizualizace ontologií na sémantickém webu

Dudáš, Marek January 2012 (has links)
For ontology development, sharing, and usage, the availability of a suitable visualization method is essential. Much research has been done in this area, but an ideal method is still missing. One of the reasons might be that most of the available tools offer a general visualization, while various use cases require specific approaches to visualization. This master's thesis gives a general overview of current visualization methods and their implementations. Both the methods and specific visualization tools are evaluated from the perspective of several possible use case categories. Special focus is given to the visualization of ontology transformations. As none of the available implementations is suitable for this use case as is, an alternative approach is proposed. This approach is based on using several existing visualization implementations together and allowing switching between them using a zoom-like function. It is experimentally implemented as a Protégé plugin.
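The proposed zoom-like switching between visualization methods might look like the following sketch (the renderer names and zoom thresholds are invented, not taken from the Protégé plugin): coarse zoom levels get an overview renderer, finer levels a more detailed one.

```python
# Invented renderers, ordered by the maximum zoom level they cover.
RENDERERS = [
    (3, "class-hierarchy overview"),
    (7, "node-link graph"),
    (10, "axiom-level detail view"),
]

def renderer_for(zoom):
    """Pick the first renderer whose zoom ceiling covers the level,
    so zooming in switches visualization methods seamlessly."""
    for ceiling, name in RENDERERS:
        if zoom <= ceiling:
            return name
    return RENDERERS[-1][1]

choice = renderer_for(5)
```

Zooming from level 2 to level 5 would thus swap the hierarchy overview for a node-link graph, the kind of transition the thesis proposes making a first-class interaction.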
