  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Learning OWL Class Expressions

Lehmann, Jens 24 June 2010 (has links) (PDF)
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems. However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data, because engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web.

Leveraging machine learning approaches for these tasks requires methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. This thesis shows that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular when extensional information (i.e. facts, instance data) is abundantly available while the corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. Being able to learn OWL class expressions is thus a step towards enriching OWL knowledge bases so as to enable powerful reasoning, consistency checking, and improved querying. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work. The developed algorithms are not restricted to ontology engineering and can handle other learning problems; indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, since such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can therefore be understood as broadening the scope of research and applications of ILP methods, which is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further. The thesis starts by establishing the necessary theoretical basis, continues with the specification of algorithms, contains their evaluation, and finally presents a number of application scenarios.

The research contributions of this work are threefold. The first contribution is a complete analysis of desirable properties of refinement operators in description logics. Refinement operators are used to traverse the target search space and are therefore a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for use in a learning algorithm; the key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of further new results for other property combinations are also proven. The need for these investigations had already been expressed in several articles prior to this PhD work, and the theoretical limitations established here provide clear criteria for the design of refinement operators. The analysis makes as few assumptions as possible about the description language used. The second contribution is the development of two refinement operators. The first supports a wide range of concept constructors; it is shown to be complete and can be extended to a proper operator, making it the most expressive operator designed for a description language so far. The second operator targets the lightweight language EL and is weakly complete, proper, and finite; it is straightforward to extend it to an ideal operator if required, making it the first published ideal refinement operator in description logics. While the two operators differ considerably in their technical details, both use background knowledge efficiently. The third contribution is the actual learning algorithms built on the introduced operators, including new redundancy-elimination and infinity-handling techniques. According to the evaluation, the algorithms produce very readable solutions while their accuracy is competitive with the state of the art in machine learning. Several optimisations for achieving scalability are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach. The research contributions are evaluated on benchmark problems and in use cases: standard statistical measures such as cross-validation and significance tests show that the approaches are very competitive, and an ontology engineering case study provides evidence that the described algorithms solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework, which provides the source code for all algorithms and examples as open source and has been incorporated into other projects.
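To make the role of refinement operators concrete, the following is a minimal, illustrative sketch of top-down refinement over a toy class-expression language; the class hierarchy, property list, and accuracy-guided search are invented for demonstration and this is not DL-Learner code. A real implementation would delegate the instance check (`covers`) to an OWL reasoner.

```python
# Toy top-down refinement operator and greedy learner, in the spirit of the
# operators analysed in the thesis. All schema data below is an invented example.

SUBCLASSES = {  # assumed class hierarchy: parent -> direct subclasses
    "Thing": ["Person", "Publication"],
    "Person": ["Researcher", "Student"],
    "Publication": ["Thesis", "Article"],
}
PROPERTIES = ["author", "topic"]  # assumed object properties


def refine(expr: str) -> list[str]:
    """Downward refinements of a (very) simplified class expression string."""
    steps = []
    # 1. specialise a named class to one of its direct subclasses
    steps.extend(SUBCLASSES.get(expr, []))
    # 2. add an existential restriction over some property
    steps.extend(f"({expr} and {prop} some Thing)" for prop in PROPERTIES)
    return steps


def learn(positives, negatives, covers, max_steps=100):
    """Greedy search over refinements, scored by simple predictive accuracy.
    `covers(expr, individual)` is an assumed callback; in practice an OWL
    reasoner would decide instance membership."""
    def accuracy(expr):
        tp = sum(covers(expr, p) for p in positives)
        tn = sum(not covers(expr, n) for n in negatives)
        return (tp + tn) / (len(positives) + len(negatives))

    frontier = ["Thing"]
    best = ("Thing", accuracy("Thing"))
    for _ in range(max_steps):
        if not frontier:
            break
        expr = frontier.pop(0)
        for candidate in refine(expr):
            acc = accuracy(candidate)
            if acc > best[1]:
                best = (candidate, acc)
            frontier.append(candidate)
    return best
```

The sketch deliberately ignores the properties discussed in the thesis (properness, redundancy, infinity handling); it only shows how a learner traverses the space of class expressions that a refinement operator generates.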
212

Um ambiente para especificação e execução ad-hoc de processos de negócio baseados em serviços Web / An environment for ad-hoc specification and execution of Web service-based business processes

Mendes Júnior, José Reginaldo de Sousa 29 August 2008 (has links)
Made available in DSpace on 2014-12-17T15:47:50Z (GMT). No. of bitstreams: 1 JoseRSMJ.pdf: 1414617 bytes, checksum: 1d2d0cfcaa6654701268463c98f3c2e9 (MD5) Previous issue date: 2008-08-29 / Conselho Nacional de Desenvolvimento Científico e Tecnológico / The recent focus on Web Services and Semantic Web technologies has led to several research projects that address the Web service composition problem in different ways. Nevertheless, the challenge of creating an environment in which an abstract business process can be specified and then automatically realised by a dynamically composed service remains an open problem. The industry standards WSDL and BPEL support only manual service composition, because they lack the semantics needed for Web services to be discovered, selected, and combined by software agents. Service ontologies provided by the Semantic Web enrich the syntactic descriptions of Web services and thereby facilitate the automation of tasks such as discovery and composition. This work presents WebFlowAH, an environment for the specification and ad-hoc execution of Web service-based business processes. WebFlowAH employs a common domain ontology to describe both Web services and business processes, and allows processes to be specified in terms of user goals or desires expressed with the concepts of this ontology. Processes can thus be specified in an abstract, high-level way, unburdening the user from the underlying details needed to actually run the process workflow.
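To illustrate the idea of goal-driven, ontology-based composition described above, here is a hedged sketch; the toy ontology, service registry, and backward-chaining strategy are invented for demonstration and do not reflect WebFlowAH's actual implementation.

```python
# Matching a user goal, expressed as a domain-ontology concept, to semantically
# annotated services, and chaining services whose outputs satisfy other
# services' inputs. All concepts and services are invented examples.

SUBCLASS_OF = {  # toy domain ontology: child -> parent
    "Invoice": "Document",
    "PaidInvoice": "Invoice",
    "Order": "Document",
}

def subsumes(general, specific):
    """True if `specific` equals `general` or is one of its descendants."""
    while specific is not None:
        if specific == general:
            return True
        specific = SUBCLASS_OF.get(specific)
    return False

SERVICES = [  # each service annotated with input/output concepts
    {"name": "CreateOrder", "inputs": [],          "output": "Order"},
    {"name": "BillOrder",   "inputs": ["Order"],   "output": "Invoice"},
    {"name": "PayInvoice",  "inputs": ["Invoice"], "output": "PaidInvoice"},
]

def compose(goal_concept, available=()):
    """Backward-chain from the goal: pick a service producing the goal concept,
    then recursively satisfy its inputs."""
    for svc in SERVICES:
        if subsumes(goal_concept, svc["output"]):
            plan = []
            for needed in svc["inputs"]:
                if not any(subsumes(needed, a) for a in available):
                    plan += compose(needed, available)
            return plan + [svc["name"]]
    raise ValueError(f"no service chain produces {goal_concept}")

print(compose("PaidInvoice"))  # -> ['CreateOrder', 'BillOrder', 'PayInvoice']
```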
213

AutoWebS: um Ambiente para Modelagem e Geração Automática de Serviços Web Semânticos / AutoWebS: An Environment for Modelling and Automatic Generation of Semantic Web Services

Silva, Thiago Pereira da 06 August 2012 (has links)
Made available in DSpace on 2014-12-17T15:48:03Z (GMT). No. of bitstreams: 1 ThiagoPS_DISSERT.pdf: 3143029 bytes, checksum: 0f97ea16a97dc298694ca58c37e62914 (MD5) Previous issue date: 2012-08-06 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Typically, Web services contain only syntactic information that describes their interfaces. Due to the lack of semantic descriptions of Web services, service composition becomes a difficult task. To address this problem, Web services can exploit ontologies for the semantic definition of their interfaces, thus facilitating the automation of service discovery, publication, mediation, invocation, and composition. However, ontology languages such as OWL-S have constructs that are not easy to understand, even for Web developers, and the existing tools that support their use expose many details that make them difficult to handle. This work presents an MDD tool called AutoWebS (Automatic Generation of Semantic Web Services) for developing OWL-S semantic Web services. AutoWebS uses an approach based on UML profiles and model transformations for the automatic generation of Web services and their semantic descriptions. It offers an environment that provides the features required to model, implement, compile, and deploy semantic Web services.
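As a rough illustration of the kind of artefact such a tool generates, the sketch below turns a hypothetical annotated operation (standing in for a UML-profile model) into a deliberately simplified OWL-S-style description using rdflib. The OWL-S 1.1 namespaces are the public ones, but the generated structure is reduced (no process model or grounding) and property placement is approximate, so it should not be read as a faithful OWL-S serialisation.

```python
from rdflib import Graph, Namespace, RDF, Literal

SERVICE = Namespace("http://www.daml.org/services/owl-s/1.1/Service.owl#")
PROFILE = Namespace("http://www.daml.org/services/owl-s/1.1/Profile.owl#")
PROCESS = Namespace("http://www.daml.org/services/owl-s/1.1/Process.owl#")
EX = Namespace("http://example.org/services#")

# hypothetical result of the UML-profile modelling step: one stereotyped operation
operation = {
    "name": "getWeather",
    "inputs": {"city": EX.City},
    "outputs": {"forecast": EX.Forecast},
}

def to_owls(op):
    """Generate a reduced OWL-S-style profile graph for one operation."""
    g = Graph()
    g.bind("service", SERVICE)
    g.bind("profile", PROFILE)
    g.bind("process", PROCESS)
    svc, prof = EX[op["name"] + "Service"], EX[op["name"] + "Profile"]
    g.add((svc, RDF.type, SERVICE.Service))
    g.add((prof, RDF.type, PROFILE.Profile))
    g.add((svc, SERVICE.presents, prof))
    g.add((prof, PROFILE.serviceName, Literal(op["name"])))
    for direction, cls, prop in (("inputs", PROCESS.Input, PROFILE.hasInput),
                                 ("outputs", PROCESS.Output, PROFILE.hasOutput)):
        for pname, ptype in op[direction].items():
            param = EX[f'{op["name"]}_{pname}']
            g.add((param, RDF.type, cls))
            g.add((param, PROCESS.parameterType, Literal(str(ptype))))
            g.add((prof, prop, param))
    return g

print(to_owls(operation).serialize(format="turtle"))
```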
214

Ontologies et web sémantique pour une construction évolutive d'applications dédiées à la logistique / Ontologies and semantic web for an evolutive development of logistic applications

Hendi, Hayder 04 December 2017 (has links)
Logistics problems are often complex combinatorial problems. They also implicitly involve processes, actors, activities, and methods covering many aspects that must be considered: a single problem may involve sale/purchase, transport/delivery, and stock-management processes. These processes are so diverse and interconnected that it is difficult for a logistics expert to master all of them. In this thesis, we propose making the conceptual and semantic knowledge about logistics processes explicit by means of ontologies. This explicit knowledge is then used to build a knowledge-based system that guides logistics experts in the incremental, semi-automatic construction of software solutions to the problem they face at a given moment. We define an ontology of the logistics domain connected to an ontology of optimisation problems, thereby establishing an explicit semantic link between logistics and optimisation. This allows the logistics expert to identify precisely and unambiguously the logistics problem at hand and the associated optimisation problems. Identifying the problems then drives a selection process that ranges from choosing the precise logistics process to implement, to choosing the method for solving the combinatorial problem, and finally to discovering the software component to invoke, materialised as a Web service. The approach we adopted and implemented has been evaluated on vehicle routing problems, a passenger rail transport problem, and container terminal problems.
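A hedged, highly simplified sketch of the guided selection chain described above (logistics problem → optimisation problem → solution method → Web service); all problem names, methods, and endpoints are invented, and the real approach performs this mapping by ontology-based reasoning rather than dictionary lookup.

```python
# Toy stand-in for the logistics/optimisation knowledge base and its guidance.

LOGISTICS_TO_OPTIMISATION = {
    "delivery_planning": "vehicle_routing_problem",
    "container_stacking": "container_terminal_problem",
    "train_scheduling": "passenger_rail_timetabling",
}

SOLVER_SERVICES = {
    "vehicle_routing_problem": [
        {"method": "savings_heuristic", "endpoint": "http://example.org/ws/savings"},
        {"method": "tabu_search", "endpoint": "http://example.org/ws/tabu"},
    ],
    "container_terminal_problem": [
        {"method": "greedy_stacking", "endpoint": "http://example.org/ws/stack"},
    ],
    "passenger_rail_timetabling": [
        {"method": "cp_scheduler", "endpoint": "http://example.org/ws/cp"},
    ],
}

def guide(logistics_problem: str):
    """Return the associated optimisation problem and candidate solver services."""
    opt = LOGISTICS_TO_OPTIMISATION.get(logistics_problem)
    if opt is None:
        raise KeyError(f"unknown logistics problem: {logistics_problem}")
    return opt, SOLVER_SERVICES.get(opt, [])

problem, services = guide("delivery_planning")
print(problem, [s["method"] for s in services])
```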
215

Extração de casos de teste utilizando Redes de Petri hierárquicas e validação de resultados utilizando OWL. / Test case extraction using hierarchical Petri Nets and results validation using OWL.

August Baumgartner Neto 27 April 2015 (has links)
This work proposes two methods for software system testing: the first extracts test cases from a model developed in hierarchical Petri nets, and the second validates the results after test execution using an OWL-S model. Both processes increase the quality of the developed system by reducing the risk of insufficient coverage or incomplete testing of a functionality. The first technique consists of five steps: i) evaluation of the system and identification of separable modules and entities; ii) identification of states and transitions; iii) system modelling (bottom-up); iv) validation of the created model by evaluating the workflow of each functionality; and v) extraction of test cases using one of the three test coverage criteria presented. The second method is applied after test execution and also has five steps: i) an OWL (Web Ontology Language) model of the system is built, containing all significant information about the application's business rules and identifying the classes, properties, and axioms that govern it; ii) the initial state before execution is represented in the model by inserting the instances (individuals) present; iii) after the test cases are executed, the model is updated by inserting (without deleting the existing instances) the instances that represent the new state of the application; iv) a reasoner is used to make inferences over the OWL model and check whether it remains consistent, i.e. whether there are errors in the application; v) finally, the initial-state instances are compared with the final-state instances to verify whether elements were changed, created, or deleted correctly. The proposed process is intended mainly for black-box functional testing, but can easily be adapted to white-box testing. The test cases obtained were similar to those produced by manual analysis while maintaining the same system coverage, the validation proved consistent with the expected results, and the ontological model proved easy and intuitive to maintain.
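As a hedged illustration of step v of the second method, the sketch below compares the instance sets recorded before and after test execution; the individuals and properties are invented, and in the actual approach the consistency check of step iv would be performed by an OWL reasoner beforehand (e.g. via owlready2's sync_reasoner).

```python
# Minimal sketch (assumed example, not the authors' implementation) of comparing
# the model state before and after a test case to find created, deleted, and
# modified individuals.

def diff_instances(before: dict, after: dict):
    """`before`/`after` map individual IRIs to a dict of property values."""
    created = {iri for iri in after if iri not in before}
    deleted = {iri for iri in before if iri not in after}
    modified = {
        iri for iri in before
        if iri in after and before[iri] != after[iri]
    }
    return created, deleted, modified

# invented example: state captured before and after running a test case
before = {
    "ex:order1": {"status": "open", "items": 2},
}
after = {
    "ex:order1": {"status": "closed", "items": 2},  # expected modification
    "ex:invoice1": {"order": "ex:order1"},          # expected creation
}

created, deleted, modified = diff_instances(before, after)
assert created == {"ex:invoice1"} and not deleted and modified == {"ex:order1"}
print("state transition matches the expected test outcome")
```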
216

Ontologies pour la gestion de sécurité ferroviaire : intégration de l'analyse dysfonctionnelle dans la conception / Ontologies for railway safety management : integration of the dysfunctional analysis into the design

Debbech, Sana 14 October 2019 (has links)
Safety is an emergent property of safety-critical systems (SCS), including railway systems. This emergent aspect complicates their development process and requires thorough reasoning to reduce hazards. This thesis proposes an ontological approach that integrates safety activities from the early design stages of SCS. This structured framework provides a semantic harmonisation between the domains involved, such as safety engineering and Goal-Oriented Requirements Engineering (GORE). The business logic integrated into this approach is validated on real railway accident scenarios and a remotely operated mission. First, we propose a dysfunctional analysis ontology called DAO, founded on the upper-level ontology UFO. DAO considers the socio-technical and environmental aspects of SCS and integrates the different types of faults and cognitive properties related, respectively, to technical failures and human errors. The DAO conceptual model is expressed in OntoUML and formalised in OWL in order to provide reasoning support. Then, a semantic bridge is established between safety measures, safety goals, and safety requirements through the development of a goal-oriented safety management ontology called GOSMO. The management of safety decisions is based on a reinterpretation of the Or-BAC access control model from a safety point of view. To ensure the overall consistency of requirements, GOSMO structures the management of requirement changes and their traceability.
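To make the "semantic bridge" idea tangible, here is a hedged sketch with an invented vocabulary (not the actual GOSMO ontology) that links a hazard to a safety goal, requirements, and measures, and answers a traceability question with a SPARQL query via rdflib.

```python
from rdflib import Graph, Namespace, RDF, Literal

G = Namespace("http://example.org/gosmo-like#")  # hypothetical namespace
g = Graph()
g.bind("g", G)

# hazard -> mitigating goal -> refining requirement -> satisfying measure
triples = [
    (G.H1, RDF.type, G.Hazard),
    (G.H1, G.label, Literal("train departs with doors open")),
    (G.SG1, RDF.type, G.SafetyGoal),
    (G.SG1, G.mitigates, G.H1),
    (G.SR1, RDF.type, G.SafetyRequirement),
    (G.SR1, G.refines, G.SG1),
    (G.SM1, RDF.type, G.SafetyMeasure),
    (G.SM1, G.satisfies, G.SR1),
]
for t in triples:
    g.add(t)

# traceability query: which measures ultimately address hazard H1?
query = """
SELECT ?measure WHERE {
  ?measure a g:SafetyMeasure ;
           g:satisfies ?req .
  ?req g:refines ?goal .
  ?goal g:mitigates g:H1 .
}
"""
for row in g.query(query, initNs={"g": G}):
    print(row.measure)
```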
218

Representing chemical structures using OWL and description graphs

Hastings, Joanna Kathleen 11 1900 (has links)
Objects can be said to be structured when their representation also contains their parts. While OWL in general can describe structured objects, description graphs are a recent, decidable extension to OWL that supports the description of classes of structured objects whose parts are related in complex ways. Classes of chemical entities such as molecules, ions and groups (parts of molecules) are often characterised by the way in which the constituent atoms of their instances are connected via chemical bonds. For chemoinformatics tools and applications, this internal structure is represented using chemical graphs. We here present a chemical knowledge base based on the standard chemical graph model using description graphs, OWL and rules. We include in our ontology chemical classes, groups, and molecules, together with their structures encoded as description graphs. We show how role-safe rules can be used to determine parthood between groups and molecules based on the graph structures and to determine basic chemical properties. Finally, we investigate the scalability of the technology by developing an automatic utility that converts standard chemical graphs into description graphs, and by converting a large number of diverse graphs obtained from a publicly available chemical database. / Computer Science (School of Computing) / M. Sc. (Computer Science)
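As a hedged illustration of the parthood question addressed above, the sketch below uses plain labelled graphs and subgraph matching (via networkx) in place of OWL description graphs and role-safe rules; the structures are simplified examples.

```python
# A group (e.g. a hydroxyl group) is a candidate part of a molecule if its graph
# embeds into the molecule's graph under matching element labels.

import networkx as nx
from networkx.algorithms import isomorphism

def chemical_graph(atoms, bonds):
    """atoms: {index: element symbol}; bonds: iterable of (i, j) pairs."""
    g = nx.Graph()
    for idx, element in atoms.items():
        g.add_node(idx, element=element)
    g.add_edges_from(bonds)
    return g

# methanol fragment: C-O-H (remaining hydrogens omitted for brevity)
methanol = chemical_graph({0: "C", 1: "O", 2: "H"}, [(0, 1), (1, 2)])
# hydroxyl group: O-H
hydroxyl = chemical_graph({0: "O", 1: "H"}, [(0, 1)])

def has_group(mol, group):
    """True if `group` occurs as an element-labelled subgraph of `mol`."""
    matcher = isomorphism.GraphMatcher(
        mol, group,
        node_match=isomorphism.categorical_node_match("element", None),
    )
    return matcher.subgraph_is_isomorphic()

print(has_group(methanol, hydroxyl))  # True: the fragment contains a hydroxyl group
```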
219

Geology of the Owl Head Mining District, Pinal County, Arizona

Barter, Charles F. January 1962 (has links)
The Owl Head mining district is located in south-central Pinal County, Arizona, within the Basin and Range province. Landforms characteristic of this province, particularly pediments, are abundant in the area. Precambrian rocks of the Owl Head mining district include the Pinal schist; gneiss; intrusions of granite, quartz monzonite and quartz diorite; and small amounts of Dripping Spring quartzite and metamorphosed Mescal limestone. These have been intruded by dikes and plugs of diorite and andesite, and are unconformably overlain by volcanic rocks and continental sedimentary rocks of Tertiary and Quaternary age. No rocks of the Paleozoic and Mesozoic eras have been recognized. The structural trends of the Owl Head mining district probably reflect four major lineament directions. The dominant structural trends in the area are north and northwest; subordinate to these are northeast and easterly trends. The strike of the northerly trend varies from due north to N30°E and was probably developed during the Mazatzal Revolution. The northwest trend has probably been superposed over the northerly trend at some later date. Copper mineralization is abundant in the area, and prospecting by both individuals and mining companies has been extensive. To date no ore body of any magnitude has been found, but evidence suggests that an economic copper deposit may exist within the area. The copper mineralization visible at the surface consists mainly of the secondary copper minerals chrysocolla, malachite, azurite, and chalcocite, with chrysocolla being by far the most abundant. Copper minerals occur in all rocks older than middle Tertiary age. Placer magnetite deposits are found in the alluvial material of the area, and one such deposit is now being mined.
220

Migrace kalouse ušatého (Asio otus) v podmínkách střední Evropy / Long-eared owl (Asio otus) migration within Central Europe

Fraitágová, Iveta January 2014 (has links)
The aim of the present thesis is to review the ringing recoveries of the Long-eared Owl (Asio otus) in the territory of the former Czechoslovakia and the Czech Republic. The bird-ringing data used in this thesis come from the archive of the National Museum in Prague. The parts of the thesis are as follows: 1) the history of ringing in the Czech Republic; 2) bird adaptations for flight and migration; 3) the control of migration; 4) data on Long-eared Owls ringed as young birds in the nest (pulli); 5) data on Long-eared Owls ringed as adults (ad.) that were caught and checked while wintering in the Czech Republic; 6) recoveries of Long-eared Owls ringed by various European bird ringing centres and found in the Czech Republic; 7) causes of mortality of the Long-eared Owl; 8) a census of Long-eared Owl recoveries in the Czech Republic from 1934 to 2011 (appendix). Key words: Migration, the Long-eared Owl, Ringing, Ringing Recoveries, Mortality
