341 |
Ontologies pour la gestion de sécurité ferroviaire : intégration de l'analyse dysfonctionnelle dans la conception / Ontologies for railway safety management : integration of the dysfunctional analysis into the design. Debbech, Sana 14 October 2019 (has links)
La sécurité-innocuité est une propriété émergente des systèmes critiques de sécurité (SCS), notamment les systèmes ferroviaires. Cet aspect émergent complexifie leur processus de développement et nécessite un raisonnement judicieux permettant de diminuer les dangers. Cette thèse propose une approche ontologique qui intègre les activités de sécurité dès les premières phases de conception des SCS. Ce cadre structuré offre une harmonisation sémantique entre les domaines impliqués, tels que l'ingénierie de sécurité et l'Ingénierie des Exigences Dirigée par les Buts (IEDB). La logique métier intégrée dans cette approche est validée par des cas d'étude ferroviaires d'accidents réels et d'une mission télé-opérée. Dans un premier temps, nous avons proposé une ontologie d'analyse dysfonctionnelle appelée DAO et fondée sur l'ontologie de haut niveau UFO. DAO considère les aspects sociaux-techniques et environnementaux des SCS et intègre les différents types de fautes et de propriétés cognitives liés respectivement aux défaillances techniques et aux erreurs humaines. Le modèle conceptuel de DAO est exprimé en OntoUML et formalisé en langage OWL afin de fournir un support de raisonnement. Ensuite, un pont sémantique est établi entre les mesures de sécurité, les buts de sécurité et les exigences de sécurité par le développement d'une ontologie de gestion de sécurité orientée-but, appelée GOSMO. La gestion des décisions de sécurité s'appuie sur la réinterprétation du modèle de contrôle d'accès Or-Bac d'un point de vue sécurité-innocuité. Afin d'assurer la cohérence globale des exigences, GOSMO permet de structurer la gestion des évolutions des exigences et leur traçabilité. / Safety is an emergent property of safety-critical systems (SCS), including railway systems. This emergent aspect complicates their development process and requires thorough reasoning to reduce hazards. This thesis proposes an ontological approach that integrates safety activities from the early design stages of SCS. This structured framework provides a semantic harmonization between the involved domains, such as safety engineering and Goal-Oriented Requirements Engineering (GORE). The business logic integrated in this approach is validated by real railway accident case studies and a remotely operated mission. First, we proposed a dysfunctional analysis ontology called DAO, based on the upper-level ontology UFO. DAO considers the socio-technical and environmental aspects of SCS and integrates the different types of faults and cognitive properties that are respectively related to technical failures and human errors. The DAO conceptual model is expressed in OntoUML and formalized in the OWL language in order to provide reasoning support. Then, a semantic bridge is established between safety measures, safety goals and safety requirements through the development of a goal-oriented safety management ontology, called GOSMO. The management of safety decisions is based on the reinterpretation of the Or-Bac access control model from a safety point of view. In order to ensure the overall consistency of requirements, GOSMO allows structuring the management of requirement changes and their traceability.
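The abstract above states that the DAO model is formalized in OWL so that a reasoner can relate hazards, safety measures, safety goals and requirements. As a rough illustration of that idea (the vocabulary and code below are invented for this example, not taken from the thesis), the Python sketch uses the rdflib library to assert a few hypothetical triples of that shape and retrieve them with a SPARQL query.

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical namespace and terms; the real DAO/GOSMO IRIs are not given in the abstract.
DAO = Namespace("http://example.org/dao#")

g = Graph()
g.bind("dao", DAO)

# A technical failure modelled as a hazard, mitigated by a safety measure
# that satisfies a safety goal (illustrative triples only).
g.add((DAO.AxleCounterFailure, RDF.type, DAO.Hazard))
g.add((DAO.TrackCircuitFallback, RDF.type, DAO.SafetyMeasure))
g.add((DAO.TrackCircuitFallback, DAO.mitigates, DAO.AxleCounterFailure))
g.add((DAO.TrackCircuitFallback, DAO.satisfies, DAO.AvoidCollisionGoal))

# SPARQL query: which safety measures address which hazards?
q = """
PREFIX dao: <http://example.org/dao#>
SELECT ?measure ?hazard WHERE {
    ?measure a dao:SafetyMeasure ;
             dao:mitigates ?hazard .
    ?hazard a dao:Hazard .
}
"""
for measure, hazard in g.query(q):
    print(measure, "mitigates", hazard)
```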
|
342 |
Automatic sensor discovery and management to implement effective mechanism for data fusion and data aggregation / Découverte et gestion autonomique des capteurs pour une mise en oeuvre de mécanismes efficaces de fusion et d'agrégation de données. Nachabe Ismail, Lina 06 October 2015 (has links)
Actuellement, des descriptions basées sur de simples schémas XML sont utilisées pour décrire un capteur/actuateur et les données qu’il mesure et fournit. Ces schémas sont généralement formalisés en utilisant le langage SensorML (Sensor Model Language), ne permettant qu’une description hiérarchique basique des attributs des objets sans aucune notion de liens sémantiques, de concepts et de relations entre concepts. Nous pensons au contraire que des descriptions sémantiques des capteurs/actuateurs sont nécessaires au design et à la mise en œuvre de mécanismes efficaces d’inférence, de fusion et de composition de données. Cette ontologie sémantique permettra de masquer l’hétérogénéité des données collectées et facilitera leur fusion et leur composition au sein d’un environnement de gestion de capteur similaire à celui d’une architecture ouverte orientée services. La première partie des travaux de cette thèse porte donc sur la conception et la validation d’une ontologie sémantique légère, extensible et générique de description des données fournies par un capteur/actuateur. Cette description ontologique de données brutes devra être conçue : • d’une manière extensible et légère afin d’être applicable à des équipements embarqués hétérogènes, • comme sous élément d’une ontologie de plus haut niveau (upper level ontology) utilisée pour modéliser les capteurs et actuateurs (en tant qu’équipements et non plus de données fournies), ainsi que les informations mesurées (information veut dire ici donnée de plus haut niveau issue du traitement et de la fusion des données brutes). La seconde partie des travaux de cette thèse portera sur la spécification et la qualification : • d’une architecture générique orientée service (SOA) permettant la découverte et la gestion d’un capteur/actuateur, et des données qu’il fournit (incluant leurs agrégation et fusion en s’appuyant sur les mécanismes de composition de services de l’architecture SOA), à l’identique d’un service composite de plus haut niveau, • d’un mécanisme amélioré de collecte de données à grande échelle, au dessus de cette ontologie descriptive. L’objectif des travaux de la thèse est de fournir des facilitateurs permettant une mise en œuvre de mécanismes efficaces de collecte, de fusion et d’agrégation de données, et par extension de prise de décisions. L’ontologie de haut niveau proposée sera quant à elle pourvue de tous les attributs permettant une représentation, une gestion et une composition des ‘capteurs, actuateurs et objets’ basées sur des architectures orientées services (Service Oriented Architecture ou SOA). Cette ontologie devrait aussi permettre la prise en compte de l’information transporter (sémantique) dans les mécanismes de routage (i.e. routage basé information). Les aspects liés à l’optimisation et à la modélisation constitueront aussi une des composantes fortes de cette thèse. Les problématiques à résoudre pourraient être notamment : • La proposition du langage de description le mieux adapté (compromis entre richesse, complexité et flexibilité), • La définition de la structure optimum de l’architecture de découverte et de gestion d’un capteur/actuateur, • L’identification d’une solution optimum au problème de la collecte à grande échelle des données de capteurs/actuateurs / The constant evolution of technology in terms of inexpensive and embedded wireless interfaces and powerful chipsets has leads to the massive usage and development of wireless sensor networks (WSNs). 
This potentially affects all aspects of our lives ranging from home automation (e.g. Smart Buildings), passing through e-Health applications, environmental observations and broadcasting, food sustainability, energy management and Smart Grids, military services to many other applications. WSNs are formed of an increasing number of sensor/actuator/relay/sink devices, generally self-organized in clusters and domain dedicated, that are provided by an increasing number of manufacturers, which leads to interoperability problems (e.g., heterogeneous interfaces and/or grounding, heterogeneous descriptions, profiles, models …). Moreover, these networks are generally implemented as vertical solutions not able to interoperate with each other. The data provided by these WSNs are also very heterogeneous because they are coming from sensing nodes with various abilities (e.g., different sensing ranges, formats, coding schemes …). To tackle this heterogeneity and interoperability problems, these WSNs’ nodes, as well as the data sensed and/or transmitted, need to be consistently and formally represented and managed through suitable abstraction techniques and generic information models. Therefore, an explicit semantic to every terminology should be assigned and an open data model dedicated for WSNs should be introduced. SensorML, proposed by OGC in 2010, has been considered an essential step toward data modeling specification in WSNs. Nevertheless, it is based on XML schema only permitting basic hierarchical description of the data, hence neglecting any semantic representation. Furthermore, most of the researches that have used semantic techniques for developing their data models are only focused on modeling merely sensors and actuators (this is e.g. the case of SSN-XG). Other researches dealt with data provided by WSNs, but without modelling the data type, quality and states (like e.g. OntoSensor). That is why the main aim of this thesis is to specify and formalize an open data model for WSNs in order to mask the aforementioned heterogeneity and interoperability between different systems and applications. This model will also facilitate the data fusion and aggregation through an open management architecture like environment as, for example, a service oriented one. This thesis can thus be split into two main objectives: 1)To formalize a semantic open data model for generically describing a WSN, sensors/actuators and their corresponding data. This model should be light enough to respect the low power and thus low energy limitation of such network, generic for enabling the description of the wide variety of WSNs, and extensible in a way that it can be modified and adapted based on the application. 2)To propose an upper service model and standardized enablers for enhancing sensor/actuator discovery, data fusion, data aggregation and WSN control and management. These service layer enablers will be used for improving the data collection in a large scale network and will facilitate the implementation of more efficient routing protocols, as well as decision making mechanisms in WSNs
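The thesis argues for a lightweight, extensible semantic description of sensors and their data in place of plain SensorML/XML schemas. The sketch below, written with the rdflib library, shows what such a machine-readable description might look like for one temperature sensor and one observation; the vocabulary is a placeholder invented for the example, not the data model proposed in the thesis.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Placeholder vocabulary; the thesis's own ontology terms are not reproduced here.
S = Namespace("http://example.org/sensors#")

g = Graph()
g.bind("s", S)

# Describe the sensor itself and the property it measures.
g.add((S.tempSensor01, RDF.type, S.Sensor))
g.add((S.tempSensor01, S.observes, S.AirTemperature))
g.add((S.tempSensor01, S.hasUnit, S.DegreeCelsius))

# Describe one observation produced by that sensor.
g.add((S.obs42, RDF.type, S.Observation))
g.add((S.obs42, S.madeBy, S.tempSensor01))
g.add((S.obs42, S.hasValue, Literal(21.5, datatype=XSD.float)))

# The Turtle serialization is the kind of self-describing payload a
# discovery or fusion service could exchange instead of an ad hoc schema.
print(g.serialize(format="turtle"))
```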
|
343 |
Role of description logic reasoning in ontology matching. Reul, Quentin H. January 2012 (has links)
Semantic interoperability is essential on the Semantic Web to enable different information systems to exchange data. Ontology matching has been recognised as a means to achieve semantic interoperability on the Web by identifying similar information in heterogeneous ontologies. Existing ontology matching approaches have two major limitations. The first limitation relates to similarity metrics, which provide a pessimistic value when considering complex objects such as strings and conceptual entities. The second limitation relates to the role of description logic reasoning. In particular, most approaches disregard implicit information about entities as a source of background knowledge. In this thesis, we first present a new similarity function, called the degree of commonality coefficient, to compute the overlap between two sets based on the similarity between their elements. The results of our evaluations show that the degree of commonality performs better than traditional set similarity metrics in the ontology matching task. Secondly, we have developed the Knowledge Organisation System Implicit Mapping (KOSIMap) framework, which differs from existing approaches by using description logic reasoning (i) to extract implicit information as background knowledge for every entity, and (ii) to remove inappropriate correspondences from an alignment. The results of our evaluation show that the use of Description Logic in the ontology matching task can increase coverage. We identify people interested in ontology matching and reasoning techniques as the target audience of this work
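The abstract introduces the degree of commonality coefficient as a set-overlap measure driven by element-level similarity, but it does not spell out the formula. The Python sketch below is one plausible reading, assuming that each element contributes its best match in the other set and that the two directions are averaged; that aggregation is an assumption made for illustration, not necessarily the exact definition used in the thesis.

```python
def degree_of_commonality(set_a, set_b, sim):
    """Overlap of two sets driven by element-level similarity.

    sim(x, y) returns a score in [0, 1]. Averaging each element's best
    match in both directions is an assumption for illustration; it is
    not necessarily the thesis's exact formulation.
    """
    if not set_a or not set_b:
        return 0.0
    a_to_b = sum(max(sim(a, b) for b in set_b) for a in set_a) / len(set_a)
    b_to_a = sum(max(sim(a, b) for a in set_a) for b in set_b) / len(set_b)
    return (a_to_b + b_to_a) / 2.0

def token_similarity(x, y):
    """Toy string similarity: Jaccard overlap of lower-cased tokens."""
    tx, ty = set(x.lower().split()), set(y.lower().split())
    return len(tx & ty) / len(tx | ty) if tx | ty else 0.0

labels_a = {"heart valve", "valve"}
labels_b = {"cardiac valve", "valve disorder"}
print(degree_of_commonality(labels_a, labels_b, token_similarity))
```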
|
344 |
Gamification of collaborative learning scenarios: an ontological engineering approach to deal with the motivation problem caused by computer-supported collaborative learning scripts / Gamificação de cenários de aprendizagem colaborativa: uma abordagem de engenharia de ontologias para lidar com o problema de motivação causado por scripts de aprendizagem colaborativa com suporte computacional. Challco, Geiser Chalco 19 October 2018 (has links)
Increasing both students' motivation and learning outcomes in Collaborative Learning (CL) activities is a challenge that the Computer-Supported Collaborative Learning (CSCL) community has been addressing in recent years. The use of CSCL scripts to structure and orchestrate the CL process has been shown to be effective in supporting meaningful interactions and better learning, but scripted collaboration often does not motivate students to participate in the CL process, which makes the use of scripts over time in CL activities more difficult. To deal with these motivational problems, researchers, educators and practitioners are now looking at gamification as a solution to motivate and engage students. However, gamification is a complex task, requiring from instructional designers and practitioners knowledge about game elements (such as leaderboards and point systems), game design (e.g. how to combine game elements) and their impact on motivation, engagement and learning. Moreover, gamification is highly context-dependent, requiring personalization for each participant and situation. Thus, to address these issues, an ontological engineering approach to gamify CL sessions has been proposed and developed in this dissertation. In this approach, an ontology has been formalized to enable the systematic representation of knowledge extracted from theories and best practices related to gamification. In this ontology, the concepts extracted from practices and theories related to gamification, and identified as relevant to deal with the motivational problems in scripted collaborative learning, have been formalized as ontological structures to be used by computer-based mechanisms and procedures in intelligent theory-aware systems. These mechanisms and procedures with ontological structures aim to provide support, giving advice and recommendations that help instructional designers and practitioners gamify CL sessions. To validate this approach, and to demonstrate its effectiveness and efficiency in dealing with the motivational problems in scripted collaborative learning, four empirical studies were conducted in real situations at the University of São Paulo with undergraduate Computer Science and Computer Engineering students. The results of the empirical studies demonstrated that, for CL activities where CSCL scripts are used as a method to orchestrate and structure the CL process, the ontological engineering approach to gamify CL scenarios is an effective and efficient solution to the motivational problems, because the CL sessions obtained with this approach properly affected the participants' motivation and learning outcomes. / Aumentar a motivação e os resultados de aprendizagem dos estudantes nas atividades de aprendizagem colaborativa é um desafio que a comunidade de Aprendizagem Colaborativa com Suporte Computacional tem abordado nos últimos anos. O uso de scripts para estruturar e orquestrar o processo de aprendizagem colaborativa demonstrou ser eficaz para dar suporte às interações significativas e um melhor aprendizado, mas a colaboração com scripts muitas vezes não motiva os alunos a participar do processo de aprendizagem colaborativa, o que dificulta o uso de scripts ao longo do tempo em atividades de aprendizagem colaborativas. Para lidar com problemas de motivação, os pesquisadores, educadores e profissionais estão agora olhando a gamificação como uma solução para motivar e envolver os alunos.
No entanto, a gamificação é uma tarefa complexa, exigindo de projetistas instrucionais e profissionais conhecimento sobre elementos do jogo (e.g. leaderboards e sistemas de pontos), design de jogos (e.g. como combinar elementos do jogo) e seu impacto na motivação, engajamento e aprendizado. Além disso, a gamificação é muito dependente do contexto, exigindo personalização para cada participante e situação. Assim, para abordar esses problemas, uma abordagem de engenharia de ontologias para gamificar sessões de aprendizagem colaborativa foi proposta e desenvolvida nesta dissertação. Nessa abordagem, uma ontologia foi formalizada para possibilitar a representação sistemática de conhecimentos extraídos de teorias e melhores práticas relacionadas à gamificação. Na ontologia, os conceitos, extraídos de práticas e teorias relacionadas à gamificação, e identificados como relevantes para lidar com problemas motivacionais na aprendizagem colaborativa com scripts, foram formalizados como estruturas ontológicas a serem utilizadas por mecanismos e procedimentos informatizados em sistemas inteligentes cientes de teorias. Esses mecanismos e procedimentos com estruturas ontológicas visam fornecer suporte para dar conselhos e recomendações que ajudarão os projetistas instrucionais e profissionais a gamificar as sessões de aprendizagem colaborativa. Para validar a abordagem e demonstrar sua eficácia e eficiência em lidar com problemas motivacionais na aprendizagem colaborativa com scripts, quatro estudos empíricos foram conduzidos em situações reais na Universidade de São Paulo com estudantes de graduação em Ciência da Computação e Engenharia da Computação. Os resultados dos estudos empíricos demonstraram que, para as atividades de aprendizagem colaborativa em que os scripts são usados como um método para orquestrar e estruturar o processo da aprendizagem colaborativa, a abordagem de engenharia ontológica para gamificar cenários de aprendizagem colaborativa é uma solução eficaz e eficiente para lidar com problemas motivacionais, porque as sessões de aprendizagem colaborativa obtidas por essa abordagem afetaram de maneira adequada a motivação e os resultados de aprendizagem dos participantes.
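The abstracts describe ontological structures that computer-based mechanisms consult in order to recommend game elements for a given motivational problem. The Python sketch below, using rdflib, shows the general shape of such a lookup; the game elements, problems and the addresses property are hypothetical names introduced only for illustration, not the ontology built in the dissertation.

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical mini-ontology linking motivational problems in scripted
# collaboration to candidate game elements; all names are illustrative.
GAM = Namespace("http://example.org/gamification#")

g = Graph()
g.add((GAM.Leaderboard, RDF.type, GAM.GameElement))
g.add((GAM.Leaderboard, GAM.addresses, GAM.LowEngagement))
g.add((GAM.PointSystem, RDF.type, GAM.GameElement))
g.add((GAM.PointSystem, GAM.addresses, GAM.LowParticipation))

def recommend(problem):
    """Return the game elements whose description addresses the given problem."""
    return [element for element, _, _ in g.triples((None, GAM.addresses, problem))]

print(recommend(GAM.LowEngagement))  # e.g. the Leaderboard individual
```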
|
345 |
Extração e consulta de informações do Currículo Lattes baseada em ontologias / Ontology-based Queries and Information Extraction from the Lattes CV. Galego, Eduardo Ferreira 06 November 2013 (has links)
A Plataforma Lattes é uma excelente base de dados de pesquisadores para a sociedade brasileira, adotada pela maioria das instituições de fomento, universidades e institutos de pesquisa do País. Entretanto, é limitada quanto à exibição de dados sumarizados de um grupos de pessoas, como por exemplo um departamento de pesquisa ou os orientandos de um ou mais professores. Diversos projetos já foram desenvolvidos propondo soluções para este problema, alguns inclusive desenvolvendo ontologias a partir do domínio de pesquisa. Este trabalho tem por objetivo integrar todas as funcionalidades destas ferramentas em uma única solução, a SOS Lattes. Serão apresentados os resultados obtidos no desenvolvimento desta solução e como o uso de ontologias auxilia nas atividades de identificação de inconsistências de dados, consultas para construção de relatórios consolidados e regras de inferência para correlacionar múltiplas bases de dados. Além disto, procura-se por meio deste trabalho contribuir com a expansão e disseminação da área de Web Semântica, por meio da criação de uma ferramenta capaz de extrair dados de páginas Web e disponibilizar sua estrutura semântica. Os conhecimentos adquiridos durante a pesquisa poderão ser úteis ao desenvolvimento de novas ferramentas atuando em diferentes ambientes. / The Lattes Platform is an excellent database of researchers for the Brazilian society , adopted by most Brazilian funding agencies, universities and research institutes. However, it is limited as to displaying summarized data from a group of people, such as a research department or students supervised by one or more professor. Several projects have already been developed which propose solutions to this problem, including some developing ontologies from the research domain. This work aims to integrate all the functionality of these tools in a single solution, SOS Lattes. The results obtained in the development of this solution are presented as well as the use of ontologies to help identifying inconsistencies in the data, queries for building consolidated reports and rules of inference for correlating multiple databases. Also, this work intends to contribute to the expansion and dissemination of the Semantic Web, by creating a tool that can extract data from Web pages and provide their semantic structure. The knowledge gained during the study may be useful for the development of new tools operating in different environments.
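The abstract explains that CV data extracted from Lattes pages is exposed with a semantic structure so that ontology-backed queries can build consolidated reports for a group of researchers. The rdflib sketch below illustrates that kind of aggregation query; the lat: vocabulary and the sample individuals are invented for the example and are not the SOS Lattes schema.

```python
from rdflib import Graph, Namespace, RDF

# Invented vocabulary and sample data standing in for extracted CV triples.
LAT = Namespace("http://example.org/lattes#")

g = Graph()
g.add((LAT.alice, RDF.type, LAT.Researcher))
g.add((LAT.alice, LAT.memberOf, LAT.csDepartment))
g.add((LAT.alice, LAT.authored, LAT.paper1))
g.add((LAT.bob, RDF.type, LAT.Researcher))
g.add((LAT.bob, LAT.memberOf, LAT.csDepartment))
g.add((LAT.bob, LAT.authored, LAT.paper2))

# Consolidated report: number of publications per department.
q = """
PREFIX lat: <http://example.org/lattes#>
SELECT ?dept (COUNT(?paper) AS ?total) WHERE {
    ?person lat:memberOf ?dept ;
            lat:authored ?paper .
}
GROUP BY ?dept
"""
for dept, total in g.query(q):
    print(dept, total)
```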
|
346 |
Desenvolvimento de técnica para recomendar atividades em workflows científicos: uma abordagem baseada em ontologias / Development of a strategy to scientific workflow activities recommendation: An ontology-based approach. Khouri, Adilson Lopes 16 March 2016 (has links)
O número de atividades disponibilizadas pelos sistemas gerenciadores de workflows científicos é grande, o que exige dos cientistas conhecerem muitas delas para aproveitar a capacidade de reutilização desses sistemas. Para minimizar este problema, a literatura apresenta algumas técnicas para recomendar atividades durante a construção de workflows científicos. Este projeto especificou e desenvolveu um sistema de recomendação de atividades híbrido, considerando informação sobre frequência, entrada e saídas das atividades, e anotações ontológicas para recomendar. Além disso, neste projeto é apresentada uma modelagem da recomendação de atividades como um problema de classificação e regressão, usando para isso cinco classificadores; cinco regressores; um classificador SVM composto, o qual usa o resultado dos outros classificadores e regressores para recomendar; e um ensemble de classificadores Rotation Forest. A técnica proposta foi comparada com as outras técnicas da literatura e com os classificadores e regressores, por meio da validação cruzada em 10 subconjuntos, apresentando como resultado uma recomendação mais precisa, com medida MRR ao menos 70% maior do que as obtidas pelas outras técnicas / The number of activities provided by scientific workflow management systems is large, which requires scientists to know many of them to take advantage of the reusability of these systems. To minimize this problem, the literature presents some techniques to recommend activities during the scientific workflow construction. This project specified and developed a hybrid activity recommendation system considering information on frequency, input and outputs of activities and ontological annotations. Additionally, this project presents a modeling of activities recommendation as a classification problem, tested using 5 classifiers; 5 regressors; a SVM classifier, which uses the results of other classifiers and regressors to recommend; and Rotation Forest , an ensemble of classifiers. The proposed technique was compared to other related techniques and to classifiers and regressors, using 10-fold-cross-validation, achieving a MRR at least 70% greater than those obtained by other techniques
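The abstract compares the recommenders by Mean Reciprocal Rank (MRR) under 10-fold cross-validation. The short sketch below shows the standard MRR computation over ranked activity suggestions, which is the usual reading of the metric named in the abstract; the toy data is invented.

```python
def mean_reciprocal_rank(recommendations, ground_truth):
    """Standard MRR: average of 1/rank of the correct item, 0 if absent.

    recommendations[i] is the ranked list of suggested activities for
    query i; ground_truth[i] is the activity the scientist actually used.
    """
    total = 0.0
    for ranked, truth in zip(recommendations, ground_truth):
        if truth in ranked:
            total += 1.0 / (ranked.index(truth) + 1)
    return total / len(ground_truth)

# Toy example: the correct activity appears at ranks 1, 3 and 2.
recs = [["align", "filter"], ["merge", "split", "align"], ["plot", "align"]]
truth = ["align", "align", "align"]
print(mean_reciprocal_rank(recs, truth))  # (1 + 1/3 + 1/2) / 3 ≈ 0.611
```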
|
347 |
A Framework for Managing Process Variability Through Process Mining and Semantic Reasoning : An Application in Healthcare / Un cadre de configuration des variantes de processus à travers la fouille de processus et le raisonnement sémantique : une application dans le cadre de la santé. Detro, Silvana Pereira 15 December 2017 (has links)
Les organisations doivent relever le défi d'adapter leurs processus aux changements qui peuvent survenir dans l'environnement dynamique dans lequel elles opèrent. Les adaptations dans le processus aboutissent à plusieurs variantes de processus, c'est-à-dire dans différentes versions du modèle de processus. Les variantes de processus peuvent différer en termes d'activités, de ressources, de flux de contrôle et de données. Ainsi, le concept d'un modèle de processus personnalisable est apparu et il vise à adapter le modèle de processus en fonction des exigences d'un contexte spécifique. Un modèle de processus personnalisable peut représenter toutes les variantes de processus dans un modèle unique dans lequel les parties communes ne sont représentées qu’une seule fois et les spécificités de chaque variante sont préservées. Alors, grâce à des transformations dans le modèle de processus générique, une variante de processus peut en être dérivée. En tant qu'avantages, cette approche permet d'éliminer les redondances, favorise la réutilisation, entre autres. Cependant, la personnalisation des modèles de processus n'est pas une tâche triviale. La personnalisation doit assurer que la variante obtenue est correcte du point de vue structurel et comportemental, c'est-à-dire la variante obtenue ne doit pas présenter d'activités déconnectées, d’interblocages actifs ou d'interblocages, entre autres. En outre, la variante de processus doit satisfaire à toutes les exigences du contexte de l'application, aux réglementations internes et externes, entre autres. De plus, il est nécessaire de fournir à l'utilisateur des directives et des recommandations lors de la personnalisation du processus. Les directives permettent la personnalisation correcte des variantes de processus, en évitant les problèmes de comportement. Les recommandations concernant le contexte de l'entreprise rendent possible l'amélioration du processus et aussi la personnalisation des variantes en fonction des besoins spécifiques. Dans ce contexte, cette recherche propose un cadre pour la personnalisation des variantes de processus en fonction des besoins de l'utilisateur. La personnalisation est réalisée grâce à l'utilisation d'ontologies pour la sélection des variantes. Le cadre est composé de trois étapes. La première correspond à l'identification des variantes à partir d'un journal d'événements au moyen de techniques d'exploration de processus, qui permettent de découvrir des points de variation, c'est-à-dire les parties du processus sujettes à variation, les alternatives disponibles pour chaque point de variation et les règles de sélection des alternatives disponibles. L'identification des variantes de processus et de leurs caractéristiques à partir d'un journal des événements permet de personnaliser un modèle de processus en fonction du contexte de l'application. À partir de ces aspects, la deuxième étape peut être développée. Cette étape concerne le développement d'un questionnaire, dans lequel chaque question est liée à un point de variation et chaque réponse correspond à la sélection d'une variante. Dans la troisième étape, deux ontologies sont proposées. La première formalise les connaissances liées aux réglementations externes et internes et aux connaissances des spécialistes. La deuxième ontologie se réfère aux points de variation, aux alternatives existantes pour chaque point de variation et aux règles liées à la sélection de chaque alternative. 
Ensuite, ces ontologies sont intégrées dans une nouvelle ontologie, qui contient les connaissances nécessaires pour personnaliser la variante de processus. Ainsi, à travers le questionnaire et le raisonnement sémantique, la variante est sélectionnée et les recommandations concernant le processus d’affaires sont fournies en fonction de la sélection de l'utilisateur lors de la personnalisation du processus. Le cadre proposé est évalué au moyen d'une étude de cas liée au traitement des patients chez qui [...] / The efficiency of organizations relies on its ability to adapt their business processes according to changes that may occur in the dynamic environment in which they operate. These adaptations result in new versions of the process model, known as process variants. Thus, several process variants can exist, which aim to represent all the related contexts that may differ in activities, resources, control flow, and data. Thus, has emerged the concept of customizable process model. It aims to adapt the process model according to changes in the business context. A process model can be customized by representing the process family in one single model enabling to derive a process variant through transformations in this single model. As benefits, this approach enables to avoid redundancies, promotes the model reuse and comparison, among others. However, the process variant customization is not a trivial-task. It must be ensured that the variant is correct in a structural and behavioural way (e.g. avoiding disconnected activities or deadlocks), and respecting all the requirements of the application context. Besides, the resulting process variant must respect all requirements related to the application context, internal and external regulations, among others. In addition, recommendations and guidance should be provided during the process customization. Guidance help the user to customize correct process variants, i.e., without behavioural problems. Recommendations about the process context help the user in customizing process variants according specific requirements. Recommendations about the business context refers to providing information about the best practices that can improve the quality of the process. In this context, this research aims to propose a framework for customizing process variants according to the user’s requirements. The customization is achieved by reasoning on ontologies based on the rules for selecting a process variant and in the internal/external regulations and expert knowledge. The framework is composed by three steps. The first step proposes to identify the process variants from an event log through process mining techniques, which enable to discover the variation points, i.e., the parts of the model that are subject to variation, the alternatives for the variation points and the rules to select the alternatives. By identifying the process variants and their characteristics from an event log, the process model can be correctly individualized by meeting the requirements of the context of application. Based on these aspects, the second step can be developed. This step refers to the development of the questionnaire-model approach. In the questionnaire approach each variation point is related to a question, and the alternatives for each question corresponds to the selection of the process variants. The third step corresponds to apply two ontologies for process model customization. 
One ontology formalizes the knowledge related with the internal and/or external regulations and expert knowledge. The other refers to the variation points, the alternatives for them and the rules for choosing each path. The ontologies then are merged into one new ontology, which contain the necessary knowledge for customize the process variants. Thus, by answering the questionnaire and by reasoning on the ontology, the alternatives related with the business process and the recommendations about the business context are provided for the user. The framework is evaluated through a case study related to the treatment of patients diagnosed with acute ischemic stroke. As result, the proposed framework provides a support decision-making during the process model customization
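The framework described above resolves each variation point from a questionnaire answer and then checks the resulting variant against ontological rules. The sketch below shows only the questionnaire-to-alternative resolution step in plain Python; the variation points and alternatives are hypothetical stand-ins loosely inspired by the stroke-treatment case study, and the semantic-reasoning validation is not reproduced here.

```python
# Hypothetical variation points and the alternative selected by each
# questionnaire answer; names are illustrative, not from the framework.
variation_points = {
    "thrombolysis_eligible": {"yes": "administer_tpa_path",
                              "no": "supportive_care_path"},
    "imaging_available": {"ct": "ct_protocol",
                          "mri": "mri_protocol"},
}

def customize(answers):
    """Resolve each answered variation point to a process alternative."""
    return {vp: options[answers[vp]]
            for vp, options in variation_points.items()
            if vp in answers}

# One filled questionnaire yields one candidate process variant, which the
# ontology-based rules would then validate for structural and behavioural correctness.
print(customize({"thrombolysis_eligible": "yes", "imaging_available": "ct"}))
```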
|
348 |
[en] A UNIFIED PROCESS FRAMEWORK OF ONTOLOGY ENGINEERING / [pt] UM PROCESSO UNIFICADO PARA ENGENHARIA DE ONTOLOGIAS. DANIEL ABADI ORLEAN 12 March 2004 (has links)
[pt] A Web Semântica já está deixando de ser uma visão de Tim Berners-Lee para virar uma realidade. Diversos projetos em todo mundo já exploram as potencialidades dessa segunda geração da Web para tornar seu conteúdo processável por máquinas. Infelizmente (ou felizmente!), os computadores não são capazes de desenvolver por livre e espontânea vontade uma linguagem consensual de comunicação. É nesta etapa que entram em cena as ontologias. Conhecidas no ramo da filosofia como teorias sobre a natureza da existência, as ontologias têm sido encaradas de maneira consideravelmente diferente no universo computacional. Representam um entendimento comum e compartilhado sobre um domínio específico e têm como objetivo principal permitir a comunicação entre organizações, pessoas e o uso de metodologias, processos e/ou métodos bem definidos. Com o aumento de sua importância, o projeto e desenvolvimento de ontologias vêm deixando de ser uma arte para se transformar em um processo de engenharia. Diversas propostas já foram apresentadas para o problema de engenharia de ontologias, muitas delas apoiadas por estudos acadêmicos e industriais. No entanto, é importante notar que nenhuma dessas metodologias - que em muitos casos resumem-se apenas a heurísticas extraídas da experiência de seus autores ou a orientações sobre como alcançar bons resultados - atende por completo aos requisitos potenciais do projeto de uma ontologia. O que se propõe neste trabalho é a unificação de disciplinas e atividades oriundas de metodologias distintas em um framework de processos abrangente, intitulado KUP - Knowledge Unified Process, que permite um adequado projeto e desenvolvimento de ontologias e bases de conhecimento. Entende-se como um processo um conjunto de atividades e resultados associados a essas atividades com o objetivo de garantir a geração de um produto final, seja este produto um software, uma ontologia ou uma ontologia associada à sua base de conhecimento. A unificação deste processo segue as melhores práticas em engenharia de software existentes na indústria e é apoiada por um framework de avaliação de metodologias consolidado na academia. Foram realizados dois estudos de caso para este framework de processos, um envolvendo o desenvolvimento de uma solução de gestão de conhecimento em segurança da informação e outro envolvendo a integração de uma ferramenta de gestão de competências para ambientes de educação a distância. / [en] The Semantic Web is now a reality. Several projects all around the world are already using tools and technologies developed to support the second generation of the Web to provide machine-processable content for software agents, web services and applications. However, computers cannot agree on a consensual language by themselves. Ontologies can be used as a way to provide this shared conceptualization, making possible the desired communication among organizations, people and applications. Several proposals have already been presented regarding ontology engineering - many supported by academic and industrial case studies. However, none of them encompasses all the requirements identified for an ontology construction project. This work describes the unification of different features extracted from those methodologies to build a process framework named KUP - the Knowledge Unified Process. This unified process is based on several industry best practices and on a well-accepted ontology methodology evaluation framework. Two case studies were developed so as to support and validate this process framework. The first was the development of a semantic web solution for security information knowledge management and the second one was the integration of a skill management tool into a learning management system, through ontologies.
|
349 |
Aplicações de Modelos Semânticos em Redes Sociais. Jardim, André Desessards 26 March 2010 (has links)
Until recently, the handling of large amounts of information was a task for specialists; it is now a need for people from all areas, in their professional activity and in the tasks of everyday life. The quality of information retrieval is crucial in many professions, and the improvement of systems can have a major impact, especially when it comes to heterogeneous collections of documents. Research on the Web has been an excellent field for large-scale information retrieval and massive automatic indexing. This research proposes a general study on the Semantic Web, Web 2.0, Social Networks and Ontologies, their applications, languages and methodologies, with the main purpose of creating a Social Networks Ontology, in order to examine how social networks can be enhanced through the Semantic Web and how the Semantic Web and Social Networks will be integrated in the future. The big challenge is to model this domain in a way that facilitates the development of applications for it, using Semantic Web technologies and tools. This work was also intended to systematize the study of Social Networks, Web 2.0, the Semantic Web and Ontologies in order to serve as a basis and reference for future studies. / Até recentemente, a manipulação de grandes quantidades de informação era uma tarefa de especialistas, agora constitui uma necessidade para pessoas de todas as áreas, tanto na sua atividade profissional como na maioria das tarefas do dia a dia. A qualidade na recuperação de informação é crucial em muitas profissões, e a melhoria dos sistemas pode ter um grande impacto, especialmente quando se trata de coleções de documentos heterogêneos. A pesquisa na Web tem sido um excelente campo para a recuperação de informação em grande escala e a indexação automática maciça. O presente trabalho propõe um estudo geral sobre Web Semântica, Web 2.0, Redes Sociais e Ontologias, suas aplicações, linguagens e metodologias, tendo como finalidade principal a criação de uma Ontologia do Domínio das Redes Sociais, com os objetivos de verificar como as Redes Sociais podem ser potencializadas através da Web Semântica, e de como no futuro a Web Semântica e as Redes Sociais irão se integrar. O grande desafio consiste em modelar este domínio de uma forma a facilitar o desenvolvimento de aplicações para ele, utilizando tecnologias e ferramentas da Web Semântica. Este trabalho teve também como objetivo sistematizar o estudo das Redes Sociais, Web 2.0, Web Semântica e Ontologias, de forma a servir de base e referência para estudos futuros.
|
350 |
ONTO-Analyst: um método extensível para a identificação e visualização de anomalias em ontologias / ONTO-Analyst: An Extensible Method for the Identification and the Visualization of Anomalies in Ontologies. Orlando, João Paulo 21 August 2017 (has links)
A Web Semântica é uma extensão da Web em que as informações tem um significado explícito, permitindo que computadores e pessoas trabalhem em cooperação. Para definir os significados explicitamente, são usadas ontologias na estruturação das informações. À medida que mais campos científicos adotam tecnologias da Web Semântica, mais ontologias complexas são necessárias. Além disso, a garantia de qualidade das ontologias e seu gerenciamento ficam prejudicados quanto mais essas ontologias aumentam em tamanho e complexidade. Uma das causas para essas dificuldades é a existência de problemas, também chamados de anomalias, na estrutura das ontologias. Essas anomalias englobam desde problemas sutis, como conceitos mal projetados, até erros mais graves, como inconsistências. A identificação e a eliminação de anomalias podem diminuir o tamanho da ontologia e tornar sua compreensão mais fácil. Contudo, métodos para identificar anomalias encontrados na literatura não visualizam anomalias, muitos não trabalham com OWL e não são extensíveis por usuários. Por essas razões, um novo método para identificar e visualizar anomalias em ontologias, o ONTO-Analyst, foi criado. Ele permite aos desenvolvedores identificar automaticamente anomalias, usando consultas SPARQL, e visualizá-las em forma de grafos. Esse método usa uma ontologia proposta, a METAdata description For Ontologies/Rules (MetaFOR), para descrever a estrutura de outras ontologias, e consultas SPARQL para identificar anomalias nessa descrição. Uma vez identificadas, as anomalias podem ser apresentadas na forma de grafos. Um protótipo de sistema, chamado ONTO-Analyst, foi criado para a validação desse método e testado em um conjunto representativo de ontologias, por meio da verificação de anomalias representativas. O protótipo testou 18 tipos de anomalias retirados da literatura científica, em um conjunto de 608 ontologias OWL de 4 repositórios públicos importantes e dois artigos. O sistema detectou 4,4 milhões de ocorrências de anomalias nas 608 ontologias: 3,5 milhões de ocorrências de um mesmo tipo e 900 mil distribuídas em 11 outros tipos. Essas anomalias ocorreram em várias partes das ontologias, como classes, propriedades de objetos e de dados, etc. Num segundo teste foi realizado um estudo de caso das visualizações geradas pelo protótipo ONTO-Analyst das anomalias encontradas no primeiro teste. Visualizações de 11 tipos diferentes de anomalias foram automaticamente geradas. O protótipo mostrou que cada visualização apresentava os elementos envolvidos na anomalia e que pelo menos uma solução podia ser deduzida a partir da visualização. Esses resultados demonstram que o método pode eficientemente encontrar ocorrências de anomalias em um conjunto representativo de ontologias OWL, e que as visualizações facilitam o entendimento e correção da anomalia encontrada. Para estender os tipos de anomalias detectáveis, usuários podem escrever novas consultas SPARQL. / The Semantic Web is an extension of the World Wide Web in which the information has explicit meaning, allowing computers and people to work in cooperation. In order to explicitly define meaning, ontologies are used to structure information. As more scientific fields adopt Semantic Web technologies, more complex ontologies are needed. Moreover, the quality assurance of the ontologies and their management are undermined as these ontologies increase in size and complexity. One of the causes for these difficulties is the existence of problems, also called anomalies, in the ontologies structure. 
These anomalies range from subtle problems, such as poorly designed concepts, to more serious ones, such as inconsistencies. The identification and elimination of anomalies can reduce an ontology's size and make it easier to understand. However, methods for identifying anomalies found in the literature do not provide anomaly visualizations, and many do not work on OWL ontologies or are not user-extensible. For these reasons, a new method for anomaly identification and visualization, the ONTO-Analyst, was created. It allows ontology developers to automatically identify anomalies, using SPARQL queries, and visualize them as graph images. The method uses a proposed ontology, the METAdata description For Ontologies/Rules (MetaFOR), to describe the structure of other ontologies, and SPARQL queries to identify anomalies in this description. Once identified, the anomalies can be presented as graph images. A system prototype, the ONTO-Analyst, was created in order to validate this method, and it was tested on a representative set of ontologies through the verification of representative anomalies. The prototype tested 18 types of anomalies, taken from the scientific literature, in a set of 608 OWL ontologies from four major public repositories and two articles. The system detected 4.4 million anomaly occurrences in the 608 ontologies: 3.5 million occurrences of a single type and 900 thousand distributed across 11 other types. These anomalies occurred in various parts of the ontologies, such as classes, object and data properties, etc. In a second test, a case study was performed on the visualizations generated by the ONTO-Analyst prototype for the anomalies found in the first test. It was shown that each visualization presented the elements involved in the anomaly and that at least one possible solution could be deduced from the visualization. These results demonstrate that the method can efficiently find anomaly occurrences in a representative set of OWL ontologies and that the visualization aids in understanding and correcting said anomalies. In order to extend the types of detectable anomalies, users can write new SPARQL queries.
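ONTO-Analyst detects anomalies with user-extensible SPARQL queries run against a MetaFOR description of the target ontology. The rdflib sketch below gives the flavour of such a check by flagging a direct subsumption cycle; for simplicity it queries the ontology triples directly rather than a MetaFOR description, and the example classes are invented.

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/onto#")

g = Graph()
g.add((EX.Car, RDF.type, OWL.Class))
g.add((EX.Vehicle, RDF.type, OWL.Class))
g.add((EX.Car, RDFS.subClassOf, EX.Vehicle))
g.add((EX.Vehicle, RDFS.subClassOf, EX.Car))  # deliberate anomaly: a cycle

# A user-written anomaly query in the spirit of ONTO-Analyst: report pairs
# of classes that subsume each other (the STR comparison avoids listing
# each pair twice).
q = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?a ?b WHERE {
    ?a rdfs:subClassOf ?b .
    ?b rdfs:subClassOf ?a .
    FILTER (STR(?a) < STR(?b))
}
"""
for a, b in g.query(q):
    print("Subsumption cycle between", a, "and", b)
```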
|