521 |
An Approach Towards Self-Supervised Classification Using Cyc
Coursey, Kino High 12 1900 (has links)
Due to the long duration required to perform manual knowledge entry by human knowledge engineers, it is desirable to find methods to automatically acquire knowledge about the world by accessing online information. In this work I examine using the Cyc ontology to guide the creation of Naïve Bayes classifiers to provide knowledge about items described in Wikipedia articles. Given an initial set of Wikipedia articles, the system uses the ontology to create positive and negative training sets for the classifiers in each category. The order in which classifiers are generated and used to test articles is also guided by the ontology. The research conducted shows that a system can be created that utilizes statistical text classification methods to extract information from an ad-hoc generated information source like Wikipedia for use in a formal semantic ontology like Cyc. Benefits and limitations of the system are discussed along with future work.
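The classifier-construction step described here can be sketched with a toy example: the ontology's category structure supplies positive examples from a category's own articles and negative examples from sibling categories, so no manual labeling is needed. The category names and article texts below are invented stand-ins for Cyc categories and Wikipedia text, and the scoring is a bare-bones Naïve Bayes log-odds, not the thesis's actual pipeline.

```python
# Sketch: per-category Naive Bayes classifiers whose training sets are
# derived from an ontology's category structure (siblings act as negatives).
# All category names and "article" texts are invented for illustration.
import math
from collections import Counter

# Toy "ontology": each category lists seed articles (bags of words).
ontology = {
    "Dog": ["dog bark pet loyal animal", "puppy dog animal fur"],
    "Cat": ["cat meow pet whiskers animal", "kitten cat animal fur"],
}

def train_nb(category, ontology):
    """Positives come from the category itself; negatives from its siblings."""
    pos = Counter(w for doc in ontology[category] for w in doc.split())
    neg = Counter(w for c, docs in ontology.items() if c != category
                  for doc in docs for w in doc.split())
    vocab = set(pos) | set(neg)

    def log_odds(text):
        # Laplace-smoothed sum of log P(word|pos) - log P(word|neg).
        score = 0.0
        for w in text.split():
            p = (pos[w] + 1) / (sum(pos.values()) + len(vocab))
            n = (neg[w] + 1) / (sum(neg.values()) + len(vocab))
            score += math.log(p) - math.log(n)
        return score

    return log_odds

dog_clf = train_nb("Dog", ontology)
```

A positive score means the text looks more like the category's own articles than its siblings'; the ontology, not a human, decided which documents were "negative".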
|
522 |
A aquisição da língua inglesa usando as novas tecnologias da informação e comunicação : a apropriação do conhecimento
Campos, Artur André Martinez 08 August 2008 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work studies the acquisition of the English language skills of reading and listening through the use of ICT. The Knowledge Society is characterized by information sharing and new approaches to learning that will certainly reward students familiar with information technologies. Language learning autonomy was investigated through the interaction of students with ontologies created for English language learning. The research was performed using websites containing grammar exercises and reading comprehension activities, as well as online newspapers and magazines from the U.S.A., the U.K. and other countries, aimed at developing reading skills, while websites containing podcasts and AM/FM radio station websites were used to develop listening skills. This language acquisition was verified through qualitative research using the case study method. A participative investigation was conducted by means of a semi-structured interview, which gathered data using the SILL questionnaire. This study investigates a new approach to English language acquisition using the Internet, given the ubiquity of computers nowadays. / Este trabalho apresenta um estudo na aquisição da língua inglesa, no que se refere às habilidades de compreensão oral e escrita, através do uso das Tecnologias da Informação e Comunicação, especialmente a Internet. A sociedade do conhecimento é marcada pela apropriação de informação e por novos métodos de aprendizagem que, certamente, contemplarão um aprendiz entrosado com as tecnologias utilizadas no mundo informatizado. A investigação acerca da autonomia na aprendizagem do idioma foi feita através de interações dos alunos com ontologias de domínio encontradas em endereços eletrônicos específicos para o aprendizado de inglês. Para o desenvolvimento da compreensão escrita foram utilizados os sites contendo exercícios gramaticais, atividades de compreensão de texto e os websites de publicações semanais de países de língua inglesa como os EUA, Reino Unido e outros; e para a capacitação da compreensão oral do idioma foram usados sites de podcasts (arquivos de áudio)
projetados para aulas de inglês e de estações de rádio AM e FM dos países mencionados. Essa verificação da aquisição foi feita através de uma pesquisa qualitativa utilizando a abordagem do estudo de caso. A observação participante foi utilizada como instrumento de investigação científica e a entrevista semi-estruturada auxiliou a coleta de dados através do questionário SILL. Este estudo verificou a possibilidade de uma nova forma de aprendizagem do idioma Inglês usando a Internet em virtude da ubiqüidade dos computadores no mundo atual.
|
523 |
Sobre a estruturação de informação em sistemas de segurança computacional: o uso de ontologias / On the structuring of information in computing security systems: the use of ontologies
Luciana Andréia Fondazzi Martimiano 18 September 2006 (has links)
Como a quantidade e a complexidade de informações disponíveis sobre incidentes de segurança é crescente, as tarefas de manipular e gerenciar essas informações tornaram-se bastante custosas. Diversas ferramentas de gerenciamento de segurança estão disponíveis para auxiliar os administradores. Essas ferramentas podem monitorar tudo que entra e sai de uma intranet, como os firewalls; podem monitorar o tráfego interno da rede para saber o que está acontecendo e detectar possíveis ataques, como os sistemas de detecção de intrusão (SDIs); podem varrer arquivos em busca de códigos maliciosos, como os antivírus; podem criar filtros de emails para evitar spams, vírus ou worms; ou podem varrer uma rede em busca de vulnerabilidades nos sistemas, como os scanners e os agentes móveis inteligentes. Essas ferramentas geram uma grande quantidade de logs com informações que são coletadas e armazenadas em formatos próprios e diferentes. Essa falta de um formato único para armazenar as informações de incidentes de segurança faz com que o trabalho dos administradores fique ainda mais difícil, pois eles/elas devem ser capazes de entender todos esses formatos para identificar e correlacionar informações quando, por exemplo, há um ataque ou uma invasão em andamento. Esta tese descreve o projeto e o desenvolvimento de ontologias para representar em uma estrutura padronizada informações sobre incidentes de segurança. A ontologia desenvolvida é denominada OntoSec - Security Incident Ontology. Este trabalho cobre: (i) como utilizar ontologias para compartilhar e reusar informações sobre incidentes; (ii) como correlacionar incidentes por meio de ontologias; (iii) como facilitar a interoperabilidade entre diferentes ferramentas de segurança; (iv) a modelagem de um sistema de gerenciamento de incidentes com base na ontologia; e (v) o processo de avaliação da ontologia desenvolvida. 
Além disso, a OntoSec pretende apoiar as decisões gerenciais realizadas pelos administradores quando problemas de segurança acontecem, possibilitando que essas decisões sejam tomadas de maneira mais eficiente e eficaz / As the amount and complexity of security incident information have grown exponentially, managing and manipulating this information has become more expensive. Several security tools can be used to assist the administrators in performing these tasks. These tools can monitor what comes from the Internet and goes to it, as the firewalls do; they can monitor the intranet traffic, as is usually done by an Intrusion Detection System (IDS); they can search for malicious code in files or emails, as the antivirus tools do; they can create filters to process spams, viruses or worms; or they can scan the intranet for vulnerabilities, as the scanners and the intelligent agents do. These tools collect and store a great amount of information, using different formats. This lack of a unique, commonly agreed format to store information about security incidents makes the administrators' job even harder, because they have to be able to understand all these formats to identify and correlate information when, for instance, there is an attack or an invasion in progress. In this thesis I describe the design and development of ontologies to represent, in a standard structure, information about security incidents. The ontology developed is named OntoSec - Security Incident Ontology. This work covers: (i) how to use ontologies to share and reuse information about incidents; (ii) how to make it easier to correlate incidents; (iii) how to enable interoperability among security tools; (iv) the modeling of a security incident management system based on OntoSec; and (v) the evaluation process of the ontology that has been developed. 
Besides that, OntoSec aims to support the decisions made by the administrators when security problems happen, making the process more efficient and effective.
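The motivating problem, that each tool logs incidents in its own format, can be illustrated with a small sketch. The two log formats and the field names below are invented (they are not the actual OntoSec schema); the point is only that once records are normalized into one shared vocabulary, correlation becomes a plain field comparison.

```python
# Sketch: normalize heterogeneous security-tool log lines into one shared
# incident vocabulary so they can be correlated. Both log formats and all
# field names are hypothetical illustrations, not the OntoSec ontology.
from dataclasses import dataclass

@dataclass(frozen=True)
class Incident:
    source_ip: str
    target: str
    kind: str          # e.g. "portscan"
    reported_by: str   # which tool produced the record

def from_ids_log(line):
    # Hypothetical IDS format: "ALERT portscan src=1.2.3.4 dst=web01"
    fields = dict(p.split("=") for p in line.split() if "=" in p)
    kind = line.split()[1]
    return Incident(fields["src"], fields["dst"], kind, "ids")

def from_firewall_log(line):
    # Hypothetical firewall format: "1.2.3.4 -> web01 DROP portscan"
    src, _, dst, _, kind = line.split()
    return Incident(src, dst, kind, "firewall")

a = from_ids_log("ALERT portscan src=1.2.3.4 dst=web01")
b = from_firewall_log("1.2.3.4 -> web01 DROP portscan")
# Once normalized, correlating the two tools' reports is trivial:
correlated = (a.source_ip, a.target, a.kind) == (b.source_ip, b.target, b.kind)
```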
|
524 |
Ontologias e DSLs na geração de sistemas de apoio à decisão, caso de estudo SustenAgro / Ontologies and DSLs in the generation of decision support systems, the SustenAgro case study
John Freddy Garavito Suarez 03 May 2017 (has links)
Os Sistemas de Apoio à Decisão (SAD) organizam e processam dados e informações para gerar resultados que apoiem a tomada de decisão em um domínio específico. Eles integram conhecimento de especialistas de domínio em cada um de seus componentes: modelos, dados, operações matemáticas (que processam os dados) e resultados de análises. Nas metodologias de desenvolvimento tradicionais, esse conhecimento deve ser interpretado e usado por desenvolvedores de software para implementar os SADs. Isso porque especialistas de domínio não conseguem formalizar esse conhecimento em um modelo computável que possa ser integrado aos SADs. O processo de modelagem de conhecimento é realizado, na prática, pelos desenvolvedores, parcializando o conhecimento do domínio e dificultando o desenvolvimento ágil dos SADs (já que os especialistas não modificam o código diretamente). Para solucionar esse problema, propõe-se um método e uma ferramenta web que usa ontologias, na Web Ontology Language (OWL), para representar o conhecimento de especialistas, e uma Domain Specific Language (DSL) para modelar o comportamento dos SADs. Ontologias, em OWL, são uma representação de conhecimento computável, que permite definir SADs em um formato entendível e acessível a humanos e máquinas. Esse método foi usado para criar o Framework Decisioner para a instanciação de SADs. O Decisioner gera automaticamente SADs a partir de uma ontologia e uma descrição na DSL, incluindo a interface do SAD (usando uma biblioteca de Web Components). Um editor online de ontologias, que usa um formato simplificado, permite que especialistas de domínio possam modificar aspectos da ontologia e imediatamente ver as consequências de suas mudanças no SAD. Uma validação desse método foi realizada por meio da instanciação do SAD SustenAgro no Framework Decisioner. O SAD SustenAgro avalia a sustentabilidade de sistemas produtivos de cana-de-açúcar na região centro-sul do Brasil. 
Avaliações, conduzidas por especialistas em sustentabilidade da Embrapa Meio Ambiente (parceiros neste projeto), mostraram que especialistas são capazes de alterar a ontologia e DSL usadas, sem a ajuda de programadores, e que o sistema produz análises de sustentabilidade corretas. / Decision Support Systems (DSSs) organize and process data and information to generate results to support decision making in a specific domain. They integrate knowledge from domain experts in each of their components: models, data, mathematical operations (that process the data) and analysis results. In traditional development methodologies, this knowledge must be interpreted and used by software developers to implement DSSs. That is because domain experts cannot formalize this knowledge in a computable model that can be integrated into DSSs. The knowledge modeling process is carried out, in practice, by the developers, biasing domain knowledge and hindering the agile development of DSSs (as domain experts cannot modify code directly). To solve this problem, a method and a web tool are proposed that use ontologies, in the Web Ontology Language (OWL), to represent expert knowledge, and a Domain Specific Language (DSL) to model DSS behavior. Ontologies, in OWL, are a computable knowledge representation that allows DSSs to be defined in a format understandable and accessible to humans and machines. This method was used to create the Decisioner Framework for the instantiation of DSSs. Decisioner automatically generates DSSs from an ontology and a description in its DSL, including the DSS interface (using a Web Components library). An online ontology editor, using a simplified format, allows domain experts to modify aspects of the ontology and immediately see the consequences of their changes in the DSS. A validation of this method was done through the instantiation of the SustenAgro DSS, using the Decisioner Framework. 
The SustenAgro DSS evaluates the sustainability of sugarcane production systems in the center-south region of Brazil. Evaluations, done by sustainability experts from Embrapa Environment (partners in this project), showed that domain experts are capable of changing the ontology and the DSL program used, without the help of software developers, and that the system produces correct sustainability analyses.
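The core idea, keeping DSS behavior in a declarative specification that experts edit while a generic engine interprets it, can be sketched as follows. The indicator names, weights and verdict thresholds are invented placeholders, not the real SustenAgro model, and a plain Python dict stands in for the ontology-plus-DSL description.

```python
# Sketch: a declarative spec (stand-in for the ontology + DSL description)
# that domain experts could edit, interpreted by a generic engine.
# Indicators, weights and thresholds are invented illustrations.
spec = {
    "indicators": {
        "soil_quality":  {"weight": 0.5},
        "water_usage":   {"weight": 0.3},
        "biodiversity":  {"weight": 0.2},
    },
    # verdict thresholds on the weighted score in [0, 1], highest first
    "verdicts": [(0.7, "sustainable"), (0.4, "intermediate"), (0.0, "critical")],
}

def evaluate(spec, measurements):
    """Weighted average of indicator scores, mapped to a verdict."""
    score = sum(spec["indicators"][name]["weight"] * value
                for name, value in measurements.items())
    for threshold, verdict in spec["verdicts"]:
        if score >= threshold:
            return score, verdict

score, verdict = evaluate(spec, {"soil_quality": 0.9,
                                 "water_usage": 0.8,
                                 "biodiversity": 0.6})
```

Because the engine only interprets `spec`, changing a weight or threshold changes the generated system's behavior without touching code, which is the property the Embrapa evaluation tested.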
|
525 |
Revisão de crenças em lógicas de descrição e em outras lógicas não clássicas / Belief revision in description logics and other non-classical logics
Marcio Moretto Ribeiro 20 September 2010 (has links)
A área de revisão de crenças estuda como agentes racionais mudam suas crenças ao receberem novas informações. O marco da área de revisão de crenças foi a publicação do trabalho de Alchourrón, Gärdenfors e Makinson. Nesse trabalho, conhecido como paradigma AGM, foram definidos critérios de racionalidade para tipos de mudança de crenças. Desde então, a área de revisão de crenças foi influenciada por diversas disciplinas como filosofia, computação e direito. Paralelamente ao desenvolvimento da área de revisão de crenças, os últimos 20 anos foram marcados por um grande avanço no estudo das lógicas de descrição. Tal avanço, impulsionado pelo desenvolvimento da web semântica, levou à adoção de linguagens inspiradas em lógicas de descrição (OWL) como padrão para se representar ontologias na web. Nesta tese tratamos do problema de aplicar a teoria da revisão de crenças a lógicas não clássicas e especialmente a lógicas de descrição. Trabalhos recentes mostraram que o paradigma AGM é incompatível com diversas lógicas de descrição. Estendemos esses resultados mostrando outras lógicas que não são compatíveis com o paradigma AGM. Propomos formas de aplicar a teoria de revisão tanto em bases quanto em conjuntos de crenças a essas lógicas. Além disso, usamos algoritmos conhecidos da área de depuração de ontologias para implementar operações em bases de crenças. / Belief revision theory studies how rational agents change their beliefs after receiving new information. The most influential work in this area is the paper by Alchourrón, Gärdenfors and Makinson. In this work, known as the AGM paradigm, rationality criteria for belief change were defined. Since then, the field has been influenced by many areas such as philosophy, computer science and law. Parallel to the development of the belief revision field, the past 20 years saw a huge growth in the study of description logics. 
The climax of this development was the adoption of OWL (a language based on description logics) as the standard language to represent ontologies on the web. In this work we deal with the problem of applying belief revision to non-classical logics, especially description logics. Recent works showed that the AGM paradigm is not compliant with several description logics. We have extended these results by showing that other logics are not compliant with the AGM paradigm. Furthermore, we propose alternative ways to apply belief revision techniques to these logics. Finally, we show that well-known algorithms from the ontology debugging field can be used to implement the proposed constructions.
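The base-revision operations mentioned here can be illustrated with a toy contraction. The sketch below computes remainder sets (maximal subsets of a belief base that do not entail a sentence) and intersects them, which is full meet contraction, one of the classical AGM-style constructions. To keep entailment trivial, formulas are restricted to atoms and two-atom implications checked by truth table; a real system would use a proper prover, and the thesis works over description logics rather than this propositional toy.

```python
# Toy full meet contraction: remove "q" from a base containing p and p -> q.
# Atoms are strings; an implication (a, b) means a -> b.
from itertools import combinations, product

def entails(base, atom, atoms=("p", "q", "r")):
    """Truth-table check: does every model of `base` make `atom` true?"""
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        holds = all(v[f] if isinstance(f, str) else (not v[f[0]] or v[f[1]])
                    for f in base)
        if holds and not v[atom]:
            return False
    return True

def remainders(base, atom):
    """Maximal subsets of `base` that do not entail `atom`."""
    for k in range(len(base), -1, -1):
        found = [set(s) for s in combinations(base, k) if not entails(s, atom)]
        if found:
            return found
    return [set()]

base = ["p", ("p", "q")]             # p and p -> q together entail q
rems = remainders(base, "q")         # drop either "p" or the implication
contracted = set.intersection(*rems) # full meet: keep what all remainders keep
```

Here every remainder must give up something, so the full meet intersection is empty, the classic illustration of why full meet contraction is overly drastic and selection functions (partial meet) are used instead.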
|
526 |
Formalizing biomedical concepts from textual definitions
Petrova, Alina, Ma, Yue, Tsatsaronis, George, Kissa, Maria, Distel, Felix, Baader, Franz, Schroeder, Michael 07 January 2016 (has links)
BACKGROUND:
Ontologies play a major role in life sciences, enabling a number of applications, from new data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so that it ensures global consistency and supports complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features and outputs formal definitions that follow the structure of SNOMED CT concept definitions.
RESULTS:
We evaluate our method on three benchmarks and test both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we provide an analysis of the following aspects: (1) How do definitions mined from the Web and literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions of relations' domain and range, impact the generated definition quality? (3) How do different machine learning algorithms compare for the task of formal definition generation? And (4) what is the influence of the learning data size on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. In addition, the results show that the choice of corpora, lexical features, learning algorithm and data size does not impact performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is most valuable in formalizing textual definitions.
CONCLUSIONS:
The analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, thus paving the way for future applications that go beyond retrieval and into complex reasoning. The method is implemented and accessible to the public from: https://github.com/alifahsyamsiyah/learningDL.
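The finding that semantic types dominate other features can be made concrete with a small filter. The types, relation signatures and candidate triples below are invented illustrations (not actual SNOMED CT or MeSH content); the sketch only shows the role that domain/range constraints play in keeping or rejecting extracted relations.

```python
# Sketch: filter candidate (subject, relation, object) triples extracted
# from text by the relation's declared domain/range semantic types.
# All types, relations and terms are invented examples.
semantic_type = {
    "pneumonia": "disorder",
    "lung": "body-structure",
    "fever": "finding",
}

# relation -> (required domain type, required range type)
relation_signature = {
    "finding-site": ("disorder", "body-structure"),
    "has-symptom": ("disorder", "finding"),
}

def admissible(subj, rel, obj):
    dom, rng = relation_signature[rel]
    return semantic_type.get(subj) == dom and semantic_type.get(obj) == rng

candidates = [
    ("pneumonia", "finding-site", "lung"),
    ("pneumonia", "finding-site", "fever"),   # wrong range: filtered out
    ("pneumonia", "has-symptom", "fever"),
]
kept = [c for c in candidates if admissible(*c)]
```

As long as the signatures of different relations do not overlap, this check alone resolves most ambiguity, which mirrors the paper's conclusion about why semantic types outweigh corpus and learner choices.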
|
527 |
Formalizing biomedical concepts from textual definitions: Research Article
Tsatsaronis, George, Ma, Yue, Petrova, Alina, Kissa, Maria, Distel, Felix, Baader, Franz, Schroeder, Michael 04 January 2016 (has links)
Background
Ontologies play a major role in life sciences, enabling a number of applications, from new data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so that it ensures global consistency and supports complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features and outputs formal definitions that follow the structure of SNOMED CT concept definitions.
Results
We evaluate our method on three benchmarks and test both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we provide an analysis of the following aspects: (1) How do definitions mined from the Web and literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions of relations' domain and range, impact the generated definition quality? (3) How do different machine learning algorithms compare for the task of formal definition generation? And (4) what is the influence of the learning data size on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. In addition, the results show that the choice of corpora, lexical features, learning algorithm and data size does not impact performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is most valuable in formalizing textual definitions.
Conclusions
The analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, thus paving the way for future applications that go beyond retrieval and into complex reasoning. The method is implemented and accessible to the public from: https://github.com/alifahsyamsiyah/learningDL.
|
528 |
Interopérabilité sémantique dans le domaine du diagnostic in vitro : Représentation des Connaissances et Alignement
Mary, Melissa 23 October 2017 (has links)
La centralisation des données patients au sein de répertoires numériques soulève des problématiques d’interopérabilité avec les différents systèmes d’information médicaux tels que ceux utilisés en clinique, à la pharmacie ou dans les laboratoires d’analyse. Les instances de santé publique, en charge de développer et de déployer ces dossiers, recommandent l’utilisation de standards pour structurer (syntaxe) et coder l’information (sémantique). Pour les données du diagnostic in vitro (DIV) deux standards sémantiques sont largement préconisés : - la terminologie LOINC® (Logical Observation Identifier Names and Codes) pour représenter les tests de laboratoire ; - l’ontologie SNOMED CT® (Systematized Nomenclature Of MEDicine Clinical Terms) pour exprimer les résultats observés. Ce travail de thèse s’articule autour des problématiques d’interopérabilité sémantique en microbiologie clinique avec deux axes principaux : Comment aligner un Système Organisé de Connaissances du DIV en microbiologie avec l’ontologie SNOMED CT® ? Pour répondre à cet objectif j’ai pris le parti dans mon travail de thèse de développer des méthodologies d’alignement adaptées aux données du diagnostic in vitro plutôt que de proposer une méthode spécifique à l’ontologie SNOMED CT®. Les méthodes usuelles pour l’alignement d’ontologies ont été évaluées sur un alignement de référence entre LOINC® et SNOMED CT®. Les plus pertinentes sont implémentées dans une librairie R, qui sert de point de départ pour créer de nouveaux alignements au sein de bioMérieux. Quels sont les bénéfices et limites d’une représentation formelle des connaissances du DIV ? Pour répondre à cet objectif je me suis intéressée à la formalisation du couple <Test—Résultat> (Observation) au sein d’un compte-rendu de laboratoire. J’ai proposé un formalisme logique pour représenter les tests de la terminologie LOINC® qui a permis de montrer les bénéfices d’une représentation ontologique pour classer et requêter les tests. 
Dans un second temps, j’ai formalisé un patron d’observations compatible avec l’ontologie SNOMED CT® et aligné sur les concepts de la top-ontologie BioTopLite2. Enfin, le patron d’observation a été évalué afin d’être utilisé au sein des systèmes d’aide à la décision en microbiologie clinique. Pour résumer, ma thèse s’inscrit dans une dynamique de partage et réutilisation des données patients. Les problématiques d’interopérabilité sémantique et de formalisation des connaissances dans le domaine du diagnostic in vitro freinent aujourd’hui encore le développement de systèmes experts. Mes travaux de recherche ont permis de lever certains de ces verrous et pourront être réutilisés dans de nouveaux systèmes intelligents en microbiologie clinique afin de surveiller par exemple l’émergence de bactéries multi-résistantes, et adapter en conséquence des thérapies antibiotiques. / The centralization of patient data in digital repositories raises issues of interoperability with the different medical information systems, such as those used in clinics, pharmacies or in medical laboratories. The public health authorities, charged with developing and implementing these repositories, recommend the use of standards to structure (syntax) and encode (semantics) health information. For data from in vitro diagnostics (IVD) two standards are recommended: - the LOINC® terminology (Logical Observation Identifier Names and Codes) to represent laboratory tests; - the SNOMED CT® ontology (Systematized Nomenclature Of MEDicine Clinical Terms) to express the observed results. This thesis focuses on semantic interoperability problems in clinical microbiology along two major axes: How can an IVD Knowledge Organization System be aligned with SNOMED CT®? To answer this, I opted for the development of alignment methodologies adapted to in vitro diagnostic data rather than proposing a method specific to SNOMED CT®. 
The common alignment methods are evaluated on a gold standard alignment between LOINC® and SNOMED CT®. The most appropriate are implemented in an R library which serves as a starting point to create new alignments at bioMérieux. What are the advantages and limitations of a formal representation of IVD knowledge? To answer this, I looked into the formalization of the couple ‘test-result’ (observation) in a laboratory report. I proposed a logical formalization to represent the LOINC® terminology and demonstrated the advantages of an ontological representation for sorting and querying laboratory tests. As a second step, I formalized an observation pattern compatible with the SNOMED CT® ontology and aligned with the concepts of the top-ontology BioTopLite2. Finally, the observation pattern was evaluated in order to be used within clinical microbiology expert systems. To summarize, my thesis addresses issues in the sharing and reuse of IVD patient data. At present, the problems of semantic interoperability and knowledge formalization in the field of in vitro diagnostics hamper the development of expert systems. My research has enabled some of these obstacles to be overcome and could be used in new intelligent clinical microbiology systems, for example to monitor the emergence of multi-resistant bacteria and adapt antibiotic therapies accordingly.
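A purely lexical baseline of the kind evaluated against the gold standard alignment can be sketched in a few lines: for each source label, pick the target label with the highest normalized string similarity above a threshold. The labels below are invented stand-ins, not real LOINC® or SNOMED CT® terms, and a production aligner would combine lexical evidence with structural and semantic features.

```python
# Sketch: naive lexical alignment between two terminologies using
# normalized string similarity. Labels are invented illustrations.
from difflib import SequenceMatcher

def best_match(label, targets, threshold=0.4):
    """Return the most similar target label, or None below the threshold."""
    scored = [(SequenceMatcher(None, label.lower(), t.lower()).ratio(), t)
              for t in targets]
    score, target = max(scored)
    return target if score >= threshold else None

loinc_like = ["Glucose in blood", "Hemoglobin in blood"]
snomed_like = ["Blood glucose measurement", "Hemoglobin measurement",
               "Urine culture"]

alignment = {src: best_match(src, snomed_like) for src in loinc_like}
```

The low threshold already hints at the limits of string matching: word-order differences ("Glucose in blood" vs. "Blood glucose measurement") depress similarity scores, which is one reason structure-aware methods are worth evaluating.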
|
529 |
Die Regensburger Verbundklassifikation (RVK) – „ein weites Feld“: Herausforderung von Semantic Web, Ontologien und Entitäten für die Dynamik einer Klassifikation
Werr, Naoka 28 January 2011 (has links)
Buzzwords such as "information overload", "digital natives" or "digital immigrants" characterize today's information and knowledge society. Numerous scientific studies also show emphatically that technical development will advance even more rapidly in the coming years than one might ever have expected. Internet communication services are already of exceptional importance, with a rising tendency. Communication services such as Web 2.0 applications are, moreover, highlighted as an increasingly important factor in Internet use, and the current trend toward personal networking via the Internet is constantly emphasized. The importance of the core uses of the Internet, as a source of content and as a form of communication, will therefore continue to grow. Classification systems must face this trend as well. With the web portal launched in October 2009, the RVK took a first step toward such networking. The information on the RVK previously scattered across various websites, together with the RVK databases, is now united under a single interface, interlinked, and enriched with elements of social software (an RVK wiki for greater transparency in approval processes). In the context of the Semantic Web, currently another popular buzzword, the portal is a paradigm shift in the long history of the RVK: the entire body of knowledge about the RVK is conceptually connected according to its meaning and already offered in largely machine-readable form (for example, with respect to the search function of the RVK-Online database). Knowledge management and the quality of the extensive information about the RVK at the semantic level have improved considerably; together with the RVK wiki, one could even speak of a first impulse toward Web 3.0 for the RVK. 
The hierarchical structure of the RVK also contributes substantially to the Semantic Web, since in a classification it is precisely the hierarchical structures that help to "order" the abundance of implicit knowledge. What is essential, therefore, is the definition of relations on the Web (and thus of the corresponding ontologies and entities), in order to counter the sheer quantity of offerings on the World Wide Web with correspondingly high-quality services that add librarian value. For the data model of the Semantic Web, the provision of sustainable authority data, as is planned, and indeed almost implemented, for the RVK, is therefore necessary.
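The abstract's closing point, that a classification's hierarchical relations are what make it usable as machine-readable Semantic Web data, can be illustrated by emitting a toy hierarchy as SKOS `broader` relations in N-Triples form. The notation codes and base URI below are invented, not actual RVK notations or identifiers.

```python
# Sketch: expose a toy two-level classification hierarchy as SKOS triples.
# Notations and the base URI are hypothetical illustrations.
SKOS = "http://www.w3.org/2004/02/skos/core#"

hierarchy = {
    "NB": None,           # invented top class
    "NB 1000": "NB",      # child of NB
    "NB 1100": "NB 1000", # grandchild
}

def to_ntriples(hierarchy, base="http://example.org/rvk/"):
    lines = []
    for notation, parent in hierarchy.items():
        subj = base + notation.replace(" ", "")
        lines.append(f'<{subj}> <{SKOS}notation> "{notation}" .')
        if parent:
            obj = base + parent.replace(" ", "")
            lines.append(f"<{subj}> <{SKOS}broader> <{obj}> .")
    return "\n".join(lines)

triples = to_ntriples(hierarchy)
```

Once published this way, the hierarchy becomes plain linked data: any Semantic Web client can follow `skos:broader` links without knowing anything about the classification's internal conventions.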
|
530 |
Verification of Data-aware Business Processes in the Presence of Ontologies
Santoso, Ario 13 May 2016 (links)
The interplay between data, processes and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification also comes into play as a promising approach that offers well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between static and dynamic aspects of a system within a unified framework. However, several limitations are still present. The various formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., rejecting inconsistent system states). Many studies in this area that consider structural domain knowledge also typically assume that such knowledge remains fixed along the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This situation can cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking all of the aspects above into account makes the problem more challenging.
In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs), by leveraging on the state of the art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically-rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that is able to elegantly accommodate a plethora of inconsistency-aware semantics based on the well-known notion of repair, and this leads us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs that take into account the contextual information during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs that allow for a more fine-grained analysis over the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enable us to have a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results, but have also implemented this concept of SEDAPs.
We also provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above, and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (which is a well-established data-aware processes framework), under suitable boundedness assumptions for the number of objects freshly introduced in the system while it evolves. Notably, all proposed GKAB extensions have no negative impact on computational complexity.
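The boundedness assumption mentioned here is what turns verification into a search over a finite state space. The sketch below illustrates that reduction on an invented toy process (a capped counter of "open orders" standing in for bounded fresh-object introduction); it is not a GKAB, and real GKAB verification concerns first-order temporal properties rather than the simple safety check shown.

```python
# Sketch: explicit-state verification of a safety property over all
# reachable states of a toy transition system. The "process" is an
# invented example, not a GKAB.
from collections import deque

def verify_safety(initial, actions, safe, max_states=10_000):
    """Explore all reachable states; return (holds, counterexample_or_None)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            return False, state          # counterexample found
        for act in actions:
            nxt = act(state)             # None means the action is inapplicable
            if nxt is not None and nxt not in seen:
                if len(seen) >= max_states:
                    raise RuntimeError("state bound exceeded")
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

def open_order(n):
    # Introduce a fresh order; the cap of 3 plays the role of boundedness.
    return n + 1 if n < 3 else None

def close_order(n):
    return n - 1 if n > 0 else None

holds, cex = verify_safety(0, [open_order, close_order], safe=lambda n: n <= 3)
```

Without the cap in `open_order`, the reachable state space would be infinite and the search would never terminate, which is exactly why the thesis's decidability results hinge on bounding the objects freshly introduced during the evolution.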
|