351 |
Gamification of collaborative learning scenarios: an ontological engineering approach to deal with the motivation problem caused by computer-supported collaborative learning scripts. Geiser Chalco Challco, 19 October 2018.
Increasing both students' motivation and learning outcomes in Collaborative Learning (CL) activities is a challenge that the Computer-Supported Collaborative Learning (CSCL) community has been addressing in recent years. The use of CSCL scripts to structure and orchestrate the CL process has been shown to be effective in supporting meaningful interactions and better learning, but scripted collaboration often does not motivate students to participate in the CL process, which makes it harder to keep using scripts in CL activities over time. To deal with these motivational problems, researchers, educators, and practitioners are now looking to gamification as a way to motivate and engage students. However, gamification is a complex task: it requires instructional designers and practitioners to know about game elements (such as leaderboards and point systems), game design (e.g., how to combine game elements), and their impact on motivation, engagement, and learning. Moreover, gamification is highly context-dependent, requiring personalization for each participant and situation. To address these issues, this dissertation proposes and carries out an ontological engineering approach to gamifying CL sessions. In this approach, an ontology is formalized to enable the systematic representation of knowledge extracted from theories and best practices related to gamification. The concepts extracted from these practices and theories, identified as relevant to dealing with motivational problems in scripted collaborative learning, are formalized as ontological structures to be used by computer-based mechanisms and procedures in intelligent theory-aware systems. These mechanisms and procedures aim to provide advice and recommendations that help instructional designers and practitioners gamify CL sessions. To validate this approach, and to demonstrate its effectiveness and efficiency in dealing with motivational problems in scripted collaborative learning, four empirical studies were conducted in real situations at the University of São Paulo with undergraduate Computer Science and Computer Engineering students. The results demonstrated that, for CL activities in which CSCL scripts are used to orchestrate and structure the CL process, the ontological engineering approach to gamifying CL scenarios is an effective and efficient solution to the motivational problems, because the CL sessions obtained with this approach properly affected participants' motivation and learning outcomes.
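The abstract does not reproduce the ontology itself. As a purely illustrative sketch (all class, property, and individual names below are hypothetical, not taken from the dissertation), knowledge linking a motivational problem to a game element could be encoded and queried with rdflib in Python:

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

# Hypothetical namespace; the dissertation's real ontology differs.
GAM = Namespace("http://example.org/gamification#")

g = Graph()
g.bind("gam", GAM)

# One piece of "theory-aware" knowledge: a leaderboard addresses low
# engagement (illustrative content, not a claim from the thesis).
g.add((GAM.Leaderboard, RDF.type, GAM.GameElement))
g.add((GAM.LowEngagement, RDF.type, GAM.MotivationalProblem))
g.add((GAM.Leaderboard, GAM.addresses, GAM.LowEngagement))
g.add((GAM.Leaderboard, RDFS.comment, Literal("Ranks participants by points.")))

# A recommendation mechanism can then retrieve candidate game elements
# for a diagnosed motivational problem with a SPARQL query.
q = """
PREFIX gam: <http://example.org/gamification#>
SELECT ?element WHERE { ?element gam:addresses gam:LowEngagement . }
"""
for row in g.query(q):
    print(row.element)
```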
|
352 |
OntoLP: semi-automatic construction of ontologies from Portuguese-language texts. Ribeiro Junior, Luiz Carlos, 21 February 2008.
The growth of the Internet creates a need for more consistent structures for representing the knowledge available on the network. In this context, the Semantic Web and ontologies appear as an answer to this problem. Building ontologies, however, is extremely costly, which has motivated much research aimed at automating the task, usually starting from the knowledge available in texts. The resulting tools and methods are language-dependent, so language-specific studies are needed before everyone can benefit from large-scale use of ontologies, and little has been done for Portuguese. This work advances these questions for the Portuguese language, covering the development and evaluation of methods for the automatic construction of ontologies from texts. In addition, a support tool for ontology construction for Portuguese was developed, integrated as a plug-in of the widely used ontology editor Protégé.
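The abstract does not detail OntoLP's algorithms. As a generic illustration of the corpus-based first step such ontology-learning pipelines typically start with (candidate term extraction via a simple relative-frequency ratio, an assumption rather than a description of the thesis's method), consider:

```python
from collections import Counter
import re

def tokenize(text):
    # Lowercase word tokens; a real system would use a Portuguese POS tagger.
    return re.findall(r"\w+", text.lower())

def candidate_terms(domain_docs, reference_docs, min_ratio=2.0, min_count=3):
    domain = Counter(t for d in domain_docs for t in tokenize(d))
    reference = Counter(t for d in reference_docs for t in tokenize(d))
    d_total = sum(domain.values()) or 1
    r_total = sum(reference.values()) or 1
    terms = []
    for term, count in domain.items():
        if count < min_count:
            continue
        # Relative-frequency ratio: domain-specific terms occur
        # proportionally more often in the domain corpus.
        ratio = (count / d_total) / ((reference[term] + 1) / r_total)
        if ratio >= min_ratio:
            terms.append((term, ratio))
    return sorted(terms, key=lambda x: -x[1])
```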
|
353 |
GamiProM: a Gamification Model based on Profile Management. Dalmina, Leonardo, 26 March 2018.
The use of game design elements in non-game contexts, known as gamification, is increasingly employed to raise users' motivation and engagement when they have to execute a task in a non-game environment, such as the workplace, school, or a software application. When gamification needs to be implemented, however, a challenge faced by developers is identifying which game elements will effectively engage the users of a piece of software based on their user profiles and motivational characteristics. Many studies tend not to include, or to support only, the most common user types and motivational factors. In response to this challenge, this thesis proposes a generic gamification model, GamiProM, that allows a software developer to build an adaptive gamified solution for any area by making use of ontologies and rules, aiming to provide knowledge representation and to add semantic value to the information generated by gamification and profile management. The model is evaluated with a correlation test that checks for associations between users' basic psychological needs and the motivations collected with a gamified application developed to implement the proposed model. The results showed that the motivations collected from users' gamified profiles have a correlation above 80% with the basic psychological needs analyzed.
|
354 |
A system for the unified application of linguistic rules and ontologies for information extraction. Araujo, Denis Andrei de, 30 August 2013.
Information extraction is an important component of the set of computational tools that assist in identifying relevant information in natural-language texts. Knowledge extraction rules, based on the linguistic treatment of specific aspects of textual documents, can contribute to better performance on this task. This work presents a model for ontology-based information extraction that uses Natural Language Processing techniques and an annotated corpus to identify the information of interest. The main components of the proposal are described, and a case study based on Brazilian legal documents is presented. The experimental results indicate relevant accuracy and precision and good prospects regarding the flexibility, expressiveness, and generalization of the extraction rules.
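As an illustration of how a linguistic extraction rule can populate an ontology (the pattern and vocabulary below are hypothetical; the thesis's actual rules operate over linguistically annotated text), a minimal Python sketch with rdflib:

```python
import re
from rdflib import Graph, Namespace, RDF, Literal

# Hypothetical vocabulary for legal documents; not the thesis's ontology.
LEX = Namespace("http://example.org/legal#")

# One extraction rule: a pattern whose match fills an ontology property.
RULES = [
    (re.compile(r"Processo n[ºo]\s*([\d.\-/]+)"), LEX.caseNumber),
]

def extract(text, doc_uri):
    g = Graph()
    g.add((doc_uri, RDF.type, LEX.LegalDocument))
    for pattern, prop in RULES:
        for match in pattern.finditer(text):
            g.add((doc_uri, prop, Literal(match.group(1))))
    return g

g = extract("Processo nº 0001234-56.2013 ...", LEX.doc1)
print(g.serialize(format="turtle"))
```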
|
355 |
A model for implementing Argument Web applications integrated with linked open data bases. Niche, Roberto, 30 June 2015.
Communication and collaboration tools are widely used on the Internet to express opinions and describe points of view on the most diverse subjects. However, they were not designed to support the precise identification of the subjects discussed, nor to capture the relationships among the elements that make up the interactions. The result is a large amount of spontaneously generated information in which the precise identification of the key elements, their relationships, and their sources remains difficult. The central proposal of the Argument Web is an infrastructure for precisely annotating the arguments of published messages and relating them to their various sources. When integrated with the linked open data initiative, the Argument Web has the potential to improve the quality of collaborative discussions on the Internet and to support their analysis. However, implementations of applications based on these concepts are still scarce, and even the known applications make little use of visualization and of linked open data bases. This work describes a model for instantiating this kind of application, based on the Argument Interchange Format model and on the use of Semantic Web languages. The model's distinguishing feature is the ease of integration with external sources in linked data formats. A prototype of this model was evaluated in a case study using linked open data bases from the Brazilian public administration, with good results.
|
356 |
Formalisation, acquisition and implementation of specifications knowledge for geographic database integration. Abadie, Nathalie, 20 November 2012.
This PhD thesis deals with topographic database integration, which aims to make the correspondence relationships between heterogeneous databases explicit so that they can be used jointly. Automating this integration process requires automatically detecting the various kinds of heterogeneity that can occur between the topographic databases to be integrated, which in turn requires knowledge about the content of each database. The goal of this thesis is therefore to formalise, acquire, and exploit the knowledge needed to carry out a virtual integration process for vector geographic databases. A first step in integrating topographic databases is matching their conceptual schemas. To this end, we propose to rely on a specific knowledge source: the specifications of the topographic databases, which describe their data capture rules. These are first used as the main resource for creating a domain ontology of topography through ontology learning. In a first schema-matching approach, this ontology, built from the texts of IGN's database specifications, serves as a background knowledge source for terminological and structural matching techniques. In a second approach, inspired by semantics-based matching techniques, the ontology supports the representation, in the OWL 2 language, of the selection and geometric-representation rules for geographic entities described in the specifications, and their exploitation by a reasoning system.
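As a toy illustration of the terminological side of schema matching (string similarity only; the thesis additionally uses structural techniques and the topographic ontology as background knowledge, and the labels below are invented):

```python
from difflib import SequenceMatcher

def label_similarity(a, b):
    # Simple normalized edit-based similarity; real matchers also use
    # synonyms and an ontology's hierarchy as background knowledge.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

schema_a = ["Road", "Watercourse", "BuildingArea"]
schema_b = ["road_segment", "river", "building"]

for label_a in schema_a:
    best = max(schema_b, key=lambda label_b: label_similarity(label_a, label_b))
    score = label_similarity(label_a, best)
    if score > 0.6:  # illustrative threshold
        print(f"{label_a} <-> {best} ({score:.2f})")
```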
|
357 |
EXEHDA-SO: an ontological approach to situation awareness applied to the information security domain. Rosa, Diórgenes Yuri Leal da, 22 December 2017.
Modern computing infrastructures, typical of Ubiquitous Computing, assume flexibility and permissiveness in the connectivity of the environment. Over recent years these characteristics have contributed to the emergence of the Internet of Things, which extends the demand for connectivity and therefore increases computer network traffic. However, the same factors that enable these developments also aggravate Information Security problems. Purpose-specific security solutions are routinely deployed on computer networks, each developed in its own syntax and emitting events in its own format. This fragments the analysis of these solutions and makes incident identification harder. In this context, situation awareness, as a strategy capable of integrating events from different sources, becomes a fundamental requirement for implementing security controls while preserving the flexibility typical of UbiComp. This dissertation therefore proposes an ontological approach to situation awareness applied to the Information Security domain, called EXEHDA-SO (Execution Environment for Highly Distributed Applications - Security Ontology). By processing heterogeneous events from different contextual sources, it contributes to the comprehension phase of situation awareness. The EXEHDA-SO model is presented in three fragments, called Core, Scope, and InterCell Analyzer. To validate the proposed model, a case study was developed on the computing infrastructure of the Universidade Federal de Pelotas. In this evaluation, given the heterogeneity and distribution of the environment, it was possible to observe the main contributions proposed in this dissertation.
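A small sketch of the heterogeneous-event problem the model addresses: events arriving in source-specific formats must be normalized before situation comprehension. The formats, field names, and flat target schema below are assumptions for illustration; EXEHDA-SO maps events onto ontology concepts instead.

```python
import json
import re
from dataclasses import dataclass

# Hypothetical normalized event; the actual model reasons over an ontology.
@dataclass
class SecurityEvent:
    source: str
    host: str
    kind: str

def from_syslog(line):
    # e.g. "sshd[2541]: Failed password for root from 10.0.0.7"
    m = re.search(r"Failed password .* from ([\d.]+)", line)
    if m:
        return SecurityEvent("syslog", m.group(1), "auth_failure")

def from_ids_json(payload):
    data = json.loads(payload)
    return SecurityEvent("ids", data["src_ip"], data["alert"])

# Heterogeneous inputs converge to one schema that comprehension rules
# can then reason over (invented example formats).
events = [
    from_syslog("sshd[2541]: Failed password for root from 10.0.0.7"),
    from_ids_json('{"src_ip": "10.0.0.7", "alert": "port_scan"}'),
]
print(events)
```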
|
358 |
Information Extraction from Websites using Extraction Ontologies. Labský, Martin, January 2002.
Automatic information extraction (IE) from various types of text became very popular during the last decade. Owing to information overload, there are many practical applications that can use semantically labelled data extracted from textual sources such as the Internet, emails, intranet documents, and even conventional sources like newspapers and magazines. Applications of IE exist in many areas of computer science: information retrieval systems, question answering, and website quality assessment. This work focuses on developing IE methods and tools that are particularly suited to extraction from semi-structured documents such as web pages, and to situations where the available training data is limited. The main contribution of this thesis is the proposed approach of extended extraction ontologies, which combines extraction evidence from three distinct sources: (1) manually specified extraction knowledge, (2) existing training data, and (3) the formatting regularities that are often present in online documents. The underlying hypothesis is that using all three types of extraction evidence can improve the extraction algorithm's accuracy and robustness; the motivation for this work has been the lack of described methods and tools that exploit all three evidence types at the same time. The thesis first describes a statistically trained approach to IE based on Hidden Markov Models, integrated with a picture classification algorithm in order to extract product offers, including textual items as well as images, from the Internet. This approach is evaluated on a bicycle sale domain, and several methods of image classification using various feature sets are described and evaluated as well. These trained approaches are then integrated into the proposed novel approach of extended extraction ontologies, which builds on the work of Embley [21] by exploiting manual, trained, and formatting types of extraction evidence at the same time. The intended benefits of using extraction ontologies are the quick development of a functional IE prototype, its smooth transition to a deployed IE application, and the possibility of leveraging each of the three extraction evidence types. Also, since extraction ontologies are typically developed by adapting suitable domain ontologies and the ontology remains at the center of the extraction process, the work needed to convert extracted results back to a domain ontology or schema is minimized. The described approach is evaluated using several distinct real-world datasets.
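The trained component is described as HMM-based; as a generic textbook illustration of how an HMM labels a token sequence (a log-space Viterbi decoder with toy states and probabilities, not the thesis's actual model), consider:

```python
import math

# Toy HMM for tagging tokens as part of a product name or background.
states = ["name", "other"]
start = {"name": 0.3, "other": 0.7}
trans = {"name": {"name": 0.7, "other": 0.3},
         "other": {"name": 0.2, "other": 0.8}}
emit = {"name": {"trek": 0.4, "bike": 0.4, "the": 0.2},
        "other": {"trek": 0.1, "bike": 0.2, "the": 0.7}}

def viterbi(tokens):
    # Log-space Viterbi: most likely state sequence for the token list.
    v = [{s: math.log(start[s]) + math.log(emit[s].get(tokens[0], 1e-6))
          for s in states}]
    back = []
    for tok in tokens[1:]:
        col, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            ptr[s] = best_prev
            col[s] = (v[-1][best_prev] + math.log(trans[best_prev][s])
                      + math.log(emit[s].get(tok, 1e-6)))
        v.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["the", "trek", "bike"]))  # ['other', 'name', 'name']
```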
|
359 |
Semantic web models to support the creation of technical regulatory documents in the building industry. Bouzidi, Khalil Riad, 11 September 2013.
Regulations in the building industry are becoming increasingly complex and involve more than one technical area. They cover products, components, and project execution, and they play an important role in ensuring the quality of a building and minimizing its environmental impact. For more than 30 years, CSTB has proved its expertise in this field through the development of the complete encyclopedia of French technical and regulatory texts in the building domain, the REEF. In the framework of a collaboration between CSTB and the I3S laboratory, we carried out research on acquiring knowledge from the technical and regulatory information contained in the REEF and on the automated processing of this knowledge, with the final goal of assisting professionals in using these texts and creating new ones. We applied this work at CSTB to help industrial firms write Technical Assessments (in French, Avis Technique or ATec): documents containing technical information on the usability of an innovative product, material, component, or construction element. The problem is how to specify these assessments and standardize their structure using models and adapted semantic services. We focused in particular on modeling the regulatory constraints derived from the Technical Guides used to validate an Assessment. These Guides are regulatory complements offered by CSTB to industrial firms to enable easier reading of the technical regulations; they collect execution details covering a wide range of implementation situations. Our contributions are as follows. First, we manually built a domain ontology, called OntoDT, which defines the main concepts involved in producing Technical Assessments; it is coupled with the thesaurus of the REEF project and was defined from a study of existing technical documents, the REEF thesaurus, and interviews with CSTB instructors. Second, we jointly use the SBVR standard (Semantics of Business Vocabulary and Business Rules) and SPARQL to reformulate the regulatory constraints of the Guides in both a controlled and a formal language: SBVR assures the quality of the constraint text presented to the user, SPARQL enables automated checking of the constraints, and both representations rest on the domain ontology. Third, the model incorporates expert knowledge about the verification process itself: the SPARQL queries representing regulatory constraints are organized into processes, one per component involved in a Technical Document, each checking that component's compliance with the current regulations. These processes are represented declaratively in RDF, and a process engine interprets the RDF descriptions to order and trigger the queries needed to check a particular Technical Document. Finally, we declaratively represent in RDF the association between the SBVR and SPARQL representations of the regulations, and we use these annotations to produce a natural-language compliance report that assists users in writing a Technical Assessment.
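As an illustration of automating a regulatory check with SPARQL (an invented rule and vocabulary; CSTB's actual constraints are derived from the Technical Guides and the OntoDT ontology), a conformance check can be expressed as an ASK query that looks for violations:

```python
from rdflib import Graph, Namespace, RDF, Literal
from rdflib.namespace import XSD

# Hypothetical vocabulary and constraint; not CSTB's actual ontology.
REG = Namespace("http://example.org/reg#")

g = Graph()
g.add((REG.panel1, RDF.type, REG.InsulationPanel))
g.add((REG.panel1, REG.thicknessMm, Literal(38, datatype=XSD.integer)))

# Illustrative rule: "every insulation panel must be at least 40 mm
# thick". The ASK query asks whether a violating component exists.
violation = """
PREFIX reg: <http://example.org/reg#>
ASK {
  ?c a reg:InsulationPanel ;
     reg:thicknessMm ?t .
  FILTER (?t < 40)
}
"""
result = g.query(violation)
print("non-compliant" if result.askAnswer else "compliant")
```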
|
360 |
Development of a technique for recommending activities in scientific workflows: an ontology-based approach. Adilson Lopes Khouri, 16 March 2016.
The number of activities provided by scientific workflow management systems is large, which requires scientists to know many of them in order to take advantage of these systems' reusability. To minimize this problem, the literature presents some techniques for recommending activities during the construction of scientific workflows. This project specified and developed a hybrid activity recommendation system that considers information on activity frequency, inputs and outputs, and ontological annotations. In addition, it models activity recommendation as a classification and regression problem, tested using five classifiers; five regressors; a composite SVM classifier, which uses the results of the other classifiers and regressors to recommend; and a Rotation Forest ensemble of classifiers. The proposed technique was compared with the other techniques from the literature and with the individual classifiers and regressors using 10-fold cross-validation, yielding more precise recommendations, with an MRR at least 70% higher than those obtained by the other techniques.
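MRR (mean reciprocal rank) scores a ranked recommendation list by the position of the first correct item; a small sketch with hypothetical activity names:

```python
def mean_reciprocal_rank(recommendations, ground_truth):
    # recommendations: one ranked activity list per test case;
    # ground_truth: the correct next activity for each case.
    total = 0.0
    for ranked, truth in zip(recommendations, ground_truth):
        if truth in ranked:
            total += 1.0 / (ranked.index(truth) + 1)
    return total / len(ground_truth)

# Toy example with invented activity names.
recs = [["align", "filter", "plot"], ["plot", "align"]]
truth = ["filter", "align"]
print(mean_reciprocal_rank(recs, truth))  # (1/2 + 1/2) / 2 = 0.5
```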
|