531 |
MAISA - Maintenance of semantic annotations / MAISA - Maintenance des annotations sémantiques. Cardoso, Silvio Domingos, 07 December 2018
Semantic annotations are used in many domains, including healthcare, and serve a wide range of applications from information retrieval to decision support. Annotations are produced by associating concept labels from a Knowledge Organization System (KOS), e.g. an ontology, thesaurus, or dictionary, with pieces of digital information such as images or texts. They enable machines to interpret, link, and use vast amounts of data. However, the dynamic nature of KOS may affect annotations each time a new version of a KOS is released: new concepts can be added, obsolete ones removed, and the definition of existing concepts may be refined through changes to their labels and properties. As a result, many annotations can lose their relevance, hindering the intended use and exploitation of the annotated data. Since the large number of affected annotations makes manual adaptation impossible, automatic methods to keep annotations up to date are required.
In this thesis we propose a framework called MAISA to tackle the problem of adapting outdated annotations when the KOS used to create them changes. We distinguish two cases. In the first, the annotations are directly modifiable; here we propose a rule-based approach that combines information derived from the evolution of the KOS with external knowledge from the Web. In the second case, the annotations are not modifiable, as is often true of annotations attached to patient data. The goal is then to keep the annotated documents searchable even when the annotations were produced with one KOS version but the user queries the system with another. For this case, we designed a knowledge graph that represents a KOS together with its successive versions, and a query-enrichment mechanism that extracts the history of a concept from this graph and adds the recovered labels to the initial query. We experimentally evaluated MAISA on realistic case studies built from four well-known biomedical KOS: ICD-9-CM, MeSH, NCIt and SNOMED CT. Using standard metrics, we show that the proposed approach improves the maintenance of semantic annotations in both cases.
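The query-enrichment idea can be pictured with a small sketch. The Python fragment below is a minimal illustration only, not MAISA's implementation: the concept identifiers, label histories, and helper names are hypothetical, and the versioned knowledge graph is reduced to an in-memory dictionary.

```python
# Hypothetical, simplified sketch of query enrichment with KOS history.
# A real system would query a versioned knowledge graph; here the concept
# history is just a dictionary keyed by an (invented) concept identifier.

CONCEPT_HISTORY = {
    # concept id -> labels the concept carried across successive KOS versions
    "C0011849": ["diabetes mellitus", "diabetes mellitus, type unspecified"],
    "C0020538": ["hypertensive disease", "high blood pressure", "hypertension"],
}


def enrich_query(query_terms, history=CONCEPT_HISTORY):
    """Expand a query with every label a matching concept ever carried."""
    enriched = set(query_terms)
    for term in query_terms:
        for concept_id, labels in history.items():
            if term.lower() in (label.lower() for label in labels):
                enriched.update(labels)  # add current and historical labels
    return sorted(enriched)


if __name__ == "__main__":
    # Documents annotated with an older label remain retrievable even if
    # the user searches with the current one.
    print(enrich_query(["hypertension"]))
```

In this toy run the query "hypertension" is expanded with "hypertensive disease" and "high blood pressure", so documents annotated under an older KOS version still match.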
|
532 |
Approches vers des modèles unifiés pour l'intégration de bases de connaissances / Approaches Towards Unified Models for Integrating Web Knowledge Bases. Koutraki, Maria, 27 September 2016
My thesis aims at the automatic integration of new Web services into a knowledge base. For each method of a Web service, a view is computed automatically and represented as a query over the knowledge base. Our algorithm also computes an XSLT transformation function associated with the method, which transforms the call results into a fragment that conforms to the schema of the knowledge base. The novelty of our approach is that the alignment relies only on instances: it depends neither on concept names nor on constraints defined by the schema. This makes it particularly relevant for the Web services currently published on the Web, because these services use the REST protocol, which does not provide for the publication of schemas. In addition, JSON is establishing itself as the standard representation of service call results, and, unlike XML, JSON does not use named nodes, so traditional alignment algorithms are deprived of the concept names on which they rely.
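As a rough illustration of the instance-based alignment idea, the sketch below matches the keys of a JSON call result to knowledge-base relations purely by the overlap of their values, without looking at key or concept names. It is a hypothetical toy, not the thesis algorithm; all names and the overlap threshold are assumptions.

```python
# Toy instance-based alignment: map JSON result keys to KB relations by
# comparing the sets of values they take, ignoring all names and schemas.

def align_keys_to_relations(json_records, kb_facts, min_overlap=0.5):
    """json_records: list of dicts returned by a service call.
    kb_facts: dict relation_name -> set of object values known in the KB.
    Returns a mapping json_key -> best-matching relation (or None)."""
    alignment = {}
    for key in {k for record in json_records for k in record}:
        values = {str(record[key]) for record in json_records if key in record}
        best, best_score = None, 0.0
        for relation, kb_values in kb_facts.items():
            overlap = len(values & kb_values) / max(len(values), 1)
            if overlap > best_score:
                best, best_score = relation, overlap
        alignment[key] = best if best_score >= min_overlap else None
    return alignment


records = [{"a": "Paris", "b": "2.1M"}, {"a": "Berlin", "b": "3.6M"}]
kb = {"locatedIn": {"France", "Germany"}, "capitalOf": {"Paris", "Berlin"}}
print(align_keys_to_relations(records, kb))  # maps 'a' to 'capitalOf', 'b' to None
```

The key "a" aligns to the relation whose known values it shares, while "b" finds no sufficiently overlapping relation and stays unmapped.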
|
533 |
[en] LER: ANNOTATION AND AUTOMATIC CLASSIFICATION OF ENTITIES AND RELATIONS / [pt] LER: ANOTAÇÃO E CLASSIFICAÇÃO AUTOMÁTICA DE ENTIDADES E RELAÇÕES. Jonatas dos Santos Grosman, 30 November 2017
Many techniques for extracting structured information from natural-language data have been developed and have yielded very satisfactory results. Nevertheless, obtaining such results requires a series of activities that are usually carried out in isolation, such as text annotation to build corpora, part-of-speech tagging, feature engineering and extraction, and the training of machine-learning models, which makes information extraction a costly task in terms of effort and time. This work proposes and develops a web-based platform called LER (Learning Entities and Relations) that integrates the workflow needed for these activities behind an interface designed for ease of use. The work also reports the results of implementing and using the proposed platform.
|
534 |
Semantic Analysis Mapping Framework for Clinical Coding Schemes: A Design Science Research Approach. Clunis, Julaine, 22 December 2021
No description available.
|
535 |
Mixing Description Logics in Privacy-Preserving Ontology Publishing. Baader, Franz; Nuradiansyah, Adrian, 30 July 2021
In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an EL instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL EL. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in that context, optimal compliant (safe) generalizations of a given EL concept can be computed. In the present paper, we consider a modified setting in which the background knowledge of the attacker is given in a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the cases where the attacker's knowledge is given by an FL0 or an FLE concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of FL0, it turns out to be lower (polynomial) for the more expressive DL FLE.
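To make the compliance and safety notions concrete, here is a small schematic example in the spirit of the abstract. The definitions are paraphrased rather than quoted, and the concrete concepts are invented for illustration, so treat this as a hedged sketch rather than the paper's formal statement.

```latex
% Roughly: a concept C is compliant with a policy concept P if C \not\sqsubseteq P,
% and C is safe for P if no compliant background knowledge B makes C \sqcap B
% non-compliant. Invented example with policy P = \exists seenBy.Oncologist:
\[
  C_1 = \exists \mathit{seenBy}.(\mathit{Oncologist} \sqcap \mathit{Surgeon})
  \;\sqsubseteq\; \exists \mathit{seenBy}.\mathit{Oncologist} = P
  \quad\text{(not compliant)}
\]
\[
  C_2 = \exists \mathit{seenBy}.\mathit{Surgeon} \;\not\sqsubseteq\; P
  \quad\text{(compliant)}
\]
% Yet C_2 need not be safe against FL0 background knowledge: an attacker may know
% B = \forall seenBy.Oncologist, which is itself compliant (B \not\sqsubseteq P), but
\[
  C_2 \sqcap \forall \mathit{seenBy}.\mathit{Oncologist}
  \;\sqsubseteq\; \exists \mathit{seenBy}.(\mathit{Surgeon} \sqcap \mathit{Oncologist})
  \;\sqsubseteq\; P .
\]
```

This is the kind of leak that motivates computing safe generalizations against attacker knowledge formulated in FL0 or FLE rather than EL alone.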
|
536 |
Faculty Roles in Curricular Change: Postmodern Narrative Ontologies. Mallory Lim Chua (15380036), 01 May 2023
Faculty are the primary designers and implementers of engineering curricula within the U.S. higher education system. This places them in a unique position to respond to decades of national calls for curricular change in undergraduate engineering education. Individual and institutional faculty efforts to respond to these calls are inevitably influenced by faculty ontologies of curricular change – in other words, what faculty understand curricular change to be. By 'ontology,' I mean what is or what they perceive as what is. Ontologies are agentic, meaning that ontological assumptions shape how faculty envision their own roles and thereby influence the sorts of curricular change actions they envision and legitimize for themselves.
Faculty ontologies of curricular change and their roles therein are complex roles within complex phenomena. By interrogating these ontologies, I make-visible the ways faculty might view – and thereby shape – the curricular worlds they and their students inhabit. To use a theatrical analogy: how do faculty stage their narratives of curricular change – what kinds of worlds do they set up in their stories? What kinds of interactions do they allow within that world? What kinds of characters do they cast themselves and others as playing?
To investigate faculty ontologies of curricular change, I analyzed the narratives they told about several curricular change projects they had been personally involved with. I gathered narrative data by conducting recurring interviews with six faculty narrators. I deconstructed the resulting narrative data corpus using a postmodern approach focused on tensions and contradictions. The resulting analysis generated four distinct and interrelated ontologies for curricular change. These four ontologies are presented as a starting point rather than an exhaustive catalogue, since infinitely many ontologies could be generated. Each of the four ontologies created for this work portrays faculty roles in curricular change in relation to both curriculum and students. Creating multiple ontologies then enabled me to show how the interaction of multiple ontologies can create insights that are not apparent from each ontology alone. Among other things, the interactions of all four ontologies form a complex portrait of faculty as learners who are always unmaking and remaking themselves in the context of curricular change.
By constructing a collective memory of faculty ontologies, I work to interrogate and disrupt current conceptions of roles and relationships in curricular change. These ontologies, and the methods developed to pursue and play with them, serve as tools for "cutting meaning loose" and "keep[ing] difference… at play" (Jackson & Mazzei, 2012, p. 70-71). In turn, these tools open up a wider space of new ideas and possibilities for courses, pedagogies, and cultures to be expressed, evaluated, and legitimized.
|
537 |
Towards an Ontology-Based Phenotypic Query Model. Beger, Christoph; Matthies, Franz; Schäfermeier, Ralph; Kirsten, Toralf; Herre, Heinrich; Uciteli, Alexandr, 10 October 2023
Clinical research based on data from patient or study data management systems plays an important role in transferring basic findings into the daily practice of physicians. Search queries on such management systems can support study recruitment, diagnostic processes, and risk-factor evaluation. Typically, however, both the query syntax and the underlying data structure vary greatly between different data management systems, which makes it difficult for domain experts (e.g., clinicians) to build and execute search queries. In this work, the Core Ontology of Phenotypes is used as a general model for phenotypic knowledge. This knowledge is required to create search queries that identify and classify individuals (e.g., patients or study participants) whose morphology, function, behaviour, or biochemical and physiological properties meet specific phenotype classes. A specific model describing a set of particular phenotype classes is called a Phenotype Specification Ontology; such an ontology can be converted automatically into search queries on data management systems. The methods described have already been used successfully in several projects. Using ontologies to model phenotypic knowledge on top of patient or study data management systems is a viable approach: it allows clinicians to model from a domain perspective without knowing the actual data structure or query language.
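The step from a phenotype specification to an executable query can be illustrated with a very small sketch. The phenotype class, its restrictions, the table layout, and the query dialect below are all invented for illustration; the actual conversion in the project works on OWL-based Phenotype Specification Ontologies and concrete data management systems.

```python
# Hypothetical sketch: turn a declarative phenotype specification into a
# SQL-like query string for one particular (assumed) table layout.

PHENOTYPE_SPEC = {
    "name": "ElderlyObese",
    "restrictions": [
        {"attribute": "age", "operator": ">=", "value": 65},
        {"attribute": "bmi", "operator": ">", "value": 30},
    ],
}


def spec_to_query(spec, table="patients"):
    """Build a query selecting individuals that satisfy all restrictions."""
    conditions = " AND ".join(
        f"{r['attribute']} {r['operator']} {r['value']}" for r in spec["restrictions"]
    )
    return f"SELECT subject_id FROM {table} WHERE {conditions}"


print(spec_to_query(PHENOTYPE_SPEC))
# SELECT subject_id FROM patients WHERE age >= 65 AND bmi > 30
```

The point of the ontology-based approach described above is precisely that clinicians only ever specify the left-hand side (the phenotype classes), while the system-specific query on the right is generated for them.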
|
538 |
Developing a Framework for Geographic Question Answering Systems Using GIS, Natural Language Processing, Machine Learning, and Ontologies. Chen, Wei, 02 June 2014
No description available.
|
539 |
Towards Designing and Generating User Interfaces by Using Expert Knowledge. Braham, Amani, 23 December 2022
The research reported in this PhD dissertation follows the design science methodology, which focuses on creating and evaluating artifacts. The main artifact of the thesis is a novel approach to designing and generating user interfaces using expert knowledge. To enable the use of expert knowledge, the approach reuses design patterns that embody interface-design expertise and provide reusable solutions to recurring design problems. Its main goal is to ensure that expert knowledge is integrated into the design and generation of user interfaces for mobile and Web applications. The specific contributions are summarized below. The first contribution is the AUIDP framework, defined to support the design and generation of adaptive interfaces for Web and mobile applications using HCI design patterns. The framework spans design time and run time: at design time, models of design patterns, the user interface, and the user profile are defined following a specific development methodology; at run time, the created models are used to select HCI design patterns and to generate user interfaces from the design solutions those patterns provide. The second contribution is a specification method for building an ontology model that turns the traditional text-based representation of HCI design patterns into a formal one. The method adopts the NeOn methodology to move from informal to formal representations. The resulting ontology, MIDEP, is a modular ontology that captures knowledge about design patterns as well as the user interface and the user's profile. The third contribution, IDEPAR, is the first system within the global AUIDP framework. It automatically recommends the most relevant design patterns for a given design problem, using a hybrid approach that combines text-based and ontology-based recommendation techniques to produce recommendations that provide appropriate design solutions. The fourth contribution is an interface-generator system called ICGDEP, the second system within the AUIDP framework. It relies on the HCI design patterns recommended by IDEPAR and automatically generates the user-interface source code for Web and mobile applications from the design solutions those patterns provide, using a generation method targeted at the destination application. The contributions have been validated from several perspectives. First, the MIDEP ontology is evaluated using competency questions together with technology-based and application-based evaluation approaches. Second, the IDEPAR system is evaluated against an expert-built gold standard and through a user-centric evaluation study. The ICGDEP system is then evaluated in terms of effective use by developers, considering the productivity factor. Finally, the global AUIDP framework is evaluated through case studies and usability studies.
Braham, A. (2022). Towards Designing and Generating User Interfaces by Using Expert Knowledge [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/190920
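The hybrid recommendation idea behind IDEPAR can be sketched roughly as follows. This is not the system's actual algorithm; the scoring functions, weight, and pattern names are hypothetical placeholders meant only to show how a text-based score and an ontology-based score might be combined.

```python
# Hypothetical hybrid ranking: blend a text-similarity score with an
# ontology-based relatedness score for each candidate design pattern.

def text_score(problem, pattern_description):
    """Crude bag-of-words overlap as a stand-in for text-based retrieval."""
    p, d = set(problem.lower().split()), set(pattern_description.lower().split())
    return len(p & d) / len(p | d) if p | d else 0.0


def hybrid_rank(problem, patterns, ontology_score, alpha=0.6):
    """patterns: dict name -> textual description.
    ontology_score: callable(name) -> relatedness in [0, 1] from the ontology."""
    scored = {
        name: alpha * text_score(problem, desc) + (1 - alpha) * ontology_score(name)
        for name, desc in patterns.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)


patterns = {
    "Wizard": "guide the user through a long form step by step",
    "Autocomplete": "suggest completions while the user types a query",
}
print(hybrid_rank("step by step form for novice users", patterns,
                  ontology_score=lambda name: 0.8 if name == "Wizard" else 0.2))
```

In this toy run the "Wizard" pattern ranks first because both the textual overlap and the assumed ontology relatedness favour it.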
|
540 |
Writing Sound Into the Wind: How Score Technologies Affect Our Musicking. Bhagwati, Sandeep, 01 October 2024
In this slightly updated text of his keynote speech at the 2019 annual congress of the GMTH, Sandeep Bhagwati discusses foundational concepts of current discourses on notation, such as notational perspective and comprovisation. He situates notation within an ongoing evolution in which sound production gradually moves away from human agency and its translation into the visual, and he maps out the field of possible notations opened up by new sensory technologies. Will the introduction of such responsive and fluid score technologies once more change the very nature of what we call music? Finally, he imagines possible shifts in the ontology of musicking that may be occasioned by such 'invisible' notations and by non-human agency in musicking.
|