321 |
The Spaces of Carbon: Calculation, Technology, and Discourse in the Production of Carbon Forestry Offsets in Costa Rica / Lansing, David M. 28 September 2009 (has links)
No description available.
|
322 |
A framework for analysing the complexity of ontology / Kazadi, Yannick Kazela 11 1900 (has links)
M. Tech. (Department of Information and Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology / The emergence of the Semantic Web has resulted in more and more large-scale ontologies being developed in real-world applications to represent and integrate knowledge and data in various domains. This has given rise to the problem of selecting the appropriate ontology for reuse among the set of ontologies describing a domain. To address this problem, it is argued that evaluating the complexity of the ontologies of a domain can assist in determining which ontologies are suitable for reuse. This study investigates existing metrics for measuring the design complexity of ontologies and implements these metrics in a framework that provides a stepwise process for evaluating the complexity of the ontologies of a knowledge domain. The implementation of the framework goes through four phases: (1) the download of 100 biomedical ontologies from the BioPortal repository to constitute the dataset; (2) the design of a set of algorithms to compute the complexity metrics of the ontologies in the dataset, including the depth of inheritance (DIP), size of the vocabulary (SOV), entropy of ontology graphs (EOG), average part length (APL), average number of paths per class (ANP), tree impurity (TIP), relationship richness (RR) and class richness (CR); (3) the ranking of the ontologies in the dataset through the aggregation of their complexity metrics using five Multi-Attribute Decision Making (MADM) methods, namely the Weighted Sum Method (WSM), Weighted Product Method (WPM), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), Weighted Linear Combination Ranking Technique (WLCRT) and Elimination and Choice Translating Reality (ELECTRE); and (4) the validation of the framework through a summary of the results of the previous phases and an analysis of their impact on the selection and reuse of the biomedical ontologies in the dataset. The ranking results of the study constitute important guidelines for the selection and reuse of biomedical ontologies in the dataset. Although the framework proposed in this study has been applied in the biomedical domain, it could be applied in any other domain of the Semantic Web to analyze the complexity of ontologies.
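The aggregation step in phase (3) can be illustrated with the simplest of the five MADM methods, the Weighted Sum Method. The sketch below is not the thesis's implementation: the metric values, weights, and ontology names are invented, and it normalizes only three of the eight metrics for brevity.

```python
# Illustrative sketch of Weighted Sum Method (WSM) ranking over
# ontology complexity metrics. All values and weights are hypothetical.

def wsm_rank(ontologies, weights):
    """Rank ontologies by the weighted sum of their min-max-normalized metrics."""
    metrics = list(weights)
    lo = {m: min(o[m] for o in ontologies.values()) for m in metrics}
    hi = {m: max(o[m] for o in ontologies.values()) for m in metrics}

    def score(o):
        return sum(
            weights[m] * ((o[m] - lo[m]) / (hi[m] - lo[m]) if hi[m] > lo[m] else 0.0)
            for m in metrics
        )

    return sorted(ontologies, key=lambda name: score(ontologies[name]), reverse=True)

# Hypothetical metric values for three biomedical ontologies.
data = {
    "GO":   {"SOV": 45000, "DIP": 14, "RR": 0.42},
    "DOID": {"SOV": 12000, "DIP": 11, "RR": 0.38},
    "PATO": {"SOV": 2600,  "DIP": 9,  "RR": 0.55},
}
weights = {"SOV": 0.4, "DIP": 0.3, "RR": 0.3}
print(wsm_rank(data, weights))  # most to least complex under these weights
```

The other four methods (WPM, TOPSIS, WLCRT, ELECTRE) differ in how they aggregate the normalized metrics, which is why the thesis compares the rankings they produce.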
|
323 |
Using Concept Maps as a Tool for Cross-Language Relevance Determination / Richardson, W. Ryan 02 August 2007 (has links)
Concept maps, introduced by Novak, aid learners' understanding. I hypothesize that concept maps also can function as a summary of large documents, e.g., electronic theses and dissertations (ETDs). I have built a system that automatically generates concept maps from English-language ETDs in the computing field. The system also will provide Spanish translations of these concept maps for native Spanish speakers. Using machine translation techniques, my approach leads to concept maps that could allow researchers to discover pertinent dissertations in languages they cannot read, helping them to decide if they want a potentially relevant dissertation translated.
I am using a state-of-the-art natural language processing system, called Relex, to extract noun phrases and noun-verb-noun relations from ETDs, and then produce concept maps automatically. I also have incorporated information from the table of contents of ETDs to create novel styles of concept maps. I have conducted five user studies to evaluate user perceptions of these different map styles.
I am using several methods to translate node and link text in concept maps from English to Spanish. Nodes labeled with single words from a given technical area can be translated using wordlists, but phrases in specific technical fields can be difficult to translate. Thus I have amassed a collection of about 580 Spanish-language ETDs from Scirus and two Mexican universities and I am using this corpus to mine phrase translations that I could not find otherwise.
The usefulness of the automatically-generated and translated concept maps has been assessed in an experiment at Universidad de las Americas (UDLA) in Puebla, Mexico. This experiment demonstrated that concept maps can augment abstracts (translated using a standard machine translation package) in helping Spanish speaking users find ETDs of interest. / Ph. D.
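The core data structure here, a concept map built from noun-verb-noun relations, can be sketched as a labeled graph. The triples below are hand-written examples, not output of Relex, which the dissertation uses for the actual extraction.

```python
# Toy sketch: assembling a concept map (a labeled directed graph)
# from noun-verb-noun triples. The triples are invented examples.
triples = [
    ("concept maps", "summarize", "dissertations"),
    ("system", "generates", "concept maps"),
]

# Adjacency representation: node -> list of (link label, target node)
cmap = {}
for subj, verb, obj in triples:
    cmap.setdefault(subj, []).append((verb, obj))

print(cmap["system"])  # outgoing links of the "system" node
```

Translating such a map then reduces to translating the node and link labels, which is where the wordlists and the mined phrase translations described above come in.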
|
324 |
Unification in the Description Logic ELHR+ without the Top Concept modulo Cycle-Restricted Ontologies: (Extended Version) / Baader, Franz, Fernandez Gil, Oliver 23 April 2024 (has links)
Unification has been introduced in Description Logic (DL) as a means to detect redundancies in ontologies. In particular, it was shown that testing unifiability in the DL EL is an NP-complete problem, and this result has been extended in several directions. Surprisingly, it turned out that the complexity increases to PSpace if one disallows the use of the top concept in concept descriptions. Motivated by features of the medical ontology SNOMED CT, we extend this result to a setting where the top concept is disallowed, but there is a background ontology consisting of restricted forms of concept and role inclusion axioms. We are able to show that the presence of such axioms does not increase the complexity of unification without top, i.e., testing for unifiability remains a PSpace-complete problem.
|
325 |
Ontology-Mediated Queries for Probabilistic Databases: Extended Version / Borgwardt, Stefan, Ceylan, Ismail Ilkan, Lukasiewicz, Thomas 28 December 2023 (has links)
Probabilistic databases (PDBs) are usually incomplete, e.g., contain only the facts that have been extracted from the Web with high confidence. However, missing facts are often treated as being false, which leads to unintuitive results when querying PDBs. Recently, open-world probabilistic databases (OpenPDBs) were proposed to address this issue by allowing probabilities of unknown facts to take any value from a fixed probability interval. In this paper, we extend OpenPDBs by Datalog± ontologies, under which both upper and lower probabilities of queries become even more informative, enabling us to distinguish queries that were indistinguishable before. We show that the dichotomy between P and PP in (Open)PDBs can be lifted to the case of first-order rewritable positive programs (without negative constraints); and that the problem can become NP^PP-complete, once negative constraints are allowed. We also propose an approximating semantics that circumvents the increase in complexity caused by negative constraints.
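The open-world idea of interval probabilities can be illustrated with a deliberately simplified sketch: a query that is a conjunction of independent facts, where each fact absent from the PDB may take any probability in [0, λ]. This toy model ignores the Datalog± ontologies and query rewriting that the paper actually studies.

```python
# Toy sketch of OpenPDB-style interval semantics: unknown facts have
# probability anywhere in [0, lam], so a conjunctive query over
# independent facts gets a lower and an upper probability.

def query_bounds(fact_probs, lam=0.3):
    """Lower/upper probability of a conjunction of independent facts.

    fact_probs: known probability per fact, or None if the fact is
    absent from the PDB (open world).
    """
    lo = hi = 1.0
    for p in fact_probs:
        if p is None:
            lo *= 0.0   # the unknown fact may be false...
            hi *= lam   # ...or true with probability up to lam
        else:
            lo *= p
            hi *= p
    return lo, hi

# A query joining one extracted fact (p=0.8) with one unknown fact.
print(query_bounds([0.8, None], lam=0.3))
```

Under the closed-world assumption the unknown fact would be treated as false and the query probability would collapse to 0; the interval [0, 0.24] is what makes the open-world answer more informative.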
|
326 |
Ceramics as indicators of Late Bronze Age environments at Zürich-Alpenquai (Switzerland). / Jennings, Benjamin R. 11 June 2015 (links)
Lake-dwellings in the northern Alpine region are renowned for their extraordinary organic preservation. In addition to organic remains, thousands of ceramic sherds are also recovered. This paper addresses ceramic sherds from the Late Bronze Age site Zürich-Alpenquai, and assesses over 2000 sherds for indications of erosion and abrasion in addition to quantifying sherd size and plotting the spatial distribution of these factors. Recording such wear patterns can provide indications of deposition practices in addition to environmental conditions pre- and post-deposition. In this manner the study of ceramic remains from wetland sites for abrasion can complement environmental studies addressing conditions at the time of artefact deposition, and contribute to discussions of influences for lake-settlement abandonment.
|
327 |
Modeling Email Phishing Attacks / Almoqbil, Abdullah 12 1900 (has links)
Cheating, beguiling, and misleading information exist all around us; understanding deception and its consequences is crucial in our information environment. This study investigates deception in phishing emails that successfully bypassed the Microsoft 365 filtering system. We devised a model that explains why some people are deceived and how targeted individuals and organizations can prevent or counter attacks. The theoretical framework used in this study is Anderson's functional ontology construction (FOC). The methodology involves a quantitative and qualitative descriptive design, where the data source is a set of phishing emails archived from a Tier 1 university. We computed term frequency-inverse document frequency (Tf-idf) and the distribution of words over documents (topic modeling) and found that the subjects of phishing emails targeting educational organizations are related to finances, jobs, and technologies. Our analysis also shows that the phishing emails in the dataset fall into six categories: reward, urgency, curiosity, fear, job, and entertainment. Results indicate that staff and students were primarily targeted, and a list of the verbs most used for deception was compiled. We uncovered the stimuli used by scammers and the types of reinforcement used to misinform targets and ensure successful trapping via phishing emails. We identified how scammers pick their targets and how they tailor and systematically orchestrate individual attacks on them. The limitations of this study pertain to the sample size and the collection method. Future work will focus on implementing the derived model in software that can perform deception identification, target alerting, and protection against advanced email phishing.
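The Tf-idf scoring mentioned in the abstract can be sketched with a few lines of standard-library Python. The email subjects below are invented examples, not records from the study's dataset.

```python
# Minimal Tf-idf sketch over email subject lines, standard library only.
# The subjects are invented; the study used archived phishing emails.
import math
from collections import Counter

subjects = [
    "urgent payroll update required",
    "part time job offer for students",
    "your mailbox quota is full urgent action",
]

docs = [s.split() for s in subjects]
n = len(docs)
# Document frequency: in how many subjects does each term appear?
df = Counter(term for doc in docs for term in set(doc))

def tfidf(term, doc):
    tf = doc.count(term) / len(doc)
    idf = math.log(n / df[term])  # higher for terms rare across emails
    return tf * idf

# Highest-scoring term in the first subject line.
top = max(docs[0], key=lambda t: tfidf(t, docs[0]))
print(top)
```

Note how "urgent", which recurs across subjects, scores lower than terms unique to one email; topic modeling then groups such weighted terms into themes like finances, jobs, and technologies.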
|
328 |
Ontology-based semantic query processing in database systems / Necib, Chokri Ben 11 January 2008 (links)
Currently, database management systems rely solely on the exact syntax of queries to retrieve data; the meaning of the real-world objects represented in them is described neither explicitly nor completely. As a consequence, query answers often do not meet the user's intention. This thesis proposes an ontology-based approach to semantic query processing in database systems: semantic information derived from a given ontology is used to transform a user query into another query that may provide a more meaningful answer to the user. 
For this purpose, we define and specify constraints and mappings that relate the concepts of an ontology to those of an underlying database, and we develop a set of algorithms that allow us to find these mappings in a semi-automatic way. Moreover, we propose a set of semantic rules for transforming queries using terms derived from the ontology. The main property of a rule is that it replaces or enriches the terms of a query with other terms represented by the same ontological concepts. We classify the rules and demonstrate their usefulness using practical examples. Furthermore, we make use of the theory of term rewriting systems to formalize the transformation process and to study the basic properties for applying these rules. Finally, we implement a prototype system using current technologies and evaluate its capability on a real-world application.
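The rewriting idea, replacing or enriching query terms with terms represented by the same ontological concept, can be sketched in miniature. The mini-ontology and the query below are invented examples; the thesis formalizes this with term rewriting systems over real database schemas.

```python
# Toy sketch of semantic query rewriting: each query term is expanded
# with the other terms its ontological concept covers. The ontology
# and query are invented examples.

# concept -> terms the ontology groups under it (hypothetical)
ontology = {
    "Vehicle": {"car", "truck", "motorcycle"},
    "Color": {"red", "crimson"},
}

def expand_term(term):
    """Return all terms sharing an ontological concept with `term`."""
    for terms in ontology.values():
        if term in terms:
            return terms
    return {term}  # no concept found: leave the term as-is

def rewrite(query_terms):
    # Replace each term by the disjunction of its concept's terms,
    # mirroring a substitution rule of a term rewriting system.
    return [sorted(expand_term(t)) for t in query_terms]

print(rewrite(["car", "red"]))
```

Each inner list would become a disjunction in the rewritten SQL query, so a search for "car" also retrieves rows stored as "truck" or "motorcycle".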
|
329 |
VENCE : un modèle performant d'extraction de résumés basé sur une approche d'apprentissage automatique renforcée par de la connaissance ontologique / Motta, Jesus Antonio 23 April 2018 (links)
Many artificial intelligence methods and techniques for information extraction, pattern recognition, and data mining are used to extract summaries automatically. In particular, new semi-supervised machine learning models augmented with ontological knowledge make it possible to select the sentences of a corpus according to their information content. The corpus is considered as a set of sentences to which optimization methods are applied to identify the most important attributes. These form the training set, from which a learning algorithm can abduce a classification function able to discriminate the sentences of new corpora according to their information content. Currently, even though the results are interesting, the effectiveness of models based on this approach is still low, especially with respect to the discriminating power of the classification functions. In this thesis, a new machine-learning-based model is proposed whose effectiveness is improved by adding ontological knowledge to the training set. The originality of this model is described through three journal articles. The first article shows how linear techniques can be applied in an original way to optimize a workspace in the context of extractive summarization. The second article explains how to insert ontological knowledge to considerably improve the performance of the classification functions; this insertion is performed by adding, to the training set, lexical chains extracted from ontological knowledge bases. The third article describes VENCE, the new machine learning model for extracting the most information-bearing sentences in order to produce summaries. 
The performance of VENCE was evaluated by comparing its results with those produced by current commercial and public software, as well as with those published in very recent scientific articles. The usual recall, precision and F-measure metrics, together with the ROUGE toolkit, showed the superiority of VENCE. This model could be profitable in other information extraction contexts, for instance to define models for sentiment analysis.
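Extractive summarization of the kind described above, selecting whole sentences by information content, can be caricatured with a simple frequency-based scorer. This sketch is not VENCE: it uses raw word frequency where VENCE learns a classification function enriched with ontological lexical chains.

```python
# Minimal extractive-summarization sketch: score each sentence by the
# average corpus frequency of its words, keep the top k in order.
# This is a baseline illustration, not the VENCE model itself.
import re
from collections import Counter

def extract_summary(text, k=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        toks = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    best = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in best]  # restore original order

text = ("Ontologies model domain knowledge. "
        "Ontologies help summarization systems. "
        "The weather was pleasant.")
print(extract_summary(text, k=1))
```

Replacing the frequency score with a learned classifier over richer attributes, including ontological lexical chains, is precisely the step the thesis argues improves discriminating power.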
|
330 |
Contrôle d'accès par les ontologies : Outil de validation automatique des droits d'accès / Sadio, Étienne Théodore 23 April 2018 (links)
Today, we are witnessing the emergence of an IT ecosystem within companies, due to the coexistence of several types of computer systems and equipment. This diversity adds complexity to the management of computer security in general, and of access control in particular. Indeed, different computer systems implement access control based on models such as MAC, DAC and RBAC, among others. Each system thus has its own strategy and therefore its own access control model, which creates heterogeneity in the management of access rights. To address this need in access control management, this thesis presents the design of an ontology and of a tool for the automatic management of access rights in a heterogeneous environment. 
The tool is based on our ontology, which introduces an abstraction over the access control models implemented in the different systems to be analyzed. Security administrators thus have a tool to validate all access rights in their IT ecosystems.
|