  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

An Ontology-based Hybrid Recommendation System Using Semantic Similarity Measure And Feature Weighting

Ceylan, Ugur 01 September 2011 (has links) (PDF)
The task of recommendation systems is to recommend items that are relevant to the preferences of users. The two main approaches are collaborative filtering and content-based filtering. Collaborative filtering systems suffer from major problems such as sparsity, scalability, and the new-item and new-user problems. In this thesis, a hybrid recommendation system based on the content-boosted collaborative filtering approach is proposed in order to overcome the sparsity and new-item problems of collaborative filtering. The content-based part of the proposed approach exploits semantic similarities between items, based on a priori defined ontology-based metadata in the movie domain, and feature weights derived from content-based user models. Recommendations are generated using the semantic similarities between items and collaborative-based user models. The results of the evaluation phase show that the proposed approach improves the quality of recommendations.
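The content-boosting idea described above can be sketched in a few lines: ontology-derived item-item similarities are used to fill in ratings missing from the sparse user-item matrix before collaborative prediction. This is a minimal illustration under stated assumptions, not the thesis's actual model; the item names, similarity values and the fill-in rule are all invented for the example.

```python
# Hypothetical sketch of content-boosted collaborative filtering:
# semantic item-item similarities densify the sparse rating matrix.

def densify(ratings, item_sim):
    """Fill unrated cells with a similarity-weighted average of the
    user's known ratings (the 'content boost')."""
    boosted = {}
    for user, rated in ratings.items():
        boosted[user] = dict(rated)
        for item in item_sim:
            if item in rated:
                continue
            num = sum(item_sim[item].get(j, 0.0) * r for j, r in rated.items())
            den = sum(item_sim[item].get(j, 0.0) for j in rated)
            if den > 0:
                boosted[user][item] = num / den
    return boosted

# illustrative data: one user, one unrated item with known similarities
ratings = {"alice": {"matrix": 5.0, "alien": 4.0}}
item_sim = {"blade_runner": {"matrix": 0.8, "alien": 0.6}}
print(densify(ratings, item_sim)["alice"]["blade_runner"])
```

The densified matrix can then feed any neighbourhood-based collaborative predictor, which is the general shape of content-boosted approaches.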
12

Using semantic similarity measures across Gene Ontology to predict protein-protein interactions

Helgadóttir, Hanna Sigrún January 2005 (has links)
Living cells are controlled by proteins and genes that interact through complex molecular pathways to achieve a specific function. Determining protein-protein interactions is therefore fundamental to understanding the cell's lifecycle and functions. The function of a protein is also largely determined by its interactions with other proteins. The amount of available protein-protein interaction data has multiplied with the emergence of large-scale detection technologies, but the drawback of such methods is the relatively high amount of noise in the data. Experimentally determining protein-protein interactions is time-consuming, and the aim of this project is therefore to create a computational method that predicts interactions with high sensitivity and specificity. Semantic similarity measures were applied across the Gene Ontology terms assigned to proteins in S. cerevisiae to predict protein-protein interactions. Three semantic similarity measures were tested to see which one performs best in predicting such interactions. Based on the results, a method that predicts protein function in connection with connectivity was devised. The results show that semantic similarity is a useful measure for predicting protein-protein interactions.
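A common family of Gene Ontology similarity measures, of the kind compared in work like the above, scores two terms by the information content of their most informative common ancestor (the Resnik approach). The following is a sketch over a toy GO-like hierarchy; the terms, ancestor sets and probabilities are assumptions for illustration, not real GO data.

```python
import math

# Resnik-style similarity: sim(t1, t2) = IC of the most informative
# common ancestor, with IC(t) = -log p(t) estimated from annotations.

ancestors = {
    "kinase_activity": {"kinase_activity", "catalytic_activity", "molecular_function"},
    "phosphatase_activity": {"phosphatase_activity", "catalytic_activity", "molecular_function"},
}
# toy annotation probabilities (root has p = 1, so IC = 0)
p = {"molecular_function": 1.0, "catalytic_activity": 0.25,
     "kinase_activity": 0.05, "phosphatase_activity": 0.04}

def resnik(t1, t2):
    common = ancestors[t1] & ancestors[t2]
    return max(-math.log(p[t]) for t in common)

print(round(resnik("kinase_activity", "phosphatase_activity"), 3))
```

To move from term-level to protein-level similarity, implementations typically aggregate (e.g. max or average) over the GO terms annotated to each protein, then threshold the score to predict an interaction.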
13

Personalized web search through the use of semantic networks

Ζώτος, Νικόλαος 15 November 2007 (has links)
When searching the web, it is often possible that there are too many results available for ambiguous queries. Text snippets, extracted from the retrieved pages, are an indicator of the pages' usefulness to the query intention and can be used to focus the scope of search results. In this paper, we propose a novel method for automatically extracting web page snippets that are highly relevant to the query intention and expressive of the pages' entire content. We show that the usage of semantics, as a basis for focused retrieval, produces high-quality text snippet suggestions. The snippets delivered by our method are significantly better in terms of retrieval performance compared to those derived using the pages' statistical content. Furthermore, our study suggests that semantically driven snippet generation can also be used to augment traditional passage retrieval algorithms based on word overlap or statistical weights, since they typically differ in coverage and produce different results. User clicks on the query-relevant snippets can be used to refine the query results and promote the most comprehensive among the relevant documents.
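The core contrast here, semantic rather than purely statistical snippet selection, can be sketched as scoring sentences against a semantically expanded query. The hand-made expansion table below stands in for a real semantic network, and all data is illustrative.

```python
import re

# Query-biased snippet extraction sketch: pick the sentence with the
# largest overlap against the query terms plus related terms drawn
# from a (here, hard-coded) semantic expansion table.

def best_snippet(sentences, query_terms, related):
    expanded = set(query_terms)
    for t in query_terms:
        expanded |= related.get(t, set())
    def score(s):
        return len(expanded & set(re.findall(r"\w+", s.lower())))
    return max(sentences, key=score)

related = {"jaguar": {"cat", "feline", "predator"}}
sentences = [
    "The jaguar is a large feline predator.",
    "The new jaguar model ships next year.",
]
print(best_snippet(sentences, ["jaguar"], related))
```

A word-overlap baseline would score both sentences equally on the literal query term; the semantic expansion is what disambiguates the animal sense from the car sense.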
14

An Event-related Potential Investigation on Associative Encoding and the Effects of Intra-list Semantic Similarity

Kim, Alice Sun-Nam 14 July 2009 (has links)
Event-related potentials were recorded as subjects were presented with pairs of words, one word at a time, to examine the electrocortical manifestations of association formation and the effect of intra-list semantic similarity. Two types of lists were presented: Same – all pairs belonged to the same semantic category; Different – all pairs belonged to a different semantic category. Subjects were told to memorize the pairs for a following paired associate recall test. Recall was better for the Different than Same lists. Subsequent recall was predicted by the amplitudes of a potential lasting throughout the epoch and the P555 to each word of a pair (likely reflecting state- and item-related encoding activity, respectively), as well as a late positive wave that occurred after the offset of the second word, which is thought to reflect association formation. A larger N425 was elicited by pairs in the Different than Same lists, likely reflecting semantic integration.
16

LEXICAL KNOWLEDGE OF VERB-PARTICLE BY SAUDI ENGLISH LEARNERS

ALTURKI, Fadwi Waleed 01 May 2015 (has links)
Verb-particle constructions are among the most complex components of the English language. Understanding and producing such difficult constructions in a second language (L2) is a challenge for L2 learners of English. This research was based on the study by Blais and Gonnerman (2013). The purpose of the current study was to measure American and Saudi participants' sensitivity to the degree of semantic similarity between verb/verb-particle constructions. The similarity-rating survey was administered to 107 American native English speakers and 67 Saudi English learners. The participants were asked to rate 78 items based on their knowledge of the semantic similarity between verb/verb-particle pairs. Results revealed two major findings: American native speakers and Saudi English learners did not behave consistently on the similarity-rating task, and the results did not support the previous categorization of the 78 items established by Blais and Gonnerman. Extrapolating from these findings, it appears that similarity judgments of verb/verb-particle pairs may be sample-specific, even among native speakers. It is therefore questionable whether Blais and Gonnerman's instrument can be used to reliably compare the judgments of different samples of native and non-native speakers.
17

Knowledge-based Semantic Measures: From Theory to Applications

Harispe, Sébastien 25 April 2014 (has links)
The notions of semantic proximity, distance, and similarity have long been considered essential for the elaboration of numerous cognitive processes, and are therefore of major importance for the communities involved in the development of artificial intelligence. This thesis studies the diversity of semantic measures which can be used to compare lexical entities, concepts and instances by analysing corpora of texts and knowledge representations (e.g., ontologies). Strengthened by the development of Knowledge Engineering and Semantic Web technologies, these measures are arousing increasing interest in both academic and industrial fields.
This manuscript begins with an extensive state of the art which presents numerous contributions proposed by several communities, and underlines the diversity and interdisciplinary nature of this domain. Thanks to this work, despite the apparent heterogeneity of semantic measures, we were able to distinguish common properties and therefore propose a general classification of existing approaches. Our work goes on to look more specifically at measures which take advantage of knowledge representations expressed by means of semantic graphs, e.g. RDF(S) graphs. We show that these measures rely on a reduced set of abstract primitives and that, even if they have generally been defined independently in the literature, most of them are only specific expressions of generic parametrised measures. This result leads us to the definition of a unifying theoretical framework for semantic measures, which can be used to: (i) design new measures, (ii) study theoretical properties of measures, and (iii) guide end-users in the selection of measures adapted to their usage context. The relevance of this framework is demonstrated in its first practical applications, which show, for instance, how it can be used to perform theoretical and empirical analyses of measures with a previously unattained level of detail. Interestingly, this framework provides new insight into semantic measures and opens interesting perspectives for their analysis.
Having uncovered a flagrant lack of generic and efficient software solutions dedicated to (knowledge-based) semantic measures, a lack which clearly hampers both the use and analysis of semantic measures, we consequently developed the Semantic Measures Library (SML): a generic software library dedicated to the computation and analysis of semantic measures. The SML can be used to take advantage of hundreds of measures defined in the literature or derived from the parametrised functions introduced by the proposed unifying framework. These measures can be analysed and compared using the functionalities provided by the library. The SML is accompanied by extensive documentation, community support and software solutions which enable non-developers to take full advantage of the library. In broader terms, this project proposes to federate the several communities involved in this domain in order to create an interdisciplinary synergy around the notion of semantic measures: http://www.semantic-measures-library.org
This thesis also presents several algorithmic and theoretical contributions related to semantic measures: (i) an innovative method for the comparison of instances defined in a semantic graph, whose benefits we underline in particular for the definition of content-based recommendation systems, (ii) a new approach to compare concepts defined in overlapping taxonomies, (iii) algorithmic optimisations for the computation of a specific type of semantic measure, and (iv) a semi-supervised learning technique which can be used to identify semantic measures adapted to a specific usage context, while taking into account the uncertainty associated with the benchmark in use. These contributions have been validated by several international and national publications.
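The claim that many classic measures are "specific expressions of generic parametrised measures" built from a few primitives can be illustrated concretely: Resnik, Lin and Jiang-Conrath all decompose into two primitives, the information content IC of a term and the IC of the most informative common ancestor. The toy taxonomy and probabilities below are assumptions for the example, not SML code.

```python
import math

# Two shared primitives...
p = {"entity": 1.0, "animal": 0.5, "dog": 0.1, "cat": 0.2}
parents = {"dog": "animal", "cat": "animal", "animal": "entity"}

def ic(t):
    """Information content: IC(t) = -log p(t)."""
    return -math.log(p[t])

def ancestors(t):
    out = {t}
    while t in parents:
        t = parents[t]
        out.add(t)
    return out

def mica_ic(a, b):
    """IC of the most informative common ancestor."""
    return max(ic(t) for t in ancestors(a) & ancestors(b))

# ...from which several classic measures are parametric expressions:
resnik  = lambda a, b: mica_ic(a, b)
lin     = lambda a, b: 2 * mica_ic(a, b) / (ic(a) + ic(b))
jc_dist = lambda a, b: ic(a) + ic(b) - 2 * mica_ic(a, b)

print(round(lin("dog", "cat"), 3))
```

A unifying framework of this kind makes the design space explicit: new measures fall out by varying how the primitives are combined rather than by redefining everything from scratch.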
18

Detecting opinion spam and fake news using n-gram analysis and semantic similarity

Ahmed, Hadeer 14 November 2017 (has links)
In recent years, deceptive content such as fake news and fake reviews, also known as opinion spam, has increasingly become a dangerous prospect for online users. Fake reviews affect consumers and stores alike. Furthermore, the problem of fake news gained attention in 2016, especially in the aftermath of the last US presidential election. Fake reviews and fake news are closely related phenomena, as both consist of writing and spreading false information or beliefs. The opinion spam problem was formulated for the first time only a few years ago, but it has quickly become a growing research area due to the abundance of user-generated content. It is now easy for anyone to write fake reviews or fake news on the web. The biggest challenge is the lack of an efficient way to tell the difference between a real review and a fake one; even humans are often unable to tell the difference. In this thesis, we developed an n-gram model to automatically detect fake content, with a focus on fake reviews and fake news. We studied and compared two different feature extraction techniques and six machine learning classification techniques. Furthermore, we investigated the impact of keystroke features on the accuracy of the n-gram model. We also applied semantic similarity metrics to detect near-duplicated content. Experimental evaluation of the proposed models, using existing public datasets and a newly introduced fake news dataset, indicates improved performance compared to the state of the art.
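Two of the ingredients mentioned above, word n-gram features and a similarity check for near-duplicated content, can be sketched with the standard library alone. The texts and the 0.5 threshold are illustrative assumptions; the thesis's actual feature pipeline and classifiers are not reproduced here.

```python
from collections import Counter
import math

def ngrams(text, n=2):
    """Word n-gram counts, the basic feature unit of an n-gram model."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def cosine(a, b):
    """Cosine similarity between two n-gram count vectors; values
    near 1 flag near-duplicated (likely copy-pasted) content."""
    common = set(a) & set(b)
    num = sum(a[g] * b[g] for g in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

r1 = ngrams("great product fast shipping great product")
r2 = ngrams("great product fast shipping would buy again")
print(cosine(r1, r2) > 0.5)
```

In a full system these n-gram vectors would typically be TF-IDF weighted and fed to a supervised classifier, while the similarity score serves as an independent near-duplicate filter.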
19

Grouping Biological Data

Rundqvist, David January 2006 (has links)
Today, scientists in various biomedical fields rely on biological data sources in their research. Large amounts of information concerning, for instance, genes, proteins and diseases are publicly available on the internet, and are used daily for acquiring knowledge. Typically, biological data is spread across multiple sources, which has led to heterogeneity and redundancy. The current thesis suggests grouping as one way of computationally managing biological data. A conceptual model for this purpose is presented, which takes properties specific for biological data into account. The model defines sub-tasks and key issues where multiple solutions are possible, and describes what approaches for these that have been used in earlier work. Further, an implementation of this model is described, as well as test cases which show that the model is indeed useful. Since the use of ontologies is relatively new in the management of biological data, the main focus of the thesis is on how semantic similarity of ontological annotations can be used for grouping. The results of the test cases show for example that the implementation of the model, using Gene Ontology, is capable of producing groups of data entries with similar molecular functions.
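One way the thesis's grouping task can be realized is threshold-based agglomeration: data entries whose pairwise annotation similarity exceeds a threshold land in the same group. The sketch below is a generic single-linkage-style illustration under assumed similarity values, not the model actually implemented in the thesis; in practice the similarities would come from a GO-based semantic similarity measure.

```python
# Group entries by pairwise semantic similarity of their annotations.

def group(entries, sim, threshold=0.7):
    groups = []
    for e in entries:
        placed = False
        for g in groups:
            # join the first group containing a similar-enough member
            if any(sim(e, member) >= threshold for member in g):
                g.append(e)
                placed = True
                break
        if not placed:
            groups.append([e])
    return groups

# illustrative pairwise similarities between three proteins
sims = {frozenset({"p1", "p2"}): 0.9, frozenset({"p1", "p3"}): 0.2,
        frozenset({"p2", "p3"}): 0.3}
sim = lambda a, b: sims[frozenset({a, b})]
print(group(["p1", "p2", "p3"], sim))
```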
20

Graph-based Centrality Algorithms for Unsupervised Word Sense Disambiguation

Sinha, Ravi Som 12 1900 (has links)
This thesis introduces an innovative methodology that combines traditional dictionary-based approaches to word sense disambiguation (semantic similarity measures and overlap of word glosses, both based on WordNet) with graph-based centrality methods, namely vertex degree, PageRank, closeness, and betweenness. The approach is completely unsupervised and is based on creating graphs for the words to be disambiguated. As the first stage of our experiments, we test several possible combinations of the semantic similarity measures. The next stage scores individual vertices in the previously created graphs using several graph connectivity measures. In the final stage, several voting schemes are applied to the results obtained from the different centrality algorithms. The most important contributions of this work are not only that the approach is novel and works well, but also that it has great potential for overcoming the knowledge-acquisition bottleneck that has apparently brought research in supervised WSD to a plateau. Research of this kind, which does not require manually annotated data, holds much promise, and our work is one of the first steps, albeit a small one, in this direction. The complete system is built and tested on standard benchmarks and is comparable with work done on graph-based word sense disambiguation as well as lexical chains. The evaluation indicates that the right combination of the above-mentioned metrics can be used to develop an unsupervised disambiguation engine as powerful as the state of the art in WSD.
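The graph construction this abstract describes can be sketched minimally: one node per candidate sense, edges weighted by a sense-to-sense semantic similarity, and a centrality score used to pick the winning sense for each word. The senses and weights below are made-up assumptions; degree centrality stands in for the fuller set of measures (PageRank, closeness, betweenness) that would score the same graph.

```python
# Degree-centrality word sense disambiguation over a toy sense graph.

def degree_scores(edges):
    """Weighted degree of each vertex: the sum of incident edge weights."""
    scores = {}
    for (a, b), w in edges.items():
        scores[a] = scores.get(a, 0.0) + w
        scores[b] = scores.get(b, 0.0) + w
    return scores

# candidate senses for two context words; similarity weights are illustrative
edges = {
    ("bank#finance", "money#currency"): 0.9,
    ("bank#river", "money#currency"): 0.1,
}
scores = degree_scores(edges)
best_bank = max(["bank#finance", "bank#river"], key=scores.get)
print(best_bank)
```

In the full methodology, several centrality algorithms each nominate a sense and a voting scheme arbitrates among them, which is the combination step the abstract highlights.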
