  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Assisting the software reuse process through classification and retrieval of software models

Lester, Neil January 2000 (has links)
No description available.
2

Computing Semantic Association: Comparing Spreading Activation and Spectral Association for Ontology Learning

Wohlgenannt, Gerhard, Belk, Stefan, Schett, Matthias January 2013 (has links) (PDF)
Spreading activation is a common method for searching semantic or neural networks: it iteratively propagates activation from one or more sources through a network, a process that is computationally intensive. Spectral association is a recent technique that approximates spreading activation in one go, and therefore provides very fast computation of activation levels. In this paper we evaluate the characteristics of spectral association as a replacement for classic spreading activation in the domain of ontology learning. The evaluation focuses on run-time performance measures of our implementation of both methods for various network sizes. Furthermore, we investigate differences in output, i.e. the resulting ontologies, between spreading activation and spectral association. The experiments confirm a substantial speedup in the computation of activation levels, and also a fast calculation of the spectral association operator when using a variant we call brute force. The paper concludes with pros and cons and usage recommendations for the methods. (authors' abstract)
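As a point of reference for the run-time comparison in the abstract, classic iterative spreading activation can be sketched in a few lines of Python; the toy graph, edge weights, decay factor, and iteration count below are illustrative assumptions, not the authors' implementation:

```python
def spread_activation(graph, sources, decay=0.5, iterations=3):
    """Iteratively propagate activation from `sources` through `graph`,
    a dict mapping node -> list of (neighbour, edge_weight) pairs."""
    activation = {node: 0.0 for node in graph}
    for source in sources:
        activation[source] = 1.0
    for _ in range(iterations):
        # Collect the activation each node receives this round,
        # attenuated by edge weight and the global decay factor.
        incoming = {node: 0.0 for node in graph}
        for node, level in activation.items():
            for neighbour, weight in graph[node]:
                incoming[neighbour] += level * weight * decay
        for node in graph:
            activation[node] += incoming[node]
    return activation

# Hypothetical three-node term network.
graph = {
    "ontology": [("semantics", 1.0), ("graph", 0.5)],
    "semantics": [("ontology", 1.0)],
    "graph": [("ontology", 0.5)],
}
levels = spread_activation(graph, sources=["ontology"])
```

Each iteration touches every edge, which is why computing activation levels over large networks becomes expensive and why a one-shot spectral approximation is attractive.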
3

An examination of the spreading activation hypothesis in affective priming / 感情プライミング効果における活性化拡散仮説の検討

林, 幹也, Hayashi, Mikiya 27 December 2004 (has links)
This record uses content digitized by the National Institute of Informatics.
4

Concepts Extraction and Change Detection from Navigated Information over the Internet

Chang, Chia-Hao 25 July 2004 (has links)
The emergence of the Internet has made global information communication much easier than before. Users can navigate to desired information over the Internet by means of search engines. Even though search engines help users search for a specified topic, users usually cannot gain an overall idea of what the entire set of navigated results means. In addition, information over the Internet keeps changing. Users cannot even keep track of the changes, let alone comprehend their meaning. Consequently, this research proposes a two-stage incremental approach: the first stage derives a concept structure that represents the main concepts of the search results, and the second stage keeps track of concept changes over time, based on spreading activation theory, to assist users. Experiments were conducted to examine the feasibility of the proposed approach. The first experiment evaluates the results of the first stage and shows that recall and precision are quite satisfactory relative to human experts' results. The second experiment examines the change-detection results of the entire approach and shows a high degree of agreement with domain experts. Both experiments justify the feasibility of the proposed approach in real applications; that is, applying it, users can easily focus on the topic they are interested in and follow its trend with strong support. Keywords: Internet, Concepts Extraction, Concept Change Detection, Spreading Activation Theory.
5

A Semantic-Expanding Method for Document Recommendation

Yang, Yung-Fang 05 August 2002 (has links)
No description available.
6

Evaluation and development of conceptual document similarity metrics with content-based recommender applications

Gouws, Stephan 12 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: The World Wide Web brought with it an unprecedented level of information overload. Computers are very effective at processing and clustering numerical and binary data; however, the conceptual clustering of natural-language data is considerably harder to automate. Most past techniques rely on simple keyword-matching or probabilistic methods to measure semantic relatedness. However, these approaches do not always accurately capture conceptual relatedness as measured by humans. In this thesis we propose and evaluate the use of novel Spreading Activation (SA) techniques for computing semantic relatedness, by modelling the article hyperlink structure of Wikipedia as an associative network structure for knowledge representation. The SA technique is adapted, and several problems are addressed, for it to function over the Wikipedia hyperlink structure. Inter-concept and inter-document similarity metrics are developed which make use of SA to compute the conceptual similarity between two concepts and between two natural-language documents. We evaluate these approaches over two document similarity datasets and achieve results which compare favourably with the state of the art. Furthermore, document preprocessing techniques are evaluated in terms of the performance gain these techniques can have on the well-known cosine document similarity metric and the Normalised Compression Distance (NCD) metric. Results indicate that a near two-fold increase in accuracy can be achieved for NCD by applying simple preprocessing techniques. Nonetheless, the cosine similarity metric still significantly outperforms NCD. Finally, we show that using our Wikipedia-based method to augment the cosine vector space model provides superior results to either in isolation.
Combining the two methods leads to an increased correlation of Pearson ρ = 0.72 over the Lee (2005) document similarity dataset, which matches the reported result for the state-of-the-art Explicit Semantic Analysis (ESA) technique while requiring less than 10% of the Wikipedia database required by ESA. As a use case for document similarity techniques, a purely content-based news-article recommender system is designed and implemented for a large online media company. This system is used to gather additional human-generated relevance ratings, which we use to evaluate the performance of three state-of-the-art document similarity metrics for providing content-based document recommendations.
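The two baseline metrics this abstract compares can be sketched as follows; this is a generic illustration of term-frequency cosine similarity and zlib-based Normalised Compression Distance, not the thesis's preprocessing pipeline or exact formulation:

```python
import math
import zlib
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine of the angle between bag-of-words term-frequency vectors."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def ncd(doc_a, doc_b):
    """Normalised Compression Distance: near 0 for identical inputs,
    approaching 1 for unrelated ones."""
    ca = len(zlib.compress(doc_a.encode()))
    cb = len(zlib.compress(doc_b.encode()))
    cab = len(zlib.compress((doc_a + doc_b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)
```

On raw text, NCD is sensitive to compressor overhead on short inputs, which is one reason simple preprocessing can markedly improve its accuracy as reported above.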
7

Modelling subphonemic information flow : an investigation and extension of Dell's (1986) model of word production

Moat, Helen Susannah January 2011 (has links)
Dell (1986) presented a spreading activation model which accounted for a number of early speech error results, including the relative proportions of anticipations, perseverations and exchanges found in speech error corpora, the lexical bias effect, the phonological similarity effect, and the effect of speech rate on error rate. This model has had an immense influence on the past 20 years of research into word production, with the original paper being cited over 1,000 times. Many studies have questioned how activation should flow between words and phonemes in this model. This thesis aimed to clarify what current speech error evidence tells us about how activation flows between phonemes and subphonemic representations, like features. Does activation cascade from phonemes to features, and does it feed back? The work presented here extends previous modelling investigations in two ways. Firstly, whereas previous modelling research has tended to evaluate model behaviour using arbitrarily chosen parameter settings, we illuminate the influence of the parameters on model behaviour and propose methods to draw general conclusions about model behaviour from large numbers of simulations at orthogonally varied parameter settings. Secondly, we extend the scope of the simulations to consider output at a subphonemic level, modelling recent data acquired via acoustic and articulatory measurements, such as voicing onset time (VOT), electropalatography (EPG) and ultrasound, alongside older transcribed speech error data. Throughout the thesis, we consider whether parameter settings which lead the model to capture individual results also permit other results to be accounted for and do not cause otherwise implausible behaviour. Through manipulating parameter settings in Dell's (1986) original model, we find that increasing the number of steps before selection generally does not decrease the error rate, but rather increases it, contrary to results reported by Dell (1986). 
This calls into question the claim that an increase in steps before selection provides a good model of a slower speech rate. We also demonstrate that the model captures the negative correlation reported by Dell, Burger, and Svec (1997) between error rate and the ratio of anticipations to perseverations, and further predicts that there should be a negative correlation between this ratio and the proportion of errors which are non-contextual. However, our results show that no parameter setting allows the model to generate enough exchanges to match even minimum estimates from a reanalysis of multiple speech error corpus reports, without falling foul of other constraints, in particular limits on the overall number of errors generated. We suggest that the exchange completion triggering mechanism proposed by Dell (1986) is not strong enough, and that current corpus evidence provides little support for his account of word sequencing. Focusing therefore on single-word production, the second part of the thesis investigates the behaviour of models with output at a subphonemic level. We find that, provided sufficient contextual errors occur at the featural level, a model in which only the identity of the selected phoneme is conveyed to the featural level can account for: (i) the phonological similarity effect found in transcribed records of speech errors (whereas in models with output at the phoneme level, feedback from features to phonemes is required); (ii) detectable influences of intended phonemes in VOT measurements of unintended phonemes, as well as the effect of error outcome lexicality on these results (findings presented in support of cascading from phonemes by Goldrick & Blumstein, 2006); and (iii) increased similarity of EPG measurements of articulations to reference measurements of competing articulations when production of the competing onset would result in a word (McMillan, Corley, & Lickley, 2009).
Initial results appear to confirm, however, that phonological similarity effects on the relationship of articulatory and acoustic measurements of productions to reference measurements (McMillan, 2008) can, in contrast, only be accounted for in an architecture with feedback from features to phonemes. To strengthen conclusions about articulatory evidence of lexical bias and phonological similarity effects, future work needs to consider the extremely strong effects of frequency observed in these simulations. The results presented in this thesis contribute to a greater comprehension of the behaviour of Dell's (1986) influential model, and further demonstrate that the model can be extended to account for new instrumental evidence, whilst clarifying the constraints on activation flow between phonemes and features which this new evidence imposes.
8

Recherche d’information sémantique : Graphe sémantico-documentaire et propagation d’activation / Semantic Information Retrieval : Semantic-Documentary Graph and Spreading Activation

Bannour, Ines 09 May 2017 (has links)
Semantic information retrieval (SIR) aims to propose models that rely, beyond statistical calculations, on the meaning and semantics of the words of the vocabulary, in order to better characterise the documents relevant to the user's need and to retrieve them. The goal is thus to move beyond the classical, purely statistical « bag of words » approaches, based on string matching and on the analysis of word frequencies and their distributions in the text. To do this, existing SIR approaches, through the exploitation of external semantic resources (thesauri, ontologies, etc.), proceed by injecting knowledge into the classical IR models (such as the vector space model) in order to disambiguate the vocabulary or to enrich the representation of documents and queries. These are usually adaptations of the classical IR models; one moves to a « bag of concepts » approach that takes semantics, notably synonymy, into account. The semantic resources thus exploited are « flattened », and the calculations are generally confined to semantic similarity computations. In order to better exploit semantics in IR, we propose a new model that unifies, in a coherent and homogeneous way, the numerical (distributional) and symbolic (semantic) information without sacrificing the power of either analysis. The semantico-documentary network thus modelled is translated into a weighted graph. The matching mechanism is provided by spreading activation in the graph. This new model can answer queries expressed as keywords, concepts, or even example documents. The propagation algorithm has the merit of preserving the well-tested characteristics of classical information retrieval models while allowing a better consideration of semantic models and their richness. Depending on whether or not semantics is introduced into the graph, this model can reproduce classical IR or provide, in addition, certain semantic functionalities. Co-occurrence in the graph then reveals an implicit semantics that improves precision by resolving certain semantic ambiguities. The explicit exploitation of the concepts and of the links of the graph allows the resolution of problems of synonymy, term mismatch, and semantic coverage. These semantic features, as well as the scalability of the presented model, are validated experimentally on a corpus in the medical field.
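The matching mechanism this abstract describes, retrieval by spreading activation over a weighted semantico-documentary graph, can be sketched minimally as below; the node names, edge weights, decay factor, and iteration count are hypothetical illustrations, not the thesis's model:

```python
def rank_documents(edges, query_terms, decay=0.7, iterations=3):
    """edges: dict node -> list of (neighbour, weight).
    Query terms are activated, activation propagates along weighted links
    (term-concept and concept-document), and documents are ranked by the
    activation they accumulate."""
    activation = {node: 0.0 for node in edges}
    for term in query_terms:
        if term in activation:
            activation[term] = 1.0
    for _ in range(iterations):
        incoming = {node: 0.0 for node in edges}
        for node, level in activation.items():
            for neighbour, weight in edges.get(node, []):
                incoming[neighbour] += level * weight * decay
        for node, extra in incoming.items():
            activation[node] += extra
    docs = [(n, a) for n, a in activation.items() if n.startswith("doc:")]
    return sorted(docs, key=lambda pair: pair[1], reverse=True)

# Hypothetical medical-domain fragment: a query term, a concept node,
# a synonym term, and two documents.
edges = {
    "heart attack": [("concept:myocardial infarction", 1.0)],
    "concept:myocardial infarction": [("doc:1", 0.8), ("infarction", 1.0)],
    "infarction": [("doc:2", 0.6)],
    "doc:1": [],
    "doc:2": [],
}
ranking = rank_documents(edges, ["heart attack"])
```

Note that the query reaches doc:2 even though that document shares no literal term with it; activation flowing through the concept and synonym links is the graph analogue of the synonymy and term-mismatch handling the abstract describes.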
9

Creativity and positive symptoms in schizophrenia revisited: Structural connectivity analysis with diffusion tensor imaging / 統合失調症における創造性と陽性症状再考:拡散テンソル画像による構造的結合性解析

Son, Shuraku 23 May 2016 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Medical Science / 甲第19889号 / 医博第4138号 / 新制||医||1016 (University Library) / 32966 / Kyoto University Graduate School of Medicine, Department of Medicine / (Chief examiner) Professor 古川 壽亮, Professor 髙橋 良輔, Professor 富樫 かおり / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
10

Creating Socio-Technical Patches for Information Foraging: A Requirements Traceability Case Study

Cepulis, Darius 30 October 2018 (has links)
No description available.
