  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

A framework for semantically verifying schema mappings for data exchange

Walny, Jagoda Katarzyna. January 2010 (has links)
Thesis (M.Sc.)--University of Alberta, 2010. / Title from PDF file main screen (viewed on May 27, 2010). A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta. Includes bibliographical references.
182

A framework for improving tractability in software development

Patnaik, Sambit. January 2006 (has links) (PDF)
Thesis (M.S.)--University of Alabama at Birmingham, 2006. / Description based on contents viewed Jan. 29, 2007; title from title screen. Includes bibliographical references (p. 68-69).
183

Computational modelling of the language production system: semantic memory, conflict monitoring, and cognitive control processes

Hockey, Andrew. January 2006 (has links) (PDF)
Thesis (M.Phil.) - University of Queensland, 2007. / Includes bibliography.
184

Querying For Relevant People In Online Social Networks

January 2010 (has links)
abstract: Online social networks, including Twitter, have expanded in both scale and diversity of content, which has created significant challenges to the average user. These challenges include finding relevant information on a topic and building social ties with like-minded individuals. The fundamental question addressed by this thesis is if an individual can leverage social network to search for information that is relevant to him or her. We propose to answer this question by developing computational algorithms that analyze a user's social network. The features of the social network we analyze include the network topology and member communications of a specific user's social network. Determining the "social value" of one's contacts is a valuable outcome of this research. The algorithms we developed were tested on Twitter, which is an extremely popular social network. Twitter was chosen due to its popularity and a majority of the communications artifacts on Twitter is publically available. In this work, the social network of a user refers to the "following relationship" social network. Our algorithm is not specific to Twitter, and is applicable to other social networks, where the network topology and communications are accessible. My approaches are as follows. For a user interested in using the system, I first determine the immediate social network of the user as well as the social contacts for each person in this network. Afterwards, I establish and extend the social network for each user. For each member of the social network, their tweet data are analyzed and represented by using a word distribution. To accomplish this, I use WordNet, a popular lexical database, to determine semantic similarity between two words. My mechanism of search combines both communication distance between two users and social relationships to determine the search results. Additionally, I developed a search interface, where a user can interactively query the system. 
I conducted preliminary user study to evaluate the quality and utility of my method and system against several baseline methods, including the default Twitter search. The experimental results from the user study indicate that my method is able to find relevant people and identify valuable contacts in one's social circle based on the query. The proposed system outperforms baseline methods in terms of standard information retrieval metrics. / Dissertation/Thesis / M.S. Computer Science 2010
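The ranking idea described in this abstract, combining the content similarity of a contact's posts with their social distance, can be sketched as follows. This is a simplified illustration, not the thesis's implementation: it substitutes cosine similarity over raw word counts for the WordNet-based semantic similarity, and the function names, contact structure, and weight `alpha` are all hypothetical.

```python
import math
from collections import Counter

def word_distribution(tweets):
    """Represent a user's tweets as a bag-of-words distribution."""
    return Counter(w.lower() for t in tweets for w in t.split())

def cosine_similarity(d1, d2):
    """Cosine similarity between two word distributions."""
    common = set(d1) & set(d2)
    dot = sum(d1[w] * d2[w] for w in common)
    norm = math.sqrt(sum(v * v for v in d1.values())) * \
           math.sqrt(sum(v * v for v in d2.values()))
    return dot / norm if norm else 0.0

def rank_contacts(query, contacts, alpha=0.7):
    """Score each contact by query relevance, discounted by social distance.
    `contacts` maps a name to (list of tweets, hop distance in the follow graph)."""
    q_dist = word_distribution([query])
    scored = []
    for name, (tweets, hops) in contacts.items():
        relevance = cosine_similarity(q_dist, word_distribution(tweets))
        score = alpha * relevance + (1 - alpha) / (1 + hops)
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)]
```

A contact who tweets about the query topic at one hop would outrank an off-topic contact at the same distance; the trade-off between topical relevance and social proximity is controlled by `alpha`.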
185

Mapping dynamic brain connectivity using EEG, TMS, and Transfer Entropy

Repper-Day, Christopher January 2017 (has links)
To understand how the brain functions, we must investigate the transient interactions that underpin communication between cortical regions. EEG possesses the optimal temporal resolution to capture functional connectivity, but it lacks the spatial resolution to identify the cortical locations responsible; to circumvent this problem, electrophysiological connectivity should be investigated at the source level. Many quantifiers of connectivity have been applied to EEG data, but some are not sensitive to the direct or indirect influence of one region over another, and others require the specification of a priori models, making them unsuitable for exploratory analyses. Transfer Entropy (TE) can be used to infer the direction of linear and non-linear information exchange between signals over a range of time delays within EEG data. This thesis explores the creation of a new method of mapping dynamic brain connectivity using a trial-based TE analysis of EEG source data, and the application of this technique to the investigation of semantic and number processing within the brain.

The first paper (Chapter 2) documents the analysis of a semantic category and number magnitude judgement task using traditional ERP techniques. As predicted, the well-known semantic N400 component was found and localised to the left ATL and inferior frontal cortex. An N365 component related to number magnitude judgement was localised to right superior parietal regions, including the IPS. These results offer support for the hub-and-spoke model of semantics and the triple parietal model of number processing.

The second paper (Chapter 3) documents an analysis of the same data with the new trial-based TE analysis. Word and number data were analysed at 0-200 ms, 200-400 ms, and 400-600 ms following stimulus presentation. In the earliest window, information exchange occurred predominantly between occipital sources, but by the latest window it had spread across the brain. Task-dependent differences in regional information exchange revealed that temporal sources were sending more information to occipital sources following words at 0-200 ms. Furthermore, the direction and timing of information movement within a frontal-temporal-parietal network were identified during 0-400 ms of the number magnitude judgement.

The final paper (Chapter 4) documents an attempt to track the influence of TMS through the brain using the TE analysis. TMS was applied to the bilateral ATL and IPS because they are both important hubs in the brain networks that support semantic and number processing, respectively. Left ATL TMS influenced sources located primarily in the wide-spread left temporal lobe and in inferior frontal and inferior occipital cortices. The anatomical connectivity profile of the temporal lobe suggests that these are all plausible locations, and they exhibited excellent spatial similarity to the results of neuroimaging experiments that probed semantic knowledge. The analysis of right ATL TMS obtained a mirror image of the left. Left parietal stimulation resulted in a bilateral parietal, superior occipital, and superior prefrontal influence, which extended slightly further in the hemisphere ipsilateral to the stimulation site, a result made possible by the short association and callosal fibres that connect these areas. Again, the results at the contralateral site were a virtual mirror image.

The thesis concludes with a review of the experimental findings and a discussion of methodological issues still to be resolved, ideas for extensions to the method, and the broader implications of the method for connectivity research.
186

Towards Making Distributed RDF processing FLINker

Azzam, Amr, Kirrane, Sabrina, Polleres, Axel January 2018 (has links) (PDF)
In the last decade, the Resource Description Framework (RDF) has become the de-facto standard for publishing semantic data on the Web. This steady adoption has led to a significant increase in the number and volume of available RDF datasets, exceeding the capabilities of traditional RDF stores. This scenario has introduced severe big semantic data challenges when it comes to managing and querying RDF data at Web scale. Despite the existence of various off-the-shelf Big Data platforms, processing RDF in a distributed environment remains a significant challenge. In this position paper, based on an in-depth analysis of the state of the art, we propose to manage large RDF datasets in Flink, a well-known scalable distributed Big Data processing framework. Our approach, which we refer to as FLINKer, extends the native graph abstraction of Flink, called Gelly, with RDF graph and SPARQL query processing capabilities.
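The core mapping the paper proposes, RDF onto a graph abstraction such as Gelly, can be illustrated with a minimal, non-distributed sketch (Flink/Gelly itself is a JVM framework; the function names below are illustrative, not FLINKer's API): subjects and objects become vertices, each triple becomes a predicate-labelled edge, and a single triple-pattern match, the building block of SPARQL, runs over the edges.

```python
def rdf_to_graph(triples):
    """Map RDF triples onto a property-graph abstraction:
    vertices are subjects and objects, edges are predicate-labelled."""
    vertices = {s for s, _, _ in triples} | {o for _, _, o in triples}
    edges = [(s, p, o) for s, p, o in triples]
    return vertices, edges

def match_pattern(edges, s=None, p=None, o=None):
    """Evaluate a single SPARQL-style triple pattern; None acts as a variable."""
    return [(es, ep, eo) for es, ep, eo in edges
            if (s is None or es == s)
            and (p is None or ep == p)
            and (o is None or eo == o)]
```

In a distributed setting the same two steps become a vertex/edge DataSet construction and a filter over a partitioned edge set; joining several such patterns on shared variables is what full SPARQL evaluation adds on top.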
187

Applying a semantic layer in a source code retrieval tool

Durão, Frederico Araujo 31 January 2008 (has links)
Made available in DSpace on 2014-06-12T15:51:21Z (GMT). Previous issue date: 2008 / Software reuse is a research area of software engineering that aims to improve application productivity and quality by reducing effort: existing artifacts are reused, rather than built from scratch, to create new applications. To obtain the benefits inherent to reuse, however, some obstacles must be overcome, such as the search for and retrieval of components. In general, there is a gap between the formulation of the problem in the developer's mind and its retrieval from the repository, which produces irrelevant results and reduces the chances of reuse. Mechanisms that assist in formulating queries and that bring retrieval closer to the developer's actual need are therefore well suited to solving these problems. In this context, this work proposes extending a keyword-based search tool with a semantic layer whose main goal is to increase search precision and, consequently, the chances of reusing the retrieved component. The semantic layer consists essentially of two main components: one to assist the user in formulating the query, through the use of a domain ontology, and another to make retrieval more efficient, through semantic indexing of the components in the repository.
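The two components of the semantic layer, ontology-assisted query formulation and semantic indexing, can be sketched roughly as follows. The mini-ontology, index structure, and function names are hypothetical and purely illustrative of the idea, not the tool described in the abstract.

```python
def expand_query(terms, ontology):
    """Expand query terms with ontology-related concepts
    (e.g. synonyms or narrower terms) before hitting the index."""
    expanded = set(terms)
    for term in terms:
        expanded.update(ontology.get(term, []))
    return expanded

def search(query_terms, index, ontology):
    """Retrieve components whose semantic index entry overlaps the
    expanded query; rank by the number of matching concepts."""
    terms = expand_query(query_terms, ontology)
    hits = [(len(terms & concepts), comp)
            for comp, concepts in index.items()
            if terms & concepts]
    return [comp for _, comp in sorted(hits, reverse=True)]
```

A plain keyword search for "sort" would miss a component indexed only under "quicksort"; the ontology bridges that vocabulary gap between the developer's query and the repository's index.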
188

Representation of Multi-Level Domains on The Web

SILVA, F. B. 28 September 2016 (has links)
Made available in DSpace on 2018-08-02T00:03:44Z (GMT). Previous issue date: 2016-09-28 / Conceptual modelling and knowledge representation strategies often deal with entities on two levels: a level of classes and a level of individuals that instantiate those classes. In several domains, however, classes themselves may be subject to categorization, resulting in classes of classes (or metaclasses). When representing these domains, one must capture not only the entities at different classification levels but also their (possibly complex) relations. In the domain of biological taxonomies, for example, a given organism (e.g., the lion Cecil, killed in 2015 in Hwange National Park in Zimbabwe) is classified into several taxa (e.g., Animal, Mammal, Carnivore, Lion), and each of these taxa is classified by a taxonomic rank (e.g., Kingdom, Class, Order, Species). Thus, to represent knowledge in this domain, it is necessary to represent entities at different levels of classification: Cecil is an instance of Lion, which is an instance of Species; Species, in turn, is an instance of Taxonomic Rank. Moreover, we would like to state constraints that span levels, for example that instances of the genus Panthera must also be instances of exactly one instance of Species (e.g., Lion). The need to support the representation of domains involving multiple classification levels has given rise to a research area called multi-level modelling.

Representing multi-level models is a challenge in current Semantic Web languages, as there is little support to guide the modeller in correctly producing multi-level ontologies, especially because of the nuanced constraints that apply to entities at different classification levels and to their relations. To address these representation challenges, we define a vocabulary that can be used as a basis for defining multi-level ontologies in OWL, together with integrity constraints and derivation rules. A tool is provided that receives a domain model as input, checks its conformance with the proposed integrity constraints, and produces as output a model enriched with derived information. This process employs an axiomatic theory called MLT (a Multi-Level Modelling Theory). Content from the Wikidata platform was used to demonstrate that the vocabulary can prevent inconsistencies in multi-level representation in a real-world scenario.
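The multi-level classification chain described above (individual, class, metaclass) and a cross-level constraint check can be sketched with a minimal instance-of graph. The taxonomy fragment is taken from the abstract's own example; the code structure itself is purely illustrative and is not MLT or the OWL vocabulary the thesis defines.

```python
# instance_of: entity -> set of entities it directly instantiates
instance_of = {
    "Cecil": {"Lion", "Animal", "Mammal", "Carnivore"},
    "Lion": {"Species"},
    "Panthera": {"Genus"},
    "Species": {"TaxonomicRank"},
    "Genus": {"TaxonomicRank"},
}

def instances_of(cls):
    """All entities that directly instantiate `cls`."""
    return {e for e, classes in instance_of.items() if cls in classes}

def check_exactly_one(entity, metaclass):
    """Cross-level integrity constraint: among `entity`'s classifiers there
    must be exactly one instance of `metaclass` (e.g. exactly one Species)."""
    classifiers = instance_of.get(entity, set())
    return len(classifiers & instances_of(metaclass)) == 1
```

Here Cecil satisfies the "exactly one Species" constraint via Lion, while the same check against Genus fails, mirroring the kind of inconsistency the proposed vocabulary is meant to surface.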
189

Prime validity affects masked repetition and masked semantic priming : evidence for an episodic resource-retrieval account of priming

Bodner, Glen Edward 02 February 2018 (has links)
In several experiments, masked repetition priming in the lexical decision task was greater when prime validity, defined as the proportion of repetition versus unrelated primes, was high (.8 vs. .2), even though primes were displayed for only 45 or 60 ms. A similar effect was also found with masked semantic primes. Prime validity effects are not predicted on a lexical entry-opening account of masked priming nor are they consistent with the use of prime validity effects as a marker for the consciously controlled use of primes. Instead, it is argued that episodic traces are formed even for masked primes, are available as a resource that can aid word identification, and are generally more likely to be recruited when their validity is high. However, prime validity effects did not obtain when targets varied markedly from trial to trial in how easy they were to process. Here, it appears that trial-to-trial discrepancies made the lexical decision task more difficult, causing an increase in prime recruitment, at least when prime validity was low. Consistent with this claim, prime validity effects emerged when these trial-to-trial discrepancies were minimized. / Graduate
190

Automatic construction of conceptual models to support early stages of software development : a semantic object model approach

Chioasca, Erol-Valeriu January 2015 (has links)
The earliest stage of software development almost always involves converting requirements descriptions written in natural language (NLRs) into initial conceptual models represented in some formal notation. This stage is time-consuming and demanding, as initial models are often constructed manually, requiring human modellers to have appropriate modelling knowledge and skills. It is also critical, as errors made in initial models are costly to correct if left undetected until later stages. Consequently, automated tool support is desirable at this stage. Many approaches support the modelling process in the early stages of software development; the majority employ linguistic-driven analysis to extract essential information from input NLRs in order to create different types of conceptual models. The main difficulty to overcome, however, is the ambiguous and incomplete nature of NLRs. Semantic-driven approaches have the potential to address these difficulties, but current state-of-the-art methods have not been designed to address the incomplete nature of NLRs. This thesis presents a semantic-driven automatic model construction approach that addresses the limitations of current semantic-driven NLR transformation approaches. Central to this approach is a set of primitive conceptual patterns called Semantic Object Models (SOMs), which superimpose a layer of semantics and structure on top of NLRs. These patterns serve as intermediate models to bridge the gap between NLRs and their initial conceptual models. The proposed approach first translates a given NLR into a set of individual SOM instances (SOMi) and then composes them into a knowledge representation network called a Semantic Object Network (SON). The approach is embodied in a software tool called TRAM. The validation results show that the proposed semantic-driven approach aids users in creating improved conceptual models. Moreover, practical evaluation of TRAM indicates that the proposed approach performs better than its peers and has the potential for use in real-world software development.
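The transformation pipeline described, NLR sentences into SOM instances and then a composed SON, can be caricatured with a single subject-verb-object pattern. The regular expression, the dictionary shape of a "SOM instance", and the edge-list "SON" below are entirely hypothetical and far simpler than TRAM's linguistic analysis; they only convey the shape of the pipeline.

```python
import re

# A single, toy "semantic object" pattern: [article] <Agent> <action-s> [article] <Patient>
SVO = re.compile(r"^(?:The|A|An)?\s*(\w+)\s+(\w+s)\s+(?:the|a|an)?\s*(\w+)", re.I)

def extract_som(sentence):
    """Instantiate a crude SOM from one requirement sentence, or None on no match."""
    m = SVO.match(sentence.strip())
    if not m:
        return None
    agent, action, patient = m.groups()
    return {"agent": agent, "action": action, "patient": patient}

def build_son(sentences):
    """Compose SOM instances into a tiny semantic network:
    nodes are concepts, labelled edges are actions."""
    edges = []
    for s in sentences:
        som = extract_som(s)
        if som:
            edges.append((som["agent"], som["action"], som["patient"]))
    return edges
```

Running this over "The librarian registers a member." yields an edge linking the librarian concept to the member concept via the registers action; composing many such instances into one network is what turns isolated patterns into an initial conceptual model.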
