101 |
Cross-modality semantic integration as a function of depth of processing in third grades
Miceli, Laura L., 01 January 1979
No description available.
|
102 |
Semantic Service Integration & Metropolitan Medical Network
Patel, Nikeshbhai, 07 September 2005
A Thesis Submitted to the Faculty of Indiana University by Nikeshbhai Patel in Partial Fulfillment of the Requirements for the Degree of Master of Science, August 2005 / Medical health partners use heterogeneous data formats, legacy software and strictly licensed vocabularies, which makes it hard to integrate their data and their work. Integration of services and integration of data are the two main needs. The current architecture offers only a partial solution, in the form of one-to-one mapping wrappers. This thesis discusses the difficulties created by the coexistence of so many medical vocabularies, surveys efforts to provide interoperation, and lists further problems that hinder interoperation between health partners.
A solution to some of these problems is proposed in the form of a semantic network based on multi-agent technology. Service composition and integration stages are shown as a path toward future advanced health services. A middle layer is implemented that performs the integration and provides a common platform for sharing information, using a global ontology together with local domain ontologies. An inference-based matchmaking algorithm proposed in this thesis supports this mapping and helps achieve the goal. Six different filtering techniques are selected and used in the matchmaking algorithm, and an analysis of these techniques is provided to clarify the integration process. In the final section, an abstract idea based on the network architecture and the matchmaking algorithm is proposed for developing an Open Terminological System.
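The abstract names an inference-based matchmaking algorithm built from six chained filtering techniques but does not publish them, so the sketch below is only a hypothetical illustration of such a filter pipeline: candidate services are narrowed stage by stage using a toy subsumption check over a global ontology. The concept names, filter choices and `ServiceDescription` fields are assumptions, not the thesis's actual design.

```python
# Hypothetical sketch of an inference-based matchmaking pipeline; the filter
# names, ServiceDescription fields and ontology format are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    name: str
    inputs: set = field(default_factory=set)    # concepts the service consumes
    outputs: set = field(default_factory=set)   # concepts the service produces
    domain: str = ""                            # local domain it belongs to

def subsumes(general, specific, ontology):
    """True if `general` equals or is an ancestor of `specific` in an
    ontology given as a child -> parent mapping (the global ontology)."""
    while specific is not None:
        if general == specific:
            return True
        specific = ontology.get(specific)
    return False

def matchmake(request, candidates, ontology):
    """Apply a chain of filters; each stage narrows the candidate set."""
    filters = [
        # domain filter: only services from the requested domain
        lambda s: s.domain == request.domain,
        # output filter: every requested output is covered by some service output
        lambda s: all(any(subsumes(need, out, ontology) for out in s.outputs)
                      for need in request.outputs),
        # input filter: every input the service needs can be supplied by the requester
        lambda s: all(any(subsumes(inp, have, ontology) for have in request.inputs)
                      for inp in s.inputs),
    ]
    matches = list(candidates)
    for f in filters:
        matches = [s for s in matches if f(s)]
    return matches

# Toy run: the provider returns a specialization of the requested concept,
# so the inference step (subsumption) is what makes the match succeed.
ontology = {"BloodGlucoseReading": "LabResult", "LabResult": "ClinicalObservation"}
request = ServiceDescription("consumer", {"PatientID"}, {"LabResult"}, "lab")
provider = ServiceDescription("glucose-svc", {"PatientID"}, {"BloodGlucoseReading"}, "lab")
print([s.name for s in matchmake(request, [provider], ontology)])  # ['glucose-svc']
```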
|
103 |
Effects of Semantic Context and Word-Class on Successful Lexical Access
Bannon, Julie, January 2023
Language production is ubiquitous in everyday life. A critical component of language production is the retrieval of individual words. In this thesis, we investigated the process of lexical access across six experiments that required participants to produce words in different contexts.
First, we examined whether semantic relationships between proper names lead to competition during lexical access. Participants were asked to name celebrity pictures after either reading a famous or non-famous prime name or classifying a prime name as belonging to a famous or non-famous person. Results revealed that successful name retrievals decreased with increasing trial number. Within individual trials, tip-of-the-tongue states increased only after the classification of famous prime names. These findings indicate that the effects of competition from related proper names vary based on the particular semantic context in which they are retrieved.
Next, we examined how the broader semantic context of sentences affects access to object names. It is widely accepted that highly constraining contexts can facilitate lexical access through predictive processing. We examined whether prediction during language processing still confers a benefit in situations where predictions were either almost correct or completely incorrect. In three experiments that investigated both language production and comprehension, we found a clear cost to incorrect predictions, which we hypothesize may be used as an error signal in language learning to fine-tune the language system.
Finally, we investigated function word production using a task that required individuals to read aloud short paragraphs that contained errors on function words under distracting versus silent conditions. We found that background speech did not affect the likelihood that speakers would spontaneously correct the errors, but did increase non-target function word substitution errors. Overall, these studies support a framework in which lexical access is influenced by both word-class and semantic context at the point of retrieval.
Dissertation / Doctor of Philosophy (PhD)
Language plays a key role in our everyday lives, including in social interactions, academic success, and overall daily functioning. The process of producing and understanding language is deceptively easy for the average person, but there are significant outstanding questions about how linguistic processes operate. The retrieval of individual words in particular has been the subject of decades of investigation. The goal of the present thesis is to investigate how we retrieve words when we speak, or the process of lexical access, by eliciting production of words across various contexts. The studies reported here demonstrate the effects of semantic context on lexical access, as well as how this process differs for words that convey syntactic versus meaningful content (i.e., words that differ in lexical class). Our findings build on theories of lexical access by demonstrating unique effects of the roles of semantic contexts and lexical class on word retrieval.
|
104 |
From the Wall to the Web: A Microformat for Visual Art
Bukva, Emir, 07 December 2009
No description available.
|
105 |
A Semantics-based Approach to Machine Perception
Henson, Cory Andrew, January 2013
No description available.
|
106 |
Role of semantic indexing for text classification
Sani, Sadiq, January 2014
The Vector Space Model (VSM) of text representation suffers from a number of limitations for text classification. Firstly, the VSM is based on the Bag-Of-Words (BOW) assumption, where terms from the indexing vocabulary are treated independently of one another. However, the expressiveness of natural language means that lexically different terms often have related or even identical meanings. Thus, failure to take into account the semantic relatedness between terms means that document similarity is not properly captured in the VSM. To address this problem, semantic indexing approaches have been proposed for modelling the semantic relatedness between terms in document representations. Accordingly, in this thesis, we empirically review the impact of semantic indexing on text classification. This empirical review allows us to answer one important question: how beneficial is semantic indexing to text classification performance? We also carry out a detailed analysis of the semantic indexing process, which allows us to identify reasons why semantic indexing may lead to poor text classification performance. Based on our findings, we propose a semantic indexing framework called Relevance Weighted Semantic Indexing (RWSI) that addresses the limitations identified in our analysis. RWSI uses relevance weights of terms to improve the semantic indexing of documents.
A second problem with the VSM is the lack of supervision in the process of creating document representations. This arises from the fact that the VSM was originally designed for unsupervised document retrieval. An important feature of effective document representations is the ability to discriminate between relevant and non-relevant documents. For text classification, relevance information is explicitly available in the form of document class labels. Thus, more effective document vectors can be derived in a supervised manner by taking advantage of available class knowledge. Accordingly, we investigate approaches for utilising class knowledge for supervised indexing of documents. Firstly, we demonstrate how the RWSI framework can be utilised for assigning supervised weights to terms for supervised document indexing. Secondly, we present an approach called Supervised Sub-Spacing (S3) for supervised semantic indexing of documents.
A further limitation of the standard VSM is that an indexing vocabulary that consists only of terms from the document collection is used for document representation. This is based on the assumption that terms alone are sufficient to model the meaning of text documents. However, for certain classification tasks, terms are insufficient to adequately model the semantics needed for accurate document classification. A solution is to index documents using semantically rich concepts. Accordingly, we present an event extraction framework called Rule-Based Event Extractor (RUBEE) for identifying and utilising event information for concept-based indexing of incident reports. We also demonstrate how certain attributes of these events, e.g. negation, can be taken into consideration to distinguish between documents that describe the occurrence of an event, and those that mention the non-occurrence of that event.
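As a rough illustration of the kind of semantic indexing the thesis evaluates (not the RWSI framework itself, whose weighting scheme is not reproduced here), the sketch below folds a hand-made term-relatedness matrix into bag-of-words vectors; the vocabulary, relatedness values and relevance weights are invented for the demo.

```python
# Minimal sketch of semantic indexing on top of a bag-of-words model.
# The term-relatedness matrix and relevance weights are illustrative;
# the thesis's RWSI framework defines its own weighting, not shown here.

import numpy as np

vocab = ["car", "automobile", "engine", "banana"]

# Bag-of-words counts: one row per document.
D = np.array([
    [2, 0, 1, 0],   # doc about cars
    [0, 1, 1, 0],   # doc using "automobile" instead of "car"
    [0, 0, 0, 3],   # doc about fruit
], dtype=float)

# Pairwise term relatedness (identity matrix = plain VSM; off-diagonals add semantics).
S = np.array([
    [1.0, 0.9, 0.4, 0.0],
    [0.9, 1.0, 0.4, 0.0],
    [0.4, 0.4, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Optional per-term relevance weights (e.g. derived from class labels);
# uniform weights reduce this to unweighted semantic indexing.
w = np.ones(len(vocab))

D_sem = (D * w) @ S          # spread each term's weight onto related terms

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(D[0], D[1]))      # low similarity under plain BOW (~0.32)
print(cosine(D_sem[0], D_sem[1]))  # much higher once relatedness is folded in
```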
|
107 |
Availability of constituents' semantic representations during the processing of opaque and transparent compound words
Marchak, Kristan, date unknown
No description available.
|
108 |
Unsupervised induction of semantic roles
Lang, Joel, January 2012
In recent years, a considerable amount of work has been devoted to the task of automatic frame-semantic analysis. Given the relative maturity of syntactic parsing technology, which is an important prerequisite, frame-semantic analysis represents a realistic next step towards broad-coverage natural language understanding and has been shown to benefit a range of natural language processing applications such as information extraction and question answering. Due to the complexity which arises from variations in syntactic realization, data-driven models based on supervised learning have become the method of choice for this task. However, the reliance on large amounts of semantically labeled data, which is costly to produce for every language, genre and domain, presents a major barrier to the widespread application of the supervised approach. This thesis therefore develops unsupervised machine learning methods, which automatically induce frame-semantic representations without making use of semantically labeled data. If successful, unsupervised methods would render manual data annotation unnecessary and therefore greatly benefit the applicability of automatic frame-semantic analysis.
We focus on the problem of semantic role induction, in which all the argument instances occurring together with a specific predicate in a corpus are grouped into clusters according to their semantic role. Our hypothesis is that semantic roles can be induced without human supervision from a corpus of syntactically parsed sentences, by leveraging the syntactic relations conveyed through parse trees with lexical-semantic information. We argue that semantic role induction can be guided by three linguistic principles. The first is the well-known constraint that semantic roles are unique within a particular frame. The second is that the arguments occurring in a specific syntactic position within a specific linking all bear the same semantic role. The third principle is that the (asymptotic) distribution over argument heads is the same for two clusters which represent the same semantic role.
We consider two approaches to semantic role induction based on two fundamentally different perspectives on the problem. Firstly, we develop feature-based probabilistic latent structure models which capture the statistical relationships that hold between the semantic role and other features of an argument instance. Secondly, we conceptualize role induction as the problem of partitioning a graph whose vertices represent argument instances and whose edges express similarities between these instances. The graph thus represents all the argument instances for a particular predicate occurring in the corpus. The similarities with respect to different features are represented on different edge layers and accordingly we develop algorithms for partitioning such multi-layer graphs. We empirically validate our models and the principles they are based on and show that our graph partitioning models have several advantages over the feature-based models. In a series of experiments on both English and German the graph partitioning models outperform the feature-based models and yield significantly better scores over a strong baseline which directly identifies semantic roles with syntactic positions.
In sum, we demonstrate that relatively high-quality shallow semantic representations can be induced without human supervision and foreground a promising direction of future research aimed at overcoming the problem of acquiring large amounts of lexical-semantic knowledge.
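To make the three principles concrete, here is a toy sketch that is not the thesis's feature-based or multi-layer graph model: argument instances of one predicate are seeded into clusters by syntactic position (principle two), and clusters are merged when their head-word distributions are similar (principle three) unless the merge would place the same role twice in one sentence (principle one). The example instances and the similarity threshold are invented.

```python
# Toy sketch of role induction guided by the three principles; not the
# thesis's actual algorithms.

from collections import Counter, defaultdict

# (sentence_id, syntactic_position, head_lemma) for the predicate "give"
instances = [
    (1, "subj", "teacher"), (1, "dobj", "book"),    (1, "iobj", "student"),
    (2, "subj", "mother"),  (2, "dobj", "apple"),
    (3, "pp_to", "student"), (3, "subj", "teacher"), (3, "dobj", "book"),
]

# Principle 2: same syntactic position -> same initial cluster.
clusters = defaultdict(list)
for sent, pos, head in instances:
    clusters[pos].append((sent, head))

def head_dist(cluster):
    """Relative frequency distribution over argument heads in a cluster."""
    heads = Counter(h for _, h in cluster)
    total = sum(heads.values())
    return {h: c / total for h, c in heads.items()}

def cosine(p, q):
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm = (sum(v * v for v in p.values()) ** 0.5) * (sum(v * v for v in q.values()) ** 0.5)
    return dot / norm if norm else 0.0

def violates_uniqueness(a, b):
    # Principle 1: one frame (sentence) may not contain the same role twice.
    return bool({s for s, _ in a} & {s for s, _ in b})

# Principle 3: merge clusters whose head distributions are similar enough.
merged = dict(clusters)
for x in list(merged):
    for y in list(merged):
        if x < y and x in merged and y in merged \
                and not violates_uniqueness(merged[x], merged[y]) \
                and cosine(head_dist(merged[x]), head_dist(merged[y])) > 0.5:
            merged[x] += merged.pop(y)

# "iobj" and "pp_to" collapse into one recipient-like role.
print({role: [h for _, h in args] for role, args in merged.items()})
```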
|
109 |
And Action!: A Study of the Semantic Domains of Action through Interpretation of Metaphor
Westerdahl, Henrik, January 2023
The aim of this essay is to determine and describe some of the semantic domains of the concept of action. Action belongs to the type of abstract nouns with unclear semantic domains. In other words, it is difficult to determine the precise semantic patterns that the word ‘action’ refers to. In order to shed light on the semantic domains of action, a collection of metaphors using words for body parts has been studied. In metaphor, action can be denoted, described, or used to denote or describe something else. That means semantic references to actions can be contained within metaphoric expressions, not least in metaphors applying words for body parts. This study focuses on hands, feet and fingers. Their respective conceptual models are analysed to see how they pertain to action as a phenomenon. The discussion subsequently identifies the semantic patterns that relate to how these body parts are conceptualised in the English language. The semantic domains inferred are then related to neuropsychology, in order to show how similar patterns have been identified and described in relation to action. The conclusion of this essay is that the semantic domains of space, time, motion and intention are integral to the meaning of action. In other words, that which is denoted by the word ‘action’ is a movement that occurs in a conceptual space and time, with an intention. As the study shows, even metaphors that describe or denote inactivity adhere to this pattern, in reversed form.
|
110 |
A Semantics-based User Interface Model for Content Annotation, Authoring and Exploration
Khalili, Ali, 02 February 2015
The Semantic Web and Linked Data movements, which aim to create, publish and interconnect machine-readable information, have gained traction in recent years.
However, the majority of information still is contained in and exchanged using unstructured documents, such as Web pages, text documents, images and videos.
Nor can this be expected to change, since text, images and videos are the natural way in which humans interact with information.
Semantic structuring of content on the other hand provides a wide range of advantages compared to unstructured information.
Semantically-enriched documents facilitate information search and retrieval, presentation, integration, reusability, interoperability and personalization.
Looking at the life-cycle of semantic content on the Web of Data, we see considerable progress on the backend in storing structured content and in linking data and schemata.
Nevertheless, in our view the least developed aspect of the semantic content life-cycle is currently the user-friendly manual and semi-automatic creation of rich semantic content.
In this thesis, we propose a semantics-based user interface model, which aims to reduce the complexity of underlying technologies for semantic enrichment of content by Web users.
By surveying existing tools and approaches for semantic content authoring, we extracted a set of guidelines for designing efficient and effective semantic authoring user interfaces.
We applied these guidelines to devise a semantics-based user interface model called WYSIWYM (What You See Is What You Mean) which enables integrated authoring, visualization and exploration of unstructured and (semi-)structured content.
To assess the applicability of our proposed WYSIWYM model, we incorporated the model into four real-world use cases comprising two general and two domain-specific applications.
These use cases address four aspects of the WYSIWYM implementation:
1) Its integration into existing user interfaces,
2) Utilizing it for lightweight text analytics to incentivize users,
3) Dealing with crowdsourcing of semi-structured e-learning content,
4) Incorporating it for authoring of semantic medical prescriptions.
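As a loose illustration of the kind of output a WYSIWYM-style authoring interface produces behind the scenes (not the thesis's actual implementation), the sketch below wraps recognized surface forms in RDFa spans; the entity dictionary, URIs and vocabulary choices are made up for the demo.

```python
# Illustrative only: a tiny stand-in for the semantic enrichment step of a
# WYSIWYM interface, turning plain text into RDFa-annotated HTML.
# Entity dictionary, URIs and the rdfs:label property choice are invented.

import html
import re

# Hypothetical annotation dictionary: surface form -> (resource URI, rdf:type)
ENTITIES = {
    "Leipzig": ("http://dbpedia.org/resource/Leipzig", "http://schema.org/City"),
    "aspirin": ("http://example.org/drug/aspirin", "http://schema.org/Drug"),
}

def annotate(text):
    """Wrap known surface forms in RDFa spans; leave the rest untouched."""
    escaped = html.escape(text)
    pattern = re.compile("|".join(re.escape(k) for k in ENTITIES))

    def wrap(match):
        surface = match.group(0)
        uri, rdf_type = ENTITIES[surface]
        return '<span about="%s" typeof="%s" property="rdfs:label">%s</span>' % (
            uri, rdf_type, surface)

    return pattern.sub(wrap, escaped)

print(annotate("The prescription issued in Leipzig contains aspirin."))
```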
|