1

Automatic novice program comprehension for semantic bug detection

Ade-Ibijola, Abejide Olu. January 2016
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science, April 2016 / Automatically comprehending novice programs with the aim of giving useful feedback has been an Artificial Intelligence problem for over four decades. Solving this problem entails manipulating the underlying program plans; i.e. extracting the novice's plan, comparing it to the expert's plan, and inferring the source of the novice's bug. The bugs of interest in this domain are often semantic bugs, as syntactic bugs are handled by the automatic debuggers built into most compilers. Hence, a program that debugs like a human expert must understand the problem and know the expected solution(s) in order to detect semantic bugs. This work proposes a new approach to comprehending novice programs using: regular expressions for the recognition of plans in program text, principles from formal language theory for defining the space of program plan variations, and automata-based algorithms for the detection of semantic bugs. The new approach was tested against a repository of novice programs with known semantic bugs, and the specific bugs were detected. As a proof of concept, the theories presented in this work are further implemented in software prototypes. If the new idea is implemented in a robust software tool, it will find applications in comprehending first-year students' programs, thereby supporting the human expert in teaching programming.
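As a concrete illustration of the pipeline this abstract describes, the sketch below recognises a single hypothetical "summation" plan with a regular expression over a simplified one-statement-per-semicolon token stream, and localises a bug by checking plan components individually. The plan, token format, and messages are invented for illustration; the thesis's actual plan grammars, variation spaces, and automata are not reproduced here.

```python
import re

# A hypothetical "summation" plan over a simplified token stream in which
# statements are separated by semicolons. The (for|while) alternation is a
# tiny example of a formally defined space of plan variations.
SUMMATION_PLAN = re.compile(
    r"(?P<init>\w+\s*=\s*0)\s*;"           # accumulator initialised to zero
    r"\s*(?P<loop>(?:for|while)[^;]*);"    # some loop header
    r"\s*(?P<update>\w+\s*\+=\s*\w+)"      # accumulator updated in the body
)

def detect_semantic_bug(program_text: str) -> str:
    """Match a novice program against the expected plan; if the full plan is
    absent, test its components separately to localise the likely bug
    (a crude stand-in for the thesis's automata-based detection)."""
    if SUMMATION_PLAN.search(program_text):
        return "no bug detected: summation plan recognised"
    if not re.search(r"\w+\s*=\s*0", program_text):
        return "possible bug: accumulator never initialised"
    if not re.search(r"\w+\s*\+=\s*\w+", program_text):
        return "possible bug: accumulator never updated inside the loop"
    return "plan not recognised: program deviates from known variations"

# A novice program that forgot to initialise the accumulator:
print(detect_semantic_bug("for i in range(n) ; total += x[i]"))
```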
2

A framework for semantically verifying schema mappings for data exchange

Walny, Jagoda Katarzyna. January 2010
Thesis (M.Sc.)--University of Alberta, 2010. / Title from PDF file main screen (viewed on May 27, 2010). A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta. Includes bibliographical references.
3

Monte Carlo semantics : robust inference and logical pattern processing with natural language text

Bergmair, Richard. January 2011
No description available.
4

Using natural language generation to provide access to semantic metadata

Hielkema, Feikje. January 2010
In recent years, the use of metadata to describe and share resources has grown in importance, especially in the context of the Semantic Web. However, access to metadata is difficult for users without experience with description logic or formal languages, and currently this description applies to most web users. There is a strong need for interfaces that provide easy access to semantic metadata, enabling novice users to browse, query and create it easily. This thesis describes a natural language generation interface to semantic metadata called LIBER (Language Interface for Browsing and Editing Rdf), driven by domain ontologies which are integrated with domain-specific linguistic information. LIBER uses the linguistic information to generate fluent descriptions and search terms through syntactic aggregation. The tool contains three modules to support metadata creation, querying and browsing, which implement the WYSIWYM (What You See Is What You Meant) natural language generation approach. Users can add and remove information by editing system-generated feedback texts. Two studies have been conducted to evaluate LIBER's usability and compare it to a different Semantic Web interface. The studies showed that subjects with no prior experience of the Semantic Web could use LIBER effectively to create, search and browse metadata, and they yielded useful ideas for improving LIBER's usability. However, the results of these studies were less positive than we had hoped, and users actually preferred the other Semantic Web tool. This has raised questions about which user audience LIBER should aim for, and the extent to which the underlying ontologies influence the usability of the interface. LIBER's portability to other domains is supported by a tool with which ontology developers without a background in linguistics can prepare their ontologies for use in LIBER by adding the necessary linguistic information.
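As a rough illustration of description generation with syntactic aggregation, the sketch below realises RDF-like triples as text, merging triples that share a subject into a single sentence. The lexicalisation table and triples are invented for this example; LIBER itself derives its linguistic information from the ontology, which is not modelled here.

```python
from collections import defaultdict

# Invented lexicalisation table mapping RDF-style predicates to phrases.
LEX = {"hasAuthor": "was written by", "hasYear": "was completed in"}

def realise(triples):
    """Generate a feedback text from (subject, predicate, object) triples,
    aggregating all predicates of a subject into one sentence."""
    grouped = defaultdict(list)
    for subj, pred, obj in triples:
        grouped[subj].append(f"{LEX.get(pred, pred)} {obj}")
    return " ".join(f"{s} {' and '.join(ps)}." for s, ps in grouped.items())

metadata = [("This thesis", "hasAuthor", "F. Hielkema"),
            ("This thesis", "hasYear", "2010")]
print(realise(metadata))
# -> "This thesis was written by F. Hielkema and was completed in 2010."
```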
5

Improved Scoring Models for Semantic Image Retrieval Using Scene Graphs

Conser, Erik Timothy. 28 September 2017
Image retrieval via a structured query is explored by Johnson et al. [7]. The query is structured as a scene graph, and a graphical model is generated from the scene graph's object, attribute, and relationship structure. Inference is performed on the graphical model with candidate images, and the resulting energies are used to rank the best matches. In [7], scene graph objects that are not in the set of recognized objects are not represented in the graphical model. This work proposes and tests two approaches for modeling the unrecognized objects in order to leverage the attribute and relationship models and improve image retrieval performance.
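The toy sketch below illustrates the gap this abstract addresses: energy-based image scoring from a scene graph's object, attribute, and relationship structure, with a generic fallback potential so unrecognised objects still contribute. All scores, names, and weights are invented for illustration and are not the thesis's actual models.

```python
# Hypothetical detector confidences for the recognised-object set.
RECOGNISED = {"dog": 0.9, "frisbee": 0.8}
GENERIC_OBJECT_SCORE = 0.3  # fallback potential for unrecognised objects

def object_potential(name: str) -> float:
    """Unary potential; unknown objects get the generic fallback instead of
    being dropped from the model entirely."""
    return RECOGNISED.get(name, GENERIC_OBJECT_SCORE)

def score_image(scene_graph: dict, attr_score: float, rel_score: float) -> float:
    """Combine unary and pairwise potentials; lower energy = better match."""
    unary = sum(object_potential(obj) for obj in scene_graph["objects"])
    return -(unary
             + attr_score * len(scene_graph["attributes"])
             + rel_score * len(scene_graph["relationships"]))

query = {"objects": ["dog", "xylophone"],   # "xylophone" is unrecognised
         "attributes": [("dog", "brown")],
         "relationships": [("dog", "plays", "xylophone")]}
print(score_image(query, attr_score=0.5, rel_score=0.4))
```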
6

Semantic methods for execution-level business process modeling : modeling support through process verification and service composition

Weber, Ingo M. January 1900
Diss.--Univ. Karlsruhe, 2009 / Includes bibliographical references and index.
7

Detecting Frames and Causal Relationships in Climate Change Related Text Databases Based on Semantic Features

January 2018
The subliminal impact of the framing of social, political and environmental issues such as climate change has been studied for decades in political science and communications research. Media framing offers an "interpretative package" for average citizens on how to make sense of climate change and its consequences for their livelihoods, how to deal with its negative impacts, and which mitigation or adaptation policies to support. A line of related work has used bag-of-words and word-level features to detect frames automatically in text. Such works face limitations, since standard keyword-based features may not generalize well to accommodate surface variations in text when different keywords are used for similar concepts. This thesis develops a unique type of textual feature that generalizes <subject, verb, object> triplets extracted from text by clustering them into high-level concepts. These concepts are utilized as features to detect frames in text. Compared to uni-gram and bi-gram based models, classification and clustering using generalized concepts yield better discriminating features and a higher classification accuracy, with a 12% boost (i.e. from 74% to 83% F-measure) and 0.91 clustering purity for Frame/Non-Frame detection.

The automatic discovery of complex causal chains among interlinked events and their participating actors has not yet been thoroughly studied. Previous studies on extracting causal relationships from text were based on laborious and incomplete hand-developed lists of explicit causal verbs, such as "causes" and "results in." Such approaches yield limited recall, because standard causal verbs may not generalize well to accommodate surface variations in texts when different keywords and phrases are used to express similar causal effects. Therefore, I present a system that utilizes generalized concepts to extract causal relationships. The proposed algorithms overcome surface variations in written expressions of causal relationships and discover the domino effects between climate events and human security. This semi-supervised approach alleviates the need for labor-intensive keyword-list development and annotated datasets. Experimental evaluations by domain experts achieve an average precision of 82%. Qualitative assessments of causal chains show that the results are consistent with the 2014 IPCC report, illuminating causal mechanisms underlying the linkages between climatic stresses and social instability. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
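The following toy sketch shows the core idea of generalising <subject, verb, object> triplets into concept-level features, so that different surface wordings collapse onto the same feature for frame and causal-relationship detection. A real system would use a parser and learned embedding clusters; the hand-made synonym table and example triplets here are stand-ins.

```python
from collections import defaultdict

# Hand-made stand-in for learned concept clusters, purely for illustration.
SYNONYMS = {"causes": "CAUSE", "triggers": "CAUSE", "results in": "CAUSE",
            "drought": "CLIMATE_STRESS", "flooding": "CLIMATE_STRESS",
            "migration": "SOCIAL_EFFECT", "conflict": "SOCIAL_EFFECT"}

def generalise(triplet):
    """Map each slot of an (s, v, o) triplet to its concept cluster."""
    return tuple(SYNONYMS.get(w, w.upper()) for w in triplet)

triplets = [("drought", "causes", "migration"),
            ("flooding", "triggers", "conflict")]

concept_counts = defaultdict(int)
for t in triplets:
    concept_counts[generalise(t)] += 1

# Both surface variations collapse onto the same generalised feature,
# (CLIMATE_STRESS, CAUSE, SOCIAL_EFFECT), which is what lets a classifier
# generalise beyond specific keywords.
print(dict(concept_counts))
```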
8

Semantic search of multimedia data objects through collaborative intelligence

Chan, Wing Sze. 01 January 2010
No description available.
9

Answering complex questions : supervised approaches

Sadid-Al-Hasan, Sheikh. University of Lethbridge, Faculty of Arts and Science. January 2009
The term "Google" has become a verb for most of us. Search engines, however, have certain limitations: ask one about the impact of the current global financial crisis in different parts of the world, for example, and you can expect to sift through thousands of results for the answer. This motivates research in complex question answering, where the purpose is to create summaries of large volumes of information as answers to complex questions, rather than simply offering a listing of sources. Unlike simple questions, complex questions cannot be answered easily, as they often require inferencing and synthesizing information from multiple documents. Hence, this task is accomplished by query-focused multi-document summarization systems. In this thesis we apply different supervised learning techniques to the complex question answering problem. To run our experiments, we consider the DUC-2007 main task. A huge amount of labeled data is a prerequisite for supervised training, and labeling is expensive and time-consuming when humans perform it manually. Automatic labeling can be a good remedy to this problem. We employ five different automatic annotation techniques to build extracts from human abstracts, using ROUGE, Basic Element (BE) overlap, a syntactic similarity measure, a semantic similarity measure, and the Extended String Subsequence Kernel (ESSK). The representative supervised methods we use are Support Vector Machines (SVM), Conditional Random Fields (CRF), Hidden Markov Models (HMM) and Maximum Entropy (MaxEnt). We annotate the DUC-2006 data and use it to train our systems, while 25 topics of the DUC-2007 data set are used as test data. The evaluation results reveal the impact of the automatic labeling methods on the performance of the supervised approaches to complex question answering. We also experiment with two ensemble-based approaches that show promising results for this problem domain. / x, 108 leaves : ill. ; 29 cm
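As an illustration of the automatic-annotation step, here is a minimal ROUGE-flavoured labeller: a document sentence is marked extract-worthy when its unigram overlap with a human abstract crosses a threshold, producing (sentence, label) pairs for supervised training. The tokeniser and threshold are simplifications, not the thesis's exact measures.

```python
def tokens(text: str) -> set:
    """Crude tokeniser: lowercase, split on whitespace, strip punctuation."""
    return {w.strip(".,;:").lower() for w in text.split()}

def unigram_recall(sentence: str, abstract: str) -> float:
    """Fraction of the sentence's unigrams that appear in the abstract,
    a simplified ROUGE-style overlap measure."""
    sent = tokens(sentence)
    return len(sent & tokens(abstract)) / max(len(sent), 1)

def label_sentences(document, abstract, threshold=0.5):
    """Return (sentence, label) pairs usable as supervised training data."""
    return [(s, unigram_recall(s, abstract) >= threshold) for s in document]

doc = ["The financial crisis reduced global trade.",
       "Unrelated sentence about football."]
human_abstract = "The crisis sharply reduced trade around the globe."
print(label_sentences(doc, human_abstract))
# -> first sentence labelled True, second labelled False
```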
10

An exploratory study using the predicate-argument structure to develop methodology for measuring semantic similarity of radiology sentences

Newsom, Eric Tyner. 12 November 2013
Indiana University-Purdue University Indianapolis (IUPUI) / The amount of information produced in the form of electronic free text in healthcare is increasing to levels that humans can no longer process in the course of advancing their professional practice. Information extraction (IE) is a sub-field of natural language processing with the goal of data reduction of unstructured free text. Central to IE is an annotated corpus that frames how IE methods should create the logical expressions necessary for processing the meaning of text. Most annotation approaches seek to maximize meaning and knowledge by chunking sentences into phrases and mapping these phrases to a knowledge source to create a logical expression. However, these studies consistently have problems addressing semantics, and none have addressed the issue of semantic similarity (or synonymy) needed to achieve data reduction. A successful methodology for data reduction depends on a framework that can represent the currently popular phrasal methods of IE while also fully representing the sentence. This study explores and reports on the benefits, problems, and requirements of using the predicate-argument statement (PAS) as that framework. The text from which the PAS structures are formed is a convenience sample from a prior study: ten synsets of 100 unique sentences from radiology reports, deemed by domain experts to mean the same thing.
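To make the PAS framework concrete, the toy sketch below represents each sentence as a predicate with labelled argument slots and scores two sentences by slot-wise agreement. A real pipeline would obtain these structures from a semantic role labeller; the hand-built structures and the scoring rule here are assumptions for illustration only.

```python
def pas_similarity(pas_a: dict, pas_b: dict) -> float:
    """Fraction of slots (predicate + arguments) on which two predicate-
    argument structures agree; a crude stand-in for semantic similarity."""
    slots = set(pas_a) | set(pas_b)
    agree = sum(1 for s in slots if pas_a.get(s) == pas_b.get(s))
    return agree / len(slots)

# Hand-built PAS for two radiology-style sentences that mean the same thing:
# "No evidence of pneumonia." vs. "Pneumonia is not seen on the radiograph."
s1 = {"predicate": "absent", "ARG1": "pneumonia"}
s2 = {"predicate": "absent", "ARG1": "pneumonia", "ARGM-LOC": "radiograph"}
print(pas_similarity(s1, s2))  # 2 of 3 slots agree -> ~0.67
```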
