71

Designing intelligent language tutoring systems for integration into foreign language instruction

Amaral, Luiz Alexandre Mattos do, January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Full text release at OhioLINK's ETD Center delayed at author's request
72

Text data analysis for a smart city project in a developing nation

Currin, Aubrey Jason January 2015 (has links)
Increased urbanisation against the backdrop of limited resources is complicating city planning and the management of functions such as public safety. The smart city concept can help, but most previous smart city systems have focused on automated sensors and the analysis of quantitative data. In developing nations, using the ubiquitous mobile phone as an enabler for crowdsourcing qualitative public safety reports from the public is a more viable option because of resource and infrastructure constraints. However, there is no established best method for analysing qualitative text reports for a smart city in a developing nation. The aim of this study, therefore, is the development of a model for enabling the analysis of unstructured natural language text for use in a public safety smart city project. Following the guidelines of the design science paradigm, the resulting model was developed through an inductive review of related literature, and assessed and refined through observations of a crowdsourcing prototype and conversational analysis with industry experts and academics. The content analysis technique was applied to the public safety reports obtained from the prototype using computer-assisted qualitative data analysis software (CAQDAS). This resulted in the development of a hierarchical ontology, which forms an additional output of this research project. The study thus shows how municipalities or local governments can use CAQDAS and content analysis techniques to prepare large quantities of text data for use in a smart city.
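The content analysis step described above can be illustrated with a small sketch: crowdsourced public safety reports are coded against a hierarchical set of categories by keyword matching. The categories, keywords, and reports below are invented for illustration and are not the ontology produced in the thesis.

```python
# A minimal sketch of keyword-based content analysis for crowdsourced public
# safety reports. The hypothetical two-level ontology and keyword lists are
# placeholders for the coding scheme a real study would develop.
from collections import Counter

ONTOLOGY = {
    "crime": {"theft": ["stolen", "robbed", "theft"],
              "assault": ["attacked", "assault", "fight"]},
    "infrastructure": {"lighting": ["street light", "dark street"],
                       "roads": ["pothole", "road damage"]},
}

def code_report(text: str) -> list[tuple[str, str]]:
    """Return (parent, child) ontology codes whose keywords appear in the report."""
    text = text.lower()
    codes = []
    for parent, children in ONTOLOGY.items():
        for child, keywords in children.items():
            if any(kw in text for kw in keywords):
                codes.append((parent, child))
    return codes

reports = [
    "My phone was stolen near the taxi rank last night.",
    "The street light on Main Road has been broken for weeks, it is a dark street.",
]
counts = Counter(code for r in reports for code in code_report(r))
for (parent, child), n in counts.most_common():
    print(f"{parent} > {child}: {n}")
```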
73

Exploration of Visual, Acoustic, and Physiological Modalities to Complement Linguistic Representations for Sentiment Analysis

Pérez-Rosas, Verónica 12 1900 (has links)
This research is concerned with the identification of sentiment in multimodal content. This is of particular interest given the increasing presence of subjective multimodal content on the web and other sources, which offers a rich and vast source of people's opinions, feelings, and experiences. Despite the need for tools that can identify opinions in the presence of diverse modalities, most current methods for sentiment analysis are designed for textual data only, and few attempts have been made to address this problem. The dissertation investigates techniques for augmenting linguistic representations with acoustic, visual, and physiological features. The potential benefits of using these modalities include linguistic disambiguation, visual grounding, and the integration of information about people's internal states. The main goal of this work is to build computational resources and tools that allow sentiment analysis to be applied to multimodal data. This thesis makes three important contributions. First, it shows that modalities such as audio, video, and physiological data can be successfully used to improve existing linguistic representations for sentiment analysis. We present a method that integrates linguistic features with features extracted from these modalities. Features are derived from verbal statements, audiovisual recordings, thermal recordings, and physiological sensor signals. The resulting multimodal sentiment analysis system is shown to significantly outperform the use of language alone. Using this system, we were able to predict the sentiment expressed in video reviews and also the sentiment experienced by viewers while exposed to emotionally loaded content. Second, the thesis provides evidence of the portability of the developed strategies to other affect recognition problems; we provide support for this by studying the deception detection problem. Third, this thesis contributes several multimodal datasets that will enable further research in sentiment and deception detection.
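As a rough illustration of the feature-level integration described above, the sketch below concatenates linguistic, acoustic, and physiological feature vectors (here random placeholders rather than real extracted features) and trains a single classifier on the fused representation; it is not the thesis's actual system.

```python
# A minimal sketch of early (feature-level) fusion for multimodal sentiment
# analysis: per-utterance feature vectors from several modalities are
# concatenated and fed to one classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
linguistic = rng.normal(size=(n, 50))     # e.g. bag-of-words or embedding features
acoustic = rng.normal(size=(n, 20))       # e.g. pitch, energy, speaking rate
physiological = rng.normal(size=(n, 10))  # e.g. skin temperature, heart rate
labels = rng.integers(0, 2, size=n)       # 0 = negative, 1 = positive sentiment

# Early fusion: concatenate the modality-specific vectors for each utterance.
fused = np.hstack([linguistic, acoustic, physiological])

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, fused, labels, cv=5)
print("mean accuracy:", scores.mean())
```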
74

Knowledge intensive natural language generation with revision

Cline, Ben E. 09 September 2008 (has links)
Traditional natural language generation systems use a pipelined architecture. Two problems with this architecture are poor task decomposition and the lack of interaction between conceptual and stylistic decision making. A revision architecture operating in a knowledge intensive environment is proposed as a means to deal with these two problems. In a revision system, text is produced and refined iteratively. A text production cycle consists of two steps. First, the text generators produce initial text. Second, this text is examined for defects by revisors. When defects are found, the revisors make suggestions for the regeneration of the text. The text generator/revision cycle continues to polish the text iteratively until no more defects can be found. Although previous research has focused on stylistic revisions only, this paper describes techniques for both stylistic and conceptual revisions. Using revision to produce extended natural language text through a series of drafts provides three significant advantages over a traditional natural language generation system. First, it reduces complexity through task decomposition. Second, it promotes text polishing techniques that benefit from the ability to examine generated text in the context of the underlying knowledge from which it was generated. Third, it provides a mechanism for the integrated handling of conceptual and stylistic decisions. For revision to operate intelligently and efficiently, the revision component must have access to both the surface text and the underlying knowledge from which it was generated. A knowledge intensive architecture with a uniform knowledge base allows the revision software to quickly locate referents, choices made in producing the defective text, alternatives to the decisions made at both the conceptual and stylistic levels, and the intent of the text. The revisors use this knowledge, along with facts about the topic at hand and knowledge about how text is produced, to select alternatives for improving the text. The Kalos system was implemented to illustrate revision processing in a natural language generation system. It produces advanced draft quality text for a microprocessor users' guide from a knowledge base describing the microprocessor. It uses revision techniques in a knowledge intensive environment to iteratively polish its initial generation. The system performs both conceptual and stylistic revisions. Example output from the system, showing both types of revision, is presented and discussed. Techniques for dealing with the computational problems caused by the system's uniform knowledge base are described. / Ph. D.
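The generate/revise cycle lends itself to a compact illustration. The sketch below runs a toy generator and a single stylistic revisor in a loop until no defect is found; the aggregation rule and the example sentences are invented and are not Kalos's actual conceptual or stylistic revisors.

```python
# A minimal sketch of the generate/revise cycle: a generator produces a draft,
# revisors propose regenerations when they find defects, and the loop repeats
# until no revisor has anything left to fix.
def generate_initial_draft() -> str:
    return ("The Z80 CPU has 8-bit registers. "
            "The Z80 CPU has a 16-bit address bus.")

def aggregation_revisor(text: str) -> str | None:
    """Toy stylistic revisor: merge consecutive sentences that repeat a subject."""
    sentences = [s.strip().rstrip(".") for s in text.split(".") if s.strip()]
    for i in range(len(sentences) - 1):
        a, b = sentences[i], sentences[i + 1]
        if " has " in a and " has " in b and a.split(" has ")[0] == b.split(" has ")[0]:
            merged = a + " and " + b.split(" has ", 1)[1]
            revised = sentences[:i] + [merged] + sentences[i + 2:]
            return ". ".join(revised) + "."
    return None  # no defect found

def polish(draft: str, revisors) -> str:
    # Keep cycling until no revisor proposes a change.
    changed = True
    while changed:
        changed = False
        for revisor in revisors:
            suggestion = revisor(draft)
            if suggestion is not None:
                draft = suggestion
                changed = True
    return draft

print(polish(generate_initial_draft(), [aggregation_revisor]))
```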
75

Semantic annotation of Chinese texts with message structures based on HowNet

Wong, Ping-wai., 黃炳蔚. January 2007 (has links)
published_or_final_version / abstract / Humanities / Doctoral / Doctor of Philosophy
76

Generating affective natural language for parents of neonatal infants

Mahamood, Saad Ali January 2010 (has links)
The thesis presented here describes original research in the field of Natural Language Generation (NLG). NLG is the subfield of artificial intelligence that is concerned with the automatic production of documents from underlying data. This thesis in particular focuses on developing new and novel methods for generating text that take the recipient's level of stress into consideration as a factor for adapting the resultant textual output. Taking the recipient's level of stress into account was particularly salient given the domain in which this research was conducted: providing information for parents of pre-term infants during neonatal intensive care (NICU), a highly technical and stressful environment for parents, where emotional sensitivity must be shown in the way information is presented. We have investigated the emotional and informational needs of these parents through an extensive review of past literature and two separate research studies with former and current NICU parents. The NLG system built for this research was called BabyTalk Family (BT-Family), a system that produces for parents a textual summary of the medical events that have occurred for a baby in the NICU in the last twenty-four hours. The novelty of this system is that it is capable of estimating the recipient's level of stress and, by using several affective NLG strategies, can tailor its output for a stressed audience, unlike traditional NLG systems, where the output remains unchanged regardless of the recipient's emotional state. The key innovation in this system was the integration of several affective strategies in the Document Planner for tailoring textual output for stressed recipients. BT-Family's output was evaluated with thirteen parents who had previously had a baby in neonatal care. We developed a methodology for an evaluation that involved a direct comparison between stressed and unstressed text for the same given medical scenario on variables such as preference, understandability, helpfulness, and emotional appropriateness. The results obtained showed that the parents overwhelmingly preferred the stressed text on all of the variables measured.
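A minimal sketch of the kind of stress-adaptive content selection described above is given below: the same medical events are summarised differently depending on an estimated stress level. The events, phrasings, importance scores, and threshold are invented for illustration and are not drawn from BT-Family.

```python
# A toy document planner that adapts content selection and phrasing to an
# estimated stress level: stressed recipients get fewer, more important events,
# worded more empathetically.
EVENTS = [
    {"importance": 0.9,
     "neutral": "FiO2 was increased from 30% to 40% overnight.",
     "empathetic": "The staff gave your baby a little more oxygen overnight to keep her comfortable."},
    {"importance": 0.3,
     "neutral": "A routine blood gas test was performed at 06:00.",
     "empathetic": "A routine blood test was done this morning; this is a normal part of care."},
]

def plan_summary(events, stress_level: float) -> str:
    """Select and phrase events according to the recipient's estimated stress."""
    stressed = stress_level > 0.6   # hypothetical threshold
    selected = [e for e in events if not stressed or e["importance"] >= 0.5]
    key = "empathetic" if stressed else "neutral"
    return " ".join(e[key] for e in selected)

print(plan_summary(EVENTS, stress_level=0.8))  # stressed recipient
print(plan_summary(EVENTS, stress_level=0.2))  # unstressed recipient
```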
77

Using natural language generation to provide access to semantic metadata

Hielkema, Feikje January 2010 (has links)
In recent years, the use of metadata to describe and share resources has grown in importance, especially in the context of the Semantic Web.  However, access to metadata is difficult for users without experience with description logic or formal languages, and currently this description applies to most web users.  There is a strong need for interfaces that provide easy access to semantic metadata, enabling novice users to browse, query and create it easily. This thesis describes a natural language generation interface to semantic metadata called LIBER (Language Interface for Browsing and Editing Rdf), driven by domain ontologies which are integrated with domain-specific linguistic information.  LIBER uses the linguistic information to generate fluent descriptions and search terms through syntactic aggregation.  The tool contains three modules to support metadata creation, querying and browsing, which implement the WYSIWYM (What You See Is What You Meant) natural language generation approach.  Users can add and remove information by editing system-generated feedback texts.  Two studies have been conducted to evaluate LIBER's usability and compare it to a different Semantic Web interface.  The studies showed that subjects with no prior experience of the Semantic Web could use LIBER effectively to create, search and browse metadata, and they were a useful source of ideas for improving LIBER's usability.  However, the results of these studies were less positive than we had hoped, and users actually preferred the other Semantic Web tool.  This has raised questions about which user audience LIBER should aim for, and the extent to which the underlying ontologies influence the usability of the interface. LIBER's portability to other domains is supported by a tool with which ontology developers without a background in linguistics can prepare their ontologies for use in LIBER by adding the necessary linguistic information.
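The WYSIWYM idea of editable, system-generated feedback texts can be sketched roughly as follows: a description is generated from partially filled metadata triples, with unfilled slots rendered as placeholders the user could select and edit. The property-to-phrase mapping is invented for illustration and is not LIBER's actual lexicon.

```python
# A toy WYSIWYM-style feedback text generator: metadata properties are mapped
# to phrase templates, and unfilled slots are shown as editable anchors.
PHRASES = {
    "dc:creator": "was created by {}",
    "dc:date": "on {}",
    "dc:subject": "and is about {}",
}

def feedback_text(resource: str, triples: dict) -> str:
    parts = [f"The resource '{resource}'"]
    for prop, phrase in PHRASES.items():
        value = triples.get(prop)
        # Unfilled slots become placeholders that the user can click to edit.
        parts.append(phrase.format(value if value else "[some value]"))
    return " ".join(parts) + "."

metadata = {"dc:creator": "F. Hielkema", "dc:subject": "semantic metadata"}
print(feedback_text("interview transcript 3", metadata))
```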
78

Distributed representations for compositional semantics

Hermann, Karl Moritz January 2014 (has links)
The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A lot of research has been devoted to finding ways of representing the semantics of individual words in vector spaces. Distributional approaches—meaning distributed representations that exploit co-occurrence statistics of large corpora—have proved popular and successful across a number of tasks. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also the structure they are contained in at the phrasal or sentential level. Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is an equally fundamental task of NLP. This dissertation explores methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units. Our underlying hypothesis is that neural models are a suitable vehicle for learning semantically rich representations and that such representations in turn are suitable vehicles for solving important tasks in natural language processing. The contribution of this thesis is a thorough evaluation of our hypothesis, as part of which we introduce several new approaches to representation learning and compositional semantics, as well as multiple state-of-the-art models which apply distributed semantic representations to various tasks in NLP. Part I focuses on distributed representations and their application. In particular, in Chapter 3 we explore the semantic usefulness of distributed representations by evaluating their use in the task of semantic frame identification. Part II describes the transition from semantic representations for words to compositional semantics. Chapter 4 covers the relevant literature in this field. Following this, Chapter 5 investigates the role of syntax in semantic composition. For this, we discuss a series of neural network-based models and learning mechanisms, and demonstrate how syntactic information can be incorporated into semantic composition. This study allows us to establish the effectiveness of syntactic information as a guiding parameter for semantic composition, and answer questions about the link between syntax and semantics. Following these discoveries regarding the role of syntax, Chapter 6 investigates whether it is possible to further reduce the impact of monolingual surface forms and syntax when attempting to capture semantics. Asking how machines can best approximate human signals of semantics, we propose multilingual information as one method for grounding semantics, and develop an extension to the distributional hypothesis for multilingual representations. Finally, Part III summarizes our findings and discusses future work.
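As a rough illustration of composing word vectors into phrase representations, the sketch below contrasts plain additive composition with a composition that uses separate (randomly initialised) matrices for heads and modifiers as a stand-in for syntax-aware models; none of the vectors or matrices are trained, and this is not one of the thesis's models.

```python
# A toy comparison of two composition functions over word vectors: simple
# addition versus role-specific matrices with a nonlinearity.
import numpy as np

rng = np.random.default_rng(1)
dim = 4
vocab = {w: rng.normal(size=dim) for w in ["very", "good", "movie"]}

def additive(words):
    """Phrase vector as the sum of its word vectors."""
    return np.sum([vocab[w] for w in words], axis=0)

W_head = rng.normal(size=(dim, dim))  # untrained placeholder weights
W_mod = rng.normal(size=(dim, dim))

def syntax_weighted(head: str, modifier: str):
    """Compose head and modifier with role-specific matrices, then a nonlinearity."""
    return np.tanh(W_head @ vocab[head] + W_mod @ vocab[modifier])

print("additive:", additive(["good", "movie"]))
print("syntactic:", syntax_weighted("movie", "good"))
```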
79

An evaluation of machine learning algorithms for tweet sentiment analysis

Unknown Date (has links)
Sentiment analysis of tweets is an application of mining Twitter, and is growing in popularity as a means of determining public opinion. Machine learning algorithms are used to perform sentiment analysis; however, data quality issues such as high dimensionality, class imbalance or noise may negatively impact classifier performance. Machine learning techniques exist for targeting these problems, but have not been applied to this domain, or have not been studied in detail. In this thesis we discuss research that has been conducted on tweet sentiment classification, its accompanying data concerns, and methods of addressing these concerns. We test the impact of feature selection, data sampling and ensemble techniques in an effort to improve classifier performance. We also evaluate the combination of feature selection and ensemble techniques and examine the effects of high dimensionality when combining multiple types of features. Additionally, we provide strategies and insights for potential avenues of future work. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2015 / FAU Electronic Theses and Dissertations Collection
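A minimal sketch of the kind of pipeline discussed above, assuming scikit-learn: bag-of-words features extracted from tweets, chi-squared feature selection to reduce dimensionality, and an ensemble classifier. The tweets and labels are toy data, and the particular components are illustrative choices rather than those evaluated in the thesis.

```python
# Bag-of-words features, chi-squared feature selection, and an ensemble
# classifier wired into a single scikit-learn pipeline.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline

tweets = [
    "love this phone, battery is great",
    "worst service ever, totally disappointed",
    "such a beautiful day, feeling happy",
    "this update broke everything, so annoying",
    "amazing food and friendly staff",
    "terrible traffic again this morning",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

pipeline = Pipeline([
    ("bow", CountVectorizer()),              # high-dimensional sparse features
    ("select", SelectKBest(chi2, k=10)),     # keep the 10 most informative terms
    ("ensemble", RandomForestClassifier(n_estimators=50, random_state=0)),
])
pipeline.fit(tweets, labels)
print(pipeline.predict(["happy with the great battery", "so disappointed again"]))
```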
80

Approximate content match of multimedia data with natural language queries.

January 1995 (has links)
Wong Kit-pui. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 117-119).

Contents:
  Acknowledgment --- p.4
  Abstract --- p.6
  Keywords --- p.7
  Chapter 1: INTRODUCTION --- p.9
  Chapter 2: APPROACH --- p.14
    2.1 Challenges --- p.15
    2.2 Knowledge Representation --- p.16
    2.3 Proposed Information Model --- p.17
    2.4 Restricted Language Set --- p.20
  Chapter 3: THEORY --- p.26
    3.1 Features --- p.26
      3.1.1 Superficial Details --- p.30
      3.1.2 Hidden Details --- p.31
    3.2 Matching Process --- p.36
      3.2.1 Inexact Match --- p.37
      3.2.2 An Illustration --- p.38
        3.2.2.1 Stage 1 - Query Parsing --- p.39
        3.2.2.2 Stage 2 - Gross Filtering --- p.41
        3.2.2.3 Stage 3 - Fine Scoring --- p.42
    3.3 Extending Knowledge --- p.46
      3.3.1 Attributes with Intermediate Closeness --- p.47
      3.3.2 Comparing Different Entities --- p.48
    3.4 Putting Concepts to Work --- p.50
  Chapter 4: IMPLEMENTATION --- p.52
    4.1 Overall Structure --- p.53
    4.2 Choosing NL Parser --- p.55
    4.3 Ambiguity --- p.56
    4.4 Storing Knowledge --- p.59
      4.4.1 Type Hierarchy --- p.60
        4.4.1.1 Node Name --- p.61
        4.4.1.2 Node Identity --- p.61
        4.4.1.3 Operations --- p.68
          4.4.1.3.1 Direct Edit --- p.68
          4.4.1.3.2 Interactive Edit --- p.68
      4.4.2 Implicit Features --- p.71
      4.4.3 Database of Captions --- p.72
      4.4.4 Explicit Features --- p.73
      4.4.5 Transformation Map --- p.74
  Chapter 5: ILLUSTRATION --- p.78
    5.1 Gloss Tags --- p.78
    5.2 Parsing --- p.81
      5.2.1 Resolving Nouns and Verbs --- p.81
      5.2.2 Resolving Adjectives and Adverbs --- p.84
      5.2.3 Normalizing Features --- p.89
      5.2.4 Resolving Prepositions --- p.90
    5.3 Matching --- p.93
      5.3.1 Gross Filtering --- p.94
      5.3.2 Fine Scoring --- p.96
  Chapter 6: DISCUSSION --- p.101
    6.1 Performance Measures --- p.101
      6.1.1 General Parameters --- p.101
      6.1.2 Experiments --- p.103
        6.1.2.1 Inexact Matching Behaviour --- p.103
        6.1.2.2 Exact Matching Behaviour --- p.106
    6.2 Difficulties --- p.108
    6.3 Possible Improvement --- p.110
    6.4 Conclusion --- p.112
  References --- p.117
  Appendices --- p.121
    Appendix A: Notation --- p.121
    Appendix B: Glossary --- p.123
    Appendix C: Proposed Feature Slots and Value --- p.126
    Appendix D: Sample Captions and Queries --- p.128
    Appendix E: Manual Pages --- p.130
    Appendix F: Directory Structure --- p.136
    Appendix G: Imported Toolboxes --- p.137
    Appendix H: Program Listing --- p.140
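The contents above name three matching stages (query parsing, gross filtering, fine scoring). The sketch below shows how such a pipeline might be wired together; the captions, feature structures, and scoring rule are invented for illustration and are not the thesis's actual representation.

```python
# A toy three-stage match of a natural language query against stored captions:
# parse the query into features, filter candidates grossly by entity, then
# rank the survivors by a fine attribute-overlap score (inexact match).
CAPTIONS = [
    {"id": 1, "entity": "dog", "attributes": {"colour": "brown", "size": "large"}},
    {"id": 2, "entity": "dog", "attributes": {"colour": "white", "size": "small"}},
    {"id": 3, "entity": "car", "attributes": {"colour": "brown"}},
]

def parse_query():
    # Stand-in for the parsing stage: "a large brown dog" -> feature structure.
    return {"entity": "dog", "attributes": {"colour": "brown", "size": "large"}}

def gross_filter(query, captions):
    """Discard captions whose main entity does not match the query."""
    return [c for c in captions if c["entity"] == query["entity"]]

def fine_score(query, caption) -> float:
    """Fraction of query attributes matched by the caption."""
    attrs = query["attributes"]
    hits = sum(1 for k, v in attrs.items() if caption["attributes"].get(k) == v)
    return hits / len(attrs)

query = parse_query()
candidates = gross_filter(query, CAPTIONS)
ranked = sorted(candidates, key=lambda c: fine_score(query, c), reverse=True)
for c in ranked:
    print(c["id"], fine_score(query, c))
```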
