271

Dokumentenbasierte Steuerung von Geschäftsprozessen

Reichelt, Dominik 10 October 2014 (has links) (PDF)
Business processes in the administrative and service sectors are frequently triggered by incoming documents. It is therefore essential that these documents reach the right member of staff in the company or organization. However, external senders are often unfamiliar with the internal organizational structure, so they write to a central office instead. That office must then forward the document to the responsible colleagues based on its content, which can involve considerable staff effort. This research develops a system intended to perform this task automatically. To that end, various classification methods are tested and assessed with respect to their reliability. Furthermore, improvements over common machine-based methods are pursued.
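The routing step the abstract describes, classifying an incoming document by its content and forwarding it to the responsible unit, can be sketched with a standard text classifier. This is a minimal sketch, not the system built in the thesis; the department labels and training snippets below are invented for illustration.

```python
# Minimal sketch of content-based document routing, assuming scikit-learn is available.
# Departments and training snippets are invented placeholders, not from the thesis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Invoice for office supplies, payment due within 30 days",
    "Application and CV for the advertised software engineer position",
    "Complaint about a delayed delivery of order 4711",
]
train_labels = ["accounting", "human_resources", "customer_service"]

# TF-IDF features plus a linear classifier stand in for the thesis's
# (unspecified) classification methods.
router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(train_texts, train_labels)

incoming = "Please find attached my cover letter for the open developer role"
print(router.predict([incoming])[0])  # likely: human_resources
```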
272

The mat sat on the cat : investigating structure in the evaluation of order in machine translation

McCaffery, Martin January 2017 (has links)
We present a multifaceted investigation into the relevance of word order in machine translation. We introduce two tools, DTED and DERP, each using dependency structure to detect differences between the structures of machine-produced translations and human-produced references. DTED applies the principle of Tree Edit Distance to calculate the edit operations required to convert one structure into another. Four variants of DTED have been produced, differing in the importance they place on words which match between the two sentences. DERP represents a more detailed procedure, making use of the dependency relations between words when evaluating the disparities between paths connecting matching nodes. In order to empirically evaluate DTED and DERP, and as a standalone contribution, we have produced WOJ-DB, a database of human judgments. Containing scores relating to translation adequacy and more specifically to word order quality, it is intended to support investigations into a wide range of translation phenomena. We report an internal evaluation of the information in WOJ-DB, then use it to evaluate variants of DTED and DERP, both to determine their relative merits and to compare them against third-party baselines. We present our conclusions about the importance of structure to the tools and their relevance to word order specifically, then propose further related avenues of research suggested or enabled by our work.
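As a rough illustration of the tree-edit-distance principle that DTED builds on, the sketch below compares two toy dependency trees using the third-party zss package (an implementation of the Zhang-Shasha algorithm). It is not the DTED metric itself, and the trees are invented.

```python
# Sketch of tree edit distance between two toy dependency trees,
# assuming the third-party `zss` package (Zhang-Shasha algorithm) is installed.
from zss import Node, simple_distance

# Reference: "the cat sat on the mat" (heads as parents, dependents as children)
ref = (Node("sat")
       .addkid(Node("cat").addkid(Node("the")))
       .addkid(Node("on").addkid(Node("mat").addkid(Node("the")))))

# Hypothesis: "the mat sat on the cat" (subject and object swapped)
hyp = (Node("sat")
       .addkid(Node("mat").addkid(Node("the")))
       .addkid(Node("on").addkid(Node("cat").addkid(Node("the")))))

# Number of node insertions, deletions and relabelings needed to turn one
# tree into the other; DTED's variants additionally weight matching words.
print(simple_distance(ref, hyp))
```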
273

An Evaluation of NLP Toolkits for Information Quality Assessment

Karlin, Ievgen January 2012 (has links)
Documentation is often the first source that can help a user solve problems or that states the conditions of use for a product. It should therefore be clear and understandable. But what does "understandable" mean? And how can we detect whether a text is unclear? This thesis addresses those questions. The main idea of this work is to measure the clarity of textual information using natural language processing. There are three main steps towards this goal: defining criteria for poor text clarity, evaluating different natural language toolkits and selecting a suitable one, and implementing a prototype system that, given a text, measures its clarity. The thesis project is planned to be integrated into VizzAnalyzer (a quality analysis tool that processes information at the structural level), and its main task is to perform a clarity analysis of text information extracted by VizzAnalyzer from different XML files.
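One concrete instance of the kind of clarity criterion such a system might compute is a readability score. The sketch below calculates a rough Flesch Reading Ease value with a naive syllable counter; it is only an illustration, not the criteria actually chosen in the thesis.

```python
# Rough Flesch Reading Ease sketch: higher scores mean easier text.
# The syllable counter is a crude vowel-group heuristic, for illustration only.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(round(flesch_reading_ease("The user opens the file. The tool checks it."), 1))
```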
274

Dokumentenbasierte Steuerung von Geschäftsprozessen

Reichelt, Dominik January 2014 (has links)
Business processes in the administrative and service sectors are frequently triggered by incoming documents. It is therefore essential that these documents reach the right member of staff in the company or organization. However, external senders are often unfamiliar with the internal organizational structure, so they write to a central office instead. That office must then forward the document to the responsible colleagues based on its content, which can involve considerable staff effort. This research develops a system intended to perform this task automatically. To that end, various classification methods are tested and assessed with respect to their reliability. Furthermore, improvements over common machine-based methods are pursued.
275

Commonsense Knowledge Representation and Reasoning in Statistical Script Learning

I-Ta Lee (9736907) 15 December 2020 (has links)
A recent surge of research on commonsense knowledge has given the AI community new opportunities and challenges. Many studies focus on constructing commonsense knowledge representations from natural language data. However, how to learn such representations from large-scale text data is still an open question. This thesis addresses the problem through statistical script learning, which learns event representations from stereotypical event relationships using weak supervision. These event representations serve as an abundant source of commonsense knowledge to be applied in downstream language tasks. We propose three script learning models that generalize previous works with new insight. A feature-enriched model characterizes fine-grained and entity-based event properties to address specific semantics. A multi-relational model generalizes traditional script learning models, which rely on a single type of event relationship (co-occurrence), to a model that considers typed event relationships, going beyond simple event similarities. A narrative graph model leverages a narrative graph to inform an event with a grounded situation and maintain global consistency of event states. In addition, pretrained language models such as BERT are used to further improve event semantics.

Our three script learning models do not rely on annotated datasets, as the cost of creating these at large scale would be unreasonable. Based on weak supervision, we extract events from large collections of textual data. Although noisy, the learned event representations carry profound commonsense information, enhancing performance in downstream language tasks.

We evaluate their performance with various intrinsic and extrinsic evaluations. In the intrinsic evaluations, although the three models are evaluated in terms of various aspects, the shared core task is Multiple Choice Narrative Cloze (MCNC), which measures the model's ability to predict what happens next, out of five candidate events, in a given situation. This task facilitates fair comparisons between script learning models for commonsense inference. The three models were proposed in three consecutive years, from 2018 to 2020, each outperforming the previous year's model as well as the competitors' baselines. Our best model outperforms EventComp, a widely recognized baseline, by a large margin in MCNC: an absolute accuracy improvement of 9.73% (53.86% → 63.59%). In the extrinsic evaluations, we use our models for implicit discourse sense classification (IDSC), a challenging task in which two argument spans are annotated with an implicit discourse sense; the task is to predict the sense type, which requires a deep understanding of common sense between discourse arguments. Moreover, in additional work we touch on a more interesting group of tasks concerning psychological commonsense reasoning, which requires reasoning about and understanding human mental states such as motivation, emotion, and desire. Our best model, an enhancement of the narrative graph model, combines the advantages of the above three works to address entity-based features, typed event relationships, and grounded context in one model. The model successfully captures the context in which events appear and the interactions between characters' mental states, outperforming previous works.

The main contributions of this thesis are as follows: (1) We identify the importance of entity-based features for representing commonsense knowledge with script learning. (2) We create one of the first, if not the first, script learning models that addresses the multi-relational nature of event relationships. (3) We publicly release contextualized event representations (models) trained on large-scale newswire data. (4) We develop a script learning model that combines entity-based features, typed event relationships, and grounded context in one model, and show that it is a good fit for modeling psychological common sense.

To conclude, this thesis presents an in-depth exploration of statistical script learning, enhancing existing models with new insight. Our experimental results show that models informed with the new knowledge aspects significantly outperform previous works in both intrinsic and extrinsic evaluations.
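The Multiple Choice Narrative Cloze task described above can be made concrete with a small evaluation loop: given a context of events and five candidate next events, a scorer ranks the candidates, and accuracy is the fraction of items where the gold candidate ranks first. The scorer below is a trivial word-overlap stand-in for the thesis's learned event representations, and the example item is invented.

```python
# MCNC-style evaluation sketch. The overlap scorer is a placeholder for a
# learned event-representation model; the data item is invented.
def score(context_events, candidate):
    # Toy scorer: count word overlap between the candidate and the context.
    context_words = set(" ".join(context_events).split())
    return len(context_words & set(candidate.split()))

def mcnc_accuracy(items):
    correct = 0
    for context_events, candidates, gold_index in items:
        scores = [score(context_events, c) for c in candidates]
        if scores.index(max(scores)) == gold_index:
            correct += 1
    return correct / len(items)

items = [
    (["john ordered food", "john ate the food"],
     ["john paid the bill", "john launched a rocket", "john wrote a symphony",
      "john painted a fence", "john climbed a mountain"],
     0),
]
print(mcnc_accuracy(items))  # 1.0: the gold candidate shares the most words with the context
```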
276

Chatbot : A qualitative study of users' experience of Chatbots / Chatbot : En kvalitativ studie om användarnas upplevelse av Chatbottar

Aljadri, Sinan January 2021 (has links)
The aim of the present study has been to examine users' experience of Chatbots from a business perspective and a consumer perspective. The study has also focused on highlighting the limitations a Chatbot can have and possible improvements for future development. The study is based on a qualitative research method with semi-structured interviews that were analyzed using a thematic analysis. The interview material has been analyzed in light of previous research and various theoretical perspectives such as Artificial Intelligence (AI) and Natural Language Processing (NLP). The results of the study show that the experience of Chatbots can differ between the businesses that offer them, which are more positive, and the consumers who use them as customer service. Limitations of, and suggested improvements to, Chatbots are also a consistent finding of the study.
277

RECOMMENDATION SYSTEMS IN SOCIAL NETWORKS

Behafarid Mohammad Jafari (15348268) 18 May 2023 (has links)
The dramatic improvement in information and communication technology (ICT) has driven an evolution in learning management systems (LMS). The rapid growth of LMSs has led users to demand more advanced, automated, and intelligent services. CourseNetworking is a next-generation LMS that adopts machine learning to add personalization, gamification, and more dynamics to the system. This work proposes two recommender systems that can help improve CourseNetworking services. The first is a social recommender system that helps CourseNetworking track user interests and give more relevant recommendations. Recently, graph neural network (GNN) techniques have been employed in social recommender systems due to their success in graph representation learning, including on social network graphs. Despite rapid advances in recommender system performance, dealing with the dynamic nature of social network data remains one of the key challenges. In this research, a novel method is presented that provides social recommendations by incorporating the dynamic property of social network data into a heterogeneous graph, supplementing the graph with time-span nodes that define users' long-term and short-term preferences over time. The second service proposed to be added to Rumi is a hashtag recommendation system that helps users label their posts quickly, improving the searchability of content. In recent years, several hashtag recommendation methods have been proposed and developed to speed up text processing and quickly identify critical phrases. These methods use different approaches and techniques to obtain critical information from large amounts of data. This work investigates the efficiency of unsupervised keyword extraction methods for hashtag recommendation and recommends the best-performing one for use in a hashtag recommender system.
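A minimal version of the unsupervised keyword-extraction idea behind the second recommender is to rank a post's terms by TF-IDF against the other posts and propose the top terms as hashtags. The sketch below uses scikit-learn and invented posts; it is not the specific extractor selected in the thesis.

```python
# Unsupervised hashtag suggestion sketch: rank a post's terms by TF-IDF
# and propose the top-k as hashtags. Posts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Our study group meets tonight to review graph neural networks",
    "Reminder: the machine learning quiz covers gradient descent",
    "New lecture notes on recommender systems are now online",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(posts)
terms = vectorizer.get_feature_names_out()

def suggest_hashtags(post_index: int, k: int = 3):
    row = tfidf[post_index].toarray().ravel()
    top = row.argsort()[::-1][:k]
    return ["#" + terms[i] for i in top if row[i] > 0]

print(suggest_hashtags(0))  # prints three '#'-prefixed terms drawn from the first post
```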
278

Approaches to natural language processing in app development

Djoweini, Camran, Hellberg, Henrietta January 2018 (has links)
Natural language processing is a still-developing field that is not yet fully established. A high demand for natural language processing in applications creates a need for good development tools and for implementation approaches suited to the engineers behind those applications. This project approaches the field from an engineering point of view, surveying the approaches, tools, and techniques that are readily available today for developing natural language processing support. The sub-area of information retrieval within natural language processing was examined through a case study, in which prototypes were developed to gain a deeper understanding of the tools and techniques used for such tasks from an engineering point of view. We found that there are two major approaches to developing natural language processing support for applications: high-level and low-level approaches. A categorization of tools and frameworks belonging to the two approaches, as well as the source code, documentation, and evaluations of two prototypes developed as part of the research, are presented. The choice of approach, tools, and techniques should be based on the specifications and requirements of the final product, and both levels have their own pros and cons. The results of the report are, to a large extent, generalizable, since many different natural language processing tasks can be solved with similar solutions even if their goals vary.
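As an example of the kind of "low-level" information-retrieval prototype the report contrasts with toolkit-based development, the sketch below builds a tiny inverted index with term-overlap ranking in plain Python. The documents and query are invented, and this is not one of the report's actual prototypes.

```python
# Low-level information-retrieval sketch without an NLP toolkit: a tiny
# inverted index with term-overlap ranking. Documents and query are invented.
from collections import defaultdict

docs = {
    "doc1": "the parser extracts named entities from user messages",
    "doc2": "this guide explains how to tokenize text for search",
    "doc3": "named entity recognition improves search relevance",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(query: str):
    hits = defaultdict(int)
    for token in query.lower().split():
        for doc_id in index.get(token, set()):
            hits[doc_id] += 1  # rank by number of matching query terms
    return sorted(hits, key=hits.get, reverse=True)

print(search("named entities"))  # ['doc1', 'doc3']
```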
279

Determining Whether and When People Participate in the Events They Tweet About

Sanagavarapu, Krishna Chaitanya 05 1900 (has links)
This work describes an approach to determining whether people participate in the events they tweet about. Specifically, we determine whether people are participants in events with respect to the tweet timestamp. We target all events expressed by verbs in tweets, including past and present events as well as events that may occur in the future. We define an event participant as a person directly involved in an event, regardless of whether they are the agent, the recipient, or play another role. We present an annotation effort, guidelines, and a quality analysis covering 1,096 event mentions. We discuss the label distributions and event behavior in the annotated corpus. We also describe several features and a standard supervised machine learning approach for automatically determining whether and when the author is a participant of the event in the tweet. We discuss trends in the results obtained and draw important conclusions.
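The supervised set-up the abstract mentions can be sketched as a small feature-based classifier: hand-crafted cues from the tweet and the event verb are mapped to whether the author participates. The features, labels, and examples below are invented stand-ins, not the thesis's actual feature set.

```python
# Sketch of a feature-based classifier for "is the author a participant?".
# Features, examples, and labels are invented; the thesis's feature set differs.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(tweet: str, event_verb: str) -> dict:
    words = tweet.lower().split()
    return {
        "has_first_person": any(w in ("i", "we", "my", "our") for w in words),
        "verb_past": event_verb.endswith("ed"),
        "verb": event_verb,
    }

train = [
    (features("I finally watched the game last night", "watched"), "participant"),
    (features("They cancelled the concert downtown", "cancelled"), "not_participant"),
    (features("We are flying to the conference tomorrow", "flying"), "participant"),
    (features("The mayor opened the new bridge", "opened"), "not_participant"),
]
X, y = zip(*train)

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(list(X), list(y))
print(clf.predict([features("I am running the marathon on Sunday", "running")])[0])  # likely: participant
```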
280

On Semantic Cognition, Inductive Generalization, and Language Models

Kanishka Misra (9708551) 05 September 2023 (has links)
<p dir="ltr">Our ability to understand language and perform reasoning crucially relies on a robust system of semantic cognition (G. L. Murphy, 2002; Rogers & McClelland, 2004; Rips et al., 2012; Lake & Murphy, 2021): processes that allow us to learn, update, and produce inferences about everyday concepts (e.g., cat, chair), properties (e.g., has fur, can be sat on), categories (e.g., mammals, furniture), and relations (e.g., is-a, taller-than). Meanwhile, recent progress in the field of natural language processing (NLP) has led to the development of language models (LMs): sophisticated neural networks that are trained to predict words in context (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020), and as a result build representations that encode the knowledge present in the statistics of their training environment. These models have achieved impressive levels of performance on a range of tasks that require sophisticated semantic knowledge (e.g. question answering and natural language inference), often even reaching human parity. To what extent do LMs capture the nuances of human conceptual knowledge and reasoning? Centering around this broad question, this dissertation uses core ideas in human semantic cognition as guiding principles and lays down the groundwork to establish effective evaluation and improvement of conceptual understanding in LMs. In particular, I build on prior work that focuses on characterizing what semantic knowledge is made available in the behavior and representations of LMs, and extend it by additionally proposing tests that focus on functional consequences of acquiring basic semantic knowledge.<br><br>I primarily focus on inductive generalization (Hayes & Heit, 2018)—the unique ability of humans to rely on acquired conceptual knowledge to project or generalize novel information—as a context within which we can analyze LMs’ encoding of conceptual knowledge. I do this, since the literature surrounding inductive generalization contains a variety of empirical regularities that map to specific conceptual abstractions and shed light on how humans store, organize and use conceptual knowledge. Before explicitly analyzing LMs for these empirical regularities, I test them on two other contexts, which also feature the role of inductive generalization. First I test the extent to which LMs demonstrate typicality effects—a robust finding in human categorization literature where certain members of a category are considered to be more central to the category than are others. Specifically, I test the behavior 19 different LMs on two contexts where typicality effects modulate human behavior: 1) verification of sentences expressing taxonomic category membership, and 2) projecting novel properties from individual category members to the entire category. In both tests, LMs achieved positive but modest correlations with human typicality ratings, suggesting that they can to a non-trivial extent capture subtle differences between category members. Next, I propose a new benchmark to test the robustness of LMs in attributing properties to everyday concepts, and in making inductive leaps to endow properties to novel concepts. On testing 31 different LMs for these capacities, I find that while they can correctly attribute properties to everyday concepts and even predict the properties of novel concepts in simple settings, they struggle to do so robustly. 
Combined with the analyses of typicality effects, these results suggest that the ability of LMs to demonstrate impressive conceptual knowledge and reasoning behavior can be explained by their sensitivities to shallow predictive cues. When these cues are carefully controlled for, LMs show critical failures in demonstrating robust conceptual understanding. Finally, I develop a framework that can allow us to characterize the extent to which the distributed representations learned by LMs can encode principles and abstractions that characterize inductive behavior of humans. This framework operationalizes inductive generalization as the behavior of an LM after its representations have been partially exposed (via gradient-based learning) to novel conceptual information. To simulate this behavior, the framework uses LMs that are endowed with human-elicited property knowledge, by training them to evaluate the truth of sentences attributing properties to concepts. I apply this framework to test four different LMs on 13 different inductive phenomena documented for humans (Osherson et al., 1990; Heit & Rubinstein, 1994). Results from these analyses suggest that building representations from word distributions can successfully allow the encoding of many abstract principles that can guide inductive behavior in the models—principles such as sensitivity to conceptual similarity, hierarchical organization of categories, reasoning about category coverage, and sample size. At the same time, the tested models also systematically failed at demonstrating certain phenomena, showcasing their inability to demonstrate pragmatic reasoning, preference to rely on shallow statistical cues, and lack of context sensitivity with respect to high-level intuitive theories.</p>
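One common way to probe an LM for the kind of property knowledge discussed above is to compare the model's log-likelihoods of minimally different property statements. The sketch below scores two sentences with GPT-2 via the Hugging Face transformers library; it illustrates the general probing idea only, not the dissertation's exact protocol, models, or stimuli.

```python
# Sketch: compare a causal LM's average negative log-likelihood for two property
# statements. Requires the `transformers` and `torch` packages; illustrates the
# general probing idea, not the dissertation's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_neg_log_likelihood(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return loss.item()

# Lower loss means the model finds the statement more plausible as text.
print(avg_neg_log_likelihood("A robin has feathers."))
print(avg_neg_log_likelihood("A robin has gills."))
```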
